diff --git a/spaces/1gistliPinn/ChatGPT4/Examples/Branding.zip Igo Primo 2.4https Scoutmails.com Index301.php K Branding.zip Igo Primo 2.4.md b/spaces/1gistliPinn/ChatGPT4/Examples/Branding.zip Igo Primo 2.4https Scoutmails.com Index301.php K Branding.zip Igo Primo 2.4.md
deleted file mode 100644
index 4d77ee036d921c1442003511a125b7900201ff75..0000000000000000000000000000000000000000
--- a/spaces/1gistliPinn/ChatGPT4/Examples/Branding.zip Igo Primo 2.4https Scoutmails.com Index301.php K Branding.zip Igo Primo 2.4.md
+++ /dev/null
@@ -1,6 +0,0 @@
-
branding.zip igo primo 2.4https: scoutmails.com index301.php k branding.zip igo primo 2.4
-
-CyberLink PowerDirector Ultimate 16.0.20271 Incl Serial Key ... CyberLink PowerDirector Ultimate 15.0.2509.0 Final + Crack-Keygen - [Softhound]. Download Alawar Games Keys (Alawar) - Download
-Game Keys to Alawar (Alawar)
-Download Alawar game keys download for free.
-New Games Alawar.
-Download key to the game Alawar - Search Site
-How to choose the right games for your computer on the site Alawar to ...
-How to download Alawar games for free, download Alawar games without restrictions and without a key, look for Alawar games and play for free and without a key, ...
-Key to the game Alawar: Twilight.
-The key to the game Alawar: Twilight / Alawar: Twilight. 8a78ff9644
-
-
-
diff --git a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/BombSquad Pro Mod APK Unlock All Features and Enjoy Explosive Fun.md b/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/BombSquad Pro Mod APK Unlock All Features and Enjoy Explosive Fun.md
deleted file mode 100644
index 3670a558a9ec3d7b5a12edcf45369a3f39c2aba0..0000000000000000000000000000000000000000
--- a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/BombSquad Pro Mod APK Unlock All Features and Enjoy Explosive Fun.md
+++ /dev/null
@@ -1,88 +0,0 @@
-
-
Download BombSquad Pro Mod APK and Enjoy Explosive Fun with Your Friends
-
Do you love blowing things up and having fun with your friends? If yes, then you should try BombSquad, a hilarious and addictive game that lets you play various mini-games with bombs, pirates, ninjas, and more. And if you want to enjoy the game without any limitations, then you should download BombSquad Pro Mod APK, which gives you access to all the characters, tickets, and features of the game for free. In this article, we will tell you everything you need to know about BombSquad and BombSquad Pro Mod APK, and how to download and install it on your device.
BombSquad is a game developed by Eric Froemling that allows you to blow up your friends in various mini-games ranging from capture-the-flag to hockey. The game features 8-player local/networked multiplayer, gratuitous explosions, advanced ragdoll face-plant physics, pirates, ninjas, barbarians, insane chefs, and more. You can play the game on your Android device or on your PC using a controller or a keyboard.
-
Features of BombSquad
-
BombSquad has many features that make it a fun and exciting game to play with your friends. Here are some of them:
-
Multiplayer mode
-
You can play BombSquad with up to 8 players on the same device or over the internet. You can also join online servers and play with other players from around the world. You can create your own team or join an existing one, and compete against other teams in various modes.
-
Various mini-games
-
BombSquad has a variety of mini-games that you can choose from, such as capture-the-flag, king-of-the-hill, elimination, race, hockey, football, basketball, and more. Each mini-game has its own rules and objectives, and requires different strategies and skills to win. You can also create your own custom mini-games using the built-in editor.
-
-
Customizable characters
-
You can customize your character in BombSquad by choosing from different outfits, colors, accessories, and taunts. You can also unlock new characters by playing the game or buying them with tickets. Some of the characters include pirates, ninjas, barbarians, robots, zombies, aliens, animals, and more.
-
Ragdoll physics
-
BombSquad has realistic ragdoll physics that make the game more hilarious and enjoyable. You can see your character fly through the air, bounce off walls, fall down stairs, get hit by bombs, and more. You can also use the ragdoll button to make your character go limp at any time.
-
What is BombSquad Pro Mod APK?
-
BombSquad Pro Mod APK is a modified version of the original BombSquad game that gives you access to all the pro features of the game for free. This means that you can enjoy all the characters, tickets, and modes of the game without spending any money or watching any ads.
-
Benefits of BombSquad Pro Mod APK
-
BombSquad Pro Mod APK has many benefits that make it better than the original game. Here are some of them:
All characters unlocked
-
With BombSquad Pro Mod APK, you can unlock all the characters in the game without having to play the game or buy them with tickets. You can choose from over 50 characters, each with their own unique appearance and personality. You can also mix and match different outfits, colors, and accessories to create your own custom character.
-
All tickets unlocked
-
Tickets are the currency of BombSquad that you can use to buy new characters, outfits, accessories, and mini-games. You can earn tickets by playing the game or watching ads, but it can take a long time to accumulate enough tickets to buy everything you want. With BombSquad Pro Mod APK, you can get unlimited tickets for free, and buy anything you want without any restrictions.
-
No ads
-
Ads can be annoying and distracting when you are playing a game, especially when they pop up in the middle of a match or a mini-game. They can also slow down your device and consume your data. With BombSquad Pro Mod APK, you can get rid of all the ads in the game, and enjoy a smooth and uninterrupted gaming experience.
-
How to download and install BombSquad Pro Mod APK?
-
If you want to download and install BombSquad Pro Mod APK on your device, you need to follow some simple steps. Here they are:
-
Steps to download and install BombSquad Pro Mod APK
-
Step 1: Enable unknown sources
-
Before you can install BombSquad Pro Mod APK on your device, you need to enable unknown sources in your device settings. This will allow you to install apps from sources other than the Google Play Store. To do this, go to Settings > Security > Unknown Sources and toggle it on.
-
Step 2: Download the APK file
-
Next, you need to download the APK file of BombSquad Pro Mod APK from a reliable source. You can use the link below to download the latest version of the file:
Step 3: Install the APK file
-
Once you have downloaded the APK file, you need to locate it in your device storage and tap on it to start the installation process. You may see a warning message asking for your permission to install the app. Just tap on Install and wait for the installation to complete.
-
Step 4: Launch the game and enjoy
-
After the installation is done, you can launch the game from your app drawer or home screen. You will see that you have access to all the pro features of the game for free. You can now enjoy playing BombSquad with your friends and have explosive fun.
-
Conclusion
-
BombSquad is a fun and addictive game that lets you play various mini-games with bombs and your friends. It has many features that make it an enjoyable game for all ages. However, if you want to enjoy the game without any limitations, you should download BombSquad Pro Mod APK, which gives you access to all the characters, tickets, and features of the game for free. You can download and install BombSquad Pro Mod APK by following the steps mentioned above. We hope this article was helpful for you. If you have any questions or feedback, feel free to leave a comment below.
-
FAQs
-
Here are some frequently asked questions about BombSquad and BombSquad Pro Mod APK:
-
-
Q: Is BombSquad Pro Mod APK safe to use?
A: Yes, BombSquad Pro Mod APK is safe to use as long as you download it from a trusted source. However, we recommend that you use it at your own risk, as we are not responsible for any damage or loss caused by using it.
-
Q: Can I play BombSquad online with BombSquad Pro Mod APK?
A: Yes, you can play BombSquad online with BombSquad Pro Mod APK. However, you may face some issues or errors while playing online, as some servers may not support modded versions of the game.
-
Q: Can I update BombSquad Pro Mod APK?
A: Yes, you can update BombSquad Pro Mod APK whenever a new version is available. However, you may lose some of your progress or data if you update it without backing it up first.
Q: What are the minimum requirements to play BombSquad on Android?
A: The minimum requirements to play BombSquad on Android are: Android 4.4 or higher, 1 GB of RAM, and 100 MB of free storage space.
-
Q: How can I contact the developer of BombSquad?
A: You can contact the developer of BombSquad by visiting his website, or by sending him an email at eric@froemling.net.
-
-
-
\ No newline at end of file
diff --git a/spaces/1phancelerku/anime-remove-background/Beach Buggy Racing for PC Download the EXE File and Race Against Crazy Characters.md b/spaces/1phancelerku/anime-remove-background/Beach Buggy Racing for PC Download the EXE File and Race Against Crazy Characters.md
deleted file mode 100644
index 7fa148eb77fc3e96819140e231fd435dd37630fa..0000000000000000000000000000000000000000
--- a/spaces/1phancelerku/anime-remove-background/Beach Buggy Racing for PC Download the EXE File and Race Against Crazy Characters.md
+++ /dev/null
@@ -1,142 +0,0 @@
-
-
Beach Buggy Racing Exe File Download: How to Enjoy This Fun and Free Kart Racing Game on Your PC
-
Do you love kart racing games? If so, you might have heard of Beach Buggy Racing, a popular game for mobile devices that lets you drive into an action-packed, surprise-filled world of off-road kart racing mayhem. You can race against a field of rival drivers, each with unique personalities and special abilities. You can also build a collection of crazy power-ups, like Dodgeball Frenzy, Fireball, and Oil Slick. You can unlock and upgrade a variety of cars, from dune buggies to monster trucks. You can test your skills in 6 different game modes on 15 imaginative 3D race tracks, against a pack of tropical-loving rivals with a serious case of road rage!
But what if you want to play Beach Buggy Racing on your PC instead of your mobile device? Is there a way to do that? The answer is yes! You can download and play Beach Buggy Racing on your PC using an exe file. An exe file is an executable file that contains a program or software that can run on your PC. By downloading an exe file of Beach Buggy Racing, you can enjoy this fun and free kart racing game on your PC without any hassle.
-
Why would you want to play Beach Buggy Racing on your PC instead of your mobile device? Well, there are many benefits of playing Beach Buggy Racing on your PC. For example:
-
-
You can play with better graphics quality and sound effects on a larger screen and with better speakers
-
You can play with more comfortable and precise controls using your keyboard, mouse, or gamepad
-
You can play with your friends or family on the same PC using the split-screen multiplayer mode for up to 4 players
-
You can play with more stability and security without worrying about battery drain, data usage, or malware
-
-
As you can see, playing Beach Buggy Racing on your PC using an exe file download has many advantages. But how can you do that? How can you download and install Beach Buggy Racing exe file on your PC? How can you play Beach Buggy Racing on your PC? What are the game features, tips, and tricks that you need to know? In this article, we will answer all these questions and more. So, buckle up and get ready for some beach buggy racing fun!
-
How to Download and Install Beach Buggy Racing Exe File on Your PC
-
Downloading and installing Beach Buggy Racing exe file on your PC is very easy and simple. You just need to follow these steps:
-
-
-
Find a reliable and safe source for the exe file download. There are many websites that offer exe file downloads for various games and software, but not all of them are trustworthy and secure. Some of them may contain viruses, malware, or spyware that can harm your PC or steal your personal information. Therefore, you need to be careful and choose a reputable and verified source for the exe file download. One of the best sources for Beach Buggy Racing exe file download is the official website of Vector Unit, the developer of the game. You can visit their website at https://www.vectorunit.com/beach-buggy-racing and click on the "Download for Windows" button to get the exe file.
-
Download the exe file to your PC and run it as an administrator. Once you have found a reliable and safe source for the exe file download, you need to download the exe file to your PC. The file size is about 100 MB, so it should not take too long to download depending on your internet speed. After downloading the exe file, you need to run it as an administrator to start the installation process. To do that, you need to right-click on the exe file and select "Run as administrator" from the menu. This will allow the exe file to make changes to your PC and install the game properly.
-
Follow the installation instructions and launch the game. After running the exe file as an administrator, you will see a window with the installation instructions. You need to follow these instructions carefully and agree to the terms and conditions of the game. You also need to choose a destination folder for the game files and create a shortcut for the game on your desktop or start menu. The installation process should not take more than a few minutes. After completing the installation process, you can launch the game by clicking on the shortcut or by finding it in your destination folder.
-
-
Congratulations! You have successfully downloaded and installed Beach Buggy Racing exe file on your PC. Now you can enjoy this fun and free kart racing game on your PC anytime you want.
How to Play Beach Buggy Racing on Your PC
-
Now that you have downloaded and installed Beach Buggy Racing exe file on your PC, you are ready to play the game. But how can you play Beach Buggy Racing on your PC? What are the settings, controls, and graphics that you need to customize for optimal performance and experience? How can you choose your driver, car, and power-ups for different game modes and tracks? How can you use keyboard, mouse, or gamepad to control your car and activate power-ups? In this section, we will answer all these questions and more. Here is how you can play Beach Buggy Racing on your PC:
-
-
Customize your settings, controls, and graphics for optimal performance and experience. Before you start playing the game, you may want to customize your settings, controls, and graphics according to your preferences and PC specifications. To do that, you need to go to the main menu of the game and click on the "Options" button. There, you will see several tabs that allow you to adjust various aspects of the game. For example, you can change the language, sound volume, music playlist, screen resolution, graphics quality, anti-aliasing, shadows, etc. You can also change the controls for keyboard, mouse, or gamepad. You can choose from different presets or customize your own key bindings. You can also enable or disable vibration, auto-acceleration, auto-steering, etc. You can also calibrate your gamepad if you are using one. You can save your settings by clicking on the "Apply" button.
-
Choose your driver, car, and power-ups for different game modes and tracks. After customizing your settings, controls, and graphics, you can choose your driver, car, and power-ups for different game modes and tracks. To do that, you need to go to the main menu of the game and click on the "Play" button. There, you will see several options for playing the game. You can choose from 6 different game modes: Championship, Quick Race, Daily Challenge, Custom Race, Split Screen Multiplayer, and Online Multiplayer. Each game mode has its own rules and objectives. For example, in Championship mode, you need to compete in a series of races and earn stars to unlock new cars, drivers, and tracks. In Quick Race mode, you can choose any track and race against random opponents. In Daily Challenge mode, you can play a special race with a random car, driver, and power-up and try to beat the best time. In Custom Race mode, you can create your own rules and challenges for any track and race against AI or human opponents. In Split Screen Multiplayer mode, you can play with up to 4 players on the same PC using different controllers. In Online Multiplayer mode, you can play with up to 8 players from around the world using the internet.
-
Depending on the game mode you choose, you can select your driver, car, and power-ups from a variety of options. You can choose from 12 different drivers, each with their own personality and special ability. For example, Rez has the ability to hack other cars and make them spin out. McSkelly has the ability to summon a swarm of bats that block the vision of other drivers. You can also choose from 25 different cars, each with their own stats and style. For example, the Lunar Rover has high speed and handling but low acceleration and strength. The Rock Stomper has high strength and acceleration but low speed and handling. You can also choose from 15 different power-ups, each with their own effect and duration. For example, the Fireball lets you shoot a ball of fire that explodes on impact. The Oil Slick lets you drop a slippery puddle that makes other cars lose control.
-
You can unlock more drivers, cars, and power-ups by earning stars in Championship mode or by buying them with coins that you earn by playing the game. You can also upgrade your car and power-ups by spending coins. You can upgrade your car's speed, acceleration, handling, and strength. You can also upgrade your power-ups' effect, duration, and frequency. You can also customize your car's appearance by changing its color, wheels, decals, etc.
-
-
Once you have chosen your driver, car, and power-ups for the game mode and track you want to play, you are ready to start the race. But how do you control your car and activate your power-ups? Here is how you can do that:
-
-
Use keyboard, mouse, or gamepad to control your car and activate power-ups. You can use any of these devices to control your car and activate power-ups in Beach Buggy Racing on your PC. You can also use a combination of them if you prefer. For example, you can use the keyboard to steer and accelerate and the mouse to activate power-ups. Or you can use the gamepad to steer and accelerate and the keyboard to activate power-ups.
-
The default controls for keyboard are as follows: Use the arrow keys or WASD keys to steer left or right and accelerate or brake. Use the spacebar or enter key to activate power-ups. Use the escape key or backspace key to pause the game.
-
The default controls for mouse are as follows: Use the left mouse button to steer left or right and accelerate or brake. Use the right mouse button to activate power-ups.
-
The default controls for gamepad are as follows: Use the left analog stick or directional pad to steer left or right and accelerate or brake. Use the A button or X button to activate power-ups. Use the start button or back button to pause the game.
-
You can change these controls in the Options menu if you want to customize them according to your preferences.
-
-
That's it! You have learned how to play Beach Buggy Racing on your PC using keyboard, mouse, or gamepad. Now you can enjoy this fun and free kart racing game on your PC with better graphics quality, sound effects, controls, performance, stability, security, multiplayer mode, custom mode, etc.
Beach Buggy Racing Game Features, Tips, and Tricks
-
Now that you know how to download, install, and play Beach Buggy Racing on your PC, you may want to learn more about the game features, tips, and tricks that will make your gaming experience more fun and exciting. In this section, we will tell you what are the main game features that make Beach Buggy Racing stand out from other kart racing games. We will also give you some tips and tricks to help you improve your skills and win more races. Here are the game features, tips, and tricks that you need to know:
-
What are the main game features that make Beach Buggy Racing fun and exciting?
-
Beach Buggy Racing is not just another kart racing game. It has many unique and amazing features that make it different from other games in the genre. Here are some of the main game features that make Beach Buggy Racing fun and exciting:
-
-
A variety of cars, drivers, power-ups, tracks, and game modes to choose from. Beach Buggy Racing offers you a lot of options to customize your gameplay and challenge yourself. You can choose from 12 different drivers, each with their own personality and special ability. You can also choose from 25 different cars, each with their own stats and style. You can also choose from 15 different power-ups, each with their own effect and duration. You can also choose from 15 imaginative 3D race tracks, each with their own theme and layout. You can also choose from 6 different game modes, each with their own rules and objectives.
-
A colorful and vibrant graphics style with a tropical theme. Beach Buggy Racing has a beautiful and eye-catching graphics style that will make you feel like you are in a tropical paradise. The game has a bright and colorful palette that creates a cheerful and lively atmosphere. The game also has a tropical theme that adds to the charm and fun of the game. The game features various tropical elements such as palm trees, beaches, waterfalls, volcanoes, caves, etc. The game also has a dynamic weather system that changes the lighting and effects of the tracks.
-
A dynamic and physics-based gameplay with realistic effects and surprises. Beach Buggy Racing has a realistic and physics-based gameplay that makes the game more immersive and thrilling. The game has realistic effects such as gravity, inertia, friction, collision, etc. that affect the movement and behavior of the cars and power-ups. The game also has surprises such as ramps, jumps, loops, shortcuts, secrets, etc. that add to the excitement and unpredictability of the game.
-
A split-screen multiplayer mode for up to 4 players on one PC. Beach Buggy Racing has a split-screen multiplayer mode that allows you to play with up to 4 players on the same PC using different controllers. This mode is perfect for playing with your friends or family on the same screen without any internet connection or online registration required. You can choose any track and game mode and race against each other in a friendly or competitive way.
-
A custom game mode where you can create your own rules and challenges. Beach Buggy Racing has a custom game mode that allows you to create your own rules and challenges for any track and race against AI or human opponents. You can change various parameters such as the number of laps, the number of opponents, the difficulty level, the power-up frequency, etc. You can also enable or disable certain power-ups or drivers to make the game easier or harder for yourself or others.
-
-
These are some of the main game features that make Beach Buggy Racing fun and exciting. But how can you master these features and win more races? What are some tips and tricks to help you improve your skills? Here are some tips and tricks to help you out:
-
What are some tips and tricks to help you improve your skills and win more races?
-
Beach Buggy Racing is not just a game of luck or chance. It is also a game of skill and strategy. You need to practice regularly and learn from your mistakes to become a better racer. You also need to use some tips and tricks to gain an edge over your opponents. Here are some tips and tricks to help you improve your skills and win more races:
-
-
Upgrade your car and power-ups regularly to boost your performance. One of the most important things to do in Beach Buggy Racing is to upgrade your car and power-ups regularly to boost your performance. You can upgrade your car's speed, acceleration, handling, and strength by spending coins that you earn by playing the game. You can also upgrade your power-ups' effect, duration, and frequency by spending coins. Upgrading your car and power-ups will make them more effective and efficient in the races. You can also customize your car's appearance by changing its color, wheels, decals, etc.
-
Master drifting, jumping, and dodging to navigate the tracks and avoid obstacles. One of the most important skills to learn in Beach Buggy Racing is how to drift, jump, and dodge. Drifting is when you slide your car sideways while turning. Drifting can help you make sharp turns without losing speed or control. Jumping is when you launch your car into the air using ramps or bumps. Jumping can help you avoid obstacles or reach shortcuts or secrets. Dodging is when you move your car left or right to avoid obstacles or power-ups. Dodging can help you prevent damage or sabotage from your opponents or the environment.
-
Use power-ups strategically to gain an advantage or sabotage your opponents. One of the most fun and exciting aspects of Beach Buggy Racing is the use of power-ups. Power-ups are special items that you can collect and use during the races. Power-ups can have various effects such as boosting your speed, shooting projectiles, dropping traps, etc. Power-ups can help you gain an advantage or sabotage your opponents depending on how and when you use them. You need to use power-ups strategically to maximize their benefits and minimize their drawbacks.
-
Learn the shortcuts and secrets of each track to save time and distance. One of the most challenging and rewarding aspects of Beach Buggy Racing is the exploration of the tracks. Each track has its own theme and layout that offer different opportunities and challenges. Each track also has its own shortcuts and secrets that can help you save time and distance or give you extra coins or power-ups. You need to learn the shortcuts and secrets of each track to improve your performance and score.
-
Practice regularly and challenge yourself with different difficulty levels. One of the most effective ways to improve your skills and win more races in Beach Buggy Racing is to practice regularly and challenge yourself with different difficulty levels. Practicing regularly will help you familiarize yourself with the game features, controls, graphics, etc. Challenging yourself with different difficulty levels will help you test your skills against tougher opponents, faster cars, harder tracks, etc.
-
-
These are some of the tips and tricks that will help you improve your skills and win more races in Beach Buggy Racing. But remember, the most important thing is to have fun and enjoy the game!
-
Conclusion
-
In conclusion, Beach Buggy Racing is a fun and free kart racing game that you can download and play on your PC using an exe file. By downloading an exe file of Beach Buggy Racing, you can enjoy this game on your PC with better graphics quality, sound effects, controls, performance, stability, security, multiplayer mode, custom mode, etc. You can also customize your settings, controls, and graphics according to your preferences and PC specifications. You can also choose your driver, car, and power-ups from a variety of options for different game modes and tracks. You can also use keyboard, mouse, or gamepad to control your car and activate power-ups. You can also learn about the game features, tips, and tricks that will make your gaming experience more fun and exciting.
-
If you love kart racing games, you should definitely try out Beach Buggy Racing on your PC using an exe file download. It is a game that will keep you entertained for hours with its colorful graphics, dynamic gameplay, diverse options, surprises, and challenges. You can also play with your friends or family on the same PC using the split-screen multiplayer mode or with other players from around the world using the online multiplayer mode. You can also create your own rules and challenges using the custom game mode. Beach Buggy Racing is a game that will make you feel like you are in a tropical paradise with its tropical theme and elements. Beach Buggy Racing is a game that will make you smile and laugh with its humorous and quirky characters and power-ups. Beach Buggy Racing is a game that will make you addicted and satisfied with its realistic and physics-based gameplay and effects.
-
So, what are you waiting for? Download Beach Buggy Racing exe file on your PC today and enjoy this fun and free kart racing game on your PC. You will not regret it!
-
If you want to learn more about Beach Buggy Racing, you can visit the official website or social media pages of Vector Unit, the developer of the game. You can also check out some reviews, videos, screenshots, and FAQs of the game online. You can also share your feedback, suggestions, questions, or comments about the game with other players or with the developers. You can also rate and review the game on various platforms and websites.
-
Thank you for reading this article. We hope you found it helpful and informative. We hope you have a great time playing Beach Buggy Racing on your PC using an exe file download. Happy racing!
-
FAQs
-
Here are some frequently asked questions (FAQs) about Beach Buggy Racing exe file download:
-
Q: Is Beach Buggy Racing exe file download safe and secure?
A: Yes, Beach Buggy Racing exe file download is safe and secure if you download it from a reliable and verified source such as the official website of Vector Unit, the developer of the game. However, you should be careful and avoid downloading the exe file from unknown or suspicious sources, as they may contain viruses, malware, or spyware that can harm your PC or steal your personal information.
-
Q: Is Beach Buggy Racing exe file download free and legal?
A: Yes, Beach Buggy Racing exe file download is free and legal if you download it from a legitimate and authorized source such as the official website of Vector Unit, the developer of the game. However, you should not download or distribute the exe file from illegal or unauthorized sources, as they may violate the intellectual property rights of the developer or publisher of the game.
-
Q: What are the system requirements for Beach Buggy Racing exe file download?
A: The minimum system requirements are: Windows 7 or higher; 2 GB RAM; 1 GB free disk space; DirectX 9.0c or higher; Intel HD Graphics 4000 or better; keyboard, mouse, or gamepad. The recommended system requirements are: Windows 10; 4 GB RAM; 2 GB free disk space; DirectX 11 or higher; NVIDIA GeForce GTX 650 or better; keyboard, mouse, or gamepad.
-
Q: How can I uninstall Beach Buggy Racing exe file from my PC?
A: Go to the Control Panel of your PC and click on "Uninstall a program". Find Beach Buggy Racing in the list of programs, click on "Uninstall", follow the uninstallation instructions, and confirm your choice. Alternatively, you can go to the destination folder where you installed Beach Buggy Racing and run the "unins000.exe" file as an administrator, then follow the uninstallation instructions and confirm your choice.
-
Q: How can I contact Vector Unit, the developer of Beach Buggy Racing?
A: You can contact Vector Unit by visiting their website at https://www.vectorunit.com/ and clicking on the "Contact" button. There, you can fill out a form with your name, email address, subject, and message. You can also contact them by sending an email to support@vectorunit.com, or follow them on Facebook, Twitter, Instagram, YouTube, Discord, etc.
-
-
-
-
\ No newline at end of file
diff --git a/spaces/1phancelerku/anime-remove-background/Cross Racing Championship Extreme A Classic Racing Game with Modern Features.md b/spaces/1phancelerku/anime-remove-background/Cross Racing Championship Extreme A Classic Racing Game with Modern Features.md
deleted file mode 100644
index 16d8f031d76d49d946a92da863e8668e46bd6f40..0000000000000000000000000000000000000000
--- a/spaces/1phancelerku/anime-remove-background/Cross Racing Championship Extreme A Classic Racing Game with Modern Features.md
+++ /dev/null
@@ -1,98 +0,0 @@
-
-
Cross Racing Championship Extreme: A Review
-
If you are looking for a racing game that offers you the thrill of high-speed on and off road racing across vast open terrains, then you should check out Cross Racing Championship Extreme (CRCE). CRCE is a racing simulation game that was originally released in 2005 by Invictus Games Ltd. and has recently been re-released on Steam in an enhanced version. In this article, I will review CRCE and tell you why it is a great racing game that deserves your attention.
-
Introduction
-
CRCE is a racing game that allows you to experience the excitement of various racing disciplines, such as rally, rallycross, autocross, off-road, street racing, and more. You can race in over 60 different events across six distinct environments, ranging from icy mountainous regions and lush countryside to parched desert areas and beaches. You can also choose from a wide selection of cars, from classic hatchbacks and muscle cars to exotic supercars and off-road vehicles.
One of the main features of CRCE is its realistic handling system and damage model, which make the driving experience more challenging and immersive. You have to take into account the terrain, weather, and car condition when racing, as they affect your performance and control. You also have to deal with the consequences of crashing, as your car can get damaged or even destroyed. You can repair your car in the garage, but it will cost you money and time.
-
Another feature of CRCE is its non-linear career mode, which lets you progress through different racing categories at your own pace. You can choose which events to enter, which cars to buy or sell, and how to upgrade or customize them. You can also unlock new cars, tracks, and modes by completing certain objectives or challenges. You can also earn money by winning races or performing stunts, which you can use to buy new cars or parts.
-
Gameplay
-
Single player mode
-
In single player mode, you can start your career as a rookie racer and work your way up to become a champion. You can enter various events that suit your style and preference, such as circuit races, time trials, drift contests, stunt shows, and more. You can also choose the difficulty level, the number of opponents, the weather conditions, and other settings for each event.
-
One of the most important aspects of single player mode is car customization and tuning. You can modify your car's appearance by changing its color, decals, number plates, and more. You can also improve your car's performance by upgrading its engine, transmission, suspension, brakes, tires, and more. You can also fine-tune your car's settings by adjusting its gear ratios, camber angles, brake bias, and more.
-
As you progress through your career, you will unlock new cars, tracks, and modes. Some of the cars include Ford Focus RS WRC 03, Subaru Impreza WRX STi 04, Mitsubishi Lancer Evolution VIII MR FQ400 04, Porsche 911 GT3 RS , Lamborghini Murcielago R-GT, Ferrari F430 Challenge, and more. Some of the tracks include England, France, Hungary, Egypt, Finland, and more. Some of the modes include Free Ride, where you can explore the open world and perform stunts; Ghost Race, where you can race against your own or other players' best times; and Hot Lap, where you can try to beat the lap records of the developers.
-
-
Multiplayer mode
-
In multiplayer mode, you can join or host online lobbies and race with other players from around the world. You can choose from different multiplayer game modes and maps, such as Capture the Flag, Bomb Run, Destruction Zone, and more. You can also compete with other players in ranked or unranked races and rank up on the global leaderboards.
-
Multiplayer mode is a great way to test your skills and have fun with other racing enthusiasts. You can chat with other players, challenge them to duels, or team up with them in cooperative modes. You can also customize your car and show it off to other players. You can also download and share custom cars, tracks, and mods from the Steam Workshop.
-
Graphics and Sound
-
Graphics
-
CRCE features a realistic physics and damage system that makes the racing experience more authentic and dynamic. You can see your car getting dented, scratched, or even losing parts as you crash into obstacles or other cars. You can also see the dust, smoke, water, and mud effects as you drive on different terrains. You can also see the weather effects, such as rain, snow, fog, and wind, that affect your visibility and traction.
-
CRCE also creates detailed and living environments that make the racing experience more immersive and diverse. You can see the trees swaying in the wind, the birds flying in the sky, the animals roaming in the fields, and the people cheering in the stands. You can also see the landmarks, buildings, bridges, and monuments that add to the realism and variety of each location.
-
CRCE also supports various screen resolutions and aspect ratios that make the racing experience more compatible and customizable. You can choose from different display modes, such as windowed, fullscreen, or borderless. You can also adjust the graphics settings, such as texture quality, shadow quality, anti-aliasing, and more. You can also enable or disable various effects, such as motion blur, lens flare, bloom, and more.
-
Sound
-
CRCE features original rock/metal soundtracks by SZEG that make the racing experience more energetic and exhilarating. You can listen to over 40 tracks that suit the mood and atmosphere of each race. You can also listen to your own music by adding your MP3 files to the game folder.
-
CRCE also allows you to listen to immersive sound effects and engine noises that make the racing experience more realistic and intense. You can hear the roar of your engine, the screech of your tires, the crunch of your collisions, and the blast of your nitro. You can also hear the ambient sounds of each environment, such as the wind blowing, the water splashing, or the crowd cheering.
-
Conclusion
-
In conclusion, CRCE is a racing game that offers you a lot of fun and challenge in various racing disciplines across vast open terrains. It has a realistic physics and damage system, detailed graphics, original soundtracks, and a non-linear career mode. It also has a multiplayer mode, Steam Workshop support, and various customization and tuning options. It is a racing game that will keep you entertained for hours and challenge you to become the best racer.
-
If you are interested in CRCE, you can buy it on Steam for $9.99. You can also visit the official website or the Steam community page for more information and updates. You can also watch some gameplay videos or read some user reviews to see what other players think about CRCE.
-
I hope you enjoyed this article and found it helpful. If you have any questions or comments, feel free to leave them below. Thank you for reading and happy racing!
-
FAQs
-
Here are some frequently asked questions about CRCE:
-
-
What are the system requirements for CRCE?
-
The minimum system requirements for CRCE are:

| Component | Minimum Requirement |
|---|---|
| OS | Windows XP/Vista/7/8/10 |
| Processor | Intel Pentium 4 2.0 GHz or equivalent |
| Memory | 512 MB RAM |
| Graphics | NVIDIA GeForce FX 5600 or equivalent |
| DirectX | Version 9.0c |
| Storage | 1 GB available space |
| Sound Card | DirectX compatible sound card |
-
-
How can I play CRCE with a controller?
-
You can play CRCE with a controller by using a third-party software such as Xpadder or JoyToKey. You can also use the in-game settings to configure your controller buttons and axes.
-
How can I mod CRCE?
-
You can mod CRCE by using the built-in editor or by downloading and installing custom cars, tracks, and mods from the Steam Workshop. You can also create your own mods by using the SDK (Software Development Kit) that is included in the game folder.
-
How can I get more nitro in CRCE?
-
You can get more nitro in CRCE by performing stunts, such as jumps, drifts, flips, or rolls. You can also get more nitro by collecting nitro cans that are scattered around the tracks.
-
How can I change the language in CRCE?
-
You can change the language in CRCE by using the launcher or by editing the config.ini file that is located in the game folder. You can choose from English, German, French, Italian, Spanish, Hungarian, Polish, Russian, Czech, or Slovak.
-
-
-
\ No newline at end of file
diff --git a/spaces/1phancelerku/anime-remove-background/Egg Inc. APK A Simulation Game with Chickens Research and Space.md b/spaces/1phancelerku/anime-remove-background/Egg Inc. APK A Simulation Game with Chickens Research and Space.md
deleted file mode 100644
index 335817a56333fb03cd911cf3dc35394237ae9a19..0000000000000000000000000000000000000000
--- a/spaces/1phancelerku/anime-remove-background/Egg Inc. APK A Simulation Game with Chickens Research and Space.md
+++ /dev/null
@@ -1,161 +0,0 @@
-
-
Egg Inc Download APK: How to Play the Ultimate Egg Farming Game on Your Android Device
-
If you are looking for a fun and addictive simulation game that will keep you entertained for hours, then you should try Egg Inc. This game lets you build your own egg empire from scratch, with hundreds of chickens, dozens of research items, and many challenges to complete. You can also explore the secrets of the universe hidden in the chicken egg, launch space expeditions, and join forces with other players in co-op mode. In this article, we will show you how to download and install Egg Inc APK on your Android device, how to play the game, and some tips and tricks to help you succeed.
Egg Inc is a simulation game developed by Auxbrain Inc, a company that specializes in creating casual games with unique gameplay and graphics. The game was released in 2016 and has since gained over 10 million downloads on Google Play Store. It has also received positive reviews from critics and players alike, who praised its originality, humor, and depth.
-
The game is set in the near future, where the secrets of the universe will be unlocked in the chicken egg. You have decided to get in on the gold rush and sell as many eggs as you can. To do that, you need to hatch chickens, build hen houses, hire drivers, commission research, launch space expeditions, and more. The game is an incremental (clicker) game at its core, but it also uses many elements from simulation games that give it a unique feel and play style. You can interact with your farm in various ways, such as tapping on chickens, swiping on vehicles, or zooming in and out. You can also customize your farm with different themes, decorations, and music.
-
Why you should play Egg Inc
-
There are many reasons why you should play Egg Inc, but here are some of the main ones:
-
-
It is fun and relaxing. The game has a laid-back feel and a beautiful appearance, with crisp and colorful 3D graphics and a delightful simulation of a swarm of chickens. You can play at your own pace, without any pressure or time limits.
-
It is challenging and rewarding. The game has hundreds of challenges to complete, such as reaching certain milestones, earning achievements, or completing missions. You can also unlock new types of eggs, each with their own benefits and requirements. You can also prestige your farm to start over with extra bonuses, or join contracts to cooperate with other players for bigger rewards.
-
It is creative and educational. The game has dozens of research items that you can unlock and upgrade, each with their own effects and descriptions. You can learn about various topics related to eggs, chickens, farming, science, technology, and more. You can also launch space expeditions to discover new planets and secrets.
-
-
How to download and install Egg Inc APK
-
If you want to play Egg Inc on your Android device, you have two options: you can either download it from Google Play Store, or you can download it from a third-party source such as APKCombo. The latter option may be useful if you want to access older versions of the game or if you have compatibility issues with your device. However, you should be careful about the source of the APK file, as it may contain malware or viruses that can harm your device. Only download APK files from reputable and trusted websites, such as APKCombo. Here are the steps to download and install Egg Inc APK from APKCombo:
-
Go to the APKCombo website and search for Egg Inc in the search bar. You can also use this direct link to go to the Egg Inc page.
-
On the Egg Inc page, you will see various versions of the game, along with their release dates, sizes, and ratings. Choose the version that is compatible with your device and tap on the Download APK button.
-
A pop-up window will appear, asking you to choose a download method. You can either use a QR code scanner app to scan the code and download the file directly to your device, or you can use a download manager app to download the file faster and more securely. Choose the option that suits you best and follow the instructions on the screen.
-
Once the APK file is downloaded, locate it in your device's file explorer app and tap on it to install it. You may need to allow installation from unknown sources if you haven't done so already. To do that, go to Settings > Apps > Special access > Install unknown apps and enable the permission for your browser or file manager app.
-
After the installation is complete, you can launch Egg Inc from your app drawer and enjoy the game.
How to Play Egg Inc
-
The basics of egg farming
-
Egg Inc is a game that simulates the process of running an egg farm. Your goal is to produce as many eggs as possible and sell them for profit. To do that, you need to hatch chickens, build hen houses, hire drivers, commission research, launch space expeditions, and more.
-
The game has a simple interface that shows you your farm and its various elements. You can tap on any element to interact with it or view more information. You can also swipe left or right to move around your farm, or pinch in or out to zoom in or out.
-
The main element of your farm is the chicken coop, where you hatch chickens by tapping on the red button. The more chickens you have, the more eggs they produce. However, you also need to provide enough space for them in your hen houses, which you can build by tapping on the construction icon. You also need to deliver your eggs to the market by hiring drivers and buying vehicles, which you can do by tapping on the delivery icon.
-
-
You can earn money by selling your eggs, which depends on the type and quality of your eggs. You can also earn golden eggs, which are a special currency that you can use to buy boosters, upgrade your farm, or launch space expeditions. You can get golden eggs by completing missions, watching ads, or finding them randomly on your farm.
-
The different types of eggs and their benefits
-
As you progress in the game, you will be able to unlock new types of eggs that have different benefits and requirements. You can switch between different types of eggs by tapping on the egg icon at the top of the screen. Each type of egg has a different value, demand, and production rate. Some types of eggs also have special effects that can affect your farm or the world.
-
Here are some examples of the types of eggs you can unlock in Egg Inc:

| Type | Value | Demand | Production Rate | Special Effect |
|---|---|---|---|---|
| Edible Egg | $0.25 | High | Normal | None |
| Superfood Egg | $1.25 | High | Normal | Increases happiness and health of people who eat it |
| Medical Egg | $6.25 | Medium | Normal | Cures diseases and extends lifespan of people who eat it |
| Rocket Fuel Egg | $30 | Low | Slow | Powers rockets and spaceships with its high energy density |
| Fusion Egg | $150 | Very Low | Very Slow | Creates clean and unlimited energy by fusing atoms inside it |
-
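To make the relationship between flock size, laying rate, and egg value concrete, here is a minimal sketch of the arithmetic involved. This is purely illustrative: the function names and the example numbers are placeholders chosen for this article, not values or formulas taken from the game itself.

```python
# Illustrative only: Egg Inc's real economy is more involved (demand curves,
# research multipliers, offline earnings), and these numbers are placeholders.

def eggs_per_minute(chickens: int, eggs_per_chicken_per_minute: float) -> float:
    """Rough production estimate for a flock of a given size."""
    return chickens * eggs_per_chicken_per_minute

def revenue_per_minute(chickens: int,
                       eggs_per_chicken_per_minute: float,
                       egg_value: float,
                       demand_factor: float = 1.0) -> float:
    """Estimate income per minute; demand_factor < 1 models unsold eggs."""
    produced = eggs_per_minute(chickens, eggs_per_chicken_per_minute)
    return produced * egg_value * demand_factor

# Example: 1,000 chickens laying 2 eggs/min each, selling Edible Eggs ($0.25).
print(revenue_per_minute(1_000, 2.0, 0.25))  # 500.0 per minute
```

The same arithmetic explains why switching to a higher-value egg such as the Superfood Egg raises income even with the same flock, as long as demand stays high.
-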
The various buildings, vehicles, and upgrades you can use
-
To increase your egg production and income, you can also use various buildings, vehicles, and upgrades that you can buy with your money or golden eggs. Here are some examples of what you can use:
-
-
Buildings: You can build different types of hen houses that can accommodate more chickens and have different features, such as solar panels, quantum transporters, or monoliths. You can also build silos that can store your eggs and feed your chickens when you are offline.
-
Vehicles: You can buy different types of vehicles that can deliver more eggs and have different features, such as refrigeration, quantum storage, or graviton coating. You can also buy trains that can transport large amounts of eggs across the map.
-
Upgrades: You can buy different types of upgrades that can improve various aspects of your farm, such as egg laying rate, egg value, farm value, hatchery capacity, internal hatchery rate, vehicle capacity, research cost, or soul egg bonus. You can also buy epic upgrades that have permanent effects and apply to all types of eggs.
-
-
The research and missions you can complete
-
To unlock new features and challenges in the game, you can also complete research and missions that you can access by tapping on the research icon or the mission icon. Here are some examples of what you can do:
-
-
Research: You can conduct different types of research that can enhance your farm or your eggs in various ways. There are two tiers of research: common and epic. Common research is specific to each type of egg and requires money to unlock and upgrade. Epic research is universal to all types of eggs and requires golden eggs to unlock and upgrade.
-
Missions: You can complete different types of missions that can reward you with money, golden eggs, or trophies. There are two types of missions: regular and trophy. Regular missions are specific to each type of egg and require you to achieve certain goals, such as having a certain number of chickens, producing a certain amount of eggs, or earning a certain amount of money. Trophy missions are universal to all types of eggs and require you to reach a certain farm value with each type of egg.
-
-
The prestige and contracts system
-
To progress further in the game, you can also use the prestige and contracts system that you can access by tapping on the prestige icon or the contract icon. Here are some examples of what you can do:
-
-
Prestige: You can prestige your farm to start over with extra bonuses. When you prestige, you will lose all your chickens, buildings, vehicles, upgrades, and money, but you will gain soul eggs and prophecy eggs. Soul eggs are a special type of egg that increase your farm's earning bonus by a percentage. Prophecy eggs are a rare type of egg that increase the power of your soul eggs by a percentage. The more soul eggs and prophecy eggs you have, the faster you will grow your farm (see the short calculation after this list).
-
Contracts: You can join contracts to cooperate with other players for bigger rewards. Contracts are time-limited events that require you to produce a certain amount of eggs within a certain period of time. You can join existing contracts or create your own contracts and invite other players to join. Contracts have different difficulties and rewards, such as money, golden eggs, prophecy eggs, or boosters.
-
-
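To make that compounding concrete, here is a small illustrative Python calculation; the 10% and 5% rates below are assumptions chosen for the example, not values stated in this article or guaranteed by the game.
-
```python
def earnings_multiplier(soul_eggs, prophecy_eggs,
                        soul_egg_bonus=0.10, prophecy_egg_bonus=0.05):
    """Illustrative prestige math: each soul egg adds a percentage bonus,
    and each prophecy egg boosts the value of every soul egg."""
    per_soul_egg = soul_egg_bonus * (1 + prophecy_egg_bonus) ** prophecy_eggs
    return 1 + soul_eggs * per_soul_egg

# Example: 100 soul eggs and 3 prophecy eggs with the assumed rates above
print(earnings_multiplier(100, 3))  # ~12.6x earnings
```
-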
Tips and Tricks for Egg Inc
-
How to optimize your egg production and income
-
To optimize your egg production and income, you should follow these tips and tricks:
-
-
Balance your chicken population, hen house capacity, and vehicle capacity. You should always have enough space for your chickens in your hen houses and enough vehicles to deliver your eggs to the market. If you have too many chickens or too few vehicles, you will waste your eggs and lose money (see the short example after this list).
-
Upgrade your farm regularly. You should always invest in upgrading your buildings, vehicles, and research items whenever you can afford them. Upgrades can improve various aspects of your farm and increase your egg production and income.
-
Prestige often. You should prestige your farm whenever you feel like you have reached a plateau or a slow growth rate. Prestiging will give you extra bonuses that will help you grow faster in the next run.
-
-
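A quick way to see why that balance matters is to treat your farm as a pipeline limited by its slowest stage; the numbers in this Python sketch are made up for illustration and are not taken from the game.
-
```python
def income_per_minute(chickens, lay_rate_per_chicken, hen_house_capacity,
                      shipping_capacity_per_minute, egg_value):
    """Illustrative bottleneck math: sellable eggs are capped by housing,
    laying rate, and delivery capacity, whichever is smallest."""
    housed_chickens = min(chickens, hen_house_capacity)
    eggs_laid = housed_chickens * lay_rate_per_chicken
    eggs_shipped = min(eggs_laid, shipping_capacity_per_minute)
    return eggs_shipped * egg_value

# Example: plenty of chickens, but shipping is the bottleneck
print(income_per_minute(chickens=10_000, lay_rate_per_chicken=2,
                        hen_house_capacity=8_000,
                        shipping_capacity_per_minute=5_000,
                        egg_value=0.25))  # 1250.0
```
-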
How to use boosters and drones effectively
-
To use boosters and drones effectively, you should follow these tips and tricks:
-
-
Boosters: Boosters are special items that you can buy with golden eggs or get from contracts or events. They can give you various benefits for a limited time, such as increasing your egg laying rate, egg value, farm value, hatchery capacity, or internal hatchery rate. Separately, you can see the list of trophies and your progress on the trophy screen; there are different types of trophies, such as bronze, silver, gold, and platinum.
-
-
Conclusion
-
Egg Inc is a fun and addictive simulation game that lets you build your own egg empire from scratch. You can hatch chickens, build hen houses, hire drivers, commission research, launch space expeditions, and more. You can also unlock new types of eggs, each with their own benefits and requirements. You can also prestige your farm to start over with extra bonuses, or join contracts to cooperate with other players for bigger rewards. You can also complete achievements and trophies to earn golden eggs and prophecy eggs.
-
If you want to play Egg Inc on your Android device, you can download it from Google Play Store, or you can download it from a third-party source such as APKCombo. However, you should be careful about the source of the APK file, as it may contain malware or viruses that can harm your device. Only download APK files from reputable and trusted websites, such as APKCombo.
-
We hope this article has helped you learn more about Egg Inc and how to play it. If you have any questions or feedback, please feel free to leave a comment below. Happy egg farming!
-
FAQs
-
Q: How do I get more golden eggs?
-
A: You can get more golden eggs by completing missions, watching ads, shooting down drones, finding them randomly on your farm, or buying them with real money.
-
Q: How do I get more prophecy eggs?
-
A: You can get more prophecy eggs by completing trophy missions or joining contracts that reward them.
-
Q: How do I change the theme of my farm?
-
A: You can change the theme of your farm by tapping on the settings icon and choosing the theme option. You can choose from different themes, such as classic, winter, western, or futuristic.
-
Q: How do I launch a space expedition?
-
A: You can launch a space expedition by tapping on the rocket icon and choosing the expedition option. You need to have a certain amount of golden eggs and a certain type of egg to launch an expedition. You can also choose the duration and difficulty of the expedition. You can get various rewards from expeditions, such as money, golden eggs, boosters, or secrets.
-
Q: How do I create or join a co-op contract?
-
A: You can create or join a co-op contract by tapping on the contract icon and choosing the contract option. You need to have a certain type of egg and a certain farm value to join a contract. You can either join a public contract, which is open to anyone, or a private contract, which requires a code to join. You can also create your own contract and share the code with other players. You need to produce a certain amount of eggs within a certain time limit to complete a contract. You can get various rewards from contracts, such as money, golden eggs, prophecy eggs, or boosters.
-
-
\ No newline at end of file
diff --git a/spaces/1phancelerku/anime-remove-background/Enjoy the Classic Pinoy Game of Mahjong on Your Android Device with Pinoy Mahjong APK.md b/spaces/1phancelerku/anime-remove-background/Enjoy the Classic Pinoy Game of Mahjong on Your Android Device with Pinoy Mahjong APK.md
deleted file mode 100644
index ae9ae841dd8a8ec4316e856b8b72a3020ed04e31..0000000000000000000000000000000000000000
--- a/spaces/1phancelerku/anime-remove-background/Enjoy the Classic Pinoy Game of Mahjong on Your Android Device with Pinoy Mahjong APK.md
+++ /dev/null
@@ -1,146 +0,0 @@
-
-
Pinoy Mahjong APK: A Fun and Easy Way to Play Mahjong on Your Phone
-
If you love playing mahjong, you might want to try Pinoy Mahjong APK, a mobile app that lets you enjoy the game anytime, anywhere. Pinoy Mahjong is a version of mahjong that is popular in the Philippines, where it is also known as Filipino mahjong or Pusoy Dos. It is a simple and fast-paced game that can be played by anyone, even if you are not familiar with the traditional rules of mahjong. In this article, we will tell you everything you need to know about Pinoy Mahjong APK, including what it is, how to download and install it, how to play it online with friends, and how it reflects the history and culture of mahjong.
The app itself is described as a single-player game based on the mahjong rules (not yet formally defined) used in the Philippines; the app is an implementation of those rules and also runs on iPads as well as iPhones.
-
The origin and rules of Pinoy Mahjong
-
Mahjong is a tile-based game that was developed in the 19th century in China and has spread throughout the world since the early 20th century. It is played by four players (with some three-player variations found in parts of China, Japan, South Korea and Southeast Asia). While many variations of mahjong exist, most variations have some basic rules in common including how a piece is drawn and discarded, how a piece is robbed from another player, the use of suits (numbered tiles) and honors (winds and dragons), the basic kinds of melds allowed, how to deal the tiles and the order of play.
-
Pinoy Mahjong is one of the many variations of mahjong that emerged in different regions and countries. It is believed that mahjong was introduced to the Philippines by Chinese immigrants during the Spanish colonial period. Over time, the game adapted to the local culture and preferences, resulting in a unique version that differs from other forms of mahjong in several ways. Some of the main differences are:
-
-
Pinoy Mahjong uses only one suit (bamboos) and three honors (red dragon, green dragon, white dragon). The other suits (characters and circles) and honors (winds) are not used.
-
Pinoy Mahjong uses only 84 tiles instead of 136 tiles. Each player receives 21 tiles instead of 13 tiles.
-
Pinoy Mahjong allows only three types of melds: pung (three identical tiles), kong (four identical tiles), and chow (three consecutive tiles). A pair is not required to win (a small code sketch after this list illustrates these meld definitions).
-
Pinoy Mahjong has a special rule called "pusoy dos", which means "two flushes". This rule allows a player to win with two sets of seven tiles each, regardless of whether they form any melds or not.
-
Pinoy Mahjong has a scoring system that assigns different values to different melds and combinations. For example, a pung of dragons is worth more than a pung of bamboos, and a pusoy dos is worth more than a regular hand.
-
-
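As a rough illustration of how those meld definitions could be checked in code, here is a short Python sketch; it is not part of the app, and the integer tile encoding is an assumption made for the example.
-
```python
def classify_meld(tiles):
    """Classify a meld using the rules described above.
    Tiles are encoded as integers 1-9 for bamboo ranks; dragons would need
    separate markers (the encoding is an assumption for this sketch)."""
    if len(tiles) == 4 and len(set(tiles)) == 1:
        return "kong"  # four identical tiles
    if len(tiles) == 3 and len(set(tiles)) == 1:
        return "pung"  # three identical tiles
    if len(tiles) == 3 and sorted(tiles) == list(range(min(tiles), min(tiles) + 3)):
        return "chow"  # three consecutive tiles
    return None

print(classify_meld([3, 4, 5]))  # chow
print(classify_meld([7, 7, 7]))  # pung
```
-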
The features and benefits of Pinoy Mahjong APK
-
To play Pinoy Mahjong online with your friends, you can use Discord, a free voice and text chat platform. You can search for servers using keywords such as "pinoy mahjong", "mahjong", or "Filipino mahjong". You can also create your own server and invite your friends to join.
-
Once you are in a server, look for a channel that hosts Pinoy Mahjong games. You can also create your own channel and set up the game settings, such as the number of players, the difficulty level, and the game mode. Some of the game modes are:
-
-
Classic mode: This is the standard mode that follows the basic rules of Pinoy Mahjong.
-
Blitz mode: This is a fast-paced mode that gives you a limited time to make your moves.
-
Challenge mode: This is a competitive mode that pits you against other players in a tournament or a ladder.
-
Custom mode: This is a flexible mode that allows you to customize the rules and the scoring system of Pinoy Mahjong.
-
-
After you have chosen or created a channel, join the game lobby and wait for other players to join. You can also invite your friends to join by sending them a link or a code.
-
When the game starts, you will see the tiles on your screen and the other players' names and avatars. You can also chat, voice call, or video call with them using Discord's features.
-
Play Pinoy Mahjong as you normally would, following the rules and the scoring system of the game mode. You can also use Discord's features to communicate and interact with other players.
-
When the game ends, you will see the results and the rankings of each player. You can also view your stats and achievements on Discord's dashboard.
-
You can play as many games as you want with your friends online using Discord. You can also join other servers and channels to play with different people and try different game modes.
-
-
The advantages and challenges of playing Pinoy Mahjong online
-
Playing Pinoy Mahjong online with friends using Discord has many advantages and challenges that make it a different and exciting experience. Some of the advantages are:
-
-
You can play Pinoy Mahjong anytime, anywhere, as long as you have an internet connection and a compatible device.
-
You can play Pinoy Mahjong with your friends online, even if they are far away or in different time zones.
-
You can play Pinoy Mahjong with different people from different countries and cultures, and learn from their strategies and styles.
-
You can play Pinoy Mahjong with different game modes and levels, and challenge yourself with different puzzles and tasks.
-
You can play Pinoy Mahjong with Discord's features, such as chat, voice call, video call, and more, and have fun and socialize with other players.
-
-
Some of the challenges are:
-
-
You may encounter technical issues or glitches while playing Pinoy Mahjong online, such as lag, disconnects, crashes, or bugs.
-
You may encounter malicious or rude players while playing Pinoy Mahjong online, such as cheaters, hackers, trolls, or bullies.
-
You may encounter communication or cultural barriers while playing Pinoy Mahjong online, such as language differences, accents, slang, or etiquette.
-
You may encounter difficulty or frustration while playing Pinoy Mahjong online, such as losing streaks, unfair matches, or hard levels.
-
You may encounter addiction or distraction while playing Pinoy Mahjong online, such as spending too much time, money, or energy on the game.
-
-
How Pinoy Mahjong reflects the history and culture of mahjong
-
Pinoy Mahjong is not just a game, but also a reflection of the history and culture of mahjong. Mahjong is a game that has evolved and diversified over time, influenced by various factors such as geography, politics, religion, economics, and social norms. Pinoy Mahjong is one of the examples of how mahjong has adapted to different contexts and preferences. Here are some of the ways that Pinoy Mahjong reflects the history and culture of mahjong:
-
-
The evolution and variations of mahjong
-
Mahjong is a game that has undergone many changes and modifications since its origin in China. Some of the factors that contributed to its evolution are:
-
-
The migration and trade of Chinese people to other regions and countries, such as Japan, Korea, Southeast Asia, Europe, America, and more.
-
The interaction and exchange of ideas and customs between Chinese people and other people from different cultures and backgrounds.
-
The innovation and experimentation of new rules and features by different players and groups to suit their tastes and needs.
-
The standardization and regulation of mahjong by different organizations and associations to promote and preserve the game.
-
-
Pinoy Mahjong is one of the many variations of mahjong that emerged from these factors. It is a version that reflects the preferences and needs of the Filipino people, who are known for their creativity, adaptability, and hospitality. Pinoy Mahjong is a game that is easy to learn, fun to play, and suitable for any occasion.
-
The significance and symbolism of mahjong in different communities
-
Mahjong is not just a game, but also a symbol of many things in different communities. Some of the things that mahjong represents are:
-
-
Mahjong is a symbol of luck and fortune. Many people believe that playing mahjong can bring them good luck and wealth, especially during special occasions such as festivals, holidays, or birthdays. Some people also use lucky charms, rituals, or superstitions to enhance their chances of winning.
-
Mahjong is a symbol of skill and strategy. Many people admire and respect players who can master the game and win with skill and intelligence. Some people also use mahjong as a way to train their mental abilities, such as memory, concentration, and logic.
-
Mahjong is a symbol of culture and identity. Many people cherish and celebrate the game as a part of their heritage and tradition. Some people also use mahjong as a way to express their values, beliefs, and customs, such as respect, harmony, and generosity.
-
Mahjong is a symbol of socialization and friendship. Many people enjoy and appreciate the game as a means of entertainment and relaxation. Some people also use mahjong as a way to connect and bond with their family, friends, and neighbors, such as sharing stories, jokes, and food.
-
-
Pinoy Mahjong is one of the examples of how mahjong can have different meanings and functions in different communities. It is a game that reflects the culture and identity of the Filipino people, who are known for their optimism, resilience, and hospitality. Pinoy Mahjong is a game that can bring joy and happiness to anyone who plays it.
-
Conclusion
-
Pinoy Mahjong APK is a mobile app that allows you to play Pinoy Mahjong on your phone or tablet. It is a version of mahjong that is popular in the Philippines, where it is also known as Filipino mahjong or Pusoy Dos. It is a simple and fast-paced game that can be played by anyone, even if you are not familiar with the traditional rules of mahjong.
-
A summary of the main points
-
In this article, we have told you everything you need to know about Pinoy Mahjong APK, including:
-
-
What Pinoy Mahjong is, how it differs from other forms of mahjong, and what its features and benefits are.
-
How to download and install Pinoy Mahjong APK on your device, and what tips and tricks you can use to play it well.
-
How to play Pinoy Mahjong online with your friends using Discord, and what options and modes you can choose from.
-
How Pinoy Mahjong reflects the history and culture of mahjong, and what it symbolizes in different communities.
-
-
A call to action to download and play Pinoy Mahjong APK
-
If you are interested in playing Pinoy Mahjong APK, you can download it for free from the links below. You can also visit the official website or follow the social media accounts of Pinoy Mahjong APK for more information and updates. You can also share your feedback and suggestions with the developers or other players through the app or online platforms.
-
Pinoy Mahjong APK is a fun and easy way to play mahjong on your phone or tablet. It is a game that can entertain you, challenge you, teach you, and connect you with others. It is a game that can make you happy. So what are you waiting for? Download Pinoy Mahjong APK today and enjoy the game!
-
Frequently Asked Questions
-
Here are some of the frequently asked questions about Pinoy Mahjong APK:
-
Q: Is Pinoy Mahjong APK safe to download and play?
-
A: Yes, Pinoy Mahjong APK is safe to download and play. It does not contain any viruses or malware that can harm your device or data. It also does not require any sensitive or personal information from you to play the game. However, you should always download Pinoy Mahjong APK from trusted sources such as Google Play Store or App Store to avoid any potential risks.
-
Q: Is Pinoy Mahjong APK compatible with all devices and platforms?
-
A: Pinoy Mahjong APK is compatible with most devices and platforms that run on Android or iOS operating systems. However, some older or lower-end devices may experience some performance issues or errors while playing the game. You can check the minimum system requirements and compatibility of Pinoy Mahjong APK on its official website or on Google Play Store or App Store before downloading it.
-
Q: How can I contact the developers or support team of Pinoy Mahjong APK?
-
A: If you have any questions, problems, or suggestions regarding Pinoy Mahjong APK, you can contact the developers or support team of Pinoy Mahjong APK through the following ways:
-
-
Email: You can send an email to pinoy.mahjong.apk@gmail.com and expect a reply within 24 hours.
-
Facebook: You can visit the Facebook page of Pinoy Mahjong APK and send a message or leave a comment.
-
Twitter: You can follow the Twitter account of Pinoy Mahjong APK and tweet or direct message them.
-
Instagram: You can follow the Instagram account of Pinoy Mahjong APK and comment or direct message them.
-
-
Q: How can I update Pinoy Mahjong APK to the latest version?
-
A: If you have downloaded Pinoy Mahjong APK from Google Play Store or App Store, you can update it automatically or manually through the app store. If you have downloaded Pinoy Mahjong APK from other sources, you can update it manually by downloading and installing the latest version from the official website or from the links provided below. You should always update Pinoy Mahjong APK to the latest version to enjoy the new features, improvements, and bug fixes.
-
Q: How can I uninstall Pinoy Mahjong APK from my device?
-
A: If you want to uninstall Pinoy Mahjong APK from your device, you can do so by following these steps:
-
-
Go to your device's settings and look for the apps or applications menu.
-
Find and tap on Pinoy Mahjong APK from the list of apps installed on your device.
-
Tap on the uninstall button and confirm your action.
-
Wait for the uninstallation process to finish and check if Pinoy Mahjong APK is removed from your device.
-
-
\ No newline at end of file
diff --git a/spaces/3i2irg/first-app/app.py b/spaces/3i2irg/first-app/app.py
deleted file mode 100644
index ce9522ea334f3405c5bf0fb6929e2c640c1c387e..0000000000000000000000000000000000000000
--- a/spaces/3i2irg/first-app/app.py
+++ /dev/null
@@ -1,20 +0,0 @@
-import gradio as gr
-
-from fastai.vision.all import *
-import skimage
-
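-# Load the fastai Learner exported to export.pkl (expected to sit next to app.py).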
-learn = load_learner('export.pkl')
-
-labels = learn.dls.vocab
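-# predict() maps an uploaded image to a {label: probability} dict using the learner.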
-def predict(img):
- img = PILImage.create(img)
- pred,pred_idx,probs = learn.predict(img)
- return {labels[i]: float(probs[i]) for i in range(len(labels))}
-
-title = "Emotion Classifier"
-description = "An emotion classifier trained with images from DuckDuckGo image search and fastai."
-examples = ['happyphoto.jpg', 'yoelphoto.jpg']
-interpretation='default'
-enable_queue=True
-
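-# Build and launch the Gradio interface: 512x512 image input, top-3 label output.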
-gr.Interface(fn=predict,inputs=gr.inputs.Image(shape=(512, 512)),outputs=gr.outputs.Label(num_top_classes=3),title=title,description=description,examples=examples,interpretation=interpretation,enable_queue=enable_queue).launch()
\ No newline at end of file
diff --git a/spaces/4Taps/SadTalker/src/face3d/util/load_mats.py b/spaces/4Taps/SadTalker/src/face3d/util/load_mats.py
deleted file mode 100644
index f9a6fcc71de1d7dad8b0f81c67dc1c213764ff0b..0000000000000000000000000000000000000000
--- a/spaces/4Taps/SadTalker/src/face3d/util/load_mats.py
+++ /dev/null
@@ -1,120 +0,0 @@
-"""This script is to load 3D face model for Deep3DFaceRecon_pytorch
-"""
-
-import numpy as np
-from PIL import Image
-from scipy.io import loadmat, savemat
-from array import array
-import os.path as osp
-
-# load expression basis
-def LoadExpBasis(bfm_folder='BFM'):
- n_vertex = 53215
- Expbin = open(osp.join(bfm_folder, 'Exp_Pca.bin'), 'rb')
- exp_dim = array('i')
- exp_dim.fromfile(Expbin, 1)
- expMU = array('f')
- expPC = array('f')
- expMU.fromfile(Expbin, 3*n_vertex)
- expPC.fromfile(Expbin, 3*exp_dim[0]*n_vertex)
- Expbin.close()
-
- expPC = np.array(expPC)
- expPC = np.reshape(expPC, [exp_dim[0], -1])
- expPC = np.transpose(expPC)
-
- expEV = np.loadtxt(osp.join(bfm_folder, 'std_exp.txt'))
-
- return expPC, expEV
-
-
-# transfer original BFM09 to our face model
-def transferBFM09(bfm_folder='BFM'):
- print('Transfer BFM09 to BFM_model_front......')
- original_BFM = loadmat(osp.join(bfm_folder, '01_MorphableModel.mat'))
- shapePC = original_BFM['shapePC'] # shape basis
- shapeEV = original_BFM['shapeEV'] # corresponding eigen value
- shapeMU = original_BFM['shapeMU'] # mean face
- texPC = original_BFM['texPC'] # texture basis
- texEV = original_BFM['texEV'] # eigen value
- texMU = original_BFM['texMU'] # mean texture
-
- expPC, expEV = LoadExpBasis(bfm_folder)
-
- # transfer BFM09 to our face model
-
- idBase = shapePC*np.reshape(shapeEV, [-1, 199])
- idBase = idBase/1e5 # unify the scale to decimeter
- idBase = idBase[:, :80] # use only first 80 basis
-
- exBase = expPC*np.reshape(expEV, [-1, 79])
- exBase = exBase/1e5 # unify the scale to decimeter
- exBase = exBase[:, :64] # use only first 64 basis
-
- texBase = texPC*np.reshape(texEV, [-1, 199])
- texBase = texBase[:, :80] # use only first 80 basis
-
- # our face model is cropped along face landmarks and contains only 35709 vertex.
- # original BFM09 contains 53490 vertex, and expression basis provided by Guo et al. contains 53215 vertex.
- # thus we select corresponding vertex to get our face model.
-
- index_exp = loadmat(osp.join(bfm_folder, 'BFM_front_idx.mat'))
- index_exp = index_exp['idx'].astype(np.int32) - 1 # starts from 0 (to 53215)
-
- index_shape = loadmat(osp.join(bfm_folder, 'BFM_exp_idx.mat'))
- index_shape = index_shape['trimIndex'].astype(
- np.int32) - 1 # starts from 0 (to 53490)
- index_shape = index_shape[index_exp]
-
- idBase = np.reshape(idBase, [-1, 3, 80])
- idBase = idBase[index_shape, :, :]
- idBase = np.reshape(idBase, [-1, 80])
-
- texBase = np.reshape(texBase, [-1, 3, 80])
- texBase = texBase[index_shape, :, :]
- texBase = np.reshape(texBase, [-1, 80])
-
- exBase = np.reshape(exBase, [-1, 3, 64])
- exBase = exBase[index_exp, :, :]
- exBase = np.reshape(exBase, [-1, 64])
-
- meanshape = np.reshape(shapeMU, [-1, 3])/1e5
- meanshape = meanshape[index_shape, :]
- meanshape = np.reshape(meanshape, [1, -1])
-
- meantex = np.reshape(texMU, [-1, 3])
- meantex = meantex[index_shape, :]
- meantex = np.reshape(meantex, [1, -1])
-
- # other info contains triangles, region used for computing photometric loss,
- # region used for skin texture regularization, and 68 landmarks index etc.
- other_info = loadmat(osp.join(bfm_folder, 'facemodel_info.mat'))
- frontmask2_idx = other_info['frontmask2_idx']
- skinmask = other_info['skinmask']
- keypoints = other_info['keypoints']
- point_buf = other_info['point_buf']
- tri = other_info['tri']
- tri_mask2 = other_info['tri_mask2']
-
- # save our face model
- savemat(osp.join(bfm_folder, 'BFM_model_front.mat'), {'meanshape': meanshape, 'meantex': meantex, 'idBase': idBase, 'exBase': exBase, 'texBase': texBase,
- 'tri': tri, 'point_buf': point_buf, 'tri_mask2': tri_mask2, 'keypoints': keypoints, 'frontmask2_idx': frontmask2_idx, 'skinmask': skinmask})
-
-
-# load landmarks for standard face, which is used for image preprocessing
-def load_lm3d(bfm_folder):
-
- Lm3D = loadmat(osp.join(bfm_folder, 'similarity_Lm3D_all.mat'))
- Lm3D = Lm3D['lm']
-
- # calculate 5 facial landmarks using 68 landmarks
- lm_idx = np.array([31, 37, 40, 43, 46, 49, 55]) - 1
- Lm3D = np.stack([Lm3D[lm_idx[0], :], np.mean(Lm3D[lm_idx[[1, 2]], :], 0), np.mean(
- Lm3D[lm_idx[[3, 4]], :], 0), Lm3D[lm_idx[5], :], Lm3D[lm_idx[6], :]], axis=0)
- Lm3D = Lm3D[[1, 2, 0, 3, 4], :]
-
- return Lm3D
-
-
-if __name__ == '__main__':
- transferBFM09()
\ No newline at end of file
diff --git a/spaces/801artistry/RVC801/lib/infer_pack/commons.py b/spaces/801artistry/RVC801/lib/infer_pack/commons.py
deleted file mode 100644
index 54470986f37825b35d90d7efa7437d1c26b87215..0000000000000000000000000000000000000000
--- a/spaces/801artistry/RVC801/lib/infer_pack/commons.py
+++ /dev/null
@@ -1,166 +0,0 @@
-import math
-import numpy as np
-import torch
-from torch import nn
-from torch.nn import functional as F
-
-
-def init_weights(m, mean=0.0, std=0.01):
- classname = m.__class__.__name__
- if classname.find("Conv") != -1:
- m.weight.data.normal_(mean, std)
-
-
-def get_padding(kernel_size, dilation=1):
- return int((kernel_size * dilation - dilation) / 2)
-
-
-def convert_pad_shape(pad_shape):
- l = pad_shape[::-1]
- pad_shape = [item for sublist in l for item in sublist]
- return pad_shape
-
-
-def kl_divergence(m_p, logs_p, m_q, logs_q):
- """KL(P||Q)"""
- kl = (logs_q - logs_p) - 0.5
- kl += (
- 0.5 * (torch.exp(2.0 * logs_p) + ((m_p - m_q) ** 2)) * torch.exp(-2.0 * logs_q)
- )
- return kl
-
-
-def rand_gumbel(shape):
- """Sample from the Gumbel distribution, protect from overflows."""
- uniform_samples = torch.rand(shape) * 0.99998 + 0.00001
- return -torch.log(-torch.log(uniform_samples))
-
-
-def rand_gumbel_like(x):
- g = rand_gumbel(x.size()).to(dtype=x.dtype, device=x.device)
- return g
-
-
-def slice_segments(x, ids_str, segment_size=4):
- ret = torch.zeros_like(x[:, :, :segment_size])
- for i in range(x.size(0)):
- idx_str = ids_str[i]
- idx_end = idx_str + segment_size
- ret[i] = x[i, :, idx_str:idx_end]
- return ret
-
-
-def slice_segments2(x, ids_str, segment_size=4):
- ret = torch.zeros_like(x[:, :segment_size])
- for i in range(x.size(0)):
- idx_str = ids_str[i]
- idx_end = idx_str + segment_size
- ret[i] = x[i, idx_str:idx_end]
- return ret
-
-
-def rand_slice_segments(x, x_lengths=None, segment_size=4):
- b, d, t = x.size()
- if x_lengths is None:
- x_lengths = t
- ids_str_max = x_lengths - segment_size + 1
- ids_str = (torch.rand([b]).to(device=x.device) * ids_str_max).to(dtype=torch.long)
- ret = slice_segments(x, ids_str, segment_size)
- return ret, ids_str
-
-
-def get_timing_signal_1d(length, channels, min_timescale=1.0, max_timescale=1.0e4):
- position = torch.arange(length, dtype=torch.float)
- num_timescales = channels // 2
- log_timescale_increment = math.log(float(max_timescale) / float(min_timescale)) / (
- num_timescales - 1
- )
- inv_timescales = min_timescale * torch.exp(
- torch.arange(num_timescales, dtype=torch.float) * -log_timescale_increment
- )
- scaled_time = position.unsqueeze(0) * inv_timescales.unsqueeze(1)
- signal = torch.cat([torch.sin(scaled_time), torch.cos(scaled_time)], 0)
- signal = F.pad(signal, [0, 0, 0, channels % 2])
- signal = signal.view(1, channels, length)
- return signal
-
-
-def add_timing_signal_1d(x, min_timescale=1.0, max_timescale=1.0e4):
- b, channels, length = x.size()
- signal = get_timing_signal_1d(length, channels, min_timescale, max_timescale)
- return x + signal.to(dtype=x.dtype, device=x.device)
-
-
-def cat_timing_signal_1d(x, min_timescale=1.0, max_timescale=1.0e4, axis=1):
- b, channels, length = x.size()
- signal = get_timing_signal_1d(length, channels, min_timescale, max_timescale)
- return torch.cat([x, signal.to(dtype=x.dtype, device=x.device)], axis)
-
-
-def subsequent_mask(length):
- mask = torch.tril(torch.ones(length, length)).unsqueeze(0).unsqueeze(0)
- return mask
-
-
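-# Gated activation (WaveNet-style): tanh over the first n_channels, sigmoid over the rest, multiplied elementwise.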
-@torch.jit.script
-def fused_add_tanh_sigmoid_multiply(input_a, input_b, n_channels):
- n_channels_int = n_channels[0]
- in_act = input_a + input_b
- t_act = torch.tanh(in_act[:, :n_channels_int, :])
- s_act = torch.sigmoid(in_act[:, n_channels_int:, :])
- acts = t_act * s_act
- return acts
-
-
-def convert_pad_shape(pad_shape):
- l = pad_shape[::-1]
- pad_shape = [item for sublist in l for item in sublist]
- return pad_shape
-
-
-def shift_1d(x):
- x = F.pad(x, convert_pad_shape([[0, 0], [0, 0], [1, 0]]))[:, :, :-1]
- return x
-
-
-def sequence_mask(length, max_length=None):
- if max_length is None:
- max_length = length.max()
- x = torch.arange(max_length, dtype=length.dtype, device=length.device)
- return x.unsqueeze(0) < length.unsqueeze(1)
-
-
-def generate_path(duration, mask):
- """
- duration: [b, 1, t_x]
- mask: [b, 1, t_y, t_x]
- """
- device = duration.device
-
- b, _, t_y, t_x = mask.shape
- cum_duration = torch.cumsum(duration, -1)
-
- cum_duration_flat = cum_duration.view(b * t_x)
- path = sequence_mask(cum_duration_flat, t_y).to(mask.dtype)
- path = path.view(b, t_x, t_y)
- path = path - F.pad(path, convert_pad_shape([[0, 0], [1, 0], [0, 0]]))[:, :-1]
- path = path.unsqueeze(1).transpose(2, 3) * mask
- return path
-
-
-def clip_grad_value_(parameters, clip_value, norm_type=2):
- if isinstance(parameters, torch.Tensor):
- parameters = [parameters]
- parameters = list(filter(lambda p: p.grad is not None, parameters))
- norm_type = float(norm_type)
- if clip_value is not None:
- clip_value = float(clip_value)
-
- total_norm = 0
- for p in parameters:
- param_norm = p.grad.data.norm(norm_type)
- total_norm += param_norm.item() ** norm_type
- if clip_value is not None:
- p.grad.data.clamp_(min=-clip_value, max=clip_value)
- total_norm = total_norm ** (1.0 / norm_type)
- return total_norm
diff --git a/spaces/A00001/bingothoo/tests/parse.ts b/spaces/A00001/bingothoo/tests/parse.ts
deleted file mode 100644
index 92940fe6315f1d7cb2b267ba5e5a7e26460a1de3..0000000000000000000000000000000000000000
--- a/spaces/A00001/bingothoo/tests/parse.ts
+++ /dev/null
@@ -1,13 +0,0 @@
-import { promises as fs } from 'fs'
-import { join } from 'path'
-import { parseHeadersFromCurl } from '@/lib/utils'
-
-(async () => {
- const content = await fs.readFile(join(__dirname, './fixtures/curl.txt'), 'utf-8')
- const headers = parseHeadersFromCurl(content)
- console.log(headers)
-
- const cmdContent = await fs.readFile(join(__dirname, './fixtures/cmd.txt'), 'utf-8')
- const cmdHeaders = parseHeadersFromCurl(cmdContent)
- console.log(cmdHeaders)
-})()
diff --git "a/spaces/AB-TW/team-ai/documents/bussiness_context/NOTION_DB/Engineering Wiki 2402f5396a3244fdb3f1d135bdb0f3d6/ABstract\357\274\210\346\217\222\344\273\266\345\214\226AB Testing\345\271\263\345\217\260\357\274\211 746b87acd94643ca871ec661b63f196c.md" "b/spaces/AB-TW/team-ai/documents/bussiness_context/NOTION_DB/Engineering Wiki 2402f5396a3244fdb3f1d135bdb0f3d6/ABstract\357\274\210\346\217\222\344\273\266\345\214\226AB Testing\345\271\263\345\217\260\357\274\211 746b87acd94643ca871ec661b63f196c.md"
deleted file mode 100644
index b9b5eb5b6c132f8073b5be3230d977c88d96c303..0000000000000000000000000000000000000000
--- "a/spaces/AB-TW/team-ai/documents/bussiness_context/NOTION_DB/Engineering Wiki 2402f5396a3244fdb3f1d135bdb0f3d6/ABstract\357\274\210\346\217\222\344\273\266\345\214\226AB Testing\345\271\263\345\217\260\357\274\211 746b87acd94643ca871ec661b63f196c.md"
+++ /dev/null
@@ -1,123 +0,0 @@
-# ABstract (Pluggable AB Testing Platform)
-
-Last edited time: April 23, 2023 3:58 PM
-Owner: Anonymous
-
-Project name: "ABstract". It plays on the words "AB testing" while also hinting at the concept of abstracting away the complexity of building and managing an AB testing platform.
-
-> This document describes the product vision, project goals, core business capabilities, business model, and inter-process architecture of a pluggable AB Testing platform. The platform provides core capabilities for configuration management, experiment management, data collection, and configuration delivery, helping business and development teams iterate on products in a data-driven way through A/B testing. The core capabilities are managed as plugins with a minimal default implementation, which developers can trim or extend to fit their actual needs.
->
-
-## Product Vision
-
-For: our target customers/users
-
-
-
-They want: the target customers' pain points or hopes
-
-
-
-This: the product name
-
-
-
-Is a: what kind of product (a platform? a tool?)
-
-
-
-It can: deliver what value to users, through which capabilities
-
-
-
-Unlike: competing products on the market and their characteristics
-
-
-
-
-
-Its advantage is: the unique value of our product
-
-
-
-## Project Goals
-
-> Complete development of the core features of the pluggable AB Testing platform
->
-
-> Explore practical applications of AI in software development
->
-
-## Core Business Capabilities
-
-- Configuration management
-    1. Feature Flag management
-        1. Provides the metadata for Feature Configs
-    2. Feature Config management
-        1. Provides the ability to generate a Feature Config configuration UI from a Feature Flag
-- Experiment management
-    1. Experiment management
-        1. Provides management of experiments, groups, and metric configuration
-        2. Provides viewing of experiment run results
-    2. Experiment layering management
-        1. Provides management of mutually exclusive groups
-        2. Traffic is mutually exclusive between experiments in the same exclusive group
-        3. Provides queries of grouping results during the experiment execution phase
-- Tracking data collection
-
-    Collection of reported tracking events
-
-- Configuration delivery
-
-    Provides a unified way to fetch configuration results by featureKey, unifying the delivery of Feature Config and experiment configuration results (see the sketch after the diagram below)
-
-
-```mermaid
-graph LR
-    subgraph "AB Testing Platform"
-    Core[AB testing core capabilities] --> ConfigManagement[Configuration management]
-    Core --> ExperimentManagement[Experiment management]
-    Core --> DataCollection[Data collection]
-    Core --> ConfigDelivery[Configuration delivery]
-    DataCollection --> MetricAnalysis[Metric analysis]
-    ConfigManagement --> FeatureFlag
-    ConfigManagement --> FeatureConfig
-    ExperimentManagement --> ExperimentConfig[Experiment configuration]
-    ExperimentManagement --> ExperimentLayering[Experiment layering]
-    ConfigDelivery --> ExperimentResults[Experiment results]
-    ConfigDelivery --> FeatureConfigResults[FeatureConfig results]
- end
-
-```
-
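-A minimal sketch of how the unified delivery lookup could behave, assuming a Python service; the function name `get_config`, the data shapes, and the hashing-based bucketing below are illustrative assumptions, not the platform's actual API:
-
-```python
-import hashlib
-
-def get_config(feature_key, user_id, experiments, feature_configs):
-    """Resolve the delivered config for one featureKey: an active experiment wins, otherwise the plain Feature Config."""
-    experiment = experiments.get(feature_key)
-    if experiment and experiment.get("active"):
-        # Stable bucketing: the same (feature_key, user_id) pair always lands in the same variant.
-        digest = hashlib.sha256(f"{feature_key}:{user_id}".encode()).hexdigest()
-        bucket = int(digest, 16) % len(experiment["variants"])
-        return experiment["variants"][bucket]
-    return feature_configs[feature_key]
-
-# Illustrative data shapes (assumptions, not the platform's real schema)
-experiments = {"checkout.button": {"active": True, "variants": [{"color": "blue"}, {"color": "green"}]}}
-feature_configs = {"checkout.button": {"color": "blue"}}
-print(get_config("checkout.button", "user-42", experiments, feature_configs))
-```
-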
-## Business Model
-
-[Business Model](ABstract%EF%BC%88%E6%8F%92%E4%BB%B6%E5%8C%96AB%20Testing%E5%B9%B3%E5%8F%B0%EF%BC%89%20746b87acd94643ca871ec661b63f196c/%E4%B8%9A%E5%8A%A1%E6%A8%A1%E5%9E%8B%20d31846027b4f40ca99f6e76f897663a4.md)
-
-## Inter-process Architecture
-
-[Inter-process Architecture](ABstract%EF%BC%88%E6%8F%92%E4%BB%B6%E5%8C%96AB%20Testing%E5%B9%B3%E5%8F%B0%EF%BC%89%20746b87acd94643ca871ec661b63f196c/%E8%BF%9B%E7%A8%8B%E9%97%B4%E6%9E%B6%E6%9E%84%20d50744212b044d06a4b29fe931df391b.md)
\ No newline at end of file
diff --git a/spaces/AIGText/GlyphControl/ldm/modules/midas/midas/vit.py b/spaces/AIGText/GlyphControl/ldm/modules/midas/midas/vit.py
deleted file mode 100644
index ea46b1be88b261b0dec04f3da0256f5f66f88a74..0000000000000000000000000000000000000000
--- a/spaces/AIGText/GlyphControl/ldm/modules/midas/midas/vit.py
+++ /dev/null
@@ -1,491 +0,0 @@
-import torch
-import torch.nn as nn
-import timm
-import types
-import math
-import torch.nn.functional as F
-
-
-class Slice(nn.Module):
- def __init__(self, start_index=1):
- super(Slice, self).__init__()
- self.start_index = start_index
-
- def forward(self, x):
- return x[:, self.start_index :]
-
-
-class AddReadout(nn.Module):
- def __init__(self, start_index=1):
- super(AddReadout, self).__init__()
- self.start_index = start_index
-
- def forward(self, x):
- if self.start_index == 2:
- readout = (x[:, 0] + x[:, 1]) / 2
- else:
- readout = x[:, 0]
- return x[:, self.start_index :] + readout.unsqueeze(1)
-
-
-class ProjectReadout(nn.Module):
- def __init__(self, in_features, start_index=1):
- super(ProjectReadout, self).__init__()
- self.start_index = start_index
-
- self.project = nn.Sequential(nn.Linear(2 * in_features, in_features), nn.GELU())
-
- def forward(self, x):
- readout = x[:, 0].unsqueeze(1).expand_as(x[:, self.start_index :])
- features = torch.cat((x[:, self.start_index :], readout), -1)
-
- return self.project(features)
-
-
-class Transpose(nn.Module):
- def __init__(self, dim0, dim1):
- super(Transpose, self).__init__()
- self.dim0 = dim0
- self.dim1 = dim1
-
- def forward(self, x):
- x = x.transpose(self.dim0, self.dim1)
- return x
-
-
-def forward_vit(pretrained, x):
- b, c, h, w = x.shape
-
- glob = pretrained.model.forward_flex(x)
-
- layer_1 = pretrained.activations["1"]
- layer_2 = pretrained.activations["2"]
- layer_3 = pretrained.activations["3"]
- layer_4 = pretrained.activations["4"]
-
- layer_1 = pretrained.act_postprocess1[0:2](layer_1)
- layer_2 = pretrained.act_postprocess2[0:2](layer_2)
- layer_3 = pretrained.act_postprocess3[0:2](layer_3)
- layer_4 = pretrained.act_postprocess4[0:2](layer_4)
-
- unflatten = nn.Sequential(
- nn.Unflatten(
- 2,
- torch.Size(
- [
- h // pretrained.model.patch_size[1],
- w // pretrained.model.patch_size[0],
- ]
- ),
- )
- )
-
- if layer_1.ndim == 3:
- layer_1 = unflatten(layer_1)
- if layer_2.ndim == 3:
- layer_2 = unflatten(layer_2)
- if layer_3.ndim == 3:
- layer_3 = unflatten(layer_3)
- if layer_4.ndim == 3:
- layer_4 = unflatten(layer_4)
-
- layer_1 = pretrained.act_postprocess1[3 : len(pretrained.act_postprocess1)](layer_1)
- layer_2 = pretrained.act_postprocess2[3 : len(pretrained.act_postprocess2)](layer_2)
- layer_3 = pretrained.act_postprocess3[3 : len(pretrained.act_postprocess3)](layer_3)
- layer_4 = pretrained.act_postprocess4[3 : len(pretrained.act_postprocess4)](layer_4)
-
- return layer_1, layer_2, layer_3, layer_4
-
-
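-# Interpolate the learned position embeddings to a new (gs_h x gs_w) patch grid, keeping the leading class/distillation tokens unchanged.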
-def _resize_pos_embed(self, posemb, gs_h, gs_w):
- posemb_tok, posemb_grid = (
- posemb[:, : self.start_index],
- posemb[0, self.start_index :],
- )
-
- gs_old = int(math.sqrt(len(posemb_grid)))
-
- posemb_grid = posemb_grid.reshape(1, gs_old, gs_old, -1).permute(0, 3, 1, 2)
- posemb_grid = F.interpolate(posemb_grid, size=(gs_h, gs_w), mode="bilinear")
- posemb_grid = posemb_grid.permute(0, 2, 3, 1).reshape(1, gs_h * gs_w, -1)
-
- posemb = torch.cat([posemb_tok, posemb_grid], dim=1)
-
- return posemb
-
-
-def forward_flex(self, x):
- b, c, h, w = x.shape
-
- pos_embed = self._resize_pos_embed(
- self.pos_embed, h // self.patch_size[1], w // self.patch_size[0]
- )
-
- B = x.shape[0]
-
- if hasattr(self.patch_embed, "backbone"):
- x = self.patch_embed.backbone(x)
- if isinstance(x, (list, tuple)):
- x = x[-1] # last feature if backbone outputs list/tuple of features
-
- x = self.patch_embed.proj(x).flatten(2).transpose(1, 2)
-
- if getattr(self, "dist_token", None) is not None:
- cls_tokens = self.cls_token.expand(
- B, -1, -1
- ) # stole cls_tokens impl from Phil Wang, thanks
- dist_token = self.dist_token.expand(B, -1, -1)
- x = torch.cat((cls_tokens, dist_token, x), dim=1)
- else:
- cls_tokens = self.cls_token.expand(
- B, -1, -1
- ) # stole cls_tokens impl from Phil Wang, thanks
- x = torch.cat((cls_tokens, x), dim=1)
-
- x = x + pos_embed
- x = self.pos_drop(x)
-
- for blk in self.blocks:
- x = blk(x)
-
- x = self.norm(x)
-
- return x
-
-
-activations = {}
-
-
-def get_activation(name):
- def hook(model, input, output):
- activations[name] = output
-
- return hook
-
-
-def get_readout_oper(vit_features, features, use_readout, start_index=1):
- if use_readout == "ignore":
- readout_oper = [Slice(start_index)] * len(features)
- elif use_readout == "add":
- readout_oper = [AddReadout(start_index)] * len(features)
- elif use_readout == "project":
- readout_oper = [
- ProjectReadout(vit_features, start_index) for out_feat in features
- ]
- else:
- assert (
- False
- ), "wrong operation for readout token, use_readout can be 'ignore', 'add', or 'project'"
-
- return readout_oper
-
-
-def _make_vit_b16_backbone(
- model,
- features=[96, 192, 384, 768],
- size=[384, 384],
- hooks=[2, 5, 8, 11],
- vit_features=768,
- use_readout="ignore",
- start_index=1,
-):
- pretrained = nn.Module()
-
- pretrained.model = model
- pretrained.model.blocks[hooks[0]].register_forward_hook(get_activation("1"))
- pretrained.model.blocks[hooks[1]].register_forward_hook(get_activation("2"))
- pretrained.model.blocks[hooks[2]].register_forward_hook(get_activation("3"))
- pretrained.model.blocks[hooks[3]].register_forward_hook(get_activation("4"))
-
- pretrained.activations = activations
-
- readout_oper = get_readout_oper(vit_features, features, use_readout, start_index)
-
- # 32, 48, 136, 384
- pretrained.act_postprocess1 = nn.Sequential(
- readout_oper[0],
- Transpose(1, 2),
- nn.Unflatten(2, torch.Size([size[0] // 16, size[1] // 16])),
- nn.Conv2d(
- in_channels=vit_features,
- out_channels=features[0],
- kernel_size=1,
- stride=1,
- padding=0,
- ),
- nn.ConvTranspose2d(
- in_channels=features[0],
- out_channels=features[0],
- kernel_size=4,
- stride=4,
- padding=0,
- bias=True,
- dilation=1,
- groups=1,
- ),
- )
-
- pretrained.act_postprocess2 = nn.Sequential(
- readout_oper[1],
- Transpose(1, 2),
- nn.Unflatten(2, torch.Size([size[0] // 16, size[1] // 16])),
- nn.Conv2d(
- in_channels=vit_features,
- out_channels=features[1],
- kernel_size=1,
- stride=1,
- padding=0,
- ),
- nn.ConvTranspose2d(
- in_channels=features[1],
- out_channels=features[1],
- kernel_size=2,
- stride=2,
- padding=0,
- bias=True,
- dilation=1,
- groups=1,
- ),
- )
-
- pretrained.act_postprocess3 = nn.Sequential(
- readout_oper[2],
- Transpose(1, 2),
- nn.Unflatten(2, torch.Size([size[0] // 16, size[1] // 16])),
- nn.Conv2d(
- in_channels=vit_features,
- out_channels=features[2],
- kernel_size=1,
- stride=1,
- padding=0,
- ),
- )
-
- pretrained.act_postprocess4 = nn.Sequential(
- readout_oper[3],
- Transpose(1, 2),
- nn.Unflatten(2, torch.Size([size[0] // 16, size[1] // 16])),
- nn.Conv2d(
- in_channels=vit_features,
- out_channels=features[3],
- kernel_size=1,
- stride=1,
- padding=0,
- ),
- nn.Conv2d(
- in_channels=features[3],
- out_channels=features[3],
- kernel_size=3,
- stride=2,
- padding=1,
- ),
- )
-
- pretrained.model.start_index = start_index
- pretrained.model.patch_size = [16, 16]
-
- # We inject this function into the VisionTransformer instances so that
- # we can use it with interpolated position embeddings without modifying the library source.
- pretrained.model.forward_flex = types.MethodType(forward_flex, pretrained.model)
- pretrained.model._resize_pos_embed = types.MethodType(
- _resize_pos_embed, pretrained.model
- )
-
- return pretrained
-
-
-def _make_pretrained_vitl16_384(pretrained, use_readout="ignore", hooks=None):
- model = timm.create_model("vit_large_patch16_384", pretrained=pretrained)
-
- hooks = [5, 11, 17, 23] if hooks == None else hooks
- return _make_vit_b16_backbone(
- model,
- features=[256, 512, 1024, 1024],
- hooks=hooks,
- vit_features=1024,
- use_readout=use_readout,
- )
-
-
-def _make_pretrained_vitb16_384(pretrained, use_readout="ignore", hooks=None):
- model = timm.create_model("vit_base_patch16_384", pretrained=pretrained)
-
- hooks = [2, 5, 8, 11] if hooks == None else hooks
- return _make_vit_b16_backbone(
- model, features=[96, 192, 384, 768], hooks=hooks, use_readout=use_readout
- )
-
-
-def _make_pretrained_deitb16_384(pretrained, use_readout="ignore", hooks=None):
- model = timm.create_model("vit_deit_base_patch16_384", pretrained=pretrained)
-
- hooks = [2, 5, 8, 11] if hooks == None else hooks
- return _make_vit_b16_backbone(
- model, features=[96, 192, 384, 768], hooks=hooks, use_readout=use_readout
- )
-
-
-def _make_pretrained_deitb16_distil_384(pretrained, use_readout="ignore", hooks=None):
- model = timm.create_model(
- "vit_deit_base_distilled_patch16_384", pretrained=pretrained
- )
-
- hooks = [2, 5, 8, 11] if hooks == None else hooks
- return _make_vit_b16_backbone(
- model,
- features=[96, 192, 384, 768],
- hooks=hooks,
- use_readout=use_readout,
- start_index=2,
- )
-
-
-def _make_vit_b_rn50_backbone(
- model,
- features=[256, 512, 768, 768],
- size=[384, 384],
- hooks=[0, 1, 8, 11],
- vit_features=768,
- use_vit_only=False,
- use_readout="ignore",
- start_index=1,
-):
- pretrained = nn.Module()
-
- pretrained.model = model
-
- if use_vit_only == True:
- pretrained.model.blocks[hooks[0]].register_forward_hook(get_activation("1"))
- pretrained.model.blocks[hooks[1]].register_forward_hook(get_activation("2"))
- else:
- pretrained.model.patch_embed.backbone.stages[0].register_forward_hook(
- get_activation("1")
- )
- pretrained.model.patch_embed.backbone.stages[1].register_forward_hook(
- get_activation("2")
- )
-
- pretrained.model.blocks[hooks[2]].register_forward_hook(get_activation("3"))
- pretrained.model.blocks[hooks[3]].register_forward_hook(get_activation("4"))
-
- pretrained.activations = activations
-
- readout_oper = get_readout_oper(vit_features, features, use_readout, start_index)
-
- if use_vit_only == True:
- pretrained.act_postprocess1 = nn.Sequential(
- readout_oper[0],
- Transpose(1, 2),
- nn.Unflatten(2, torch.Size([size[0] // 16, size[1] // 16])),
- nn.Conv2d(
- in_channels=vit_features,
- out_channels=features[0],
- kernel_size=1,
- stride=1,
- padding=0,
- ),
- nn.ConvTranspose2d(
- in_channels=features[0],
- out_channels=features[0],
- kernel_size=4,
- stride=4,
- padding=0,
- bias=True,
- dilation=1,
- groups=1,
- ),
- )
-
- pretrained.act_postprocess2 = nn.Sequential(
- readout_oper[1],
- Transpose(1, 2),
- nn.Unflatten(2, torch.Size([size[0] // 16, size[1] // 16])),
- nn.Conv2d(
- in_channels=vit_features,
- out_channels=features[1],
- kernel_size=1,
- stride=1,
- padding=0,
- ),
- nn.ConvTranspose2d(
- in_channels=features[1],
- out_channels=features[1],
- kernel_size=2,
- stride=2,
- padding=0,
- bias=True,
- dilation=1,
- groups=1,
- ),
- )
- else:
- pretrained.act_postprocess1 = nn.Sequential(
- nn.Identity(), nn.Identity(), nn.Identity()
- )
- pretrained.act_postprocess2 = nn.Sequential(
- nn.Identity(), nn.Identity(), nn.Identity()
- )
-
- pretrained.act_postprocess3 = nn.Sequential(
- readout_oper[2],
- Transpose(1, 2),
- nn.Unflatten(2, torch.Size([size[0] // 16, size[1] // 16])),
- nn.Conv2d(
- in_channels=vit_features,
- out_channels=features[2],
- kernel_size=1,
- stride=1,
- padding=0,
- ),
- )
-
- pretrained.act_postprocess4 = nn.Sequential(
- readout_oper[3],
- Transpose(1, 2),
- nn.Unflatten(2, torch.Size([size[0] // 16, size[1] // 16])),
- nn.Conv2d(
- in_channels=vit_features,
- out_channels=features[3],
- kernel_size=1,
- stride=1,
- padding=0,
- ),
- nn.Conv2d(
- in_channels=features[3],
- out_channels=features[3],
- kernel_size=3,
- stride=2,
- padding=1,
- ),
- )
-
- pretrained.model.start_index = start_index
- pretrained.model.patch_size = [16, 16]
-
- # We inject this function into the VisionTransformer instances so that
- # we can use it with interpolated position embeddings without modifying the library source.
- pretrained.model.forward_flex = types.MethodType(forward_flex, pretrained.model)
-
- # We inject this function into the VisionTransformer instances so that
- # we can use it with interpolated position embeddings without modifying the library source.
- pretrained.model._resize_pos_embed = types.MethodType(
- _resize_pos_embed, pretrained.model
- )
-
- return pretrained
-
-
-def _make_pretrained_vitb_rn50_384(
- pretrained, use_readout="ignore", hooks=None, use_vit_only=False
-):
- model = timm.create_model("vit_base_resnet50_384", pretrained=pretrained)
-
- hooks = [0, 1, 8, 11] if hooks == None else hooks
- return _make_vit_b_rn50_backbone(
- model,
- features=[256, 512, 768, 768],
- size=[384, 384],
- hooks=hooks,
- use_vit_only=use_vit_only,
- use_readout=use_readout,
- )
diff --git a/spaces/ATang0729/Forecast4Muses/Model/Model6/Model6_2_ProfileRecogition/__init__.py b/spaces/ATang0729/Forecast4Muses/Model/Model6/Model6_2_ProfileRecogition/__init__.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/AbandonedMuse/UnlimitedMusicGen/audiocraft/models/__init__.py b/spaces/AbandonedMuse/UnlimitedMusicGen/audiocraft/models/__init__.py
deleted file mode 100644
index 92c7a48a200eba455044cd66e0d2c1efe6494f5c..0000000000000000000000000000000000000000
--- a/spaces/AbandonedMuse/UnlimitedMusicGen/audiocraft/models/__init__.py
+++ /dev/null
@@ -1,10 +0,0 @@
-# Copyright (c) Meta Platforms, Inc. and affiliates.
-# All rights reserved.
-#
-# This source code is licensed under the license found in the
-# LICENSE file in the root directory of this source tree.
-
-# flake8: noqa
-from .musicgen import MusicGen
-from .lm import LMModel
-from .encodec import CompressionModel, EncodecModel
diff --git a/spaces/AchyuthGamer/OpenGPT/README.md b/spaces/AchyuthGamer/OpenGPT/README.md
deleted file mode 100644
index ca7f7dffc2697555bdee0feecc31a2d092db3b3e..0000000000000000000000000000000000000000
--- a/spaces/AchyuthGamer/OpenGPT/README.md
+++ /dev/null
@@ -1,173 +0,0 @@
----
-license: creativeml-openrail-m
-title: OpenGPT
-emoji: 🚀
-colorFrom: blue
-colorTo: green
-pinned: true
-sdk: gradio
-app_file: run.py
----
-# FreeGPT WebUI v2
-
-
-## GPT 3.5/4
-
-NO API KEY REQUIRED ❌🔑
-
-This project features a WebUI utilizing the [G4F API](https://github.com/xtekky/gpt4free).
-Experience the power of ChatGPT with a user-friendly interface, enhanced jailbreaks, and completely free.
-
-**Important!** Don't be afraid to ask a question or write about any problem in the "issue".
-We will solve a question or a problem together! 🌍
-
-You can [buy me coffee](https://boosty.to/vadimboev/donate) here ☕🤎
-
-## Known bugs 🚧
-- Stream mode not working properly.
-- Operation timed out after 30000 milliseconds
-- Web Access is not working.
-This is because the API that was used earlier in the "freegpt-webui" repository from ramonvc stopped working. It will be fixed later.
-
-## Features v2 📢
-- Updated g4f
-- Fixes to make everything work
-
-## Project Hosting and Demonstration 🌐🚀
-The project is hosted on multiple platforms to be tested and modified.
-|Platform|Status|API Key|Free|Repo|Demo|
-|--|--|--|--|--|--|
-|[My site](http://vadimboev.ru:1338/)||◼️|☑️|[FreeGPT WebUI](https://github.com/VadimBoev/freegpt-webui-v2)|[Chat](http://vadimboev.ru:1338/)
-
-## Table of Contents
-- [To-Do List](#to-do-list-%EF%B8%8F)
-- [Getting Started](#getting-started-white_check_mark)
- - [Cloning the Repository](#cloning-the-repository-inbox_tray)
- - [Install Dependencies](#install-dependencies-wrench)
-- [Running the Application](#running-the-application-rocket)
-- [Docker](#docker-)
- - [Prerequisites](#prerequisites)
- - [Running the Docker](#running-the-docker)
-- [Incorporated Projects](#incorporated-projects-busts_in_silhouette)
- - [WebUI](#webui)
- - [API FreeGPT](#api-g4f)
-- [Star History](#star-history)
-- [Legal Notice](#legal-notice)
-
-## Getting Started :white_check_mark:
-To get started with this project, you'll need to clone the repository and have [Python](https://www.python.org/downloads/) installed on your system.
-(Version 3.10+ is recommended. It also works for me on 3.9.2 in debian 11).
-
-### Cloning the Repository :inbox_tray:
-Run the following command to clone the repository:
-
-```
-git clone https://github.com/VadimBoev/freegpt-webui-v2.git
-```
-
-### Install Dependencies :wrench:
-Navigate to the project directory:
-```
-cd freegpt-webui-v2
-```
-
-Install the dependencies:
-```
-pip install -r requirements.txt
-```
-## Running the Application :rocket:
-To run the application, run the following command:
-```
-python run.py
-```
-
-Access the application in your browser using the URL:
-```
-http://127.0.0.1:1338
-```
-or
-```
-http://localhost:1338
-```
-
-## Docker 🐳
-### Prerequisites
-Before you start, make sure you have installed [Docker](https://www.docker.com/get-started) on your machine.
-
-### Running the Docker
-Pull the Docker image from Docker Hub:
-```
-docker pull VadimBoev/freegpt-webui-v2
-```
-
-Run the application using Docker:
-```
-docker run -p 1338:1338 VadimBoev/freegpt-webui-v2
-```
-
-Access the application in your browser using the URL:
-```
-http://127.0.0.1:1338
-```
-or
-```
-http://localhost:1338
-```
-
-When you're done using the application, stop the Docker containers using the following command:
-```
-docker stop <container_id>
-```
-
-## Incorporated Projects :busts_in_silhouette:
-I highly recommend visiting and supporting both projects.
-
-### WebUI
-The application interface was incorporated from the [chatgpt-clone](https://github.com/xtekky/chatgpt-clone) repository.
-
-### API G4F
-The free GPT-4 API was incorporated from the [GPT4Free](https://github.com/xtekky/gpt4free) repository.
-
-
-
-## Star History
-[](https://star-history.com/#VadimBoev/freegpt-webui-v2&Timeline)
-
-
-
-## Legal Notice
-This repository is _not_ associated with or endorsed by providers of the APIs contained in this GitHub repository. This
-project is intended **for educational purposes only**. This is just a little personal project. Sites may contact me to
-improve their security or request the removal of their site from this repository.
-
-Please note the following:
-
-1. **Disclaimer**: The APIs, services, and trademarks mentioned in this repository belong to their respective owners.
- This project is _not_ claiming any right over them nor is it affiliated with or endorsed by any of the providers
- mentioned.
-
-2. **Responsibility**: The author of this repository is _not_ responsible for any consequences, damages, or losses
- arising from the use or misuse of this repository or the content provided by the third-party APIs. Users are solely
- responsible for their actions and any repercussions that may follow. We strongly recommend the users to follow the
- TOS of the each Website.
-
-3. **Educational Purposes Only**: This repository and its content are provided strictly for educational purposes. By
- using the information and code provided, users acknowledge that they are using the APIs and models at their own risk
- and agree to comply with any applicable laws and regulations.
-
-4. **Copyright**: All content in this repository, including but not limited to code, images, and documentation, is the
- intellectual property of the repository author, unless otherwise stated. Unauthorized copying, distribution, or use
- of any content in this repository is strictly prohibited without the express written consent of the repository
- author.
-
-5. **Indemnification**: Users agree to indemnify, defend, and hold harmless the author of this repository from and
- against any and all claims, liabilities, damages, losses, or expenses, including legal fees and costs, arising out of
- or in any way connected with their use or misuse of this repository, its content, or related third-party APIs.
-
-6. **Updates and Changes**: The author reserves the right to modify, update, or remove any content, information, or
- features in this repository at any time without prior notice. Users are responsible for regularly reviewing the
- content and any changes made to this repository.
-
-By using this repository or any code related to it, you agree to these terms. The author is not responsible for any
-copies, forks, or reuploads made by other users. This is the author's only account and repository. To prevent
-impersonation or irresponsible actions, you must comply with the GNU GPL license that this repository uses.
\ No newline at end of file
diff --git a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/sizer/GetChildrenHeight.js b/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/sizer/GetChildrenHeight.js
deleted file mode 100644
index 8f60f6dc8620faf62af9192a494ca9b948adef3f..0000000000000000000000000000000000000000
--- a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/sizer/GetChildrenHeight.js
+++ /dev/null
@@ -1,58 +0,0 @@
-var GetChildrenHeight = function (minimumMode) {
- if (this.rexSizer.hidden) {
- return 0;
- }
-
- if (minimumMode === undefined) {
- minimumMode = true;
- }
-
- var result = 0;
- var children = this.sizerChildren;
- var child, padding, childHeight;
- if (this.orientation === 0) { // x
-        // Get the maximum child height
- for (var i = 0, cnt = children.length; i < cnt; i++) {
- child = children[i];
- if (child.rexSizer.hidden) {
- continue;
- }
-
- padding = child.rexSizer.padding;
- childHeight = this.getChildHeight(child) + padding.top + padding.bottom;
- result = Math.max(childHeight, result);
- }
- } else {
-        // Get the sum of the minimum heights
- var itemSpace = this.space.item;
- var isFirstChild = true;
- for (var i = 0, cnt = children.length; i < cnt; i++) {
- child = children[i];
- if (!child.hasOwnProperty('rexSizer')) {
- continue;
- }
- if (child.rexSizer.hidden) {
- continue;
- }
-
- if ((child.rexSizer.proportion === 0) || minimumMode) {
- childHeight = this.getChildHeight(child);
- } else {
- childHeight = 0;
- }
- padding = child.rexSizer.padding;
- childHeight += (padding.top + padding.bottom);
-
- if (isFirstChild) {
- isFirstChild = false;
- } else {
- childHeight += itemSpace;
- }
-
- result += childHeight;
- }
- }
- return result + this.space.top + this.space.bottom;
-}
-
-export default GetChildrenHeight;
\ No newline at end of file
diff --git a/spaces/Akmyradov/TurkmenTTSweSTT/vits/attentions.py b/spaces/Akmyradov/TurkmenTTSweSTT/vits/attentions.py
deleted file mode 100644
index 4e0b0c1fd48c962e21e1fbe60b23fc574927435c..0000000000000000000000000000000000000000
--- a/spaces/Akmyradov/TurkmenTTSweSTT/vits/attentions.py
+++ /dev/null
@@ -1,303 +0,0 @@
-import copy
-import math
-import numpy as np
-import torch
-from torch import nn
-from torch.nn import functional as F
-
-import commons
-import modules
-from modules import LayerNorm
-
-
-class Encoder(nn.Module):
- def __init__(self, hidden_channels, filter_channels, n_heads, n_layers, kernel_size=1, p_dropout=0., window_size=4, **kwargs):
- super().__init__()
- self.hidden_channels = hidden_channels
- self.filter_channels = filter_channels
- self.n_heads = n_heads
- self.n_layers = n_layers
- self.kernel_size = kernel_size
- self.p_dropout = p_dropout
- self.window_size = window_size
-
- self.drop = nn.Dropout(p_dropout)
- self.attn_layers = nn.ModuleList()
- self.norm_layers_1 = nn.ModuleList()
- self.ffn_layers = nn.ModuleList()
- self.norm_layers_2 = nn.ModuleList()
- for i in range(self.n_layers):
- self.attn_layers.append(MultiHeadAttention(hidden_channels, hidden_channels, n_heads, p_dropout=p_dropout, window_size=window_size))
- self.norm_layers_1.append(LayerNorm(hidden_channels))
- self.ffn_layers.append(FFN(hidden_channels, hidden_channels, filter_channels, kernel_size, p_dropout=p_dropout))
- self.norm_layers_2.append(LayerNorm(hidden_channels))
-
- def forward(self, x, x_mask):
- attn_mask = x_mask.unsqueeze(2) * x_mask.unsqueeze(-1)
- x = x * x_mask
- for i in range(self.n_layers):
- y = self.attn_layers[i](x, x, attn_mask)
- y = self.drop(y)
- x = self.norm_layers_1[i](x + y)
-
- y = self.ffn_layers[i](x, x_mask)
- y = self.drop(y)
- x = self.norm_layers_2[i](x + y)
- x = x * x_mask
- return x
-
-
-class Decoder(nn.Module):
- def __init__(self, hidden_channels, filter_channels, n_heads, n_layers, kernel_size=1, p_dropout=0., proximal_bias=False, proximal_init=True, **kwargs):
- super().__init__()
- self.hidden_channels = hidden_channels
- self.filter_channels = filter_channels
- self.n_heads = n_heads
- self.n_layers = n_layers
- self.kernel_size = kernel_size
- self.p_dropout = p_dropout
- self.proximal_bias = proximal_bias
- self.proximal_init = proximal_init
-
- self.drop = nn.Dropout(p_dropout)
- self.self_attn_layers = nn.ModuleList()
- self.norm_layers_0 = nn.ModuleList()
- self.encdec_attn_layers = nn.ModuleList()
- self.norm_layers_1 = nn.ModuleList()
- self.ffn_layers = nn.ModuleList()
- self.norm_layers_2 = nn.ModuleList()
- for i in range(self.n_layers):
- self.self_attn_layers.append(MultiHeadAttention(hidden_channels, hidden_channels, n_heads, p_dropout=p_dropout, proximal_bias=proximal_bias, proximal_init=proximal_init))
- self.norm_layers_0.append(LayerNorm(hidden_channels))
- self.encdec_attn_layers.append(MultiHeadAttention(hidden_channels, hidden_channels, n_heads, p_dropout=p_dropout))
- self.norm_layers_1.append(LayerNorm(hidden_channels))
- self.ffn_layers.append(FFN(hidden_channels, hidden_channels, filter_channels, kernel_size, p_dropout=p_dropout, causal=True))
- self.norm_layers_2.append(LayerNorm(hidden_channels))
-
- def forward(self, x, x_mask, h, h_mask):
- """
- x: decoder input
- h: encoder output
- """
- self_attn_mask = commons.subsequent_mask(x_mask.size(2)).to(device=x.device, dtype=x.dtype)
- encdec_attn_mask = h_mask.unsqueeze(2) * x_mask.unsqueeze(-1)
- x = x * x_mask
- for i in range(self.n_layers):
- y = self.self_attn_layers[i](x, x, self_attn_mask)
- y = self.drop(y)
- x = self.norm_layers_0[i](x + y)
-
- y = self.encdec_attn_layers[i](x, h, encdec_attn_mask)
- y = self.drop(y)
- x = self.norm_layers_1[i](x + y)
-
- y = self.ffn_layers[i](x, x_mask)
- y = self.drop(y)
- x = self.norm_layers_2[i](x + y)
- x = x * x_mask
- return x
-
-
-class MultiHeadAttention(nn.Module):
- def __init__(self, channels, out_channels, n_heads, p_dropout=0., window_size=None, heads_share=True, block_length=None, proximal_bias=False, proximal_init=False):
- super().__init__()
- assert channels % n_heads == 0
-
- self.channels = channels
- self.out_channels = out_channels
- self.n_heads = n_heads
- self.p_dropout = p_dropout
- self.window_size = window_size
- self.heads_share = heads_share
- self.block_length = block_length
- self.proximal_bias = proximal_bias
- self.proximal_init = proximal_init
- self.attn = None
-
- self.k_channels = channels // n_heads
- self.conv_q = nn.Conv1d(channels, channels, 1)
- self.conv_k = nn.Conv1d(channels, channels, 1)
- self.conv_v = nn.Conv1d(channels, channels, 1)
- self.conv_o = nn.Conv1d(channels, out_channels, 1)
- self.drop = nn.Dropout(p_dropout)
-
- if window_size is not None:
- n_heads_rel = 1 if heads_share else n_heads
- rel_stddev = self.k_channels**-0.5
- self.emb_rel_k = nn.Parameter(torch.randn(n_heads_rel, window_size * 2 + 1, self.k_channels) * rel_stddev)
- self.emb_rel_v = nn.Parameter(torch.randn(n_heads_rel, window_size * 2 + 1, self.k_channels) * rel_stddev)
-
- nn.init.xavier_uniform_(self.conv_q.weight)
- nn.init.xavier_uniform_(self.conv_k.weight)
- nn.init.xavier_uniform_(self.conv_v.weight)
- if proximal_init:
- with torch.no_grad():
- self.conv_k.weight.copy_(self.conv_q.weight)
- self.conv_k.bias.copy_(self.conv_q.bias)
-
- def forward(self, x, c, attn_mask=None):
- q = self.conv_q(x)
- k = self.conv_k(c)
- v = self.conv_v(c)
-
- x, self.attn = self.attention(q, k, v, mask=attn_mask)
-
- x = self.conv_o(x)
- return x
-
- def attention(self, query, key, value, mask=None):
- # reshape [b, d, t] -> [b, n_h, t, d_k]
- b, d, t_s, t_t = (*key.size(), query.size(2))
- query = query.view(b, self.n_heads, self.k_channels, t_t).transpose(2, 3)
- key = key.view(b, self.n_heads, self.k_channels, t_s).transpose(2, 3)
- value = value.view(b, self.n_heads, self.k_channels, t_s).transpose(2, 3)
-
- scores = torch.matmul(query / math.sqrt(self.k_channels), key.transpose(-2, -1))
- if self.window_size is not None:
- assert t_s == t_t, "Relative attention is only available for self-attention."
- key_relative_embeddings = self._get_relative_embeddings(self.emb_rel_k, t_s)
- rel_logits = self._matmul_with_relative_keys(query /math.sqrt(self.k_channels), key_relative_embeddings)
- scores_local = self._relative_position_to_absolute_position(rel_logits)
- scores = scores + scores_local
- if self.proximal_bias:
- assert t_s == t_t, "Proximal bias is only available for self-attention."
- scores = scores + self._attention_bias_proximal(t_s).to(device=scores.device, dtype=scores.dtype)
- if mask is not None:
- scores = scores.masked_fill(mask == 0, -1e4)
- if self.block_length is not None:
- assert t_s == t_t, "Local attention is only available for self-attention."
- block_mask = torch.ones_like(scores).triu(-self.block_length).tril(self.block_length)
- scores = scores.masked_fill(block_mask == 0, -1e4)
- p_attn = F.softmax(scores, dim=-1) # [b, n_h, t_t, t_s]
- p_attn = self.drop(p_attn)
- output = torch.matmul(p_attn, value)
- if self.window_size is not None:
- relative_weights = self._absolute_position_to_relative_position(p_attn)
- value_relative_embeddings = self._get_relative_embeddings(self.emb_rel_v, t_s)
- output = output + self._matmul_with_relative_values(relative_weights, value_relative_embeddings)
- output = output.transpose(2, 3).contiguous().view(b, d, t_t) # [b, n_h, t_t, d_k] -> [b, d, t_t]
- return output, p_attn
-
- def _matmul_with_relative_values(self, x, y):
- """
- x: [b, h, l, m]
- y: [h or 1, m, d]
- ret: [b, h, l, d]
- """
- ret = torch.matmul(x, y.unsqueeze(0))
- return ret
-
- def _matmul_with_relative_keys(self, x, y):
- """
- x: [b, h, l, d]
- y: [h or 1, m, d]
- ret: [b, h, l, m]
- """
- ret = torch.matmul(x, y.unsqueeze(0).transpose(-2, -1))
- return ret
-
- def _get_relative_embeddings(self, relative_embeddings, length):
- max_relative_position = 2 * self.window_size + 1
- # Pad first before slice to avoid using cond ops.
- pad_length = max(length - (self.window_size + 1), 0)
- slice_start_position = max((self.window_size + 1) - length, 0)
- slice_end_position = slice_start_position + 2 * length - 1
- if pad_length > 0:
- padded_relative_embeddings = F.pad(
- relative_embeddings,
- commons.convert_pad_shape([[0, 0], [pad_length, pad_length], [0, 0]]))
- else:
- padded_relative_embeddings = relative_embeddings
- used_relative_embeddings = padded_relative_embeddings[:,slice_start_position:slice_end_position]
- return used_relative_embeddings
-
- def _relative_position_to_absolute_position(self, x):
- """
- x: [b, h, l, 2*l-1]
- ret: [b, h, l, l]
- """
- batch, heads, length, _ = x.size()
- # Concat columns of pad to shift from relative to absolute indexing.
- x = F.pad(x, commons.convert_pad_shape([[0,0],[0,0],[0,0],[0,1]]))
-
-        # Concat extra elements so as to add up to shape (len+1, 2*len-1).
- x_flat = x.view([batch, heads, length * 2 * length])
- x_flat = F.pad(x_flat, commons.convert_pad_shape([[0,0],[0,0],[0,length-1]]))
-
- # Reshape and slice out the padded elements.
- x_final = x_flat.view([batch, heads, length+1, 2*length-1])[:, :, :length, length-1:]
- return x_final
-
- def _absolute_position_to_relative_position(self, x):
- """
- x: [b, h, l, l]
- ret: [b, h, l, 2*l-1]
- """
- batch, heads, length, _ = x.size()
-        # pad along the column dimension
- x = F.pad(x, commons.convert_pad_shape([[0, 0], [0, 0], [0, 0], [0, length-1]]))
- x_flat = x.view([batch, heads, length**2 + length*(length -1)])
-        # add 0's at the beginning so that the elements are shifted after the reshape
- x_flat = F.pad(x_flat, commons.convert_pad_shape([[0, 0], [0, 0], [length, 0]]))
- x_final = x_flat.view([batch, heads, length, 2*length])[:,:,:,1:]
- return x_final
-
- def _attention_bias_proximal(self, length):
- """Bias for self-attention to encourage attention to close positions.
- Args:
- length: an integer scalar.
- Returns:
- a Tensor with shape [1, 1, length, length]
- """
- r = torch.arange(length, dtype=torch.float32)
- diff = torch.unsqueeze(r, 0) - torch.unsqueeze(r, 1)
- return torch.unsqueeze(torch.unsqueeze(-torch.log1p(torch.abs(diff)), 0), 0)
-
-
-class FFN(nn.Module):
- def __init__(self, in_channels, out_channels, filter_channels, kernel_size, p_dropout=0., activation=None, causal=False):
- super().__init__()
- self.in_channels = in_channels
- self.out_channels = out_channels
- self.filter_channels = filter_channels
- self.kernel_size = kernel_size
- self.p_dropout = p_dropout
- self.activation = activation
- self.causal = causal
-
- if causal:
- self.padding = self._causal_padding
- else:
- self.padding = self._same_padding
-
- self.conv_1 = nn.Conv1d(in_channels, filter_channels, kernel_size)
- self.conv_2 = nn.Conv1d(filter_channels, out_channels, kernel_size)
- self.drop = nn.Dropout(p_dropout)
-
- def forward(self, x, x_mask):
- x = self.conv_1(self.padding(x * x_mask))
- if self.activation == "gelu":
- x = x * torch.sigmoid(1.702 * x)
- else:
- x = torch.relu(x)
- x = self.drop(x)
- x = self.conv_2(self.padding(x * x_mask))
- return x * x_mask
-
- def _causal_padding(self, x):
- if self.kernel_size == 1:
- return x
- pad_l = self.kernel_size - 1
- pad_r = 0
- padding = [[0, 0], [0, 0], [pad_l, pad_r]]
- x = F.pad(x, commons.convert_pad_shape(padding))
- return x
-
- def _same_padding(self, x):
- if self.kernel_size == 1:
- return x
- pad_l = (self.kernel_size - 1) // 2
- pad_r = self.kernel_size // 2
- padding = [[0, 0], [0, 0], [pad_l, pad_r]]
- x = F.pad(x, commons.convert_pad_shape(padding))
- return x
diff --git a/spaces/Amrrs/DragGan-Inversion/PTI/utils/log_utils.py b/spaces/Amrrs/DragGan-Inversion/PTI/utils/log_utils.py
deleted file mode 100644
index b29eae05f1c3ba34df60c074373b417c5420e836..0000000000000000000000000000000000000000
--- a/spaces/Amrrs/DragGan-Inversion/PTI/utils/log_utils.py
+++ /dev/null
@@ -1,79 +0,0 @@
-import numpy as np
-from PIL import Image
-import wandb
-from PTI.configs import global_config
-import torch
-import matplotlib.pyplot as plt
-
-
-def log_image_from_w(w, G, name):
- img = get_image_from_w(w, G)
- pillow_image = Image.fromarray(img)
- wandb.log(
- {f"{name}": [
- wandb.Image(pillow_image, caption=f"current inversion {name}")]},
- step=global_config.training_step)
-
-
-def log_images_from_w(ws, G, names):
- for name, w in zip(names, ws):
- w = w.to(global_config.device)
- log_image_from_w(w, G, name)
-
-
-def plot_image_from_w(w, G):
- img = get_image_from_w(w, G)
- pillow_image = Image.fromarray(img)
- plt.imshow(pillow_image)
- plt.show()
-
-
-def plot_image(img):
- img = (img.permute(0, 2, 3, 1) * 127.5 + 128).clamp(0, 255).to(torch.uint8).detach().cpu().numpy()
- pillow_image = Image.fromarray(img[0])
- plt.imshow(pillow_image)
- plt.show()
-
-
-def save_image(name, method_type, results_dir, image, run_id):
- image.save(f'{results_dir}/{method_type}_{name}_{run_id}.jpg')
-
-
-def save_w(w, G, name, method_type, results_dir, run_id):
-    im = get_image_from_w(w, G)
-    im = Image.fromarray(im, mode='RGB')
-    # save_image expects a run_id as its final argument
-    save_image(name, method_type, results_dir, im, run_id)
-
-
-def save_concat_image(base_dir, image_latents, new_inv_image_latent, new_G,
- old_G,
- file_name,
- extra_image=None):
- images_to_save = []
- if extra_image is not None:
- images_to_save.append(extra_image)
- for latent in image_latents:
- images_to_save.append(get_image_from_w(latent, old_G))
- images_to_save.append(get_image_from_w(new_inv_image_latent, new_G))
- result_image = create_alongside_images(images_to_save)
- result_image.save(f'{base_dir}/{file_name}.jpg')
-
-
-def save_single_image(base_dir, image_latent, G, file_name):
- image_to_save = get_image_from_w(image_latent, G)
- image_to_save = Image.fromarray(image_to_save, mode='RGB')
- image_to_save.save(f'{base_dir}/{file_name}.jpg')
-
-
-def create_alongside_images(images):
- res = np.concatenate([np.array(image) for image in images], axis=1)
- return Image.fromarray(res, mode='RGB')
-
-
-def get_image_from_w(w, G):
- if len(w.size()) <= 2:
- w = w.unsqueeze(0)
- with torch.no_grad():
- img = G.synthesis(w, noise_mode='const')
- img = (img.permute(0, 2, 3, 1) * 127.5 + 128).clamp(0, 255).to(torch.uint8).detach().cpu().numpy()
- return img[0]
diff --git a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/docs/source/ko/using-diffusers/depth2img.md b/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/docs/source/ko/using-diffusers/depth2img.md
deleted file mode 100644
index b5602e3081daa6089265e002cc4df1cd8473a1e3..0000000000000000000000000000000000000000
--- a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/docs/source/ko/using-diffusers/depth2img.md
+++ /dev/null
@@ -1,57 +0,0 @@
-
-
-# Text-guided depth-to-image generation
-
-[[open-in-colab]]
-
-With [`StableDiffusionDepth2ImgPipeline`], you can condition the generation of new images by passing a text prompt and an initial image. You can also pass a `depth_map` to preserve the structure of the image. If no `depth_map` is provided, the pipeline automatically predicts the depth through an integrated [depth-estimation model](https://github.com/isl-org/MiDaS).
-
-
-First, create an instance of [`StableDiffusionDepth2ImgPipeline`]:
-
-```python
-import torch
-import requests
-from PIL import Image
-
-from diffusers import StableDiffusionDepth2ImgPipeline
-
-pipe = StableDiffusionDepth2ImgPipeline.from_pretrained(
- "stabilityai/stable-diffusion-2-depth",
- torch_dtype=torch.float16,
-).to("cuda")
-```
-
-Now pass your prompt to the pipeline. You can also pass a `negative_prompt` to prevent certain words from guiding the image generation:
-
-```python
-url = "http://images.cocodataset.org/val2017/000000039769.jpg"
-init_image = Image.open(requests.get(url, stream=True).raw)
-prompt = "two tigers"
-n_prompt = "bad, deformed, ugly, bad anatomy"
-image = pipe(prompt=prompt, image=init_image, negative_prompt=n_prompt, strength=0.7).images[0]
-image
-```
-
-| Input | Output |
-|---------------------------------------------------------------------------------|---------------------------------------------------------------------------------------------------------------------------------------|
-| | |
-
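-If you already have a depth map for the input image (for example, one estimated offline), you can pass it to the pipeline explicitly instead of relying on the built-in estimator. A minimal sketch, assuming `depth_map` is a precomputed depth tensor for `init_image`:
-
-```python
-# Assumption: depth_map is a precomputed depth tensor matching init_image
-image = pipe(
-    prompt=prompt,
-    image=init_image,
-    depth_map=depth_map,  # skips the pipeline's automatic depth estimation
-    negative_prompt=n_prompt,
-    strength=0.7,
-).images[0]
-```
-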
-Play around with the Spaces below and check whether you can spot a difference between images generated with and without a depth map!
-
-
diff --git a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/tests/pipelines/deepfloyd_if/test_if_img2img_superresolution.py b/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/tests/pipelines/deepfloyd_if/test_if_img2img_superresolution.py
deleted file mode 100644
index 500557108aed05b9b01020964f13b15fdb9abed0..0000000000000000000000000000000000000000
--- a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/tests/pipelines/deepfloyd_if/test_if_img2img_superresolution.py
+++ /dev/null
@@ -1,85 +0,0 @@
-# coding=utf-8
-# Copyright 2023 HuggingFace Inc.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-import random
-import unittest
-
-import torch
-
-from diffusers import IFImg2ImgSuperResolutionPipeline
-from diffusers.utils import floats_tensor
-from diffusers.utils.import_utils import is_xformers_available
-from diffusers.utils.testing_utils import skip_mps, torch_device
-
-from ..pipeline_params import TEXT_GUIDED_IMAGE_VARIATION_BATCH_PARAMS, TEXT_GUIDED_IMAGE_VARIATION_PARAMS
-from ..test_pipelines_common import PipelineTesterMixin
-from . import IFPipelineTesterMixin
-
-
-@skip_mps
-class IFImg2ImgSuperResolutionPipelineFastTests(PipelineTesterMixin, IFPipelineTesterMixin, unittest.TestCase):
- pipeline_class = IFImg2ImgSuperResolutionPipeline
- params = TEXT_GUIDED_IMAGE_VARIATION_PARAMS - {"width", "height"}
- batch_params = TEXT_GUIDED_IMAGE_VARIATION_BATCH_PARAMS.union({"original_image"})
- required_optional_params = PipelineTesterMixin.required_optional_params - {"latents"}
-
- def get_dummy_components(self):
- return self._get_superresolution_dummy_components()
-
- def get_dummy_inputs(self, device, seed=0):
- if str(device).startswith("mps"):
- generator = torch.manual_seed(seed)
- else:
- generator = torch.Generator(device=device).manual_seed(seed)
-
- original_image = floats_tensor((1, 3, 32, 32), rng=random.Random(seed)).to(device)
- image = floats_tensor((1, 3, 16, 16), rng=random.Random(seed)).to(device)
-
- inputs = {
- "prompt": "A painting of a squirrel eating a burger",
- "image": image,
- "original_image": original_image,
- "generator": generator,
- "num_inference_steps": 2,
- "output_type": "numpy",
- }
-
- return inputs
-
- @unittest.skipIf(
- torch_device != "cuda" or not is_xformers_available(),
- reason="XFormers attention is only available with CUDA and `xformers` installed",
- )
- def test_xformers_attention_forwardGenerator_pass(self):
- self._test_xformers_attention_forwardGenerator_pass(expected_max_diff=1e-3)
-
- def test_save_load_optional_components(self):
- self._test_save_load_optional_components()
-
- @unittest.skipIf(torch_device != "cuda", reason="float16 requires CUDA")
- def test_save_load_float16(self):
- # Due to non-determinism in save load of the hf-internal-testing/tiny-random-t5 text encoder
- super().test_save_load_float16(expected_max_diff=1e-1)
-
- def test_attention_slicing_forward_pass(self):
- self._test_attention_slicing_forward_pass(expected_max_diff=1e-2)
-
- def test_save_load_local(self):
- self._test_save_load_local()
-
- def test_inference_batch_single_identical(self):
- self._test_inference_batch_single_identical(
- expected_max_diff=1e-2,
- )
diff --git a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/tests/pipelines/stable_unclip/__init__.py b/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/tests/pipelines/stable_unclip/__init__.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/Andy1621/uniformer_image_detection/configs/fsaf/README.md b/spaces/Andy1621/uniformer_image_detection/configs/fsaf/README.md
deleted file mode 100644
index 42468c8bf596d675d74e0c1d453e0641c5dc3b9c..0000000000000000000000000000000000000000
--- a/spaces/Andy1621/uniformer_image_detection/configs/fsaf/README.md
+++ /dev/null
@@ -1,45 +0,0 @@
-# Feature Selective Anchor-Free Module for Single-Shot Object Detection
-
-[ALGORITHM]
-
-FSAF is an anchor-free method published in CVPR2019 ([https://arxiv.org/pdf/1903.00621.pdf](https://arxiv.org/pdf/1903.00621.pdf)).
-In practice it is equivalent to an anchor-based method with a single anchor at each feature map position in each FPN level,
-and this is how we implemented it.
-Only the anchor-free branch is released, for better compatibility with the current framework and a smaller computational budget.
-
-In the original paper, feature map locations within the central 0.2-0.5 region of a ground-truth box are tagged as ignored. However,
-it was found empirically that a hard threshold (0.2-0.2) gives a further gain in performance (see the table below).
-
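-The ignore range is exposed through the assigner settings of the config. A rough sketch only (field names are from memory and should be checked against the released `fsaf_r50_fpn_1x_coco.py`):
-
-```python
-# pos_scale / neg_scale control the effective (positive) and ignore regions of
-# each ground-truth box; the released models use the hard 0.2-0.2 setting.
-train_cfg = dict(
-    assigner=dict(
-        type='CenterRegionAssigner',
-        pos_scale=0.2,
-        neg_scale=0.2))
-```
-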
-## Main Results
-
-### Results on R50/R101/X101-FPN
-
-| Backbone | ignore range | ms-train| Lr schd |Train Mem (GB)| Train time (s/iter) | Inf time (fps) | box AP | Config | Model | Log |
-|:----------:| :-------: |:-------:|:-------:|:------------:|:---------------:|:--------------:|:-------------:|:------:|:------:|:------:|
-| R-50 | 0.2-0.5 | N | 1x | 3.15 | 0.43 | 12.3 | 36.0 (35.9) | | [model](http://download.openmmlab.com/mmdetection/v2.0/fsaf/fsaf_pscale0.2_nscale0.5_r50_fpn_1x_coco/fsaf_pscale0.2_nscale0.5_r50_fpn_1x_coco_20200715-b555b0e0.pth) | [log](http://download.openmmlab.com/mmdetection/v2.0/fsaf/fsaf_pscale0.2_nscale0.5_r50_fpn_1x_coco/fsaf_pscale0.2_nscale0.5_r50_fpn_1x_coco_20200715_094657.log.json) |
-| R-50 | 0.2-0.2 | N | 1x | 3.15 | 0.43 | 13.0 | 37.4 | [config](https://github.com/open-mmlab/mmdetection/tree/master/configs/fsaf/fsaf_r50_fpn_1x_coco.py) | [model](http://download.openmmlab.com/mmdetection/v2.0/fsaf/fsaf_r50_fpn_1x_coco/fsaf_r50_fpn_1x_coco-94ccc51f.pth) | [log](http://download.openmmlab.com/mmdetection/v2.0/fsaf/fsaf_r50_fpn_1x_coco/fsaf_r50_fpn_1x_coco_20200428_072327.log.json)|
-| R-101 | 0.2-0.2 | N | 1x | 5.08 | 0.58 | 10.8 | 39.3 (37.9) | [config](https://github.com/open-mmlab/mmdetection/tree/master/configs/fsaf/fsaf_r101_fpn_1x_coco.py) | [model](http://download.openmmlab.com/mmdetection/v2.0/fsaf/fsaf_r101_fpn_1x_coco/fsaf_r101_fpn_1x_coco-9e71098f.pth) | [log](http://download.openmmlab.com/mmdetection/v2.0/fsaf/fsaf_r101_fpn_1x_coco/fsaf_r101_fpn_1x_coco_20200428_160348.log.json)|
-| X-101 | 0.2-0.2 | N | 1x | 9.38 | 1.23 | 5.6 | 42.4 (41.0) | [config](https://github.com/open-mmlab/mmdetection/tree/master/configs/fsaf/fsaf_x101_64x4d_fpn_1x_coco.py) | [model](http://download.openmmlab.com/mmdetection/v2.0/fsaf/fsaf_x101_64x4d_fpn_1x_coco/fsaf_x101_64x4d_fpn_1x_coco-e3f6e6fd.pth) | [log](http://download.openmmlab.com/mmdetection/v2.0/fsaf/fsaf_x101_64x4d_fpn_1x_coco/fsaf_x101_64x4d_fpn_1x_coco_20200428_160424.log.json)|
-
-**Notes:**
-
-- *1x means the model is trained for 12 epochs.*
-- *AP values in the brackets represent those reported in the original paper.*
-- *All results are obtained with a single model and single-scale test.*
-- *X-101 backbone represents ResNext-101-64x4d.*
-- *All pretrained backbones use pytorch style.*
-- *All models are trained on 8 Titan-XP gpus and tested on a single gpu.*
-
-## Citations
-
-BibTeX reference is as follows.
-
-```latex
-@inproceedings{zhu2019feature,
- title={Feature Selective Anchor-Free Module for Single-Shot Object Detection},
- author={Zhu, Chenchen and He, Yihui and Savvides, Marios},
- booktitle={Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition},
- pages={840--849},
- year={2019}
-}
-```
diff --git a/spaces/Andy1621/uniformer_image_detection/mmdet/core/bbox/iou_calculators/__init__.py b/spaces/Andy1621/uniformer_image_detection/mmdet/core/bbox/iou_calculators/__init__.py
deleted file mode 100644
index e71369a58a05fa25e6a754300875fdbb87cb26a5..0000000000000000000000000000000000000000
--- a/spaces/Andy1621/uniformer_image_detection/mmdet/core/bbox/iou_calculators/__init__.py
+++ /dev/null
@@ -1,4 +0,0 @@
-from .builder import build_iou_calculator
-from .iou2d_calculator import BboxOverlaps2D, bbox_overlaps
-
-__all__ = ['build_iou_calculator', 'BboxOverlaps2D', 'bbox_overlaps']
diff --git a/spaces/ArkanDash/rvc-models/vc_infer_pipeline.py b/spaces/ArkanDash/rvc-models/vc_infer_pipeline.py
deleted file mode 100644
index c26d45068f9b6bf2b194b13c3c89f8a06347c124..0000000000000000000000000000000000000000
--- a/spaces/ArkanDash/rvc-models/vc_infer_pipeline.py
+++ /dev/null
@@ -1,306 +0,0 @@
-import os
-import traceback
-from time import time as ttime
-
-import faiss
-import numpy as np
-import parselmouth
-import pyworld
-import torch
-import torch.nn.functional as F
-from scipy import signal
-
-from config import x_pad, x_query, x_center, x_max
-
-bh, ah = signal.butter(N=5, Wn=48, btype="high", fs=16000)
-
-
-class VC(object):
- def __init__(self, tgt_sr, device, is_half):
-        self.sr = 16000  # hubert input sample rate
-        self.window = 160  # samples per frame
-        self.t_pad = self.sr * x_pad  # padding duration before and after each chunk
- self.t_pad_tgt = tgt_sr * x_pad
- self.t_pad2 = self.t_pad * 2
-        self.t_query = self.sr * x_query  # search window around each candidate cut point
-        self.t_center = self.sr * x_center  # spacing between candidate cut points
-        self.t_max = self.sr * x_max  # duration threshold above which the audio is split
- self.device = device
- self.is_half = is_half
-
- def get_f0(self, x, p_len, f0_up_key, f0_method, inp_f0=None):
- time_step = self.window / self.sr * 1000
- f0_min = 50
- f0_max = 1100
- f0_mel_min = 1127 * np.log(1 + f0_min / 700)
- f0_mel_max = 1127 * np.log(1 + f0_max / 700)
- if f0_method == "pm":
- f0 = (
- parselmouth.Sound(x, self.sr)
- .to_pitch_ac(
- time_step=time_step / 1000,
- voicing_threshold=0.6,
- pitch_floor=f0_min,
- pitch_ceiling=f0_max,
- )
- .selected_array["frequency"]
- )
- pad_size = (p_len - len(f0) + 1) // 2
- if pad_size > 0 or p_len - len(f0) - pad_size > 0:
- f0 = np.pad(
- f0, [[pad_size, p_len - len(f0) - pad_size]], mode="constant"
- )
- elif f0_method == "harvest":
- f0, t = pyworld.harvest(
- x.astype(np.double),
- fs=self.sr,
- f0_ceil=f0_max,
- f0_floor=f0_min,
- frame_period=10,
- )
- f0 = pyworld.stonemask(x.astype(np.double), f0, t, self.sr)
- f0 = signal.medfilt(f0, 3)
- f0 *= pow(2, f0_up_key / 12)
- # with open("test.txt","w")as f:f.write("\n".join([str(i)for i in f0.tolist()]))
-        tf0 = self.sr // self.window  # number of f0 frames per second
- if inp_f0 is not None:
- delta_t = np.round(
- (inp_f0[:, 0].max() - inp_f0[:, 0].min()) * tf0 + 1
- ).astype("int16")
- replace_f0 = np.interp(
- list(range(delta_t)), inp_f0[:, 0] * 100, inp_f0[:, 1]
- )
- shape = f0[x_pad * tf0 : x_pad * tf0 + len(replace_f0)].shape[0]
- f0[x_pad * tf0 : x_pad * tf0 + len(replace_f0)] = replace_f0[:shape]
- # with open("test_opt.txt","w")as f:f.write("\n".join([str(i)for i in f0.tolist()]))
- f0bak = f0.copy()
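-        # Map f0 (Hz) to the mel scale and quantize it to coarse integer bins in [1, 255].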
- f0_mel = 1127 * np.log(1 + f0 / 700)
- f0_mel[f0_mel > 0] = (f0_mel[f0_mel > 0] - f0_mel_min) * 254 / (
- f0_mel_max - f0_mel_min
- ) + 1
- f0_mel[f0_mel <= 1] = 1
- f0_mel[f0_mel > 255] = 255
-        f0_coarse = np.rint(f0_mel).astype(np.int32)
- return f0_coarse, f0bak # 1-0
-
- def vc(
- self,
- model,
- net_g,
- sid,
- audio0,
- pitch,
- pitchf,
- times,
- index,
- big_npy,
- index_rate,
- ): # ,file_index,file_big_npy
- feats = torch.from_numpy(audio0)
- if self.is_half:
- feats = feats.half()
- else:
- feats = feats.float()
- if feats.dim() == 2: # double channels
- feats = feats.mean(-1)
- assert feats.dim() == 1, feats.dim()
- feats = feats.view(1, -1)
- padding_mask = torch.BoolTensor(feats.shape).to(self.device).fill_(False)
-
- inputs = {
- "source": feats.to(self.device),
- "padding_mask": padding_mask,
- "output_layer": 9, # layer 9
- }
- t0 = ttime()
- with torch.no_grad():
- logits = model.extract_features(**inputs)
- feats = model.final_proj(logits[0])
-
-        if (
-            index is not None
-            and big_npy is not None
-            and index_rate != 0
-        ):
- npy = feats[0].cpu().numpy()
- if self.is_half:
- npy = npy.astype("float32")
- _, I = index.search(npy, 1)
- npy = big_npy[I.squeeze()]
- if self.is_half:
- npy = npy.astype("float16")
- feats = (
- torch.from_numpy(npy).unsqueeze(0).to(self.device) * index_rate
- + (1 - index_rate) * feats
- )
-
- feats = F.interpolate(feats.permute(0, 2, 1), scale_factor=2).permute(0, 2, 1)
- t1 = ttime()
- p_len = audio0.shape[0] // self.window
- if feats.shape[1] < p_len:
- p_len = feats.shape[1]
-        if pitch is not None and pitchf is not None:
- pitch = pitch[:, :p_len]
- pitchf = pitchf[:, :p_len]
- p_len = torch.tensor([p_len], device=self.device).long()
- with torch.no_grad():
-            if pitch is not None and pitchf is not None:
- audio1 = (
- (net_g.infer(feats, p_len, pitch, pitchf, sid)[0][0, 0] * 32768)
- .data.cpu()
- .float()
- .numpy()
- .astype(np.int16)
- )
- else:
- audio1 = (
- (net_g.infer(feats, p_len, sid)[0][0, 0] * 32768)
- .data.cpu()
- .float()
- .numpy()
- .astype(np.int16)
- )
- del feats, p_len, padding_mask
- if torch.cuda.is_available():
- torch.cuda.empty_cache()
- t2 = ttime()
- times[0] += t1 - t0
- times[2] += t2 - t1
- return audio1
-
- def pipeline(
- self,
- model,
- net_g,
- sid,
- audio,
- times,
- f0_up_key,
- f0_method,
- file_index,
- file_big_npy,
- index_rate,
- if_f0,
- f0_file=None,
- ):
- if (
- file_big_npy != ""
- and file_index != ""
-            and os.path.exists(file_big_npy)
-            and os.path.exists(file_index)
- and index_rate != 0
- ):
- try:
- index = faiss.read_index(file_index)
- big_npy = np.load(file_big_npy)
- except:
- traceback.print_exc()
- index = big_npy = None
- else:
- index = big_npy = None
- print("Feature retrieval library doesn't exist or ratio is 0")
- audio = signal.filtfilt(bh, ah, audio)
- audio_pad = np.pad(audio, (self.window // 2, self.window // 2), mode="reflect")
- opt_ts = []
- if audio_pad.shape[0] > self.t_max:
- audio_sum = np.zeros_like(audio)
- for i in range(self.window):
- audio_sum += audio_pad[i : i - self.window]
- for t in range(self.t_center, audio.shape[0], self.t_center):
- opt_ts.append(
- t
- - self.t_query
- + np.where(
- np.abs(audio_sum[t - self.t_query : t + self.t_query])
- == np.abs(audio_sum[t - self.t_query : t + self.t_query]).min()
- )[0][0]
- )
- s = 0
- audio_opt = []
- t = None
- t1 = ttime()
- audio_pad = np.pad(audio, (self.t_pad, self.t_pad), mode="reflect")
- p_len = audio_pad.shape[0] // self.window
- inp_f0 = None
-        if hasattr(f0_file, "name"):
- try:
- with open(f0_file.name, "r") as f:
- lines = f.read().strip("\n").split("\n")
- inp_f0 = []
- for line in lines:
- inp_f0.append([float(i) for i in line.split(",")])
- inp_f0 = np.array(inp_f0, dtype="float32")
- except:
- traceback.print_exc()
- sid = torch.tensor(sid, device=self.device).unsqueeze(0).long()
- pitch, pitchf = None, None
- if if_f0 == 1:
- pitch, pitchf = self.get_f0(audio_pad, p_len, f0_up_key, f0_method, inp_f0)
- pitch = pitch[:p_len]
- pitchf = pitchf[:p_len]
- pitch = torch.tensor(pitch, device=self.device).unsqueeze(0).long()
- pitchf = torch.tensor(pitchf, device=self.device).unsqueeze(0).float()
- t2 = ttime()
- times[1] += t2 - t1
- for t in opt_ts:
- t = t // self.window * self.window
- if if_f0 == 1:
- audio_opt.append(
- self.vc(
- model,
- net_g,
- sid,
- audio_pad[s : t + self.t_pad2 + self.window],
- pitch[:, s // self.window : (t + self.t_pad2) // self.window],
- pitchf[:, s // self.window : (t + self.t_pad2) // self.window],
- times,
- index,
- big_npy,
- index_rate,
- )[self.t_pad_tgt : -self.t_pad_tgt]
- )
- else:
- audio_opt.append(
- self.vc(
- model,
- net_g,
- sid,
- audio_pad[s : t + self.t_pad2 + self.window],
- None,
- None,
- times,
- index,
- big_npy,
- index_rate,
- )[self.t_pad_tgt : -self.t_pad_tgt]
- )
- s = t
- if if_f0 == 1:
- audio_opt.append(
- self.vc(
- model,
- net_g,
- sid,
- audio_pad[t:],
- pitch[:, t // self.window :] if t is not None else pitch,
- pitchf[:, t // self.window :] if t is not None else pitchf,
- times,
- index,
- big_npy,
- index_rate,
- )[self.t_pad_tgt : -self.t_pad_tgt]
- )
- else:
- audio_opt.append(
- self.vc(
- model,
- net_g,
- sid,
- audio_pad[t:],
- None,
- None,
- times,
- index,
- big_npy,
- index_rate,
- )[self.t_pad_tgt : -self.t_pad_tgt]
- )
- audio_opt = np.concatenate(audio_opt)
- del pitch, pitchf, sid
- if torch.cuda.is_available():
- torch.cuda.empty_cache()
- return audio_opt
diff --git a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_internal/models/installation_report.py b/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_internal/models/installation_report.py
deleted file mode 100644
index fef3757f222b67fc1f4de52d260c49d64b6a4e16..0000000000000000000000000000000000000000
--- a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_internal/models/installation_report.py
+++ /dev/null
@@ -1,53 +0,0 @@
-from typing import Any, Dict, Sequence
-
-from pip._vendor.packaging.markers import default_environment
-
-from pip import __version__
-from pip._internal.req.req_install import InstallRequirement
-
-
-class InstallationReport:
- def __init__(self, install_requirements: Sequence[InstallRequirement]):
- self._install_requirements = install_requirements
-
- @classmethod
- def _install_req_to_dict(cls, ireq: InstallRequirement) -> Dict[str, Any]:
- assert ireq.download_info, f"No download_info for {ireq}"
- res = {
- # PEP 610 json for the download URL. download_info.archive_info.hashes may
- # be absent when the requirement was installed from the wheel cache
- # and the cache entry was populated by an older pip version that did not
- # record origin.json.
- "download_info": ireq.download_info.to_dict(),
- # is_direct is true if the requirement was a direct URL reference (which
- # includes editable requirements), and false if the requirement was
- # downloaded from a PEP 503 index or --find-links.
- "is_direct": bool(ireq.original_link),
- # requested is true if the requirement was specified by the user (aka
- # top level requirement), and false if it was installed as a dependency of a
- # requirement. https://peps.python.org/pep-0376/#requested
- "requested": ireq.user_supplied,
- # PEP 566 json encoding for metadata
- # https://www.python.org/dev/peps/pep-0566/#json-compatible-metadata
- "metadata": ireq.get_dist().metadata_dict,
- }
- if ireq.user_supplied and ireq.extras:
- # For top level requirements, the list of requested extras, if any.
- res["requested_extras"] = list(sorted(ireq.extras))
- return res
-
- def to_dict(self) -> Dict[str, Any]:
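-        # Note: this dict is the installation report that `pip install --report <file>`
-        # (available in pip >= 22.2) serializes to JSON.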
- return {
- "version": "1",
- "pip_version": __version__,
- "install": [
- self._install_req_to_dict(ireq) for ireq in self._install_requirements
- ],
- # https://peps.python.org/pep-0508/#environment-markers
- # TODO: currently, the resolver uses the default environment to evaluate
- # environment markers, so that is what we report here. In the future, it
- # should also take into account options such as --python-version or
- # --platform, perhaps under the form of an environment_override field?
- # https://github.com/pypa/pip/issues/11198
- "environment": default_environment(),
- }
diff --git a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/setuptools/_distutils/sysconfig.py b/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/setuptools/_distutils/sysconfig.py
deleted file mode 100644
index 6a979f8c91fce3c8239b36ddb8764dc85dea41f2..0000000000000000000000000000000000000000
--- a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/setuptools/_distutils/sysconfig.py
+++ /dev/null
@@ -1,558 +0,0 @@
-"""Provide access to Python's configuration information. The specific
-configuration variables available depend heavily on the platform and
-configuration. The values may be retrieved using
-get_config_var(name), and the list of variables is available via
-get_config_vars().keys(). Additional convenience functions are also
-available.
-
-Written by: Fred L. Drake, Jr.
-Email:
-"""
-
-import os
-import re
-import sys
-import sysconfig
-import pathlib
-
-from .errors import DistutilsPlatformError
-from . import py39compat
-from ._functools import pass_none
-
-IS_PYPY = '__pypy__' in sys.builtin_module_names
-
-# These are needed in a couple of spots, so just compute them once.
-PREFIX = os.path.normpath(sys.prefix)
-EXEC_PREFIX = os.path.normpath(sys.exec_prefix)
-BASE_PREFIX = os.path.normpath(sys.base_prefix)
-BASE_EXEC_PREFIX = os.path.normpath(sys.base_exec_prefix)
-
-# Path to the base directory of the project. On Windows the binary may
-# live in project/PCbuild/win32 or project/PCbuild/amd64.
-# set for cross builds
-if "_PYTHON_PROJECT_BASE" in os.environ:
- project_base = os.path.abspath(os.environ["_PYTHON_PROJECT_BASE"])
-else:
- if sys.executable:
- project_base = os.path.dirname(os.path.abspath(sys.executable))
- else:
- # sys.executable can be empty if argv[0] has been changed and Python is
- # unable to retrieve the real program name
- project_base = os.getcwd()
-
-
-def _is_python_source_dir(d):
- """
- Return True if the target directory appears to point to an
- un-installed Python.
- """
- modules = pathlib.Path(d).joinpath('Modules')
- return any(modules.joinpath(fn).is_file() for fn in ('Setup', 'Setup.local'))
-
-
-_sys_home = getattr(sys, '_home', None)
-
-
-def _is_parent(dir_a, dir_b):
- """
- Return True if a is a parent of b.
- """
- return os.path.normcase(dir_a).startswith(os.path.normcase(dir_b))
-
-
-if os.name == 'nt':
-
- @pass_none
- def _fix_pcbuild(d):
- # In a venv, sys._home will be inside BASE_PREFIX rather than PREFIX.
- prefixes = PREFIX, BASE_PREFIX
- matched = (
- prefix
- for prefix in prefixes
- if _is_parent(d, os.path.join(prefix, "PCbuild"))
- )
- return next(matched, d)
-
- project_base = _fix_pcbuild(project_base)
- _sys_home = _fix_pcbuild(_sys_home)
-
-
-def _python_build():
- if _sys_home:
- return _is_python_source_dir(_sys_home)
- return _is_python_source_dir(project_base)
-
-
-python_build = _python_build()
-
-
-# Calculate the build qualifier flags if they are defined. Adding the flags
-# to the include and lib directories only makes sense for an installation, not
-# an in-source build.
-build_flags = ''
-try:
- if not python_build:
- build_flags = sys.abiflags
-except AttributeError:
- # It's not a configure-based build, so the sys module doesn't have
- # this attribute, which is fine.
- pass
-
-
-def get_python_version():
- """Return a string containing the major and minor Python version,
- leaving off the patchlevel. Sample return values could be '1.5'
- or '2.2'.
- """
- return '%d.%d' % sys.version_info[:2]
-
-
-def get_python_inc(plat_specific=0, prefix=None):
- """Return the directory containing installed Python header files.
-
- If 'plat_specific' is false (the default), this is the path to the
- non-platform-specific header files, i.e. Python.h and so on;
- otherwise, this is the path to platform-specific header files
- (namely pyconfig.h).
-
- If 'prefix' is supplied, use it instead of sys.base_prefix or
- sys.base_exec_prefix -- i.e., ignore 'plat_specific'.
- """
- default_prefix = BASE_EXEC_PREFIX if plat_specific else BASE_PREFIX
- resolved_prefix = prefix if prefix is not None else default_prefix
- try:
- getter = globals()[f'_get_python_inc_{os.name}']
- except KeyError:
- raise DistutilsPlatformError(
- "I don't know where Python installs its C header files "
- "on platform '%s'" % os.name
- )
- return getter(resolved_prefix, prefix, plat_specific)
-
-
-def _get_python_inc_posix(prefix, spec_prefix, plat_specific):
- if IS_PYPY and sys.version_info < (3, 8):
- return os.path.join(prefix, 'include')
- return (
- _get_python_inc_posix_python(plat_specific)
- or _get_python_inc_from_config(plat_specific, spec_prefix)
- or _get_python_inc_posix_prefix(prefix)
- )
-
-
-def _get_python_inc_posix_python(plat_specific):
- """
- Assume the executable is in the build directory. The
- pyconfig.h file should be in the same directory. Since
- the build directory may not be the source directory,
- use "srcdir" from the makefile to find the "Include"
- directory.
- """
- if not python_build:
- return
- if plat_specific:
- return _sys_home or project_base
- incdir = os.path.join(get_config_var('srcdir'), 'Include')
- return os.path.normpath(incdir)
-
-
-def _get_python_inc_from_config(plat_specific, spec_prefix):
- """
-    If no prefix was explicitly specified, provide the include
-    directory from the config vars. Useful when cross-compiling,
-    since the config vars may come from the host platform Python
-    installation, while the current Python executable is from the
-    build platform installation.
-
- >>> monkeypatch = getfixture('monkeypatch')
- >>> gpifc = _get_python_inc_from_config
- >>> monkeypatch.setitem(gpifc.__globals__, 'get_config_var', str.lower)
- >>> gpifc(False, '/usr/bin/')
- >>> gpifc(False, '')
- >>> gpifc(False, None)
- 'includepy'
- >>> gpifc(True, None)
- 'confincludepy'
- """
- if spec_prefix is None:
- return get_config_var('CONF' * plat_specific + 'INCLUDEPY')
-
-
-def _get_python_inc_posix_prefix(prefix):
- implementation = 'pypy' if IS_PYPY else 'python'
- python_dir = implementation + get_python_version() + build_flags
- return os.path.join(prefix, "include", python_dir)
-
-
-def _get_python_inc_nt(prefix, spec_prefix, plat_specific):
- if python_build:
- # Include both the include and PC dir to ensure we can find
- # pyconfig.h
- return (
- os.path.join(prefix, "include")
- + os.path.pathsep
- + os.path.join(prefix, "PC")
- )
- return os.path.join(prefix, "include")
-
-
-# allow this behavior to be monkey-patched. Ref pypa/distutils#2.
-def _posix_lib(standard_lib, libpython, early_prefix, prefix):
- if standard_lib:
- return libpython
- else:
- return os.path.join(libpython, "site-packages")
-
-
-def get_python_lib(plat_specific=0, standard_lib=0, prefix=None):
- """Return the directory containing the Python library (standard or
- site additions).
-
- If 'plat_specific' is true, return the directory containing
- platform-specific modules, i.e. any module from a non-pure-Python
- module distribution; otherwise, return the platform-shared library
- directory. If 'standard_lib' is true, return the directory
- containing standard Python library modules; otherwise, return the
- directory for site-specific modules.
-
- If 'prefix' is supplied, use it instead of sys.base_prefix or
- sys.base_exec_prefix -- i.e., ignore 'plat_specific'.
- """
-
- if IS_PYPY and sys.version_info < (3, 8):
- # PyPy-specific schema
- if prefix is None:
- prefix = PREFIX
- if standard_lib:
- return os.path.join(prefix, "lib-python", sys.version[0])
- return os.path.join(prefix, 'site-packages')
-
- early_prefix = prefix
-
- if prefix is None:
- if standard_lib:
- prefix = plat_specific and BASE_EXEC_PREFIX or BASE_PREFIX
- else:
- prefix = plat_specific and EXEC_PREFIX or PREFIX
-
- if os.name == "posix":
- if plat_specific or standard_lib:
- # Platform-specific modules (any module from a non-pure-Python
- # module distribution) or standard Python library modules.
- libdir = getattr(sys, "platlibdir", "lib")
- else:
- # Pure Python
- libdir = "lib"
- implementation = 'pypy' if IS_PYPY else 'python'
- libpython = os.path.join(prefix, libdir, implementation + get_python_version())
- return _posix_lib(standard_lib, libpython, early_prefix, prefix)
- elif os.name == "nt":
- if standard_lib:
- return os.path.join(prefix, "Lib")
- else:
- return os.path.join(prefix, "Lib", "site-packages")
- else:
- raise DistutilsPlatformError(
- "I don't know where Python installs its library "
- "on platform '%s'" % os.name
- )
-
-
-def customize_compiler(compiler): # noqa: C901
- """Do any platform-specific customization of a CCompiler instance.
-
- Mainly needed on Unix, so we can plug in the information that
- varies across Unices and is stored in Python's Makefile.
- """
- if compiler.compiler_type == "unix":
- if sys.platform == "darwin":
- # Perform first-time customization of compiler-related
- # config vars on OS X now that we know we need a compiler.
- # This is primarily to support Pythons from binary
- # installers. The kind and paths to build tools on
- # the user system may vary significantly from the system
- # that Python itself was built on. Also the user OS
- # version and build tools may not support the same set
- # of CPU architectures for universal builds.
- global _config_vars
- # Use get_config_var() to ensure _config_vars is initialized.
- if not get_config_var('CUSTOMIZED_OSX_COMPILER'):
- import _osx_support
-
- _osx_support.customize_compiler(_config_vars)
- _config_vars['CUSTOMIZED_OSX_COMPILER'] = 'True'
-
- (
- cc,
- cxx,
- cflags,
- ccshared,
- ldshared,
- shlib_suffix,
- ar,
- ar_flags,
- ) = get_config_vars(
- 'CC',
- 'CXX',
- 'CFLAGS',
- 'CCSHARED',
- 'LDSHARED',
- 'SHLIB_SUFFIX',
- 'AR',
- 'ARFLAGS',
- )
-
- if 'CC' in os.environ:
- newcc = os.environ['CC']
- if 'LDSHARED' not in os.environ and ldshared.startswith(cc):
- # If CC is overridden, use that as the default
- # command for LDSHARED as well
- ldshared = newcc + ldshared[len(cc) :]
- cc = newcc
- if 'CXX' in os.environ:
- cxx = os.environ['CXX']
- if 'LDSHARED' in os.environ:
- ldshared = os.environ['LDSHARED']
- if 'CPP' in os.environ:
- cpp = os.environ['CPP']
- else:
- cpp = cc + " -E" # not always
- if 'LDFLAGS' in os.environ:
- ldshared = ldshared + ' ' + os.environ['LDFLAGS']
- if 'CFLAGS' in os.environ:
- cflags = cflags + ' ' + os.environ['CFLAGS']
- ldshared = ldshared + ' ' + os.environ['CFLAGS']
- if 'CPPFLAGS' in os.environ:
- cpp = cpp + ' ' + os.environ['CPPFLAGS']
- cflags = cflags + ' ' + os.environ['CPPFLAGS']
- ldshared = ldshared + ' ' + os.environ['CPPFLAGS']
- if 'AR' in os.environ:
- ar = os.environ['AR']
- if 'ARFLAGS' in os.environ:
- archiver = ar + ' ' + os.environ['ARFLAGS']
- else:
- archiver = ar + ' ' + ar_flags
-
- cc_cmd = cc + ' ' + cflags
- compiler.set_executables(
- preprocessor=cpp,
- compiler=cc_cmd,
- compiler_so=cc_cmd + ' ' + ccshared,
- compiler_cxx=cxx,
- linker_so=ldshared,
- linker_exe=cc,
- archiver=archiver,
- )
-
- if 'RANLIB' in os.environ and compiler.executables.get('ranlib', None):
- compiler.set_executables(ranlib=os.environ['RANLIB'])
-
- compiler.shared_lib_extension = shlib_suffix
-
-
-def get_config_h_filename():
- """Return full pathname of installed pyconfig.h file."""
- if python_build:
- if os.name == "nt":
- inc_dir = os.path.join(_sys_home or project_base, "PC")
- else:
- inc_dir = _sys_home or project_base
- return os.path.join(inc_dir, 'pyconfig.h')
- else:
- return sysconfig.get_config_h_filename()
-
-
-def get_makefile_filename():
- """Return full pathname of installed Makefile from the Python build."""
- return sysconfig.get_makefile_filename()
-
-
-def parse_config_h(fp, g=None):
- """Parse a config.h-style file.
-
- A dictionary containing name/value pairs is returned. If an
- optional dictionary is passed in as the second argument, it is
- used instead of a new dictionary.
- """
- return sysconfig.parse_config_h(fp, vars=g)
-
-
-# Regexes needed for parsing Makefile (and similar syntaxes,
-# like old-style Setup files).
-_variable_rx = re.compile(r"([a-zA-Z][a-zA-Z0-9_]+)\s*=\s*(.*)")
-_findvar1_rx = re.compile(r"\$\(([A-Za-z][A-Za-z0-9_]*)\)")
-_findvar2_rx = re.compile(r"\${([A-Za-z][A-Za-z0-9_]*)}")
-
-
-def parse_makefile(fn, g=None): # noqa: C901
- """Parse a Makefile-style file.
-
- A dictionary containing name/value pairs is returned. If an
- optional dictionary is passed in as the second argument, it is
- used instead of a new dictionary.
- """
- from distutils.text_file import TextFile
-
- fp = TextFile(
- fn, strip_comments=1, skip_blanks=1, join_lines=1, errors="surrogateescape"
- )
-
- if g is None:
- g = {}
- done = {}
- notdone = {}
-
- while True:
- line = fp.readline()
- if line is None: # eof
- break
- m = _variable_rx.match(line)
- if m:
- n, v = m.group(1, 2)
- v = v.strip()
- # `$$' is a literal `$' in make
- tmpv = v.replace('$$', '')
-
- if "$" in tmpv:
- notdone[n] = v
- else:
- try:
- v = int(v)
- except ValueError:
- # insert literal `$'
- done[n] = v.replace('$$', '$')
- else:
- done[n] = v
-
- # Variables with a 'PY_' prefix in the makefile. These need to
- # be made available without that prefix through sysconfig.
- # Special care is needed to ensure that variable expansion works, even
- # if the expansion uses the name without a prefix.
- renamed_variables = ('CFLAGS', 'LDFLAGS', 'CPPFLAGS')
-
- # do variable interpolation here
- while notdone:
- for name in list(notdone):
- value = notdone[name]
- m = _findvar1_rx.search(value) or _findvar2_rx.search(value)
- if m:
- n = m.group(1)
- found = True
- if n in done:
- item = str(done[n])
- elif n in notdone:
- # get it on a subsequent round
- found = False
- elif n in os.environ:
- # do it like make: fall back to environment
- item = os.environ[n]
-
- elif n in renamed_variables:
- if name.startswith('PY_') and name[3:] in renamed_variables:
- item = ""
-
- elif 'PY_' + n in notdone:
- found = False
-
- else:
- item = str(done['PY_' + n])
- else:
- done[n] = item = ""
- if found:
- after = value[m.end() :]
- value = value[: m.start()] + item + after
- if "$" in after:
- notdone[name] = value
- else:
- try:
- value = int(value)
- except ValueError:
- done[name] = value.strip()
- else:
- done[name] = value
- del notdone[name]
-
- if name.startswith('PY_') and name[3:] in renamed_variables:
-
- name = name[3:]
- if name not in done:
- done[name] = value
- else:
- # bogus variable reference; just drop it since we can't deal
- del notdone[name]
-
- fp.close()
-
- # strip spurious spaces
- for k, v in done.items():
- if isinstance(v, str):
- done[k] = v.strip()
-
- # save the results in the global dictionary
- g.update(done)
- return g
-
-
-def expand_makefile_vars(s, vars):
- """Expand Makefile-style variables -- "${foo}" or "$(foo)" -- in
- 'string' according to 'vars' (a dictionary mapping variable names to
- values). Variables not present in 'vars' are silently expanded to the
- empty string. The variable values in 'vars' should not contain further
- variable expansions; if 'vars' is the output of 'parse_makefile()',
- you're fine. Returns a variable-expanded version of 's'.
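-
-    For example, expand_makefile_vars("$(CC) -shared", {"CC": "gcc"})
-    returns "gcc -shared".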
- """
-
- # This algorithm does multiple expansion, so if vars['foo'] contains
- # "${bar}", it will expand ${foo} to ${bar}, and then expand
- # ${bar}... and so forth. This is fine as long as 'vars' comes from
- # 'parse_makefile()', which takes care of such expansions eagerly,
- # according to make's variable expansion semantics.
-
- while True:
- m = _findvar1_rx.search(s) or _findvar2_rx.search(s)
- if m:
- (beg, end) = m.span()
- s = s[0:beg] + vars.get(m.group(1)) + s[end:]
- else:
- break
- return s
-
-
-_config_vars = None
-
-
-def get_config_vars(*args):
- """With no arguments, return a dictionary of all configuration
- variables relevant for the current platform. Generally this includes
- everything needed to build extensions and install both pure modules and
- extensions. On Unix, this means every variable defined in Python's
- installed Makefile; on Windows it's a much smaller set.
-
- With arguments, return a list of values that result from looking up
- each argument in the configuration variable dictionary.
- """
- global _config_vars
- if _config_vars is None:
- _config_vars = sysconfig.get_config_vars().copy()
- py39compat.add_ext_suffix(_config_vars)
-
- if args:
- vals = []
- for name in args:
- vals.append(_config_vars.get(name))
- return vals
- else:
- return _config_vars
-
-
-def get_config_var(name):
- """Return the value of a single variable using the dictionary
- returned by 'get_config_vars()'. Equivalent to
- get_config_vars().get(name)
- """
- if name == 'SO':
- import warnings
-
- warnings.warn('SO is deprecated, use EXT_SUFFIX', DeprecationWarning, 2)
- return get_config_vars().get(name)
diff --git a/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/detectron2/modeling/test_time_augmentation.py b/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/detectron2/modeling/test_time_augmentation.py
deleted file mode 100644
index 373e6bf00a39c040ff1da49d6dcd39a54a0b69a7..0000000000000000000000000000000000000000
--- a/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/detectron2/modeling/test_time_augmentation.py
+++ /dev/null
@@ -1,307 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-import copy
-import numpy as np
-from contextlib import contextmanager
-from itertools import count
-from typing import List
-import torch
-from fvcore.transforms import HFlipTransform, NoOpTransform
-from torch import nn
-from torch.nn.parallel import DistributedDataParallel
-
-from detectron2.config import configurable
-from detectron2.data.detection_utils import read_image
-from detectron2.data.transforms import (
- RandomFlip,
- ResizeShortestEdge,
- ResizeTransform,
- apply_augmentations,
-)
-from detectron2.structures import Boxes, Instances
-
-from .meta_arch import GeneralizedRCNN
-from .postprocessing import detector_postprocess
-from .roi_heads.fast_rcnn import fast_rcnn_inference_single_image
-
-__all__ = ["DatasetMapperTTA", "GeneralizedRCNNWithTTA"]
-
-
-class DatasetMapperTTA:
- """
- Implement test-time augmentation for detection data.
- It is a callable which takes a dataset dict from a detection dataset,
- and returns a list of dataset dicts where the images
- are augmented from the input image by the transformations defined in the config.
- This is used for test-time augmentation.
- """
-
- @configurable
- def __init__(self, min_sizes: List[int], max_size: int, flip: bool):
- """
- Args:
- min_sizes: list of short-edge size to resize the image to
- max_size: maximum height or width of resized images
- flip: whether to apply flipping augmentation
- """
- self.min_sizes = min_sizes
- self.max_size = max_size
- self.flip = flip
-
- @classmethod
- def from_config(cls, cfg):
- return {
- "min_sizes": cfg.TEST.AUG.MIN_SIZES,
- "max_size": cfg.TEST.AUG.MAX_SIZE,
- "flip": cfg.TEST.AUG.FLIP,
- }
-
- def __call__(self, dataset_dict):
- """
- Args:
-            dataset_dict (dict): a dict in standard model input format. See tutorials for details.
-
- Returns:
- list[dict]:
- a list of dicts, which contain augmented version of the input image.
- The total number of dicts is ``len(min_sizes) * (2 if flip else 1)``.
- Each dict has field "transforms" which is a TransformList,
- containing the transforms that are used to generate this image.
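-                For example, with ``min_sizes=[400, 500, 600]`` and ``flip=True``,
-                6 augmented dicts are returned per input.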
- """
- numpy_image = dataset_dict["image"].permute(1, 2, 0).numpy()
- shape = numpy_image.shape
- orig_shape = (dataset_dict["height"], dataset_dict["width"])
- if shape[:2] != orig_shape:
- # It transforms the "original" image in the dataset to the input image
- pre_tfm = ResizeTransform(orig_shape[0], orig_shape[1], shape[0], shape[1])
- else:
- pre_tfm = NoOpTransform()
-
- # Create all combinations of augmentations to use
- aug_candidates = [] # each element is a list[Augmentation]
- for min_size in self.min_sizes:
- resize = ResizeShortestEdge(min_size, self.max_size)
- aug_candidates.append([resize]) # resize only
- if self.flip:
- flip = RandomFlip(prob=1.0)
- aug_candidates.append([resize, flip]) # resize + flip
-
- # Apply all the augmentations
- ret = []
- for aug in aug_candidates:
- new_image, tfms = apply_augmentations(aug, np.copy(numpy_image))
- torch_image = torch.from_numpy(np.ascontiguousarray(new_image.transpose(2, 0, 1)))
-
- dic = copy.deepcopy(dataset_dict)
- dic["transforms"] = pre_tfm + tfms
- dic["image"] = torch_image
- ret.append(dic)
- return ret
-
-
-class GeneralizedRCNNWithTTA(nn.Module):
- """
- A GeneralizedRCNN with test-time augmentation enabled.
- Its :meth:`__call__` method has the same interface as :meth:`GeneralizedRCNN.forward`.
- """
-
- def __init__(self, cfg, model, tta_mapper=None, batch_size=3):
- """
- Args:
- cfg (CfgNode):
- model (GeneralizedRCNN): a GeneralizedRCNN to apply TTA on.
- tta_mapper (callable): takes a dataset dict and returns a list of
- augmented versions of the dataset dict. Defaults to
- `DatasetMapperTTA(cfg)`.
- batch_size (int): batch the augmented images into this batch size for inference.
- """
- super().__init__()
- if isinstance(model, DistributedDataParallel):
- model = model.module
- assert isinstance(
- model, GeneralizedRCNN
- ), "TTA is only supported on GeneralizedRCNN. Got a model of type {}".format(type(model))
- self.cfg = cfg.clone()
- assert not self.cfg.MODEL.KEYPOINT_ON, "TTA for keypoint is not supported yet"
- assert (
- not self.cfg.MODEL.LOAD_PROPOSALS
- ), "TTA for pre-computed proposals is not supported yet"
-
- self.model = model
-
- if tta_mapper is None:
- tta_mapper = DatasetMapperTTA(cfg)
- self.tta_mapper = tta_mapper
- self.batch_size = batch_size
-
- @contextmanager
- def _turn_off_roi_heads(self, attrs):
- """
- Open a context where some heads in `model.roi_heads` are temporarily turned off.
- Args:
-            attrs (list[str]): the attributes in `model.roi_heads` which can be used
-                to turn off specific heads, e.g., "mask_on", "keypoint_on".
- """
- roi_heads = self.model.roi_heads
- old = {}
- for attr in attrs:
- try:
- old[attr] = getattr(roi_heads, attr)
- except AttributeError:
- # The head may not be implemented in certain ROIHeads
- pass
-
- if len(old.keys()) == 0:
- yield
- else:
- for attr in old.keys():
- setattr(roi_heads, attr, False)
- yield
- for attr in old.keys():
- setattr(roi_heads, attr, old[attr])
-
- def _batch_inference(self, batched_inputs, detected_instances=None):
- """
- Execute inference on a list of inputs,
- using batch size = self.batch_size, instead of the length of the list.
-
- Inputs & outputs have the same format as :meth:`GeneralizedRCNN.inference`
- """
- if detected_instances is None:
- detected_instances = [None] * len(batched_inputs)
-
- outputs = []
- inputs, instances = [], []
- for idx, input, instance in zip(count(), batched_inputs, detected_instances):
- inputs.append(input)
- instances.append(instance)
- if len(inputs) == self.batch_size or idx == len(batched_inputs) - 1:
- outputs.extend(
- self.model.inference(
- inputs,
- instances if instances[0] is not None else None,
- do_postprocess=False,
- )
- )
- inputs, instances = [], []
- return outputs
-
- def __call__(self, batched_inputs):
- """
- Same input/output format as :meth:`GeneralizedRCNN.forward`
- """
-
- def _maybe_read_image(dataset_dict):
- ret = copy.copy(dataset_dict)
- if "image" not in ret:
- image = read_image(ret.pop("file_name"), self.model.input_format)
- image = torch.from_numpy(np.ascontiguousarray(image.transpose(2, 0, 1))) # CHW
- ret["image"] = image
- if "height" not in ret and "width" not in ret:
- ret["height"] = image.shape[1]
- ret["width"] = image.shape[2]
- return ret
-
- return [self._inference_one_image(_maybe_read_image(x)) for x in batched_inputs]
-
- def _inference_one_image(self, input):
- """
- Args:
- input (dict): one dataset dict with "image" field being a CHW tensor
-
- Returns:
- dict: one output dict
- """
- orig_shape = (input["height"], input["width"])
- augmented_inputs, tfms = self._get_augmented_inputs(input)
- # Detect boxes from all augmented versions
- with self._turn_off_roi_heads(["mask_on", "keypoint_on"]):
- # temporarily disable roi heads
- all_boxes, all_scores, all_classes = self._get_augmented_boxes(augmented_inputs, tfms)
- # merge all detected boxes to obtain final predictions for boxes
- merged_instances = self._merge_detections(all_boxes, all_scores, all_classes, orig_shape)
-
- if self.cfg.MODEL.MASK_ON:
- # Use the detected boxes to obtain masks
- augmented_instances = self._rescale_detected_boxes(
- augmented_inputs, merged_instances, tfms
- )
- # run forward on the detected boxes
- outputs = self._batch_inference(augmented_inputs, augmented_instances)
-            # Delete variables that are no longer needed to avoid running out of memory
- del augmented_inputs, augmented_instances
- # average the predictions
- merged_instances.pred_masks = self._reduce_pred_masks(outputs, tfms)
- merged_instances = detector_postprocess(merged_instances, *orig_shape)
- return {"instances": merged_instances}
- else:
- return {"instances": merged_instances}
-
- def _get_augmented_inputs(self, input):
- augmented_inputs = self.tta_mapper(input)
- tfms = [x.pop("transforms") for x in augmented_inputs]
- return augmented_inputs, tfms
-
- def _get_augmented_boxes(self, augmented_inputs, tfms):
- # 1: forward with all augmented images
- outputs = self._batch_inference(augmented_inputs)
- # 2: union the results
- all_boxes = []
- all_scores = []
- all_classes = []
- for output, tfm in zip(outputs, tfms):
- # Need to inverse the transforms on boxes, to obtain results on original image
- pred_boxes = output.pred_boxes.tensor
- original_pred_boxes = tfm.inverse().apply_box(pred_boxes.cpu().numpy())
- all_boxes.append(torch.from_numpy(original_pred_boxes).to(pred_boxes.device))
-
- all_scores.extend(output.scores)
- all_classes.extend(output.pred_classes)
- all_boxes = torch.cat(all_boxes, dim=0)
- return all_boxes, all_scores, all_classes
-
- def _merge_detections(self, all_boxes, all_scores, all_classes, shape_hw):
- # select from the union of all results
- num_boxes = len(all_boxes)
- num_classes = self.cfg.MODEL.ROI_HEADS.NUM_CLASSES
- # +1 because fast_rcnn_inference expects background scores as well
- all_scores_2d = torch.zeros(num_boxes, num_classes + 1, device=all_boxes.device)
- for idx, cls, score in zip(count(), all_classes, all_scores):
- all_scores_2d[idx, cls] = score
-
- merged_instances, _ = fast_rcnn_inference_single_image(
- all_boxes,
- all_scores_2d,
- shape_hw,
- 1e-8,
- self.cfg.MODEL.ROI_HEADS.NMS_THRESH_TEST,
- self.cfg.TEST.DETECTIONS_PER_IMAGE,
- )
-
- return merged_instances
-
- def _rescale_detected_boxes(self, augmented_inputs, merged_instances, tfms):
- augmented_instances = []
- for input, tfm in zip(augmented_inputs, tfms):
- # Transform the target box to the augmented image's coordinate space
- pred_boxes = merged_instances.pred_boxes.tensor.cpu().numpy()
- pred_boxes = torch.from_numpy(tfm.apply_box(pred_boxes))
-
- aug_instances = Instances(
- image_size=input["image"].shape[1:3],
- pred_boxes=Boxes(pred_boxes),
- pred_classes=merged_instances.pred_classes,
- scores=merged_instances.scores,
- )
- augmented_instances.append(aug_instances)
- return augmented_instances
-
- def _reduce_pred_masks(self, outputs, tfms):
- # Should apply inverse transforms on masks.
- # We assume only resize & flip are used. pred_masks is a scale-invariant
- # representation, so we handle flip specially
- for output, tfm in zip(outputs, tfms):
- if any(isinstance(t, HFlipTransform) for t in tfm.transforms):
- output.pred_masks = output.pred_masks.flip(dims=[3])
- all_pred_masks = torch.stack([o.pred_masks for o in outputs], dim=0)
- avg_pred_masks = torch.mean(all_pred_masks, dim=0)
- return avg_pred_masks
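The deleted module implements test-time augmentation for detection: `DatasetMapperTTA` produces one resized copy per short-edge size, optionally paired with a horizontal flip, and `GeneralizedRCNNWithTTA` runs the wrapped model over those copies and merges the boxes. A small sketch of the variant count implied by the mapper's docstring; the wrapper usage in the trailing comments is illustrative:

```python
# Sketch of how many augmented copies DatasetMapperTTA produces per image:
# one per short-edge size, doubled when horizontal flipping is enabled.
def count_tta_variants(min_sizes, flip):
    return len(min_sizes) * (2 if flip else 1)

assert count_tta_variants([400, 500, 600], flip=True) == 6
assert count_tta_variants([800], flip=False) == 1

# Illustrative wrapper usage (model is an already-trained GeneralizedRCNN):
# tta_model = GeneralizedRCNNWithTTA(cfg, model)
# outputs = tta_model(batched_inputs)   # same output format as model.forward
```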
diff --git a/spaces/Banbri/zcvzcv/src/app/interface/page/index.tsx b/spaces/Banbri/zcvzcv/src/app/interface/page/index.tsx
deleted file mode 100644
index 545ecb4af98a3f4bac9b964f1d4bae32bd62294a..0000000000000000000000000000000000000000
--- a/spaces/Banbri/zcvzcv/src/app/interface/page/index.tsx
+++ /dev/null
@@ -1,55 +0,0 @@
-import { allLayoutAspectRatios, allLayouts } from "@/app/layouts"
-import { useStore } from "@/app/store"
-import { cn } from "@/lib/utils"
-import { useEffect, useRef } from "react"
-
-export function Page({ page }: { page: number }) {
- const zoomLevel = useStore(state => state.zoomLevel)
- const layouts = useStore(state => state.layouts)
- // const prompt = useStore(state => state.prompt)
-
- const LayoutElement = (allLayouts as any)[layouts[page]]
- const aspectRatio = ((allLayoutAspectRatios as any)[layouts[page]] as string) || "aspect-[250/297]"
- /*
- const [canLoad, setCanLoad] = useState(false)
- useEffect(() => {
- if (prompt?.length) {
- setCanLoad(false)
- setTimeout(() => {
- setCanLoad(true)
- }, page * 4000)
- }
- }, [prompt])
- */
-
- const setPage = useStore(state => state.setPage)
- const pageRef = useRef(null)
-
- useEffect(() => {
- const element = pageRef.current
- if (!element) { return }
- setPage(element)
- }, [pageRef.current])
-
- return (
-
100 ? `100`}`
- }}
- >
-
-
- )
-}
\ No newline at end of file
diff --git a/spaces/Benson/text-generation/Examples/Descargar Apk X Aire Behringer.md b/spaces/Benson/text-generation/Examples/Descargar Apk X Aire Behringer.md
deleted file mode 100644
index 3f3545766742f899fbddc5b5af47bd1e516ee6ec..0000000000000000000000000000000000000000
--- a/spaces/Benson/text-generation/Examples/Descargar Apk X Aire Behringer.md
+++ /dev/null
@@ -1,61 +0,0 @@
-
-
How to download the X AIR Behringer APK for Android devices
-
If you are a musician, sound engineer, or live performer who uses a BEHRINGER X AIR digital mixer, you may want to download the X AIR Behringer APK for your Android device. This app lets you control all of your mixer's mixing, processing, and effects functions from your tablet or smartphone. In this article, I will show you how to download, install, and use the app, along with some tips and tricks to get the most out of it.
-
Benefits of the X AIR Behringer APK
-
The X AIR Behringer APK is a free app that offers complete control of the X18, XR18, XR16, and XR12 mixers. The user interface can be configured for simplified access or expert-level editing (S/E), for mixing 18 input channels to 12 buses. Control is also provided for the four internal stereo effects processors, all of which feature the acclaimed BEHRINGER X32 audio processing engine.
The app gives you the mobility to go wherever you need to get the most out of your system, letting you adjust the house mix from any seat or fine-tune monitor mixes from the stage. Since all BEHRINGER X AIR mixers feature internal access points, setting up the app could not be simpler: just select the X AIR network and connect your Android device to it. When you open the app, your X AIR mixer appears as a controllable device, and you can even lock your Android device to that specific X AIR mixer. You can also run the app in demo mode without connecting to a mixer.
-
No additional hardware is required, so the app is the ideal solution for hassle-free remote mixing. Whether you use it for live shows, studio recordings, rehearsals, podcasts, or webinars, the X AIR Behringer APK can help you achieve professional sound quality easily and conveniently.
-
Requirements for the X AIR Behringer APK
-
To use the X AIR Behringer APK, you need the following:
-
-
-
A BEHRINGER X AIR digital mixer (X18, XR18, XR16, or XR12) with firmware version 1.15 or higher
-
A Wi-Fi network connecting your device and your mixer
-
An Internet connection to download the app
-
-
Steps to download the X AIR Behringer APK
-
Here are the steps to download the X AIR Behringer APK for your Android device:
-
Step 1: Find the official download link for the X AIR Behringer APK
-
The app is not available on the Google Play Store, so you need to find the official download link on the BEHRINGER website. You can scan the QR code on the product page or go to this URL: [https://www.behringer.com/behringer/product?modelCode=P0BI8]
-
Step 2: Enable unknown sources in your device settings
-
Since you are downloading the app from a third-party source, you need to enable unknown sources in your device settings. This allows you to install apps that do not come from the Google Play Store. To do so, go to Settings > Security > Unknown Sources and turn it on. You may see a warning saying that installing from unknown sources can harm your device, but you can ignore it as long as you trust the source of the app.
-
Step 3: Download and install the APK file
-
Once unknown sources are enabled, you can download the APK file from the link you found in step 1. The file is about 5.6 MB and should take a few seconds to download, depending on your Internet speed. When the download finishes, open the file and follow the instructions to install the app on your device. You may need to grant the app some permissions, such as access to your Wi-Fi network and storage.
-
Step 4: Connect your device to your X AIR mixer over Wi-Fi
-
-
Step 5: Launch the app and enjoy its features
-
Now that the app is installed and connected, you can launch it and start controlling your mixer remotely. You will see a list of available devices on the app's home screen. Tap the one that matches your mixer's network name and model number. You will then see a confirmation message saying "Connected". You can now access all of your mixer's mixing, processing, and effects functions from your device. You can also switch between S/E mode, the RTA overlay, single bus-send mode, the Auto-Mixing feature, and internal snapshots from the app menu.
-
-
Tips and tricks for using the X AIR Behringer APK
-
To get the most out of the X AIR Behringer APK, here are some tips and tricks you can try:
-
Tip 1: Use S/E mode to switch between simplified and expert-level editing
-
The app has two operating modes: simplified (S) and expert (E). S mode provides a streamlined interface that lets you adjust only the most essential parameters of each channel, such as gain, mute, solo, pan, EQ, dynamics, and send levels. E mode provides a full-featured interface that gives you access to every parameter of each channel, such as preamp, gate, compressor, limiter, and delay settings. You can switch between S and E modes by tapping the S/E button in the top-left corner of the app.
-
Tip 2: Use the RTA overlay to fine-tune EQ settings
-
-
Tip 3: Use single bus-send mode for personal monitoring
-
The app has a single bus-send mode that lets you control just one bus-send level per channel at a time. This is useful for personal monitoring, where each musician or performer wants to adjust their own monitor mix without affecting the others. To use this mode, tap the single bus-send button in the top-right corner of the app and select a bus from the list. You will then see a blue fader representing that bus's send level on each channel. Drag it up or down to adjust the send level. You can also tap the mute or solo buttons to mute or solo the bus.
-
Tip 4: Use the Auto-Mixing feature for conferences or panel discussions
-
The app has an Auto-Mixing feature that automatically adjusts the gain of multiple microphones in real time to reduce background noise and feedback. This is useful for conferences or panel discussions where several speakers are talking at the same time. To use this feature, tap the Auto-Mixing button in the top-right corner of the app and select an input channel from the list. You will then see a green indicator showing that channel's Auto-Mixing status. You can also adjust the threshold, weight, and target parameters of the Auto-Mixing algorithm.
-
Tip 5: Use internal snapshots to save and recall settings
-
The app has an internal snapshot feature that lets you save and recall your mixer settings at any time. This is useful for switching between different scenes or presets quickly and easily. To use this feature, tap the snapshot button in the top-right corner of the app and select a snapshot slot from the list. You can then name, save, load, or delete your snapshot. You can also use the lock function to prevent accidental changes to a snapshot.
-
Conclusion
-
-
If you have any questions or comments about the X AIR Behringer APK, feel free to contact BEHRINGER's customer support team or visit their website for more information. You can also check out their YouTube channel for tutorials and demos of their products.
-
Thank you for reading this article; I hope you found it useful. If you did, please share it with friends and colleagues who might be interested in the X AIR Behringer APK. And don't forget to download the app and try it for yourself!
-
Frequently asked questions
-
Here are some common questions and answers about the X AIR Behringer APK:
-
-
Is the X AIR Behringer APK compatible with other BEHRINGER products?
-
The X AIR Behringer APK is designed specifically for the X18, XR18, XR16, and XR12 mixers. It is not compatible with other BEHRINGER products, such as the X32 or X AIR EDIT.
-
Can I use the X AIR Behringer APK on multiple devices at the same time?
-
Yes, you can use the X AIR Behringer APK on several devices at the same time, as long as they are connected to the same Wi-Fi network as your mixer. However, be careful not to make conflicting changes to the mixer settings from different devices, as this can cause unexpected results.
-
Can I use the X AIR Behringer APK offline?
-
No, you cannot use the X AIR Behringer APK offline. You need an Internet connection to download the app and a Wi-Fi connection to connect to your mixer.
-
How do I update the X AIR Behringer APK?
-
To update the X AIR Behringer APK, you need to check the BEHRINGER website for new versions and download them manually. The app does not have an automatic update feature.
-
How do I uninstall the X AIR Behringer APK?
-
To uninstall the X AIR Behringer APK, go to Settings > Apps on your device, find the app in the list, then tap it and select Uninstall.
- 64aa2da5cf
-
-
\ No newline at end of file
diff --git a/spaces/Benson/text-generation/Examples/Descargar Azulejos De Piano 2 Apk Mod.md b/spaces/Benson/text-generation/Examples/Descargar Azulejos De Piano 2 Apk Mod.md
deleted file mode 100644
index 7962394020c1ee673035918e15831c8d87570abb..0000000000000000000000000000000000000000
--- a/spaces/Benson/text-generation/Examples/Descargar Azulejos De Piano 2 Apk Mod.md
+++ /dev/null
@@ -1,50 +0,0 @@
-
-
Download Piano Tiles 2 APK Mod: A fun and challenging music game
-
Do you love music and rhythm games? Do you want to test your reflexes and coordination skills? If so, you should try Piano Tiles 2, one of the most popular and addictive music games in the world. And if you want to enjoy the game with more features and benefits, you should download Piano Tiles 2 APK Mod, a modified version of the game that gives you unlimited access to all songs, coins, diamonds, and more. In this article, we will tell you everything you need to know about Piano Tiles 2 and how to download and install Piano Tiles 2 APK Mod on your Android device.
-
What is Piano Tiles 2?
-
Piano Tiles 2 is a sequel to the original Piano Tiles game, also known as Don't Tap the White Tile. It is a simple but challenging game in which you have to tap the black tiles that appear on the screen in sync with the music. The game has hundreds of songs from different genres, such as classical, pop, rock, jazz, and more. You can also compete with other players around the world and see who can score highest on the leaderboard.
Piano Tiles 2 has many features that make it a fun and exciting game to play. Some of them are:
-
-
High-quality sound and graphics: The game has stunning graphics and smooth animations that create a realistic piano-playing experience. The sound quality is also excellent, with clear, crisp notes that match the songs perfectly.
-
Various songs and levels: The game has a large collection of songs from different genres and eras, such as Mozart, Beethoven, Chopin, Taylor Swift, Ed Sheeran, Bruno Mars, and more. You can choose from different difficulty levels, ranging from easy to master.
-
-
Achievements and rewards: The game has many achievements you can unlock by completing certain tasks or reaching certain milestones. You can also earn coins and diamonds by playing or by watching ads. You can use these coins to buy new songs, skins, boosters, and more.
-
-
How to play Piano Tiles 2
-
The gameplay of Piano Tiles 2 is very simple and intuitive. All you have to do is tap the black tiles that appear on the screen in sync with the music. You have to avoid tapping the white tiles or missing the black tiles, otherwise you lose the game. The faster you tap, the higher your score. You can also use boosters such as double coins, auto-play, or revive to help you in difficult situations.
-
Why download Piano Tiles 2 APK Mod?
-
Piano Tiles 2 APK Mod is a modified version of the original game that gives you unlimited access to all of the game's features and benefits. Some of the advantages of downloading Piano Tiles 2 APK Mod are:
-
-
All songs unlocked: You can play any song you want without having to spend coins or diamonds or wait for them to be unlocked.
-
Unlimited coins and diamonds: and are familiar with, since this will help you tap the tiles more accurately and enjoy the music more.
-
Use boosters wisely
-
The game has several boosters that can help you in different ways. Some of them are:
-
-
Double coins: This booster doubles the number of coins you earn in a game. You can use it to buy more songs, skins, or other boosters.
-
Auto-play: This booster makes the game play itself for a few seconds. You can use it to rest your fingers or get past difficult tiles.
-
Revive: This booster lets you continue the game after making a mistake. You can use it to save your progress or improve your score.
-
-
-
Practice and improve your skills
-
The best way to get better at Piano Tiles 2 is to practice and improve your skills. You should play the game regularly and try different songs and levels. You should also pay attention to the rhythm and timing of the tiles, as well as the speed and direction of the sliding tiles. Try to tap the tiles with both hands, as this will improve your efficiency and coordination. The more you play, the more you will learn and master the game.
-
-
Conclusion
-
Piano Tiles 2 is a fun and challenging music game that will test your reflexes and coordination skills. It has hundreds of songs from different genres and difficulty levels, as well as high-quality sound and graphics. You can also compete with other players around the world and see who can play faster and better. If you want to enjoy the game with more features and benefits, you should download Piano Tiles 2 APK Mod, a modified version of the game that gives you unlimited access to all songs, coins, diamonds, and more. You can download and install Piano Tiles 2 APK Mod on your Android device by following the simple steps provided in this article. We hope you have fun playing Piano Tiles 2!
-
Frequently asked questions
-
Here are some frequently asked questions about Piano Tiles 2 and Piano Tiles 2 APK Mod:
-
-
Is Piano Tiles 2 free to play?
-Yes, Piano Tiles 2 is free, but it has some in-app purchases that require real money. You can also watch ads to earn coins or diamonds.
-
Is Piano Tiles 2 APK Mod safe to use?
-Yes, Piano Tiles 2 APK Mod is safe to use as long as you download it from a trusted source. We have tested it on our devices and found no viruses or malware.
-
Can I play Piano Tiles 2 offline?
-
-
Can I update Piano Tiles 2 APK Mod?
-Yes, you can update Piano Tiles 2 APK Mod, but you may lose some of the modded features if you do. We recommend checking for updates from the same source where you downloaded the APK file.
-
Can I sync my progress across devices?
-Yes, you can sync your progress across devices by logging in with your Facebook account. However, this may not work with Piano Tiles 2 APK Mod, as it can conflict with the original game data.
- 64aa2da5cf
-
-
\ No newline at end of file
diff --git a/spaces/BetterAPI/BetterChat_new/src/app.html b/spaces/BetterAPI/BetterChat_new/src/app.html
deleted file mode 100644
index cbee75a1325edc1e113cb99a35bf491d216bb8a1..0000000000000000000000000000000000000000
--- a/spaces/BetterAPI/BetterChat_new/src/app.html
+++ /dev/null
@@ -1,45 +0,0 @@
-
-
-
-
-
-
- HuggingChat
-
- %sveltekit.head%
-
-
-
-
-In this repository, we release code for TensorMask in Detectron2.
-TensorMask is a dense sliding-window instance segmentation framework that, for the first time, achieves results close to the well-developed Mask R-CNN framework -- both qualitatively and quantitatively. It establishes a conceptually complementary direction for object instance segmentation research.
-
-## Installation
-First install Detectron2 following [INSTALL.md](https://github.com/facebookresearch/detectron2/blob/master/INSTALL.md). Then compile the TensorMask-specific op (`swap_align2nat`):
-```bash
-cd /path/to/detectron2/projects/TensorMask
-python setup.py build develop
-```
-
-## Training
-
-To train a model, run:
-```bash
-python /path/to/detectron2/projects/TensorMask/train_net.py --config-file <config.yaml>
-```
-
-For example, to launch TensorMask BiPyramid training (1x schedule) with ResNet-50 backbone on 8 GPUs,
-one should execute:
-```bash
-python /path/to/detectron2/projects/TensorMask/train_net.py --config-file configs/tensormask_R_50_FPN_1x.yaml --num-gpus 8
-```
-
-## Evaluation
-
-Model evaluation can be done similarly (6x schedule with scale augmentation):
-```bash
-python /path/to/detectron2/projects/TensorMask/train_net.py --config-file configs/tensormask_R_50_FPN_6x.yaml --eval-only MODEL.WEIGHTS /path/to/model_checkpoint
-```
-
-## Pretrained Models
-
-| Backbone | lr sched | AP box | AP mask | download |
-| -------- | -------- | -- | --- | -------- |
-| R50 | 1x | 37.6 | 32.4 | model \| metrics |
-| R50 | 6x | 41.4 | 35.8 | model \| metrics |
-
-
-## Citing TensorMask
-
-If you use TensorMask, please use the following BibTeX entry.
-
-```
-@InProceedings{chen2019tensormask,
- title={TensorMask: A Foundation for Dense Object Segmentation},
- author={Chen, Xinlei and Girshick, Ross and He, Kaiming and Doll{\'a}r, Piotr},
- booktitle={The International Conference on Computer Vision (ICCV)},
- year={2019}
-}
-```
-
diff --git a/spaces/CVPR/Dual-Key_Backdoor_Attacks/openvqa/docs/_source/conf.py b/spaces/CVPR/Dual-Key_Backdoor_Attacks/openvqa/docs/_source/conf.py
deleted file mode 100644
index 4ec8c847cf1e74fc312952617bb7c42c6d757b7e..0000000000000000000000000000000000000000
--- a/spaces/CVPR/Dual-Key_Backdoor_Attacks/openvqa/docs/_source/conf.py
+++ /dev/null
@@ -1,108 +0,0 @@
-#!/usr/bin/env python3
-# -*- coding: utf-8 -*-
-#
-# Configuration file for the Sphinx documentation builder.
-#
-# This file only contains a selection of the most common options. For a full
-# list see the documentation:
-# http://www.sphinx-doc.org/en/master/config
-
-# -- Path setup --------------------------------------------------------------
-
-# If extensions (or modules to document with autodoc) are in another directory,
-# add these directories to sys.path here. If the directory is relative to the
-# documentation root, use os.path.abspath to make it absolute, like shown here.
-#
-import os
-import sys
-sys.path.insert(0, os.path.abspath('../..'))
-
-RELEASE = os.environ.get('RELEASE', False)
-
-# -- Project information -----------------------------------------------------
-
-project = u'OpenVQA'
-copyright = u'2019, MILVLG'
-author = u'MILVLG'
-
-# The version info for the project you're documenting, acts as replacement for
-# |version| and |release|, also used in various other places throughout the
-# built documents.
-#
-# The short X.Y version.
-# version = '1.0'
-# The full version, including alpha/beta/rc tags.
-# release = '0.0'
-
-
-# -- General configuration ---------------------------------------------------
-
-master_doc = 'index'
-
-# The suffix(es) of source filenames.
-# You can specify multiple suffix as a list of string:
-#
-source_suffix = {
- '.rst': 'restructuredtext',
- '.txt': 'markdown',
- '.md': 'markdown',
-}
-
-# Add any Sphinx extension module names here, as strings. They can be
-# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
-# ones.
-extensions = [
- 'sphinx.ext.autodoc',
- 'sphinx.ext.autosummary',
- 'sphinx.ext.doctest',
- 'sphinx.ext.intersphinx',
- 'sphinx.ext.todo',
- 'sphinx.ext.coverage',
- 'sphinx.ext.napoleon',
- 'sphinx.ext.viewcode',
- 'sphinx_markdown_tables',
- 'recommonmark',
-]
-
-# Add any paths that contain templates here, relative to this directory.
-templates_path = ['_templates']
-
-# List of patterns, relative to source directory, that match files and
-# directories to ignore when looking for source files.
-# This pattern also affects html_static_path and html_extra_path.
-exclude_patterns = ['_build', 'Thumbs.db', '.DS_Store']
-
-
-# -- Options for HTML output -------------------------------------------------
-
-# The theme to use for HTML and HTML Help pages. See the documentation for
-# a list of builtin themes.
-#
-html_theme = 'sphinx_rtd_theme'
-
-# Add any paths that contain custom static files (such as style sheets) here,
-# relative to this directory. They are copied after the builtin static files,
-# so a file named "default.css" will overwrite the builtin "default.css".
-html_static_path = ['_static']
-
-# Add custom css overrides
-def setup(app):
- app.add_stylesheet( "custom.css" )
-
-# Add any paths that contain templates here, relative to this directory.
-templates_path = ['_templates']
-if RELEASE:
- templates_path = ['_templates-stable']
-
-# The name of the Pygments (syntax highlighting) style to use.
-pygments_style = 'sphinx'
-
-# Disable docstring inheritance
-autodoc_inherit_docstrings = False
-
-
-# -- Other Options ------------------------------------------------------------
-
-# intersphinx_mapping = {
-# 'python': ('https://docs.python.org/3', None)
-# }
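The deleted conf.py switches to a stable template set when a RELEASE environment variable is present. A hedged sketch of driving an HTML build programmatically with that switch set; the source and output paths are illustrative, and `sphinx-build -b html docs/_source docs/_build/html` is the CLI equivalent:

```python
# Hedged sketch: build the docs with the RELEASE switch honoured by conf.py above.
import os
from sphinx.cmd.build import build_main

os.environ["RELEASE"] = "1"  # conf.py then selects the '_templates-stable' templates
exit_code = build_main(["-b", "html", "docs/_source", "docs/_build/html"])
print("build finished with exit code", exit_code)
```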
diff --git a/spaces/CVPR/LIVE/pybind11/tests/cross_module_gil_utils.cpp b/spaces/CVPR/LIVE/pybind11/tests/cross_module_gil_utils.cpp
deleted file mode 100644
index 07db9f6e48a10dfd2d4370c3daff6e793d6675d2..0000000000000000000000000000000000000000
--- a/spaces/CVPR/LIVE/pybind11/tests/cross_module_gil_utils.cpp
+++ /dev/null
@@ -1,73 +0,0 @@
-/*
- tests/cross_module_gil_utils.cpp -- tools for acquiring GIL from a different module
-
- Copyright (c) 2019 Google LLC
-
- All rights reserved. Use of this source code is governed by a
- BSD-style license that can be found in the LICENSE file.
-*/
-#include <pybind11/pybind11.h>
-#include <cstdint>
-
-// This file mimics a DSO that makes pybind11 calls but does not define a
-// PYBIND11_MODULE. The purpose is to test that such a DSO can create a
-// py::gil_scoped_acquire when the running thread is in a GIL-released state.
-//
-// Note that we define a Python module here for convenience, but in general
-// this need not be the case. The typical scenario would be a DSO that implements
-// shared logic used internally by multiple pybind11 modules.
-
-namespace {
-
-namespace py = pybind11;
-void gil_acquire() { py::gil_scoped_acquire gil; }
-
-constexpr char kModuleName[] = "cross_module_gil_utils";
-
-#if PY_MAJOR_VERSION >= 3
-struct PyModuleDef moduledef = {
- PyModuleDef_HEAD_INIT,
- kModuleName,
- NULL,
- 0,
- NULL,
- NULL,
- NULL,
- NULL,
- NULL
-};
-#else
-PyMethodDef module_methods[] = {
- {NULL, NULL, 0, NULL}
-};
-#endif
-
-} // namespace
-
-extern "C" PYBIND11_EXPORT
-#if PY_MAJOR_VERSION >= 3
-PyObject* PyInit_cross_module_gil_utils()
-#else
-void initcross_module_gil_utils()
-#endif
-{
-
- PyObject* m =
-#if PY_MAJOR_VERSION >= 3
- PyModule_Create(&moduledef);
-#else
- Py_InitModule(kModuleName, module_methods);
-#endif
-
- if (m != NULL) {
- static_assert(
- sizeof(&gil_acquire) == sizeof(void*),
- "Function pointer must have the same size as void*");
- PyModule_AddObject(m, "gil_acquire_funcaddr",
-                           PyLong_FromVoidPtr(reinterpret_cast<void*>(&gil_acquire)));
- }
-
-#if PY_MAJOR_VERSION >= 3
- return m;
-#endif
-}
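The deleted helper exports the address of `gil_acquire()` as a plain integer so that code outside the DSO can call back into it. A hedged sketch of one way a Python caller could exercise that export; the ctypes plumbing here is illustrative and not the pybind11 test suite's exact code:

```python
# Hedged sketch: call the exported gil_acquire() through its raw address.
import ctypes
import cross_module_gil_utils  # the extension built from the C++ file above

addr = cross_module_gil_utils.gil_acquire_funcaddr   # integer function address
gil_acquire = ctypes.CFUNCTYPE(None)(addr)           # void (*)() prototype
gil_acquire()  # enters py::gil_scoped_acquire inside the DSO and returns
```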
diff --git a/spaces/CVPR/LIVE/thrust/thrust/system/omp/detail/count.h b/spaces/CVPR/LIVE/thrust/thrust/system/omp/detail/count.h
deleted file mode 100644
index fde1728b77261d75c561b9042ec365281d78cee9..0000000000000000000000000000000000000000
--- a/spaces/CVPR/LIVE/thrust/thrust/system/omp/detail/count.h
+++ /dev/null
@@ -1,23 +0,0 @@
-/*
- * Copyright 2008-2013 NVIDIA Corporation
- *
- * Licensed under the Apache License, Version 2.0 (the "License");
- * you may not use this file except in compliance with the License.
- * You may obtain a copy of the License at
- *
- * http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-#pragma once
-
-#include <thrust/detail/config.h>
-
-// this system inherits count
-#include <thrust/system/cpp/detail/count.h>
-
diff --git a/spaces/CikeyQI/Yunzai/Yunzai/plugins/system/add.js b/spaces/CikeyQI/Yunzai/Yunzai/plugins/system/add.js
deleted file mode 100644
index eacf1ac98268bd8dc9e89ddd044047dfe21c4121..0000000000000000000000000000000000000000
--- a/spaces/CikeyQI/Yunzai/Yunzai/plugins/system/add.js
+++ /dev/null
@@ -1,446 +0,0 @@
-import cfg from "../../lib/config/config.js"
-import plugin from "../../lib/plugins/plugin.js"
-import common from "../../lib/common/common.js"
-import fs from "node:fs"
-import path from "node:path"
-import lodash from "lodash"
-import fetch from "node-fetch"
-import { fileTypeFromBuffer } from "file-type"
-
-let messageMap = {}
-
-export class add extends plugin {
- constructor() {
- super({
- name: "添加消息",
- dsc: "添加消息",
- event: "message",
- priority: 50000,
- rule: [
- {
- reg: "^#(全局)?添加",
- fnc: "add"
- },
- {
- reg: "^#(全局)?删除",
- fnc: "del"
- },
- {
- reg: "",
- fnc: "getMessage",
- log: false
- },
- {
- reg: "^#(全局)?(消息|词条)",
- fnc: "list"
- }
- ]
- })
-
- this.path = "data/messageJson/"
- }
-
- async init() {
- common.mkdirs(this.path)
- }
-
-  /** redis key for the cached group id */
- get grpKey() {
- return `Yz:group_id:${this.e.user_id}`
- }
-
-  /** handle #添加 (add a keyword reply) */
- async add() {
- this.isGlobal = Boolean(this.e.msg.match(/^#全局/))
- await this.getGroupId()
-
- if (!this.group_id) {
- await this.reply("请先在群内触发消息,确定添加的群")
- return
- }
-
- this.initMessageMap()
-
- if (!this.checkAuth()) return false
-    /** extract the keyword */
- this.getKeyWord()
- if (!this.e.keyWord) {
- await this.reply("添加错误:没有关键词")
- return
- }
-
- this.e.message = []
- this.setContext("addContext")
-
- return this.reply("请发送添加内容,完成后发送#结束添加", true, { at: true })
- }
-
-  /** resolve the group id */
- async getGroupId() {
-    /** global additions are stored in the bot-wide file */
- if (this.isGlobal) {
- this.group_id = "global"
- return this.group_id
- }
-
- if (this.e.isGroup) {
- this.group_id = this.e.group_id
- redis.setEx(this.grpKey, 3600 * 24 * 30, String(this.group_id))
- return this.group_id
- }
-
-    // fall back to the group id cached in redis
- let groupId = await redis.get(this.grpKey)
- if (groupId) {
- this.group_id = groupId
- return this.group_id
- }
-
- return false
- }
-
- checkAuth() {
- if (this.e.isMaster) return true
-
- const groupCfg = cfg.getGroup(this.e.self_id, this.group_id)
- if (groupCfg.addLimit == 2) {
- this.reply("暂无权限,只有主人才能操作")
- return false
- }
- if (groupCfg.addLimit == 1) {
- if (!this.e.member.is_admin) {
- this.reply("暂无权限,只有管理员才能操作")
- return false
- }
- }
-
- if (groupCfg.addPrivate != 1 && !this.e.isGroup) {
- this.reply("禁止私聊添加")
- return false
- }
-
- return true
- }
-
-  /** extract the keyword to add */
- getKeyWord() {
- this.e.isGlobal = Boolean(this.e.msg.match(/^#全局/))
- this.keyWord = this.e.raw_message.replace(/#(全局)?(添加|删除)/, "").trim()
- this.e.keyWord = this.trimAlias(this.keyWord)
- }
-
-  /** strip the bot alias prefix */
- trimAlias(msg) {
- const groupCfg = cfg.getGroup(this.e.self_id, this.group_id)
- let alias = groupCfg.botAlias
- if (!Array.isArray(alias))
- alias = [alias]
-
- for (const name of alias)
- if (msg.startsWith(name))
- msg = lodash.trimStart(msg, name).trim()
-
- return msg
- }
-
-  /** collect the content to add */
- async addContext() {
- const context = this.getContext()?.addContext
- this.isGlobal = context.isGlobal
- await this.getGroupId()
-    /** keyword */
- this.keyWord = context.keyWord
-
- if (!this.e.msg?.includes("#结束添加")) {
-      /** accumulate message segments */
- for (const i of this.e.message) {
- if (i.url) i.file = await this.saveFile(i)
- if (i.type == "at" && i.qq == this.e.self_id) continue
- context.message.push(i)
- }
- return
- }
-
- this.finish("addContext")
- if (!context.message?.length) {
- this.reply("添加错误:没有添加内容")
- return
- }
-
- if (!messageMap[this.group_id])
- messageMap[this.group_id] = new Map()
-
-    /** a single keyword can store multiple saved messages */
- let message = messageMap[this.group_id].get(this.keyWord)
- if (Array.isArray(message))
- message.push(context.message)
- else
- message = [context.message]
- messageMap[this.group_id].set(this.keyWord, message)
-
- if (message.length > 1)
- this.keyWord += String(message.length)
-
- this.saveJson()
- return this.reply(`添加成功:${this.keyWord}`)
- }
-
- saveJson() {
- let obj = {}
- for (let [k, v] of messageMap[this.group_id])
- obj[k] = v
-
- fs.writeFileSync(`${this.path}${this.group_id}.json`, JSON.stringify(obj, "", "\t"))
- }
-
- async makeBuffer(file) {
- if (file.match(/^base64:\/\//))
- return Buffer.from(file.replace(/^base64:\/\//, ""), "base64")
- else if (file.match(/^https?:\/\//))
- return Buffer.from(await (await fetch(file)).arrayBuffer())
- else if (fs.existsSync(file))
- return Buffer.from(fs.readFileSync(file))
- return file
- }
-
- async fileType(data) {
- const file = { name: `${this.group_id}/${data.type}/${Date.now()}` }
- try {
- file.url = data.url.replace(/^base64:\/\/.*/, "base64://...")
- file.buffer = await this.makeBuffer(data.url)
- file.type = await fileTypeFromBuffer(file.buffer)
- file.name = `${file.name}.${file.type.ext}`
- } catch (err) {
- logger.error(`文件类型检测错误:${logger.red(err)}`)
- file.name = `${file.name}-${path.basename(data.file || data.url)}`
- }
- return file
- }
-
- async saveFile(data) {
- const file = await this.fileType(data)
- if (file.name && Buffer.isBuffer(file.buffer) && common.mkdirs(path.dirname(`${this.path}${file.name}`))) {
- fs.writeFileSync(`${this.path}${file.name}`, file.buffer)
- return file.name
- }
- return data.url
- }
-
- async getMessage() {
- if (!this.e.raw_message) return false
- this.isGlobal = false
-
- await this.getGroupId()
- if (!this.group_id) return false
-
- this.initMessageMap()
- this.initGlobalMessageMap()
-
- this.keyWord = this.trimAlias(this.e.raw_message.trim())
- let keyWord = this.keyWord
-
- let num = 0
- if (isNaN(keyWord)) {
- num = keyWord.charAt(keyWord.length-1)
-
- if (!isNaN(num) && !messageMap[this.group_id].has(keyWord) && !messageMap.global.has(keyWord)) {
- keyWord = lodash.trimEnd(keyWord, num).trim()
- num--
- }
- }
-
- let msg = [
- ...messageMap[this.group_id].get(keyWord) || [],
- ...messageMap.global.get(keyWord) || [],
- ]
- if (lodash.isEmpty(msg)) return false
-
- if (!msg[num])
- num = lodash.random(0, msg.length-1)
-
- msg = [...msg[num]]
- for (const i in msg)
- if (msg[i].file && fs.existsSync(`${this.path}${msg[i].file}`))
- msg[i] = { ...msg[i], file: `base64://${fs.readFileSync(`${this.path}${msg[i].file}`).toString("base64")}` }
-
- logger.mark(`[发送消息]${this.e.logText} ${this.keyWord}`)
- const groupCfg = cfg.getGroup(this.e.self_id, this.group_id)
- return this.reply(msg, Boolean(groupCfg.addReply), {
- at: Boolean(groupCfg.addAt),
- recallMsg: groupCfg.addRecall,
- })
- }
-
-  /** load previously added messages for this group */
- initMessageMap() {
- if (messageMap[this.group_id]) return
- messageMap[this.group_id] = new Map()
-
- const path = `${this.path}${this.group_id}.json`
- if (!fs.existsSync(path)) return
-
- try {
- const message = JSON.parse(fs.readFileSync(path, "utf8"))
- for (const i in message)
- messageMap[this.group_id].set(i, message[i])
- } catch (err) {
- logger.error(`JSON 格式错误:${path} ${err}`)
- }
- }
-
-  /** load previously added global messages */
- initGlobalMessageMap() {
- if (messageMap.global) return
- messageMap.global = new Map()
-
- const globalPath = `${this.path}global.json`
- if (!fs.existsSync(globalPath)) return
-
- try {
- const message = JSON.parse(fs.readFileSync(globalPath, "utf8"))
- for (const i in message)
- messageMap.global.set(i, message[i])
- } catch (err) {
- logger.error(`JSON 格式错误:${globalPath} ${err}`)
- }
- }
-
- async del() {
- this.isGlobal = this.e.msg.includes("全局")
- await this.getGroupId()
- if (!(this.group_id && this.checkAuth())) return false
-
- this.initMessageMap()
-
- this.getKeyWord()
- if (!this.keyWord) {
- await this.reply("删除错误:没有关键词")
- return false
- }
-
- this.keyWord = this.trimAlias(this.keyWord)
- let keyWord = this.keyWord
-
- let num = false
- let index = 0
- if (isNaN(keyWord)) {
- num = keyWord.charAt(keyWord.length-1)
-
- if (!isNaN(num) && !messageMap[this.group_id].has(keyWord)) {
- keyWord = lodash.trimEnd(keyWord, num).trim()
- index = num-1
- } else {
- num = false
- }
- }
-
- let arr = messageMap[this.group_id].get(keyWord)
- if (!arr) {
- // await this.reply(`暂无此消息:${keyWord}`)
- return false
- }
-
- let tmp = []
- if (num) {
- if (!arr[index]) {
- // await this.reply(`暂无此消息:${keyWord}${num}`)
- return false
- }
-
- tmp = arr[index]
- arr.splice(index, 1)
-
- if (arr.length <= 0) {
- messageMap[this.group_id].delete(keyWord)
- } else {
- messageMap[this.group_id].set(keyWord, arr)
- }
- } else {
- if (this.e.msg.includes("删除全部")) {
- tmp = arr
- arr = []
- } else {
- tmp = arr.pop()
- }
-
- if (arr.length <= 0) {
- messageMap[this.group_id].delete(keyWord)
- } else {
- messageMap[this.group_id].set(keyWord, arr)
- }
- }
-
- this.saveJson()
- return this.reply(`删除成功:${this.keyWord}`)
- }
-
- async list() {
- this.isGlobal = Boolean(this.e.msg.match(/^#全局/))
-
- let page = 1
- let pageSize = 100
- let type = "list"
-
- await this.getGroupId()
- if (!this.group_id) return false
-
- this.initMessageMap()
-
- const search = this.e.msg.replace(/^#(全局)?(消息|词条)/, "").trim()
- if (search.match(/^列表/))
- page = search.replace(/^列表/, "") || 1
- else
- type = "search"
-
- let list = messageMap[this.group_id]
-
- if (lodash.isEmpty(list)) {
- await this.reply("暂无消息")
- return
- }
-
- let arr = []
- if (type == "list")
- for (let [k, v] of messageMap[this.group_id])
- arr.push({ key: k, val: v, num: arr.length+1 })
- else
- for (let [k, v] of messageMap[this.group_id])
- if (k.includes(search))
- arr.push({ key: k, val: v, num: arr.length+1 })
-
- let count = arr.length
- arr = arr.reverse()
-
- if (type == "list")
- arr = this.pagination(page, pageSize, arr)
- if (lodash.isEmpty(arr)) return false
-
- let msg = []
- let num = 0
- for (const i of arr) {
- if (num >= page * pageSize) break
-
- let keyWord = i.key
- if (!keyWord) continue
-
- msg.push(`${i.num}. ${keyWord}(${i.val.length})`)
- num++
- }
- msg = [msg.join("\n")]
-
- if (type == "list" && count > 100)
- msg.push(`更多内容请翻页查看\n如:#消息列表${Number(page)+1}`)
-
- let title = `消息列表:第${page}页,共${count}条`
- if (type == "search")
- title = `消息${search}:共${count}条`
-
- return this.reply(await common.makeForwardMsg(this.e, msg, title))
- }
-
-  /** pagination helper */
- pagination(pageNo, pageSize, array) {
- let offset = (pageNo-1) * pageSize
- return offset+pageSize >= array.length ? array.slice(offset, array.length) : array.slice(offset, offset+pageSize)
- }
-}
\ No newline at end of file
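The deleted plugin persists one JSON file per group under `data/messageJson/`, mapping each keyword to a list of saved messages, each of which is a list of message segments. A hedged sketch of that on-disk layout; the group id, keyword, and segment fields below are illustrative:

```python
# Hedged sketch of the on-disk layout maintained by saveJson() above.
import json
import os

store = "data/messageJson/123456789.json"  # one file per group id (illustrative)
entry = {
    "签到": [  # keyword -> list of saved messages
        [{"type": "text", "text": "签到成功!"}],  # each message is a list of segments
    ],
}

os.makedirs(os.path.dirname(store), exist_ok=True)
with open(store, "w", encoding="utf-8") as f:
    json.dump(entry, f, ensure_ascii=False, indent="\t")
```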
diff --git a/spaces/Cyril666/my_abi/dataset.py b/spaces/Cyril666/my_abi/dataset.py
deleted file mode 100644
index e424cb2134ba0d992515b2446302e1a758a3db66..0000000000000000000000000000000000000000
--- a/spaces/Cyril666/my_abi/dataset.py
+++ /dev/null
@@ -1,278 +0,0 @@
-import logging
-import re
-
-import cv2
-import lmdb
-import six
-from fastai.vision import *
-from torchvision import transforms
-
-from transforms import CVColorJitter, CVDeterioration, CVGeometry
-from utils import CharsetMapper, onehot
-
-
-class ImageDataset(Dataset):
- "`ImageDataset` read data from LMDB database."
-
- def __init__(self,
- path:PathOrStr,
- is_training:bool=True,
- img_h:int=32,
- img_w:int=100,
- max_length:int=25,
- check_length:bool=True,
- case_sensitive:bool=False,
- charset_path:str='data/charset_36.txt',
- convert_mode:str='RGB',
- data_aug:bool=True,
- deteriorate_ratio:float=0.,
- multiscales:bool=True,
- one_hot_y:bool=True,
- return_idx:bool=False,
- return_raw:bool=False,
- **kwargs):
- self.path, self.name = Path(path), Path(path).name
- assert self.path.is_dir() and self.path.exists(), f"{path} is not a valid directory."
- self.convert_mode, self.check_length = convert_mode, check_length
- self.img_h, self.img_w = img_h, img_w
- self.max_length, self.one_hot_y = max_length, one_hot_y
- self.return_idx, self.return_raw = return_idx, return_raw
- self.case_sensitive, self.is_training = case_sensitive, is_training
- self.data_aug, self.multiscales = data_aug, multiscales
- self.charset = CharsetMapper(charset_path, max_length=max_length+1)
- self.c = self.charset.num_classes
-
- self.env = lmdb.open(str(path), readonly=True, lock=False, readahead=False, meminit=False)
- assert self.env, f'Cannot open LMDB dataset from {path}.'
- with self.env.begin(write=False) as txn:
- self.length = int(txn.get('num-samples'.encode()))
-
- if self.is_training and self.data_aug:
- self.augment_tfs = transforms.Compose([
- CVGeometry(degrees=45, translate=(0.0, 0.0), scale=(0.5, 2.), shear=(45, 15), distortion=0.5, p=0.5),
- CVDeterioration(var=20, degrees=6, factor=4, p=0.25),
- CVColorJitter(brightness=0.5, contrast=0.5, saturation=0.5, hue=0.1, p=0.25)
- ])
- self.totensor = transforms.ToTensor()
-
- def __len__(self): return self.length
-
- def _next_image(self, index):
- next_index = random.randint(0, len(self) - 1)
- return self.get(next_index)
-
- def _check_image(self, x, pixels=6):
- if x.size[0] <= pixels or x.size[1] <= pixels: return False
- else: return True
-
- def resize_multiscales(self, img, borderType=cv2.BORDER_CONSTANT):
- def _resize_ratio(img, ratio, fix_h=True):
- if ratio * self.img_w < self.img_h:
- if fix_h: trg_h = self.img_h
- else: trg_h = int(ratio * self.img_w)
- trg_w = self.img_w
- else: trg_h, trg_w = self.img_h, int(self.img_h / ratio)
- img = cv2.resize(img, (trg_w, trg_h))
- pad_h, pad_w = (self.img_h - trg_h) / 2, (self.img_w - trg_w) / 2
- top, bottom = math.ceil(pad_h), math.floor(pad_h)
- left, right = math.ceil(pad_w), math.floor(pad_w)
- img = cv2.copyMakeBorder(img, top, bottom, left, right, borderType)
- return img
-
- if self.is_training:
- if random.random() < 0.5:
- base, maxh, maxw = self.img_h, self.img_h, self.img_w
- h, w = random.randint(base, maxh), random.randint(base, maxw)
- return _resize_ratio(img, h/w)
- else: return _resize_ratio(img, img.shape[0] / img.shape[1]) # keep aspect ratio
- else: return _resize_ratio(img, img.shape[0] / img.shape[1]) # keep aspect ratio
-
- def resize(self, img):
- if self.multiscales: return self.resize_multiscales(img, cv2.BORDER_REPLICATE)
- else: return cv2.resize(img, (self.img_w, self.img_h))
-
- def get(self, idx):
- with self.env.begin(write=False) as txn:
- image_key, label_key = f'image-{idx+1:09d}', f'label-{idx+1:09d}'
- try:
- label = str(txn.get(label_key.encode()), 'utf-8') # label
- label = re.sub('[^0-9a-zA-Z]+', '', label)
- if self.check_length and self.max_length > 0:
- if len(label) > self.max_length or len(label) <= 0:
- #logging.info(f'Long or short text image is found: {self.name}, {idx}, {label}, {len(label)}')
- return self._next_image(idx)
- label = label[:self.max_length]
-
- imgbuf = txn.get(image_key.encode()) # image
- buf = six.BytesIO()
- buf.write(imgbuf)
- buf.seek(0)
- with warnings.catch_warnings():
- warnings.simplefilter("ignore", UserWarning) # EXIF warning from TiffPlugin
- image = PIL.Image.open(buf).convert(self.convert_mode)
- if self.is_training and not self._check_image(image):
- #logging.info(f'Invalid image is found: {self.name}, {idx}, {label}, {len(label)}')
- return self._next_image(idx)
- except:
- import traceback
- traceback.print_exc()
- logging.info(f'Corrupted image is found: {self.name}, {idx}, {label}, {len(label)}')
- return self._next_image(idx)
- return image, label, idx
-
- def _process_training(self, image):
- if self.data_aug: image = self.augment_tfs(image)
- image = self.resize(np.array(image))
- return image
-
- def _process_test(self, image):
- return self.resize(np.array(image)) # TODO:move is_training to here
-
- def __getitem__(self, idx):
- image, text, idx_new = self.get(idx)
- if not self.is_training: assert idx == idx_new, f'idx {idx} != idx_new {idx_new} during testing.'
-
- if self.is_training: image = self._process_training(image)
- else: image = self._process_test(image)
- if self.return_raw: return image, text
- image = self.totensor(image)
-
- length = tensor(len(text) + 1).to(dtype=torch.long) # one for end token
- label = self.charset.get_labels(text, case_sensitive=self.case_sensitive)
- label = tensor(label).to(dtype=torch.long)
- if self.one_hot_y: label = onehot(label, self.charset.num_classes)
-
- if self.return_idx: y = [label, length, idx_new]
- else: y = [label, length]
- return image, y
-
-
-class TextDataset(Dataset):
- def __init__(self,
- path:PathOrStr,
- delimiter:str='\t',
- max_length:int=25,
- charset_path:str='data/charset_36.txt',
- case_sensitive=False,
- one_hot_x=True,
- one_hot_y=True,
- is_training=True,
- smooth_label=False,
- smooth_factor=0.2,
- use_sm=False,
- **kwargs):
- self.path = Path(path)
- self.case_sensitive, self.use_sm = case_sensitive, use_sm
- self.smooth_factor, self.smooth_label = smooth_factor, smooth_label
- self.charset = CharsetMapper(charset_path, max_length=max_length+1)
- self.one_hot_x, self.one_hot_y, self.is_training = one_hot_x, one_hot_y, is_training
- if self.is_training and self.use_sm: self.sm = SpellingMutation(charset=self.charset)
-
- dtype = {'inp': str, 'gt': str}
- self.df = pd.read_csv(self.path, dtype=dtype, delimiter=delimiter, na_filter=False)
- self.inp_col, self.gt_col = 0, 1
-
- def __len__(self): return len(self.df)
-
- def __getitem__(self, idx):
- text_x = self.df.iloc[idx, self.inp_col]
- text_x = re.sub('[^0-9a-zA-Z]+', '', text_x)
- if not self.case_sensitive: text_x = text_x.lower()
- if self.is_training and self.use_sm: text_x = self.sm(text_x)
-
- length_x = tensor(len(text_x) + 1).to(dtype=torch.long) # one for end token
- label_x = self.charset.get_labels(text_x, case_sensitive=self.case_sensitive)
- label_x = tensor(label_x)
- if self.one_hot_x:
- label_x = onehot(label_x, self.charset.num_classes)
- if self.is_training and self.smooth_label:
- label_x = torch.stack([self.prob_smooth_label(l) for l in label_x])
- x = [label_x, length_x]
-
- text_y = self.df.iloc[idx, self.gt_col]
- text_y = re.sub('[^0-9a-zA-Z]+', '', text_y)
- if not self.case_sensitive: text_y = text_y.lower()
- length_y = tensor(len(text_y) + 1).to(dtype=torch.long) # one for end token
- label_y = self.charset.get_labels(text_y, case_sensitive=self.case_sensitive)
- label_y = tensor(label_y)
- if self.one_hot_y: label_y = onehot(label_y, self.charset.num_classes)
- y = [label_y, length_y]
-
- return x, y
-
- def prob_smooth_label(self, one_hot):
- one_hot = one_hot.float()
- delta = torch.rand([]) * self.smooth_factor
- num_classes = len(one_hot)
- noise = torch.rand(num_classes)
- noise = noise / noise.sum() * delta
- one_hot = one_hot * (1 - delta) + noise
- return one_hot
-
-
-class SpellingMutation(object):
- def __init__(self, pn0=0.7, pn1=0.85, pn2=0.95, pt0=0.7, pt1=0.85, charset=None):
- """
- Args:
- pn0: the prob of not modifying characters is (pn0)
-            pn1: the prob of modifying one character is (pn1 - pn0)
- pn2: the prob of modifying two characters is (pn2 - pn1),
- and three (1 - pn2)
- pt0: the prob of replacing operation is pt0.
- pt1: the prob of inserting operation is (pt1 - pt0),
- and deleting operation is (1 - pt1)
- """
- super().__init__()
- self.pn0, self.pn1, self.pn2 = pn0, pn1, pn2
- self.pt0, self.pt1 = pt0, pt1
- self.charset = charset
- logging.info(f'the probs: pn0={self.pn0}, pn1={self.pn1} ' +
- f'pn2={self.pn2}, pt0={self.pt0}, pt1={self.pt1}')
-
- def is_digit(self, text, ratio=0.5):
- length = max(len(text), 1)
- digit_num = sum([t in self.charset.digits for t in text])
- if digit_num / length < ratio: return False
- return True
-
- def is_unk_char(self, char):
- # return char == self.charset.unk_char
- return (char not in self.charset.digits) and (char not in self.charset.alphabets)
-
- def get_num_to_modify(self, length):
- prob = random.random()
- if prob < self.pn0: num_to_modify = 0
- elif prob < self.pn1: num_to_modify = 1
- elif prob < self.pn2: num_to_modify = 2
- else: num_to_modify = 3
-
- if length <= 1: num_to_modify = 0
- elif length >= 2 and length <= 4: num_to_modify = min(num_to_modify, 1)
- else: num_to_modify = min(num_to_modify, length // 2) # smaller than length // 2
- return num_to_modify
-
- def __call__(self, text, debug=False):
- if self.is_digit(text): return text
- length = len(text)
- num_to_modify = self.get_num_to_modify(length)
- if num_to_modify <= 0: return text
-
- chars = []
- index = np.arange(0, length)
- random.shuffle(index)
- index = index[: num_to_modify]
- if debug: self.index = index
- for i, t in enumerate(text):
- if i not in index: chars.append(t)
- elif self.is_unk_char(t): chars.append(t)
- else:
- prob = random.random()
- if prob < self.pt0: # replace
- chars.append(random.choice(self.charset.alphabets))
- elif prob < self.pt1: # insert
- chars.append(random.choice(self.charset.alphabets))
- chars.append(t)
- else: # delete
- continue
- new_text = ''.join(chars[: self.charset.max_length-1])
- return new_text if len(new_text) >= 1 else text
\ No newline at end of file
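The `SpellingMutation` docstring above expresses its behaviour as cumulative probability thresholds. A small sketch of the per-outcome probabilities implied by the defaults:

```python
# Sketch: outcome probabilities implied by the SpellingMutation thresholds
# (defaults pn0=0.7, pn1=0.85, pn2=0.95, pt0=0.7, pt1=0.85).
def mutation_probs(pn0=0.7, pn1=0.85, pn2=0.95, pt0=0.7, pt1=0.85):
    chars_modified = {0: pn0, 1: pn1 - pn0, 2: pn2 - pn1, 3: 1 - pn2}
    operation = {"replace": pt0, "insert": pt1 - pt0, "delete": 1 - pt1}
    return chars_modified, operation

print(mutation_probs())
# roughly ({0: 0.7, 1: 0.15, 2: 0.1, 3: 0.05}, {'replace': 0.7, 'insert': 0.15, 'delete': 0.15})
```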
diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/mix.py b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/mix.py
deleted file mode 100644
index caf2c68b835101c4f3d18d3d53fbb1b8494b3dba..0000000000000000000000000000000000000000
--- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/mix.py
+++ /dev/null
@@ -1,129 +0,0 @@
-"""
-Ways to transform interfaces to produce new interfaces
-"""
-import asyncio
-import warnings
-
-from gradio_client.documentation import document, set_documentation_group
-
-import gradio
-
-set_documentation_group("mix_interface")
-
-
-@document()
-class Parallel(gradio.Interface):
- """
- Creates a new Interface consisting of multiple Interfaces in parallel (comparing their outputs).
- The Interfaces to put in Parallel must share the same input components (but can have different output components).
-
- Demos: interface_parallel, interface_parallel_load
- Guides: advanced-interface-features
- """
-
- def __init__(self, *interfaces: gradio.Interface, **options):
- """
- Parameters:
- interfaces: any number of Interface objects that are to be compared in parallel
- options: additional kwargs that are passed into the new Interface object to customize it
- Returns:
- an Interface object comparing the given models
- """
- outputs = []
-
- for interface in interfaces:
- if not (isinstance(interface, gradio.Interface)):
- warnings.warn(
- "Parallel requires all inputs to be of type Interface. "
- "May not work as expected."
- )
- outputs.extend(interface.output_components)
-
- async def parallel_fn(*args):
- return_values_with_durations = await asyncio.gather(
- *[interface.call_function(0, list(args)) for interface in interfaces]
- )
- return_values = [rv["prediction"] for rv in return_values_with_durations]
- combined_list = []
- for interface, return_value in zip(interfaces, return_values):
- if len(interface.output_components) == 1:
- combined_list.append(return_value)
- else:
- combined_list.extend(return_value)
- if len(outputs) == 1:
- return combined_list[0]
- return combined_list
-
- parallel_fn.__name__ = " | ".join([io.__name__ for io in interfaces])
-
- kwargs = {
- "fn": parallel_fn,
- "inputs": interfaces[0].input_components,
- "outputs": outputs,
- }
- kwargs.update(options)
- super().__init__(**kwargs)
-
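-# Illustrative sketch, not part of the original module: Parallel wraps Interfaces
-# that share the same input components, so a minimal use (hub identifiers are
-# placeholders, and gradio.load is assumed to be available in this gradio version)
-# would look like
-#
-#     generator_a = gradio.load("models/gpt2")
-#     generator_b = gradio.load("models/distilgpt2")
-#     Parallel(generator_a, generator_b).launch()
-#
-# which renders both models' outputs side by side for one shared input.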
-
-@document()
-class Series(gradio.Interface):
- """
- Creates a new Interface from multiple Interfaces in series (the output of one is fed as the input to the next,
- and so the input and output components must agree between the interfaces).
-
- Demos: interface_series, interface_series_load
- Guides: advanced-interface-features
- """
-
- def __init__(self, *interfaces: gradio.Interface, **options):
- """
- Parameters:
- interfaces: any number of Interface objects that are to be connected in series
- options: additional kwargs that are passed into the new Interface object to customize it
- Returns:
- an Interface object connecting the given models
- """
-
- async def connected_fn(*data):
- for idx, interface in enumerate(interfaces):
- # skip preprocessing for first interface since the Series interface will include it
- if idx > 0 and not (interface.api_mode):
- data = [
- input_component.preprocess(data[i])
- for i, input_component in enumerate(interface.input_components)
- ]
-
-            # run all predictions sequentially
- data = (await interface.call_function(0, list(data)))["prediction"]
- if len(interface.output_components) == 1:
- data = [data]
-
- # skip postprocessing for final interface since the Series interface will include it
- if idx < len(interfaces) - 1 and not (interface.api_mode):
- data = [
- output_component.postprocess(data[i])
- for i, output_component in enumerate(
- interface.output_components
- )
- ]
-
- if len(interface.output_components) == 1: # type: ignore
- return data[0]
- return data
-
- for interface in interfaces:
- if not (isinstance(interface, gradio.Interface)):
- warnings.warn(
- "Series requires all inputs to be of type Interface. May "
- "not work as expected."
- )
- connected_fn.__name__ = " => ".join([io.__name__ for io in interfaces])
-
- kwargs = {
- "fn": connected_fn,
- "inputs": interfaces[0].input_components,
- "outputs": interfaces[-1].output_components,
- "_api_mode": interfaces[0].api_mode, # TODO: set api_mode per-interface
- }
- kwargs.update(options)
- super().__init__(**kwargs)
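-
-
-# Illustrative sketch, not part of the original module: a Series chain feeds the
-# first interface's output into the next one. The model identifiers below are
-# placeholders and the example assumes hub access through gradio.load.
-if __name__ == "__main__":
-    text_generator = gradio.load("models/gpt2")
-    transformer = gradio.load("models/t5-small")
-    demo = Series(text_generator, transformer, title="Generate, then transform")
-    demo.launch()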
diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/templates/frontend/assets/index-1e03cd90.css b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/templates/frontend/assets/index-1e03cd90.css
deleted file mode 100644
index 6692555db405e6eb83d0671b1ef9922ee30770d3..0000000000000000000000000000000000000000
--- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/templates/frontend/assets/index-1e03cd90.css
+++ /dev/null
@@ -1 +0,0 @@
-.preview.svelte-w0jac3.svelte-w0jac3{display:flex;position:absolute;inset:0;flex-direction:column;z-index:var(--layer-2);backdrop-filter:blur(8px);background:var(--background-fill-primary);height:var(--size-full)}.fixed-height.svelte-w0jac3.svelte-w0jac3{min-height:var(--size-80);max-height:55vh}@media (min-width: 1280px){.fixed-height.svelte-w0jac3.svelte-w0jac3{min-height:450px}}.preview.svelte-w0jac3 img.svelte-w0jac3{width:var(--size-full);height:calc(var(--size-full) - 60px);object-fit:contain}.preview.svelte-w0jac3 img.with-caption.svelte-w0jac3{height:calc(var(--size-full) - 80px)}.caption.svelte-w0jac3.svelte-w0jac3{padding:var(--size-2) var(--size-3);overflow:hidden;color:var(--block-label-text-color);font-weight:var(--weight-semibold);text-align:center;text-overflow:ellipsis;white-space:nowrap}.thumbnails.svelte-w0jac3.svelte-w0jac3{display:flex;position:absolute;bottom:0;justify-content:center;align-items:center;gap:var(--spacing-lg);width:var(--size-full);height:var(--size-14);overflow-x:scroll}.thumbnail-item.svelte-w0jac3.svelte-w0jac3{--ring-color:transparent;position:relative;box-shadow:0 0 0 2px var(--ring-color),var(--shadow-drop);border:1px solid var(--border-color-primary);border-radius:var(--button-small-radius);background:var(--background-fill-secondary);aspect-ratio:var(--ratio-square);width:var(--size-full);height:var(--size-full);overflow:clip}.thumbnail-item.svelte-w0jac3.svelte-w0jac3:hover{--ring-color:var(--color-accent);filter:brightness(1.1)}.thumbnail-item.selected.svelte-w0jac3.svelte-w0jac3{--ring-color:var(--color-accent)}.thumbnail-small.svelte-w0jac3.svelte-w0jac3{flex:none;transform:scale(.9);transition:75ms;width:var(--size-9);height:var(--size-9)}.thumbnail-small.selected.svelte-w0jac3.svelte-w0jac3{--ring-color:var(--color-accent);transform:scale(1);border-color:var(--color-accent)}.thumbnail-small.svelte-w0jac3>img.svelte-w0jac3{width:var(--size-full);height:var(--size-full);overflow:hidden;object-fit:var(--object-fit)}.grid-wrap.svelte-w0jac3.svelte-w0jac3{position:relative;padding:var(--size-2);height:var(--size-full);overflow-y:auto}.grid-container.svelte-w0jac3.svelte-w0jac3{display:grid;position:relative;grid-template-rows:var(--grid-rows);grid-template-columns:var(--grid-cols);gap:var(--spacing-lg)}@media (min-width: 640px){.grid-container.svelte-w0jac3.svelte-w0jac3{grid-template-columns:var(--sm-grid-cols)}}@media (min-width: 768px){.grid-container.svelte-w0jac3.svelte-w0jac3{grid-template-columns:var(--md-grid-cols)}}@media (min-width: 1024px){.grid-container.svelte-w0jac3.svelte-w0jac3{grid-template-columns:var(--lg-grid-cols)}}@media (min-width: 1280px){.grid-container.svelte-w0jac3.svelte-w0jac3{grid-template-columns:var(--xl-grid-cols)}}@media (min-width: 1536px){.grid-container.svelte-w0jac3.svelte-w0jac3{grid-template-columns:var(--2xl-grid-cols)}}.thumbnail-lg.svelte-w0jac3>img.svelte-w0jac3{width:var(--size-full);height:var(--size-full);overflow:hidden;object-fit:var(--object-fit)}.thumbnail-lg.svelte-w0jac3:hover .caption-label.svelte-w0jac3{opacity:.5}.caption-label.svelte-w0jac3.svelte-w0jac3{position:absolute;right:var(--block-label-margin);bottom:var(--block-label-margin);z-index:var(--layer-1);border-top:1px solid var(--border-color-primary);border-left:1px solid 
var(--border-color-primary);border-radius:var(--block-label-radius);background:var(--background-fill-secondary);padding:var(--block-label-padding);max-width:80%;overflow:hidden;font-size:var(--block-label-text-size);text-align:left;text-overflow:ellipsis;white-space:nowrap}.icon-button.svelte-w0jac3.svelte-w0jac3{position:absolute;top:0;right:0;z-index:var(--layer-1)}
diff --git a/spaces/DaleChen/AutoGPT/autogpt/json_utils/__init__.py b/spaces/DaleChen/AutoGPT/autogpt/json_utils/__init__.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/Datasculptor/DescriptionGPT/detic/modeling/roi_heads/detic_fast_rcnn.py b/spaces/Datasculptor/DescriptionGPT/detic/modeling/roi_heads/detic_fast_rcnn.py
deleted file mode 100644
index 186822dd8f67ef9d991ee79101b3bf1243a722a5..0000000000000000000000000000000000000000
--- a/spaces/Datasculptor/DescriptionGPT/detic/modeling/roi_heads/detic_fast_rcnn.py
+++ /dev/null
@@ -1,595 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-import logging
-import math
-import json
-import numpy as np
-from typing import Dict, Union
-import torch
-from fvcore.nn import giou_loss, smooth_l1_loss
-from torch import nn
-from torch.nn import functional as F
-import fvcore.nn.weight_init as weight_init
-import detectron2.utils.comm as comm
-from detectron2.config import configurable
-from detectron2.layers import ShapeSpec, batched_nms, cat, cross_entropy, nonzero_tuple
-from detectron2.structures import Boxes, Instances
-from detectron2.utils.events import get_event_storage
-from detectron2.modeling.box_regression import Box2BoxTransform
-from detectron2.modeling.roi_heads.fast_rcnn import FastRCNNOutputLayers
-from detectron2.modeling.roi_heads.fast_rcnn import fast_rcnn_inference
-from detectron2.modeling.roi_heads.fast_rcnn import _log_classification_stats
-
-from torch.cuda.amp import autocast
-from ..utils import load_class_freq, get_fed_loss_inds
-from .zero_shot_classifier import ZeroShotClassifier
-
-__all__ = ["DeticFastRCNNOutputLayers"]
-
-
-class DeticFastRCNNOutputLayers(FastRCNNOutputLayers):
- @configurable
- def __init__(
- self,
- input_shape: ShapeSpec,
- *,
- mult_proposal_score=False,
- cls_score=None,
- sync_caption_batch = False,
- use_sigmoid_ce = False,
- use_fed_loss = False,
- ignore_zero_cats = False,
- fed_loss_num_cat = 50,
- dynamic_classifier = False,
- image_label_loss = '',
- use_zeroshot_cls = False,
- image_loss_weight = 0.1,
- with_softmax_prop = False,
- caption_weight = 1.0,
- neg_cap_weight = 1.0,
- add_image_box = False,
- debug = False,
- prior_prob = 0.01,
- cat_freq_path = '',
- fed_loss_freq_weight = 0.5,
- softmax_weak_loss = False,
- **kwargs,
- ):
- super().__init__(
- input_shape=input_shape,
- **kwargs,
- )
- self.mult_proposal_score = mult_proposal_score
- self.sync_caption_batch = sync_caption_batch
- self.use_sigmoid_ce = use_sigmoid_ce
- self.use_fed_loss = use_fed_loss
- self.ignore_zero_cats = ignore_zero_cats
- self.fed_loss_num_cat = fed_loss_num_cat
- self.dynamic_classifier = dynamic_classifier
- self.image_label_loss = image_label_loss
- self.use_zeroshot_cls = use_zeroshot_cls
- self.image_loss_weight = image_loss_weight
- self.with_softmax_prop = with_softmax_prop
- self.caption_weight = caption_weight
- self.neg_cap_weight = neg_cap_weight
- self.add_image_box = add_image_box
- self.softmax_weak_loss = softmax_weak_loss
- self.debug = debug
-
- if softmax_weak_loss:
- assert image_label_loss in ['max_size']
-
- if self.use_sigmoid_ce:
- bias_value = -math.log((1 - prior_prob) / prior_prob)
- nn.init.constant_(self.cls_score.bias, bias_value)
-
- if self.use_fed_loss or self.ignore_zero_cats:
- freq_weight = load_class_freq(cat_freq_path, fed_loss_freq_weight)
- self.register_buffer('freq_weight', freq_weight)
- else:
- self.freq_weight = None
-
- if self.use_fed_loss and len(self.freq_weight) < self.num_classes:
- # assert self.num_classes == 11493
- print('Extending federated loss weight')
- self.freq_weight = torch.cat(
- [self.freq_weight,
- self.freq_weight.new_zeros(
- self.num_classes - len(self.freq_weight))]
- )
-
- assert (not self.dynamic_classifier) or (not self.use_fed_loss)
- input_size = input_shape.channels * \
- (input_shape.width or 1) * (input_shape.height or 1)
-
- if self.use_zeroshot_cls:
- del self.cls_score
- del self.bbox_pred
- assert cls_score is not None
- self.cls_score = cls_score
- self.bbox_pred = nn.Sequential(
- nn.Linear(input_size, input_size),
- nn.ReLU(inplace=True),
- nn.Linear(input_size, 4)
- )
- weight_init.c2_xavier_fill(self.bbox_pred[0])
- nn.init.normal_(self.bbox_pred[-1].weight, std=0.001)
- nn.init.constant_(self.bbox_pred[-1].bias, 0)
-
- if self.with_softmax_prop:
- self.prop_score = nn.Sequential(
- nn.Linear(input_size, input_size),
- nn.ReLU(inplace=True),
- nn.Linear(input_size, self.num_classes + 1),
- )
- weight_init.c2_xavier_fill(self.prop_score[0])
- nn.init.normal_(self.prop_score[-1].weight, mean=0, std=0.001)
- nn.init.constant_(self.prop_score[-1].bias, 0)
-
-
- @classmethod
- def from_config(cls, cfg, input_shape):
- ret = super().from_config(cfg, input_shape)
- ret.update({
- 'mult_proposal_score': cfg.MODEL.ROI_BOX_HEAD.MULT_PROPOSAL_SCORE,
- 'sync_caption_batch': cfg.MODEL.SYNC_CAPTION_BATCH,
- 'use_sigmoid_ce': cfg.MODEL.ROI_BOX_HEAD.USE_SIGMOID_CE,
- 'use_fed_loss': cfg.MODEL.ROI_BOX_HEAD.USE_FED_LOSS,
- 'ignore_zero_cats': cfg.MODEL.ROI_BOX_HEAD.IGNORE_ZERO_CATS,
- 'fed_loss_num_cat': cfg.MODEL.ROI_BOX_HEAD.FED_LOSS_NUM_CAT,
- 'dynamic_classifier': cfg.MODEL.DYNAMIC_CLASSIFIER,
- 'image_label_loss': cfg.MODEL.ROI_BOX_HEAD.IMAGE_LABEL_LOSS,
- 'use_zeroshot_cls': cfg.MODEL.ROI_BOX_HEAD.USE_ZEROSHOT_CLS,
- 'image_loss_weight': cfg.MODEL.ROI_BOX_HEAD.IMAGE_LOSS_WEIGHT,
- 'with_softmax_prop': cfg.MODEL.ROI_BOX_HEAD.WITH_SOFTMAX_PROP,
- 'caption_weight': cfg.MODEL.ROI_BOX_HEAD.CAPTION_WEIGHT,
- 'neg_cap_weight': cfg.MODEL.ROI_BOX_HEAD.NEG_CAP_WEIGHT,
- 'add_image_box': cfg.MODEL.ROI_BOX_HEAD.ADD_IMAGE_BOX,
- 'debug': cfg.DEBUG or cfg.SAVE_DEBUG or cfg.IS_DEBUG,
- 'prior_prob': cfg.MODEL.ROI_BOX_HEAD.PRIOR_PROB,
- 'cat_freq_path': cfg.MODEL.ROI_BOX_HEAD.CAT_FREQ_PATH,
- 'fed_loss_freq_weight': cfg.MODEL.ROI_BOX_HEAD.FED_LOSS_FREQ_WEIGHT,
- 'softmax_weak_loss': cfg.MODEL.ROI_BOX_HEAD.SOFTMAX_WEAK_LOSS,
- })
- if ret['use_zeroshot_cls']:
- ret['cls_score'] = ZeroShotClassifier(cfg, input_shape)
- return ret
-
- def losses(self, predictions, proposals, \
- use_advanced_loss=True,
- classifier_info=(None,None,None)):
- """
- enable advanced loss
- """
- scores, proposal_deltas = predictions
- gt_classes = (
- cat([p.gt_classes for p in proposals], dim=0) if len(proposals) else torch.empty(0)
- )
- num_classes = self.num_classes
- if self.dynamic_classifier:
- _, cls_id_map = classifier_info[1]
- gt_classes = cls_id_map[gt_classes]
- num_classes = scores.shape[1] - 1
- assert cls_id_map[self.num_classes] == num_classes
- _log_classification_stats(scores, gt_classes)
-
- if len(proposals):
- proposal_boxes = cat([p.proposal_boxes.tensor for p in proposals], dim=0) # Nx4
- assert not proposal_boxes.requires_grad, "Proposals should not require gradients!"
- gt_boxes = cat(
- [(p.gt_boxes if p.has("gt_boxes") else p.proposal_boxes).tensor for p in proposals],
- dim=0,
- )
- else:
- proposal_boxes = gt_boxes = torch.empty((0, 4), device=proposal_deltas.device)
-
- if self.use_sigmoid_ce:
- loss_cls = self.sigmoid_cross_entropy_loss(scores, gt_classes)
- else:
- loss_cls = self.softmax_cross_entropy_loss(scores, gt_classes)
- return {
- "loss_cls": loss_cls,
- "loss_box_reg": self.box_reg_loss(
- proposal_boxes, gt_boxes, proposal_deltas, gt_classes,
- num_classes=num_classes)
- }
-
-
- def sigmoid_cross_entropy_loss(self, pred_class_logits, gt_classes):
- if pred_class_logits.numel() == 0:
- return pred_class_logits.new_zeros([1])[0] # This is more robust than .sum() * 0.
-
- B = pred_class_logits.shape[0]
- C = pred_class_logits.shape[1] - 1
-
- target = pred_class_logits.new_zeros(B, C + 1)
- target[range(len(gt_classes)), gt_classes] = 1 # B x (C + 1)
- target = target[:, :C] # B x C
-
- weight = 1
-
- if self.use_fed_loss and (self.freq_weight is not None): # fedloss
- appeared = get_fed_loss_inds(
- gt_classes,
- num_sample_cats=self.fed_loss_num_cat,
- C=C,
- weight=self.freq_weight)
- appeared_mask = appeared.new_zeros(C + 1)
- appeared_mask[appeared] = 1 # C + 1
- appeared_mask = appeared_mask[:C]
- fed_w = appeared_mask.view(1, C).expand(B, C)
- weight = weight * fed_w.float()
- if self.ignore_zero_cats and (self.freq_weight is not None):
- w = (self.freq_weight.view(-1) > 1e-4).float()
- weight = weight * w.view(1, C).expand(B, C)
- # import pdb; pdb.set_trace()
-
- cls_loss = F.binary_cross_entropy_with_logits(
- pred_class_logits[:, :-1], target, reduction='none') # B x C
- loss = torch.sum(cls_loss * weight) / B
- return loss
-
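-    # Illustrative shape walk-through, not part of the original file: with B = 2
-    # proposals, C = 3 foreground classes and gt_classes = [1, 3] (index 3 being
-    # background), the one-hot target is built over C + 1 columns and truncated
-    # to the first C, so the background row becomes all zeros:
-    #     target = [[0, 1, 0],
-    #               [0, 0, 0]]
-    # The loss is then a per-class sigmoid BCE over the first C logits (no
-    # explicit background logit), optionally masked by the federated-loss /
-    # frequency weights, and averaged over the B proposals.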
-
- def softmax_cross_entropy_loss(self, pred_class_logits, gt_classes):
- """
- change _no_instance handling
- """
- if pred_class_logits.numel() == 0:
- return pred_class_logits.new_zeros([1])[0]
-
- if self.ignore_zero_cats and (self.freq_weight is not None):
- zero_weight = torch.cat([
- (self.freq_weight.view(-1) > 1e-4).float(),
- self.freq_weight.new_ones(1)]) # C + 1
- loss = F.cross_entropy(
- pred_class_logits, gt_classes,
- weight=zero_weight, reduction="mean")
- elif self.use_fed_loss and (self.freq_weight is not None): # fedloss
- C = pred_class_logits.shape[1] - 1
- appeared = get_fed_loss_inds(
- gt_classes,
- num_sample_cats=self.fed_loss_num_cat,
- C=C,
- weight=self.freq_weight)
- appeared_mask = appeared.new_zeros(C + 1).float()
- appeared_mask[appeared] = 1. # C + 1
- appeared_mask[C] = 1.
- loss = F.cross_entropy(
- pred_class_logits, gt_classes,
- weight=appeared_mask, reduction="mean")
- else:
- loss = F.cross_entropy(
- pred_class_logits, gt_classes, reduction="mean")
- return loss
-
-
- def box_reg_loss(
- self, proposal_boxes, gt_boxes, pred_deltas, gt_classes,
- num_classes=-1):
- """
- Allow custom background index
- """
- num_classes = num_classes if num_classes > 0 else self.num_classes
- box_dim = proposal_boxes.shape[1] # 4 or 5
- fg_inds = nonzero_tuple((gt_classes >= 0) & (gt_classes < num_classes))[0]
- if pred_deltas.shape[1] == box_dim: # cls-agnostic regression
- fg_pred_deltas = pred_deltas[fg_inds]
- else:
- fg_pred_deltas = pred_deltas.view(-1, self.num_classes, box_dim)[
- fg_inds, gt_classes[fg_inds]
- ]
-
- if self.box_reg_loss_type == "smooth_l1":
- gt_pred_deltas = self.box2box_transform.get_deltas(
- proposal_boxes[fg_inds],
- gt_boxes[fg_inds],
- )
- loss_box_reg = smooth_l1_loss(
- fg_pred_deltas, gt_pred_deltas, self.smooth_l1_beta, reduction="sum"
- )
- elif self.box_reg_loss_type == "giou":
- fg_pred_boxes = self.box2box_transform.apply_deltas(
- fg_pred_deltas, proposal_boxes[fg_inds]
- )
- loss_box_reg = giou_loss(fg_pred_boxes, gt_boxes[fg_inds], reduction="sum")
- else:
- raise ValueError(f"Invalid bbox reg loss type '{self.box_reg_loss_type}'")
- return loss_box_reg / max(gt_classes.numel(), 1.0)
-
- def inference(self, predictions, proposals):
- """
-        enable use of proposal scores
- """
- predictions = (predictions[0], predictions[1])
- boxes = self.predict_boxes(predictions, proposals)
- scores = self.predict_probs(predictions, proposals)
- if self.mult_proposal_score:
- proposal_scores = [p.get('objectness_logits') for p in proposals]
- scores = [(s * ps[:, None]) ** 0.5 \
- for s, ps in zip(scores, proposal_scores)]
- image_shapes = [x.image_size for x in proposals]
- return fast_rcnn_inference(
- boxes,
- scores,
- image_shapes,
- self.test_score_thresh,
- self.test_nms_thresh,
- self.test_topk_per_image,
- )
-
-
- def predict_probs(self, predictions, proposals):
- """
- support sigmoid
- """
- # scores, _ = predictions
- scores = predictions[0]
- num_inst_per_image = [len(p) for p in proposals]
- if self.use_sigmoid_ce:
- probs = scores.sigmoid()
- else:
- probs = F.softmax(scores, dim=-1)
- return probs.split(num_inst_per_image, dim=0)
-
-
- def image_label_losses(self, predictions, proposals, image_labels, \
- classifier_info=(None,None,None), ann_type='image'):
- '''
- Inputs:
- scores: N x (C + 1)
-            image_labels: B x 1
- '''
- num_inst_per_image = [len(p) for p in proposals]
- scores = predictions[0]
- scores = scores.split(num_inst_per_image, dim=0) # B x n x (C + 1)
- if self.with_softmax_prop:
- prop_scores = predictions[2].split(num_inst_per_image, dim=0)
- else:
- prop_scores = [None for _ in num_inst_per_image]
- B = len(scores)
- img_box_count = 0
- select_size_count = 0
- select_x_count = 0
- select_y_count = 0
- max_score_count = 0
- storage = get_event_storage()
- loss = scores[0].new_zeros([1])[0]
- caption_loss = scores[0].new_zeros([1])[0]
- for idx, (score, labels, prop_score, p) in enumerate(zip(
- scores, image_labels, prop_scores, proposals)):
- if score.shape[0] == 0:
- loss += score.new_zeros([1])[0]
- continue
- if 'caption' in ann_type:
- score, caption_loss_img = self._caption_loss(
- score, classifier_info, idx, B)
- caption_loss += self.caption_weight * caption_loss_img
- if ann_type == 'caption':
- continue
-
- if self.debug:
- p.selected = score.new_zeros(
- (len(p),), dtype=torch.long) - 1
- for i_l, label in enumerate(labels):
- if self.dynamic_classifier:
- if idx == 0 and i_l == 0 and comm.is_main_process():
- storage.put_scalar('stats_label', label)
- label = classifier_info[1][1][label]
- assert label < score.shape[1]
- if self.image_label_loss in ['wsod', 'wsddn']:
- loss_i, ind = self._wsddn_loss(score, prop_score, label)
- elif self.image_label_loss == 'max_score':
- loss_i, ind = self._max_score_loss(score, label)
- elif self.image_label_loss == 'max_size':
- loss_i, ind = self._max_size_loss(score, label, p)
- elif self.image_label_loss == 'first':
- loss_i, ind = self._first_loss(score, label)
- elif self.image_label_loss == 'image':
- loss_i, ind = self._image_loss(score, label)
- elif self.image_label_loss == 'min_loss':
- loss_i, ind = self._min_loss_loss(score, label)
- else:
- assert 0
- loss += loss_i / len(labels)
- if type(ind) == type([]):
- img_box_count = sum(ind) / len(ind)
- if self.debug:
- for ind_i in ind:
- p.selected[ind_i] = label
- else:
- img_box_count = ind
- select_size_count = p[ind].proposal_boxes.area() / \
- (p.image_size[0] * p.image_size[1])
- max_score_count = score[ind, label].sigmoid()
- select_x_count = (p.proposal_boxes.tensor[ind, 0] + \
- p.proposal_boxes.tensor[ind, 2]) / 2 / p.image_size[1]
- select_y_count = (p.proposal_boxes.tensor[ind, 1] + \
- p.proposal_boxes.tensor[ind, 3]) / 2 / p.image_size[0]
- if self.debug:
- p.selected[ind] = label
-
- loss = loss / B
- storage.put_scalar('stats_l_image', loss.item())
- if 'caption' in ann_type:
- caption_loss = caption_loss / B
- loss = loss + caption_loss
- storage.put_scalar('stats_l_caption', caption_loss.item())
- if comm.is_main_process():
- storage.put_scalar('pool_stats', img_box_count)
- storage.put_scalar('stats_select_size', select_size_count)
- storage.put_scalar('stats_select_x', select_x_count)
- storage.put_scalar('stats_select_y', select_y_count)
- storage.put_scalar('stats_max_label_score', max_score_count)
-
- return {
- 'image_loss': loss * self.image_loss_weight,
- 'loss_cls': score.new_zeros([1])[0],
- 'loss_box_reg': score.new_zeros([1])[0]}
-
-
- def forward(self, x, classifier_info=(None,None,None)):
- """
- enable classifier_info
- """
- if x.dim() > 2:
- x = torch.flatten(x, start_dim=1)
- scores = []
-
- if classifier_info[0] is not None:
- cls_scores = self.cls_score(x, classifier=classifier_info[0])
- scores.append(cls_scores)
- else:
- cls_scores = self.cls_score(x)
- scores.append(cls_scores)
-
- if classifier_info[2] is not None:
- cap_cls = classifier_info[2]
- if self.sync_caption_batch:
- caption_scores = self.cls_score(x, classifier=cap_cls[:, :-1])
- else:
- caption_scores = self.cls_score(x, classifier=cap_cls)
- scores.append(caption_scores)
- scores = torch.cat(scores, dim=1) # B x C' or B x N or B x (C'+N)
-
- proposal_deltas = self.bbox_pred(x)
- if self.with_softmax_prop:
- prop_score = self.prop_score(x)
- return scores, proposal_deltas, prop_score
- else:
- return scores, proposal_deltas
-
-
- def _caption_loss(self, score, classifier_info, idx, B):
- assert (classifier_info[2] is not None)
- assert self.add_image_box
- cls_and_cap_num = score.shape[1]
- cap_num = classifier_info[2].shape[0]
- score, caption_score = score.split(
- [cls_and_cap_num - cap_num, cap_num], dim=1)
- # n x (C + 1), n x B
- caption_score = caption_score[-1:] # 1 x B # -1: image level box
- caption_target = caption_score.new_zeros(
- caption_score.shape) # 1 x B or 1 x MB, M: num machines
- if self.sync_caption_batch:
- # caption_target: 1 x MB
- rank = comm.get_rank()
- global_idx = B * rank + idx
- assert (classifier_info[2][
- global_idx, -1] - rank) ** 2 < 1e-8, \
- '{} {} {} {} {}'.format(
- rank, global_idx,
- classifier_info[2][global_idx, -1],
- classifier_info[2].shape,
- classifier_info[2][:, -1])
- caption_target[:, global_idx] = 1.
- else:
- assert caption_score.shape[1] == B
- caption_target[:, idx] = 1.
- caption_loss_img = F.binary_cross_entropy_with_logits(
- caption_score, caption_target, reduction='none')
- if self.sync_caption_batch:
- fg_mask = (caption_target > 0.5).float()
- assert (fg_mask.sum().item() - 1.) ** 2 < 1e-8, '{} {}'.format(
- fg_mask.shape, fg_mask)
- pos_loss = (caption_loss_img * fg_mask).sum()
- neg_loss = (caption_loss_img * (1. - fg_mask)).sum()
- caption_loss_img = pos_loss + self.neg_cap_weight * neg_loss
- else:
- caption_loss_img = caption_loss_img.sum()
- return score, caption_loss_img
-
-
- def _wsddn_loss(self, score, prop_score, label):
- assert prop_score is not None
- loss = 0
- final_score = score.sigmoid() * \
- F.softmax(prop_score, dim=0) # B x (C + 1)
- img_score = torch.clamp(
- torch.sum(final_score, dim=0),
- min=1e-10, max=1-1e-10) # (C + 1)
- target = img_score.new_zeros(img_score.shape) # (C + 1)
- target[label] = 1.
- loss += F.binary_cross_entropy(img_score, target)
- ind = final_score[:, label].argmax()
- return loss, ind
-
-
- def _max_score_loss(self, score, label):
- loss = 0
- target = score.new_zeros(score.shape[1])
- target[label] = 1.
- ind = score[:, label].argmax().item()
- loss += F.binary_cross_entropy_with_logits(
- score[ind], target, reduction='sum')
- return loss, ind
-
-
- def _min_loss_loss(self, score, label):
- loss = 0
- target = score.new_zeros(score.shape)
- target[:, label] = 1.
- with torch.no_grad():
- x = F.binary_cross_entropy_with_logits(
- score, target, reduction='none').sum(dim=1) # n
- ind = x.argmin().item()
- loss += F.binary_cross_entropy_with_logits(
- score[ind], target[0], reduction='sum')
- return loss, ind
-
-
- def _first_loss(self, score, label):
- loss = 0
- target = score.new_zeros(score.shape[1])
- target[label] = 1.
- ind = 0
- loss += F.binary_cross_entropy_with_logits(
- score[ind], target, reduction='sum')
- return loss, ind
-
-
- def _image_loss(self, score, label):
- assert self.add_image_box
- target = score.new_zeros(score.shape[1])
- target[label] = 1.
- ind = score.shape[0] - 1
- loss = F.binary_cross_entropy_with_logits(
- score[ind], target, reduction='sum')
- return loss, ind
-
-
- def _max_size_loss(self, score, label, p):
- loss = 0
- target = score.new_zeros(score.shape[1])
- target[label] = 1.
- sizes = p.proposal_boxes.area()
- ind = sizes[:-1].argmax().item() if len(sizes) > 1 else 0
- if self.softmax_weak_loss:
- loss += F.cross_entropy(
- score[ind:ind+1],
- score.new_tensor(label, dtype=torch.long).view(1),
- reduction='sum')
- else:
- loss += F.binary_cross_entropy_with_logits(
- score[ind], target, reduction='sum')
- return loss, ind
-
-
-
-def put_label_distribution(storage, hist_name, hist_counts, num_classes):
- """
- """
- ht_min, ht_max = 0, num_classes
- hist_edges = torch.linspace(
- start=ht_min, end=ht_max, steps=num_classes + 1, dtype=torch.float32)
-
- hist_params = dict(
- tag=hist_name,
- min=ht_min,
- max=ht_max,
- num=float(hist_counts.sum()),
- sum=float((hist_counts * torch.arange(len(hist_counts))).sum()),
- sum_squares=float(((hist_counts * torch.arange(len(hist_counts))) ** 2).sum()),
- bucket_limits=hist_edges[1:].tolist(),
- bucket_counts=hist_counts.tolist(),
- global_step=storage._iter,
- )
- storage._histograms.append(hist_params)
\ No newline at end of file
diff --git a/spaces/Datasculptor/StyleGAN-NADA/e4e/models/encoders/__init__.py b/spaces/Datasculptor/StyleGAN-NADA/e4e/models/encoders/__init__.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/Datasculptor/StyleGAN-NADA/styleclip/styleclip_global.py b/spaces/Datasculptor/StyleGAN-NADA/styleclip/styleclip_global.py
deleted file mode 100644
index 96fa7569ebd51a5e6c2deddb57ccceb4f4376904..0000000000000000000000000000000000000000
--- a/spaces/Datasculptor/StyleGAN-NADA/styleclip/styleclip_global.py
+++ /dev/null
@@ -1,181 +0,0 @@
-'''
-Code adapted from Stitch it in Time by Tzaban et al.
-https://github.com/rotemtzaban/STIT
-'''
-
-
-import numpy as np
-import torch
-from tqdm import tqdm
-from pathlib import Path
-import os
-
-import clip
-
-imagenet_templates = [
- 'a bad photo of a {}.',
- 'a photo of many {}.',
- 'a sculpture of a {}.',
- 'a photo of the hard to see {}.',
- 'a low resolution photo of the {}.',
- 'a rendering of a {}.',
- 'graffiti of a {}.',
- 'a bad photo of the {}.',
- 'a cropped photo of the {}.',
- 'a tattoo of a {}.',
- 'the embroidered {}.',
- 'a photo of a hard to see {}.',
- 'a bright photo of a {}.',
- 'a photo of a clean {}.',
- 'a photo of a dirty {}.',
- 'a dark photo of the {}.',
- 'a drawing of a {}.',
- 'a photo of my {}.',
- 'the plastic {}.',
- 'a photo of the cool {}.',
- 'a close-up photo of a {}.',
- 'a black and white photo of the {}.',
- 'a painting of the {}.',
- 'a painting of a {}.',
- 'a pixelated photo of the {}.',
- 'a sculpture of the {}.',
- 'a bright photo of the {}.',
- 'a cropped photo of a {}.',
- 'a plastic {}.',
- 'a photo of the dirty {}.',
- 'a jpeg corrupted photo of a {}.',
- 'a blurry photo of the {}.',
- 'a photo of the {}.',
- 'a good photo of the {}.',
- 'a rendering of the {}.',
- 'a {} in a video game.',
- 'a photo of one {}.',
- 'a doodle of a {}.',
- 'a close-up photo of the {}.',
- 'a photo of a {}.',
- 'the origami {}.',
- 'the {} in a video game.',
- 'a sketch of a {}.',
- 'a doodle of the {}.',
- 'a origami {}.',
- 'a low resolution photo of a {}.',
- 'the toy {}.',
- 'a rendition of the {}.',
- 'a photo of the clean {}.',
- 'a photo of a large {}.',
- 'a rendition of a {}.',
- 'a photo of a nice {}.',
- 'a photo of a weird {}.',
- 'a blurry photo of a {}.',
- 'a cartoon {}.',
- 'art of a {}.',
- 'a sketch of the {}.',
- 'a embroidered {}.',
- 'a pixelated photo of a {}.',
- 'itap of the {}.',
- 'a jpeg corrupted photo of the {}.',
- 'a good photo of a {}.',
- 'a plushie {}.',
- 'a photo of the nice {}.',
- 'a photo of the small {}.',
- 'a photo of the weird {}.',
- 'the cartoon {}.',
- 'art of the {}.',
- 'a drawing of the {}.',
- 'a photo of the large {}.',
- 'a black and white photo of a {}.',
- 'the plushie {}.',
- 'a dark photo of a {}.',
- 'itap of a {}.',
- 'graffiti of the {}.',
- 'a toy {}.',
- 'itap of my {}.',
- 'a photo of a cool {}.',
- 'a photo of a small {}.',
- 'a tattoo of the {}.',
-]
-
-CONV_CODE_INDICES = [(0, 512), (1024, 1536), (1536, 2048), (2560, 3072), (3072, 3584), (4096, 4608), (4608, 5120), (5632, 6144), (6144, 6656), (7168, 7680), (7680, 7936), (8192, 8448), (8448, 8576), (8704, 8832), (8832, 8896), (8960, 9024), (9024, 9056)]
-FFHQ_CODE_INDICES = [(0, 512), (512, 1024), (1024, 1536), (1536, 2048), (2560, 3072), (3072, 3584), (4096, 4608), (4608, 5120), (5632, 6144), (6144, 6656), (7168, 7680), (7680, 7936), (8192, 8448), (8448, 8576), (8704, 8832), (8832, 8896), (8960, 9024), (9024, 9056)] + \
- [(2048, 2560), (3584, 4096), (5120, 5632), (6656, 7168), (7936, 8192), (8576, 8704), (8896, 8960), (9056, 9088)]
-
-def zeroshot_classifier(model, classnames, templates, device):
-
- with torch.no_grad():
- zeroshot_weights = []
- for classname in tqdm(classnames):
- texts = [template.format(classname) for template in templates] # format with class
- texts = clip.tokenize(texts).to(device) # tokenize
- class_embeddings = model.encode_text(texts) # embed with text encoder
- class_embeddings /= class_embeddings.norm(dim=-1, keepdim=True)
- class_embedding = class_embeddings.mean(dim=0)
- class_embedding /= class_embedding.norm()
- zeroshot_weights.append(class_embedding)
- zeroshot_weights = torch.stack(zeroshot_weights, dim=1).to(device)
- return zeroshot_weights
-
-def expand_to_full_dim(partial_tensor):
- full_dim_tensor = torch.zeros(size=(1, 9088))
-
- start_idx = 0
- for conv_start, conv_end in CONV_CODE_INDICES:
- length = conv_end - conv_start
- full_dim_tensor[:, conv_start:conv_end] = partial_tensor[start_idx:start_idx + length]
- start_idx += length
-
- return full_dim_tensor
-
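-# Note (illustrative, not part of the original file): the CONV_CODE_INDICES ranges
-# above sum to 6048 channels, so expand_to_full_dim expects a 6048-dim partial
-# tensor and leaves the remaining slots of the 9088-dim code (the extra ranges
-# listed only in FFHQ_CODE_INDICES, presumably the toRGB layers) at zero.
-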
-def get_direction(neutral_class, target_class, beta, di, clip_model=None):
-
- device = "cuda" if torch.cuda.is_available() else "cpu"
-
- if clip_model is None:
- clip_model, _ = clip.load("ViT-B/32", device=device)
-
- class_names = [neutral_class, target_class]
- class_weights = zeroshot_classifier(clip_model, class_names, imagenet_templates, device)
-
- dt = class_weights[:, 1] - class_weights[:, 0]
- dt = dt / dt.norm()
-
- dt = dt.float()
- di = di.float()
-
- relevance = di @ dt
- mask = relevance.abs() > beta
- direction = relevance * mask
- direction_max = direction.abs().max()
- if direction_max > 0:
- direction = direction / direction_max
- else:
- raise ValueError(f'Beta value {beta} is too high for mapping from {neutral_class} to {target_class},'
- f' try setting it to a lower value')
- return direction
-
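-# Illustrative call, not part of the original file (class names and beta are
-# placeholders; `delta_i` is assumed to be the precomputed channel-relevance
-# matrix that this project passes in as `di`):
-#
-#     direction = get_direction('face', 'smiling face', beta=0.13, di=delta_i)
-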
-def style_tensor_to_style_dict(style_tensor, reference_generator):
-    style_layers = reference_generator.modulation_layers
-
- style_dict = {}
- for layer_idx, layer in enumerate(style_layers):
- style_dict[layer] = style_tensor[:, FFHQ_CODE_INDICES[layer_idx][0]:FFHQ_CODE_INDICES[layer_idx][1]]
-
- return style_dict
-
-def style_dict_to_style_tensor(style_dict, reference_generator):
- style_layers = reference_generator.modulation_layers
-
- style_tensor = torch.zeros(size=(1, 9088))
- for layer in style_dict:
- layer_idx = style_layers.index(layer)
- style_tensor[:, FFHQ_CODE_INDICES[layer_idx][0]:FFHQ_CODE_INDICES[layer_idx][1]] = style_dict[layer]
-
- return style_tensor
-
-def project_code_with_styleclip(source_latent, source_class, target_class, alpha, beta, reference_generator, di, clip_model=None):
- edit_direction = get_direction(source_class, target_class, beta, di, clip_model)
-
- edit_full_dim = expand_to_full_dim(edit_direction)
-
- source_s = style_dict_to_style_tensor(source_latent, reference_generator)
-
- return source_s + alpha * edit_full_dim
\ No newline at end of file
diff --git a/spaces/Detomo/ai-comic-generation/src/components/icons/full-screen.tsx b/spaces/Detomo/ai-comic-generation/src/components/icons/full-screen.tsx
deleted file mode 100644
index 34ec93bbab4b8359868737dbab9c6f7f6d594e03..0000000000000000000000000000000000000000
--- a/spaces/Detomo/ai-comic-generation/src/components/icons/full-screen.tsx
+++ /dev/null
@@ -1,16 +0,0 @@
-export function FullScreenIcon() {
- return (
-
- )
-}
\ No newline at end of file
diff --git a/spaces/Dhrushreddy/profile1/README.md b/spaces/Dhrushreddy/profile1/README.md
deleted file mode 100644
index 4d6a8835a84f11a82edf37df2d653b976224d5a0..0000000000000000000000000000000000000000
--- a/spaces/Dhrushreddy/profile1/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: Profile1
-emoji: 📊
-colorFrom: pink
-colorTo: indigo
-sdk: streamlit
-sdk_version: 1.21.0
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/DiamondYin/Voice-ChatGPT-Streamlit-12/app.py b/spaces/DiamondYin/Voice-ChatGPT-Streamlit-12/app.py
deleted file mode 100644
index c14ec429648632e650cb293f45324b272bd752a7..0000000000000000000000000000000000000000
--- a/spaces/DiamondYin/Voice-ChatGPT-Streamlit-12/app.py
+++ /dev/null
@@ -1,293 +0,0 @@
-import streamlit as st
-import openai
-import os
-import base64
-import glob
-import json
-import mistune
-import pytz
-import math
-import requests
-import time
-
-from datetime import datetime
-from openai import ChatCompletion
-from xml.etree import ElementTree as ET
-from bs4 import BeautifulSoup
-from collections import deque
-from audio_recorder_streamlit import audio_recorder
-
-def generate_filename(prompt, file_type):
- central = pytz.timezone('US/Central')
- safe_date_time = datetime.now(central).strftime("%m%d_%I%M")
- safe_prompt = "".join(x for x in prompt if x.isalnum())[:45]
- return f"{safe_date_time}_{safe_prompt}.{file_type}"
-
-def transcribe_audio(openai_key, file_path, model):
- OPENAI_API_URL = "https://api.openai.com/v1/audio/transcriptions"
- headers = {
- "Authorization": f"Bearer {openai_key}",
- }
- with open(file_path, 'rb') as f:
- data = {'file': f}
- response = requests.post(OPENAI_API_URL, headers=headers, files=data, data={'model': model})
- if response.status_code == 200:
- st.write(response.json())
-
- response2 = chat_with_model(response.json().get('text'), '') # *************************************
- st.write('Responses:')
- #st.write(response)
- st.write(response2)
- return response.json().get('text')
- else:
- st.write(response.json())
- st.error("Error in API call.")
- return None
-
-def save_and_play_audio(audio_recorder):
- audio_bytes = audio_recorder()
- if audio_bytes:
- filename = generate_filename("Recording", "wav")
- with open(filename, 'wb') as f:
- f.write(audio_bytes)
- st.audio(audio_bytes, format="audio/wav")
- return filename
- return None
-
-def create_file(filename, prompt, response):
- if filename.endswith(".txt"):
- with open(filename, 'w') as file:
- file.write(f"{prompt}\n{response}")
- elif filename.endswith(".htm"):
- with open(filename, 'w') as file:
- file.write(f"{prompt} {response}")
- elif filename.endswith(".md"):
- with open(filename, 'w') as file:
- file.write(f"{prompt}\n\n{response}")
-
-def truncate_document(document, length):
- return document[:length]
-def divide_document(document, max_length):
- return [document[i:i+max_length] for i in range(0, len(document), max_length)]
-
-def get_table_download_link(file_path):
- with open(file_path, 'r') as file:
- data = file.read()
- b64 = base64.b64encode(data.encode()).decode()
- file_name = os.path.basename(file_path)
- ext = os.path.splitext(file_name)[1] # get the file extension
- if ext == '.txt':
- mime_type = 'text/plain'
- elif ext == '.py':
- mime_type = 'text/plain'
- elif ext == '.xlsx':
- mime_type = 'text/plain'
- elif ext == '.csv':
- mime_type = 'text/plain'
- elif ext == '.htm':
- mime_type = 'text/html'
- elif ext == '.md':
- mime_type = 'text/markdown'
- else:
- mime_type = 'application/octet-stream' # general binary data type
-    # assumed anchor format: a base64 data-URI download link, rendered via unsafe_allow_html
-    href = f'<a href="data:{mime_type};base64,{b64}" download="{file_name}">{file_name}</a>'
- return href
-
-def CompressXML(xml_text):
- root = ET.fromstring(xml_text)
-    # ElementTree elements have no .parent attribute, so prune matching children via their parent
-    for parent in root.iter():
-        for elem in list(parent):
-            if isinstance(elem.tag, str) and 'Comment' in elem.tag:
-                parent.remove(elem)
- return ET.tostring(root, encoding='unicode', method="xml")
-
-def read_file_content(file,max_length):
- if file.type == "application/json":
- content = json.load(file)
- return str(content)
- elif file.type == "text/html" or file.type == "text/htm":
- content = BeautifulSoup(file, "html.parser")
- return content.text
- elif file.type == "application/xml" or file.type == "text/xml":
- tree = ET.parse(file)
- root = tree.getroot()
- xml = CompressXML(ET.tostring(root, encoding='unicode'))
- return xml
- elif file.type == "text/markdown" or file.type == "text/md":
- md = mistune.create_markdown()
- content = md(file.read().decode())
- return content
- elif file.type == "text/plain":
- return file.getvalue().decode()
- else:
- return ""
-
-def chat_with_model(prompt, document_section, model_choice='gpt-3.5-turbo'):
- model = model_choice
- conversation = [{'role': 'system', 'content': 'You are a helpful assistant.'}]
- conversation.append({'role': 'user', 'content': prompt})
- if len(document_section)>0:
- conversation.append({'role': 'assistant', 'content': document_section})
-
- # iterate through the stream of events
- start_time = time.time()
-
-
- report = []
- res_box = st.empty()
-
- collected_chunks = []
- collected_messages = []
-
- for chunk in openai.ChatCompletion.create(
- model='gpt-3.5-turbo',
- messages=conversation,
- temperature=0.5,
- stream=True
- ):
-
- collected_chunks.append(chunk) # save the event response
- chunk_message = chunk['choices'][0]['delta'] # extract the message
- collected_messages.append(chunk_message) # save the message
-
- content=chunk["choices"][0].get("delta",{}).get("content")
-
- try:
- report.append(content)
- if len(content) > 0:
- result = "".join(report).strip()
- #result = result.replace("\n", "")
- res_box.markdown(f'*{result}*')
- except:
- st.write('.')
-
- full_reply_content = ''.join([m.get('content', '') for m in collected_messages])
- #st.write(f"Full conversation received: {full_reply_content}")
- st.write("Elapsed time:")
- st.write(time.time() - start_time)
- return full_reply_content
-
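-# Minimal sketch of the streaming pattern used above, not part of the original
-# file (assumes the pre-1.0 openai SDK imported at the top of this script):
-#
-#     parts = []
-#     for chunk in openai.ChatCompletion.create(model=model_choice,
-#                                               messages=conversation, stream=True):
-#         delta = chunk['choices'][0]['delta'].get('content', '')
-#         if delta:
-#             parts.append(delta)
-#     full_reply = ''.join(parts)
-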
-def chat_with_file_contents(prompt, file_content, model_choice='gpt-3.5-turbo'):
- conversation = [{'role': 'system', 'content': 'You are a helpful assistant.'}]
- conversation.append({'role': 'user', 'content': prompt})
- if len(file_content)>0:
- conversation.append({'role': 'assistant', 'content': file_content})
- response = openai.ChatCompletion.create(model=model_choice, messages=conversation)
- return response['choices'][0]['message']['content']
-
-
-def main():
- # Sidebar and global
- openai.api_key = os.getenv('OPENAI_KEY')
- st.set_page_config(page_title="GPT Streamlit Document Reasoner",layout="wide")
- menu = ["htm", "txt", "xlsx", "csv", "md", "py"] #619
- choice = st.sidebar.selectbox("Output File Type:", menu)
- model_choice = st.sidebar.radio("Select Model:", ('gpt-3.5-turbo', 'gpt-3.5-turbo-0301'))
-
- # Audio, transcribe, GPT:
- filename = save_and_play_audio(audio_recorder)
- if filename is not None:
- transcription = transcribe_audio(openai.api_key, filename, "whisper-1")
- st.write(transcription)
- gptOutput = chat_with_model(transcription, '', model_choice) # *************************************
- filename = generate_filename(transcription, choice)
- create_file(filename, transcription, gptOutput)
- st.sidebar.markdown(get_table_download_link(filename), unsafe_allow_html=True)
-
-
- user_prompt = st.text_area("Enter prompts, instructions & questions:", '', height=100)
-
- collength, colupload = st.columns([2,3]) # adjust the ratio as needed
- with collength:
- #max_length = 12000 - optimal for gpt35 turbo. 2x=24000 for gpt4. 8x=96000 for gpt4-32k.
- max_length = st.slider("File section length for large files", min_value=1000, max_value=128000, value=12000, step=1000)
- with colupload:
- uploaded_file = st.file_uploader("Add a file for context:", type=["xml", "json", "xlsx","csv","html", "htm", "md", "txt"])
-
- document_sections = deque()
- document_responses = {}
-
- if uploaded_file is not None:
- file_content = read_file_content(uploaded_file, max_length)
- document_sections.extend(divide_document(file_content, max_length))
-
- if len(document_sections) > 0:
-
- if st.button("👁️ View Upload"):
- st.markdown("**Sections of the uploaded file:**")
- for i, section in enumerate(list(document_sections)):
- st.markdown(f"**Section {i+1}**\n{section}")
-
- st.markdown("**Chat with the model:**")
- for i, section in enumerate(list(document_sections)):
- if i in document_responses:
- st.markdown(f"**Section {i+1}**\n{document_responses[i]}")
- else:
- if st.button(f"Chat about Section {i+1}"):
- st.write('Reasoning with your inputs...')
- response = chat_with_model(user_prompt, section, model_choice) # *************************************
- st.write('Response:')
- st.write(response)
- document_responses[i] = response
- filename = generate_filename(f"{user_prompt}_section_{i+1}", choice)
- create_file(filename, user_prompt, response)
- st.sidebar.markdown(get_table_download_link(filename), unsafe_allow_html=True)
-
- if st.button('💬 Chat'):
- st.write('Reasoning with your inputs...')
- response = chat_with_model(user_prompt, ''.join(list(document_sections,)), model_choice) # *************************************
- st.write('Response:')
- st.write(response)
-
- filename = generate_filename(user_prompt, choice)
- create_file(filename, user_prompt, response)
- st.sidebar.markdown(get_table_download_link(filename), unsafe_allow_html=True)
-
- all_files = glob.glob("*.*")
- all_files = [file for file in all_files if len(os.path.splitext(file)[0]) >= 20] # exclude files with short names
- all_files.sort(key=lambda x: (os.path.splitext(x)[1], x), reverse=True) # sort by file type and file name in descending order
-
- # sidebar of files
- file_contents=''
- next_action=''
- for file in all_files:
- col1, col2, col3, col4, col5 = st.sidebar.columns([1,6,1,1,1]) # adjust the ratio as needed
- with col1:
- if st.button("🌐", key="md_"+file): # md emoji button
- with open(file, 'r') as f:
- file_contents = f.read()
- next_action='md'
- with col2:
- st.markdown(get_table_download_link(file), unsafe_allow_html=True)
- with col3:
- if st.button("📂", key="open_"+file): # open emoji button
- with open(file, 'r') as f:
- file_contents = f.read()
- next_action='open'
- with col4:
- if st.button("🔍", key="read_"+file): # search emoji button
- with open(file, 'r') as f:
- file_contents = f.read()
- next_action='search'
- with col5:
- if st.button("🗑", key="delete_"+file):
- os.remove(file)
- st.experimental_rerun()
-
- if len(file_contents) > 0:
- if next_action=='open':
- file_content_area = st.text_area("File Contents:", file_contents, height=500)
- if next_action=='md':
- st.markdown(file_contents)
- if next_action=='search':
- file_content_area = st.text_area("File Contents:", file_contents, height=500)
- st.write('Reasoning with your inputs...')
- #response = chat_with_file_contents(user_prompt, file_contents)
- response = chat_with_model(user_prompt, file_contents, model_choice)
- st.write('Response:')
- st.write(response)
- filename = generate_filename(file_content_area, choice)
- create_file(filename, file_content_area, response)
- st.sidebar.markdown(get_table_download_link(filename), unsafe_allow_html=True)
-
-if __name__ == "__main__":
- main()
\ No newline at end of file
diff --git a/spaces/EPFL-VILAB/MultiMAE/mask2former/data/datasets/register_ade20k_instance.py b/spaces/EPFL-VILAB/MultiMAE/mask2former/data/datasets/register_ade20k_instance.py
deleted file mode 100644
index 1ded7095cde756dfa1d94c25b2f7d1d2e5da6313..0000000000000000000000000000000000000000
--- a/spaces/EPFL-VILAB/MultiMAE/mask2former/data/datasets/register_ade20k_instance.py
+++ /dev/null
@@ -1,53 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-import json
-import logging
-import numpy as np
-import os
-from PIL import Image
-
-from detectron2.data import DatasetCatalog, MetadataCatalog
-from detectron2.data.datasets.coco import load_coco_json, register_coco_instances
-from detectron2.utils.file_io import PathManager
-
-ADE_CATEGORIES = [{'id': 7, 'name': 'bed'}, {'id': 8, 'name': 'windowpane'}, {'id': 10, 'name': 'cabinet'}, {'id': 12, 'name': 'person'}, {'id': 14, 'name': 'door'}, {'id': 15, 'name': 'table'}, {'id': 18, 'name': 'curtain'}, {'id': 19, 'name': 'chair'}, {'id': 20, 'name': 'car'}, {'id': 22, 'name': 'painting'}, {'id': 23, 'name': 'sofa'}, {'id': 24, 'name': 'shelf'}, {'id': 27, 'name': 'mirror'}, {'id': 30, 'name': 'armchair'}, {'id': 31, 'name': 'seat'}, {'id': 32, 'name': 'fence'}, {'id': 33, 'name': 'desk'}, {'id': 35, 'name': 'wardrobe'}, {'id': 36, 'name': 'lamp'}, {'id': 37, 'name': 'bathtub'}, {'id': 38, 'name': 'railing'}, {'id': 39, 'name': 'cushion'}, {'id': 41, 'name': 'box'}, {'id': 42, 'name': 'column'}, {'id': 43, 'name': 'signboard'}, {'id': 44, 'name': 'chest of drawers'}, {'id': 45, 'name': 'counter'}, {'id': 47, 'name': 'sink'}, {'id': 49, 'name': 'fireplace'}, {'id': 50, 'name': 'refrigerator'}, {'id': 53, 'name': 'stairs'}, {'id': 55, 'name': 'case'}, {'id': 56, 'name': 'pool table'}, {'id': 57, 'name': 'pillow'}, {'id': 58, 'name': 'screen door'}, {'id': 62, 'name': 'bookcase'}, {'id': 64, 'name': 'coffee table'}, {'id': 65, 'name': 'toilet'}, {'id': 66, 'name': 'flower'}, {'id': 67, 'name': 'book'}, {'id': 69, 'name': 'bench'}, {'id': 70, 'name': 'countertop'}, {'id': 71, 'name': 'stove'}, {'id': 72, 'name': 'palm'}, {'id': 73, 'name': 'kitchen island'}, {'id': 74, 'name': 'computer'}, {'id': 75, 'name': 'swivel chair'}, {'id': 76, 'name': 'boat'}, {'id': 78, 'name': 'arcade machine'}, {'id': 80, 'name': 'bus'}, {'id': 81, 'name': 'towel'}, {'id': 82, 'name': 'light'}, {'id': 83, 'name': 'truck'}, {'id': 85, 'name': 'chandelier'}, {'id': 86, 'name': 'awning'}, {'id': 87, 'name': 'streetlight'}, {'id': 88, 'name': 'booth'}, {'id': 89, 'name': 'television receiver'}, {'id': 90, 'name': 'airplane'}, {'id': 92, 'name': 'apparel'}, {'id': 93, 'name': 'pole'}, {'id': 95, 'name': 'bannister'}, {'id': 97, 'name': 'ottoman'}, {'id': 98, 'name': 'bottle'}, {'id': 102, 'name': 'van'}, {'id': 103, 'name': 'ship'}, {'id': 104, 'name': 'fountain'}, {'id': 107, 'name': 'washer'}, {'id': 108, 'name': 'plaything'}, {'id': 110, 'name': 'stool'}, {'id': 111, 'name': 'barrel'}, {'id': 112, 'name': 'basket'}, {'id': 115, 'name': 'bag'}, {'id': 116, 'name': 'minibike'}, {'id': 118, 'name': 'oven'}, {'id': 119, 'name': 'ball'}, {'id': 120, 'name': 'food'}, {'id': 121, 'name': 'step'}, {'id': 123, 'name': 'trade name'}, {'id': 124, 'name': 'microwave'}, {'id': 125, 'name': 'pot'}, {'id': 126, 'name': 'animal'}, {'id': 127, 'name': 'bicycle'}, {'id': 129, 'name': 'dishwasher'}, {'id': 130, 'name': 'screen'}, {'id': 132, 'name': 'sculpture'}, {'id': 133, 'name': 'hood'}, {'id': 134, 'name': 'sconce'}, {'id': 135, 'name': 'vase'}, {'id': 136, 'name': 'traffic light'}, {'id': 137, 'name': 'tray'}, {'id': 138, 'name': 'ashcan'}, {'id': 139, 'name': 'fan'}, {'id': 142, 'name': 'plate'}, {'id': 143, 'name': 'monitor'}, {'id': 144, 'name': 'bulletin board'}, {'id': 146, 'name': 'radiator'}, {'id': 147, 'name': 'glass'}, {'id': 148, 'name': 'clock'}, {'id': 149, 'name': 'flag'}]
-
-
-_PREDEFINED_SPLITS = {
- # point annotations without masks
- "ade20k_instance_train": (
- "ADEChallengeData2016/images/training",
- "ADEChallengeData2016/ade20k_instance_train.json",
- ),
- "ade20k_instance_val": (
- "ADEChallengeData2016/images/validation",
- "ADEChallengeData2016/ade20k_instance_val.json",
- ),
-}
-
-
-def _get_ade_instances_meta():
- thing_ids = [k["id"] for k in ADE_CATEGORIES]
- assert len(thing_ids) == 100, len(thing_ids)
- # Mapping from the incontiguous ADE category id to an id in [0, 99]
- thing_dataset_id_to_contiguous_id = {k: i for i, k in enumerate(thing_ids)}
- thing_classes = [k["name"] for k in ADE_CATEGORIES]
- ret = {
- "thing_dataset_id_to_contiguous_id": thing_dataset_id_to_contiguous_id,
- "thing_classes": thing_classes,
- }
- return ret
-
-
-def register_all_ade20k_instance(root):
- for key, (image_root, json_file) in _PREDEFINED_SPLITS.items():
- # Assume pre-defined datasets live in `./datasets`.
- register_coco_instances(
- key,
- _get_ade_instances_meta(),
- os.path.join(root, json_file) if "://" not in json_file else json_file,
- os.path.join(root, image_root),
- )
-
-
-_root = os.getenv("DETECTRON2_DATASETS", "datasets")
-register_all_ade20k_instance(_root)
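-
-# Expected on-disk layout (illustrative), relative to $DETECTRON2_DATASETS
-# (default "datasets"), following the split definitions above:
-#   ADEChallengeData2016/images/training/
-#   ADEChallengeData2016/images/validation/
-#   ADEChallengeData2016/ade20k_instance_train.json
-#   ADEChallengeData2016/ade20k_instance_val.json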
diff --git a/spaces/Egrt/GCycleGAN/utils/callbacks.py b/spaces/Egrt/GCycleGAN/utils/callbacks.py
deleted file mode 100644
index d70115ead91c64f2f7aaa3b559cb0351642ee65d..0000000000000000000000000000000000000000
--- a/spaces/Egrt/GCycleGAN/utils/callbacks.py
+++ /dev/null
@@ -1,65 +0,0 @@
-import os
-
-import torch
-import matplotlib
-matplotlib.use('Agg')
-import scipy.signal
-from matplotlib import pyplot as plt
-from torch.utils.tensorboard import SummaryWriter
-
-
-class LossHistory():
- def __init__(self, log_dir, model, input_shape):
- self.log_dir = log_dir
-
- os.makedirs(self.log_dir)
- self.writer = SummaryWriter(self.log_dir)
- try:
- for m in model:
- dummy_input = torch.randn(2, 3, input_shape[0], input_shape[1])
- self.writer.add_graph(m, dummy_input)
- except:
- pass
-
- def append_loss(self, epoch, **kwargs):
- if not os.path.exists(self.log_dir):
- os.makedirs(self.log_dir)
-
- for key, value in kwargs.items():
- if not hasattr(self, key):
- setattr(self, key, [])
- #---------------------------------#
- # 为列表添加数值
- #---------------------------------#
- getattr(self, key).append(value)
-
- #---------------------------------#
- # 写入txt
- #---------------------------------#
- with open(os.path.join(self.log_dir, key + ".txt"), 'a') as f:
- f.write(str(value))
- f.write("\n")
-
- #---------------------------------#
- # 写入tensorboard
- #---------------------------------#
- self.writer.add_scalar(key, value, epoch)
-
- self.loss_plot(**kwargs)
-
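-    # Illustrative usage, not part of the original file (model list, loss names
-    # and paths are placeholders):
-    #     history = LossHistory("logs/run1", [G_model, D_model], input_shape=(256, 256))
-    #     history.append_loss(epoch, G_loss=g_loss.item(), D_loss=d_loss.item())
-    # Each keyword passed to append_loss becomes its own attribute list, .txt log,
-    # TensorBoard scalar, and curve in epoch_loss.png.
-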
- def loss_plot(self, **kwargs):
- plt.figure()
-
- for key, value in kwargs.items():
- losses = getattr(self, key)
- plt.plot(range(len(losses)), losses, linewidth = 2, label = key)
-
- plt.grid(True)
- plt.xlabel('Epoch')
- plt.ylabel('Loss')
- plt.legend(loc="upper right")
-
- plt.savefig(os.path.join(self.log_dir, "epoch_loss.png"))
-
- plt.cla()
- plt.close("all")
\ No newline at end of file
diff --git a/spaces/ElainaFanBoy/IRONY-Real-ESRGAN/realesrgan/data/realesrgan_dataset.py b/spaces/ElainaFanBoy/IRONY-Real-ESRGAN/realesrgan/data/realesrgan_dataset.py
deleted file mode 100644
index 4cf2d9e6583a6789b771679734ce55bb8a22e628..0000000000000000000000000000000000000000
--- a/spaces/ElainaFanBoy/IRONY-Real-ESRGAN/realesrgan/data/realesrgan_dataset.py
+++ /dev/null
@@ -1,192 +0,0 @@
-import cv2
-import math
-import numpy as np
-import os
-import os.path as osp
-import random
-import time
-import torch
-from basicsr.data.degradations import circular_lowpass_kernel, random_mixed_kernels
-from basicsr.data.transforms import augment
-from basicsr.utils import FileClient, get_root_logger, imfrombytes, img2tensor
-from basicsr.utils.registry import DATASET_REGISTRY
-from torch.utils import data as data
-
-
-@DATASET_REGISTRY.register()
-class RealESRGANDataset(data.Dataset):
- """Dataset used for Real-ESRGAN model:
- Real-ESRGAN: Training Real-World Blind Super-Resolution with Pure Synthetic Data.
-
- It loads gt (Ground-Truth) images, and augments them.
- It also generates blur kernels and sinc kernels for generating low-quality images.
- Note that the low-quality images are processed in tensors on GPUS for faster processing.
-
- Args:
- opt (dict): Config for train datasets. It contains the following keys:
- dataroot_gt (str): Data root path for gt.
- meta_info (str): Path for meta information file.
- io_backend (dict): IO backend type and other kwarg.
- use_hflip (bool): Use horizontal flips.
- use_rot (bool): Use rotation (use vertical flip and transposing h and w for implementation).
- Please see more options in the codes.
- """
-
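-    # Illustrative `opt` sketch, not part of the original file; keys mirror those
-    # read in __init__/__getitem__ below and all values are placeholders:
-    #     opt = dict(
-    #         dataroot_gt='datasets/DF2K', meta_info='datasets/DF2K/meta_info.txt',
-    #         io_backend=dict(type='disk'), use_hflip=True, use_rot=False,
-    #         blur_kernel_size=21, kernel_list=['iso', 'aniso'], kernel_prob=[0.5, 0.5],
-    #         blur_sigma=[0.2, 3.0], betag_range=[0.5, 4.0], betap_range=[1, 2],
-    #         sinc_prob=0.1, blur_kernel_size2=21, kernel_list2=['iso', 'aniso'],
-    #         kernel_prob2=[0.5, 0.5], blur_sigma2=[0.2, 1.5], betag_range2=[0.5, 4.0],
-    #         betap_range2=[1, 2], sinc_prob2=0.1, final_sinc_prob=0.8)
-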
- def __init__(self, opt):
- super(RealESRGANDataset, self).__init__()
- self.opt = opt
- self.file_client = None
- self.io_backend_opt = opt['io_backend']
- self.gt_folder = opt['dataroot_gt']
-
- # file client (lmdb io backend)
- if self.io_backend_opt['type'] == 'lmdb':
- self.io_backend_opt['db_paths'] = [self.gt_folder]
- self.io_backend_opt['client_keys'] = ['gt']
- if not self.gt_folder.endswith('.lmdb'):
- raise ValueError(f"'dataroot_gt' should end with '.lmdb', but received {self.gt_folder}")
- with open(osp.join(self.gt_folder, 'meta_info.txt')) as fin:
- self.paths = [line.split('.')[0] for line in fin]
- else:
- # disk backend with meta_info
- # Each line in the meta_info describes the relative path to an image
- with open(self.opt['meta_info']) as fin:
- paths = [line.strip().split(' ')[0] for line in fin]
- self.paths = [os.path.join(self.gt_folder, v) for v in paths]
-
- # blur settings for the first degradation
- self.blur_kernel_size = opt['blur_kernel_size']
- self.kernel_list = opt['kernel_list']
- self.kernel_prob = opt['kernel_prob'] # a list for each kernel probability
- self.blur_sigma = opt['blur_sigma']
- self.betag_range = opt['betag_range'] # betag used in generalized Gaussian blur kernels
- self.betap_range = opt['betap_range'] # betap used in plateau blur kernels
- self.sinc_prob = opt['sinc_prob'] # the probability for sinc filters
-
- # blur settings for the second degradation
- self.blur_kernel_size2 = opt['blur_kernel_size2']
- self.kernel_list2 = opt['kernel_list2']
- self.kernel_prob2 = opt['kernel_prob2']
- self.blur_sigma2 = opt['blur_sigma2']
- self.betag_range2 = opt['betag_range2']
- self.betap_range2 = opt['betap_range2']
- self.sinc_prob2 = opt['sinc_prob2']
-
- # a final sinc filter
- self.final_sinc_prob = opt['final_sinc_prob']
-
- self.kernel_range = [2 * v + 1 for v in range(3, 11)] # kernel size ranges from 7 to 21
- # TODO: kernel range is now hard-coded, should be in the config file
- self.pulse_tensor = torch.zeros(21, 21).float() # convolving with the pulse tensor introduces no blur
- self.pulse_tensor[10, 10] = 1
-
- def __getitem__(self, index):
- if self.file_client is None:
- self.file_client = FileClient(self.io_backend_opt.pop('type'), **self.io_backend_opt)
-
- # -------------------------------- Load gt images -------------------------------- #
- # Shape: (h, w, c); channel order: BGR; image range: [0, 1], float32.
- gt_path = self.paths[index]
- # avoid errors caused by high latency in reading files
- retry = 3
- while retry > 0:
- try:
- img_bytes = self.file_client.get(gt_path, 'gt')
- except (IOError, OSError) as e:
- logger = get_root_logger()
- logger.warning(f'File client error: {e}, remaining retry times: {retry - 1}')
- # switch to another file to read (randint is inclusive on both ends)
- index = random.randint(0, self.__len__() - 1)
- gt_path = self.paths[index]
- time.sleep(1) # sleep 1s for occasional server congestion
- else:
- break
- finally:
- retry -= 1
- img_gt = imfrombytes(img_bytes, float32=True)
-
- # -------------------- Do augmentation for training: flip, rotation -------------------- #
- img_gt = augment(img_gt, self.opt['use_hflip'], self.opt['use_rot'])
-
- # crop or pad to 400
- # TODO: 400 is hard-coded. You may change it accordingly
- h, w = img_gt.shape[0:2]
- crop_pad_size = 400
- # pad
- if h < crop_pad_size or w < crop_pad_size:
- pad_h = max(0, crop_pad_size - h)
- pad_w = max(0, crop_pad_size - w)
- img_gt = cv2.copyMakeBorder(img_gt, 0, pad_h, 0, pad_w, cv2.BORDER_REFLECT_101)
- # crop
- if img_gt.shape[0] > crop_pad_size or img_gt.shape[1] > crop_pad_size:
- h, w = img_gt.shape[0:2]
- # randomly choose top and left coordinates
- top = random.randint(0, h - crop_pad_size)
- left = random.randint(0, w - crop_pad_size)
- img_gt = img_gt[top:top + crop_pad_size, left:left + crop_pad_size, ...]
-
- # ------------------------ Generate kernels (used in the first degradation) ------------------------ #
- kernel_size = random.choice(self.kernel_range)
- if np.random.uniform() < self.opt['sinc_prob']:
- # this sinc filter setting is for kernels ranging from [7, 21]
- if kernel_size < 13:
- omega_c = np.random.uniform(np.pi / 3, np.pi)
- else:
- omega_c = np.random.uniform(np.pi / 5, np.pi)
- kernel = circular_lowpass_kernel(omega_c, kernel_size, pad_to=False)
- else:
- kernel = random_mixed_kernels(
- self.kernel_list,
- self.kernel_prob,
- kernel_size,
- self.blur_sigma,
- self.blur_sigma, [-math.pi, math.pi],
- self.betag_range,
- self.betap_range,
- noise_range=None)
- # pad kernel
- pad_size = (21 - kernel_size) // 2
- kernel = np.pad(kernel, ((pad_size, pad_size), (pad_size, pad_size)))
-
- # ------------------------ Generate kernels (used in the second degradation) ------------------------ #
- kernel_size = random.choice(self.kernel_range)
- if np.random.uniform() < self.opt['sinc_prob2']:
- if kernel_size < 13:
- omega_c = np.random.uniform(np.pi / 3, np.pi)
- else:
- omega_c = np.random.uniform(np.pi / 5, np.pi)
- kernel2 = circular_lowpass_kernel(omega_c, kernel_size, pad_to=False)
- else:
- kernel2 = random_mixed_kernels(
- self.kernel_list2,
- self.kernel_prob2,
- kernel_size,
- self.blur_sigma2,
- self.blur_sigma2, [-math.pi, math.pi],
- self.betag_range2,
- self.betap_range2,
- noise_range=None)
-
- # pad kernel
- pad_size = (21 - kernel_size) // 2
- kernel2 = np.pad(kernel2, ((pad_size, pad_size), (pad_size, pad_size)))
-
- # ------------------------------------- the final sinc kernel ------------------------------------- #
- if np.random.uniform() < self.opt['final_sinc_prob']:
- kernel_size = random.choice(self.kernel_range)
- omega_c = np.random.uniform(np.pi / 3, np.pi)
- sinc_kernel = circular_lowpass_kernel(omega_c, kernel_size, pad_to=21)
- sinc_kernel = torch.FloatTensor(sinc_kernel)
- else:
- sinc_kernel = self.pulse_tensor
-
- # BGR to RGB, HWC to CHW, numpy to tensor
- img_gt = img2tensor([img_gt], bgr2rgb=True, float32=True)[0]
- kernel = torch.FloatTensor(kernel)
- kernel2 = torch.FloatTensor(kernel2)
-
- return_d = {'gt': img_gt, 'kernel1': kernel, 'kernel2': kernel2, 'sinc_kernel': sinc_kernel, 'gt_path': gt_path}
- return return_d
-
- def __len__(self):
- return len(self.paths)
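-
-
-if __name__ == '__main__':
-    # A minimal, hypothetical usage sketch of the dataset above. The paths and
-    # degradation settings are illustrative placeholders (real runs read these
-    # keys from a training YAML); they are chosen only to cover the options
-    # accessed in __init__ and __getitem__.
-    from torch.utils.data import DataLoader
-
-    opt = {
-        'dataroot_gt': 'datasets/DF2K/HR',  # placeholder path
-        'meta_info': 'datasets/DF2K/meta_info.txt',  # placeholder path
-        'io_backend': {'type': 'disk'},
-        'use_hflip': True,
-        'use_rot': False,
-        # first degradation
-        'blur_kernel_size': 21, 'kernel_list': ['iso', 'aniso'], 'kernel_prob': [0.5, 0.5],
-        'blur_sigma': [0.2, 3.0], 'betag_range': [0.5, 4.0], 'betap_range': [1.0, 2.0],
-        'sinc_prob': 0.1,
-        # second degradation
-        'blur_kernel_size2': 21, 'kernel_list2': ['iso', 'aniso'], 'kernel_prob2': [0.5, 0.5],
-        'blur_sigma2': [0.2, 1.5], 'betag_range2': [0.5, 4.0], 'betap_range2': [1.0, 2.0],
-        'sinc_prob2': 0.1,
-        # final sinc filter
-        'final_sinc_prob': 0.8,
-    }
-    dataset = RealESRGANDataset(opt)
-    loader = DataLoader(dataset, batch_size=4, shuffle=True)
-    gt_batch = next(iter(loader))
-    print(gt_batch['gt'].shape, gt_batch['kernel1'].shape, gt_batch['sinc_kernel'].shape)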
diff --git a/spaces/ElainaFanBoy/MusicGen/CHANGELOG.md b/spaces/ElainaFanBoy/MusicGen/CHANGELOG.md
deleted file mode 100644
index 6aaad6b5ee31e4685ead54c1a46d7f57b225912d..0000000000000000000000000000000000000000
--- a/spaces/ElainaFanBoy/MusicGen/CHANGELOG.md
+++ /dev/null
@@ -1,20 +0,0 @@
-# Changelog
-
-All notable changes to this project will be documented in this file.
-
-The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/).
-
-## [0.0.2a] - TBD
-
-Improved demo, fixed top p (thanks @jnordberg).
-
-Compressor tanh on output to avoid clipping with some styles (especially piano).
-Now repeating the conditioning periodically if it is too short.
-
-More options when launching Gradio app locally (thanks @ashleykleynhans).
-
-Testing out PyTorch 2.0 memory efficient attention.
-
-## [0.0.1] - 2023-06-09
-
-Initial release, with model evaluation only.
diff --git a/spaces/EronSamez/RVC_HFmeu/infer/modules/train/extract_feature_print.py b/spaces/EronSamez/RVC_HFmeu/infer/modules/train/extract_feature_print.py
deleted file mode 100644
index f771dd9b8ba92262e6844e7b5781de43c342833a..0000000000000000000000000000000000000000
--- a/spaces/EronSamez/RVC_HFmeu/infer/modules/train/extract_feature_print.py
+++ /dev/null
@@ -1,137 +0,0 @@
-import os
-import sys
-import traceback
-
-os.environ["PYTORCH_ENABLE_MPS_FALLBACK"] = "1"
-os.environ["PYTORCH_MPS_HIGH_WATERMARK_RATIO"] = "0.0"
-
-device = sys.argv[1]
-n_part = int(sys.argv[2])
-i_part = int(sys.argv[3])
-if len(sys.argv) == 6:
- exp_dir = sys.argv[4]
- version = sys.argv[5]
-else:
- i_gpu = sys.argv[4]
- exp_dir = sys.argv[5]
- os.environ["CUDA_VISIBLE_DEVICES"] = str(i_gpu)
- version = sys.argv[6]
-import fairseq
-import numpy as np
-import soundfile as sf
-import torch
-import torch.nn.functional as F
-
-if "privateuseone" not in device:
- device = "cpu"
- if torch.cuda.is_available():
- device = "cuda"
- elif torch.backends.mps.is_available():
- device = "mps"
-else:
- import torch_directml
-
- device = torch_directml.device(torch_directml.default_device())
-
- def forward_dml(ctx, x, scale):
- ctx.scale = scale
- res = x.clone().detach()
- return res
-
- fairseq.modules.grad_multiply.GradMultiply.forward = forward_dml
-
-f = open("%s/extract_f0_feature.log" % exp_dir, "a+")
-
-
-def printt(strr):
- print(strr)
- f.write("%s\n" % strr)
- f.flush()
-
-
-printt(sys.argv)
-model_path = "assets/hubert/hubert_base.pt"
-
-printt(exp_dir)
-wavPath = "%s/1_16k_wavs" % exp_dir
-outPath = (
- "%s/3_feature256" % exp_dir if version == "v1" else "%s/3_feature768" % exp_dir
-)
-os.makedirs(outPath, exist_ok=True)
-
-
-# wave must be 16k, hop_size=320
-def readwave(wav_path, normalize=False):
- wav, sr = sf.read(wav_path)
- assert sr == 16000
- feats = torch.from_numpy(wav).float()
- if feats.dim() == 2: # double channels
- feats = feats.mean(-1)
- assert feats.dim() == 1, feats.dim()
- if normalize:
- with torch.no_grad():
- feats = F.layer_norm(feats, feats.shape)
- feats = feats.view(1, -1)
- return feats
-
-
-# HuBERT model
-printt("load model(s) from {}".format(model_path))
-# check that the hubert model exists
-if not os.access(model_path, os.F_OK):
- printt(
- "Error: Extracting is shut down because %s does not exist, you may download it from https://huggingface.co/lj1995/VoiceConversionWebUI/tree/main"
- % model_path
- )
- exit(0)
-models, saved_cfg, task = fairseq.checkpoint_utils.load_model_ensemble_and_task(
- [model_path],
- suffix="",
-)
-model = models[0]
-model = model.to(device)
-printt("move model to %s" % device)
-if device not in ["mps", "cpu"]:
- model = model.half()
-model.eval()
-
-todo = sorted(list(os.listdir(wavPath)))[i_part::n_part]
-n = max(1, len(todo) // 10)  # print at most ten progress lines
-if len(todo) == 0:
- printt("no-feature-todo")
-else:
- printt("all-feature-%s" % len(todo))
- for idx, file in enumerate(todo):
- try:
- if file.endswith(".wav"):
- wav_path = "%s/%s" % (wavPath, file)
- out_path = "%s/%s" % (outPath, file.replace("wav", "npy"))
-
- if os.path.exists(out_path):
- continue
-
- feats = readwave(wav_path, normalize=saved_cfg.task.normalize)
- padding_mask = torch.BoolTensor(feats.shape).fill_(False)
- inputs = {
- "source": feats.half().to(device)
- if device not in ["mps", "cpu"]
- else feats.to(device),
- "padding_mask": padding_mask.to(device),
- "output_layer": 9 if version == "v1" else 12, # layer 9
- }
- with torch.no_grad():
- logits = model.extract_features(**inputs)
- feats = (
- model.final_proj(logits[0]) if version == "v1" else logits[0]
- )
-
- feats = feats.squeeze(0).float().cpu().numpy()
- if np.isnan(feats).sum() == 0:
- np.save(out_path, feats, allow_pickle=False)
- else:
- printt("%s-contains nan" % file)
- if idx % n == 0:
- printt("now-%s,all-%s,%s,%s" % (len(todo), idx, file, feats.shape))
- except:
- printt(traceback.format_exc())
- printt("all-feature-done")
diff --git a/spaces/EronSamez/RVC_HFmeu/lib/uvr5_pack/lib_v5/model_param_init.py b/spaces/EronSamez/RVC_HFmeu/lib/uvr5_pack/lib_v5/model_param_init.py
deleted file mode 100644
index b995c0bfb1194746187692e2ab1c2a6dbaaaec6c..0000000000000000000000000000000000000000
--- a/spaces/EronSamez/RVC_HFmeu/lib/uvr5_pack/lib_v5/model_param_init.py
+++ /dev/null
@@ -1,69 +0,0 @@
-import json
-import os
-import pathlib
-
-default_param = {}
-default_param["bins"] = 768
-default_param["unstable_bins"] = 9 # training only
-default_param["reduction_bins"] = 762 # training only
-default_param["sr"] = 44100
-default_param["pre_filter_start"] = 757
-default_param["pre_filter_stop"] = 768
-default_param["band"] = {}
-
-
-default_param["band"][1] = {
- "sr": 11025,
- "hl": 128,
- "n_fft": 960,
- "crop_start": 0,
- "crop_stop": 245,
- "lpf_start": 61, # inference only
- "res_type": "polyphase",
-}
-
-default_param["band"][2] = {
- "sr": 44100,
- "hl": 512,
- "n_fft": 1536,
- "crop_start": 24,
- "crop_stop": 547,
- "hpf_start": 81, # inference only
- "res_type": "sinc_best",
-}
-
-
-def int_keys(d):
- r = {}
- for k, v in d:
- if k.isdigit():
- k = int(k)
- r[k] = v
- return r
-
-
-class ModelParameters(object):
- def __init__(self, config_path=""):
- if ".pth" == pathlib.Path(config_path).suffix:
- import zipfile
-
- with zipfile.ZipFile(config_path, "r") as zip:
- self.param = json.loads(
- zip.read("param.json"), object_pairs_hook=int_keys
- )
- elif ".json" == pathlib.Path(config_path).suffix:
- with open(config_path, "r") as f:
- self.param = json.loads(f.read(), object_pairs_hook=int_keys)
- else:
- self.param = default_param
-
- for k in [
- "mid_side",
- "mid_side_b",
- "mid_side_b2",
- "stereo_w",
- "stereo_n",
- "reverse",
- ]:
- if k not in self.param:
- self.param[k] = False
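-
-
-if __name__ == "__main__":
-    # Hypothetical usage sketch: with no config path, the defaults defined above are
-    # used; a .json/.pth path would normally point at one of the shipped parameter files.
-    mp = ModelParameters()
-    print(mp.param["sr"], sorted(mp.param["band"].keys()))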
diff --git a/spaces/EsoCode/text-generation-webui/extensions/multimodal/DOCS.md b/spaces/EsoCode/text-generation-webui/extensions/multimodal/DOCS.md
deleted file mode 100644
index eaa4365e9a304a14ebbdb1d4d435f3a2a1f7a7d2..0000000000000000000000000000000000000000
--- a/spaces/EsoCode/text-generation-webui/extensions/multimodal/DOCS.md
+++ /dev/null
@@ -1,85 +0,0 @@
-# Technical description of multimodal extension
-
-## Working principle
-The multimodal extension does most of the work required for any image input:
-
-- adds the UI
-- saves the images as base64 JPEGs to history
-- provides the hooks to the UI
-- if there are images in the prompt, it:
- - splits the prompt to text and image parts
- - adds image start/end markers to text parts, then encodes and embeds the text parts
- - calls the vision pipeline to embed the images
- - stitches the embeddings together, and returns them to text generation
-- loads the appropriate vision pipeline, selected either from the model name or by specifying the --multimodal-pipeline parameter
-
-The pipelines, in turn:
-
-- load the required vision models
-- return some constants, for example the number of tokens taken up by an image
-- and most importantly: return the embeddings for LLM, given a list of images
-
-## Prompts/history
-
-To save images in the prompt/history, this extension uses a base64 JPEG wrapped in an HTML tag, like so:
-```
-
-```
-where `{img_str}` is the actual image data. This format makes displaying the images in the UI essentially free. Note that the format must match exactly; the regex used to find the images is: ``.
-
-## LLM input
-To describe the input, let's look at an example prompt:
-```
-text1text2text3
-```
-where `textN` is the N-th text part and `` is the N-th image, in the HTML format specified above.
-
-**The first step is to split the prompt into image/text parts**, so we get:
-```
-['text1', '', 'text2', '', 'text3']
-```
-this is done in the `MultimodalEmbedder._split_prompt(...)` function, which returns a list of `PromptPart`s - dataclasses wrapping the separate parts.
-
-This function also appends the image start/end markers to text, which are provided by `AbstractMultimodalPipeline.image_start()` / `AbstractMultimodalPipeline.image_end()` functions. If image start is ``, and end is ``, this function will return:
-```
-['text1', '', 'text2', '', 'text3']
-```
-
-**The returned prompt parts are then turned into token embeddings.**
-
-First, they are converted to token IDs: for the text this is done with the standard `modules.text_generation.encode()` function, while for the images the token IDs are replaced with placeholders. A placeholder is a list of `N` copies of the `placeholder token id`, where `N` is given by `AbstractMultimodalPipeline.num_image_embeds()` and the placeholder token ID by `AbstractMultimodalPipeline.placeholder_token_id()`.
-
-Now, based on the token IDs, the prompt might get truncated, especially if `max_new_tokens` is unreasonably high. Unfortunately, this can't be done by simply trimming the prompt until it is short enough, as that can split the prompt in the middle of an image embedding, which usually breaks the generation. Therefore, in this case, the entire image needs to be removed from the input. This is done inside the `MultimodalEmbedder._encode_text(...)` function.
-
-**After the tokenization, the tokens need to be embedded**; the text and images are once again treated separately.
-
-The text parts are turned into embeddings using the `AbstractMultimodalPipeline.embed_tokens(...)` function. It uses the model's standard embedding function, but to support many LLMs the actual function is returned by the pipeline (as it might differ between LLMs); for LLaMA it is `shared.model.model.embed_tokens(...)`.
-
-The image parts are turned into embeddings using the `AbstractMultimodalPipeline.embed_images(...)` function. This function is specific to a given pipeline: it takes the images as input, forwards them through the vision model/projector, and returns the embeddings.
-
-**Now, the returned embeddings are stitched together** using `torch.cat()`, creating the final input to the LLM, as sketched below.
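-
-A minimal sketch of that final stitching step (using made-up tensor shapes, not the extension's actual code):
-```
-import torch
-
-hidden_size = 4096                      # hypothetical LLM hidden size
-text1 = torch.randn(4, hidden_size)     # embedded text part
-image1 = torch.randn(256, hidden_size)  # embeddings returned by embed_images() for one image
-text2 = torch.randn(7, hidden_size)     # embedded text part
-
-llm_input = torch.cat([text1, image1, text2], dim=0)
-print(llm_input.shape)                  # torch.Size([267, 4096])
-```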
-
-## Pipelines
-
-All of the pipelines should subclass `AbstractMultimodalPipeline` class. The idea is to allow for new pipelines to be added in the same way as user extensions - git clone into `extensions/multimodal/pipelines`.
-
-The pipelines describe the vision part, containing the vision model/multimodal projector. Each pipeline should have a unique `name()`, which the user then selects via the `--multimodal-pipeline` CLI argument. For an example, see `pipelines/llava/llava.py`.
-
-## Pipeline modules
-
-Pipelines are organized into "pipeline modules" - subdirectories of the `pipelines` directory. Each pipeline module should contain a file called `pipelines.py`, which should define the following fields:
-- `available_pipelines: List[str]` - list of pipelines provided by this module, shown as the list of available pipelines to the user
-- `def get_pipeline(name: str, params: dict) -> Optional[AbstractMultimodalPipeline]`: - a function to get a concrete pipeline by `name`, if `name` doesn't match any, should return `None`. `params` is the user settings for multimodal extension
-- `def get_pipeline_from_model_name(model_name: str, params: dict) -> Optional[AbstractMultimodalPipeline]`: - a function to get a pipeline from `model_name`, should be eager to return `None` unless the determination can be made clearly (for example: minigpt-4 is based on vicuna, so it should never return the pipeline, but llava can, as it has its own specific LLM finetune)
-
-**NOTE**: A pipeline module should lazy-import the pipelines only when necessary, and it should keep its imports to a minimum.
-
-## Pipeline params
-
-The pipelines will get the extension `params` in the constructor. They should honor the following fields:
-- `vision_device` - string, specifying `torch.device` to run the vision model (CLIP/ViT) on
-- `vision_bits` - int, number of fp bits to load the vision model(s) in
-- `projector_device` - string, specifying `torch.device` to run the projector models (Linear layers, QFormer, etc.) on
-- `projector_bits` - int, number of fp bits to load the projector models in
-
-As a helper, `AbstractMultimodalPipeline` has `_get_device(self, setting_name: str, params: dict)` and `_get_dtype(self, setting_name: str, params: dict)` helper functions, which parse string/int and return `torch.device` / `torch.dtype`.
diff --git a/spaces/FelixLuoX/codeformer/CodeFormer/basicsr/utils/logger.py b/spaces/FelixLuoX/codeformer/CodeFormer/basicsr/utils/logger.py
deleted file mode 100644
index 9714bf59c30fc82de24c1ee58d9118d0864b3572..0000000000000000000000000000000000000000
--- a/spaces/FelixLuoX/codeformer/CodeFormer/basicsr/utils/logger.py
+++ /dev/null
@@ -1,169 +0,0 @@
-import datetime
-import logging
-import time
-
-from .dist_util import get_dist_info, master_only
-
-initialized_logger = {}
-
-
-class MessageLogger():
- """Message logger for printing.
- Args:
- opt (dict): Config. It contains the following keys:
- name (str): Exp name.
- logger (dict): Contains 'print_freq' (int) for logger interval.
- train (dict): Contains 'total_iter' (int) for total iters.
- use_tb_logger (bool): Use tensorboard logger.
- start_iter (int): Start iter. Default: 1.
- tb_logger (obj:`tb_logger`): Tensorboard logger. Default: None.
- """
-
- def __init__(self, opt, start_iter=1, tb_logger=None):
- self.exp_name = opt['name']
- self.interval = opt['logger']['print_freq']
- self.start_iter = start_iter
- self.max_iters = opt['train']['total_iter']
- self.use_tb_logger = opt['logger']['use_tb_logger']
- self.tb_logger = tb_logger
- self.start_time = time.time()
- self.logger = get_root_logger()
-
- @master_only
- def __call__(self, log_vars):
- """Format logging message.
- Args:
- log_vars (dict): It contains the following keys:
- epoch (int): Epoch number.
- iter (int): Current iter.
- lrs (list): List for learning rates.
- time (float): Iter time.
- data_time (float): Data time for each iter.
- """
- # epoch, iter, learning rates
- epoch = log_vars.pop('epoch')
- current_iter = log_vars.pop('iter')
- lrs = log_vars.pop('lrs')
-
- message = (f'[{self.exp_name[:5]}..][epoch:{epoch:3d}, ' f'iter:{current_iter:8,d}, lr:(')
- for v in lrs:
- message += f'{v:.3e},'
- message += ')] '
-
- # time and estimated time
- if 'time' in log_vars.keys():
- iter_time = log_vars.pop('time')
- data_time = log_vars.pop('data_time')
-
- total_time = time.time() - self.start_time
- time_sec_avg = total_time / (current_iter - self.start_iter + 1)
- eta_sec = time_sec_avg * (self.max_iters - current_iter - 1)
- eta_str = str(datetime.timedelta(seconds=int(eta_sec)))
- message += f'[eta: {eta_str}, '
- message += f'time (data): {iter_time:.3f} ({data_time:.3f})] '
-
- # other items, especially losses
- for k, v in log_vars.items():
- message += f'{k}: {v:.4e} '
- # tensorboard logger
- if self.use_tb_logger:
- if k.startswith('l_'):
- self.tb_logger.add_scalar(f'losses/{k}', v, current_iter)
- else:
- self.tb_logger.add_scalar(k, v, current_iter)
- self.logger.info(message)
-
-
-@master_only
-def init_tb_logger(log_dir):
- from torch.utils.tensorboard import SummaryWriter
- tb_logger = SummaryWriter(log_dir=log_dir)
- return tb_logger
-
-
-@master_only
-def init_wandb_logger(opt):
- """We now only use wandb to sync tensorboard log."""
- import wandb
- logger = logging.getLogger('basicsr')
-
- project = opt['logger']['wandb']['project']
- resume_id = opt['logger']['wandb'].get('resume_id')
- if resume_id:
- wandb_id = resume_id
- resume = 'allow'
- logger.warning(f'Resume wandb logger with id={wandb_id}.')
- else:
- wandb_id = wandb.util.generate_id()
- resume = 'never'
-
- wandb.init(id=wandb_id, resume=resume, name=opt['name'], config=opt, project=project, sync_tensorboard=True)
-
- logger.info(f'Use wandb logger with id={wandb_id}; project={project}.')
-
-
-def get_root_logger(logger_name='basicsr', log_level=logging.INFO, log_file=None):
- """Get the root logger.
- The logger will be initialized if it has not been initialized. By default a
- StreamHandler will be added. If `log_file` is specified, a FileHandler will
- also be added.
- Args:
- logger_name (str): root logger name. Default: 'basicsr'.
- log_file (str | None): The log filename. If specified, a FileHandler
- will be added to the root logger.
- log_level (int): The root logger level. Note that only the process of
- rank 0 is affected, while other processes will set the level to
- "Error" and be silent most of the time.
- Returns:
- logging.Logger: The root logger.
- """
- logger = logging.getLogger(logger_name)
- # if the logger has been initialized, just return it
- if logger_name in initialized_logger:
- return logger
-
- format_str = '%(asctime)s %(levelname)s: %(message)s'
- stream_handler = logging.StreamHandler()
- stream_handler.setFormatter(logging.Formatter(format_str))
- logger.addHandler(stream_handler)
- logger.propagate = False
- rank, _ = get_dist_info()
- if rank != 0:
- logger.setLevel('ERROR')
- elif log_file is not None:
- logger.setLevel(log_level)
- # add file handler
- # file_handler = logging.FileHandler(log_file, 'w')
- file_handler = logging.FileHandler(log_file, 'a') #Shangchen: keep the previous log
- file_handler.setFormatter(logging.Formatter(format_str))
- file_handler.setLevel(log_level)
- logger.addHandler(file_handler)
- initialized_logger[logger_name] = True
- return logger
-
-
-def get_env_info():
- """Get environment information.
- Currently, only log the software version.
- """
- import torch
- import torchvision
-
- from basicsr.version import __version__
- msg = r"""
- ____ _ _____ ____
- / __ ) ____ _ _____ (_)_____/ ___/ / __ \
- / __ |/ __ `// ___// // ___/\__ \ / /_/ /
- / /_/ // /_/ /(__ )/ // /__ ___/ // _, _/
- /_____/ \__,_//____//_/ \___//____//_/ |_|
- ______ __ __ __ __
- / ____/____ ____ ____/ / / / __ __ _____ / /__ / /
- / / __ / __ \ / __ \ / __ / / / / / / // ___// //_/ / /
- / /_/ // /_/ // /_/ // /_/ / / /___/ /_/ // /__ / /< /_/
- \____/ \____/ \____/ \____/ /_____/\____/ \___//_/|_| (_)
- """
- msg += ('\nVersion Information: '
- f'\n\tBasicSR: {__version__}'
- f'\n\tPyTorch: {torch.__version__}'
- f'\n\tTorchVision: {torchvision.__version__}')
- return msg
\ No newline at end of file
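-
-
-if __name__ == '__main__':
-    # A minimal, hypothetical usage sketch of the helpers above;
-    # 'experiments/train.log' is a placeholder path.
-    logger = get_root_logger(log_level=logging.INFO, log_file='experiments/train.log')
-    logger.info(get_env_info())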
diff --git a/spaces/FridaZuley/RVC_HFKawaii/Applio-RVC-Fork/utils/README.md b/spaces/FridaZuley/RVC_HFKawaii/Applio-RVC-Fork/utils/README.md
deleted file mode 100644
index fb45a36b5909585aa964f2033762ee59b55526b0..0000000000000000000000000000000000000000
--- a/spaces/FridaZuley/RVC_HFKawaii/Applio-RVC-Fork/utils/README.md
+++ /dev/null
@@ -1,6 +0,0 @@
-# External Colab Code
-Code used to make Google Colab work correctly
-- Repo link: https://github.com/IAHispano/Applio-RVC-Fork/
-
-Thanks to https://github.com/kalomaze/externalcolabcode
-
diff --git a/spaces/GAIR/Factool/factool/knowledge_qa/google_serper.py b/spaces/GAIR/Factool/factool/knowledge_qa/google_serper.py
deleted file mode 100644
index 7e7884f8392ab69e6ece7d3a448fb656d33994dd..0000000000000000000000000000000000000000
--- a/spaces/GAIR/Factool/factool/knowledge_qa/google_serper.py
+++ /dev/null
@@ -1,118 +0,0 @@
-# The following code was adapted from https://github.com/hwchase17/langchain/blob/master/langchain/utilities/google_serper.py
-
-"""Util that calls Google Search using the Serper.dev API."""
-import pdb
-import requests
-import asyncio
-import aiohttp
-import yaml
-import os
-
-from factool.env_config import factool_env_config
-
-# env
-# serper_api_key = factool_env_config.serper_api_key
-
-
-class GoogleSerperAPIWrapper():
- """Wrapper around the Serper.dev Google Search API.
- You can create a free API key at https://serper.dev.
- To use, you should have the environment variable ``SERPER_API_KEY``
- set with your API key, or pass `serper_api_key` as a named parameter
- to the constructor.
- Example:
- .. code-block:: python
- from langchain import GoogleSerperAPIWrapper
- google_serper = GoogleSerperAPIWrapper()
- """
- def __init__(self, snippet_cnt = 10) -> None:
- self.k = snippet_cnt
- self.gl = "us"
- self.hl = "en"
- self.serper_api_key = os.environ.get("SERPER_API_KEY", None)
- assert self.serper_api_key is not None, "Please set the SERPER_API_KEY environment variable."
-
- async def _google_serper_search_results(self, session, search_term: str, gl: str, hl: str) -> dict:
- headers = {
- "X-API-KEY": self.serper_api_key or "",
- "Content-Type": "application/json",
- }
- params = {"q": search_term, "gl": gl, "hl": hl}
- async with session.post(
- "https://google.serper.dev/search", headers=headers, params=params, raise_for_status=True
- ) as response:
- return await response.json()
-
- def _parse_results(self, results):
- snippets = []
-
- if results.get("answerBox"):
- answer_box = results.get("answerBox", {})
- if answer_box.get("answer"):
- element = {"content":answer_box.get("answer"),"source":"None"}
- return [element]
- elif answer_box.get("snippet"):
- element = {"content":answer_box.get("snippet").replace("\n", " "),"source":"None"}
- return [element]
- elif answer_box.get("snippetHighlighted"):
- element = {"content":answer_box.get("snippetHighlighted"),"source":"None"}
- return [element]
-
- if results.get("knowledgeGraph"):
- kg = results.get("knowledgeGraph", {})
- title = kg.get("title")
- entity_type = kg.get("type")
- if entity_type:
- element = {"content":f"{title}: {entity_type}","source":"None"}
- snippets.append(element)
- description = kg.get("description")
- if description:
- element = {"content":description,"source":"None"}
- snippets.append(element)
- for attribute, value in kg.get("attributes", {}).items():
- element = {"content":f"{attribute}: {value}","source":"None"}
- snippets.append(element)
-
- for result in results["organic"][: self.k]:
- if "snippet" in result:
- element = {"content":result["snippet"],"source":result["link"]}
- snippets.append(element)
- for attribute, value in result.get("attributes", {}).items():
- element = {"content":f"{attribute}: {value}","source":result["link"]}
- snippets.append(element)
-
- if len(snippets) == 0:
- element = {"content":"No good Google Search Result was found","source":"None"}
- return [element]
-
- # keep only the first k/2 snippets per query (pairs of query results are merged downstream)
- snippets = snippets[:int(self.k / 2)]
-
- return snippets
-
- async def parallel_searches(self, search_queries, gl, hl):
- async with aiohttp.ClientSession() as session:
- tasks = [self._google_serper_search_results(session, query, gl, hl) for query in search_queries]
- search_results = await asyncio.gather(*tasks, return_exceptions=True)
- return search_results
-
- async def run(self, queries):
- """Run query through GoogleSearch and parse result."""
- flattened_queries = []
-
- for sublist in queries:
- if sublist is None:
- sublist = ['None', 'None']
- for item in sublist:
- flattened_queries.append(item)
-
- results = await self.parallel_searches(flattened_queries, gl=self.gl, hl=self.hl)
- snippets_list = []
- for i in range(len(results)):
- snippets_list.append(self._parse_results(results[i]))
- snippets_split = [snippets_list[i] + snippets_list[i+1] for i in range(0, len(snippets_list), 2)]
- return snippets_split
-
-if __name__ == "__main__":
- search = GoogleSerperAPIWrapper()
- # run() expects a list of query pairs (see the flattening/pairing logic above)
- print(asyncio.run(search.run([["What is the capital of the United States?", "None"]])))
\ No newline at end of file
diff --git a/spaces/GIZ/SDSN-demo/utils/sdg_classifier.py b/spaces/GIZ/SDSN-demo/utils/sdg_classifier.py
deleted file mode 100644
index 57e633f689b18ec4512730ebb32429c8ea8b7b06..0000000000000000000000000000000000000000
--- a/spaces/GIZ/SDSN-demo/utils/sdg_classifier.py
+++ /dev/null
@@ -1,177 +0,0 @@
-from haystack.nodes import TransformersDocumentClassifier
-from haystack.schema import Document
-from typing import List, Tuple
-from typing_extensions import Literal
-import logging
-import pandas as pd
-from pandas import DataFrame, Series
-from utils.checkconfig import getconfig
-from utils.streamlitcheck import check_streamlit
-from utils.preprocessing import processingpipeline
-try:
- import streamlit as st
-except ImportError:
- logging.info("Streamlit not installed")
-
-## Labels dictionary ###
-_lab_dict = {0: 'no_cat',
- 1:'SDG 1 - No poverty',
- 2:'SDG 2 - Zero hunger',
- 3:'SDG 3 - Good health and well-being',
- 4:'SDG 4 - Quality education',
- 5:'SDG 5 - Gender equality',
- 6:'SDG 6 - Clean water and sanitation',
- 7:'SDG 7 - Affordable and clean energy',
- 8:'SDG 8 - Decent work and economic growth',
- 9:'SDG 9 - Industry, Innovation and Infrastructure',
- 10:'SDG 10 - Reduced inequality',
- 11:'SDG 11 - Sustainable cities and communities',
- 12:'SDG 12 - Responsible consumption and production',
- 13:'SDG 13 - Climate action',
- 14:'SDG 14 - Life below water',
- 15:'SDG 15 - Life on land',
- 16:'SDG 16 - Peace, justice and strong institutions',
- 17:'SDG 17 - Partnership for the goals',}
-
-@st.cache(allow_output_mutation=True)
-def load_sdgClassifier(config_file:str = None, classifier_name:str = None):
- """
- Loads the document classifier using Haystack, where the name/path of the model
- on the HF hub (as a string) is used to fetch the model object. Either a config
- file or a model name should be passed.
- 1. https://docs.haystack.deepset.ai/reference/document-classifier-api
- 2. https://docs.haystack.deepset.ai/docs/document_classifier
-
- Params
- --------
- config_file: config file path from which to read the model name
- classifier_name: if a model name is passed, it takes priority; if not,
- the config file is used to look it up, otherwise an error is raised.
-
-
- Return: document classifier model
- """
- if not classifier_name:
- if not config_file:
- logging.warning("Pass either model name or config file")
- return
- else:
- config = getconfig(config_file)
- classifier_name = config.get('sdg','MODEL')
-
- logging.info("Loading classifier")
- doc_classifier = TransformersDocumentClassifier(
- model_name_or_path=classifier_name,
- task="text-classification")
-
- return doc_classifier
-
-
-@st.cache(allow_output_mutation=True)
-def sdg_classification(haystack_doc:List[Document],
- threshold:float = 0.8,
- classifier_model:TransformersDocumentClassifier= None
- )->Tuple[DataFrame,Series]:
- """
- Text classification on the list of texts provided. The classifier assigns each
- text the most appropriate label, indicating which particular Sustainable
- Development Goal (SDG) the text belongs to.
-
- Params
- ---------
- haystack_doc: list of Haystack Documents. The output of the preprocessing
- pipeline contains the paragraphs in different formats; here the list of
- Haystack Documents is used.
- threshold: threshold value above which the classifier results are kept
- classifier_model: the classifier model can be passed directly, which takes
- priority; if not, the model is looked up in the Streamlit session.
- In a Streamlit app, avoid passing the model directly.
-
-
- Returns
- ----------
- df: DataFrame with the columns ['SDG', 'Relevancy', 'text']
- x: value counts of the unique SDGs covered in the uploaded document, i.e.
- the number of paragraphs in which each SDG is covered/discussed.
-
- """
- logging.info("Working on SDG Classification")
- if not classifier_model:
- if check_streamlit():
- classifier_model = st.session_state['sdg_classifier']
- else:
- logging.warning("No streamlit envinornment found, Pass the classifier")
- return
-
- results = classifier_model.predict(haystack_doc)
-
-
- labels_= [(l.meta['classification']['label'],
- l.meta['classification']['score'],l.content,) for l in results]
-
- df = DataFrame(labels_, columns=["SDG","Relevancy","text"])
-
- df = df.sort_values(by="Relevancy", ascending=False).reset_index(drop=True)
- df.index += 1
- df =df[df['Relevancy']>threshold]
-
- # creating the dataframe for value counts of SDG, along with 'title' of SDGs
- x = df['SDG'].value_counts()
- x = x.rename('count')
- x = x.rename_axis('SDG').reset_index()
- x["SDG"] = pd.to_numeric(x["SDG"])
- x = x.sort_values(by=['count'], ascending=False)
- x['SDG_name'] = x['SDG'].apply(lambda x: _lab_dict[x])
- x['SDG_Num'] = x['SDG'].apply(lambda x: "SDG "+str(x))
-
- df['SDG'] = pd.to_numeric(df['SDG'])
- df = df.sort_values('SDG')
-
- return df, x
-
-def runSDGPreprocessingPipeline(file_name:str, file_path:str,
- split_by: Literal["sentence", "word"] = 'sentence',
- split_length:int = 2, split_respect_sentence_boundary:bool = False,
- split_overlap:int = 0,remove_punc:bool = False)->List[Document]:
- """
- Creates and runs the preprocessing pipeline; the params for the pipeline
- are fetched from paramconfig.
-
- Params
- ------------
-
- file_name: filename, in case of streamlit application use
- st.session_state['filename']
- file_path: filepath, in case of streamlit application use st.session_state['filepath']
- split_by: document splitting strategy either as word or sentence
- split_length: when synthetically creating the paragraphs from the document,
- it defines the length of each paragraph.
- split_respect_sentence_boundary: used when using the 'word' strategy for
- splitting the text.
- split_overlap: number of words or sentences that overlap when creating
- the paragraphs. The overlap is used because a sentence or a few words often
- only make sense when read together with their neighbours.
- remove_punc: whether to remove all punctuation, including ',' and '.'
-
-
- Return
- --------------
- List[Document]: when the preprocessing pipeline is run, the output dictionary
- has four objects. For the Haystack implementation of SDG classification we
- need the list of Haystack Documents, which can be fetched from the output
- with key = 'documents'.
-
- """
-
- sdg_processing_pipeline = processingpipeline()
-
- output_sdg_pre = sdg_processing_pipeline.run(file_paths = file_path,
- params= {"FileConverter": {"file_path": file_path, \
- "file_name": file_name},
- "UdfPreProcessor": {"remove_punc": remove_punc, \
- "split_by": split_by, \
- "split_length":split_length,\
- "split_overlap": split_overlap, \
- "split_respect_sentence_boundary":split_respect_sentence_boundary}})
-
- return output_sdg_pre
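-
-
-if __name__ == '__main__':
-    # A hypothetical end-to-end sketch of the helpers above; the file name, file path
-    # and config path are placeholders, and outside Streamlit the classifier has to be
-    # passed in explicitly.
-    output = runSDGPreprocessingPipeline(file_name='policy.pdf', file_path='docs/policy.pdf')
-    paragraphs = output['documents']  # list of Haystack Documents (see docstring above)
-    classifier = load_sdgClassifier(config_file='paramconfig.cfg')
-    df, sdg_counts = sdg_classification(paragraphs, threshold=0.8,
-                                        classifier_model=classifier)
-    print(sdg_counts[['SDG_Num', 'count']])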
diff --git a/spaces/GMFTBY/PandaGPT/scripts/train.sh b/spaces/GMFTBY/PandaGPT/scripts/train.sh
deleted file mode 100644
index e071d72afdd773e803e1ce316538594c31d7d41d..0000000000000000000000000000000000000000
--- a/spaces/GMFTBY/PandaGPT/scripts/train.sh
+++ /dev/null
@@ -1,12 +0,0 @@
-#!/bin/bash
-
-deepspeed --include localhost:0,1,2,3,4,5,6,7 --master_addr 127.0.0.1 --master_port 28457 train_sft.py \
- --model openllama_peft \
- --stage 1\
- --data_path ../data/pandagpt4_visual_instruction_data.json\
- --image_root_path ../data/images/\
- --imagebind_ckpt_path ../pretrained_ckpt/imagebind_ckpt/\
- --vicuna_ckpt_path ../pretrained_ckpt/vicuna_ckpt/13b_v0/\
- --max_tgt_len 400\
- --save_path ./ckpt/pandagpt_13b_v0_peft/\
- --log_path ./ckpt/pandagpt_13b_v0_peft/log_rest/
diff --git a/spaces/Gen-Sim/Gen-Sim/cliport/agents/transporter.py b/spaces/Gen-Sim/Gen-Sim/cliport/agents/transporter.py
deleted file mode 100644
index d24967d1e7f2684176732a06bb9271676f43bbc3..0000000000000000000000000000000000000000
--- a/spaces/Gen-Sim/Gen-Sim/cliport/agents/transporter.py
+++ /dev/null
@@ -1,539 +0,0 @@
-import os
-import numpy as np
-
-import torch
-import torch.nn.functional as F
-from pytorch_lightning import LightningModule
-
-from cliport.tasks import cameras
-from cliport.utils import utils
-from cliport.models.core.attention import Attention
-from cliport.models.core.transport import Transport
-from cliport.models.streams.two_stream_attention import TwoStreamAttention
-from cliport.models.streams.two_stream_transport import TwoStreamTransport
-
-from cliport.models.streams.two_stream_attention import TwoStreamAttentionLat
-from cliport.models.streams.two_stream_transport import TwoStreamTransportLat
-import time
-import IPython
-
-class TransporterAgent(LightningModule):
- def __init__(self, name, cfg, train_ds, test_ds):
- super().__init__()
- utils.set_seed(0)
- self.automatic_optimization=False
- self.device_type = torch.device('cuda' if torch.cuda.is_available() else 'cpu') # this is bad for PL :(
- self.name = name
- self.cfg = cfg
- self.train_loader = train_ds
- self.test_loader = test_ds
-
- self.train_ds = train_ds.dataset
- self.test_ds = test_ds.dataset
-
- self.name = name
- self.task = cfg['train']['task']
- self.total_steps = 0
- self.crop_size = 64
- self.n_rotations = cfg['train']['n_rotations']
-
- self.pix_size = 0.003125
- self.in_shape = (320, 160, 6)
- self.cam_config = cameras.RealSenseD415.CONFIG
- self.bounds = np.array([[0.25, 0.75], [-0.5, 0.5], [0, 0.28]])
-
- self.val_repeats = cfg['train']['val_repeats']
- self.save_steps = cfg['train']['save_steps']
-
- self._build_model()
- ##
- # reduce the number of parameters here
- ##
- self._optimizers = {
- 'attn': torch.optim.Adam(self.attention.parameters(), lr=self.cfg['train']['lr']),
- 'trans': torch.optim.Adam(self.transport.parameters(), lr=self.cfg['train']['lr'])
- }
- print("Agent: {}, Logging: {}".format(name, cfg['train']['log']))
-
- def configure_optimizers(self):
- return self._optimizers
-
- def _build_model(self):
- self.attention = None
- self.transport = None
- raise NotImplementedError()
-
- def forward(self, x):
- raise NotImplementedError()
-
- def cross_entropy_with_logits(self, pred, labels, reduction='mean'):
- # Lucas found that both sum and mean work equally well
- x = (-labels.view(len(labels), -1) * F.log_softmax(pred.view(len(labels), -1), -1))
- if reduction == 'sum':
- return x.sum()
- elif reduction == 'mean':
- return x.mean()
- else:
- raise NotImplementedError()
-
- def attn_forward(self, inp, softmax=True):
- inp_img = inp['inp_img']
- output = self.attention.forward(inp_img, softmax=softmax)
- return output
-
- def attn_training_step(self, frame, backprop=True, compute_err=False):
- inp_img = frame['img']
- p0, p0_theta = frame['p0'], frame['p0_theta']
-
- inp = {'inp_img': inp_img}
- out = self.attn_forward(inp, softmax=False)
- return self.attn_criterion(backprop, compute_err, inp, out, p0, p0_theta)
-
- def attn_criterion(self, backprop, compute_err, inp, out, p, theta):
- # Get label.
- if type(theta) is torch.Tensor:
- theta = theta.detach().cpu().numpy()
-
- theta_i = theta / (2 * np.pi / self.attention.n_rotations)
- theta_i = np.int32(np.round(theta_i)) % self.attention.n_rotations
- inp_img = inp['inp_img'].float()
-
- label_size = inp_img.shape[:3] + (self.attention.n_rotations,)
- label = torch.zeros(label_size, dtype=torch.float, device=out.device)
-
- # remove this for-loop later
- for idx, p_i in enumerate(p):
- label[idx, int(p_i[0]), int(p_i[1]), theta_i[idx]] = 1
- label = label.permute((0, 3, 1, 2)).contiguous()
-
- # Get loss.
- loss = self.cross_entropy_with_logits(out, label)
-
- # Backpropagate.
- if backprop:
- attn_optim = self._optimizers['attn']
- self.manual_backward(loss)
- attn_optim.step()
- attn_optim.zero_grad()
-
- # Pixel and Rotation error (not used anywhere).
- err = {}
- if compute_err:
- with torch.no_grad():
- pick_conf = self.attn_forward(inp)
- pick_conf = pick_conf[0].permute(1,2,0)
- pick_conf = pick_conf.detach().cpu().numpy()
- p = p[0]
- theta = theta[0]
-
- # single batch
- argmax = np.argmax(pick_conf)
- argmax = np.unravel_index(argmax, shape=pick_conf.shape)
- p0_pix = argmax[:2]
- p0_theta = argmax[2] * (2 * np.pi / pick_conf.shape[2])
-
- err = {
- 'dist': np.linalg.norm(np.array(p.detach().cpu().numpy()) - p0_pix, ord=1),
- 'theta': np.absolute((theta - p0_theta) % np.pi)
- }
- return loss, err
-
- def trans_forward(self, inp, softmax=True):
- inp_img = inp['inp_img']
- p0 = inp['p0']
-
- output = self.transport.forward(inp_img, p0, softmax=softmax)
- return output
-
- def transport_training_step(self, frame, backprop=True, compute_err=False):
- inp_img = frame['img'].float()
- p0 = frame['p0']
- p1, p1_theta = frame['p1'], frame['p1_theta']
-
- inp = {'inp_img': inp_img, 'p0': p0}
- output = self.trans_forward(inp, softmax=False)
- err, loss = self.transport_criterion(backprop, compute_err, inp, output, p0, p1, p1_theta)
- return loss, err
-
- def transport_criterion(self, backprop, compute_err, inp, output, p, q, theta):
- s = time.time()
- if type(theta) is torch.Tensor:
- theta = theta.detach().cpu().numpy()
-
- itheta = theta / (2 * np.pi / self.transport.n_rotations)
- itheta = np.int32(np.round(itheta)) % self.transport.n_rotations
-
- # Get one-hot pixel label map.
- inp_img = inp['inp_img']
-
- # label_size = inp_img.shape[:2] + (self.transport.n_rotations,)
- label_size = inp_img.shape[:3] + (self.transport.n_rotations,)
- label = torch.zeros(label_size, dtype=torch.float, device=output.device)
-
- # remove this for-loop later
- q[:,0] = torch.clamp(q[:,0], 0, label.shape[1]-1)
- q[:,1] = torch.clamp(q[:,1], 0, label.shape[2]-1)
-
- for idx, q_i in enumerate(q):
- label[idx, int(q_i[0]), int(q_i[1]), itheta[idx]] = 1
- label = label.permute((0, 3, 1, 2)).contiguous()
-
- # Get loss.
- loss = self.cross_entropy_with_logits(output, label)
-
- if backprop:
- transport_optim = self._optimizers['trans']
- transport_optim.zero_grad()
- self.manual_backward(loss)
- transport_optim.step()
-
- # Pixel and Rotation error (not used anywhere).
- err = {}
- if compute_err:
- with torch.no_grad():
- place_conf = self.trans_forward(inp)
- # pick the first batch
- place_conf = place_conf[0]
- q = q[0]
- theta = theta[0]
- place_conf = place_conf.permute(1, 2, 0)
- place_conf = place_conf.detach().cpu().numpy()
- argmax = np.argmax(place_conf)
- argmax = np.unravel_index(argmax, shape=place_conf.shape)
- p1_pix = argmax[:2]
- p1_theta = argmax[2] * (2 * np.pi / place_conf.shape[2])
-
- err = {
- 'dist': np.linalg.norm(np.array(q.detach().cpu().numpy()) - p1_pix, ord=1),
- 'theta': np.absolute((theta - p1_theta) % np.pi)
- }
-
- self.transport.iters += 1
- return err, loss
-
- def training_step(self, batch, batch_idx):
-
- self.attention.train()
- self.transport.train()
-
- frame, _ = batch
- self.start_time = time.time()
-
- # Get training losses.
- step = self.total_steps + 1
- loss0, err0 = self.attn_training_step(frame)
-
- self.start_time = time.time()
-
- if isinstance(self.transport, Attention):
- loss1, err1 = self.attn_training_step(frame)
- else:
- loss1, err1 = self.transport_training_step(frame)
-
- total_loss = loss0 + loss1
- self.total_steps = step
- self.start_time = time.time()
- self.log('tr/attn/loss', loss0)
- self.log('tr/trans/loss', loss1)
- self.log('tr/loss', total_loss)
- self.check_save_iteration()
-
- return dict(
- loss=total_loss,
- )
-
- def check_save_iteration(self):
- global_step = self.total_steps
-
- if (global_step + 1) % 100 == 0:
- # save latest checkpoint
- print(f"Saving last.ckpt Epoch: {self.trainer.current_epoch} | Global Step: {self.trainer.global_step}")
- self.save_last_checkpoint()
-
- def save_last_checkpoint(self):
- checkpoint_path = os.path.join(self.cfg['train']['train_dir'], 'checkpoints')
- ckpt_path = os.path.join(checkpoint_path, 'last.ckpt')
- self.trainer.save_checkpoint(ckpt_path)
-
- def validation_step(self, batch, batch_idx):
- self.attention.eval()
- self.transport.eval()
-
- loss0, loss1 = 0, 0
- assert self.val_repeats >= 1
- for i in range(self.val_repeats):
- frame, _ = batch
- l0, err0 = self.attn_training_step(frame, backprop=False, compute_err=True)
- loss0 += l0
- if isinstance(self.transport, Attention):
- l1, err1 = self.attn_training_step(frame, backprop=False, compute_err=True)
- loss1 += l1
- else:
- l1, err1 = self.transport_training_step(frame, backprop=False, compute_err=True)
- loss1 += l1
- loss0 /= self.val_repeats
- loss1 /= self.val_repeats
- val_total_loss = loss0 + loss1
-
- return dict(
- val_loss=val_total_loss,
- val_loss0=loss0,
- val_loss1=loss1,
- val_attn_dist_err=err0['dist'],
- val_attn_theta_err=err0['theta'],
- val_trans_dist_err=err1['dist'],
- val_trans_theta_err=err1['theta'],
- )
-
- def training_epoch_end(self, all_outputs):
- super().training_epoch_end(all_outputs)
- utils.set_seed(self.trainer.current_epoch+1)
-
- def validation_epoch_end(self, all_outputs):
- mean_val_total_loss = np.mean([v['val_loss'].item() for v in all_outputs])
- mean_val_loss0 = np.mean([v['val_loss0'].item() for v in all_outputs])
- mean_val_loss1 = np.mean([v['val_loss1'].item() for v in all_outputs])
- total_attn_dist_err = np.sum([v['val_attn_dist_err'].sum() for v in all_outputs])
- total_attn_theta_err = np.sum([v['val_attn_theta_err'].sum() for v in all_outputs])
- total_trans_dist_err = np.sum([v['val_trans_dist_err'].sum() for v in all_outputs])
- total_trans_theta_err = np.sum([v['val_trans_theta_err'].sum() for v in all_outputs])
-
-
- self.log('vl/attn/loss', mean_val_loss0)
- self.log('vl/trans/loss', mean_val_loss1)
- self.log('vl/loss', mean_val_total_loss)
- self.log('vl/total_attn_dist_err', total_attn_dist_err)
- self.log('vl/total_attn_theta_err', total_attn_theta_err)
- self.log('vl/total_trans_dist_err', total_trans_dist_err)
- self.log('vl/total_trans_theta_err', total_trans_theta_err)
-
- print("\nAttn Err - Dist: {:.2f}, Theta: {:.2f}".format(total_attn_dist_err, total_attn_theta_err))
- print("Transport Err - Dist: {:.2f}, Theta: {:.2f}".format(total_trans_dist_err, total_trans_theta_err))
-
- return dict(
- val_loss=mean_val_total_loss,
- val_loss0=mean_val_loss0,
- mean_val_loss1=mean_val_loss1,
- total_attn_dist_err=total_attn_dist_err,
- total_attn_theta_err=total_attn_theta_err,
- total_trans_dist_err=total_trans_dist_err,
- total_trans_theta_err=total_trans_theta_err,
- )
-
- def act(self, obs, info=None, goal=None): # pylint: disable=unused-argument
- """Run inference and return best action given visual observations."""
- # Get heightmap from RGB-D images.
- img = self.test_ds.get_image(obs)
-
- # Attention model forward pass.
- pick_inp = {'inp_img': img}
- pick_conf = self.attn_forward(pick_inp)
-
-
- pick_conf = pick_conf.detach().cpu().numpy()
- argmax = np.argmax(pick_conf)
- argmax = np.unravel_index(argmax, shape=pick_conf.shape)
- p0_pix = argmax[:2]
- p0_theta = argmax[2] * (2 * np.pi / pick_conf.shape[2])
-
- # Transport model forward pass.
- place_inp = {'inp_img': img, 'p0': p0_pix}
- place_conf = self.trans_forward(place_inp)
- place_conf = place_conf.permute(1, 2, 0)
- place_conf = place_conf.detach().cpu().numpy()
- argmax = np.argmax(place_conf)
- argmax = np.unravel_index(argmax, shape=place_conf.shape)
- p1_pix = argmax[:2]
- p1_theta = argmax[2] * (2 * np.pi / place_conf.shape[2])
-
- # Pixels to end effector poses.
- hmap = img[:, :, 3]
- p0_xyz = utils.pix_to_xyz(p0_pix, hmap, self.bounds, self.pix_size)
- p1_xyz = utils.pix_to_xyz(p1_pix, hmap, self.bounds, self.pix_size)
- p0_xyzw = utils.eulerXYZ_to_quatXYZW((0, 0, -p0_theta))
- p1_xyzw = utils.eulerXYZ_to_quatXYZW((0, 0, -p1_theta))
-
- return {
- 'pose0': (np.asarray(p0_xyz), np.asarray(p0_xyzw)),
- 'pose1': (np.asarray(p1_xyz), np.asarray(p1_xyzw)),
- 'pick': p0_pix,
- 'place': p1_pix,
- }
-
- def optimizer_step(self, current_epoch, batch_nb, optimizer, optimizer_i, second_order_closure, on_tpu, using_native_amp, using_lbfgs):
- pass
-
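-    # note: this second definition overrides the configure_optimizers defined earlier in
-    # the class; with automatic_optimization=False the optimizers are stepped manually
-    # through self._optimizers instead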
- def configure_optimizers(self):
- pass
-
- def train_dataloader(self):
- return self.train_loader
-
- def val_dataloader(self):
- return self.test_loader
-
- def load(self, model_path):
- self.load_state_dict(torch.load(model_path)['state_dict'])
- self.to(device=self.device_type)
-
-
-class OriginalTransporterAgent(TransporterAgent):
-
- def __init__(self, name, cfg, train_ds, test_ds):
- super().__init__(name, cfg, train_ds, test_ds)
-
- def _build_model(self):
- stream_fcn = 'plain_resnet'
- self.attention = Attention(
- stream_fcn=(stream_fcn, None),
- in_shape=self.in_shape,
- n_rotations=1,
- preprocess=utils.preprocess,
- cfg=self.cfg,
- device=self.device_type,
- )
- self.transport = Transport(
- stream_fcn=(stream_fcn, None),
- in_shape=self.in_shape,
- n_rotations=self.n_rotations,
- crop_size=self.crop_size,
- preprocess=utils.preprocess,
- cfg=self.cfg,
- device=self.device_type,
- )
-
-
-class ClipUNetTransporterAgent(TransporterAgent):
-
- def __init__(self, name, cfg, train_ds, test_ds):
- super().__init__(name, cfg, train_ds, test_ds)
-
- def _build_model(self):
- stream_fcn = 'clip_unet'
- self.attention = Attention(
- stream_fcn=(stream_fcn, None),
- in_shape=self.in_shape,
- n_rotations=1,
- preprocess=utils.preprocess,
- cfg=self.cfg,
- device=self.device_type,
- )
- self.transport = Transport(
- stream_fcn=(stream_fcn, None),
- in_shape=self.in_shape,
- n_rotations=self.n_rotations,
- crop_size=self.crop_size,
- preprocess=utils.preprocess,
- cfg=self.cfg,
- device=self.device_type,
- )
-
-
-class TwoStreamClipUNetTransporterAgent(TransporterAgent):
-
- def __init__(self, name, cfg, train_ds, test_ds):
- super().__init__(name, cfg, train_ds, test_ds)
-
- def _build_model(self):
- stream_one_fcn = 'plain_resnet'
- stream_two_fcn = 'clip_unet'
- self.attention = TwoStreamAttention(
- stream_fcn=(stream_one_fcn, stream_two_fcn),
- in_shape=self.in_shape,
- n_rotations=1,
- preprocess=utils.preprocess,
- cfg=self.cfg,
- device=self.device_type,
- )
- self.transport = TwoStreamTransport(
- stream_fcn=(stream_one_fcn, stream_two_fcn),
- in_shape=self.in_shape,
- n_rotations=self.n_rotations,
- crop_size=self.crop_size,
- preprocess=utils.preprocess,
- cfg=self.cfg,
- device=self.device_type,
- )
-
-
-class TwoStreamClipUNetLatTransporterAgent(TransporterAgent):
-
- def __init__(self, name, cfg, train_ds, test_ds):
- super().__init__(name, cfg, train_ds, test_ds)
-
- def _build_model(self):
- stream_one_fcn = 'plain_resnet_lat'
- stream_two_fcn = 'clip_unet_lat'
- self.attention = TwoStreamAttentionLat(
- stream_fcn=(stream_one_fcn, stream_two_fcn),
- in_shape=self.in_shape,
- n_rotations=1,
- preprocess=utils.preprocess,
- cfg=self.cfg,
- device=self.device_type,
- )
- self.transport = TwoStreamTransportLat(
- stream_fcn=(stream_one_fcn, stream_two_fcn),
- in_shape=self.in_shape,
- n_rotations=self.n_rotations,
- crop_size=self.crop_size,
- preprocess=utils.preprocess,
- cfg=self.cfg,
- device=self.device_type,
- )
-
-
-class TwoStreamClipWithoutSkipsTransporterAgent(TransporterAgent):
-
- def __init__(self, name, cfg, train_ds, test_ds):
- super().__init__(name, cfg, train_ds, test_ds)
-
- def _build_model(self):
- # TODO: lateral version
- stream_one_fcn = 'plain_resnet'
- stream_two_fcn = 'clip_woskip'
- self.attention = TwoStreamAttention(
- stream_fcn=(stream_one_fcn, stream_two_fcn),
- in_shape=self.in_shape,
- n_rotations=1,
- preprocess=utils.preprocess,
- cfg=self.cfg,
- device=self.device_type,
- )
- self.transport = TwoStreamTransport(
- stream_fcn=(stream_one_fcn, stream_two_fcn),
- in_shape=self.in_shape,
- n_rotations=self.n_rotations,
- crop_size=self.crop_size,
- preprocess=utils.preprocess,
- cfg=self.cfg,
- device=self.device_type,
- )
-
-
-class TwoStreamRN50BertUNetTransporterAgent(TransporterAgent):
-
- def __init__(self, name, cfg, train_ds, test_ds):
- super().__init__(name, cfg, train_ds, test_ds)
-
- def _build_model(self):
- # TODO: lateral version
- stream_one_fcn = 'plain_resnet'
- stream_two_fcn = 'rn50_bert_unet'
- self.attention = TwoStreamAttention(
- stream_fcn=(stream_one_fcn, stream_two_fcn),
- in_shape=self.in_shape,
- n_rotations=1,
- preprocess=utils.preprocess,
- cfg=self.cfg,
- device=self.device_type,
- )
- self.transport = TwoStreamTransport(
- stream_fcn=(stream_one_fcn, stream_two_fcn),
- in_shape=self.in_shape,
- n_rotations=self.n_rotations,
- crop_size=self.crop_size,
- preprocess=utils.preprocess,
- cfg=self.cfg,
- device=self.device_type,
- )
diff --git a/spaces/Gen-Sim/Gen-Sim/cliport/tasks/primitives.py b/spaces/Gen-Sim/Gen-Sim/cliport/tasks/primitives.py
deleted file mode 100644
index de6d4da015622d54c160f65dc9a1682bab649267..0000000000000000000000000000000000000000
--- a/spaces/Gen-Sim/Gen-Sim/cliport/tasks/primitives.py
+++ /dev/null
@@ -1,113 +0,0 @@
-"""Motion primitives."""
-
-import numpy as np
-from cliport.utils import utils
-
-
-class PickPlace():
- """Pick and place primitive."""
-
- def __init__(self, height=0.32, speed=0.01):
- self.height, self.speed = height, speed
-
- def __call__(self, movej, movep, ee, pose0, pose1):
- """Execute pick and place primitive.
-
- Args:
- movej: function to move robot joints.
- movep: function to move robot end effector pose.
- ee: robot end effector.
- pose0: SE(3) picking pose.
- pose1: SE(3) placing pose.
-
- Returns:
- timeout: robot movement timed out if True.
- """
-
- pick_pose, place_pose = pose0, pose1
-
- # Execute picking primitive.
- prepick_to_pick = ((0, 0, 0.32), (0, 0, 0, 1))
- postpick_to_pick = ((0, 0, self.height), (0, 0, 0, 1))
- prepick_pose = utils.multiply(pick_pose, prepick_to_pick)
- postpick_pose = utils.multiply(pick_pose, postpick_to_pick)
- timeout = movep(prepick_pose)
-
- # Move towards pick pose until contact is detected.
- delta = (np.float32([0, 0, -0.001]),
- utils.eulerXYZ_to_quatXYZW((0, 0, 0)))
- targ_pose = prepick_pose
- while not ee.detect_contact(): # and target_pose[2] > 0:
- targ_pose = utils.multiply(targ_pose, delta)
- timeout |= movep(targ_pose)
- if timeout:
- return True
-
- # Activate end effector, move up, and check picking success.
- ee.activate()
- timeout |= movep(postpick_pose, self.speed)
- pick_success = ee.check_grasp()
-
- # Execute placing primitive if pick is successful.
- if pick_success:
- preplace_to_place = ((0, 0, self.height), (0, 0, 0, 1))
- postplace_to_place = ((0, 0, 0.32), (0, 0, 0, 1))
- preplace_pose = utils.multiply(place_pose, preplace_to_place)
- postplace_pose = utils.multiply(place_pose, postplace_to_place)
- targ_pose = preplace_pose
- while not ee.detect_contact():
- targ_pose = utils.multiply(targ_pose, delta)
- timeout |= movep(targ_pose, self.speed)
- if timeout:
- return True
- ee.release()
- timeout |= movep(postplace_pose)
-
- # Move to prepick pose if pick is not successful.
- else:
- ee.release()
- timeout |= movep(prepick_pose)
-
- return timeout
-
-
-def push(movej, movep, ee, pose0, pose1): # pylint: disable=unused-argument
- """Execute pushing primitive.
-
- Args:
- movej: function to move robot joints.
- movep: function to move robot end effector pose.
- ee: robot end effector.
- pose0: SE(3) starting pose.
- pose1: SE(3) ending pose.
-
- Returns:
- timeout: robot movement timed out if True.
- """
-
- # Adjust push start and end positions.
- pos0 = np.float32((pose0[0][0], pose0[0][1], 0.005))
- pos1 = np.float32((pose1[0][0], pose1[0][1], 0.005))
- vec = np.float32(pos1) - np.float32(pos0)
- length = np.linalg.norm(vec)
- vec = vec / length
- pos0 -= vec * 0.02
- pos1 -= vec * 0.05
-
- # Align spatula against push direction.
- theta = np.arctan2(vec[1], vec[0])
- rot = utils.eulerXYZ_to_quatXYZW((0, 0, theta))
-
- over0 = (pos0[0], pos0[1], 0.31)
- over1 = (pos1[0], pos1[1], 0.31)
-
- # Execute push.
- timeout = movep((over0, rot))
- timeout |= movep((pos0, rot))
- n_push = np.int32(np.floor(np.linalg.norm(pos1 - pos0) / 0.01))
- for _ in range(n_push):
- target = pos0 + vec * n_push * 0.01
- timeout |= movep((target, rot), speed=0.003)
- timeout |= movep((pos1, rot), speed=0.003)
- timeout |= movep((over1, rot))
- return timeout
diff --git a/spaces/Gen-Sim/Gen-Sim/cliport/utils/utils.py b/spaces/Gen-Sim/Gen-Sim/cliport/utils/utils.py
deleted file mode 100644
index 8d1ecf6a5925b7a4e7ac254b8bdbf5d3f1ed1ee4..0000000000000000000000000000000000000000
--- a/spaces/Gen-Sim/Gen-Sim/cliport/utils/utils.py
+++ /dev/null
@@ -1,1257 +0,0 @@
-"""Miscellaneous utilities."""
-
-import cv2
-import random
-import matplotlib
-import matplotlib.pyplot as plt
-import meshcat
-import meshcat.geometry as g
-import meshcat.transformations as mtf
-
-import PIL
-import yaml
-import numpy as np
-from transforms3d import euler
-
-import pybullet as p
-import kornia
-from omegaconf import OmegaConf
-
-import os
-import torch
-import torchvision
-
-
-# -----------------------------------------------------------------------------
-# HEIGHTMAP UTILS
-# -----------------------------------------------------------------------------
-
-def get_heightmap(points, colors, bounds, pixel_size):
- """Get top-down (z-axis) orthographic heightmap image from 3D pointcloud.
-
- Args:
- points: HxWx3 float array of 3D points in world coordinates.
- colors: HxWx3 uint8 array of values in range 0-255 aligned with points.
- bounds: 3x2 float array of values (rows: X,Y,Z; columns: min,max) defining
- region in 3D space to generate heightmap in world coordinates.
- pixel_size: float defining size of each pixel in meters.
-
- Returns:
- heightmap: HxW float array of height (from lower z-bound) in meters.
- colormap: HxWx3 uint8 array of backprojected color aligned with heightmap.
- """
- width = int(np.round((bounds[0, 1] - bounds[0, 0]) / pixel_size))
- height = int(np.round((bounds[1, 1] - bounds[1, 0]) / pixel_size))
- heightmap = np.zeros((height, width), dtype=np.float32)
- colormap = np.zeros((height, width, colors.shape[-1]), dtype=np.uint8)
-
- # Filter out 3D points that are outside of the predefined bounds.
- ix = (points[Ellipsis, 0] >= bounds[0, 0]) & (points[Ellipsis, 0] < bounds[0, 1])
- iy = (points[Ellipsis, 1] >= bounds[1, 0]) & (points[Ellipsis, 1] < bounds[1, 1])
- iz = (points[Ellipsis, 2] >= bounds[2, 0]) & (points[Ellipsis, 2] < bounds[2, 1])
- valid = ix & iy & iz
- points = points[valid]
- colors = colors[valid]
-
- # Sort 3D points by z-value, which works with array assignment to simulate
- # z-buffering for rendering the heightmap image.
- iz = np.argsort(points[:, -1])
- points, colors = points[iz], colors[iz]
- px = np.int32(np.floor((points[:, 0] - bounds[0, 0]) / pixel_size))
- py = np.int32(np.floor((points[:, 1] - bounds[1, 0]) / pixel_size))
- px = np.clip(px, 0, width - 1)
- py = np.clip(py, 0, height - 1)
- heightmap[py, px] = points[:, 2] - bounds[2, 0]
- for c in range(colors.shape[-1]):
- colormap[py, px, c] = colors[:, c]
- return heightmap, colormap
-
-
-def get_pointcloud(depth, intrinsics):
- """Get 3D pointcloud from perspective depth image.
-
- Args:
- depth: HxW float array of perspective depth in meters.
- intrinsics: 3x3 float array of camera intrinsics matrix.
-
- Returns:
- points: HxWx3 float array of 3D points in camera coordinates.
- """
- height, width = depth.shape
- xlin = np.linspace(0, width - 1, width)
- ylin = np.linspace(0, height - 1, height)
- px, py = np.meshgrid(xlin, ylin)
- px = (px - intrinsics[0, 2]) * (depth / intrinsics[0, 0])
- py = (py - intrinsics[1, 2]) * (depth / intrinsics[1, 1])
- points = np.float32([px, py, depth]).transpose(1, 2, 0)
- return points
-
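-def _demo_pointcloud_to_heightmap():
-    # Illustrative sketch (not part of the original module): back-project a
-    # synthetic depth image with made-up intrinsics, then render a 10 cm x
-    # 10 cm heightmap at 1 mm resolution; bounds rows are X, Y, Z (min, max).
-    intrinsics = np.float32([[450., 0., 320.], [0., 450., 240.], [0., 0., 1.]])
-    depth = np.ones((480, 640), dtype=np.float32)  # flat surface 1 m away
-    colors = np.full((480, 640, 3), 127, dtype=np.uint8)
-    points = get_pointcloud(depth, intrinsics)
-    bounds = np.float32([[-0.05, 0.05], [-0.05, 0.05], [0.5, 1.5]])
-    hmap, cmap = get_heightmap(points, colors, bounds, pixel_size=0.001)
-    return hmap.shape, cmap.shape  # (100, 100) and (100, 100, 3)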
-
-def transform_pointcloud(points, transform):
- """Apply rigid transformation to 3D pointcloud.
-
- Args:
- points: HxWx3 float array of 3D points in camera coordinates.
- transform: 4x4 float array representing a rigid transformation matrix.
-
- Returns:
- points: HxWx3 float array of transformed 3D points.
- """
- padding = ((0, 0), (0, 0), (0, 1))
- homogen_points = np.pad(points.copy(), padding,
- 'constant', constant_values=1)
- for i in range(3):
- points[Ellipsis, i] = np.sum(transform[i, :] * homogen_points, axis=-1)
- return points
-
-
-def reconstruct_heightmaps(color, depth, configs, bounds, pixel_size):
- """Reconstruct top-down heightmap views from multiple 3D pointclouds."""
- heightmaps, colormaps = [], []
- for color, depth, config in zip(color, depth, configs):
- intrinsics = np.array(config['intrinsics']).reshape(3, 3)
- xyz = get_pointcloud(depth, intrinsics)
- position = np.array(config['position']).reshape(3, 1)
- rotation = p.getMatrixFromQuaternion(config['rotation'])
- rotation = np.array(rotation).reshape(3, 3)
- transform = np.eye(4)
- transform[:3, :] = np.hstack((rotation, position))
- xyz = transform_pointcloud(xyz, transform)
- heightmap, colormap = get_heightmap(xyz, color, bounds, pixel_size)
- heightmaps.append(heightmap)
- colormaps.append(colormap)
- return heightmaps, colormaps
-
-
-def pix_to_xyz(pixel, height, bounds, pixel_size, skip_height=False):
- """Convert from pixel location on heightmap to 3D position."""
- u, v = pixel
- x = bounds[0, 0] + v * pixel_size
- y = bounds[1, 0] + u * pixel_size
- if not skip_height:
- z = bounds[2, 0] + height[u, v]
- else:
- z = 0.0
- return (x, y, z)
-
-
-def xyz_to_pix(position, bounds, pixel_size):
- """Convert from 3D position to pixel location on heightmap."""
- u = int(np.round((position[1] - bounds[1, 0]) / pixel_size))
- v = int(np.round((position[0] - bounds[0, 0]) / pixel_size))
- return (u, v)
-
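-def _demo_pix_xyz_roundtrip():
-    # Illustrative sketch (not part of the original module): map a heightmap
-    # pixel to a 3D point and back; bounds, heightmap and pixel size are
-    # made-up values, with heightmap rows indexing Y and columns indexing X.
-    bounds = np.float32([[0.25, 0.75], [-0.5, 0.5], [0.0, 0.3]])
-    height = np.zeros((320, 160), dtype=np.float32)
-    pixel_size = 0.003125
-    xyz = pix_to_xyz((160, 80), height, bounds, pixel_size)
-    return xyz_to_pix(xyz, bounds, pixel_size)  # back to (160, 80)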
-
-def unproject_vectorized(uv_coordinates, depth_values,
- intrinsic,
- distortion):
- """Vectorized version of unproject(), for N points.
-
- Args:
- uv_coordinates: pixel coordinates to unproject of shape (n, 2).
- depth_values: depth values corresponding index-wise to the uv_coordinates of
- shape (n).
- intrinsic: array of shape (3, 3). This is typically the return value of
- intrinsics_to_matrix.
- distortion: camera distortion parameters of shape (5,).
-
- Returns:
- xyz coordinates in camera frame of shape (n, 3).
- """
- cam_mtx = intrinsic # shape [3, 3]
- cam_dist = np.array(distortion) # shape [5]
-
- # shape of points_undistorted is [N, 2] after the squeeze().
- points_undistorted = cv2.undistortPoints(
- uv_coordinates.reshape((-1, 1, 2)), cam_mtx, cam_dist).squeeze()
-
- x = points_undistorted[:, 0] * depth_values
- y = points_undistorted[:, 1] * depth_values
-
- xyz = np.vstack((x, y, depth_values)).T
- return xyz
-
-
-def unproject_depth_vectorized(im_depth, depth_dist,
- camera_mtx,
- camera_dist):
- """Unproject depth image into 3D point cloud, using calibration.
-
- Args:
- im_depth: raw depth image, pre-calibration of shape (height, width).
- depth_dist: depth distortion parameters of shape (8,)
- camera_mtx: intrinsics matrix of shape (3, 3). This is typically the return
- value of intrinsics_to_matrix.
- camera_dist: camera distortion parameters shape (5,).
-
- Returns:
-        numpy array of shape [H*W, 3]; each row is an xyz coordinate.
- """
- h, w = im_depth.shape
-
- # shape of each u_map, v_map is [H, W].
- u_map, v_map = np.meshgrid(np.linspace(
- 0, w - 1, w), np.linspace(0, h - 1, h))
-
- adjusted_depth = depth_dist[0] + im_depth * depth_dist[1]
-
- # shape after stack is [N, 2], where N = H * W.
- uv_coordinates = np.stack((u_map.reshape(-1), v_map.reshape(-1)), axis=-1)
-
- return unproject_vectorized(uv_coordinates, adjusted_depth.reshape(-1),
- camera_mtx, camera_dist)
-
-
-# -----------------------------------------------------------------------------
-# MATH UTILS
-# -----------------------------------------------------------------------------
-
-
-def sample_distribution(prob, n_samples=1):
- """Sample data point from a custom distribution."""
- flat_prob = prob.flatten() / np.sum(prob)
- rand_ind = np.random.choice(
- np.arange(len(flat_prob)), n_samples, p=flat_prob, replace=False)
- rand_ind_coords = np.array(np.unravel_index(rand_ind, prob.shape)).T
- return np.int32(rand_ind_coords.squeeze())
-
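-def _demo_sample_distribution():
-    # Illustrative sketch (not part of the original module): draw one
-    # (row, col) index from a 2D probability map; all mass sits on one cell
-    # here, so the draw is deterministic.
-    prob = np.zeros((4, 4), dtype=np.float32)
-    prob[2, 3] = 1.0
-    return sample_distribution(prob)  # array([2, 3])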
-
-# -------------------------------------------------------------------------
-# Transformation Helper Functions
-# -------------------------------------------------------------------------
-
-
-def invert(pose):
- return p.invertTransform(pose[0], pose[1])
-
-
-def multiply(pose0, pose1):
- return p.multiplyTransforms(pose0[0], pose0[1], pose1[0], pose1[1])
-
-
-def apply(pose, position):
- position = np.float32(position)
- position_shape = position.shape
- position = np.float32(position).reshape(3, -1)
- rotation = np.float32(p.getMatrixFromQuaternion(pose[1])).reshape(3, 3)
- translation = np.float32(pose[0]).reshape(3, 1)
- position = rotation @ position + translation
- return tuple(position.reshape(position_shape))
-
-
-def eulerXYZ_to_quatXYZW(rotation): # pylint: disable=invalid-name
-    """Abstraction for converting from a 3-parameter rotation to quaternion.
-
- This will help us easily switch which rotation parameterization we use.
- Quaternion should be in xyzw order for pybullet.
-
- Args:
- rotation: a 3-parameter rotation, in xyz order tuple of 3 floats
-
- Returns:
- quaternion, in xyzw order, tuple of 4 floats
- """
- euler_zxy = (rotation[2], rotation[0], rotation[1])
- quaternion_wxyz = euler.euler2quat(*euler_zxy, axes='szxy')
- q = quaternion_wxyz
- quaternion_xyzw = (q[1], q[2], q[3], q[0])
- return quaternion_xyzw
-
-
-def quatXYZW_to_eulerXYZ(quaternion_xyzw): # pylint: disable=invalid-name
-    """Abstraction for converting from quaternion to a 3-parameter rotation.
-
- This will help us easily switch which rotation parameterization we use.
- Quaternion should be in xyzw order for pybullet.
-
- Args:
- quaternion_xyzw: in xyzw order, tuple of 4 floats
-
- Returns:
- rotation: a 3-parameter rotation, in xyz order, tuple of 3 floats
- """
- q = quaternion_xyzw
- quaternion_wxyz = np.array([q[3], q[0], q[1], q[2]])
- euler_zxy = euler.quat2euler(quaternion_wxyz, axes='szxy')
- euler_xyz = (euler_zxy[1], euler_zxy[2], euler_zxy[0])
- return euler_xyz
-
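-def _demo_euler_quat_roundtrip():
-    # Illustrative sketch (not part of the original module): the two
-    # conversions above invert each other for a plain 90-degree yaw
-    # (up to floating point error).
-    quat_xyzw = eulerXYZ_to_quatXYZW((0.0, 0.0, np.pi / 2))
-    return quatXYZW_to_eulerXYZ(quat_xyzw)  # approximately (0.0, 0.0, pi / 2)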
-
-def apply_transform(transform_to_from, points_from):
- r"""Transforms points (3D) into new frame.
-
- Using transform_to_from notation.
-
- Args:
- transform_to_from: numpy.ndarray of shape [B,4,4], SE3
- points_from: numpy.ndarray of shape [B,3,N]
-
- Returns:
- points_to: numpy.ndarray of shape [B,3,N]
- """
- num_points = points_from.shape[-1]
-
- # non-batched
- if len(transform_to_from.shape) == 2:
- ones = np.ones((1, num_points))
-
-        # make each of these a homogeneous vector
- points_from = np.vstack((points_from, ones)) # [4,N]
- points_to = transform_to_from @ points_from # [4,N]
- return points_to[0:3, :] # [3,N]
-
- # batched
- else:
- assert len(transform_to_from.shape) == 3
- batch_size = transform_to_from.shape[0]
-        ones = np.ones((batch_size, 1, num_points))
-        points_from = np.concatenate((points_from, ones), axis=1)
- assert points_from.shape[1] == 4
- points_to = transform_to_from @ points_from
- return points_to[:, 0:3, :]
-
-
-# -----------------------------------------------------------------------------
-# IMAGE UTILS
-# -----------------------------------------------------------------------------
-
-
-def preprocess(img, dist='transporter'):
- """Pre-process input (subtract mean, divide by std)."""
-
- transporter_color_mean = [0.18877631, 0.18877631, 0.18877631]
- transporter_color_std = [0.07276466, 0.07276466, 0.07276466]
- transporter_depth_mean = 0.00509261
- transporter_depth_std = 0.00903967
-
- franka_color_mean = [0.622291933, 0.628313992, 0.623031488]
- franka_color_std = [0.168154213, 0.17626014, 0.184527364]
- franka_depth_mean = 0.872146842
- franka_depth_std = 0.195743116
-
- clip_color_mean = [0.48145466, 0.4578275, 0.40821073]
- clip_color_std = [0.26862954, 0.26130258, 0.27577711]
-
- # choose distribution
- if dist == 'clip':
- color_mean = clip_color_mean
- color_std = clip_color_std
- elif dist == 'mdetr':
- color_mean = [0.485, 0.456, 0.406]
- color_std = [0.229, 0.224, 0.225]
- elif dist == 'franka':
- color_mean = franka_color_mean
- color_std = franka_color_std
- else:
- color_mean = transporter_color_mean
- color_std = transporter_color_std
-
- if dist == 'franka':
- depth_mean = franka_depth_mean
- depth_std = franka_depth_std
- else:
- depth_mean = transporter_depth_mean
- depth_std = transporter_depth_std
-
- # convert to pytorch tensor (if required)
- if type(img) == torch.Tensor:
- def cast_shape(stat, img):
- tensor = torch.from_numpy(np.array(stat)).to(device=img.device, dtype=img.dtype)
- tensor = tensor.unsqueeze(0).unsqueeze(-1).unsqueeze(-1)
- tensor = tensor.repeat(img.shape[0], 1, img.shape[-2], img.shape[-1])
- return tensor
-
- color_mean = cast_shape(color_mean, img)
- color_std = cast_shape(color_std, img)
- depth_mean = cast_shape(depth_mean, img)
- depth_std = cast_shape(depth_std, img)
-
- # normalize
- img = img.clone()
- img[:, :3, :, :] = ((img[:, :3, :, :] / 255 - color_mean) / color_std)
- img[:, 3:, :, :] = ((img[:, 3:, :, :] - depth_mean) / depth_std)
- else:
- # normalize
- img[:, :, :3] = (img[:, :, :3] / 255 - color_mean) / color_std
- img[:, :, 3:] = (img[:, :, 3:] - depth_mean) / depth_std
-
- # if dist == 'franka' or dist == 'transporter':
- # print(np.mean(img[:,:3,:,:].detach().cpu().numpy(), axis=(0,2,3)),
- # np.mean(img[:,3,:,:].detach().cpu().numpy()))
-
- return img
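-
-
-def _demo_preprocess():
-    # Illustrative sketch (not part of the original module): normalize a
-    # single HxWx6 heightmap (3 color channels plus 3 replicated depth
-    # channels) with the default transporter statistics; shape is unchanged.
-    img = np.zeros((320, 160, 6), dtype=np.float32)
-    return preprocess(img, dist='transporter').shape  # (320, 160, 6)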
-
-def map_kit_scale(scale):
- return (scale[0] / 10, scale[1] / 10, scale[2] / 10)
-
-def deprocess(img):
- color_mean = 0.18877631
- depth_mean = 0.00509261
- color_std = 0.07276466
- depth_std = 0.00903967
-
- img[:, :, :3] = np.uint8(((img[:, :, :3] * color_std) + color_mean) * 255)
- img[:, :, 3:] = np.uint8(((img[:, :, 3:] * depth_std) + depth_mean) * 255)
- return img
-
-
-def get_fused_heightmap(obs, configs, bounds, pix_size):
- """Reconstruct orthographic heightmaps with segmentation masks."""
- heightmaps, colormaps = reconstruct_heightmaps(
- obs['color'], obs['depth'], configs, bounds, pix_size)
- colormaps = np.float32(colormaps)
- heightmaps = np.float32(heightmaps)
-
- # Fuse maps from different views.
- valid = np.sum(colormaps, axis=3) > 0
- repeat = np.sum(valid, axis=0)
- repeat[repeat == 0] = 1
- cmap = np.sum(colormaps, axis=0) / repeat[Ellipsis, None]
- cmap = np.uint8(np.round(cmap))
- hmap = np.max(heightmaps, axis=0) # Max to handle occlusions.
- return cmap, hmap
-
-
-def get_image_transform(theta, trans, pivot=(0, 0)):
- """Compute composite 2D rigid transformation matrix."""
- # Get 2D rigid transformation matrix that rotates an image by theta (in
- # radians) around pivot (in pixels) and translates by trans vector (in
- # pixels)
- pivot_t_image = np.array([[1., 0., -pivot[0]], [0., 1., -pivot[1]],
- [0., 0., 1.]])
- image_t_pivot = np.array([[1., 0., pivot[0]], [0., 1., pivot[1]],
- [0., 0., 1.]])
- transform = np.array([[np.cos(theta), -np.sin(theta), trans[0]],
- [np.sin(theta), np.cos(theta), trans[1]], [0., 0., 1.]])
- return np.dot(image_t_pivot, np.dot(transform, pivot_t_image))
-
-
-def check_transform(image, pixel, transform):
- """Valid transform only if pixel locations are still in FoV after transform."""
- new_pixel = np.flip(
- np.int32(
- np.round(
- np.dot(transform,
- np.float32([pixel[1], pixel[0],
- 1.]).reshape(3, 1))))[:2].squeeze())
- valid = np.all(
- new_pixel >= 0
- ) and new_pixel[0] < image.shape[0] and new_pixel[1] < image.shape[1]
- return valid, new_pixel
-
-
-def get_se3_from_image_transform(theta, trans, pivot, heightmap, bounds,
- pixel_size):
- """Calculate SE3 from image transform."""
- position_center = pix_to_xyz(
- np.flip(np.int32(np.round(pivot))),
- heightmap,
- bounds,
- pixel_size,
- skip_height=False)
- new_position_center = pix_to_xyz(
- np.flip(np.int32(np.round(pivot + trans))),
- heightmap,
- bounds,
- pixel_size,
- skip_height=True)
- # Don't look up the z height, it might get augmented out of frame
- new_position_center = (new_position_center[0], new_position_center[1],
- position_center[2])
-
- delta_position = np.array(new_position_center) - np.array(position_center)
-
- t_world_center = np.eye(4)
- t_world_center[0:3, 3] = np.array(position_center)
-
- t_centernew_center = np.eye(4)
- euler_zxy = (-theta, 0, 0)
- t_centernew_center[0:3, 0:3] = euler.euler2mat(
- *euler_zxy, axes='szxy')[0:3, 0:3]
-
- t_centernew_center_tonly = np.eye(4)
- t_centernew_center_tonly[0:3, 3] = -delta_position
- t_centernew_center = t_centernew_center @ t_centernew_center_tonly
-
- t_world_centernew = t_world_center @ np.linalg.inv(t_centernew_center)
- return t_world_center, t_world_centernew
-
-
-def get_random_image_transform_params(image_size, theta_sigma=60):
- theta = np.random.normal(0, np.deg2rad(theta_sigma))
-
- trans_sigma = np.min(image_size) / 6
- trans = np.random.normal(0, trans_sigma, size=2) # [x, y]
- pivot = (image_size[1] / 2, image_size[0] / 2)
- return theta, trans, pivot
-
-
-def q_mult(q1, q2):
- w1, x1, y1, z1 = q1
- w2, x2, y2, z2 = q2
- w = w1 * w2 - x1 * x2 - y1 * y2 - z1 * z2
- x = w1 * x2 + x1 * w2 + y1 * z2 - z1 * y2
- y = w1 * y2 + y1 * w2 + z1 * x2 - x1 * z2
- z = w1 * z2 + z1 * w2 + x1 * y2 - y1 * x2
- return (w, x, y, z)
-
-def perturb(input_image, pixels, theta_sigma=60, add_noise=False):
- """Data augmentation on images."""
- image_size = input_image.shape[:2]
-
- # Compute random rigid transform.
- while True:
- theta, trans, pivot = get_random_image_transform_params(image_size, theta_sigma=theta_sigma)
- transform = get_image_transform(theta, trans, pivot)
- transform_params = theta, trans, pivot
-
- # Ensure pixels remain in the image after transform.
- is_valid = True
- new_pixels = []
- new_rounded_pixels = []
- for pixel in pixels:
- pixel = np.float32([pixel[1], pixel[0], 1.]).reshape(3, 1)
-
- rounded_pixel = np.int32(np.round(transform @ pixel))[:2].squeeze()
- rounded_pixel = np.flip(rounded_pixel)
-
- pixel = (transform @ pixel)[:2].squeeze()
- pixel = np.flip(pixel)
-
- in_fov_rounded = rounded_pixel[0] < image_size[0] and rounded_pixel[
- 1] < image_size[1]
- in_fov = pixel[0] < image_size[0] and pixel[1] < image_size[1]
-
- is_valid = is_valid and np.all(rounded_pixel >= 0) and np.all(
- pixel >= 0) and in_fov_rounded and in_fov
-
- new_pixels.append(pixel)
- new_rounded_pixels.append(rounded_pixel)
- if is_valid:
- break
-
- # Apply rigid transform to image and pixel labels.
- input_image = cv2.warpAffine(
- input_image,
- transform[:2, :], (image_size[1], image_size[0]),
- flags=cv2.INTER_LINEAR)
-
- # Apply noise
- color = np.int32(input_image[:,:,:3])
- depth = np.float32(input_image[:,:,3:])
-
- if add_noise:
- color += np.int32(np.random.normal(0, 3, image_size + (3,)))
- color = np.uint8(np.clip(color, 0, 255))
-
- depth += np.float32(np.random.normal(0, 0.003, image_size + (3,)))
-
- input_image = np.concatenate((color, depth), axis=2)
-
- # length of 5
- transform_params = np.array([theta, trans[0], trans[1], pivot[0], pivot[1]])
- return input_image, new_pixels, new_rounded_pixels, transform_params
-
-
-def apply_perturbation(input_image, transform_params):
- '''Apply data augmentation with specific transform params'''
- image_size = input_image.shape[:2]
-
- # Apply rigid transform to image and pixel labels.
- theta, trans, pivot = transform_params[0], transform_params[1:3], transform_params[3:5]
- transform = get_image_transform(theta, trans, pivot)
-
- input_image = cv2.warpAffine(
- input_image,
- transform[:2, :], (image_size[1], image_size[0]),
- flags=cv2.INTER_LINEAR)
- return input_image
-
-
-class ImageRotator:
- """Rotate for n rotations."""
- # Reference: https://kornia.readthedocs.io/en/latest/tutorials/warp_affine.html?highlight=rotate
-
- def __init__(self, n_rotations):
- self.angles = []
- for i in range(n_rotations):
- theta = i * 2 * 180 / n_rotations
- self.angles.append(theta)
-
- def __call__(self, x_list, pivot, reverse=False):
- rot_x_list = []
- for i, angle in enumerate(self.angles):
- x = x_list[i]# .unsqueeze(0)
- # create transformation (rotation)
- size = len(x)
- alpha = angle if not reverse else (-1.0 * angle) # in degrees
- angle = torch.ones(size) * alpha
-
- # define the rotation center
- if type(pivot) is not torch.Tensor:
- center = torch.FloatTensor(pivot)[...,[1,0]]
- center = center.view(1,-1).repeat((size,1))
- else:
- center = pivot[...,[1,0]].view(1,-1).clone().to(angle.device)
- # center: torch.tensor = torch.ones(size, 2)
- # center[..., 0] = int(pivot[1])
- # center[..., 1] = int(pivot[0])
-
- # define the scale factor
- scale = torch.ones(size, 2)
-
- # # compute the transformation matrix
- M = kornia.geometry.get_rotation_matrix2d(center, angle, scale)
- # x_warped = torchvision.transforms.functional.affine(x.float(), scale=1.,
- # center=[int(pivot[1]),int(pivot[0])],
- # angle=alpha, translate=[0,0], shear=0,
- # interpolation= torchvision.transforms.InterpolationMode.BILINEAR)
-
-
- # apply the transformation to original image
- # M = M.repeat(len(x), 1, 1)
- _, _, h, w = x.shape
- x_warped = kornia.geometry.transform.warp_affine(x.float(), M.to(x.device), dsize=(h, w))
- rot_x_list.append(x_warped)
-
- return rot_x_list
-
-# KD Tree Utils
-# Construct K-D Tree to roughly estimate how many objects can fit inside the box.
-class TreeNode:
-
- def __init__(self, parent, children, bbox):
- self.parent = parent
- self.children = children
- self.bbox = bbox # min x, min y, min z, max x, max y, max z
-
-def KDTree(node, min_object_dim, margin, bboxes):
- size = node.bbox[3:] - node.bbox[:3]
-
- # Choose which axis to split.
- split = size > 2 * min_object_dim
- if np.sum(split) == 0:
- bboxes.append(node.bbox)
- return
- split = np.float32(split) / np.sum(split)
- split_axis = np.random.choice(range(len(split)), 1, p=split)[0]
-
- # Split along chosen axis and create 2 children
- cut_ind = np.random.rand() * \
- (size[split_axis] - 2 * min_object_dim) + \
- node.bbox[split_axis] + min_object_dim
- child1_bbox = node.bbox.copy()
- child1_bbox[3 + split_axis] = cut_ind - margin / 2.
- child2_bbox = node.bbox.copy()
- child2_bbox[split_axis] = cut_ind + margin / 2.
- node.children = [
- TreeNode(node, [], bbox=child1_bbox),
- TreeNode(node, [], bbox=child2_bbox)
- ]
- KDTree(node.children[0], min_object_dim, margin, bboxes)
- KDTree(node.children[1], min_object_dim, margin, bboxes)
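-
-def _demo_kdtree_partition():
-    # Illustrative sketch (not part of the original module): recursively split
-    # a 0.3 m x 0.3 m x 0.1 m workspace into leaf boxes at least ~5 cm wide
-    # with a 1 cm margin; the exact boxes vary because splits are random.
-    root = TreeNode(None, [], bbox=np.float32([0., 0., 0., 0.3, 0.3, 0.1]))
-    leaf_bboxes = []
-    KDTree(root, min_object_dim=0.05, margin=0.01, bboxes=leaf_bboxes)
-    return leaf_bboxes  # each entry: [min_x, min_y, min_z, max_x, max_y, max_z]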
-
-# -----------------------------------------------------------------------------
-# Shape Name UTILS
-# -----------------------------------------------------------------------------
-google_seen_obj_shapes = {
- 'train': [
- 'alarm clock',
- 'android toy',
- 'black boot with leopard print',
- 'black fedora',
- 'black razer mouse',
- 'black sandal',
- 'black shoe with orange stripes',
- 'bull figure',
- 'butterfinger chocolate',
- 'c clamp',
- 'can opener',
- 'crayon box',
- 'dog statue',
- 'frypan',
- 'green and white striped towel',
- 'grey soccer shoe with cleats',
- 'hard drive',
- 'honey dipper',
- 'magnifying glass',
- 'mario figure',
- 'nintendo 3ds',
- 'nintendo cartridge',
- 'office depot box',
- 'orca plush toy',
- 'pepsi gold caffeine free box',
- 'pepsi wild cherry box',
- 'porcelain cup',
- 'purple tape',
- 'red and white flashlight',
- 'rhino figure',
- 'rocket racoon figure',
- 'scissors',
- 'silver tape',
- 'spatula with purple head',
- 'spiderman figure',
- 'tablet',
- 'toy school bus',
- ],
- 'val': [
- 'ball puzzle',
- 'black and blue sneakers',
- 'black shoe with green stripes',
- 'brown fedora',
- 'dinosaur figure',
- 'hammer',
- 'light brown boot with golden laces',
- 'lion figure',
- 'pepsi max box',
- 'pepsi next box',
- 'porcelain salad plate',
- 'porcelain spoon',
- 'red and white striped towel',
- 'red cup',
- 'screwdriver',
- 'toy train',
- 'unicorn toy',
- 'white razer mouse',
- 'yoshi figure'
- ],
- 'test': [
- 'ball puzzle',
- 'black and blue sneakers',
- 'black shoe with green stripes',
- 'brown fedora',
- 'dinosaur figure',
- 'hammer',
- 'light brown boot with golden laces',
- 'lion figure',
- 'pepsi max box',
- 'pepsi next box',
- 'porcelain salad plate',
- 'porcelain spoon',
- 'red and white striped towel',
- 'red cup',
- 'screwdriver',
- 'toy train',
- 'unicorn toy',
- 'white razer mouse',
- 'yoshi figure'
- ],
- }
-
-google_unseen_obj_shapes = {
- 'train': [
- 'alarm clock',
- 'android toy',
- 'black boot with leopard print',
- 'black fedora',
- 'black razer mouse',
- 'black sandal',
- 'black shoe with orange stripes',
- 'bull figure',
- 'butterfinger chocolate',
- 'c clamp',
- 'can opener',
- 'crayon box',
- 'dog statue',
- 'frypan',
- 'green and white striped towel',
- 'grey soccer shoe with cleats',
- 'hard drive',
- 'honey dipper',
- 'magnifying glass',
- 'mario figure',
- 'nintendo 3ds',
- 'nintendo cartridge',
- 'office depot box',
- 'orca plush toy',
- 'pepsi gold caffeine free box',
- 'pepsi wild cherry box',
- 'porcelain cup',
- 'purple tape',
- 'red and white flashlight',
- 'rhino figure',
- 'rocket racoon figure',
- 'scissors',
- 'silver tape',
- 'spatula with purple head',
- 'spiderman figure',
- 'tablet',
- 'toy school bus',
- ],
- 'val': [
- 'ball puzzle',
- 'black and blue sneakers',
- 'black shoe with green stripes',
- 'brown fedora',
- 'dinosaur figure',
- 'hammer',
- 'light brown boot with golden laces',
- 'lion figure',
- 'pepsi max box',
- 'pepsi next box',
- 'porcelain salad plate',
- 'porcelain spoon',
- 'red and white striped towel',
- 'red cup',
- 'screwdriver',
- 'toy train',
- 'unicorn toy',
- 'white razer mouse',
- 'yoshi figure'
- ],
- 'test': [
- 'ball puzzle',
- 'black and blue sneakers',
- 'black shoe with green stripes',
- 'brown fedora',
- 'dinosaur figure',
- 'hammer',
- 'light brown boot with golden laces',
- 'lion figure',
- 'pepsi max box',
- 'pepsi next box',
- 'porcelain salad plate',
- 'porcelain spoon',
- 'red and white striped towel',
- 'red cup',
- 'screwdriver',
- 'toy train',
- 'unicorn toy',
- 'white razer mouse',
- 'yoshi figure'
- ],
- }
-
-google_all_shapes = {
- 'train': [
- 'alarm clock',
- 'android toy',
- 'ball puzzle',
- 'black and blue sneakers',
- 'black boot with leopard print',
- 'black fedora',
- 'black razer mouse',
- 'black sandal',
- 'black shoe with green stripes',
- 'black shoe with orange stripes',
- 'brown fedora',
- 'bull figure',
- 'butterfinger chocolate',
- 'c clamp',
- 'can opener',
- 'crayon box',
- 'dinosaur figure',
- 'dog statue',
- 'frypan',
- 'green and white striped towel',
- 'grey soccer shoe with cleats',
- 'hammer',
- 'hard drive',
- 'honey dipper',
- 'light brown boot with golden laces',
- 'lion figure',
- 'magnifying glass',
- 'mario figure',
- 'nintendo 3ds',
- 'nintendo cartridge',
- 'office depot box',
- 'orca plush toy',
- 'pepsi gold caffeine free box',
- 'pepsi max box',
- 'pepsi next box',
- 'pepsi wild cherry box',
- 'porcelain cup',
- 'porcelain salad plate',
- 'porcelain spoon',
- 'purple tape',
- 'red and white flashlight',
- 'red and white striped towel',
- 'red cup',
- 'rhino figure',
- 'rocket racoon figure',
- 'scissors',
- 'screwdriver',
- 'silver tape',
- 'spatula with purple head',
- 'spiderman figure',
- 'tablet',
- 'toy school bus',
- 'toy train',
- 'unicorn toy',
- 'white razer mouse',
- 'yoshi figure',
- ],
- 'val': [
- 'alarm clock',
- 'android toy',
- 'ball puzzle',
- 'black and blue sneakers',
- 'black boot with leopard print',
- 'black fedora',
- 'black razer mouse',
- 'black sandal',
- 'black shoe with green stripes',
- 'black shoe with orange stripes',
- 'brown fedora',
- 'bull figure',
- 'butterfinger chocolate',
- 'c clamp',
- 'can opener',
- 'crayon box',
- 'dinosaur figure',
- 'dog statue',
- 'frypan',
- 'green and white striped towel',
- 'grey soccer shoe with cleats',
- 'hammer',
- 'hard drive',
- 'honey dipper',
- 'light brown boot with golden laces',
- 'lion figure',
- 'magnifying glass',
- 'mario figure',
- 'nintendo 3ds',
- 'nintendo cartridge',
- 'office depot box',
- 'orca plush toy',
- 'pepsi gold caffeine free box',
- 'pepsi max box',
- 'pepsi next box',
- 'pepsi wild cherry box',
- 'porcelain cup',
- 'porcelain salad plate',
- 'porcelain spoon',
- 'purple tape',
- 'red and white flashlight',
- 'red and white striped towel',
- 'red cup',
- 'rhino figure',
- 'rocket racoon figure',
- 'scissors',
- 'screwdriver',
- 'silver tape',
- 'spatula with purple head',
- 'spiderman figure',
- 'tablet',
- 'toy school bus',
- 'toy train',
- 'unicorn toy',
- 'white razer mouse',
- 'yoshi figure',
- ],
- 'test': [
- 'alarm clock',
- 'android toy',
- 'ball puzzle',
- 'black and blue sneakers',
- 'black boot with leopard print',
- 'black fedora',
- 'black razer mouse',
- 'black sandal',
- 'black shoe with green stripes',
- 'black shoe with orange stripes',
- 'brown fedora',
- 'bull figure',
- 'butterfinger chocolate',
- 'c clamp',
- 'can opener',
- 'crayon box',
- 'dinosaur figure',
- 'dog statue',
- 'frypan',
- 'green and white striped towel',
- 'grey soccer shoe with cleats',
- 'hammer',
- 'hard drive',
- 'honey dipper',
- 'light brown boot with golden laces',
- 'lion figure',
- 'magnifying glass',
- 'mario figure',
- 'nintendo 3ds',
- 'nintendo cartridge',
- 'office depot box',
- 'orca plush toy',
- 'pepsi gold caffeine free box',
- 'pepsi max box',
- 'pepsi next box',
- 'pepsi wild cherry box',
- 'porcelain cup',
- 'porcelain salad plate',
- 'porcelain spoon',
- 'purple tape',
- 'red and white flashlight',
- 'red and white striped towel',
- 'red cup',
- 'rhino figure',
- 'rocket racoon figure',
- 'scissors',
- 'screwdriver',
- 'silver tape',
- 'spatula with purple head',
- 'spiderman figure',
- 'tablet',
- 'toy school bus',
- 'toy train',
- 'unicorn toy',
- 'white razer mouse',
- 'yoshi figure',
- ],
- }
-assembling_kit_shapes = {
- 0: "letter R shape",
- 1: "letter A shape",
- 2: "triangle",
- 3: "square",
- 4: "plus",
- 5: "letter T shape",
- 6: "diamond",
- 7: "pentagon",
- 8: "rectangle",
- 9: "flower",
- 10: "star",
- 11: "circle",
- 12: "letter G shape",
- 13: "letter V shape",
- 14: "letter E shape",
- 15: "letter L shape",
- 16: "ring",
- 17: "hexagon",
- 18: "heart",
- 19: "letter M shape",
- }
-
-# -----------------------------------------------------------------------------
-# COLOR AND PLOT UTILS
-# -----------------------------------------------------------------------------
-
-
-# Colors (Tableau palette).
-COLORS = {
- 'blue': [78.0 / 255.0, 121.0 / 255.0, 167.0 / 255.0],
- 'red': [255.0 / 255.0, 087.0 / 255.0, 089.0 / 255.0],
- 'green': [089.0 / 255.0, 169.0 / 255.0, 078.0 / 255.0],
- 'orange': [242.0 / 255.0, 142.0 / 255.0, 043.0 / 255.0],
- 'yellow': [237.0 / 255.0, 201.0 / 255.0, 072.0 / 255.0],
- 'purple': [176.0 / 255.0, 122.0 / 255.0, 161.0 / 255.0],
- 'pink': [255.0 / 255.0, 157.0 / 255.0, 167.0 / 255.0],
- 'cyan': [118.0 / 255.0, 183.0 / 255.0, 178.0 / 255.0],
- 'brown': [156.0 / 255.0, 117.0 / 255.0, 095.0 / 255.0],
- 'white': [255.0 / 255.0, 255.0 / 255.0, 255.0 / 255.0],
- 'gray': [186.0 / 255.0, 176.0 / 255.0, 172.0 / 255.0],
- 'indigo': [75.0 / 255.0, 0.0 / 255.0, 130.0 / 255.0],
- 'violet': [143.0 / 255.0, 0.0 / 255.0, 255.0 / 255.0],
- 'black': [0.0 / 255.0, 0.0 / 255.0, 0.0 / 255.0],
- 'silver': [192.0 / 255.0, 192.0 / 255.0, 192.0 / 255.0],
- 'gold': [255.0 / 255.0, 215.0 / 255.0, 0.0 / 255.0],
-
-}
-
-COLORS_NAMES = list(COLORS.keys())
-TRAIN_COLORS = ['blue', 'red', 'green', 'yellow', 'brown', 'gray', 'cyan']
-EVAL_COLORS = ['blue', 'red', 'green', 'orange', 'purple', 'pink', 'white']
-
-
-def get_colors(mode, n_colors=-1, **kwargs):
- all_color_names = get_colors_names(mode)
-
-    if n_colors != -1:
-        all_color_names = random.sample(all_color_names, n_colors)
- return [COLORS[cn] for cn in all_color_names], all_color_names
-
-def get_colors_names(mode):
- if mode == 'train':
- return TRAIN_COLORS
- elif mode == 'full':
- return TRAIN_COLORS
- else:
- return TRAIN_COLORS
-
-def get_random_color():
- return get_colors(mode='train', n_colors=1)
-
-def solve_hanoi_all(n_disks):
-    # Solve the Tower of Hanoi sequence recursively.
- hanoi_steps = [] # [[object index, from rod, to rod], ...]
-
- def solve_hanoi(n, t0, t1, t2):
- if n == 0:
- hanoi_steps.append([n, t0, t1])
- return
- solve_hanoi(n - 1, t0, t2, t1)
- hanoi_steps.append([n, t0, t1])
- solve_hanoi(n - 1, t2, t1, t0)
-
- solve_hanoi(n_disks - 1, 0, 2, 1)
- return hanoi_steps
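-
-def _demo_solve_hanoi():
-    # Illustrative sketch (not part of the original module): for two disks the
-    # solver yields three moves of the form [disk index, from rod, to rod],
-    # with disk 0 the smallest.
-    return solve_hanoi_all(2)  # [[0, 0, 1], [1, 0, 2], [0, 1, 2]]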
-
-def plot(fname, # pylint: disable=dangerous-default-value
- title,
- ylabel,
- xlabel,
- data,
- xlim=[-np.inf, 0],
- xticks=None,
- ylim=[np.inf, -np.inf],
- show_std=True):
- """Plot frame data."""
- # Data is a dictionary that maps experiment names to tuples with 3
- # elements: x (size N array) and y (size N array) and y_std (size N array)
-
- # Get data limits.
- for name, (x, y, _) in data.items():
- del name
- y = np.array(y)
- xlim[0] = max(xlim[0], np.min(x))
- xlim[1] = max(xlim[1], np.max(x))
- ylim[0] = min(ylim[0], np.min(y))
- ylim[1] = max(ylim[1], np.max(y))
-
- # Draw background.
- plt.title(title, fontsize=14)
- plt.ylim(ylim)
- plt.ylabel(ylabel, fontsize=14)
- plt.yticks(fontsize=14)
- plt.xlim(xlim)
- plt.xlabel(xlabel, fontsize=14)
- plt.grid(True, linestyle='-', color=[0.8, 0.8, 0.8])
- ax = plt.gca()
- for axis in ['top', 'bottom', 'left', 'right']:
- ax.spines[axis].set_color('#000000')
- plt.rcParams.update({'font.size': 14})
- plt.rcParams['mathtext.default'] = 'regular'
- matplotlib.rcParams['pdf.fonttype'] = 42
- matplotlib.rcParams['ps.fonttype'] = 42
-
- # Draw data.
- color_iter = 0
- for name, (x, y, std) in data.items():
- del name
- x, y, std = np.float32(x), np.float32(y), np.float32(std)
- upper = np.clip(y + std, ylim[0], ylim[1])
- lower = np.clip(y - std, ylim[0], ylim[1])
- color = COLORS[list(COLORS.keys())[color_iter]]
- if show_std:
- plt.fill_between(x, upper, lower, color=color, linewidth=0, alpha=0.3)
- plt.plot(x, y, color=color, linewidth=2, marker='o', alpha=1.)
- color_iter += 1
-
- if xticks:
- plt.xticks(ticks=range(len(xticks)), labels=xticks, fontsize=14)
- else:
- plt.xticks(fontsize=14)
- plt.legend([name for name, _ in data.items()],
- loc='lower right', fontsize=14)
- plt.tight_layout()
- plt.savefig(fname)
- plt.clf()
-
-
-# -----------------------------------------------------------------------------
-# MESHCAT UTILS
-# -----------------------------------------------------------------------------
-
-def create_visualizer(clear=True):
- print('Waiting for meshcat server... have you started a server?')
- vis = meshcat.Visualizer(zmq_url='tcp://127.0.0.1:6000')
- if clear:
- vis.delete()
- return vis
-
-
-def make_frame(vis, name, h, radius, o=1.0):
-    """Add a red-green-blue triad to the Meshcat visualizer.
-
- Args:
- vis (MeshCat Visualizer): the visualizer
- name (string): name for this frame (should be unique)
- h (float): height of frame visualization
- radius (float): radius of frame visualization
- o (float): opacity
- """
- vis[name]['x'].set_object(
- g.Cylinder(height=h, radius=radius),
- g.MeshLambertMaterial(color=0xff0000, reflectivity=0.8, opacity=o))
- rotate_x = mtf.rotation_matrix(np.pi / 2.0, [0, 0, 1])
- rotate_x[0, 3] = h / 2
- vis[name]['x'].set_transform(rotate_x)
-
- vis[name]['y'].set_object(
- g.Cylinder(height=h, radius=radius),
- g.MeshLambertMaterial(color=0x00ff00, reflectivity=0.8, opacity=o))
- rotate_y = mtf.rotation_matrix(np.pi / 2.0, [0, 1, 0])
- rotate_y[1, 3] = h / 2
- vis[name]['y'].set_transform(rotate_y)
-
- vis[name]['z'].set_object(
- g.Cylinder(height=h, radius=radius),
- g.MeshLambertMaterial(color=0x0000ff, reflectivity=0.8, opacity=o))
- rotate_z = mtf.rotation_matrix(np.pi / 2.0, [1, 0, 0])
- rotate_z[2, 3] = h / 2
- vis[name]['z'].set_transform(rotate_z)
-
-
-def meshcat_visualize(vis, obs, act, info):
- """Visualize data using meshcat."""
-
- for key in sorted(info.keys()):
- pose = info[key]
- pick_transform = np.eye(4)
- pick_transform[0:3, 3] = pose[0]
- quaternion_wxyz = np.asarray(
- [pose[1][3], pose[1][0], pose[1][1], pose[1][2]])
- pick_transform[0:3, 0:3] = mtf.quaternion_matrix(quaternion_wxyz)[0:3, 0:3]
- label = 'obj_' + str(key)
- make_frame(vis, label, h=0.05, radius=0.0012, o=1.0)
- vis[label].set_transform(pick_transform)
-
- for cam_index in range(len(act['camera_config'])):
- verts = unproject_depth_vectorized(
- obs['depth'][cam_index], np.array([0, 1]),
- np.array(act['camera_config'][cam_index]['intrinsics']).reshape(3, 3),
- np.zeros(5))
-
- # switch from [N,3] to [3,N]
- verts = verts.T
-
- cam_transform = np.eye(4)
- cam_transform[0:3, 3] = act['camera_config'][cam_index]['position']
- quaternion_xyzw = act['camera_config'][cam_index]['rotation']
- quaternion_wxyz = np.asarray([
- quaternion_xyzw[3], quaternion_xyzw[0], quaternion_xyzw[1],
- quaternion_xyzw[2]
- ])
- cam_transform[0:3, 0:3] = mtf.quaternion_matrix(quaternion_wxyz)[0:3, 0:3]
- verts = apply_transform(cam_transform, verts)
-
- colors = obs['color'][cam_index].reshape(-1, 3).T / 255.0
-
- vis['pointclouds/' + str(cam_index)].set_object(
- g.PointCloud(position=verts, color=colors))
-
-
-# -----------------------------------------------------------------------------
-# CONFIG UTILS
-# -----------------------------------------------------------------------------
-
-def set_seed(seed, torch=False):
- random.seed(seed)
- os.environ['PYTHONHASHSEED'] = str(seed)
- np.random.seed(seed)
-
- if torch:
- import torch
- torch.manual_seed(seed)
-
-
-def load_cfg(yaml_path):
- with open(yaml_path, 'r') as f:
- data = yaml.safe_load(f)
- return data
-
-
-def load_hydra_config(config_path):
- return OmegaConf.load(config_path)
diff --git a/spaces/Gradio-Blocks/uniformer_image_detection/configs/detr/README.md b/spaces/Gradio-Blocks/uniformer_image_detection/configs/detr/README.md
deleted file mode 100644
index 711a308a5549b28c36515405feabf2ca0f7c7c1f..0000000000000000000000000000000000000000
--- a/spaces/Gradio-Blocks/uniformer_image_detection/configs/detr/README.md
+++ /dev/null
@@ -1,27 +0,0 @@
-# DETR
-
-## Introduction
-
-[ALGORITHM]
-
-We provide the config files for DETR: [End-to-End Object Detection with Transformers](https://arxiv.org/abs/2005.12872).
-
-```BibTeX
-@inproceedings{detr,
- author = {Nicolas Carion and
- Francisco Massa and
- Gabriel Synnaeve and
- Nicolas Usunier and
- Alexander Kirillov and
- Sergey Zagoruyko},
- title = {End-to-End Object Detection with Transformers},
- booktitle = {ECCV},
- year = {2020}
-}
-```
-
-## Results and Models
-
-| Backbone | Model | Lr schd | Mem (GB) | Inf time (fps) | box AP | Config | Download |
-|:------:|:--------:|:-------:|:--------:|:--------------:|:------:|:------:|:--------:|
-| R-50 | DETR |150e |7.9| | 40.1 | [config](https://github.com/open-mmlab/mmdetection/tree/master/configs/detr/detr_r50_8x2_150e_coco.py) | [model](http://download.openmmlab.com/mmdetection/v2.0/detr/detr_r50_8x2_150e_coco/detr_r50_8x2_150e_coco_20201130_194835-2c4b8974.pth) &#124; [log](http://download.openmmlab.com/mmdetection/v2.0/detr/detr_r50_8x2_150e_coco/detr_r50_8x2_150e_coco_20201130_194835.log.json) |
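-
-As a rough usage sketch (paths are illustrative and assume a standard MMDetection checkout with the checkpoint from the table downloaded locally), inference with this config can be run through the MMDetection Python API:
-
-```python
-from mmdet.apis import init_detector, inference_detector
-
-config_file = 'configs/detr/detr_r50_8x2_150e_coco.py'
-checkpoint_file = 'detr_r50_8x2_150e_coco_20201130_194835-2c4b8974.pth'
-
-model = init_detector(config_file, checkpoint_file, device='cuda:0')
-result = inference_detector(model, 'demo/demo.jpg')  # per-class bbox arrays
-```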
diff --git a/spaces/Gradio-Blocks/uniformer_image_detection/configs/vfnet/vfnet_r2_101_fpn_mdconv_c3-c5_mstrain_2x_coco.py b/spaces/Gradio-Blocks/uniformer_image_detection/configs/vfnet/vfnet_r2_101_fpn_mdconv_c3-c5_mstrain_2x_coco.py
deleted file mode 100644
index 8da3122657adc2785129c28a84473c25777abba3..0000000000000000000000000000000000000000
--- a/spaces/Gradio-Blocks/uniformer_image_detection/configs/vfnet/vfnet_r2_101_fpn_mdconv_c3-c5_mstrain_2x_coco.py
+++ /dev/null
@@ -1,16 +0,0 @@
-_base_ = './vfnet_r50_fpn_mdconv_c3-c5_mstrain_2x_coco.py'
-model = dict(
- pretrained='open-mmlab://res2net101_v1d_26w_4s',
- backbone=dict(
- type='Res2Net',
- depth=101,
- scales=4,
- base_width=26,
- num_stages=4,
- out_indices=(0, 1, 2, 3),
- frozen_stages=1,
- norm_cfg=dict(type='BN', requires_grad=True),
- norm_eval=True,
- style='pytorch',
- dcn=dict(type='DCNv2', deform_groups=1, fallback_on_stride=False),
- stage_with_dcn=(False, True, True, True)))
diff --git a/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/fcn/fcn_r50-d8_480x480_40k_pascal_context.py b/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/fcn/fcn_r50-d8_480x480_40k_pascal_context.py
deleted file mode 100644
index 7c57a6f8ff0a7dbb18666c1b9c882da10e586aa3..0000000000000000000000000000000000000000
--- a/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/fcn/fcn_r50-d8_480x480_40k_pascal_context.py
+++ /dev/null
@@ -1,9 +0,0 @@
-_base_ = [
- '../_base_/models/fcn_r50-d8.py', '../_base_/datasets/pascal_context.py',
- '../_base_/default_runtime.py', '../_base_/schedules/schedule_40k.py'
-]
-model = dict(
- decode_head=dict(num_classes=60),
- auxiliary_head=dict(num_classes=60),
- test_cfg=dict(mode='slide', crop_size=(480, 480), stride=(320, 320)))
-optimizer = dict(type='SGD', lr=0.004, momentum=0.9, weight_decay=0.0001)
diff --git a/spaces/GroveStreet/GTA_SOVITS/vencoder/dphubert/__init__.py b/spaces/GroveStreet/GTA_SOVITS/vencoder/dphubert/__init__.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/GuXiaoBei/wechat-chatbot/scripts/shutdown.sh b/spaces/GuXiaoBei/wechat-chatbot/scripts/shutdown.sh
deleted file mode 100644
index c2bf6b14adcafd46e7278ab3730ab7f78b82c593..0000000000000000000000000000000000000000
--- a/spaces/GuXiaoBei/wechat-chatbot/scripts/shutdown.sh
+++ /dev/null
@@ -1,16 +0,0 @@
-#!/bin/bash
-
-# Shut down the service
-cd `dirname $0`/..
-export BASE_DIR=`pwd`
-pid=`ps ax | grep -i app.py | grep "${BASE_DIR}" | grep python3 | grep -v grep | awk '{print $1}'`
-if [ -z "$pid" ] ; then
- echo "No chatgpt-on-wechat running."
- exit -1;
-fi
-
-echo "The chatgpt-on-wechat(${pid}) is running..."
-
-kill ${pid}
-
-echo "Send shutdown request to chatgpt-on-wechat(${pid}) OK"
diff --git a/spaces/HaoFeng2019/DocGeoNet/extractor.py b/spaces/HaoFeng2019/DocGeoNet/extractor.py
deleted file mode 100644
index 2e242193b8e14be6c74f89afd20b7d11cd8b6d62..0000000000000000000000000000000000000000
--- a/spaces/HaoFeng2019/DocGeoNet/extractor.py
+++ /dev/null
@@ -1,115 +0,0 @@
-import torch
-import torch.nn as nn
-import torch.nn.functional as F
-
-
-class ResidualBlock(nn.Module):
- def __init__(self, in_planes, planes, norm_fn='group', stride=1):
- super(ResidualBlock, self).__init__()
-
- self.conv1 = nn.Conv2d(in_planes, planes, kernel_size=3, padding=1, stride=stride)
- self.conv2 = nn.Conv2d(planes, planes, kernel_size=3, padding=1)
- self.relu = nn.ReLU(inplace=True)
-
- num_groups = planes // 8
-
- if norm_fn == 'group':
- self.norm1 = nn.GroupNorm(num_groups=num_groups, num_channels=planes)
- self.norm2 = nn.GroupNorm(num_groups=num_groups, num_channels=planes)
- if not stride == 1:
- self.norm3 = nn.GroupNorm(num_groups=num_groups, num_channels=planes)
-
- elif norm_fn == 'batch':
- self.norm1 = nn.BatchNorm2d(planes)
- self.norm2 = nn.BatchNorm2d(planes)
- if not stride == 1:
- self.norm3 = nn.BatchNorm2d(planes)
-
- elif norm_fn == 'instance':
- self.norm1 = nn.InstanceNorm2d(planes)
- self.norm2 = nn.InstanceNorm2d(planes)
- if not stride == 1:
- self.norm3 = nn.InstanceNorm2d(planes)
-
- elif norm_fn == 'none':
- self.norm1 = nn.Sequential()
- self.norm2 = nn.Sequential()
- if not stride == 1:
- self.norm3 = nn.Sequential()
-
- if stride == 1:
- self.downsample = None
-
- else:
- self.downsample = nn.Sequential(
- nn.Conv2d(in_planes, planes, kernel_size=1, stride=stride), self.norm3)
-
-
- def forward(self, x):
- y = x
- y = self.relu(self.norm1(self.conv1(y)))
- y = self.relu(self.norm2(self.conv2(y)))
-
- if self.downsample is not None:
- x = self.downsample(x)
-
- return self.relu(x+y)
-
-
-class BasicEncoder(nn.Module):
- def __init__(self, input_dim=128, output_dim=128, norm_fn='batch'):
- super(BasicEncoder, self).__init__()
- self.norm_fn = norm_fn
-
- if self.norm_fn == 'group':
- self.norm1 = nn.GroupNorm(num_groups=8, num_channels=64)
-
- elif self.norm_fn == 'batch':
- self.norm1 = nn.BatchNorm2d(64)
-
- elif self.norm_fn == 'instance':
- self.norm1 = nn.InstanceNorm2d(64)
-
- elif self.norm_fn == 'none':
- self.norm1 = nn.Sequential()
-
- self.conv1 = nn.Conv2d(input_dim, 64, kernel_size=7, stride=2, padding=3)
- self.relu1 = nn.ReLU(inplace=True)
-
- self.in_planes = 64
- self.layer1 = self._make_layer(64, stride=1)
- self.layer2 = self._make_layer(128, stride=2)
- self.layer3 = self._make_layer(192, stride=2)
-
- # output convolution
- self.conv2 = nn.Conv2d(192, output_dim, kernel_size=1)
-
- for m in self.modules():
- if isinstance(m, nn.Conv2d):
- nn.init.kaiming_normal_(m.weight, mode='fan_out', nonlinearity='relu')
- elif isinstance(m, (nn.BatchNorm2d, nn.InstanceNorm2d, nn.GroupNorm)):
- if m.weight is not None:
- nn.init.constant_(m.weight, 1)
- if m.bias is not None:
- nn.init.constant_(m.bias, 0)
-
- def _make_layer(self, dim, stride=1):
- layer1 = ResidualBlock(self.in_planes, dim, self.norm_fn, stride=stride)
- layer2 = ResidualBlock(dim, dim, self.norm_fn, stride=1)
- layers = (layer1, layer2)
-
- self.in_planes = dim
- return nn.Sequential(*layers)
-
- def forward(self, x):
- x = self.conv1(x)
- x = self.norm1(x)
- x = self.relu1(x)
-
- x = self.layer1(x)
- x = self.layer2(x)
- x = self.layer3(x)
-
- x = self.conv2(x)
-
- return x
diff --git a/spaces/HarryLee/eCommerceImageCaptioning/run_scripts/caption/train_caption_stage1_el.sh b/spaces/HarryLee/eCommerceImageCaptioning/run_scripts/caption/train_caption_stage1_el.sh
deleted file mode 100644
index f12ee52c9d24fe296410da30b67e0ef5e9e76254..0000000000000000000000000000000000000000
--- a/spaces/HarryLee/eCommerceImageCaptioning/run_scripts/caption/train_caption_stage1_el.sh
+++ /dev/null
@@ -1,109 +0,0 @@
-#!/usr/bin/env bash
-
-# The port for communication. Note that if you want to run multiple tasks on the same machine,
-# you need to specify different port numbers.
-export MASTER_PORT=1051
-
-log_dir=./stage1_logs
-save_dir=./stage1_checkpoints
-mkdir -p $log_dir $save_dir
-
-bpe_dir=../../utils/BPE
-user_dir=../../ofa_module
-
-data_dir=../../dataset/caption_data
-data=${data_dir}/caption_stage1_train.tsv,${data_dir}/caption_val.tsv
-restore_file=../../checkpoints/ofa_large.pt
-selected_cols=0,4,2
-
-task=caption
-arch=ofa_large
-criterion=adjust_label_smoothed_encouraging_loss # for el
-label_smoothing=0.1
-lr=1e-5
-max_epoch=5
-warmup_ratio=0.06
-batch_size=8
-update_freq=4
-resnet_drop_path_rate=0.0
-encoder_drop_path_rate=0.1
-decoder_drop_path_rate=0.1
-dropout=0.1
-attention_dropout=0.0
-max_src_length=80
-max_tgt_length=20
-num_bins=1000
-patch_image_size=480
-eval_cider_cached=${data_dir}/cider_cached_tokens/coco-valid-words.p
-drop_worst_ratio=0.05 # modified from 0.2 for el
-log_end=0.75 # for el
-for max_epoch in {2,}; do
- echo "max_epoch "${max_epoch}
- for warmup_ratio in {0.06,}; do
- echo "warmup_ratio "${warmup_ratio}
- for drop_worst_after in {2500,}; do
- echo "drop_worst_after "${drop_worst_after}
-
- log_file=${log_dir}/${max_epoch}"_"${warmup_ratio}"_"${drop_worst_after}_el${log_end}_".log"
- save_path=${save_dir}/${max_epoch}"_"${warmup_ratio}"_"${drop_worst_after}_el${log_end}_
- mkdir -p $save_path
-
- CUDA_VISIBLE_DEVICES=0,1,2,3 python3 -m torch.distributed.launch --nproc_per_node=4 --master_port=${MASTER_PORT} ../../train.py \
- $data \
- --selected-cols=${selected_cols} \
- --bpe-dir=${bpe_dir} \
- --user-dir=${user_dir} \
- --restore-file=${restore_file} \
- --reset-optimizer --reset-dataloader --reset-meters \
- --save-dir=${save_path} \
- --task=${task} \
- --arch=${arch} \
- --criterion=${criterion} \
- --label-smoothing=${label_smoothing} \
- --batch-size=${batch_size} \
- --update-freq=${update_freq} \
- --encoder-normalize-before \
- --decoder-normalize-before \
- --share-decoder-input-output-embed \
- --share-all-embeddings \
- --layernorm-embedding \
- --patch-layernorm-embedding \
- --code-layernorm-embedding \
- --resnet-drop-path-rate=${resnet_drop_path_rate} \
- --encoder-drop-path-rate=${encoder_drop_path_rate} \
- --decoder-drop-path-rate=${decoder_drop_path_rate} \
- --dropout=${dropout} \
- --attention-dropout=${attention_dropout} \
- --weight-decay=0.01 --optimizer=adam --adam-betas="(0.9,0.999)" --adam-eps=1e-08 --clip-norm=1.0 \
- --lr-scheduler=polynomial_decay --lr=${lr} \
- --max-epoch=${max_epoch} --warmup-ratio=${warmup_ratio} \
- --log-format=simple --log-interval=10 \
- --fixed-validation-seed=7 \
- --no-epoch-checkpoints --keep-best-checkpoints=1 \
- --save-interval=1 --validate-interval=1 \
- --save-interval-updates=500 --validate-interval-updates=500 \
- --eval-cider \
- --eval-cider-cached-tokens=${eval_cider_cached} \
- --eval-args='{"beam":5,"max_len_b":16,"no_repeat_ngram_size":3}' \
- --best-checkpoint-metric=cider --maximize-best-checkpoint-metric \
- --max-src-length=${max_src_length} \
- --max-tgt-length=${max_tgt_length} \
- --find-unused-parameters \
- --freeze-encoder-embedding \
- --freeze-decoder-embedding \
- --add-type-embedding \
- --scale-attn \
- --scale-fc \
- --scale-heads \
- --disable-entangle \
- --num-bins=${num_bins} \
- --patch-image-size=${patch_image_size} \
- --drop-worst-ratio=${drop_worst_ratio} \
- --drop-worst-after=${drop_worst_after} \
- --log-end ${log_end} \
- --fp16 \
- --fp16-scale-window=512 \
- --num-workers=0 > ${log_file} 2>&1
- done
- done
-done
diff --git a/spaces/Harveenchadha/Hindi_TTS/vakyansh_tts/scripts/train_hifi.sh b/spaces/Harveenchadha/Hindi_TTS/vakyansh_tts/scripts/train_hifi.sh
deleted file mode 100644
index 6955a6e0f07777c1db68eae0e25bb48900adb70d..0000000000000000000000000000000000000000
--- a/spaces/Harveenchadha/Hindi_TTS/vakyansh_tts/scripts/train_hifi.sh
+++ /dev/null
@@ -1,10 +0,0 @@
-python ../src/hifi_gan/train.py \
- --config '' \
- --input_wavs_dir '' \
- --input_mels_dir '' \
- --input_training_file '' \
- --input_validation_file '' \
- --checkpoint_path '' \
- --logs_path '' \
- --checkpoint_interval 10000 \
- --stdout_interval 50
diff --git a/spaces/Harveenchadha/Vakyansh-Tamil-TTS/ttsv/src/glow_tts/utils.py b/spaces/Harveenchadha/Vakyansh-Tamil-TTS/ttsv/src/glow_tts/utils.py
deleted file mode 100644
index a591aa319ccb264110111cda55c4a232b41aae74..0000000000000000000000000000000000000000
--- a/spaces/Harveenchadha/Vakyansh-Tamil-TTS/ttsv/src/glow_tts/utils.py
+++ /dev/null
@@ -1,282 +0,0 @@
-import os
-import glob
-import sys
-import argparse
-import logging
-import json
-import subprocess
-import numpy as np
-from scipy.io.wavfile import read
-import torch
-
-MATPLOTLIB_FLAG = False
-
-logging.basicConfig(stream=sys.stdout, level=logging.DEBUG)
-logger = logging
-
-
-def load_checkpoint(checkpoint_path, model, optimizer=None):
- assert os.path.isfile(checkpoint_path)
- checkpoint_dict = torch.load(checkpoint_path, map_location="cpu")
-    iteration = 1
-    learning_rate = None  # default when the checkpoint stores no learning rate
- if "iteration" in checkpoint_dict.keys():
- iteration = checkpoint_dict["iteration"]
- if "learning_rate" in checkpoint_dict.keys():
- learning_rate = checkpoint_dict["learning_rate"]
- if optimizer is not None and "optimizer" in checkpoint_dict.keys():
- optimizer.load_state_dict(checkpoint_dict["optimizer"])
- saved_state_dict = checkpoint_dict["model"]
- if hasattr(model, "module"):
- state_dict = model.module.state_dict()
- else:
- state_dict = model.state_dict()
- new_state_dict = {}
- for k, v in state_dict.items():
- try:
- new_state_dict[k] = saved_state_dict[k]
-        except KeyError:
- logger.info("%s is not in the checkpoint" % k)
- new_state_dict[k] = v
- if hasattr(model, "module"):
- model.module.load_state_dict(new_state_dict)
- else:
- model.load_state_dict(new_state_dict)
- logger.info(
- "Loaded checkpoint '{}' (iteration {})".format(checkpoint_path, iteration)
- )
- return model, optimizer, learning_rate, iteration
-
-
-def save_checkpoint(model, optimizer, learning_rate, iteration, checkpoint_path):
- logger.info(
- "Saving model and optimizer state at iteration {} to {}".format(
- iteration, checkpoint_path
- )
- )
- if hasattr(model, "module"):
- state_dict = model.module.state_dict()
- else:
- state_dict = model.state_dict()
- torch.save(
- {
- "model": state_dict,
- "iteration": iteration,
- "optimizer": optimizer.state_dict(),
- "learning_rate": learning_rate,
- },
- checkpoint_path,
- )
-
-
-def summarize(writer, global_step, scalars={}, histograms={}, images={}):
- for k, v in scalars.items():
- writer.add_scalar(k, v, global_step)
- for k, v in histograms.items():
- writer.add_histogram(k, v, global_step)
- for k, v in images.items():
- writer.add_image(k, v, global_step, dataformats="HWC")
-
-
-def latest_checkpoint_path(dir_path, regex="G_*.pth"):
- f_list = glob.glob(os.path.join(dir_path, regex))
- f_list.sort(key=lambda f: int("".join(filter(str.isdigit, f))))
- x = f_list[-1]
- print(x)
- return x
-
-
-def plot_spectrogram_to_numpy(spectrogram):
- global MATPLOTLIB_FLAG
- if not MATPLOTLIB_FLAG:
- import matplotlib
-
- matplotlib.use("Agg")
- MATPLOTLIB_FLAG = True
- mpl_logger = logging.getLogger("matplotlib")
- mpl_logger.setLevel(logging.WARNING)
- import matplotlib.pylab as plt
- import numpy as np
-
- fig, ax = plt.subplots()
- im = ax.imshow(spectrogram, aspect="auto", origin="lower", interpolation="none")
- plt.colorbar(im, ax=ax)
- plt.xlabel("Frames")
- plt.ylabel("Channels")
- plt.tight_layout()
-
- fig.canvas.draw()
-    data = np.frombuffer(fig.canvas.tostring_rgb(), dtype=np.uint8)
- data = data.reshape(fig.canvas.get_width_height()[::-1] + (3,))
- plt.close()
- return data
-
-
-def plot_alignment_to_numpy(alignment, info=None):
- global MATPLOTLIB_FLAG
- if not MATPLOTLIB_FLAG:
- import matplotlib
-
- matplotlib.use("Agg")
- MATPLOTLIB_FLAG = True
- mpl_logger = logging.getLogger("matplotlib")
- mpl_logger.setLevel(logging.WARNING)
- import matplotlib.pylab as plt
- import numpy as np
-
- fig, ax = plt.subplots(figsize=(6, 4))
- im = ax.imshow(alignment, aspect="auto", origin="lower", interpolation="none")
- fig.colorbar(im, ax=ax)
- xlabel = "Decoder timestep"
- if info is not None:
- xlabel += "\n\n" + info
- plt.xlabel(xlabel)
- plt.ylabel("Encoder timestep")
- plt.tight_layout()
-
- fig.canvas.draw()
-    data = np.frombuffer(fig.canvas.tostring_rgb(), dtype=np.uint8)
- data = data.reshape(fig.canvas.get_width_height()[::-1] + (3,))
- plt.close()
- return data
-
-
-def load_wav_to_torch(full_path):
- sampling_rate, data = read(full_path)
- return torch.FloatTensor(data.astype(np.float32)), sampling_rate
-
-
-def load_filepaths_and_text(filename, split="|"):
- with open(filename, encoding="utf-8") as f:
- filepaths_and_text = [line.strip().split(split) for line in f]
- return filepaths_and_text
-
-
-def get_hparams(init=True):
- parser = argparse.ArgumentParser()
- parser.add_argument("-c", "--config", type=str, help="JSON file for configuration")
- parser.add_argument("-m", "--model", type=str, help="Model name")
- # parser.add_argument('-g', '--gan', type=str,
- # help='Model name')
- parser.add_argument("-l", "--logs", type=str, help="logs name")
- # parser.add_argument('-s', '--mels', type=str,
- # help='logs name')
-
- args = parser.parse_args()
- # model_dir = os.path.join("./logs", args.model)
- model_dir = args.model
- if not os.path.exists(model_dir):
- os.makedirs(model_dir)
-
- config_path = args.config
- config_save_path = os.path.join(model_dir, "config.json")
-
- # if not config_path : config_path = config_save_path
-
- if init:
- with open(config_path, "r") as f:
- data = f.read()
- with open(config_save_path, "w") as f:
- f.write(data)
- else:
- with open(config_save_path, "r") as f:
- data = f.read()
- config = json.loads(data)
-
- hparams = HParams(**config)
- hparams.model_dir = model_dir
- hparams.log_dir = args.logs
- # hparams.mels_dir = args.mels
- # hparams.gan_dir = args.gan
- return hparams
-
-
-def get_hparams_from_dir(model_dir):
- config_save_path = os.path.join(model_dir, "config.json")
- with open(config_save_path, "r") as f:
- data = f.read()
- config = json.loads(data)
-
- hparams = HParams(**config)
- hparams.model_dir = model_dir
- return hparams
-
-
-def get_hparams_from_file(config_path):
- with open(config_path, "r") as f:
- data = f.read()
- config = json.loads(data)
-
- hparams = HParams(**config)
- return hparams
-
-
-def check_git_hash(model_dir):
- source_dir = os.path.dirname(os.path.realpath(__file__))
- if not os.path.exists(os.path.join(source_dir, ".git")):
-        logger.warning(
- "{} is not a git repository, therefore hash value comparison will be ignored.".format(
- source_dir
- )
- )
- return
-
- cur_hash = subprocess.getoutput("git rev-parse HEAD")
-
- path = os.path.join(model_dir, "githash")
- if os.path.exists(path):
- saved_hash = open(path).read()
- if saved_hash != cur_hash:
-            logger.warning(
- "git hash values are different. {}(saved) != {}(current)".format(
- saved_hash[:8], cur_hash[:8]
- )
- )
- else:
- open(path, "w").write(cur_hash)
-
-
-def get_logger(model_dir, filename="train.log"):
- global logger
- logger = logging.getLogger(os.path.basename(model_dir))
- logger.setLevel(logging.DEBUG)
-
- formatter = logging.Formatter("%(asctime)s\t%(name)s\t%(levelname)s\t%(message)s")
- if not os.path.exists(model_dir):
- os.makedirs(model_dir)
- h = logging.FileHandler(os.path.join(model_dir, filename))
- h.setLevel(logging.DEBUG)
- h.setFormatter(formatter)
- logger.addHandler(h)
- return logger
-
-
-class HParams:
- def __init__(self, **kwargs):
- for k, v in kwargs.items():
- if type(v) == dict:
- v = HParams(**v)
- self[k] = v
-
- def keys(self):
- return self.__dict__.keys()
-
- def items(self):
- return self.__dict__.items()
-
- def values(self):
- return self.__dict__.values()
-
- def __len__(self):
- return len(self.__dict__)
-
- def __getitem__(self, key):
- return getattr(self, key)
-
- def __setitem__(self, key, value):
- return setattr(self, key, value)
-
- def __contains__(self, key):
- return key in self.__dict__
-
- def __repr__(self):
- return self.__dict__.__repr__()
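-
-
-def _demo_hparams():
-    # Illustrative sketch (not part of the original module): HParams exposes
-    # nested config dicts as attributes; the keys below are made up, not the
-    # real glow-TTS schema.
-    hps = HParams(learning_rate=1e-4, data={"sampling_rate": 22050})
-    return hps.learning_rate, hps.data.sampling_rate  # (0.0001, 22050)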
diff --git a/spaces/Harveenchadha/oiTrans/scripts/concat_joint_data.py b/spaces/Harveenchadha/oiTrans/scripts/concat_joint_data.py
deleted file mode 100644
index f1496177b0f47869e8e58ebdb0395c2c457e300a..0000000000000000000000000000000000000000
--- a/spaces/Harveenchadha/oiTrans/scripts/concat_joint_data.py
+++ /dev/null
@@ -1,130 +0,0 @@
-import os
-from tqdm import tqdm
-import sys
-
-LANGS = [
- "as",
- "bn",
- "gu",
- "hi",
- "kn",
- "ml",
- "mr",
- "or",
- "pa",
- "ta",
- "te",
- #"ur"
-]
-
-
-def add_token(sent, tag_infos):
-    """ prepend the special tokens specified by tag_infos to the sentence
-
- tag_infos: list of tuples (tag_type,tag)
-
- each tag_info results in a token of the form: __{tag_type}__{tag}__
-
- """
-
- tokens = []
- for tag_type, tag in tag_infos:
- token = '__' + tag_type + '__' + tag + '__'
- tokens.append(token)
-
- return ' '.join(tokens) + ' ' + sent
-
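-# Example (illustrative, traced from add_token above):
-#   add_token('hello world', [('src', 'en'), ('tgt', 'hi')])
-#   -> '__src__en__ __tgt__hi__ hello world'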
-
-def concat_data(data_dir, outdir, lang_pair_list,
- out_src_lang='SRC', out_trg_lang='TGT', split='train'):
- """
- data_dir: input dir, contains directories for language pairs named l1-l2
- """
- os.makedirs(outdir, exist_ok=True)
-
- out_src_fname = '{}/{}.{}'.format(outdir, split, out_src_lang)
- out_trg_fname = '{}/{}.{}'.format(outdir, split, out_trg_lang)
-# out_meta_fname='{}/metadata.txt'.format(outdir)
-
- print()
- print(out_src_fname)
- print(out_trg_fname)
-# print(out_meta_fname)
-
- # concatenate train data
- if os.path.isfile(out_src_fname):
- os.unlink(out_src_fname)
- if os.path.isfile(out_trg_fname):
- os.unlink(out_trg_fname)
-# if os.path.isfile(out_meta_fname):
-# os.unlink(out_meta_fname)
-
- for src_lang, trg_lang in tqdm(lang_pair_list):
- print('src: {}, tgt:{}'.format(src_lang, trg_lang))
-
- in_src_fname = '{}/{}-{}/{}.{}'.format(
- data_dir, src_lang, trg_lang, split, src_lang)
- in_trg_fname = '{}/{}-{}/{}.{}'.format(
- data_dir, src_lang, trg_lang, split, trg_lang)
-
- if not os.path.exists(in_src_fname):
- continue
- if not os.path.exists(in_trg_fname):
- continue
-
- print(in_src_fname)
- os.system('cat {} >> {}'.format(in_src_fname, out_src_fname))
-
- print(in_trg_fname)
- os.system('cat {} >> {}'.format(in_trg_fname, out_trg_fname))
-
-
-# with open('{}/lang_pairs.txt'.format(outdir),'w',encoding='utf-8') as lpfile:
-# lpfile.write('\n'.join( [ '-'.join(x) for x in lang_pair_list ] ))
-
- corpus_stats(data_dir, outdir, lang_pair_list, split)
-
-
-def corpus_stats(data_dir, outdir, lang_pair_list, split):
- """
- data_dir: input dir, contains directories for language pairs named l1-l2
- """
-
- with open('{}/{}_lang_pairs.txt'.format(outdir, split), 'w', encoding='utf-8') as lpfile:
-
- for src_lang, trg_lang in tqdm(lang_pair_list):
- print('src: {}, tgt:{}'.format(src_lang, trg_lang))
-
- in_src_fname = '{}/{}-{}/{}.{}'.format(
- data_dir, src_lang, trg_lang, split, src_lang)
- # in_trg_fname='{}/{}-{}/train.{}'.format(data_dir,src_lang,trg_lang,trg_lang)
- if not os.path.exists(in_src_fname):
- continue
-
- print(in_src_fname)
- corpus_size = 0
- with open(in_src_fname, 'r', encoding='utf-8') as infile:
- corpus_size = sum(map(lambda x: 1, infile))
-
- lpfile.write('{}\t{}\t{}\n'.format(
- src_lang, trg_lang, corpus_size))
-
-
-if __name__ == '__main__':
-
- in_dir = sys.argv[1]
- out_dir = sys.argv[2]
- src_lang = sys.argv[3]
- tgt_lang = sys.argv[4]
- split = sys.argv[5]
- lang_pair_list = []
-
- if src_lang == 'en':
- for lang in LANGS:
- lang_pair_list.append(['en', lang])
- else:
- for lang in LANGS:
- lang_pair_list.append([lang, 'en'])
-
- concat_data(in_dir, out_dir, lang_pair_list, split=split)
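-    # Illustrative usage (argument order matches the sys.argv parsing above):
-    #   python concat_joint_data.py <data_dir> <out_dir> <src_lang> <tgt_lang> <split>
-    # Passing 'en' as src_lang builds en->indic pairs; any other value builds indic->en pairs.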
-
diff --git a/spaces/Hellisotherpeople/HF-SHAP/app.py b/spaces/Hellisotherpeople/HF-SHAP/app.py
deleted file mode 100644
index 157f6498f33f927a3e81c095104daf7c7d0050d4..0000000000000000000000000000000000000000
--- a/spaces/Hellisotherpeople/HF-SHAP/app.py
+++ /dev/null
@@ -1,157 +0,0 @@
-import subprocess
-import sys
-
-
-##Lines 1-8 are necessary because the normal requirements.txt path for installing a package from disk doesn't work on HF spaces, thank you to Omar Sanseviero for the help!
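-# (The actual install-from-disk call is not shown in this snapshot. A typical pattern, given
-# purely as an illustrative sketch assuming a local package folder such as "./shap", would be:
-#   subprocess.check_call([sys.executable, "-m", "pip", "install", "./shap"])
-# )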
-
-import numpy as np
-import pandas as pd
-import shap
-import streamlit as st
-import streamlit.components.v1 as components
-from datasets import load_dataset
-from transformers import (AutoModelForCausalLM, AutoModelForQuestionAnswering,
- AutoModelForSeq2SeqLM,
- AutoModelForSequenceClassification, AutoTokenizer,
- pipeline)
-
-
-st.set_page_config(page_title="HF-SHAP")
-st.title("HF-SHAP: A front end for SHAP")
-st.caption("By Allen Roush")
-st.caption("github: https://github.com/Hellisotherpeople")
-st.caption("Linkedin: https://www.linkedin.com/in/allen-roush-27721011b/")
-st.title("SHAP (SHapley Additive exPlanations)")
-st.image("https://shap.readthedocs.io/en/latest/_images/shap_header.png", width = 700)
-st.caption("By Lundberg, Scott M and Lee, Su-In")
-st.caption("Slightly modified by Allen Roush to fix a bug with text plotting not working outside of Jupyter Notebooks")
-st.caption("Full Citation: https://raw.githubusercontent.com/slundberg/shap/master/docs/references/shap_nips.bib")
-st.caption("See on github:: https://github.com/slundberg/shap")
-st.caption("More details of how SHAP works: https://christophm.github.io/interpretable-ml-book/shap.html")
-
-
-form = st.sidebar.form("Main Settings")
-
-form.header("Main Settings")
-
-
-
-task_done = form.selectbox("Which NLP task do you want to solve?", ["Text Generation", "Sentiment Analysis", "Translation", "Summarization"])
-
-
-
-
-
-custom_doc = form.checkbox("Use a document from an existing dataset?", value = False)
-if custom_doc:
- dataset_name = form.text_area("Enter the name of the huggingface Dataset to do analysis of:", value = "Hellisotherpeople/DebateSum")
- dataset_name_2 = form.text_area("Enter the name of the config for the dataset if it has one", value = "")
- split_name = form.text_area("Enter the name of the split of the dataset that you want to use", value = "train")
- number_of_records = form.number_input("Enter the number of documents that you want to analyze from the dataset", value = 200)
- column_name = form.text_area("Enter the name of the column that we are doing analysis on (the X value)", value = "Full-Document")
- index_to_analyze_start = form.number_input("Enter the index start of the document that you want to analyze of the dataset", value = 1)
- index_to_analyze_end = form.number_input("Enter the index end of the document that you want to analyze of the dataset", value = 2)
- form.caption("Multiple documents may not work on certain tasks")
-else:
- doc = st.text_area("Enter a custom document", value = "This is an example custom document")
-
-
-
-if task_done == "Text Generation":
- model_name = form.text_area("Enter the name of the pre-trained model from transformers that we are using for Text Generation", value = "gpt2")
-    form.caption("This will download a new model, so it may take a while or even break if the model is too large")
- decoder = form.checkbox("Is this a decoder model?", value = True)
- form.caption("This should be true for models like GPT-2, and false for models like BERT")
- max_length = form.number_input("What's the max length of the text?", value = 50)
- min_length = form.number_input("What's the min length of the text?", value = 20, max_value = max_length)
- penalize_repetion = form.number_input("How strongly do we want to penalize repetition in the text generation?", value = 2)
- sample = form.checkbox("Shall we use top-k and top-p decoding?", value = True)
- form.caption("Setting this to false makes it greedy")
- if sample:
- top_k = form.number_input("What value of K should we use for Top-K sampling? Set to zero to disable", value = 50)
- form.caption("In Top-K sampling, the K most likely next words are filtered and the probability mass is redistributed among only those K next words. ")
- top_p = form.number_input("What value of P should we use for Top-p sampling? Set to zero to disable", value = 0.95, max_value = 1.0, min_value = 0.0)
- form.caption("Top-p sampling chooses from the smallest possible set of words whose cumulative probability exceeds the probability p. The probability mass is then redistributed among this set of words.")
-        temperature = form.number_input("How spicy/interesting do we want our model's output to be?", value = 1.05, min_value = 0.0)
- form.caption("Setting this higher decreases the likelihood of high probability words and increases the likelihood of low probability (and presumably more interesting) words")
- form.caption("For more details on what these settings mean, see here: https://huggingface.co/blog/how-to-generate")
-
-
-elif task_done == "Sentiment Analysis":
- model_name = form.text_area("Enter the name of the pre-trained model from transformers that we are using for Sentiment Analysis", value = "nateraw/bert-base-uncased-emotion")
- rescale_logits = form.checkbox("Do we rescale the probabilities in terms of log odds?", value = False)
-elif task_done == "Translation":
- model_name = form.text_area("Enter the name of the pre-trained model from transformers that we are using for Translation", value = "Helsinki-NLP/opus-mt-en-es")
-elif task_done == "Summarization":
- model_name = form.text_area("Enter the name of the pre-trained model from transformers that we are using for Translation", value = "sshleifer/distilbart-xsum-12-1")
-else:
- model_name = form.text_area("Enter the name of the pre-trained model from transformers that we are using for Question Answering", value = "deepset/roberta-base-squad2")
-
-form.header("Model Explanation Display Settings")
-output_width = form.number_input("Enter the number of pixels for width of model explanation html display", value = 800)
-output_height = form.number_input("Enter the number of pixels for height of model explanation html display", value = 1000)
-form.form_submit_button("Submit")
-
-@st.cache
-def load_and_process_data(path, name, streaming, split_name, number_of_records):
- dataset = load_dataset(path = path, name = name, streaming=streaming)
- #return list(dataset)
- dataset_head = dataset[split_name].take(number_of_records)
- df = pd.DataFrame.from_dict(dataset_head)
- return df[column_name]
-
-
-
-@st.cache(allow_output_mutation=True)
-def load_model(model_name):
- tokenizer = AutoTokenizer.from_pretrained(model_name, use_fast=True)
- if task_done == "Text Generation":
- model = AutoModelForCausalLM.from_pretrained(model_name)
- model.config.is_decoder=decoder
- if sample == True:
- model.config.task_specific_params["text-generation"] = {"do_sample": sample, "max_length": max_length, "min_length": min_length, "temperature": temperature, "top_k": top_k, "top_p" : top_p, "no_repeat_ngram_size": penalize_repetion}
- else:
- model.config.task_specific_params["text-generation"] = {"do_sample": sample, "max_length": max_length, "min_length": min_length, "no_repeat_ngram_size": penalize_repetion}
-
- elif task_done == "Sentiment Analysis":
- model = AutoModelForSequenceClassification.from_pretrained(model_name)
- elif task_done == "Translation":
- model = AutoModelForSeq2SeqLM.from_pretrained(model_name)
- elif task_done == "Summarization":
- model = AutoModelForSeq2SeqLM.from_pretrained(model_name)
- elif task_done == "Question Answering":
- #TODO: This one is going to be harder...
- # https://shap.readthedocs.io/en/latest/example_notebooks/text_examples/question_answering/Explaining%20a%20Question%20Answering%20Transformers%20Model.html
- model = AutoModelForQuestionAnswering.from_pretrained(model_name)
-
- return tokenizer, model
-
-tokenizer, model = load_model(model_name)
-
-
-
-
-
-
-if custom_doc:
- df = load_and_process_data(dataset_name, dataset_name_2, True, split_name, number_of_records)
- doc = list(df[index_to_analyze_start:index_to_analyze_end])
- st.write(doc)
-
-if task_done == "Sentiment Analysis":
- pred = pipeline("text-classification", model=model, tokenizer=tokenizer, return_all_scores=True)
- explainer = shap.Explainer(pred, rescale_to_logits = rescale_logits)
-else:
- explainer = shap.Explainer(model, tokenizer)
-
-if custom_doc:
- shap_values = explainer(doc)
-else:
- shap_values = explainer([doc])
-
-
-
-the_plot = shap.plots.text(shap_values, display = False)
-st.caption("The plot is interactive! Try Hovering over or clicking on the input or output text")
-components.html(the_plot, height = output_height, width = output_width, scrolling = True)
-
diff --git a/spaces/HighCWu/GPEN/face_model/op/fused_bias_act.cpp b/spaces/HighCWu/GPEN/face_model/op/fused_bias_act.cpp
deleted file mode 100644
index 02be898f970bcc8ea297867fcaa4e71b24b3d949..0000000000000000000000000000000000000000
--- a/spaces/HighCWu/GPEN/face_model/op/fused_bias_act.cpp
+++ /dev/null
@@ -1,21 +0,0 @@
-#include <torch/extension.h>
-
-
-torch::Tensor fused_bias_act_op(const torch::Tensor& input, const torch::Tensor& bias, const torch::Tensor& refer,
- int act, int grad, float alpha, float scale);
-
-#define CHECK_CUDA(x) TORCH_CHECK(x.type().is_cuda(), #x " must be a CUDA tensor")
-#define CHECK_CONTIGUOUS(x) TORCH_CHECK(x.is_contiguous(), #x " must be contiguous")
-#define CHECK_INPUT(x) CHECK_CUDA(x); CHECK_CONTIGUOUS(x)
-
-torch::Tensor fused_bias_act(const torch::Tensor& input, const torch::Tensor& bias, const torch::Tensor& refer,
- int act, int grad, float alpha, float scale) {
- CHECK_CUDA(input);
- CHECK_CUDA(bias);
-
- return fused_bias_act_op(input, bias, refer, act, grad, alpha, scale);
-}
-
-PYBIND11_MODULE(TORCH_EXTENSION_NAME, m) {
- m.def("fused_bias_act", &fused_bias_act, "fused bias act (CUDA)");
-}
\ No newline at end of file
diff --git a/spaces/HuggingFaceH4/open_llm_leaderboard/src/get_model_info/hardocded_metadata/flags.py b/spaces/HuggingFaceH4/open_llm_leaderboard/src/get_model_info/hardocded_metadata/flags.py
deleted file mode 100644
index cbd47b2608a0e6e07681b0ee1391af8e364ad00b..0000000000000000000000000000000000000000
--- a/spaces/HuggingFaceH4/open_llm_leaderboard/src/get_model_info/hardocded_metadata/flags.py
+++ /dev/null
@@ -1,18 +0,0 @@
-# Models which have been flagged by users as being problematic for a reason or another
-# (Model name to forum discussion link)
-FLAGGED_MODELS = {
- "Voicelab/trurl-2-13b": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard/discussions/202",
- "deepnight-research/llama-2-70B-inst": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard/discussions/207",
- "Aspik101/trurl-2-13b-pl-instruct_unload": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard/discussions/213",
- "Fredithefish/ReasonixPajama-3B-HF": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard/discussions/236",
- "TigerResearch/tigerbot-7b-sft-v1": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard/discussions/237",
- "gaodrew/gaodrew-gorgonzola-13b": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard/discussions/215",
- "AIDC-ai-business/Marcoroni-70B": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard/discussions/287",
- "AIDC-ai-business/Marcoroni-13B": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard/discussions/287",
- "AIDC-ai-business/Marcoroni-7B": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard/discussions/287",
-}
-
-# Models which have been requested by orgs to not be submitted on the leaderboard
-DO_NOT_SUBMIT_MODELS = [
- "Voicelab/trurl-2-13b", # trained on MMLU
-]
diff --git a/spaces/HuguesdeF/moulinette/README.md b/spaces/HuguesdeF/moulinette/README.md
deleted file mode 100644
index fb946ece8788e8e39ad92576774ced43520b7ad4..0000000000000000000000000000000000000000
--- a/spaces/HuguesdeF/moulinette/README.md
+++ /dev/null
@@ -1,39 +0,0 @@
----
-title: Moulinette
-emoji: ⚙️
-colorFrom: indigo
-colorTo: indigo
-sdk: docker
-pinned: false
-license: apache-2.0
----
-# Moulinette Seguin Moreau
-
-## Windows installation
-
-Installing the logo Moulinette on Windows relies on creating a virtual container, called a "Docker image".
-It works like a small virtual machine. To use it, you therefore need to 1/ build this virtual image and 2/ run it.
-
-To install the Moulinette on Windows, here is the procedure:
-* Install Docker Desktop: https://www.docker.com/products/docker-desktop/
-* Open Docker Desktop. Go to Settings/General, then tick "Start Docker Desktop when you log in", so that Docker starts automatically when the computer boots.
-* Open a Windows command prompt by searching for "Run" in the search bar, then typing "cmd" in the Run window.
-* From the command prompt, go to the folder containing the code (and therefore this readme file!). Use the "cd" command to move into a folder.
-* Check that Docker is now reachable after the previous installation by typing ```docker -v```, which should print the installed Docker version.
-* Enter the command:
-```
-docker build . -t moulinette
-```
-Make sure the build went well by typing ```docker images```, which lists all the Docker images present on the computer.
-One image should be named "moulinette".
-
-* Then, once the build is done, enter the command:
-```
-docker run -d --restart unless-stopped -p 8501:8501 moulinette
-```
-This command runs (docker run) the "moulinette" Docker image, maps container port 8501 to host port 8501 (with -p),
- and restarts the Docker image when the computer boots (--restart). Finally, -d launches it in "detached" mode, i.e. as a background task.
-
-* Go to your web browser and enter the URL: localhost:8501
-* Bookmark this page.
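-* Optionally, you can check at any time that the container is running with ```docker ps```, which lists the running containers.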
-
diff --git a/spaces/ICML2022/OFA/fairseq/examples/criss/download_and_preprocess_flores_test.sh b/spaces/ICML2022/OFA/fairseq/examples/criss/download_and_preprocess_flores_test.sh
deleted file mode 100644
index ed4b390fbdee3991efeb298050e12065d7fe605b..0000000000000000000000000000000000000000
--- a/spaces/ICML2022/OFA/fairseq/examples/criss/download_and_preprocess_flores_test.sh
+++ /dev/null
@@ -1,64 +0,0 @@
-#!/bin/bash
-# Copyright (c) Facebook, Inc. and its affiliates.
-# All rights reserved.
-#
-# This source code is licensed under the license found in the
-# LICENSE file in the root directory of this source tree.
-
-SPM_ENCODE=flores/scripts/spm_encode.py
-DATA=data_tmp
-SPM_MODEL=criss_checkpoints/sentence.bpe.model
-DICT=criss_checkpoints/dict.txt
-
-download_data() {
- CORPORA=$1
- URL=$2
-
- if [ -f $CORPORA ]; then
- echo "$CORPORA already exists, skipping download"
- else
- echo "Downloading $URL"
- wget $URL -O $CORPORA --no-check-certificate || rm -f $CORPORA
- if [ -f $CORPORA ]; then
- echo "$URL successfully downloaded."
- else
- echo "$URL not successfully downloaded."
- rm -f $CORPORA
- fi
- fi
-}
-
-if [[ -d flores ]]; then
- echo "flores already cloned"
-else
- git clone https://github.com/facebookresearch/flores
-fi
-
-mkdir -p $DATA
-download_data $DATA/wikipedia_en_ne_si_test_sets.tgz "https://github.com/facebookresearch/flores/raw/master/data/wikipedia_en_ne_si_test_sets.tgz"
-pushd $DATA
-pwd
-tar -vxf wikipedia_en_ne_si_test_sets.tgz
-popd
-
-
-for lang in ne_NP si_LK; do
- datadir=$DATA/${lang}-en_XX-flores
- rm -rf $datadir
- mkdir -p $datadir
- TEST_PREFIX=$DATA/wikipedia_en_ne_si_test_sets/wikipedia.test
- python $SPM_ENCODE \
- --model ${SPM_MODEL} \
- --output_format=piece \
- --inputs ${TEST_PREFIX}.${lang:0:2}-en.${lang:0:2} ${TEST_PREFIX}.${lang:0:2}-en.en \
- --outputs $datadir/test.bpe.${lang}-en_XX.${lang} $datadir/test.bpe.${lang}-en_XX.en_XX
-
- # binarize data
- fairseq-preprocess \
- --source-lang ${lang} --target-lang en_XX \
- --testpref $datadir/test.bpe.${lang}-en_XX \
- --destdir $datadir \
- --srcdict ${DICT} \
- --joined-dictionary \
- --workers 4
-done
diff --git a/spaces/ICML2022/OFA/fairseq/examples/noisychannel/__init__.py b/spaces/ICML2022/OFA/fairseq/examples/noisychannel/__init__.py
deleted file mode 100644
index 89f1aef4f6328d25425e0bcabb42dfffd2ed35f0..0000000000000000000000000000000000000000
--- a/spaces/ICML2022/OFA/fairseq/examples/noisychannel/__init__.py
+++ /dev/null
@@ -1,6 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-from .rerank_options import * # noqa
diff --git a/spaces/ICML2022/resefa/utils/loggers/__init__.py b/spaces/ICML2022/resefa/utils/loggers/__init__.py
deleted file mode 100644
index 665fd01dc34ae7a520dadfe4581c97e59dd6affe..0000000000000000000000000000000000000000
--- a/spaces/ICML2022/resefa/utils/loggers/__init__.py
+++ /dev/null
@@ -1,32 +0,0 @@
-# python3.7
-"""Collects all loggers."""
-
-from .normal_logger import NormalLogger
-from .rich_logger import RichLogger
-from .dummy_logger import DummyLogger
-
-__all__ = ['build_logger']
-
-_LOGGERS = {
- 'normal': NormalLogger,
- 'rich': RichLogger,
- 'dummy': DummyLogger
-}
-
-
-def build_logger(logger_type='normal', **kwargs):
- """Builds a logger.
-
- Args:
- logger_type: Type of logger, which is case insensitive.
- (default: `normal`)
- **kwargs: Additional arguments to build the logger.
-
- Raises:
- ValueError: If the `logger_type` is not supported.
- """
- logger_type = logger_type.lower()
- if logger_type not in _LOGGERS:
- raise ValueError(f'Invalid logger type: `{logger_type}`!\n'
- f'Types allowed: {list(_LOGGERS)}.')
- return _LOGGERS[logger_type](**kwargs)
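-
-# Illustrative usage (a sketch only; the accepted kwargs depend on the chosen logger class):
-#   logger = build_logger(logger_type='rich', **logger_kwargs)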
diff --git a/spaces/Iceclear/StableSR/StableSR/ldm/util.py b/spaces/Iceclear/StableSR/StableSR/ldm/util.py
deleted file mode 100644
index 1b1301a55396c445ecdb28cc444fa10fcbd06391..0000000000000000000000000000000000000000
--- a/spaces/Iceclear/StableSR/StableSR/ldm/util.py
+++ /dev/null
@@ -1,211 +0,0 @@
-import importlib
-
-import torch
-import numpy as np
-from collections import abc
-from einops import rearrange
-from functools import partial
-
-import multiprocessing as mp
-from threading import Thread
-from queue import Queue
-
-from inspect import isfunction
-from PIL import Image, ImageDraw, ImageFont
-
-
-def log_txt_as_img(wh, xc, size=10):
- # wh a tuple of (width, height)
- # xc a list of captions to plot
- b = len(xc)
- txts = list()
- for bi in range(b):
- txt = Image.new("RGB", wh, color="white")
- draw = ImageDraw.Draw(txt)
- font = ImageFont.truetype('data/DejaVuSans.ttf', size=size)
- nc = int(40 * (wh[0] / 256))
- lines = "\n".join(xc[bi][start:start + nc] for start in range(0, len(xc[bi]), nc))
-
- try:
- draw.text((0, 0), lines, fill="black", font=font)
- except UnicodeEncodeError:
-            print("Can't encode string for logging. Skipping.")
-
- txt = np.array(txt).transpose(2, 0, 1) / 127.5 - 1.0
- txts.append(txt)
- txts = np.stack(txts)
- txts = torch.tensor(txts)
- return txts
-
-
-def ismap(x):
- if not isinstance(x, torch.Tensor):
- return False
- return (len(x.shape) == 4) and (x.shape[1] > 3)
-
-
-def isimage(x):
- if not isinstance(x, torch.Tensor):
- return False
- return (len(x.shape) == 4) and (x.shape[1] == 3 or x.shape[1] == 1)
-
-
-def exists(x):
- return x is not None
-
-
-def default(val, d):
- if exists(val):
- return val
- return d() if isfunction(d) else d
-
-
-def mean_flat(tensor):
- """
- https://github.com/openai/guided-diffusion/blob/27c20a8fab9cb472df5d6bdd6c8d11c8f430b924/guided_diffusion/nn.py#L86
- Take the mean over all non-batch dimensions.
- """
- return tensor.mean(dim=list(range(1, len(tensor.shape))))
-
-
-def count_params(model, verbose=False):
- total_params = sum(p.numel() for p in model.parameters())
- if verbose:
- print(f"{model.__class__.__name__} has {total_params * 1.e-6:.2f} M params.")
- return total_params
-
-
-def instantiate_from_config(config):
- if not "target" in config:
- if config == '__is_first_stage__':
- return None
- elif config == "__is_unconditional__":
- return None
- raise KeyError("Expected key `target` to instantiate.")
- return get_obj_from_str(config["target"])(**config.get("params", dict()))
-
-def instantiate_from_config_sr(config):
- if not "target" in config:
- if config == '__is_first_stage__':
- return None
- elif config == "__is_unconditional__":
- return None
- raise KeyError("Expected key `target` to instantiate.")
- return get_obj_from_str(config["target"])(config.get("params", dict()))
-
-def get_obj_from_str(string, reload=False):
- module, cls = string.rsplit(".", 1)
- if reload:
- module_imp = importlib.import_module(module)
- importlib.reload(module_imp)
- return getattr(importlib.import_module(module, package=None), cls)
-
-
-def _do_parallel_data_prefetch(func, Q, data, idx, idx_to_fn=False):
- # create dummy dataset instance
-
- # run prefetching
- if idx_to_fn:
- res = func(data, worker_id=idx)
- else:
- res = func(data)
- Q.put([idx, res])
- Q.put("Done")
-
-
-def parallel_data_prefetch(
- func: callable, data, n_proc, target_data_type="ndarray", cpu_intensive=True, use_worker_id=False
-):
- # if target_data_type not in ["ndarray", "list"]:
- # raise ValueError(
- # "Data, which is passed to parallel_data_prefetch has to be either of type list or ndarray."
- # )
- if isinstance(data, np.ndarray) and target_data_type == "list":
- raise ValueError("list expected but function got ndarray.")
- elif isinstance(data, abc.Iterable):
- if isinstance(data, dict):
- print(
- f'WARNING:"data" argument passed to parallel_data_prefetch is a dict: Using only its values and disregarding keys.'
- )
- data = list(data.values())
- if target_data_type == "ndarray":
- data = np.asarray(data)
- else:
- data = list(data)
- else:
- raise TypeError(
- f"The data, that shall be processed parallel has to be either an np.ndarray or an Iterable, but is actually {type(data)}."
- )
-
- if cpu_intensive:
- Q = mp.Queue(1000)
- proc = mp.Process
- else:
- Q = Queue(1000)
- proc = Thread
- # spawn processes
- if target_data_type == "ndarray":
- arguments = [
- [func, Q, part, i, use_worker_id]
- for i, part in enumerate(np.array_split(data, n_proc))
- ]
- else:
- step = (
- int(len(data) / n_proc + 1)
- if len(data) % n_proc != 0
- else int(len(data) / n_proc)
- )
- arguments = [
- [func, Q, part, i, use_worker_id]
- for i, part in enumerate(
- [data[i: i + step] for i in range(0, len(data), step)]
- )
- ]
- processes = []
- for i in range(n_proc):
- p = proc(target=_do_parallel_data_prefetch, args=arguments[i])
- processes += [p]
-
- # start processes
- print(f"Start prefetching...")
- import time
-
- start = time.time()
- gather_res = [[] for _ in range(n_proc)]
- try:
- for p in processes:
- p.start()
-
- k = 0
- while k < n_proc:
- # get result
- res = Q.get()
- if res == "Done":
- k += 1
- else:
- gather_res[res[0]] = res[1]
-
- except Exception as e:
- print("Exception: ", e)
- for p in processes:
- p.terminate()
-
- raise e
- finally:
- for p in processes:
- p.join()
- print(f"Prefetching complete. [{time.time() - start} sec.]")
-
- if target_data_type == 'ndarray':
- if not isinstance(gather_res[0], np.ndarray):
- return np.concatenate([np.asarray(r) for r in gather_res], axis=0)
-
- # order outputs
- return np.concatenate(gather_res, axis=0)
- elif target_data_type == 'list':
- out = []
- for r in gather_res:
- out.extend(r)
- return out
- else:
- return gather_res
diff --git a/spaces/Intel/NeuralChat-ICX-INT4/fastchat/serve/gradio_block_arena_named.py b/spaces/Intel/NeuralChat-ICX-INT4/fastchat/serve/gradio_block_arena_named.py
deleted file mode 100644
index 0db977bad8887f7b7a653b835bac508efd65aba6..0000000000000000000000000000000000000000
--- a/spaces/Intel/NeuralChat-ICX-INT4/fastchat/serve/gradio_block_arena_named.py
+++ /dev/null
@@ -1,395 +0,0 @@
-import json
-import time
-
-import gradio as gr
-import numpy as np
-
-from fastchat.conversation import get_default_conv_template
-from fastchat.utils import (
- build_logger,
- violates_moderation,
- moderation_msg,
-)
-from fastchat.serve.gradio_patch import Chatbot as grChatbot
-from fastchat.serve.gradio_web_server import (
- http_bot,
- get_conv_log_filename,
- no_change_btn,
- enable_btn,
- disable_btn,
-)
-
-
-logger = build_logger("gradio_web_server_multi", "gradio_web_server_multi.log")
-
-num_models = 2
-enable_moderation = False
-
-
-def set_global_vars_named(enable_moderation_):
- global enable_moderation
- enable_moderation = enable_moderation_
-
-
-def load_demo_side_by_side_named(models, url_params):
- states = (None,) * num_models
-
- model_left = models[0]
- if len(models) > 1:
- weights = ([8, 4, 2, 1] + [1] * 32)[:len(models) - 1]
- weights = weights / np.sum(weights)
- model_right = np.random.choice(models[1:], p=weights)
- else:
- model_right = model_left
-
- selector_updates = (
- gr.Dropdown.update(model_left, visible=True),
- gr.Dropdown.update(model_right, visible=True),
- )
-
- return (
- states
- + selector_updates
- + (gr.Chatbot.update(visible=True),) * num_models
- + (
- gr.Textbox.update(visible=True),
- gr.Box.update(visible=True),
- gr.Row.update(visible=True),
- gr.Row.update(visible=True),
- gr.Accordion.update(visible=True),
- )
- )
-
-
-def vote_last_response(states, vote_type, model_selectors, request: gr.Request):
- with open(get_conv_log_filename(), "a") as fout:
- data = {
- "tstamp": round(time.time(), 4),
- "type": vote_type,
- "models": [x for x in model_selectors],
- "states": [x.dict() for x in states],
- "ip": request.client.host,
- }
- fout.write(json.dumps(data) + "\n")
-
-
-def leftvote_last_response(
- state0, state1, model_selector0, model_selector1, request: gr.Request
-):
- logger.info(f"leftvote (named). ip: {request.client.host}")
- vote_last_response(
- [state0, state1], "leftvote", [model_selector0, model_selector1], request
- )
- return ("",) + (disable_btn,) * 3
-
-
-def rightvote_last_response(
- state0, state1, model_selector0, model_selector1, request: gr.Request
-):
- logger.info(f"rightvote (named). ip: {request.client.host}")
- vote_last_response(
- [state0, state1], "rightvote", [model_selector0, model_selector1], request
- )
- return ("",) + (disable_btn,) * 3
-
-
-def tievote_last_response(
- state0, state1, model_selector0, model_selector1, request: gr.Request
-):
- logger.info(f"tievote (named). ip: {request.client.host}")
- vote_last_response(
- [state0, state1], "tievote", [model_selector0, model_selector1], request
- )
- return ("",) + (disable_btn,) * 3
-
-
-def regenerate(state0, state1, request: gr.Request):
- logger.info(f"regenerate (named). ip: {request.client.host}")
- states = [state0, state1]
- for i in range(num_models):
- states[i].messages[-1][-1] = None
- states[i].skip_next = False
- return states + [x.to_gradio_chatbot() for x in states] + [""] + [disable_btn] * 5
-
-
-def clear_history(request: gr.Request):
- logger.info(f"clear_history (named). ip: {request.client.host}")
- return [None] * num_models + [None] * num_models + [""] + [disable_btn] * 5
-
-
-def share_click(state0, state1, model_selector0, model_selector1,
- request: gr.Request):
- logger.info(f"share (named). ip: {request.client.host}")
- if state0 is not None and state1 is not None:
- vote_last_response(
- [state0, state1], "share", [model_selector0, model_selector1], request
- )
-
-
-def add_text(state0, state1, text, request: gr.Request):
- logger.info(f"add_text (named). ip: {request.client.host}. len: {len(text)}")
- states = [state0, state1]
-
- for i in range(num_models):
- if states[i] is None:
- states[i] = get_default_conv_template("vicuna").copy()
-
- if len(text) <= 0:
- for i in range(num_models):
- states[i].skip_next = True
- return (
- states
- + [x.to_gradio_chatbot() for x in states]
- + [""]
- + [
- no_change_btn,
- ]
- * 5
- )
-
- if enable_moderation:
- flagged = violates_moderation(text)
- if flagged:
- logger.info(f"violate moderation (named). ip: {request.client.host}. text: {text}")
- for i in range(num_models):
- states[i].skip_next = True
- return (
- states
- + [x.to_gradio_chatbot() for x in states]
- + [moderation_msg]
- + [
- no_change_btn,
- ]
- * 5
- )
-
- text = text[:1536] # Hard cut-off
- for i in range(num_models):
- states[i].append_message(states[i].roles[0], text)
- states[i].append_message(states[i].roles[1], None)
- states[i].skip_next = False
-
- return (
- states
- + [x.to_gradio_chatbot() for x in states]
- + [""]
- + [
- disable_btn,
- ]
- * 5
- )
-
-
-def http_bot_all(
- state0,
- state1,
- model_selector0,
- model_selector1,
- temperature,
- max_new_tokens,
- request: gr.Request,
-):
- logger.info(f"http_bot_all (named). ip: {request.client.host}")
- states = [state0, state1]
- model_selector = [model_selector0, model_selector1]
- gen = []
- for i in range(num_models):
- gen.append(
- http_bot(states[i], model_selector[i], temperature, max_new_tokens, request)
- )
-
- chatbots = [None] * num_models
- while True:
- stop = True
- for i in range(num_models):
- try:
- ret = next(gen[i])
- states[i], chatbots[i] = ret[0], ret[1]
- buttons = ret[2:]
- stop = False
- except StopIteration:
- pass
- yield states + chatbots + list(buttons)
- if stop:
- break
-
- for i in range(10):
- if i % 2 == 0:
- yield states + chatbots + [disable_btn] * 3 + list(buttons)[3:]
- else:
- yield states + chatbots + list(buttons)
- time.sleep(0.2)
-
-
-def build_side_by_side_ui_named(models):
- notice_markdown = """
-# ⚔️ Chatbot Arena ⚔️
-Rules:
-- Chat with two models side-by-side and vote for which one is better!
-- You pick the models you want to chat with.
-- You can continue chatting and voting or click "Clear history" to start a new round.
-- A leaderboard will be available soon.
-- [[GitHub]](https://github.com/lm-sys/FastChat) [[Twitter]](https://twitter.com/lmsysorg) [[Discord]](https://discord.gg/h6kCZb72G7)
-
-### Terms of use
-By using this service, users are required to agree to the following terms: The service is a research preview intended for non-commercial use only. It only provides limited safety measures and may generate offensive content. It must not be used for any illegal, harmful, violent, racist, or sexual purposes. **The service collects user dialogue data for future research.**
-The demo works better on desktop devices with a wide screen.
-
-### Choose two models to chat with
-| | |
-| ---- | ---- |
-| [Vicuna](https://vicuna.lmsys.org): a chat assistant fine-tuned from LLaMA on user-shared conversations by LMSYS. | [Koala](https://bair.berkeley.edu/blog/2023/04/03/koala/): a dialogue model for academic research by BAIR |
-| [OpenAssistant (oasst)](https://open-assistant.io/): a chat-based assistant for everyone by LAION. | [Dolly](https://www.databricks.com/blog/2023/04/12/dolly-first-open-commercially-viable-instruction-tuned-llm): an instruction-tuned open large language model by Databricks. |
-| [ChatGLM](https://chatglm.cn/blog): an open bilingual dialogue language model by Tsinghua University | [StableLM](https://github.com/stability-AI/stableLM/): Stability AI language models. |
-| [Alpaca](https://crfm.stanford.edu/2023/03/13/alpaca.html): a model fine-tuned from LLaMA on instruction-following demonstrations by Stanford. | [LLaMA](https://arxiv.org/abs/2302.13971): open and efficient foundation language models by Meta. |
-"""
-
- learn_more_markdown = """
-### License
-The service is a research preview intended for non-commercial use only, subject to the model [License](https://github.com/facebookresearch/llama/blob/main/MODEL_CARD.md) of LLaMA, [Terms of Use](https://openai.com/policies/terms-of-use) of the data generated by OpenAI, and [Privacy Practices](https://chrome.google.com/webstore/detail/sharegpt-share-your-chatg/daiacboceoaocpibfodeljbdfacokfjb) of ShareGPT. Please contact us if you find any potential violation.
-"""
-
- states = [gr.State() for _ in range(num_models)]
- model_selectors = [None] * num_models
- chatbots = [None] * num_models
-
- notice = gr.Markdown(notice_markdown, elem_id="notice_markdown")
-
- with gr.Box(elem_id="share-region"):
- with gr.Row():
- for i in range(num_models):
- with gr.Column():
- model_selectors[i] = gr.Dropdown(
- choices=models,
- value=models[i] if len(models) > i else "",
- interactive=True,
- show_label=False,
- ).style(container=False)
-
- with gr.Row():
- for i in range(num_models):
- label = "Model A" if i == 0 else "Model B"
- with gr.Column():
- chatbots[i] = grChatbot(label=label, elem_id=f"chatbot{i}",
- visible=False).style(height=550)
-
- with gr.Box() as button_row:
- with gr.Row():
- leftvote_btn = gr.Button(value="👈 A is better", interactive=False)
- tie_btn = gr.Button(value="🤝 Tie", interactive=False)
- rightvote_btn = gr.Button(value="👉 B is better", interactive=False)
-
- with gr.Row():
- with gr.Column(scale=20):
- textbox = gr.Textbox(
- show_label=False,
- placeholder="Enter text and press ENTER",
- visible=False,
- ).style(container=False)
- with gr.Column(scale=1, min_width=50):
- send_btn = gr.Button(value="Send", visible=False)
-
- with gr.Row() as button_row2:
- regenerate_btn = gr.Button(value="🔄 Regenerate", interactive=False)
- clear_btn = gr.Button(value="🗑️ Clear history", interactive=False)
- share_btn = gr.Button(value="📷 Share")
-
- with gr.Accordion("Parameters", open=False, visible=True) as parameter_row:
- temperature = gr.Slider(
- minimum=0.0,
- maximum=1.0,
- value=0.7,
- step=0.1,
- interactive=True,
- label="Temperature",
- )
- max_output_tokens = gr.Slider(
- minimum=0,
- maximum=1024,
- value=512,
- step=64,
- interactive=True,
- label="Max output tokens",
- )
-
- gr.Markdown(learn_more_markdown)
-
- # Register listeners
- btn_list = [leftvote_btn, rightvote_btn, tie_btn, regenerate_btn, clear_btn]
- leftvote_btn.click(
- leftvote_last_response,
- states + model_selectors,
- [textbox, leftvote_btn, rightvote_btn, tie_btn],
- )
- rightvote_btn.click(
- rightvote_last_response,
- states + model_selectors,
- [textbox, leftvote_btn, rightvote_btn, tie_btn],
- )
- tie_btn.click(
- tievote_last_response,
- states + model_selectors,
- [textbox, leftvote_btn, rightvote_btn, tie_btn],
- )
- regenerate_btn.click(
- regenerate, states, states + chatbots + [textbox] + btn_list
- ).then(
- http_bot_all,
- states + model_selectors + [temperature, max_output_tokens],
- states + chatbots + btn_list,
- )
- clear_btn.click(clear_history, None, states + chatbots + [textbox] + btn_list)
-
- share_js="""
-function (a, b, c, d) {
- const captureElement = document.querySelector('#share-region');
- html2canvas(captureElement)
- .then(canvas => {
- canvas.style.display = 'none'
- document.body.appendChild(canvas)
- return canvas
- })
- .then(canvas => {
- const image = canvas.toDataURL('image/png')
- const a = document.createElement('a')
- a.setAttribute('download', 'chatbot-arena.png')
- a.setAttribute('href', image)
- a.click()
- canvas.remove()
- });
- return [a, b, c, d];
-}
-"""
- share_btn.click(share_click, states + model_selectors, [], _js=share_js)
-
- for i in range(num_models):
- model_selectors[i].change(
- clear_history, None, states + chatbots + [textbox] + btn_list
- )
-
- textbox.submit(
- add_text, states + [textbox], states + chatbots + [textbox] + btn_list
- ).then(
- http_bot_all,
- states + model_selectors + [temperature, max_output_tokens],
- states + chatbots + btn_list,
- )
- send_btn.click(
- add_text, states + [textbox], states + chatbots + [textbox] + btn_list
- ).then(
- http_bot_all,
- states + model_selectors + [temperature, max_output_tokens],
- states + chatbots + btn_list,
- )
-
- return (
- states,
- model_selectors,
- chatbots,
- textbox,
- send_btn,
- button_row,
- button_row2,
- parameter_row,
- )
-
diff --git a/spaces/Juno360219/cloudqi-cqi_text_to_image_pt_v0/README.md b/spaces/Juno360219/cloudqi-cqi_text_to_image_pt_v0/README.md
deleted file mode 100644
index c6cc054cd7fea45bcfdb0c3d0a0c4590c62656d9..0000000000000000000000000000000000000000
--- a/spaces/Juno360219/cloudqi-cqi_text_to_image_pt_v0/README.md
+++ /dev/null
@@ -1,20 +0,0 @@
----
-title: Shiny for Python template
-emoji: 🌍
-colorFrom: yellow
-colorTo: indigo
-sdk: docker
-pinned: false
-license: mit
-duplicated_from: posit/shiny-for-python-template
----
-
-This is a templated Space for [Shiny for Python](https://shiny.rstudio.com/py/).
-
-To get started with a new app do the following:
-
-1) Install Shiny with `pip install shiny`
-2) Create a new app with `shiny create .`
-3) Then run the app with `shiny run --reload`
-
-To learn more about this framework please see the [Documentation](https://shiny.rstudio.com/py/docs/overview.html).
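-
-For reference, a minimal `app.py` looks roughly like the sketch below (written as an illustration, not the exact file that `shiny create` generates):
-
-```python
-from shiny import App, render, ui
-
-# One slider as input, one text field as output
-app_ui = ui.page_fluid(
-    ui.input_slider("n", "N", min=1, max=100, value=50),
-    ui.output_text("txt"),
-)
-
-def server(input, output, session):
-    @output
-    @render.text
-    def txt():
-        return f"n is {input.n()}"
-
-app = App(app_ui, server)
-```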
diff --git a/spaces/Kayson/InstructDiffusion/dataset/pose/pose.py b/spaces/Kayson/InstructDiffusion/dataset/pose/pose.py
deleted file mode 100644
index beb9ec8afc0ff60f8e431bc27005cb271af495c6..0000000000000000000000000000000000000000
--- a/spaces/Kayson/InstructDiffusion/dataset/pose/pose.py
+++ /dev/null
@@ -1,760 +0,0 @@
-# ------------------------------------------------------------------------------
-# Copyright (c) Microsoft
-# Licensed under the MIT License.
-# Written by Bin Xiao (Bin.Xiao@microsoft.com)
-# Modified by Zigang Geng (zigang@mail.ustc.edu.cn)
-# ------------------------------------------------------------------------------
-
-from __future__ import annotations
-
-import logging
-import os
-import json
-import copy
-import math
-import random
-from pathlib import Path
-from typing import Any
-
-import cv2
-import numpy as np
-import torch
-import torchvision
-from einops import rearrange
-from PIL import Image
-from torch.utils.data import Dataset
-import torchvision.transforms as transforms
-from pycocotools.coco import COCO
-
-
-logger = logging.getLogger(__name__)
-
-
-colors = {
- 'red': (255, 0, 0),
- 'green': (0, 255, 0),
- 'blue': (0, 0, 255),
- 'yellow': (255, 255, 0),
- 'cyan': (0, 255, 255),
- 'magenta': (255, 0, 255),
- 'gray': (128, 128, 128),
- 'white': (255, 255, 255),
- 'black': (0, 0, 0)}
-
-
-def readTXT(txt_path):
- with open(txt_path, 'r') as f:
- listInTXT = [line.strip() for line in f]
-
- return listInTXT
-
-
-class PoseDataset(Dataset):
- def __init__(self, root, image_set, is_train, max_prompt_num=5, min_prompt_num=1,
- radius=10, size=256, transparency=0.0, sample_weight=1.0, transform=None):
-
- self.sample_weight = sample_weight
- self.max_prompt_num = max_prompt_num
- self.min_prompt_num = min_prompt_num
- self.radius = radius
- self.transparency = transparency
- self.num_joints = 0
- self.pixel_std = 200
- self.flip_pairs = []
- self.parent_ids = []
-
- self.keypoints_type = {}
-
- self.is_train = is_train
- self.image_set = image_set
- self.root = root
-
- self.scale_factor = 0.35
- self.rotation_factor = 45
- self.flip = True
- self.num_joints_half_body = 8
- self.prob_half_body = 0.3
-
- self.image_size = np.array((size, size))
- self.heatmap_size = np.array((size, size))
-
- self.transform = transform
- self.db = []
-
- pose_diverse_prompt_path = 'dataset/prompt/prompt_pose.txt'
- self.pose_diverse_prompt_list = []
- with open(pose_diverse_prompt_path) as f:
- line = f.readline()
- while line:
- line = line.strip('\n')
- self.pose_diverse_prompt_list.append(line)
- line = f.readline()
-
- def _get_db(self):
- raise NotImplementedError
-
- def evaluate(self, preds, output_dir, *args, **kwargs):
- raise NotImplementedError
-
- def half_body_transform(self, joints, joints_vis):
- upper_joints = []
- lower_joints = []
- for joint_id in range(self.num_joints):
- if joints_vis[joint_id][0] > 0:
- if joint_id in self.upper_body_ids:
- upper_joints.append(joints[joint_id])
- else:
- lower_joints.append(joints[joint_id])
-
- if np.random.randn() < 0.5 and len(upper_joints) > 2:
- selected_joints = upper_joints
- else:
- selected_joints = lower_joints \
- if len(lower_joints) > 2 else upper_joints
-
- if len(selected_joints) < 2:
- return None, None
-
- selected_joints = np.array(selected_joints, dtype=np.float32)
- center = selected_joints.mean(axis=0)[:2]
-
- left_top = np.amin(selected_joints, axis=0)
- right_bottom = np.amax(selected_joints, axis=0)
-
- w = right_bottom[0] - left_top[0]
- h = right_bottom[1] - left_top[1]
-
- if w > self.aspect_ratio * h:
- h = w * 1.0 / self.aspect_ratio
- elif w < self.aspect_ratio * h:
- w = h * self.aspect_ratio
-
- scale = np.array(
- [
- w * 1.0 / self.pixel_std,
- h * 1.0 / self.pixel_std
- ],
- dtype=np.float32
- )
-
- scale = scale * 1.5
-
- return center, scale
-
- def __len__(self,):
- return int(len(self.db) * self.sample_weight)
-
- def __getitem__(self, idx):
- if self.sample_weight >= 1:
- idx = idx % len(self.db)
- else:
- idx = int(idx / self.sample_weight) + random.randint(0, int(1 / self.sample_weight) - 1)
-
- db_rec = copy.deepcopy(self.db[idx])
-
- image_file = db_rec['image']
- filename = db_rec['filename'] if 'filename' in db_rec else ''
- imgnum = db_rec['imgnum'] if 'imgnum' in db_rec else ''
-
- data_numpy = cv2.imread(
- image_file, cv2.IMREAD_COLOR | cv2.IMREAD_IGNORE_ORIENTATION
- )
- data_numpy = cv2.cvtColor(data_numpy, cv2.COLOR_BGR2RGB)
-
- if data_numpy is None:
- logger.error('=> fail to read {}'.format(image_file))
- raise ValueError('Fail to read {}'.format(image_file))
-
- joints = db_rec['joints_3d']
- joints_vis = db_rec['joints_3d_vis']
-
- c = db_rec['center']
- s = db_rec['scale']
- score = db_rec['score'] if 'score' in db_rec else 1
- r = 0
-
- if self.is_train:
- if (np.sum(joints_vis[:, 0]) > self.num_joints_half_body
- and np.random.rand() < self.prob_half_body):
- c_half_body, s_half_body = self.half_body_transform(
- joints, joints_vis
- )
-
- if c_half_body is not None and s_half_body is not None:
- c, s = c_half_body, s_half_body
-
- sf = self.scale_factor
- rf = self.rotation_factor
- s = s * np.clip(np.random.randn()*sf + 1, 1 - sf, 1 + sf)
- r = np.clip(np.random.randn()*rf, -rf*2, rf*2) \
- if random.random() <= 0.6 else 0
-
- if self.flip and random.random() <= 0.5:
- data_numpy = data_numpy[:, ::-1, :]
- joints, joints_vis = fliplr_joints(
- joints, joints_vis, data_numpy.shape[1], self.flip_pairs)
- c[0] = data_numpy.shape[1] - c[0] - 1
-
- trans = get_affine_transform(c, s, r, self.image_size)
- input = cv2.warpAffine(
- data_numpy,
- trans,
- (int(self.image_size[0]), int(self.image_size[1])),
- flags=cv2.INTER_LINEAR)
-
- if self.transform:
- input = self.transform(input)
-
- for i in range(self.num_joints):
- if joints_vis[i, 0] > 0.0:
- joints[i, 0:2] = affine_transform(joints[i, 0:2], trans)
-
- target, prompt = self.generate_target(input, joints, joints_vis)
-
- # return Image.fromarray(input), Image.fromarray(target), prompt
-
- image_0 = rearrange(2 * torch.tensor(np.array(input)).float() / 255 - 1, "h w c -> c h w")
- image_1 = rearrange(2 * torch.tensor(np.array(target)).float() / 255 - 1, "h w c -> c h w")
-
- return dict(edited=image_1, edit=dict(c_concat=image_0, c_crossattn=prompt))
-
- def generate_target(self, input, joints, joints_vis):
- '''
- :param input: [height, width, 3]
- :param joints: [num_joints, 3]
- :param joints_vis: [num_joints, 3]
- :return: target
- '''
- radius = self.radius
- target = copy.deepcopy(input)
-
- joint_num = random.randint(self.min_prompt_num, self.max_prompt_num)
- joint_ids = np.random.choice([i for i in range(self.num_joints)], joint_num, replace=False)
- random_color_names = random.sample(list(colors.keys()), len(joint_ids))
- random_marker_names = ['circle' for i in range(len(joint_ids))]
-
- prompt = ""
-
- for color_idx, joint_id in enumerate(joint_ids):
- feat_stride = self.image_size / self.heatmap_size
- mu_x = int(joints[joint_id][0] / feat_stride[0] + 0.5)
- mu_y = int(joints[joint_id][1] / feat_stride[1] + 0.5)
- # Check that any part of the gaussian is in-bounds
- ul = [int(mu_x - radius), int(mu_y - radius)]
- br = [int(mu_x + radius + 1), int(mu_y + radius + 1)]
- if ul[0] >= self.heatmap_size[0] or ul[1] >= self.heatmap_size[1] \
- or br[0] < 0 or br[1] < 0:
- # If not, just return the image as is
- joints_vis[joint_id][0] = 0
- continue
-
- marker_size = 2 * radius + 1
- g = np.zeros((marker_size, marker_size))
- x, y = np.indices((marker_size, marker_size))
- interval = int((marker_size - marker_size / math.sqrt(2)) // 2)
- mask = (x - radius) ** 2 + (y - radius) ** 2 <= radius ** 2 + 1
- g[mask] = 1
-
- # Usable gaussian range
- g_x = max(0, -ul[0]), min(br[0], self.heatmap_size[0]) - ul[0]
- g_y = max(0, -ul[1]), min(br[1], self.heatmap_size[1]) - ul[1]
- # Image range
- img_x = max(0, ul[0]), min(br[0], self.heatmap_size[0])
- img_y = max(0, ul[1]), min(br[1], self.heatmap_size[1])
-
- v = joints_vis[joint_id][0]
- random_color_name = random_color_names[color_idx]
- random_color = colors[random_color_name]
-
- prompt += random.choice(self.pose_diverse_prompt_list).format(
- color=random_color_name,
- joint=self.keypoints_type[joint_id])
-
- if v > 0.5:
- target[img_y[0]:img_y[1], img_x[0]:img_x[1]][g[g_y[0]:g_y[1], g_x[0]:g_x[1]]>0] \
- = self.transparency*target[img_y[0]:img_y[1], img_x[0]:img_x[1]][g[g_y[0]:g_y[1], g_x[0]:g_x[1]]>0] \
- + (1-self.transparency)*np.array(random_color)
-
- return target, prompt
-
-
-class COCODataset(PoseDataset):
- def __init__(self, root, image_set, is_train, max_prompt_num=5, min_prompt_num=1,
- radius=10, size=256, transparency=0.0, sample_weight=1.0, transform=None):
-
- super().__init__(root, image_set, is_train, max_prompt_num, min_prompt_num,
- radius, size, transparency, sample_weight, transform)
-
- self.keypoints_type = {
- 0: "nose",
- 1: "left eye",
- 2: "right eye",
- 3: "left ear",
- 4: "right ear",
- 5: "left shoulder",
- 6: "right shoulder",
- 7: "left elbow",
- 8: "right elbow",
- 9: "left wrist",
- 10: "right wrist",
- 11: "left hip",
- 12: "right hip",
- 13: "left knee",
- 14: "right knee",
- 15: "left ankle",
- 16: "right ankle"
- }
-
- self.image_width = size
- self.image_height = size
- self.aspect_ratio = self.image_width * 1.0 / self.image_height
- self.pixel_std = 200
-
- self.coco = COCO(self._get_ann_file_keypoint())
-
- # deal with class names
- cats = [cat['name']
- for cat in self.coco.loadCats(self.coco.getCatIds())]
- self.classes = ['__background__'] + cats
- logger.info('=> classes: {}'.format(self.classes))
- self.num_classes = len(self.classes)
- self._class_to_ind = dict(zip(self.classes, range(self.num_classes)))
- self._class_to_coco_ind = dict(zip(cats, self.coco.getCatIds()))
- self._coco_ind_to_class_ind = dict(
- [
- (self._class_to_coco_ind[cls], self._class_to_ind[cls])
- for cls in self.classes[1:]
- ]
- )
-
- # load image file names
- self.image_set_index = self._load_image_set_index()
- self.num_images = len(self.image_set_index)
- logger.info('=> num_images: {}'.format(self.num_images))
-
- self.num_joints = 17
- self.flip_pairs = [[1, 2], [3, 4], [5, 6], [7, 8],
- [9, 10], [11, 12], [13, 14], [15, 16]]
- self.parent_ids = None
- self.upper_body_ids = (0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10)
- self.lower_body_ids = (11, 12, 13, 14, 15, 16)
-
- if 'coco' in self.root:
- self.db = self._get_db()
-
- logger.info('=> load {} samples'.format(len(self.db)))
-
- def _get_ann_file_keypoint(self):
- """ self.root / annotations / person_keypoints_train2017.json """
- if 'coco' in self.root:
- prefix = 'person_keypoints' \
- if 'test' not in self.image_set else 'image_info'
- return os.path.join(
- self.root,
- 'annotations',
- prefix + '_' + self.image_set + '.json'
- )
- elif 'crowdpose' in self.root:
- prefix = 'crowdpose'
- return os.path.join(
- self.root,
- 'json',
- prefix + '_' + self.image_set + '.json'
- )
- elif 'aic' in self.root:
- prefix = 'aic'
- return os.path.join(
- self.root,
- 'annotations',
- prefix + '_' + self.image_set + '.json'
- )
- else:
- raise ValueError('Please write the path for this new dataset.')
-
- def _load_image_set_index(self):
- """ image id: int """
- image_ids = self.coco.getImgIds()
- return image_ids
-
- def _get_db(self):
- gt_db = self._load_coco_keypoint_annotations()
- return gt_db
-
- def _load_coco_keypoint_annotations(self):
- """ ground truth bbox and keypoints """
- gt_db = []
- for index in self.image_set_index:
- gt_db.extend(self._load_coco_keypoint_annotation_kernal(index))
- return gt_db
-
- def _load_coco_keypoint_annotation_kernal(self, index):
- """
- coco ann: [u'segmentation', u'area', u'iscrowd', u'image_id', u'bbox', u'category_id', u'id']
- iscrowd:
- crowd instances are handled by marking their overlaps with all categories to -1
- and later excluded in training
- bbox:
- [x1, y1, w, h]
- :param index: coco image id
- :return: db entry
- """
- im_ann = self.coco.loadImgs(index)[0]
- width = im_ann['width']
- height = im_ann['height']
-
- annIds = self.coco.getAnnIds(imgIds=index, iscrowd=False)
- objs = self.coco.loadAnns(annIds)
-
- # sanitize bboxes
- valid_objs = []
- for obj in objs:
- x, y, w, h = obj['bbox']
- x1 = np.max((0, x))
- y1 = np.max((0, y))
- x2 = np.min((width - 1, x1 + np.max((0, w - 1))))
- y2 = np.min((height - 1, y1 + np.max((0, h - 1))))
- if 'crowdpose' in self.root:
- obj['area'] = 1
- if obj['area'] > 0 and x2 >= x1 and y2 >= y1:
- obj['clean_bbox'] = [x1, y1, x2-x1, y2-y1]
- valid_objs.append(obj)
- objs = valid_objs
-
- rec = []
- for obj in objs:
- cls = self._coco_ind_to_class_ind[obj['category_id']]
- if cls != 1:
- continue
-
- # ignore objs without keypoints annotation
- if max(obj['keypoints']) == 0:
- continue
-
- joints_3d = np.zeros((self.num_joints, 3), dtype=np.float32)
- joints_3d_vis = np.zeros((self.num_joints, 3), dtype=np.float32)
- for ipt in range(self.num_joints):
- joints_3d[ipt, 0] = obj['keypoints'][ipt * 3 + 0]
- joints_3d[ipt, 1] = obj['keypoints'][ipt * 3 + 1]
- joints_3d[ipt, 2] = 0
- t_vis = obj['keypoints'][ipt * 3 + 2]
- if t_vis > 1:
- t_vis = 1
- joints_3d_vis[ipt, 0] = t_vis
- joints_3d_vis[ipt, 1] = t_vis
- joints_3d_vis[ipt, 2] = 0
-
- center, scale = self._box2cs(obj['clean_bbox'][:4])
- rec.append({
- 'image': self.image_path_from_index(index, im_ann),
- 'center': center,
- 'scale': scale,
- 'joints_3d': joints_3d,
- 'joints_3d_vis': joints_3d_vis,
- 'filename': '',
- 'imgnum': 0,
- })
-
- return rec
-
- def _box2cs(self, box):
- x, y, w, h = box[:4]
- return self._xywh2cs(x, y, w, h)
-
- def _xywh2cs(self, x, y, w, h):
- center = np.zeros((2), dtype=np.float32)
- center[0] = x + w * 0.5
- center[1] = y + h * 0.5
-
- if w > self.aspect_ratio * h:
- h = w * 1.0 / self.aspect_ratio
- elif w < self.aspect_ratio * h:
- w = h * self.aspect_ratio
- scale = np.array(
- [w * 1.0 / self.pixel_std, h * 1.0 / self.pixel_std],
- dtype=np.float32)
- if center[0] != -1:
- scale = scale * 1.25
-
- return center, scale
-
- def image_path_from_index(self, index, im_ann):
- """ example: images / train2017 / 000000119993.jpg """
- if 'coco' in self.root:
- file_name = '%012d.jpg' % index
- if '2014' in self.image_set:
- file_name = 'COCO_%s_' % self.image_set + file_name
-
- prefix = 'test2017' if 'test' in self.image_set else self.image_set
-
- data_name = prefix
-
- image_path = os.path.join(
- self.root, 'images', data_name, file_name)
-
- return image_path
- elif 'crowdpose' in self.root:
- file_name = f'{index}.jpg'
-
- image_path = os.path.join(
- self.root, 'images', file_name)
-
- return image_path
- elif 'aic' in self.root:
- file_name = im_ann["file_name"]
-
- image_path = os.path.join(
- self.root, 'ai_challenger_keypoint_train_20170902', 'keypoint_train_images_20170902', file_name)
-
- return image_path
-
-
-def flip_back(output_flipped, matched_parts):
- '''
-    output_flipped: numpy.ndarray(batch_size, num_joints, height, width)
- '''
- assert output_flipped.ndim == 4,\
- 'output_flipped should be [batch_size, num_joints, height, width]'
-
- output_flipped = output_flipped[:, :, :, ::-1]
-
- for pair in matched_parts:
- tmp = output_flipped[:, pair[0], :, :].copy()
- output_flipped[:, pair[0], :, :] = output_flipped[:, pair[1], :, :]
- output_flipped[:, pair[1], :, :] = tmp
-
- return output_flipped
-
-
-def fliplr_joints(joints, joints_vis, width, matched_parts):
- """
- flip coords
- """
- # Flip horizontal
- joints[:, 0] = width - joints[:, 0] - 1
-
- # Change left-right parts
- for pair in matched_parts:
- joints[pair[0], :], joints[pair[1], :] = \
- joints[pair[1], :], joints[pair[0], :].copy()
- joints_vis[pair[0], :], joints_vis[pair[1], :] = \
- joints_vis[pair[1], :], joints_vis[pair[0], :].copy()
-
- return joints*joints_vis, joints_vis
-
-
-def get_affine_transform(
- center, scale, rot, output_size,
- shift=np.array([0, 0], dtype=np.float32), inv=0
-):
- if not isinstance(scale, np.ndarray) and not isinstance(scale, list):
- print(scale)
- scale = np.array([scale, scale])
-
- scale_tmp = scale * 200.0
- src_w = scale_tmp[0]
- dst_w = output_size[0]
- dst_h = output_size[1]
-
- rot_rad = np.pi * rot / 180
- src_dir = get_dir([0, src_w * -0.5], rot_rad)
- dst_dir = np.array([0, dst_w * -0.5], np.float32)
-
- src = np.zeros((3, 2), dtype=np.float32)
- dst = np.zeros((3, 2), dtype=np.float32)
- src[0, :] = center + scale_tmp * shift
- src[1, :] = center + src_dir + scale_tmp * shift
- dst[0, :] = [dst_w * 0.5, dst_h * 0.5]
- dst[1, :] = np.array([dst_w * 0.5, dst_h * 0.5]) + dst_dir
-
- src[2:, :] = get_3rd_point(src[0, :], src[1, :])
- dst[2:, :] = get_3rd_point(dst[0, :], dst[1, :])
-
- if inv:
- trans = cv2.getAffineTransform(np.float32(dst), np.float32(src))
- else:
- trans = cv2.getAffineTransform(np.float32(src), np.float32(dst))
-
- return trans
-
-
-def affine_transform(pt, t):
- new_pt = np.array([pt[0], pt[1], 1.]).T
- new_pt = np.dot(t, new_pt)
- return new_pt[:2]
-
-
-def get_3rd_point(a, b):
- direct = a - b
- return b + np.array([-direct[1], direct[0]], dtype=np.float32)
-
-
-def get_dir(src_point, rot_rad):
- sn, cs = np.sin(rot_rad), np.cos(rot_rad)
-
- src_result = [0, 0]
- src_result[0] = src_point[0] * cs - src_point[1] * sn
- src_result[1] = src_point[0] * sn + src_point[1] * cs
-
- return src_result
-
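# A minimal usage sketch of the crop transform built by get_affine_transform()
# above. The box centre, scale, rotation and output size are illustrative;
# cv2 and numpy (np) are assumed to be imported elsewhere in this file. Note
# that scale is expressed in units of 200 px, matching scale_tmp = scale * 200.0.
example_center = np.array([320., 240.], dtype=np.float32)
example_scale = np.array([1.0, 1.25], dtype=np.float32)   # roughly a 200 x 250 px region
trans = get_affine_transform(example_center, example_scale, rot=0, output_size=[192, 256])
# patch = cv2.warpAffine(img, trans, (192, 256), flags=cv2.INTER_LINEAR)
# `patch` (img is hypothetical here) would be the 192x256 crop fed to the pose model.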
-
-class CrowdPoseDataset(COCODataset):
- def __init__(self, root, image_set, is_train, max_prompt_num=5, min_prompt_num=1,
- radius=10, size=256, transparency=0.0, sample_weight=1.0, transform=None):
-
- super().__init__(root, image_set, is_train, max_prompt_num, min_prompt_num,
- radius, size, transparency, sample_weight, transform)
-
- self.keypoints_type = {
- 0: 'left_shoulder',
- 1: 'right_shoulder',
- 2: 'left_elbow',
- 3: 'right_elbow',
- 4: 'left_wrist',
- 5: 'right_wrist',
- 6: 'left_hip',
- 7: 'right_hip',
- 8: 'left_knee',
- 9: 'right_knee',
- 10: 'left_ankle',
- 11: 'right_ankle',
- 12: 'top_head',
- 13: 'neck'
- }
-
- self.num_joints = 14
- self.prob_half_body = -1
- self.flip_pairs = [[0, 1], [2, 3], [4, 5], [6, 7], [8, 9], [10, 11]]
- self.parent_ids = None
- self.upper_body_ids = (0, 1, 2, 3, 4, 5, 12, 13)
- self.lower_body_ids = (6, 7, 8, 9, 10, 11)
-
- self.db = self._get_db()
-
- logger.info('=> load {} samples'.format(len(self.db)))
-
-
-class AICDataset(COCODataset):
- def __init__(self, root, image_set, is_train, max_prompt_num=5, min_prompt_num=1,
- radius=10, size=256, transparency=0.0, sample_weight=1.0, transform=None):
- super().__init__(root, image_set, is_train, max_prompt_num, min_prompt_num,
- radius, size, transparency, sample_weight, transform)
-
- self.keypoints_type = {
- 0: "right_shoulder",
- 1: "right_elbow",
- 2: "right_wrist",
- 3: "left_shoulder",
- 4: "left_elbow",
- 5: "left_wrist",
- 6: "right_hip",
- 7: "right_knee",
- 8: "right_ankle",
- 9: "left_hip",
- 10: "left_knee",
- 11: "left_ankle",
- 12: "head_top",
- 13: "neck"
- }
-
- self.num_joints = 14
- self.prob_half_body = -1
- self.flip_pairs = [[0, 3], [1, 4], [2, 5], [6, 9], [7, 10], [8, 11]]
- self.parent_ids = None
- self.upper_body_ids = (0, 1, 2, 3, 4, 5, 12, 13)
- self.lower_body_ids = (6, 7, 8, 9, 10, 11)
-
- self.db = self._get_db()
-
- logger.info('=> load {} samples'.format(len(self.db)))
-
-
-class MPIIDataset(PoseDataset):
- def __init__(self, root, image_set, is_train, max_prompt_num=5, min_prompt_num=1,
- radius=10, size=256, transparency=0.0, sample_weight=1.0, transform=None):
- super().__init__(root, image_set, is_train, max_prompt_num, min_prompt_num,
- radius, size, transparency, sample_weight, transform)
-
- self.keypoints_type = {
- 0: 'right_ankle',
- 1: 'right_knee',
- 2: 'right_hip',
- 3: 'left_hip',
- 4: 'left_knee',
- 5: 'left_ankle',
- 6: 'pelvis',
- 7: 'thorax',
- 8: 'upper_neck',
- 9: 'head_top',
- 10: 'right_wrist',
- 11: 'right_elbow',
- 12: 'right_shoulder',
- 13: 'left_shoulder',
- 14: 'left_elbow',
- 15: 'left_wrist'
- }
-
- self.data_format = 'jpg'
- self.num_joints = 16
- self.prob_half_body = -1
- self.flip_pairs = [[0, 5], [1, 4], [2, 3], [10, 15], [11, 14], [12, 13]]
- self.parent_ids = None
- self.upper_body_ids = (7, 8, 9, 10, 11, 12, 13, 14, 15)
- self.lower_body_ids = (0, 1, 2, 3, 4, 5, 6)
-
- self.db = self._get_db()
-
- logger.info('=> load {} samples'.format(len(self.db)))
-
- def _get_db(self):
- # create train/val split
- file_name = os.path.join(
- self.root, 'annot', self.image_set+'.json'
- )
- with open(file_name) as anno_file:
- anno = json.load(anno_file)
-
- gt_db = []
- for a in anno:
- image_name = a['image']
-
- c = np.array(a['center'], dtype=np.float32)
- s = np.array([a['scale'], a['scale']], dtype=np.float32)
-
- # Adjust center/scale slightly to avoid cropping limbs
- if c[0] != -1:
- c[1] = c[1] + 15 * s[1]
- s = s * 1.25
-
-            # MPII uses MATLAB format with 1-based indices,
-            # so convert to 0-based indices first
- c = c - 1
-
- joints_3d = np.zeros((self.num_joints, 3), dtype=np.float32)
- joints_3d_vis = np.zeros((self.num_joints, 3), dtype=np.float32)
- if self.image_set != 'test':
- joints = np.array(a['joints'])
- joints[:, 0:2] = joints[:, 0:2] - 1
- joints_vis = np.array(a['joints_vis'])
- assert len(joints) == self.num_joints, \
- 'joint num diff: {} vs {}'.format(len(joints),
- self.num_joints)
-
- joints_3d[:, 0:2] = joints[:, 0:2]
- joints_3d_vis[:, 0] = joints_vis[:]
- joints_3d_vis[:, 1] = joints_vis[:]
-
- image_dir = 'images.zip@' if self.data_format == 'zip' else 'images'
- gt_db.append(
- {
- 'image': os.path.join(self.root, image_dir, image_name),
- 'center': c,
- 'scale': s,
- 'joints_3d': joints_3d,
- 'joints_3d_vis': joints_3d_vis,
- 'filename': '',
- 'imgnum': 0,
- }
- )
-
- return gt_db
diff --git a/spaces/Kevin676/AutoGPT/autogpt/agent/__init__.py b/spaces/Kevin676/AutoGPT/autogpt/agent/__init__.py
deleted file mode 100644
index e928af2205b1c52d19dc89ec4246e8c1d2c20e3f..0000000000000000000000000000000000000000
--- a/spaces/Kevin676/AutoGPT/autogpt/agent/__init__.py
+++ /dev/null
@@ -1,4 +0,0 @@
-from autogpt.agent.agent import Agent
-from autogpt.agent.agent_manager import AgentManager
-
-__all__ = ["Agent", "AgentManager"]
diff --git a/spaces/Kevin676/AutoGPT/autogpt/memory/weaviate.py b/spaces/Kevin676/AutoGPT/autogpt/memory/weaviate.py
deleted file mode 100644
index 5408e9a97aa3594ad443448cfc31f2546a01eb09..0000000000000000000000000000000000000000
--- a/spaces/Kevin676/AutoGPT/autogpt/memory/weaviate.py
+++ /dev/null
@@ -1,127 +0,0 @@
-import uuid
-
-import weaviate
-from weaviate import Client
-from weaviate.embedded import EmbeddedOptions
-from weaviate.util import generate_uuid5
-
-from autogpt.config import Config
-from autogpt.memory.base import MemoryProviderSingleton, get_ada_embedding
-
-
-def default_schema(weaviate_index):
- return {
- "class": weaviate_index,
- "properties": [
- {
- "name": "raw_text",
- "dataType": ["text"],
- "description": "original text for the embedding",
- }
- ],
- }
-
-
-class WeaviateMemory(MemoryProviderSingleton):
- def __init__(self, cfg):
- auth_credentials = self._build_auth_credentials(cfg)
-
- url = f"{cfg.weaviate_protocol}://{cfg.weaviate_host}:{cfg.weaviate_port}"
-
- if cfg.use_weaviate_embedded:
- self.client = Client(
- embedded_options=EmbeddedOptions(
- hostname=cfg.weaviate_host,
- port=int(cfg.weaviate_port),
- persistence_data_path=cfg.weaviate_embedded_path,
- )
- )
-
- print(
- f"Weaviate Embedded running on: {url} with persistence path: {cfg.weaviate_embedded_path}"
- )
- else:
- self.client = Client(url, auth_client_secret=auth_credentials)
-
- self.index = WeaviateMemory.format_classname(cfg.memory_index)
- self._create_schema()
-
- @staticmethod
- def format_classname(index):
- # weaviate uses capitalised index names
- # The python client uses the following code to format
- # index names before the corresponding class is created
- if len(index) == 1:
- return index.capitalize()
- return index[0].capitalize() + index[1:]
-
- def _create_schema(self):
- schema = default_schema(self.index)
- if not self.client.schema.contains(schema):
- self.client.schema.create_class(schema)
-
- def _build_auth_credentials(self, cfg):
- if cfg.weaviate_username and cfg.weaviate_password:
- return weaviate.AuthClientPassword(
- cfg.weaviate_username, cfg.weaviate_password
- )
- if cfg.weaviate_api_key:
- return weaviate.AuthApiKey(api_key=cfg.weaviate_api_key)
- else:
- return None
-
- def add(self, data):
- vector = get_ada_embedding(data)
-
- doc_uuid = generate_uuid5(data, self.index)
- data_object = {"raw_text": data}
-
- with self.client.batch as batch:
- batch.add_data_object(
- uuid=doc_uuid,
- data_object=data_object,
- class_name=self.index,
- vector=vector,
- )
-
- return f"Inserting data into memory at uuid: {doc_uuid}:\n data: {data}"
-
- def get(self, data):
- return self.get_relevant(data, 1)
-
- def clear(self):
- self.client.schema.delete_all()
-
- # weaviate does not yet have a neat way to just remove the items in an index
- # without removing the entire schema, therefore we need to re-create it
- # after a call to delete_all
- self._create_schema()
-
- return "Obliterated"
-
- def get_relevant(self, data, num_relevant=5):
- query_embedding = get_ada_embedding(data)
- try:
- results = (
- self.client.query.get(self.index, ["raw_text"])
- .with_near_vector({"vector": query_embedding, "certainty": 0.7})
- .with_limit(num_relevant)
- .do()
- )
-
- if len(results["data"]["Get"][self.index]) > 0:
- return [
- str(item["raw_text"]) for item in results["data"]["Get"][self.index]
- ]
- else:
- return []
-
- except Exception as err:
- print(f"Unexpected error {err=}, {type(err)=}")
- return []
-
- def get_stats(self):
- result = self.client.query.aggregate(self.index).with_meta_count().do()
- class_data = result["data"]["Aggregate"][self.index]
-
- return class_data[0]["meta"] if class_data else {}
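# A small illustrative check of the index-name formatting used above. It assumes
# WeaviateMemory and default_schema from this file are importable; the index
# name "auto-gpt" is made up.
assert WeaviateMemory.format_classname("auto-gpt") == "Auto-gpt"
schema = default_schema(WeaviateMemory.format_classname("auto-gpt"))
# schema["class"] == "Auto-gpt", with a single "raw_text" text property.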
diff --git a/spaces/Kevin676/ChatGPT-with-Voice-Cloning-in-Chinese/ppg_extractor/encoder/subsampling.py b/spaces/Kevin676/ChatGPT-with-Voice-Cloning-in-Chinese/ppg_extractor/encoder/subsampling.py
deleted file mode 100644
index e754126b2ec1f2d914206ec35ec026c7b6add17f..0000000000000000000000000000000000000000
--- a/spaces/Kevin676/ChatGPT-with-Voice-Cloning-in-Chinese/ppg_extractor/encoder/subsampling.py
+++ /dev/null
@@ -1,218 +0,0 @@
-#!/usr/bin/env python3
-# -*- coding: utf-8 -*-
-
-# Copyright 2019 Shigeki Karita
-# Apache 2.0 (http://www.apache.org/licenses/LICENSE-2.0)
-
-"""Subsampling layer definition."""
-import logging
-import torch
-
-from espnet.nets.pytorch_backend.transformer.embedding import PositionalEncoding
-
-
-class Conv2dSubsampling(torch.nn.Module):
- """Convolutional 2D subsampling (to 1/4 length or 1/2 length).
-
- :param int idim: input dim
- :param int odim: output dim
-    :param float dropout_rate: dropout rate
- :param torch.nn.Module pos_enc: custom position encoding layer
-
- """
-
- def __init__(self, idim, odim, dropout_rate, pos_enc=None,
- subsample_by_2=False,
- ):
-        """Construct a Conv2dSubsampling object."""
- super(Conv2dSubsampling, self).__init__()
- self.subsample_by_2 = subsample_by_2
- if subsample_by_2:
- self.conv = torch.nn.Sequential(
- torch.nn.Conv2d(1, odim, kernel_size=5, stride=1, padding=2),
- torch.nn.ReLU(),
- torch.nn.Conv2d(odim, odim, kernel_size=4, stride=2, padding=1),
- torch.nn.ReLU(),
- )
- self.out = torch.nn.Sequential(
- torch.nn.Linear(odim * (idim // 2), odim),
- pos_enc if pos_enc is not None else PositionalEncoding(odim, dropout_rate),
- )
- else:
- self.conv = torch.nn.Sequential(
- torch.nn.Conv2d(1, odim, kernel_size=4, stride=2, padding=1),
- torch.nn.ReLU(),
- torch.nn.Conv2d(odim, odim, kernel_size=4, stride=2, padding=1),
- torch.nn.ReLU(),
- )
- self.out = torch.nn.Sequential(
- torch.nn.Linear(odim * (idim // 4), odim),
- pos_enc if pos_enc is not None else PositionalEncoding(odim, dropout_rate),
- )
-
- def forward(self, x, x_mask):
- """Subsample x.
-
- :param torch.Tensor x: input tensor
- :param torch.Tensor x_mask: input mask
- :return: subsampled x and mask
- :rtype Tuple[torch.Tensor, torch.Tensor]
-
- """
- x = x.unsqueeze(1) # (b, c, t, f)
- x = self.conv(x)
- b, c, t, f = x.size()
- x = self.out(x.transpose(1, 2).contiguous().view(b, t, c * f))
- if x_mask is None:
- return x, None
- if self.subsample_by_2:
- return x, x_mask[:, :, ::2]
- else:
- return x, x_mask[:, :, ::2][:, :, ::2]
-
- def __getitem__(self, key):
-        """Get the item at the given key.
-
-        When reset_parameters() is called, if use_scaled_pos_enc is used,
-        return the positional encoding.
-
- """
- if key != -1:
- raise NotImplementedError("Support only `-1` (for `reset_parameters`).")
- return self.out[key]
-
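# A minimal, self-contained sketch of the shape arithmetic behind the
# Conv2dSubsampling module above (the Linear projection and positional encoding
# are omitted; the batch size, frame count, idim and odim values are illustrative).
import torch

demo_conv = torch.nn.Sequential(
    torch.nn.Conv2d(1, 32, kernel_size=4, stride=2, padding=1),
    torch.nn.ReLU(),
    torch.nn.Conv2d(32, 32, kernel_size=4, stride=2, padding=1),
    torch.nn.ReLU(),
)
feats = torch.randn(2, 100, 80)          # (batch, frames, idim)
out = demo_conv(feats.unsqueeze(1))      # (batch, odim, frames // 4, idim // 4)
assert out.shape == (2, 32, 25, 20)
# The time mask is thinned to match: x_mask[:, :, ::2][:, :, ::2]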
-
-class Conv2dNoSubsampling(torch.nn.Module):
- """Convolutional 2D without subsampling.
-
- :param int idim: input dim
- :param int odim: output dim
-    :param float dropout_rate: dropout rate
- :param torch.nn.Module pos_enc: custom position encoding layer
-
- """
-
- def __init__(self, idim, odim, dropout_rate, pos_enc=None):
-        """Construct a Conv2dNoSubsampling object."""
- super().__init__()
- logging.info("Encoder does not do down-sample on mel-spectrogram.")
- self.conv = torch.nn.Sequential(
- torch.nn.Conv2d(1, odim, kernel_size=5, stride=1, padding=2),
- torch.nn.ReLU(),
- torch.nn.Conv2d(odim, odim, kernel_size=5, stride=1, padding=2),
- torch.nn.ReLU(),
- )
- self.out = torch.nn.Sequential(
- torch.nn.Linear(odim * idim, odim),
- pos_enc if pos_enc is not None else PositionalEncoding(odim, dropout_rate),
- )
-
- def forward(self, x, x_mask):
- """Subsample x.
-
- :param torch.Tensor x: input tensor
- :param torch.Tensor x_mask: input mask
- :return: subsampled x and mask
- :rtype Tuple[torch.Tensor, torch.Tensor]
-
- """
- x = x.unsqueeze(1) # (b, c, t, f)
- x = self.conv(x)
- b, c, t, f = x.size()
- x = self.out(x.transpose(1, 2).contiguous().view(b, t, c * f))
- if x_mask is None:
- return x, None
- return x, x_mask
-
- def __getitem__(self, key):
-        """Get the item at the given key.
-
-        When reset_parameters() is called, if use_scaled_pos_enc is used,
-        return the positional encoding.
-
- """
- if key != -1:
- raise NotImplementedError("Support only `-1` (for `reset_parameters`).")
- return self.out[key]
-
-
-class Conv2dSubsampling6(torch.nn.Module):
- """Convolutional 2D subsampling (to 1/6 length).
-
- :param int idim: input dim
- :param int odim: output dim
-    :param float dropout_rate: dropout rate
-
- """
-
- def __init__(self, idim, odim, dropout_rate):
-        """Construct a Conv2dSubsampling6 object."""
- super(Conv2dSubsampling6, self).__init__()
- self.conv = torch.nn.Sequential(
- torch.nn.Conv2d(1, odim, 3, 2),
- torch.nn.ReLU(),
- torch.nn.Conv2d(odim, odim, 5, 3),
- torch.nn.ReLU(),
- )
- self.out = torch.nn.Sequential(
- torch.nn.Linear(odim * (((idim - 1) // 2 - 2) // 3), odim),
- PositionalEncoding(odim, dropout_rate),
- )
-
- def forward(self, x, x_mask):
- """Subsample x.
-
- :param torch.Tensor x: input tensor
- :param torch.Tensor x_mask: input mask
- :return: subsampled x and mask
- :rtype Tuple[torch.Tensor, torch.Tensor]
- """
- x = x.unsqueeze(1) # (b, c, t, f)
- x = self.conv(x)
- b, c, t, f = x.size()
- x = self.out(x.transpose(1, 2).contiguous().view(b, t, c * f))
- if x_mask is None:
- return x, None
- return x, x_mask[:, :, :-2:2][:, :, :-4:3]
-
-
-class Conv2dSubsampling8(torch.nn.Module):
- """Convolutional 2D subsampling (to 1/8 length).
-
- :param int idim: input dim
- :param int odim: output dim
-    :param float dropout_rate: dropout rate
-
- """
-
- def __init__(self, idim, odim, dropout_rate):
-        """Construct a Conv2dSubsampling8 object."""
- super(Conv2dSubsampling8, self).__init__()
- self.conv = torch.nn.Sequential(
- torch.nn.Conv2d(1, odim, 3, 2),
- torch.nn.ReLU(),
- torch.nn.Conv2d(odim, odim, 3, 2),
- torch.nn.ReLU(),
- torch.nn.Conv2d(odim, odim, 3, 2),
- torch.nn.ReLU(),
- )
- self.out = torch.nn.Sequential(
- torch.nn.Linear(odim * ((((idim - 1) // 2 - 1) // 2 - 1) // 2), odim),
- PositionalEncoding(odim, dropout_rate),
- )
-
- def forward(self, x, x_mask):
- """Subsample x.
-
- :param torch.Tensor x: input tensor
- :param torch.Tensor x_mask: input mask
- :return: subsampled x and mask
- :rtype Tuple[torch.Tensor, torch.Tensor]
- """
- x = x.unsqueeze(1) # (b, c, t, f)
- x = self.conv(x)
- b, c, t, f = x.size()
- x = self.out(x.transpose(1, 2).contiguous().view(b, t, c * f))
- if x_mask is None:
- return x, None
- return x, x_mask[:, :, :-2:2][:, :, :-2:2][:, :, :-2:2]
diff --git a/spaces/KingBlaze1227/PC-PICKERS/README.md b/spaces/KingBlaze1227/PC-PICKERS/README.md
deleted file mode 100644
index 1453b7a93e22cf243f0d811f768c39c53d44ff6e..0000000000000000000000000000000000000000
--- a/spaces/KingBlaze1227/PC-PICKERS/README.md
+++ /dev/null
@@ -1,11 +0,0 @@
----
-title: PC PICKERS
-emoji: 🐠
-colorFrom: pink
-colorTo: yellow
-sdk: static
-pinned: false
-license: cc
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/Kreaols/ChuanhuChatGPT/assets/custom.css b/spaces/Kreaols/ChuanhuChatGPT/assets/custom.css
deleted file mode 100644
index 22108488886cfc8d7772214dd9b83727b3fca6a3..0000000000000000000000000000000000000000
--- a/spaces/Kreaols/ChuanhuChatGPT/assets/custom.css
+++ /dev/null
@@ -1,468 +0,0 @@
-:root {
- --chatbot-color-light: #000000;
- --chatbot-color-dark: #FFFFFF;
- --chatbot-background-color-light: #F3F3F3;
- --chatbot-background-color-dark: #121111;
- --message-user-background-color-light: #95EC69;
- --message-user-background-color-dark: #26B561;
- --message-bot-background-color-light: #FFFFFF;
- --message-bot-background-color-dark: #2C2C2C;
-}
-
-#app_title {
- font-weight: var(--prose-header-text-weight);
- font-size: var(--text-xxl);
- line-height: 1.3;
- text-align: left;
- margin-top: 6px;
- white-space: nowrap;
-}
-#description {
- text-align: center;
- margin: 32px 0 4px 0;
-}
-
-/* gradio footer info */
-footer {
- /* display: none !important; */
- margin-top: .2em !important;
- font-size: 85%;
-}
-#footer {
- text-align: center;
-}
-#footer div {
- display: inline-block;
-}
-#footer .versions{
- font-size: 85%;
- opacity: 0.60;
-}
-
-#float_display {
- position: absolute;
- max-height: 30px;
-}
-/* user_info */
-#user_info {
- white-space: nowrap;
- position: absolute; left: 8em; top: .2em;
- z-index: var(--layer-2);
- box-shadow: var(--block-shadow);
- border: none; border-radius: var(--block-label-radius);
- background: var(--color-accent);
- padding: var(--block-label-padding);
- font-size: var(--block-label-text-size); line-height: var(--line-sm);
- width: auto; min-height: 30px!important;
- opacity: 1;
- transition: opacity 0.3s ease-in-out;
-}
-#user_info .wrap {
- opacity: 0;
-}
-#user_info p {
- color: white;
- font-weight: var(--block-label-text-weight);
-}
-#user_info.hideK {
- opacity: 0;
- transition: opacity 1s ease-in-out;
-}
-
-/* status_display */
-#status_display {
- display: flex;
- min-height: 2em;
- align-items: flex-end;
- justify-content: flex-end;
-}
-#status_display p {
- font-size: .85em;
- font-family: ui-monospace, "SF Mono", "SFMono-Regular", "Menlo", "Consolas", "Liberation Mono", "Microsoft Yahei UI", "Microsoft Yahei", monospace;
-    /* On Windows, the Chinese monospace font falls back to NSimSun, which looks terrible, so Microsoft Yahei is used as a compromise */
- color: var(--body-text-color-subdued);
-}
-
-#status_display {
- transition: all 0.6s;
-}
-#chuanhu_chatbot {
- transition: height 0.3s ease;
-}
-
-/* usage_display */
-.insert_block {
- position: relative;
- margin: 0;
- padding: .5em 1em;
- box-shadow: var(--block-shadow);
- border-width: var(--block-border-width);
- border-color: var(--block-border-color);
- border-radius: var(--block-radius);
- background: var(--block-background-fill);
- width: 100%;
- line-height: var(--line-sm);
- min-height: 2em;
-}
-#usage_display p, #usage_display span {
- margin: 0;
- font-size: .85em;
- color: var(--body-text-color-subdued);
-}
-.progress-bar {
-    background-color: var(--input-background-fill);
- margin: .5em 0 !important;
- height: 20px;
- border-radius: 10px;
- overflow: hidden;
-}
-.progress {
- background-color: var(--block-title-background-fill);
- height: 100%;
- border-radius: 10px;
- text-align: right;
- transition: width 0.5s ease-in-out;
-}
-.progress-text {
- /* color: white; */
- color: var(--color-accent) !important;
- font-size: 1em !important;
- font-weight: bold;
- padding-right: 10px;
- line-height: 20px;
-}
-
-.apSwitch {
- top: 2px;
- display: inline-block;
- height: 24px;
- position: relative;
- width: 48px;
- border-radius: 12px;
-}
-.apSwitch input {
- display: none !important;
-}
-.apSlider {
- background-color: var(--neutral-200);
- bottom: 0;
- cursor: pointer;
- left: 0;
- position: absolute;
- right: 0;
- top: 0;
- transition: .4s;
- font-size: 18px;
- border-radius: 12px;
-}
-.apSlider::before {
- bottom: -1.5px;
- left: 1px;
- position: absolute;
- transition: .4s;
- content: "🌞";
-}
-input:checked + .apSlider {
- background-color: var(--primary-600);
-}
-input:checked + .apSlider::before {
- transform: translateX(23px);
- content:"🌚";
-}
-
-/* Override Slider Styles (for webkit browsers like Safari and Chrome)
- * Hopefully this proposal lands soon: https://github.com/w3c/csswg-drafts/issues/4410
- * Range sliders are still far too inconsistent across platforms
- */
-input[type="range"] {
- -webkit-appearance: none;
- height: 4px;
- background: var(--input-background-fill);
- border-radius: 5px;
- background-image: linear-gradient(var(--primary-500),var(--primary-500));
- background-size: 0% 100%;
- background-repeat: no-repeat;
-}
-input[type="range"]::-webkit-slider-thumb {
- -webkit-appearance: none;
- height: 20px;
- width: 20px;
- border-radius: 50%;
- border: solid 0.5px #ddd;
- background-color: white;
- cursor: ew-resize;
- box-shadow: var(--input-shadow);
- transition: background-color .1s ease;
-}
-input[type="range"]::-webkit-slider-thumb:hover {
- background: var(--neutral-50);
-}
-input[type=range]::-webkit-slider-runnable-track {
- -webkit-appearance: none;
- box-shadow: none;
- border: none;
- background: transparent;
-}
-
-#submit_btn, #cancel_btn {
- height: 42px !important;
-}
-#submit_btn::before {
- content: url("data:image/svg+xml, %3Csvg width='21px' height='20px' viewBox='0 0 21 20' version='1.1' xmlns='http://www.w3.org/2000/svg' xmlns:xlink='http://www.w3.org/1999/xlink'%3E %3Cg id='page' stroke='none' stroke-width='1' fill='none' fill-rule='evenodd'%3E %3Cg id='send' transform='translate(0.435849, 0.088463)' fill='%23FFFFFF' fill-rule='nonzero'%3E %3Cpath d='M0.579148261,0.0428666046 C0.301105539,-0.0961547561 -0.036517765,0.122307382 0.0032026237,0.420210298 L1.4927172,18.1553639 C1.5125774,18.4334066 1.79062012,18.5922882 2.04880264,18.4929872 L8.24518329,15.8913017 L11.6412765,19.7441794 C11.8597387,19.9825018 12.2370824,19.8832008 12.3165231,19.5852979 L13.9450591,13.4882182 L19.7839562,11.0255541 C20.0619989,10.8865327 20.0818591,10.4694687 19.7839562,10.3105871 L0.579148261,0.0428666046 Z M11.6138902,17.0883151 L9.85385903,14.7195502 L0.718169621,0.618812241 L12.69945,12.9346347 L11.6138902,17.0883151 Z' id='shape'%3E%3C/path%3E %3C/g%3E %3C/g%3E %3C/svg%3E");
- height: 21px;
-}
-#cancel_btn::before {
- content: url("data:image/svg+xml,%3Csvg width='21px' height='21px' viewBox='0 0 21 21' version='1.1' xmlns='http://www.w3.org/2000/svg' xmlns:xlink='http://www.w3.org/1999/xlink'%3E %3Cg id='pg' stroke='none' stroke-width='1' fill='none' fill-rule='evenodd'%3E %3Cpath d='M10.2072007,20.088463 C11.5727865,20.088463 12.8594566,19.8259823 14.067211,19.3010209 C15.2749653,18.7760595 16.3386126,18.0538087 17.2581528,17.1342685 C18.177693,16.2147282 18.8982283,15.1527965 19.4197586,13.9484733 C19.9412889,12.7441501 20.202054,11.4557644 20.202054,10.0833163 C20.202054,8.71773046 19.9395733,7.43106036 19.4146119,6.22330603 C18.8896505,5.01555169 18.1673997,3.95018885 17.2478595,3.0272175 C16.3283192,2.10424615 15.2646719,1.3837109 14.0569176,0.865611739 C12.8491633,0.34751258 11.5624932,0.088463 10.1969073,0.088463 C8.83132146,0.088463 7.54636692,0.34751258 6.34204371,0.865611739 C5.1377205,1.3837109 4.07407321,2.10424615 3.15110186,3.0272175 C2.22813051,3.95018885 1.5058797,5.01555169 0.984349419,6.22330603 C0.46281914,7.43106036 0.202054,8.71773046 0.202054,10.0833163 C0.202054,11.4557644 0.4645347,12.7441501 0.9894961,13.9484733 C1.5144575,15.1527965 2.23670831,16.2147282 3.15624854,17.1342685 C4.07578877,18.0538087 5.1377205,18.7760595 6.34204371,19.3010209 C7.54636692,19.8259823 8.83475258,20.088463 10.2072007,20.088463 Z M10.2072007,18.2562448 C9.07493099,18.2562448 8.01471483,18.0452309 7.0265522,17.6232031 C6.03838956,17.2011753 5.17031614,16.6161693 4.42233192,15.8681851 C3.6743477,15.1202009 3.09105726,14.2521274 2.67246059,13.2639648 C2.25386392,12.2758022 2.04456558,11.215586 2.04456558,10.0833163 C2.04456558,8.95104663 2.25386392,7.89083047 2.67246059,6.90266784 C3.09105726,5.9145052 3.6743477,5.04643178 4.42233192,4.29844756 C5.17031614,3.55046334 6.036674,2.9671729 7.02140552,2.54857623 C8.00613703,2.12997956 9.06463763,1.92068122 10.1969073,1.92068122 C11.329177,1.92068122 12.3911087,2.12997956 13.3827025,2.54857623 C14.3742962,2.9671729 15.2440852,3.55046334 15.9920694,4.29844756 C16.7400537,5.04643178 17.3233441,5.9145052 17.7419408,6.90266784 C18.1605374,7.89083047 18.3698358,8.95104663 18.3698358,10.0833163 C18.3698358,11.215586 18.1605374,12.2758022 17.7419408,13.2639648 C17.3233441,14.2521274 16.7400537,15.1202009 15.9920694,15.8681851 C15.2440852,16.6161693 14.3760118,17.2011753 13.3878492,17.6232031 C12.3996865,18.0452309 11.3394704,18.2562448 10.2072007,18.2562448 Z M7.65444721,13.6242324 L12.7496608,13.6242324 C13.0584616,13.6242324 13.3003556,13.5384544 13.4753427,13.3668984 C13.6503299,13.1953424 13.7378234,12.9585951 13.7378234,12.6566565 L13.7378234,7.49968276 C13.7378234,7.19774418 13.6503299,6.96099688 13.4753427,6.78944087 C13.3003556,6.61788486 13.0584616,6.53210685 12.7496608,6.53210685 L7.65444721,6.53210685 C7.33878414,6.53210685 7.09345904,6.61788486 6.91847191,6.78944087 C6.74348478,6.96099688 6.65599121,7.19774418 6.65599121,7.49968276 L6.65599121,12.6566565 C6.65599121,12.9585951 6.74348478,13.1953424 6.91847191,13.3668984 C7.09345904,13.5384544 7.33878414,13.6242324 7.65444721,13.6242324 Z' id='shape' fill='%23FF3B30' fill-rule='nonzero'%3E%3C/path%3E %3C/g%3E %3C/svg%3E");
- height: 21px;
-}
-/* list */
-ol:not(.options), ul:not(.options) {
- padding-inline-start: 2em !important;
-}
-
-/* Light theme (default) */
-#chuanhu_chatbot {
- background-color: var(--chatbot-background-color-light) !important;
- color: var(--chatbot-color-light) !important;
-}
-[data-testid = "bot"] {
- background-color: var(--message-bot-background-color-light) !important;
-}
-[data-testid = "user"] {
- background-color: var(--message-user-background-color-light) !important;
-}
-/* Dark theme */
-.dark #chuanhu_chatbot {
- background-color: var(--chatbot-background-color-dark) !important;
- color: var(--chatbot-color-dark) !important;
-}
-.dark [data-testid = "bot"] {
- background-color: var(--message-bot-background-color-dark) !important;
-}
-.dark [data-testid = "user"] {
- background-color: var(--message-user-background-color-dark) !important;
-}
-
-/* Devices with screen width >= 500px */
-/* update on 2023.4.8: fine-grained height adjustments have been moved into JavaScript */
-@media screen and (min-width: 500px) {
- #chuanhu_chatbot {
- height: calc(100vh - 200px);
- }
- #chuanhu_chatbot .wrap {
- max-height: calc(100vh - 200px - var(--line-sm)*1rem - 2*var(--block-label-margin) );
- }
-}
-/* Devices with screen width < 500px */
-@media screen and (max-width: 499px) {
- #chuanhu_chatbot {
- height: calc(100vh - 140px);
- }
- #chuanhu_chatbot .wrap {
- max-height: calc(100vh - 140px - var(--line-sm)*1rem - 2*var(--block-label-margin) );
- }
- [data-testid = "bot"] {
- max-width: 95% !important;
- }
- #app_title h1{
- letter-spacing: -1px; font-size: 22px;
- }
-}
-#chuanhu_chatbot .wrap {
- overflow-x: hidden;
-}
-/* Chat bubbles */
-.message {
- border-radius: var(--radius-xl) !important;
- border: none;
- padding: var(--spacing-xl) !important;
- font-size: var(--text-md) !important;
- line-height: var(--line-md) !important;
- min-height: calc(var(--text-md)*var(--line-md) + 2*var(--spacing-xl));
- min-width: calc(var(--text-md)*var(--line-md) + 2*var(--spacing-xl));
-}
-[data-testid = "bot"] {
- max-width: 85%;
- border-bottom-left-radius: 0 !important;
-}
-[data-testid = "user"] {
- max-width: 85%;
- width: auto !important;
- border-bottom-right-radius: 0 !important;
-}
-
-.message.user p {
- white-space: pre-wrap;
-}
-.message .user-message {
- display: block;
- padding: 0 !important;
- white-space: pre-wrap;
-}
-
-.message .md-message p {
- margin-top: 0.6em !important;
- margin-bottom: 0.6em !important;
-}
-.message .md-message p:first-child { margin-top: 0 !important; }
-.message .md-message p:last-of-type { margin-bottom: 0 !important; }
-
-.message .md-message {
- display: block;
- padding: 0 !important;
-}
-.message .raw-message p {
- margin:0 !important;
-}
-.message .raw-message {
- display: block;
- padding: 0 !important;
- white-space: pre-wrap;
-}
-.raw-message.hideM, .md-message.hideM {
- display: none;
-}
-
-/* custom buttons */
-.chuanhu-btn {
- border-radius: 5px;
- /* background-color: #E6E6E6 !important; */
- color: rgba(120, 120, 120, 0.64) !important;
- padding: 4px !important;
- position: absolute;
- right: -22px;
- cursor: pointer !important;
- transition: color .2s ease, background-color .2s ease;
-}
-.chuanhu-btn:hover {
- background-color: rgba(167, 167, 167, 0.25) !important;
- color: unset !important;
-}
-.chuanhu-btn:active {
- background-color: rgba(167, 167, 167, 0.5) !important;
-}
-.chuanhu-btn:focus {
- outline: none;
-}
-.copy-bot-btn {
- /* top: 18px; */
- bottom: 0;
-}
-.toggle-md-btn {
- /* top: 0; */
- bottom: 20px;
-}
-.copy-code-btn {
- position: relative;
- float: right;
- font-size: 1em;
- cursor: pointer;
-}
-
-.message-wrap>div img{
- border-radius: 10px !important;
-}
-
-/* history message */
-.wrap>.history-message {
- padding: 10px !important;
-}
-.history-message {
- /* padding: 0 !important; */
- opacity: 80%;
- display: flex;
- flex-direction: column;
-}
-.history-message>.history-message {
- padding: 0 !important;
-}
-.history-message>.message-wrap {
- padding: 0 !important;
- margin-bottom: 16px;
-}
-.history-message>.message {
- margin-bottom: 16px;
-}
-.wrap>.history-message::after {
- content: "";
- display: block;
- height: 2px;
- background-color: var(--body-text-color-subdued);
- margin-bottom: 10px;
- margin-top: -10px;
- clear: both;
-}
-.wrap>.history-message>:last-child::after {
- content: "仅供查看";
- display: block;
- text-align: center;
- color: var(--body-text-color-subdued);
- font-size: 0.8em;
-}
-
-/* Tables */
-table {
- margin: 1em 0;
- border-collapse: collapse;
- empty-cells: show;
-}
-td,th {
- border: 1.2px solid var(--border-color-primary) !important;
- padding: 0.2em;
-}
-thead {
- background-color: rgba(175,184,193,0.2);
-}
-thead th {
- padding: .5em .2em;
-}
-/* Inline code */
-.message :not(pre) code {
- display: inline;
- white-space: break-spaces;
- font-family: var(--font-mono);
- border-radius: 6px;
- margin: 0 2px 0 2px;
- padding: .2em .4em .1em .4em;
- background-color: rgba(175,184,193,0.2);
-}
-/* Code blocks */
-.message pre,
-.message pre[class*=language-] {
- color: #fff;
- overflow-x: auto;
- overflow-y: hidden;
- margin: .8em 1em 1em 0em !important;
- padding: var(--spacing-xl) 1.2em !important;
- border-radius: var(--radius-lg) !important;
-}
-.message pre code,
-.message pre code[class*=language-] {
- color: #fff;
- padding: 0;
- margin: 0;
- background-color: unset;
- text-shadow: none;
- font-family: var(--font-mono);
-}
-/* Override gradio's ugly copy-button style */
-pre button[title="copy"] {
- border-radius: 5px;
- transition: background-color .2s ease;
-}
-pre button[title="copy"]:hover {
- background-color: #333232;
-}
-pre button .check {
- color: #fff !important;
- background: var(--neutral-950) !important;
-}
-
-/* Override prism.css */
-.language-css .token.string,
-.style .token.string,
-.token.entity,
-.token.operator,
-.token.url {
- background: none !important;
-}
diff --git a/spaces/KyanChen/RSPrompter/mmdet/__init__.py b/spaces/KyanChen/RSPrompter/mmdet/__init__.py
deleted file mode 100644
index 9f6140e121bc140896a7f432465651bfb1111575..0000000000000000000000000000000000000000
--- a/spaces/KyanChen/RSPrompter/mmdet/__init__.py
+++ /dev/null
@@ -1,27 +0,0 @@
-# Copyright (c) OpenMMLab. All rights reserved.
-import mmcv
-import mmengine
-from mmengine.utils import digit_version
-
-from .version import __version__, version_info
-
-mmcv_minimum_version = '2.0'
-mmcv_maximum_version = '2.5.0'
-mmcv_version = digit_version(mmcv.__version__)
-
-mmengine_minimum_version = '0.7.0'
-mmengine_maximum_version = '1.5.0'
-mmengine_version = digit_version(mmengine.__version__)
-
-assert (mmcv_version >= digit_version(mmcv_minimum_version)
- and mmcv_version < digit_version(mmcv_maximum_version)), \
- f'MMCV=={mmcv.__version__} is used but incompatible. ' \
- f'Please install mmcv>={mmcv_minimum_version}, <{mmcv_maximum_version}.'
-
-assert (mmengine_version >= digit_version(mmengine_minimum_version)
- and mmengine_version < digit_version(mmengine_maximum_version)), \
- f'MMEngine=={mmengine.__version__} is used but incompatible. ' \
- f'Please install mmengine>={mmengine_minimum_version}, ' \
- f'<{mmengine_maximum_version}.'
-
-__all__ = ['__version__', 'version_info', 'digit_version']
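# A small sketch of the version-gating comparison used above; it assumes
# mmengine is installed, and the version strings are illustrative only.
from mmengine.utils import digit_version

assert digit_version('2.0.1') >= digit_version('2.0')
assert digit_version('0.7.2') < digit_version('1.5.0')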
diff --git a/spaces/LabelStudio/LabelStudio/Dockerfile b/spaces/LabelStudio/LabelStudio/Dockerfile
deleted file mode 100644
index 9ba913c6937a5238dd32d654197330a4bbf6f63e..0000000000000000000000000000000000000000
--- a/spaces/LabelStudio/LabelStudio/Dockerfile
+++ /dev/null
@@ -1,127 +0,0 @@
-FROM heartexlabs/label-studio:hf-latest
-
-################################################################################
-#
-# How to Disable Public Account Creation
-# --------------------------------------
-# By default this space allows for the unrestricted creation of new accounts
-# with full access to all projects and data. This is great for trying out
-# Label Studio and collaborating on projects, but you may want to restrict
-# access to your space to only authorized users. Uncomment the following line
-# to disable public account creation for this space.
-#
-# ENV LABEL_STUDIO_DISABLE_SIGNUP_WITHOUT_LINK=true
-#
-# Set secrets in your space to create an initial user, and log in with your
-# provided username and password. Do not set these in your Dockerfile, as they
-# are globally visible on a public space.
-#
-# LABEL_STUDIO_USERNAME
-# LABEL_STUDIO_PASSWORD
-#
-# You will need to provide new users with an invitation link to join the space.
-#
-################################################################################
-
-################################################################################
-#
-# How to Enable Persistent Storage for Label Studio in Hugging Face Spaces
-# ------------------------------------------------------------------------
-#
-# By default this space stores all project configuration and data annotations
-# in local storage with sqlite. If the space is reset, all configuration and
-# annotation data in the space will be lost. You can enable configuration
-# persistence through one of two methods:
-#
-# 1) Enabling Hugging Face Persistent Storage for saving project and annotation
-# settings, as well as local task storage.
-# 2) Connecting an external Postgres database for saving project and annotation
-#    settings, and connecting cloud storage for tasks.
-#
-################################################################################
-
-################################################################################
-#
-# How to Enable Hugging Face Persistent Storage for Label Studio
-# --------------------------------------------------------------
-#
-# In the Hugging Face Label Studio Space settings, select the appropriate
-# Persistent Storage tier. Note that Persistent Storage is a paid add-on.
-# By default, persistent storage is mounted to /data. In your Space settings,
-# set the following variables:
-#
-# LABEL_STUDIO_BASE_DATA_DIR=/data
-# ENV STORAGE_PERSISTENCE=1
-#
-# Your space will restart. NOTE: if you have existing settings and data,
-# they will be lost in this first restart. Data and settings will only be
-# preserved on subsequent restarts of the space.
-#
-################################################################################
-
-################################################################################
-#
-# How to Enable Configuration Persistence with Postgres
-# -----------------------------------------------------
-#
-# Set the following secret variables to match your own hosted instance of
-# Postgres. We strongly recommend setting these as secrets to prevent leaking
-# information about your database service to the public in your spaces
-# definition.
-#
-# ENV DJANGO_DB=default
-# ENV POSTGRE_NAME=
-# ENV POSTGRE_PORT=
-# ENV POSTGRE_USER=
-# ENV POSTGRE_PASSWORD=
-# ENV POSTGRE_HOST=
-#
-# Uncomment the following line or set the following Space variable to remove
-# the warning about ephemeral storage
-#
-# ENV STORAGE_PERSISTENCE=1
-#
-# Note that you will need to connect cloud storage to host data items that you
-# want to annotate, as local storage will not be preserved across a space reset.
-#
-#
-# How to Enable Cloud Storage
-# ---------------------------
-# By default the only data storage enabled for this space is local. In the case
-# of a space reset, all data will be lost. To enable permanent storage, you
-# must enable a cloud storage connector. We also strongly recommend enabling
-# configuration persistence to preserve project data, annotations, and user
-# settings. Choose the appropriate cloud connector and configure the secrets
-# for it.
-#
-# Amazon S3
-# =========
-# STORAGE_TYPE=s3
-# STORAGE_AWS_ACCESS_KEY_ID=""
-# STORAGE_AWS_SECRET_ACCESS_KEY=""
-# STORAGE_AWS_BUCKET_NAME=""
-# STORAGE_AWS_REGION_NAME=""
-# STORAGE_AWS_FOLDER=""
-#
-# Google Cloud Storage
-# ====================
-#
-# STORAGE_TYPE=gcs
-# STORAGE_GCS_BUCKET_NAME=""
-# STORAGE_GCS_PROJECT_ID=""
-# STORAGE_GCS_FOLDER=""
-# GOOGLE_APPLICATION_CREDENTIALS="/opt/heartex/secrets/key.json"
-#
-# Azure Blob Storage
-# ==================
-#
-# STORAGE_TYPE=azure
-# STORAGE_AZURE_ACCOUNT_NAME=""
-# STORAGE_AZURE_ACCOUNT_KEY=""
-# STORAGE_AZURE_CONTAINER_NAME=""
-# STORAGE_AZURE_FOLDER=""
-#
-################################################################################
-
-CMD exec label-studio --host=$SPACE_HOST
diff --git a/spaces/LaynzKunz/Aesthetic_RVC_Inference_HF/lib/infer/infer_pack/models.py b/spaces/LaynzKunz/Aesthetic_RVC_Inference_HF/lib/infer/infer_pack/models.py
deleted file mode 100644
index d898604960f129fc37f464ee3669bb61cfa8f614..0000000000000000000000000000000000000000
--- a/spaces/LaynzKunz/Aesthetic_RVC_Inference_HF/lib/infer/infer_pack/models.py
+++ /dev/null
@@ -1,1142 +0,0 @@
-import math
-import torch
-from torch import nn
-from torch.nn import functional as F
-from lib.infer.infer_pack import modules
-from lib.infer.infer_pack import attentions
-from lib.infer.infer_pack.commons import get_padding
-from torch.nn import Conv1d, ConvTranspose1d, Conv2d
-from torch.nn.utils import weight_norm, remove_weight_norm, spectral_norm
-from lib.infer.infer_pack.commons import init_weights
-import numpy as np
-from lib.infer.infer_pack import commons
-
-
-class TextEncoder256(nn.Module):
- def __init__(
- self,
- out_channels,
- hidden_channels,
- filter_channels,
- n_heads,
- n_layers,
- kernel_size,
- p_dropout,
- f0=True,
- ):
- super().__init__()
- self.out_channels = out_channels
- self.hidden_channels = hidden_channels
- self.filter_channels = filter_channels
- self.n_heads = n_heads
- self.n_layers = n_layers
- self.kernel_size = kernel_size
- self.p_dropout = p_dropout
- self.emb_phone = nn.Linear(256, hidden_channels)
- self.lrelu = nn.LeakyReLU(0.1, inplace=True)
- if f0 == True:
- self.emb_pitch = nn.Embedding(256, hidden_channels) # pitch 256
- self.encoder = attentions.Encoder(
- hidden_channels, filter_channels, n_heads, n_layers, kernel_size, p_dropout
- )
- self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1)
-
- def forward(self, phone, pitch, lengths):
-        if pitch is None:
- x = self.emb_phone(phone)
- else:
- x = self.emb_phone(phone) + self.emb_pitch(pitch)
- x = x * math.sqrt(self.hidden_channels) # [b, t, h]
- x = self.lrelu(x)
- x = torch.transpose(x, 1, -1) # [b, h, t]
- x_mask = torch.unsqueeze(commons.sequence_mask(lengths, x.size(2)), 1).to(
- x.dtype
- )
- x = self.encoder(x * x_mask, x_mask)
- stats = self.proj(x) * x_mask
-
- m, logs = torch.split(stats, self.out_channels, dim=1)
- return m, logs, x_mask
-
-
-class TextEncoder768(nn.Module):
- def __init__(
- self,
- out_channels,
- hidden_channels,
- filter_channels,
- n_heads,
- n_layers,
- kernel_size,
- p_dropout,
- f0=True,
- ):
- super().__init__()
- self.out_channels = out_channels
- self.hidden_channels = hidden_channels
- self.filter_channels = filter_channels
- self.n_heads = n_heads
- self.n_layers = n_layers
- self.kernel_size = kernel_size
- self.p_dropout = p_dropout
- self.emb_phone = nn.Linear(768, hidden_channels)
- self.lrelu = nn.LeakyReLU(0.1, inplace=True)
- if f0 == True:
- self.emb_pitch = nn.Embedding(256, hidden_channels) # pitch 256
- self.encoder = attentions.Encoder(
- hidden_channels, filter_channels, n_heads, n_layers, kernel_size, p_dropout
- )
- self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1)
-
- def forward(self, phone, pitch, lengths):
-        if pitch is None:
- x = self.emb_phone(phone)
- else:
- x = self.emb_phone(phone) + self.emb_pitch(pitch)
- x = x * math.sqrt(self.hidden_channels) # [b, t, h]
- x = self.lrelu(x)
- x = torch.transpose(x, 1, -1) # [b, h, t]
- x_mask = torch.unsqueeze(commons.sequence_mask(lengths, x.size(2)), 1).to(
- x.dtype
- )
- x = self.encoder(x * x_mask, x_mask)
- stats = self.proj(x) * x_mask
-
- m, logs = torch.split(stats, self.out_channels, dim=1)
- return m, logs, x_mask
-
-
-class ResidualCouplingBlock(nn.Module):
- def __init__(
- self,
- channels,
- hidden_channels,
- kernel_size,
- dilation_rate,
- n_layers,
- n_flows=4,
- gin_channels=0,
- ):
- super().__init__()
- self.channels = channels
- self.hidden_channels = hidden_channels
- self.kernel_size = kernel_size
- self.dilation_rate = dilation_rate
- self.n_layers = n_layers
- self.n_flows = n_flows
- self.gin_channels = gin_channels
-
- self.flows = nn.ModuleList()
- for i in range(n_flows):
- self.flows.append(
- modules.ResidualCouplingLayer(
- channels,
- hidden_channels,
- kernel_size,
- dilation_rate,
- n_layers,
- gin_channels=gin_channels,
- mean_only=True,
- )
- )
- self.flows.append(modules.Flip())
-
- def forward(self, x, x_mask, g=None, reverse=False):
- if not reverse:
- for flow in self.flows:
- x, _ = flow(x, x_mask, g=g, reverse=reverse)
- else:
- for flow in reversed(self.flows):
- x = flow(x, x_mask, g=g, reverse=reverse)
- return x
-
- def remove_weight_norm(self):
- for i in range(self.n_flows):
- self.flows[i * 2].remove_weight_norm()
-
-
-class PosteriorEncoder(nn.Module):
- def __init__(
- self,
- in_channels,
- out_channels,
- hidden_channels,
- kernel_size,
- dilation_rate,
- n_layers,
- gin_channels=0,
- ):
- super().__init__()
- self.in_channels = in_channels
- self.out_channels = out_channels
- self.hidden_channels = hidden_channels
- self.kernel_size = kernel_size
- self.dilation_rate = dilation_rate
- self.n_layers = n_layers
- self.gin_channels = gin_channels
-
- self.pre = nn.Conv1d(in_channels, hidden_channels, 1)
- self.enc = modules.WN(
- hidden_channels,
- kernel_size,
- dilation_rate,
- n_layers,
- gin_channels=gin_channels,
- )
- self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1)
-
- def forward(self, x, x_lengths, g=None):
- x_mask = torch.unsqueeze(commons.sequence_mask(x_lengths, x.size(2)), 1).to(
- x.dtype
- )
- x = self.pre(x) * x_mask
- x = self.enc(x, x_mask, g=g)
- stats = self.proj(x) * x_mask
- m, logs = torch.split(stats, self.out_channels, dim=1)
- z = (m + torch.randn_like(m) * torch.exp(logs)) * x_mask
- return z, m, logs, x_mask
-
- def remove_weight_norm(self):
- self.enc.remove_weight_norm()
-
-
-class Generator(torch.nn.Module):
- def __init__(
- self,
- initial_channel,
- resblock,
- resblock_kernel_sizes,
- resblock_dilation_sizes,
- upsample_rates,
- upsample_initial_channel,
- upsample_kernel_sizes,
- gin_channels=0,
- ):
- super(Generator, self).__init__()
- self.num_kernels = len(resblock_kernel_sizes)
- self.num_upsamples = len(upsample_rates)
- self.conv_pre = Conv1d(
- initial_channel, upsample_initial_channel, 7, 1, padding=3
- )
- resblock = modules.ResBlock1 if resblock == "1" else modules.ResBlock2
-
- self.ups = nn.ModuleList()
- for i, (u, k) in enumerate(zip(upsample_rates, upsample_kernel_sizes)):
- self.ups.append(
- weight_norm(
- ConvTranspose1d(
- upsample_initial_channel // (2**i),
- upsample_initial_channel // (2 ** (i + 1)),
- k,
- u,
- padding=(k - u) // 2,
- )
- )
- )
-
- self.resblocks = nn.ModuleList()
- for i in range(len(self.ups)):
- ch = upsample_initial_channel // (2 ** (i + 1))
- for j, (k, d) in enumerate(
- zip(resblock_kernel_sizes, resblock_dilation_sizes)
- ):
- self.resblocks.append(resblock(ch, k, d))
-
- self.conv_post = Conv1d(ch, 1, 7, 1, padding=3, bias=False)
- self.ups.apply(init_weights)
-
- if gin_channels != 0:
- self.cond = nn.Conv1d(gin_channels, upsample_initial_channel, 1)
-
- def forward(self, x, g=None):
- x = self.conv_pre(x)
- if g is not None:
- x = x + self.cond(g)
-
- for i in range(self.num_upsamples):
- x = F.leaky_relu(x, modules.LRELU_SLOPE)
- x = self.ups[i](x)
- xs = None
- for j in range(self.num_kernels):
- if xs is None:
- xs = self.resblocks[i * self.num_kernels + j](x)
- else:
- xs += self.resblocks[i * self.num_kernels + j](x)
- x = xs / self.num_kernels
- x = F.leaky_relu(x)
- x = self.conv_post(x)
- x = torch.tanh(x)
-
- return x
-
- def remove_weight_norm(self):
- for l in self.ups:
- remove_weight_norm(l)
- for l in self.resblocks:
- l.remove_weight_norm()
-
-
-class SineGen(torch.nn.Module):
- """Definition of sine generator
- SineGen(samp_rate, harmonic_num = 0,
- sine_amp = 0.1, noise_std = 0.003,
- voiced_threshold = 0,
- flag_for_pulse=False)
- samp_rate: sampling rate in Hz
- harmonic_num: number of harmonic overtones (default 0)
-    sine_amp: amplitude of sine-waveform (default 0.1)
-    noise_std: std of Gaussian noise (default 0.003)
-    voiced_threshold: F0 threshold for U/V classification (default 0)
-    flag_for_pulse: this SineGen is used inside PulseGen (default False)
- Note: when flag_for_pulse is True, the first time step of a voiced
- segment is always sin(np.pi) or cos(0)
- """
-
- def __init__(
- self,
- samp_rate,
- harmonic_num=0,
- sine_amp=0.1,
- noise_std=0.003,
- voiced_threshold=0,
- flag_for_pulse=False,
- ):
- super(SineGen, self).__init__()
- self.sine_amp = sine_amp
- self.noise_std = noise_std
- self.harmonic_num = harmonic_num
- self.dim = self.harmonic_num + 1
- self.sampling_rate = samp_rate
- self.voiced_threshold = voiced_threshold
-
- def _f02uv(self, f0):
- # generate uv signal
- uv = torch.ones_like(f0)
- uv = uv * (f0 > self.voiced_threshold)
- if uv.device.type == "privateuseone": # for DirectML
- uv = uv.float()
- return uv
-
- def forward(self, f0, upp):
- """sine_tensor, uv = forward(f0)
- input F0: tensor(batchsize=1, length, dim=1)
- f0 for unvoiced steps should be 0
- output sine_tensor: tensor(batchsize=1, length, dim)
- output uv: tensor(batchsize=1, length, 1)
- """
- with torch.no_grad():
- f0 = f0[:, None].transpose(1, 2)
- f0_buf = torch.zeros(f0.shape[0], f0.shape[1], self.dim, device=f0.device)
- # fundamental component
- f0_buf[:, :, 0] = f0[:, :, 0]
- for idx in np.arange(self.harmonic_num):
- f0_buf[:, :, idx + 1] = f0_buf[:, :, 0] * (
- idx + 2
- ) # idx + 2: the (idx+1)-th overtone, (idx+2)-th harmonic
-            rad_values = (f0_buf / self.sampling_rate) % 1  # the %1 means the n_har harmonic products cannot be optimized away afterwards
- rand_ini = torch.rand(
- f0_buf.shape[0], f0_buf.shape[2], device=f0_buf.device
- )
- rand_ini[:, 0] = 0
- rad_values[:, 0, :] = rad_values[:, 0, :] + rand_ini
-            tmp_over_one = torch.cumsum(rad_values, 1)  # % 1  # applying %1 here would prevent optimizing the following cumsum
- tmp_over_one *= upp
- tmp_over_one = F.interpolate(
- tmp_over_one.transpose(2, 1),
- scale_factor=upp,
- mode="linear",
- align_corners=True,
- ).transpose(2, 1)
- rad_values = F.interpolate(
- rad_values.transpose(2, 1), scale_factor=upp, mode="nearest"
- ).transpose(
- 2, 1
- ) #######
- tmp_over_one %= 1
- tmp_over_one_idx = (tmp_over_one[:, 1:, :] - tmp_over_one[:, :-1, :]) < 0
- cumsum_shift = torch.zeros_like(rad_values)
- cumsum_shift[:, 1:, :] = tmp_over_one_idx * -1.0
- sine_waves = torch.sin(
- torch.cumsum(rad_values + cumsum_shift, dim=1) * 2 * np.pi
- )
- sine_waves = sine_waves * self.sine_amp
- uv = self._f02uv(f0)
- uv = F.interpolate(
- uv.transpose(2, 1), scale_factor=upp, mode="nearest"
- ).transpose(2, 1)
- noise_amp = uv * self.noise_std + (1 - uv) * self.sine_amp / 3
- noise = noise_amp * torch.randn_like(sine_waves)
- sine_waves = sine_waves * uv + noise
- return sine_waves, uv, noise
-
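# A minimal usage sketch of the SineGen module above, assuming torch is
# imported as in this file; the F0 contour, sample rate and upsampling factor
# (samples per frame) are illustrative.
demo_gen = SineGen(samp_rate=16000, harmonic_num=0)
demo_f0 = torch.full((1, 100), 220.0)          # 100 frames of a 220 Hz contour
sine, uv, noise = demo_gen(demo_f0, upp=160)   # 160 output samples per frame
assert sine.shape == (1, 100 * 160, 1)         # harmonic_num=0 -> one sine band
assert uv.shape == (1, 100 * 160, 1)           # 1.0 where F0 > voiced_threshold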
-
-class SourceModuleHnNSF(torch.nn.Module):
- """SourceModule for hn-nsf
- SourceModule(sampling_rate, harmonic_num=0, sine_amp=0.1,
- add_noise_std=0.003, voiced_threshod=0)
- sampling_rate: sampling_rate in Hz
- harmonic_num: number of harmonic above F0 (default: 0)
- sine_amp: amplitude of sine source signal (default: 0.1)
- add_noise_std: std of additive Gaussian noise (default: 0.003)
- note that amplitude of noise in unvoiced is decided
- by sine_amp
-    voiced_threshold: threshold to set U/V given F0 (default: 0)
- Sine_source, noise_source = SourceModuleHnNSF(F0_sampled)
- F0_sampled (batchsize, length, 1)
- Sine_source (batchsize, length, 1)
- noise_source (batchsize, length 1)
- uv (batchsize, length, 1)
- """
-
- def __init__(
- self,
- sampling_rate,
- harmonic_num=0,
- sine_amp=0.1,
- add_noise_std=0.003,
- voiced_threshod=0,
- is_half=True,
- ):
- super(SourceModuleHnNSF, self).__init__()
-
- self.sine_amp = sine_amp
- self.noise_std = add_noise_std
- self.is_half = is_half
- # to produce sine waveforms
- self.l_sin_gen = SineGen(
- sampling_rate, harmonic_num, sine_amp, add_noise_std, voiced_threshod
- )
-
- # to merge source harmonics into a single excitation
- self.l_linear = torch.nn.Linear(harmonic_num + 1, 1)
- self.l_tanh = torch.nn.Tanh()
-
- def forward(self, x, upp=None):
- sine_wavs, uv, _ = self.l_sin_gen(x, upp)
- if self.is_half:
- sine_wavs = sine_wavs.half()
- sine_merge = self.l_tanh(self.l_linear(sine_wavs))
- return sine_merge, None, None # noise, uv
-
-
-class GeneratorNSF(torch.nn.Module):
- def __init__(
- self,
- initial_channel,
- resblock,
- resblock_kernel_sizes,
- resblock_dilation_sizes,
- upsample_rates,
- upsample_initial_channel,
- upsample_kernel_sizes,
- gin_channels,
- sr,
- is_half=False,
- ):
- super(GeneratorNSF, self).__init__()
- self.num_kernels = len(resblock_kernel_sizes)
- self.num_upsamples = len(upsample_rates)
-
- self.f0_upsamp = torch.nn.Upsample(scale_factor=np.prod(upsample_rates))
- self.m_source = SourceModuleHnNSF(
- sampling_rate=sr, harmonic_num=0, is_half=is_half
- )
- self.noise_convs = nn.ModuleList()
- self.conv_pre = Conv1d(
- initial_channel, upsample_initial_channel, 7, 1, padding=3
- )
- resblock = modules.ResBlock1 if resblock == "1" else modules.ResBlock2
-
- self.ups = nn.ModuleList()
- for i, (u, k) in enumerate(zip(upsample_rates, upsample_kernel_sizes)):
- c_cur = upsample_initial_channel // (2 ** (i + 1))
- self.ups.append(
- weight_norm(
- ConvTranspose1d(
- upsample_initial_channel // (2**i),
- upsample_initial_channel // (2 ** (i + 1)),
- k,
- u,
- padding=(k - u) // 2,
- )
- )
- )
- if i + 1 < len(upsample_rates):
- stride_f0 = np.prod(upsample_rates[i + 1 :])
- self.noise_convs.append(
- Conv1d(
- 1,
- c_cur,
- kernel_size=stride_f0 * 2,
- stride=stride_f0,
- padding=stride_f0 // 2,
- )
- )
- else:
- self.noise_convs.append(Conv1d(1, c_cur, kernel_size=1))
-
- self.resblocks = nn.ModuleList()
- for i in range(len(self.ups)):
- ch = upsample_initial_channel // (2 ** (i + 1))
- for j, (k, d) in enumerate(
- zip(resblock_kernel_sizes, resblock_dilation_sizes)
- ):
- self.resblocks.append(resblock(ch, k, d))
-
- self.conv_post = Conv1d(ch, 1, 7, 1, padding=3, bias=False)
- self.ups.apply(init_weights)
-
- if gin_channels != 0:
- self.cond = nn.Conv1d(gin_channels, upsample_initial_channel, 1)
-
- self.upp = np.prod(upsample_rates)
-
- def forward(self, x, f0, g=None):
- har_source, noi_source, uv = self.m_source(f0, self.upp)
- har_source = har_source.transpose(1, 2)
- x = self.conv_pre(x)
- if g is not None:
- x = x + self.cond(g)
-
- for i in range(self.num_upsamples):
- x = F.leaky_relu(x, modules.LRELU_SLOPE)
- x = self.ups[i](x)
- x_source = self.noise_convs[i](har_source)
- x = x + x_source
- xs = None
- for j in range(self.num_kernels):
- if xs is None:
- xs = self.resblocks[i * self.num_kernels + j](x)
- else:
- xs += self.resblocks[i * self.num_kernels + j](x)
- x = xs / self.num_kernels
- x = F.leaky_relu(x)
- x = self.conv_post(x)
- x = torch.tanh(x)
- return x
-
- def remove_weight_norm(self):
- for l in self.ups:
- remove_weight_norm(l)
- for l in self.resblocks:
- l.remove_weight_norm()
-
-
-sr2sr = {
- "32k": 32000,
- "40k": 40000,
- "48k": 48000,
-}
-
-
-class SynthesizerTrnMs256NSFsid(nn.Module):
- def __init__(
- self,
- spec_channels,
- segment_size,
- inter_channels,
- hidden_channels,
- filter_channels,
- n_heads,
- n_layers,
- kernel_size,
- p_dropout,
- resblock,
- resblock_kernel_sizes,
- resblock_dilation_sizes,
- upsample_rates,
- upsample_initial_channel,
- upsample_kernel_sizes,
- spk_embed_dim,
- gin_channels,
- sr,
- **kwargs
- ):
- super().__init__()
-        if isinstance(sr, str):
- sr = sr2sr[sr]
- self.spec_channels = spec_channels
- self.inter_channels = inter_channels
- self.hidden_channels = hidden_channels
- self.filter_channels = filter_channels
- self.n_heads = n_heads
- self.n_layers = n_layers
- self.kernel_size = kernel_size
- self.p_dropout = p_dropout
- self.resblock = resblock
- self.resblock_kernel_sizes = resblock_kernel_sizes
- self.resblock_dilation_sizes = resblock_dilation_sizes
- self.upsample_rates = upsample_rates
- self.upsample_initial_channel = upsample_initial_channel
- self.upsample_kernel_sizes = upsample_kernel_sizes
- self.segment_size = segment_size
- self.gin_channels = gin_channels
- # self.hop_length = hop_length#
- self.spk_embed_dim = spk_embed_dim
- self.enc_p = TextEncoder256(
- inter_channels,
- hidden_channels,
- filter_channels,
- n_heads,
- n_layers,
- kernel_size,
- p_dropout,
- )
- self.dec = GeneratorNSF(
- inter_channels,
- resblock,
- resblock_kernel_sizes,
- resblock_dilation_sizes,
- upsample_rates,
- upsample_initial_channel,
- upsample_kernel_sizes,
- gin_channels=gin_channels,
- sr=sr,
- is_half=kwargs["is_half"],
- )
- self.enc_q = PosteriorEncoder(
- spec_channels,
- inter_channels,
- hidden_channels,
- 5,
- 1,
- 16,
- gin_channels=gin_channels,
- )
- self.flow = ResidualCouplingBlock(
- inter_channels, hidden_channels, 5, 1, 3, gin_channels=gin_channels
- )
- self.emb_g = nn.Embedding(self.spk_embed_dim, gin_channels)
- print("gin_channels:", gin_channels, "self.spk_embed_dim:", self.spk_embed_dim)
-
- def remove_weight_norm(self):
- self.dec.remove_weight_norm()
- self.flow.remove_weight_norm()
- self.enc_q.remove_weight_norm()
-
- def forward(
- self, phone, phone_lengths, pitch, pitchf, y, y_lengths, ds
-    ):  # here ds is the speaker id, shape [bs, 1]
- # print(1,pitch.shape)#[bs,t]
-        g = self.emb_g(ds).unsqueeze(-1)  # [b, 256, 1]; the trailing 1 is the time axis (broadcast)
- m_p, logs_p, x_mask = self.enc_p(phone, pitch, phone_lengths)
- z, m_q, logs_q, y_mask = self.enc_q(y, y_lengths, g=g)
- z_p = self.flow(z, y_mask, g=g)
- z_slice, ids_slice = commons.rand_slice_segments(
- z, y_lengths, self.segment_size
- )
- # print(-1,pitchf.shape,ids_slice,self.segment_size,self.hop_length,self.segment_size//self.hop_length)
- pitchf = commons.slice_segments2(pitchf, ids_slice, self.segment_size)
- # print(-2,pitchf.shape,z_slice.shape)
- o = self.dec(z_slice, pitchf, g=g)
- return o, ids_slice, x_mask, y_mask, (z, z_p, m_p, logs_p, m_q, logs_q)
-
- def infer(self, phone, phone_lengths, pitch, nsff0, sid, rate=None):
- g = self.emb_g(sid).unsqueeze(-1)
- m_p, logs_p, x_mask = self.enc_p(phone, pitch, phone_lengths)
- z_p = (m_p + torch.exp(logs_p) * torch.randn_like(m_p) * 0.66666) * x_mask
- if rate:
- head = int(z_p.shape[2] * rate)
- z_p = z_p[:, :, -head:]
- x_mask = x_mask[:, :, -head:]
- nsff0 = nsff0[:, -head:]
- z = self.flow(z_p, x_mask, g=g, reverse=True)
- o = self.dec(z * x_mask, nsff0, g=g)
- return o, x_mask, (z, z_p, m_p, logs_p)
-
-
-class SynthesizerTrnMs768NSFsid(nn.Module):
- def __init__(
- self,
- spec_channels,
- segment_size,
- inter_channels,
- hidden_channels,
- filter_channels,
- n_heads,
- n_layers,
- kernel_size,
- p_dropout,
- resblock,
- resblock_kernel_sizes,
- resblock_dilation_sizes,
- upsample_rates,
- upsample_initial_channel,
- upsample_kernel_sizes,
- spk_embed_dim,
- gin_channels,
- sr,
- **kwargs
- ):
- super().__init__()
-        if isinstance(sr, str):
- sr = sr2sr[sr]
- self.spec_channels = spec_channels
- self.inter_channels = inter_channels
- self.hidden_channels = hidden_channels
- self.filter_channels = filter_channels
- self.n_heads = n_heads
- self.n_layers = n_layers
- self.kernel_size = kernel_size
- self.p_dropout = p_dropout
- self.resblock = resblock
- self.resblock_kernel_sizes = resblock_kernel_sizes
- self.resblock_dilation_sizes = resblock_dilation_sizes
- self.upsample_rates = upsample_rates
- self.upsample_initial_channel = upsample_initial_channel
- self.upsample_kernel_sizes = upsample_kernel_sizes
- self.segment_size = segment_size
- self.gin_channels = gin_channels
- # self.hop_length = hop_length#
- self.spk_embed_dim = spk_embed_dim
- self.enc_p = TextEncoder768(
- inter_channels,
- hidden_channels,
- filter_channels,
- n_heads,
- n_layers,
- kernel_size,
- p_dropout,
- )
- self.dec = GeneratorNSF(
- inter_channels,
- resblock,
- resblock_kernel_sizes,
- resblock_dilation_sizes,
- upsample_rates,
- upsample_initial_channel,
- upsample_kernel_sizes,
- gin_channels=gin_channels,
- sr=sr,
- is_half=kwargs["is_half"],
- )
- self.enc_q = PosteriorEncoder(
- spec_channels,
- inter_channels,
- hidden_channels,
- 5,
- 1,
- 16,
- gin_channels=gin_channels,
- )
- self.flow = ResidualCouplingBlock(
- inter_channels, hidden_channels, 5, 1, 3, gin_channels=gin_channels
- )
- self.emb_g = nn.Embedding(self.spk_embed_dim, gin_channels)
- print("gin_channels:", gin_channels, "self.spk_embed_dim:", self.spk_embed_dim)
-
- def remove_weight_norm(self):
- self.dec.remove_weight_norm()
- self.flow.remove_weight_norm()
- self.enc_q.remove_weight_norm()
-
- def forward(
- self, phone, phone_lengths, pitch, pitchf, y, y_lengths, ds
- ): # ds is the speaker id, shape [bs, 1]
- # print(1,pitch.shape)#[bs,t]
- g = self.emb_g(ds).unsqueeze(-1) # [b, 256, 1]; the trailing 1 is the time axis, broadcast over t
- m_p, logs_p, x_mask = self.enc_p(phone, pitch, phone_lengths)
- z, m_q, logs_q, y_mask = self.enc_q(y, y_lengths, g=g)
- z_p = self.flow(z, y_mask, g=g)
- z_slice, ids_slice = commons.rand_slice_segments(
- z, y_lengths, self.segment_size
- )
- # print(-1,pitchf.shape,ids_slice,self.segment_size,self.hop_length,self.segment_size//self.hop_length)
- pitchf = commons.slice_segments2(pitchf, ids_slice, self.segment_size)
- # print(-2,pitchf.shape,z_slice.shape)
- o = self.dec(z_slice, pitchf, g=g)
- return o, ids_slice, x_mask, y_mask, (z, z_p, m_p, logs_p, m_q, logs_q)
-
- def infer(self, phone, phone_lengths, pitch, nsff0, sid, rate=None):
- g = self.emb_g(sid).unsqueeze(-1)
- m_p, logs_p, x_mask = self.enc_p(phone, pitch, phone_lengths)
- z_p = (m_p + torch.exp(logs_p) * torch.randn_like(m_p) * 0.66666) * x_mask
- if rate:
- head = int(z_p.shape[2] * rate)
- z_p = z_p[:, :, -head:]
- x_mask = x_mask[:, :, -head:]
- nsff0 = nsff0[:, -head:]
- z = self.flow(z_p, x_mask, g=g, reverse=True)
- o = self.dec(z * x_mask, nsff0, g=g)
- return o, x_mask, (z, z_p, m_p, logs_p)
-
-
-class SynthesizerTrnMs256NSFsid_nono(nn.Module):
- def __init__(
- self,
- spec_channels,
- segment_size,
- inter_channels,
- hidden_channels,
- filter_channels,
- n_heads,
- n_layers,
- kernel_size,
- p_dropout,
- resblock,
- resblock_kernel_sizes,
- resblock_dilation_sizes,
- upsample_rates,
- upsample_initial_channel,
- upsample_kernel_sizes,
- spk_embed_dim,
- gin_channels,
- sr=None,
- **kwargs
- ):
- super().__init__()
- self.spec_channels = spec_channels
- self.inter_channels = inter_channels
- self.hidden_channels = hidden_channels
- self.filter_channels = filter_channels
- self.n_heads = n_heads
- self.n_layers = n_layers
- self.kernel_size = kernel_size
- self.p_dropout = p_dropout
- self.resblock = resblock
- self.resblock_kernel_sizes = resblock_kernel_sizes
- self.resblock_dilation_sizes = resblock_dilation_sizes
- self.upsample_rates = upsample_rates
- self.upsample_initial_channel = upsample_initial_channel
- self.upsample_kernel_sizes = upsample_kernel_sizes
- self.segment_size = segment_size
- self.gin_channels = gin_channels
- # self.hop_length = hop_length#
- self.spk_embed_dim = spk_embed_dim
- self.enc_p = TextEncoder256(
- inter_channels,
- hidden_channels,
- filter_channels,
- n_heads,
- n_layers,
- kernel_size,
- p_dropout,
- f0=False,
- )
- self.dec = Generator(
- inter_channels,
- resblock,
- resblock_kernel_sizes,
- resblock_dilation_sizes,
- upsample_rates,
- upsample_initial_channel,
- upsample_kernel_sizes,
- gin_channels=gin_channels,
- )
- self.enc_q = PosteriorEncoder(
- spec_channels,
- inter_channels,
- hidden_channels,
- 5,
- 1,
- 16,
- gin_channels=gin_channels,
- )
- self.flow = ResidualCouplingBlock(
- inter_channels, hidden_channels, 5, 1, 3, gin_channels=gin_channels
- )
- self.emb_g = nn.Embedding(self.spk_embed_dim, gin_channels)
- print("gin_channels:", gin_channels, "self.spk_embed_dim:", self.spk_embed_dim)
-
- def remove_weight_norm(self):
- self.dec.remove_weight_norm()
- self.flow.remove_weight_norm()
- self.enc_q.remove_weight_norm()
-
- def forward(self, phone, phone_lengths, y, y_lengths, ds): # ds is the speaker id, shape [bs, 1]
- g = self.emb_g(ds).unsqueeze(-1) # [b, 256, 1]; the trailing 1 is the time axis, broadcast over t
- m_p, logs_p, x_mask = self.enc_p(phone, None, phone_lengths)
- z, m_q, logs_q, y_mask = self.enc_q(y, y_lengths, g=g)
- z_p = self.flow(z, y_mask, g=g)
- z_slice, ids_slice = commons.rand_slice_segments(
- z, y_lengths, self.segment_size
- )
- o = self.dec(z_slice, g=g)
- return o, ids_slice, x_mask, y_mask, (z, z_p, m_p, logs_p, m_q, logs_q)
-
- def infer(self, phone, phone_lengths, sid, rate=None):
- g = self.emb_g(sid).unsqueeze(-1)
- m_p, logs_p, x_mask = self.enc_p(phone, None, phone_lengths)
- z_p = (m_p + torch.exp(logs_p) * torch.randn_like(m_p) * 0.66666) * x_mask
- if rate:
- head = int(z_p.shape[2] * rate)
- z_p = z_p[:, :, -head:]
- x_mask = x_mask[:, :, -head:]
- z = self.flow(z_p, x_mask, g=g, reverse=True)
- o = self.dec(z * x_mask, g=g)
- return o, x_mask, (z, z_p, m_p, logs_p)
-
-
-class SynthesizerTrnMs768NSFsid_nono(nn.Module):
- def __init__(
- self,
- spec_channels,
- segment_size,
- inter_channels,
- hidden_channels,
- filter_channels,
- n_heads,
- n_layers,
- kernel_size,
- p_dropout,
- resblock,
- resblock_kernel_sizes,
- resblock_dilation_sizes,
- upsample_rates,
- upsample_initial_channel,
- upsample_kernel_sizes,
- spk_embed_dim,
- gin_channels,
- sr=None,
- **kwargs
- ):
- super().__init__()
- self.spec_channels = spec_channels
- self.inter_channels = inter_channels
- self.hidden_channels = hidden_channels
- self.filter_channels = filter_channels
- self.n_heads = n_heads
- self.n_layers = n_layers
- self.kernel_size = kernel_size
- self.p_dropout = p_dropout
- self.resblock = resblock
- self.resblock_kernel_sizes = resblock_kernel_sizes
- self.resblock_dilation_sizes = resblock_dilation_sizes
- self.upsample_rates = upsample_rates
- self.upsample_initial_channel = upsample_initial_channel
- self.upsample_kernel_sizes = upsample_kernel_sizes
- self.segment_size = segment_size
- self.gin_channels = gin_channels
- # self.hop_length = hop_length#
- self.spk_embed_dim = spk_embed_dim
- self.enc_p = TextEncoder768(
- inter_channels,
- hidden_channels,
- filter_channels,
- n_heads,
- n_layers,
- kernel_size,
- p_dropout,
- f0=False,
- )
- self.dec = Generator(
- inter_channels,
- resblock,
- resblock_kernel_sizes,
- resblock_dilation_sizes,
- upsample_rates,
- upsample_initial_channel,
- upsample_kernel_sizes,
- gin_channels=gin_channels,
- )
- self.enc_q = PosteriorEncoder(
- spec_channels,
- inter_channels,
- hidden_channels,
- 5,
- 1,
- 16,
- gin_channels=gin_channels,
- )
- self.flow = ResidualCouplingBlock(
- inter_channels, hidden_channels, 5, 1, 3, gin_channels=gin_channels
- )
- self.emb_g = nn.Embedding(self.spk_embed_dim, gin_channels)
- print("gin_channels:", gin_channels, "self.spk_embed_dim:", self.spk_embed_dim)
-
- def remove_weight_norm(self):
- self.dec.remove_weight_norm()
- self.flow.remove_weight_norm()
- self.enc_q.remove_weight_norm()
-
- def forward(self, phone, phone_lengths, y, y_lengths, ds): # ds is the speaker id, shape [bs, 1]
- g = self.emb_g(ds).unsqueeze(-1) # [b, 256, 1]; the trailing 1 is the time axis, broadcast over t
- m_p, logs_p, x_mask = self.enc_p(phone, None, phone_lengths)
- z, m_q, logs_q, y_mask = self.enc_q(y, y_lengths, g=g)
- z_p = self.flow(z, y_mask, g=g)
- z_slice, ids_slice = commons.rand_slice_segments(
- z, y_lengths, self.segment_size
- )
- o = self.dec(z_slice, g=g)
- return o, ids_slice, x_mask, y_mask, (z, z_p, m_p, logs_p, m_q, logs_q)
-
- def infer(self, phone, phone_lengths, sid, rate=None):
- g = self.emb_g(sid).unsqueeze(-1)
- m_p, logs_p, x_mask = self.enc_p(phone, None, phone_lengths)
- z_p = (m_p + torch.exp(logs_p) * torch.randn_like(m_p) * 0.66666) * x_mask
- if rate:
- head = int(z_p.shape[2] * rate)
- z_p = z_p[:, :, -head:]
- x_mask = x_mask[:, :, -head:]
- z = self.flow(z_p, x_mask, g=g, reverse=True)
- o = self.dec(z * x_mask, g=g)
- return o, x_mask, (z, z_p, m_p, logs_p)
-
-
-class MultiPeriodDiscriminator(torch.nn.Module):
- def __init__(self, use_spectral_norm=False):
- super(MultiPeriodDiscriminator, self).__init__()
- periods = [2, 3, 5, 7, 11, 17]
- # periods = [3, 5, 7, 11, 17, 23, 37]
-
- discs = [DiscriminatorS(use_spectral_norm=use_spectral_norm)]
- discs = discs + [
- DiscriminatorP(i, use_spectral_norm=use_spectral_norm) for i in periods
- ]
- self.discriminators = nn.ModuleList(discs)
-
- def forward(self, y, y_hat):
- y_d_rs = [] #
- y_d_gs = []
- fmap_rs = []
- fmap_gs = []
- for i, d in enumerate(self.discriminators):
- y_d_r, fmap_r = d(y)
- y_d_g, fmap_g = d(y_hat)
- # for j in range(len(fmap_r)):
- # print(i,j,y.shape,y_hat.shape,fmap_r[j].shape,fmap_g[j].shape)
- y_d_rs.append(y_d_r)
- y_d_gs.append(y_d_g)
- fmap_rs.append(fmap_r)
- fmap_gs.append(fmap_g)
-
- return y_d_rs, y_d_gs, fmap_rs, fmap_gs
-
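-# Sketch of how these discriminator outputs are typically consumed during GAN
-# training (illustrative only; the actual loss functions live in the training loop):
-#   mpd = MultiPeriodDiscriminator()
-#   y_d_rs, y_d_gs, fmap_rs, fmap_gs = mpd(wav_real, wav_generated)
-#   # adversarial/discriminator losses use (y_d_rs, y_d_gs);
-#   # the feature-matching loss uses (fmap_rs, fmap_gs)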
-
-class MultiPeriodDiscriminatorV2(torch.nn.Module):
- def __init__(self, use_spectral_norm=False):
- super(MultiPeriodDiscriminatorV2, self).__init__()
- # periods = [2, 3, 5, 7, 11, 17]
- periods = [2, 3, 5, 7, 11, 17, 23, 37]
-
- discs = [DiscriminatorS(use_spectral_norm=use_spectral_norm)]
- discs = discs + [
- DiscriminatorP(i, use_spectral_norm=use_spectral_norm) for i in periods
- ]
- self.discriminators = nn.ModuleList(discs)
-
- def forward(self, y, y_hat):
- y_d_rs = [] #
- y_d_gs = []
- fmap_rs = []
- fmap_gs = []
- for i, d in enumerate(self.discriminators):
- y_d_r, fmap_r = d(y)
- y_d_g, fmap_g = d(y_hat)
- # for j in range(len(fmap_r)):
- # print(i,j,y.shape,y_hat.shape,fmap_r[j].shape,fmap_g[j].shape)
- y_d_rs.append(y_d_r)
- y_d_gs.append(y_d_g)
- fmap_rs.append(fmap_r)
- fmap_gs.append(fmap_g)
-
- return y_d_rs, y_d_gs, fmap_rs, fmap_gs
-
-
-class DiscriminatorS(torch.nn.Module):
- def __init__(self, use_spectral_norm=False):
- super(DiscriminatorS, self).__init__()
- norm_f = spectral_norm if use_spectral_norm else weight_norm
- self.convs = nn.ModuleList(
- [
- norm_f(Conv1d(1, 16, 15, 1, padding=7)),
- norm_f(Conv1d(16, 64, 41, 4, groups=4, padding=20)),
- norm_f(Conv1d(64, 256, 41, 4, groups=16, padding=20)),
- norm_f(Conv1d(256, 1024, 41, 4, groups=64, padding=20)),
- norm_f(Conv1d(1024, 1024, 41, 4, groups=256, padding=20)),
- norm_f(Conv1d(1024, 1024, 5, 1, padding=2)),
- ]
- )
- self.conv_post = norm_f(Conv1d(1024, 1, 3, 1, padding=1))
-
- def forward(self, x):
- fmap = []
-
- for l in self.convs:
- x = l(x)
- x = F.leaky_relu(x, modules.LRELU_SLOPE)
- fmap.append(x)
- x = self.conv_post(x)
- fmap.append(x)
- x = torch.flatten(x, 1, -1)
-
- return x, fmap
-
-
-class DiscriminatorP(torch.nn.Module):
- def __init__(self, period, kernel_size=5, stride=3, use_spectral_norm=False):
- super(DiscriminatorP, self).__init__()
- self.period = period
- self.use_spectral_norm = use_spectral_norm
- norm_f = spectral_norm if use_spectral_norm else weight_norm
- self.convs = nn.ModuleList(
- [
- norm_f(
- Conv2d(
- 1,
- 32,
- (kernel_size, 1),
- (stride, 1),
- padding=(get_padding(kernel_size, 1), 0),
- )
- ),
- norm_f(
- Conv2d(
- 32,
- 128,
- (kernel_size, 1),
- (stride, 1),
- padding=(get_padding(kernel_size, 1), 0),
- )
- ),
- norm_f(
- Conv2d(
- 128,
- 512,
- (kernel_size, 1),
- (stride, 1),
- padding=(get_padding(kernel_size, 1), 0),
- )
- ),
- norm_f(
- Conv2d(
- 512,
- 1024,
- (kernel_size, 1),
- (stride, 1),
- padding=(get_padding(kernel_size, 1), 0),
- )
- ),
- norm_f(
- Conv2d(
- 1024,
- 1024,
- (kernel_size, 1),
- 1,
- padding=(get_padding(kernel_size, 1), 0),
- )
- ),
- ]
- )
- self.conv_post = norm_f(Conv2d(1024, 1, (3, 1), 1, padding=(1, 0)))
-
- def forward(self, x):
- fmap = []
-
- # 1d to 2d
- b, c, t = x.shape
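- # The waveform [b, c, t] is padded so that t becomes a multiple of self.period,
- # then reshaped to [b, c, t // period, period]; the 2D convolutions below act on
- # period-spaced samples, which is the core idea of a period discriminator in
- # HiFi-GAN-style vocoders.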
- if t % self.period != 0: # pad first
- n_pad = self.period - (t % self.period)
- x = F.pad(x, (0, n_pad), "reflect")
- t = t + n_pad
- x = x.view(b, c, t // self.period, self.period)
-
- for l in self.convs:
- x = l(x)
- x = F.leaky_relu(x, modules.LRELU_SLOPE)
- fmap.append(x)
- x = self.conv_post(x)
- fmap.append(x)
- x = torch.flatten(x, 1, -1)
-
- return x, fmap
\ No newline at end of file
diff --git a/spaces/LaynzKunz/Aesthetic_RVC_Inference_HF/lib/tools/server.py b/spaces/LaynzKunz/Aesthetic_RVC_Inference_HF/lib/tools/server.py
deleted file mode 100644
index 0b6110f0779f2f0e6c1804abca6c3990975732ce..0000000000000000000000000000000000000000
--- a/spaces/LaynzKunz/Aesthetic_RVC_Inference_HF/lib/tools/server.py
+++ /dev/null
@@ -1,373 +0,0 @@
-from flask import Flask, request, Response
-import os
-import sys
-import requests
-import shutil
-import subprocess
-import wget
-import signal
-from bs4 import BeautifulSoup
-import logging
-import click
-
-
-app = Flask(__name__)
-
-# Disable flask starting message
-log = logging.getLogger('werkzeug')
-log.setLevel(logging.ERROR)
-
-def secho(text, file=None, nl=None, err=None, color=None, **styles):
- pass
-
-def echo(text, file=None, nl=None, err=None, color=None, **styles):
- pass
-
-click.echo = echo
-click.secho = secho
-
-# Get the current directory path
-now_dir = os.path.dirname(os.path.abspath(__file__))
-
-# Go up two levels in the directory hierarchy
-for _ in range(2):
- now_dir = os.path.dirname(now_dir)
-
-# Add now_dir to sys.path so Python can find modules in that location
-sys.path.append(now_dir)
-
-from assets.i18n.i18n import I18nAuto
-i18n = I18nAuto()
-
-# Use the code from the resources module but with some changes
-def find_folder_parent(search_dir, folder_name):
- for dirpath, dirnames, filenames in os.walk(search_dir):
- if folder_name in dirnames:
- return os.path.abspath(dirpath)
- return None
-
-def get_mediafire_download_link(url):
- response = requests.get(url)
- response.raise_for_status()
- soup = BeautifulSoup(response.text, 'html.parser')
- download_button = soup.find('a', {'class': 'input popsok', 'aria-label': 'Download file'})
- if download_button:
- download_link = download_button.get('href')
- return download_link
- else:
- return None
-
-def download_from_url(url):
- file_path = find_folder_parent(now_dir, "assets")
- print(file_path)
- zips_path = os.path.join(file_path, "assets", "zips")
- print(zips_path)
- os.makedirs(zips_path, exist_ok=True)
- if url != "":
- print(i18n("Downloading the file: ") + f"{url}")
- if "drive.google.com" in url:
- if "file/d/" in url:
- file_id = url.split("file/d/")[1].split("/")[0]
- elif "id=" in url:
- file_id = url.split("id=")[1].split("&")[0]
- else:
- return None
-
- if file_id:
- os.chdir(zips_path)
- result = subprocess.run(
- ["gdown", f"https://drive.google.com/uc?id={file_id}", "--fuzzy"],
- capture_output=True,
- text=True,
- encoding="utf-8",
- )
- if (
- "Too many users have viewed or downloaded this file recently"
- in str(result.stderr)
- ):
- return "too much use"
- if "Cannot retrieve the public link of the file." in str(result.stderr):
- return "private link"
- print(result.stderr)
-
- elif "/blob/" in url or "/resolve/" in url:
- os.chdir(zips_path)
- if "/blob/" in url:
- url = url.replace("/blob/", "/resolve/")
-
- response = requests.get(url, stream=True)
- if response.status_code == 200:
- file_name = url.split("/")[-1]
- file_name = file_name.replace("%20", "_")
- total_size_in_bytes = int(response.headers.get('content-length', 0))
- block_size = 1024 # 1 Kibibyte
- progress_bar_length = 50
- progress = 0
- with open(os.path.join(zips_path, file_name), 'wb') as file:
- for data in response.iter_content(block_size):
- file.write(data)
- progress += len(data)
- progress_percent = int((progress / total_size_in_bytes) * 100)
- num_dots = int((progress / total_size_in_bytes) * progress_bar_length)
- progress_bar = "[" + "." * num_dots + " " * (progress_bar_length - num_dots) + "]"
- print(f"{progress_percent}% {progress_bar} {progress}/{total_size_in_bytes} ", end="\r")
- if progress_percent == 100:
- print("\n")
- else:
- os.chdir(file_path)
- return None
- elif "mega.nz" in url:
- if "#!" in url:
- file_id = url.split("#!")[1].split("!")[0]
- elif "file/" in url:
- file_id = url.split("file/")[1].split("/")[0]
- else:
- return None
- if file_id:
- print("Mega.nz is unsupported due to the deprecation of mega.py")
- elif "/tree/main" in url:
- response = requests.get(url)
- soup = BeautifulSoup(response.content, "html.parser")
- temp_url = ""
- for link in soup.find_all("a", href=True):
- if link["href"].endswith(".zip"):
- temp_url = link["href"]
- break
- if temp_url:
- url = temp_url
- url = url.replace("blob", "resolve")
- if "huggingface.co" not in url:
- url = "https://huggingface.co" + url
-
- wget.download(url)
- else:
- print("No .zip file found on the page.")
- elif "cdn.discordapp.com" in url:
- file = requests.get(url)
- os.chdir("./assets/zips")
- if file.status_code == 200:
- name = url.split("/")
- with open(
- os.path.join(name[-1]), "wb"
- ) as newfile:
- newfile.write(file.content)
- else:
- return None
- elif "pixeldrain.com" in url:
- try:
- file_id = url.split("pixeldrain.com/u/")[1]
- os.chdir(zips_path)
- print(file_id)
- response = requests.get(f"https://pixeldrain.com/api/file/{file_id}")
- if response.status_code == 200:
- file_name = (
- response.headers.get("Content-Disposition")
- .split("filename=")[-1]
- .strip('";')
- )
- os.makedirs(zips_path, exist_ok=True)
- with open(os.path.join(zips_path, file_name), "wb") as newfile:
- newfile.write(response.content)
- os.chdir(file_path)
- return "downloaded"
- else:
- os.chdir(file_path)
- return None
- except Exception as e:
- print(e)
- os.chdir(file_path)
- return None
- elif "mediafire.com" in url:
- download_link = get_mediafire_download_link(url)
- if download_link:
- os.chdir(zips_path)
- wget.download(download_link)
- else:
- return None
- elif "www.weights.gg" in url:
- # Please, weights.gg creator, don't fix this. c:
- url_parts = url.split("/")
- weights_gg_index = url_parts.index("www.weights.gg")
- if weights_gg_index != -1 and weights_gg_index < len(url_parts) - 1:
- model_part = "/".join(url_parts[weights_gg_index + 1:])
- if "models" in model_part:
- model_part = model_part.split("models/")[-1]
- print(model_part)
- if model_part:
- download_url = f"https://www.weights.gg/es/models/{model_part}"
- response = requests.get(download_url)
- if response.status_code == 200:
- soup = BeautifulSoup(response.text, "html.parser")
- button_link = soup.find("a", class_="bg-black text-white px-3 py-2 rounded-lg flex items-center gap-1")
- if button_link:
- download_link = button_link["href"]
- result = download_from_url(download_link)
- if result == "downloaded":
- return "downloaded"
- else:
- return None
- else:
- return None
- else:
- return None
- else:
- return None
- else:
- return None
- else:
- return None
- else:
- os.chdir(zips_path)
- wget.download(url)
-
- # Fix points in the zips
- for currentPath, _, zipFiles in os.walk(zips_path):
- for Files in zipFiles:
- filePart = Files.split(".")
- extensionFile = filePart[len(filePart) - 1]
- filePart.pop()
- nameFile = "_".join(filePart)
- realPath = os.path.join(currentPath, Files)
- os.rename(realPath, nameFile + "." + extensionFile)
-
- os.chdir(file_path)
- print(i18n("Full download"))
- return "downloaded"
- else:
- return None
-
-def extract_and_show_progress(zipfile_path, unzips_path):
- try:
- # Use shutil because the zipfile module was not working here
- shutil.unpack_archive(zipfile_path, unzips_path)
- return True
- except Exception as e:
- print(f"Error while extracting {zipfile_path}: {e}")
- return False
-
-
-# NOTE: the route must declare the 'url' variable so Flask passes it to the view.
-@app.route('/download/<path:url>', methods=['GET'])
-def load_downloaded_model(url):
- parent_path = find_folder_parent(now_dir, "assets")
- response = requests.get(url)
- response.raise_for_status()
- try:
- zips_path = os.path.join(parent_path, "assets", "zips")
- unzips_path = os.path.join(parent_path, "assets", "unzips")
- weights_path = os.path.join(parent_path, "logs", "weights")
- logs_dir = ""
-
- if os.path.exists(zips_path):
- shutil.rmtree(zips_path)
- if os.path.exists(unzips_path):
- shutil.rmtree(unzips_path)
-
- os.mkdir(zips_path)
- os.mkdir(unzips_path)
-
- download_file = download_from_url(url)
- if not download_file:
- print(i18n("The file could not be downloaded."))
- elif download_file == "downloaded":
- print(i18n("It has been downloaded successfully."))
- elif download_file == "too much use":
- raise Exception(
- i18n("Too many users have recently viewed or downloaded this file")
- )
- elif download_file == "private link":
- raise Exception(i18n("Cannot get file from this private link"))
-
- for filename in os.listdir(zips_path):
- if filename.endswith(".zip"):
- zipfile_path = os.path.join(zips_path, filename)
- print(i18n("Proceeding with the extraction..."))
- model_name = os.path.basename(zipfile_path)
- logs_dir = os.path.join(
- parent_path,
- "logs",
- os.path.normpath(str(model_name).replace(".zip", "")),
- )
-
- success = extract_and_show_progress(zipfile_path, unzips_path)
- if success:
- print(f"Extraction successful: {model_name}")
- else:
- print(f"Extraction failed: {model_name}")
- else:
- print(i18n("Unzip error."))
- return ""
-
- index_file = False
- model_file = False
-
- for path, subdirs, files in os.walk(unzips_path):
- for item in files:
- item_path = os.path.join(path, item)
- if not "G_" in item and not "D_" in item and item.endswith(".pth"):
- model_file = True
- model_name = item.replace(".pth", "")
- logs_dir = os.path.join(parent_path, "logs", model_name)
- if os.path.exists(logs_dir):
- shutil.rmtree(logs_dir)
- os.mkdir(logs_dir)
- if not os.path.exists(weights_path):
- os.mkdir(weights_path)
- if os.path.exists(os.path.join(weights_path, item)):
- os.remove(os.path.join(weights_path, item))
- if os.path.exists(item_path):
- shutil.move(item_path, weights_path)
-
- if not model_file and not os.path.exists(logs_dir):
- os.mkdir(logs_dir)
- for path, subdirs, files in os.walk(unzips_path):
- for item in files:
- item_path = os.path.join(path, item)
- if item.startswith("added_") and item.endswith(".index"):
- index_file = True
- if os.path.exists(item_path):
- if os.path.exists(os.path.join(logs_dir, item)):
- os.remove(os.path.join(logs_dir, item))
- shutil.move(item_path, logs_dir)
- if item.startswith("total_fea.npy") or item.startswith("events."):
- if os.path.exists(item_path):
- if os.path.exists(os.path.join(logs_dir, item)):
- os.remove(os.path.join(logs_dir, item))
- shutil.move(item_path, logs_dir)
-
- result = ""
- if model_file:
- if index_file:
- print(i18n("The model works for inference, and has the .index file."))
- else:
- print(
- i18n(
- "The model works for inference, but it doesn't have the .index file."
- )
- )
-
- if not index_file and not model_file:
- print(i18n("No relevant file was found to upload."))
-
- os.chdir(parent_path)
- return result
- except Exception as e:
- os.chdir(parent_path)
- if "too much use" in str(e):
- print(i18n("Too many users have recently viewed or downloaded this file"))
- elif "private link" in str(e):
- print(i18n("Cannot get file from this private link"))
- else:
- print(i18n("An error occurred downloading"))
- print(e)
- finally:
- os.chdir(parent_path)
-
-@app.route('/shutdown', methods=['POST'])
-def shutdown():
- print("This Flask server is shutting down; please close the window.")
- pid = os.getpid()
- os.kill(pid, signal.SIGTERM)
-
-if __name__ == '__main__':
- app.run(host='localhost', port=8000)
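-
-# Example client calls once the server is running (illustrative; the model URL
-# below is a placeholder):
-#   curl "http://localhost:8000/download/<model-zip-url>"
-#   curl -X POST "http://localhost:8000/shutdown"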
diff --git a/spaces/Lianjd/stock_dashboard/backtrader/brokers/oandabroker.py b/spaces/Lianjd/stock_dashboard/backtrader/brokers/oandabroker.py
deleted file mode 100644
index 6f050507a887bab754fcbbf7aca7f41271b72736..0000000000000000000000000000000000000000
--- a/spaces/Lianjd/stock_dashboard/backtrader/brokers/oandabroker.py
+++ /dev/null
@@ -1,357 +0,0 @@
-#!/usr/bin/env python
-# -*- coding: utf-8; py-indent-offset:4 -*-
-###############################################################################
-#
-# Copyright (C) 2015-2020 Daniel Rodriguez
-#
-# This program is free software: you can redistribute it and/or modify
-# it under the terms of the GNU General Public License as published by
-# the Free Software Foundation, either version 3 of the License, or
-# (at your option) any later version.
-#
-# This program is distributed in the hope that it will be useful,
-# but WITHOUT ANY WARRANTY; without even the implied warranty of
-# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
-# GNU General Public License for more details.
-#
-# You should have received a copy of the GNU General Public License
-# along with this program. If not, see <http://www.gnu.org/licenses/>.
-#
-###############################################################################
-from __future__ import (absolute_import, division, print_function,
- unicode_literals)
-
-import collections
-from copy import copy
-from datetime import date, datetime, timedelta
-import threading
-
-from backtrader.feed import DataBase
-from backtrader import (TimeFrame, num2date, date2num, BrokerBase,
- Order, BuyOrder, SellOrder, OrderBase, OrderData)
-from backtrader.utils.py3 import bytes, with_metaclass, MAXFLOAT
-from backtrader.metabase import MetaParams
-from backtrader.comminfo import CommInfoBase
-from backtrader.position import Position
-from backtrader.stores import oandastore
-from backtrader.utils import AutoDict, AutoOrderedDict
-from backtrader.comminfo import CommInfoBase
-
-
-class OandaCommInfo(CommInfoBase):
- def getvaluesize(self, size, price):
- # In real life the margin approaches the price
- return abs(size) * price
-
- def getoperationcost(self, size, price):
- '''Returns the needed amount of cash an operation would cost'''
- # Same reasoning as above
- return abs(size) * price
-
-
-class MetaOandaBroker(BrokerBase.__class__):
- def __init__(cls, name, bases, dct):
- '''Class has already been created ... register'''
- # Initialize the class
- super(MetaOandaBroker, cls).__init__(name, bases, dct)
- oandastore.OandaStore.BrokerCls = cls
-
-
-class OandaBroker(with_metaclass(MetaOandaBroker, BrokerBase)):
- '''Broker implementation for Oanda.
-
- This class maps the orders/positions from Oanda to the
- internal API of ``backtrader``.
-
- Params:
-
- - ``use_positions`` (default:``True``): When connecting to the broker
- provider use the existing positions to kickstart the broker.
-
- Set to ``False`` during instantiation to disregard any existing
- position
- '''
- params = (
- ('use_positions', True),
- ('commission', OandaCommInfo(mult=1.0, stocklike=False)),
- )
-
- def __init__(self, **kwargs):
- super(OandaBroker, self).__init__()
-
- self.o = oandastore.OandaStore(**kwargs)
-
- self.orders = collections.OrderedDict() # orders by order id
- self.notifs = collections.deque() # holds orders which are notified
-
- self.opending = collections.defaultdict(list) # pending transmission
- self.brackets = dict() # confirmed brackets
-
- self.startingcash = self.cash = 0.0
- self.startingvalue = self.value = 0.0
- self.positions = collections.defaultdict(Position)
-
- def start(self):
- super(OandaBroker, self).start()
- self.o.start(broker=self)
- self.startingcash = self.cash = cash = self.o.get_cash()
- self.startingvalue = self.value = self.o.get_value()
-
- if self.p.use_positions:
- for p in self.o.get_positions():
- print('position for instrument:', p['instrument'])
- is_sell = p['side'] == 'sell'
- size = p['units']
- if is_sell:
- size = -size
- price = p['avgPrice']
- self.positions[p['instrument']] = Position(size, price)
-
- def data_started(self, data):
- pos = self.getposition(data)
-
- if pos.size < 0:
- order = SellOrder(data=data,
- size=pos.size, price=pos.price,
- exectype=Order.Market,
- simulated=True)
-
- order.addcomminfo(self.getcommissioninfo(data))
- order.execute(0, pos.size, pos.price,
- 0, 0.0, 0.0,
- pos.size, 0.0, 0.0,
- 0.0, 0.0,
- pos.size, pos.price)
-
- order.completed()
- self.notify(order)
-
- elif pos.size > 0:
- order = BuyOrder(data=data,
- size=pos.size, price=pos.price,
- exectype=Order.Market,
- simulated=True)
-
- order.addcomminfo(self.getcommissioninfo(data))
- order.execute(0, pos.size, pos.price,
- 0, 0.0, 0.0,
- pos.size, 0.0, 0.0,
- 0.0, 0.0,
- pos.size, pos.price)
-
- order.completed()
- self.notify(order)
- if isinstance(sr, str):
- def stop(self):
- super(OandaBroker, self).stop()
- self.o.stop()
-
- def getcash(self):
- # This call cannot block if no answer is available from oanda
- self.cash = cash = self.o.get_cash()
- return cash
-
- def getvalue(self, datas=None):
- self.value = self.o.get_value()
- return self.value
-
- def getposition(self, data, clone=True):
- # return self.o.getposition(data._dataname, clone=clone)
- pos = self.positions[data._dataname]
- if clone:
- pos = pos.clone()
-
- return pos
-
- def orderstatus(self, order):
- o = self.orders[order.ref]
- return o.status
-
- def _submit(self, oref):
- order = self.orders[oref]
- order.submit(self)
- self.notify(order)
- for o in self._bracketnotif(order):
- o.submit(self)
- self.notify(o)
-
- def _reject(self, oref):
- order = self.orders[oref]
- order.reject(self)
- self.notify(order)
- self._bracketize(order, cancel=True)
-
- def _accept(self, oref):
- order = self.orders[oref]
- order.accept()
- self.notify(order)
- for o in self._bracketnotif(order):
- o.accept(self)
- self.notify(o)
-
- def _cancel(self, oref):
- order = self.orders[oref]
- order.cancel()
- self.notify(order)
- self._bracketize(order, cancel=True)
-
- def _expire(self, oref):
- order = self.orders[oref]
- order.expire()
- self.notify(order)
- self._bracketize(order, cancel=True)
-
- def _bracketnotif(self, order):
- pref = getattr(order.parent, 'ref', order.ref) # parent ref or self
- br = self.brackets.get(pref, None) # to avoid recursion
- return br[-2:] if br is not None else []
-
- def _bracketize(self, order, cancel=False):
- pref = getattr(order.parent, 'ref', order.ref) # parent ref or self
- br = self.brackets.pop(pref, None) # to avoid recursion
- if br is None:
- return
-
- if not cancel:
- if len(br) == 3: # all 3 orders in place, parent was filled
- br = br[1:] # discard index 0, parent
- for o in br:
- o.activate() # simulate activate for children
- self.brackets[pref] = br # not done - reinsert children
-
- elif len(br) == 2: # filling a children
- oidx = br.index(order) # find index to filled (0 or 1)
- self._cancel(br[1 - oidx].ref) # cancel remaining (1 - 0 -> 1)
- else:
- # Any cancellation cancel the others
- for o in br:
- if o.alive():
- self._cancel(o.ref)
-
- def _fill(self, oref, size, price, ttype, **kwargs):
- order = self.orders[oref]
-
- if not order.alive(): # can be a bracket
- pref = getattr(order.parent, 'ref', order.ref)
- if pref not in self.brackets:
- msg = ('Order fill received for {}, with price {} and size {} '
- 'but order is no longer alive and is not a bracket. '
- 'Unknown situation')
- msg = msg.format(order.ref, price, size)
- self.put_notification(msg, order, price, size)
- return
-
- # [main, stopside, takeside], neg idx to array are -3, -2, -1
- if ttype == 'STOP_LOSS_FILLED':
- order = self.brackets[pref][-2]
- elif ttype == 'TAKE_PROFIT_FILLED':
- order = self.brackets[pref][-1]
- else:
- msg = ('Order fill received for {}, with price {} and size {} '
- 'but order is no longer alive and is a bracket. '
- 'Unknown situation')
- msg = msg.format(order.ref, price, size)
- self.put_notification(msg, order, price, size)
- return
-
- data = order.data
- pos = self.getposition(data, clone=False)
- psize, pprice, opened, closed = pos.update(size, price)
-
- comminfo = self.getcommissioninfo(data)
-
- closedvalue = closedcomm = 0.0
- openedvalue = openedcomm = 0.0
- margin = pnl = 0.0
-
- order.execute(data.datetime[0], size, price,
- closed, closedvalue, closedcomm,
- opened, openedvalue, openedcomm,
- margin, pnl,
- psize, pprice)
-
- if order.executed.remsize:
- order.partial()
- self.notify(order)
- else:
- order.completed()
- self.notify(order)
- self._bracketize(order)
-
- def _transmit(self, order):
- oref = order.ref
- pref = getattr(order.parent, 'ref', oref) # parent ref or self
-
- if order.transmit:
- if oref != pref: # children order
- # Put parent in orders dict, but add stopside and takeside
- # to order creation. Return the takeside order, to have 3s
- takeside = order # alias for clarity
- parent, stopside = self.opending.pop(pref)
- for o in parent, stopside, takeside:
- self.orders[o.ref] = o # write them down
-
- self.brackets[pref] = [parent, stopside, takeside]
- self.o.order_create(parent, stopside, takeside)
- return takeside # parent was already returned
-
- else: # Parent order, which is not being transmitted
- self.orders[order.ref] = order
- return self.o.order_create(order)
-
- # Not transmitting
- self.opending[pref].append(order)
- return order
-
- def buy(self, owner, data,
- size, price=None, plimit=None,
- exectype=None, valid=None, tradeid=0, oco=None,
- trailamount=None, trailpercent=None,
- parent=None, transmit=True,
- **kwargs):
-
- order = BuyOrder(owner=owner, data=data,
- size=size, price=price, pricelimit=plimit,
- exectype=exectype, valid=valid, tradeid=tradeid,
- trailamount=trailamount, trailpercent=trailpercent,
- parent=parent, transmit=transmit)
-
- order.addinfo(**kwargs)
- order.addcomminfo(self.getcommissioninfo(data))
- return self._transmit(order)
-
- def sell(self, owner, data,
- size, price=None, plimit=None,
- exectype=None, valid=None, tradeid=0, oco=None,
- trailamount=None, trailpercent=None,
- parent=None, transmit=True,
- **kwargs):
-
- order = SellOrder(owner=owner, data=data,
- size=size, price=price, pricelimit=plimit,
- exectype=exectype, valid=valid, tradeid=tradeid,
- trailamount=trailamount, trailpercent=trailpercent,
- parent=parent, transmit=transmit)
-
- order.addinfo(**kwargs)
- order.addcomminfo(self.getcommissioninfo(data))
- return self._transmit(order)
-
- def cancel(self, order):
- o = self.orders[order.ref]
- if order.status == Order.Cancelled: # already cancelled
- return
-
- return self.o.order_cancel(order)
-
- def notify(self, order):
- self.notifs.append(order.clone())
-
- def get_notification(self):
- if not self.notifs:
- return None
-
- return self.notifs.popleft()
-
- def next(self):
- self.notifs.append(None) # mark notification boundary
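-
-# Illustrative wiring sketch (assumes valid Oanda credentials; parameter names
-# follow backtrader's OandaStore, e.g. token/account/practice, and may differ by
-# version):
-#   import backtrader as bt
-#   store = bt.stores.OandaStore(token="...", account="...", practice=True)
-#   cerebro = bt.Cerebro()
-#   cerebro.setbroker(store.getbroker(use_positions=True))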
diff --git a/spaces/Lianjd/stock_dashboard/backtrader/indicators/momentum.py b/spaces/Lianjd/stock_dashboard/backtrader/indicators/momentum.py
deleted file mode 100644
index 8ed440af1ea27a5b2cfbc9402129845bc86afb14..0000000000000000000000000000000000000000
--- a/spaces/Lianjd/stock_dashboard/backtrader/indicators/momentum.py
+++ /dev/null
@@ -1,126 +0,0 @@
-#!/usr/bin/env python
-# -*- coding: utf-8; py-indent-offset:4 -*-
-###############################################################################
-#
-# Copyright (C) 2015-2020 Daniel Rodriguez
-#
-# This program is free software: you can redistribute it and/or modify
-# it under the terms of the GNU General Public License as published by
-# the Free Software Foundation, either version 3 of the License, or
-# (at your option) any later version.
-#
-# This program is distributed in the hope that it will be useful,
-# but WITHOUT ANY WARRANTY; without even the implied warranty of
-# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
-# GNU General Public License for more details.
-#
-# You should have received a copy of the GNU General Public License
-# along with this program. If not, see <http://www.gnu.org/licenses/>.
-#
-###############################################################################
-from __future__ import (absolute_import, division, print_function,
- unicode_literals)
-
-from . import Indicator
-
-
-class Momentum(Indicator):
- '''
- Measures the change in price by calculating the difference between the
- current price and the price from a given period ago
-
-
- Formula:
- - momentum = data - data_period
-
- See:
- - http://en.wikipedia.org/wiki/Momentum_(technical_analysis)
- '''
- lines = ('momentum',)
- params = (('period', 12),)
- plotinfo = dict(plothlines=[0.0])
-
- def __init__(self):
- self.l.momentum = self.data - self.data(-self.p.period)
- super(Momentum, self).__init__()
-
-
-class MomentumOscillator(Indicator):
- '''
- Measures the ratio of change in prices over a period
-
- Formula:
- - mosc = 100 * (data / data_period)
-
- See:
- - http://ta.mql4.com/indicators/oscillators/momentum
- '''
- alias = ('MomentumOsc',)
-
- # Named output lines
- lines = ('momosc',)
-
- # Accepted parameters (and defaults) -
- params = (('period', 12),
- ('band', 100.0))
-
- def _plotlabel(self):
- plabels = [self.p.period]
- return plabels
-
- def _plotinit(self):
- self.plotinfo.plothlines = [self.p.band]
-
- def __init__(self):
- self.l.momosc = 100.0 * (self.data / self.data(-self.p.period))
- super(MomentumOscillator, self).__init__()
-
-
-class RateOfChange(Indicator):
- '''
- Measures the ratio of change in prices over a period
-
- Formula:
- - roc = (data - data_period) / data_period
-
- See:
- - http://en.wikipedia.org/wiki/Momentum_(technical_analysis)
- '''
- alias = ('ROC',)
-
- # Named output lines
- lines = ('roc',)
-
- # Accepted parameters (and defaults) -
- params = (('period', 12),)
-
- def __init__(self):
- dperiod = self.data(-self.p.period)
- self.l.roc = (self.data - dperiod) / dperiod
- super(RateOfChange, self).__init__()
-
-
-class RateOfChange100(Indicator):
- '''
- Measures the ratio of change in prices over a period with base 100
-
- This is for example how ROC is defined in stockcharts
-
- Formula:
- - roc = 100 * (data - data_period) / data_period
-
- See:
- - http://stockcharts.com/school/doku.php?id=chart_school:technical_indicators:rate_of_change_roc_and_momentum
-
- '''
- alias = ('ROC100',)
-
- # Named output lines
- lines = ('roc100',)
-
- # Accepted parameters (and defaults)
- params = (('period', 12),)
-
- def __init__(self):
- self.l.roc100 = 100.0 * ROC(self.data, period=self.p.period)
- super(RateOfChange100, self).__init__()
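-
-# Illustrative use inside a strategy (sketch; assumes a standard backtrader setup
-# with data feeds already added to cerebro):
-#   import backtrader as bt
-#   class MomStrategy(bt.Strategy):
-#       def __init__(self):
-#           self.mom = bt.indicators.Momentum(self.data, period=12)
-#       def next(self):
-#           if not self.position and self.mom[0] > 0:
-#               self.buy()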
diff --git a/spaces/Manjushri/MusicGen/audiocraft/modules/conv.py b/spaces/Manjushri/MusicGen/audiocraft/modules/conv.py
deleted file mode 100644
index 972938ab84712eb06e1b10cea25444eee51d6637..0000000000000000000000000000000000000000
--- a/spaces/Manjushri/MusicGen/audiocraft/modules/conv.py
+++ /dev/null
@@ -1,245 +0,0 @@
-# Copyright (c) Meta Platforms, Inc. and affiliates.
-# All rights reserved.
-#
-# This source code is licensed under the license found in the
-# LICENSE file in the root directory of this source tree.
-
-import math
-import typing as tp
-import warnings
-
-import torch
-from torch import nn
-from torch.nn import functional as F
-from torch.nn.utils import spectral_norm, weight_norm
-
-
-CONV_NORMALIZATIONS = frozenset(['none', 'weight_norm', 'spectral_norm',
- 'time_group_norm'])
-
-
-def apply_parametrization_norm(module: nn.Module, norm: str = 'none'):
- assert norm in CONV_NORMALIZATIONS
- if norm == 'weight_norm':
- return weight_norm(module)
- elif norm == 'spectral_norm':
- return spectral_norm(module)
- else:
- # We already check was in CONV_NORMALIZATION, so any other choice
- # doesn't need reparametrization.
- return module
-
-
-def get_norm_module(module: nn.Module, causal: bool = False, norm: str = 'none', **norm_kwargs):
- """Return the proper normalization module. If causal is True, this will ensure the returned
- module is causal, or return an error if the normalization doesn't support causal evaluation.
- """
- assert norm in CONV_NORMALIZATIONS
- if norm == 'time_group_norm':
- if causal:
- raise ValueError("GroupNorm doesn't support causal evaluation.")
- assert isinstance(module, nn.modules.conv._ConvNd)
- return nn.GroupNorm(1, module.out_channels, **norm_kwargs)
- else:
- return nn.Identity()
-
-
-def get_extra_padding_for_conv1d(x: torch.Tensor, kernel_size: int, stride: int,
- padding_total: int = 0) -> int:
- """See `pad_for_conv1d`.
- """
- length = x.shape[-1]
- n_frames = (length - kernel_size + padding_total) / stride + 1
- ideal_length = (math.ceil(n_frames) - 1) * stride + (kernel_size - padding_total)
- return ideal_length - length
-
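-# Worked example of the formula above (illustrative): with length=5, kernel_size=4,
-# stride=2 and padding_total=4, n_frames = (5 - 4 + 4) / 2 + 1 = 3.5, so
-# ideal_length = (4 - 1) * 2 + (4 - 4) = 6 and one extra padding sample is needed
-# for the last window to be full.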
-
-def pad_for_conv1d(x: torch.Tensor, kernel_size: int, stride: int, padding_total: int = 0):
- """Pad for a convolution to make sure that the last window is full.
- Extra padding is added at the end. This is required to ensure that we can rebuild
- an output of the same length, as otherwise, even with padding, some time steps
- might get removed.
- For instance, with total padding = 4, kernel size = 4, stride = 2:
- 0 0 1 2 3 4 5 0 0 # (0s are padding)
- 1 2 3 # (output frames of a convolution, last 0 is never used)
- 0 0 1 2 3 4 5 0 # (output of tr. conv., but pos. 5 is going to get removed as padding)
- 1 2 3 4 # once the padding is removed, we are missing one time step!
- """
- extra_padding = get_extra_padding_for_conv1d(x, kernel_size, stride, padding_total)
- return F.pad(x, (0, extra_padding))
-
-
-def pad1d(x: torch.Tensor, paddings: tp.Tuple[int, int], mode: str = 'constant', value: float = 0.):
- """Tiny wrapper around F.pad, just to allow for reflect padding on small input.
- If this is the case, we insert extra 0 padding to the right before the reflection happen.
- """
- length = x.shape[-1]
- padding_left, padding_right = paddings
- assert padding_left >= 0 and padding_right >= 0, (padding_left, padding_right)
- if mode == 'reflect':
- max_pad = max(padding_left, padding_right)
- extra_pad = 0
- if length <= max_pad:
- extra_pad = max_pad - length + 1
- x = F.pad(x, (0, extra_pad))
- padded = F.pad(x, paddings, mode, value)
- end = padded.shape[-1] - extra_pad
- return padded[..., :end]
- else:
- return F.pad(x, paddings, mode, value)
-
-
-def unpad1d(x: torch.Tensor, paddings: tp.Tuple[int, int]):
- """Remove padding from x, handling properly zero padding. Only for 1d!
- """
- padding_left, padding_right = paddings
- assert padding_left >= 0 and padding_right >= 0, (padding_left, padding_right)
- assert (padding_left + padding_right) <= x.shape[-1]
- end = x.shape[-1] - padding_right
- return x[..., padding_left: end]
-
-
-class NormConv1d(nn.Module):
- """Wrapper around Conv1d and normalization applied to this conv
- to provide a uniform interface across normalization approaches.
- """
- def __init__(self, *args, causal: bool = False, norm: str = 'none',
- norm_kwargs: tp.Dict[str, tp.Any] = {}, **kwargs):
- super().__init__()
- self.conv = apply_parametrization_norm(nn.Conv1d(*args, **kwargs), norm)
- self.norm = get_norm_module(self.conv, causal, norm, **norm_kwargs)
- self.norm_type = norm
-
- def forward(self, x):
- x = self.conv(x)
- x = self.norm(x)
- return x
-
-
-class NormConv2d(nn.Module):
- """Wrapper around Conv2d and normalization applied to this conv
- to provide a uniform interface across normalization approaches.
- """
- def __init__(self, *args, norm: str = 'none', norm_kwargs: tp.Dict[str, tp.Any] = {}, **kwargs):
- super().__init__()
- self.conv = apply_parametrization_norm(nn.Conv2d(*args, **kwargs), norm)
- self.norm = get_norm_module(self.conv, causal=False, norm=norm, **norm_kwargs)
- self.norm_type = norm
-
- def forward(self, x):
- x = self.conv(x)
- x = self.norm(x)
- return x
-
-
-class NormConvTranspose1d(nn.Module):
- """Wrapper around ConvTranspose1d and normalization applied to this conv
- to provide a uniform interface across normalization approaches.
- """
- def __init__(self, *args, causal: bool = False, norm: str = 'none',
- norm_kwargs: tp.Dict[str, tp.Any] = {}, **kwargs):
- super().__init__()
- self.convtr = apply_parametrization_norm(nn.ConvTranspose1d(*args, **kwargs), norm)
- self.norm = get_norm_module(self.convtr, causal, norm, **norm_kwargs)
- self.norm_type = norm
-
- def forward(self, x):
- x = self.convtr(x)
- x = self.norm(x)
- return x
-
-
-class NormConvTranspose2d(nn.Module):
- """Wrapper around ConvTranspose2d and normalization applied to this conv
- to provide a uniform interface across normalization approaches.
- """
- def __init__(self, *args, norm: str = 'none', norm_kwargs: tp.Dict[str, tp.Any] = {}, **kwargs):
- super().__init__()
- self.convtr = apply_parametrization_norm(nn.ConvTranspose2d(*args, **kwargs), norm)
- self.norm = get_norm_module(self.convtr, causal=False, norm=norm, **norm_kwargs)
-
- def forward(self, x):
- x = self.convtr(x)
- x = self.norm(x)
- return x
-
-
-class StreamableConv1d(nn.Module):
- """Conv1d with some builtin handling of asymmetric or causal padding
- and normalization.
- """
- def __init__(self, in_channels: int, out_channels: int,
- kernel_size: int, stride: int = 1, dilation: int = 1,
- groups: int = 1, bias: bool = True, causal: bool = False,
- norm: str = 'none', norm_kwargs: tp.Dict[str, tp.Any] = {},
- pad_mode: str = 'reflect'):
- super().__init__()
- # warn user on unusual setup between dilation and stride
- if stride > 1 and dilation > 1:
- warnings.warn('StreamableConv1d has been initialized with stride > 1 and dilation > 1'
- f' (kernel_size={kernel_size}, stride={stride}, dilation={dilation}).')
- self.conv = NormConv1d(in_channels, out_channels, kernel_size, stride,
- dilation=dilation, groups=groups, bias=bias, causal=causal,
- norm=norm, norm_kwargs=norm_kwargs)
- self.causal = causal
- self.pad_mode = pad_mode
-
- def forward(self, x):
- B, C, T = x.shape
- kernel_size = self.conv.conv.kernel_size[0]
- stride = self.conv.conv.stride[0]
- dilation = self.conv.conv.dilation[0]
- kernel_size = (kernel_size - 1) * dilation + 1 # effective kernel size with dilations
- padding_total = kernel_size - stride
- extra_padding = get_extra_padding_for_conv1d(x, kernel_size, stride, padding_total)
- if self.causal:
- # Left padding for causal
- x = pad1d(x, (padding_total, extra_padding), mode=self.pad_mode)
- else:
- # Asymmetric padding required for odd strides
- padding_right = padding_total // 2
- padding_left = padding_total - padding_right
- x = pad1d(x, (padding_left, padding_right + extra_padding), mode=self.pad_mode)
- return self.conv(x)
-
-
-class StreamableConvTranspose1d(nn.Module):
- """ConvTranspose1d with some builtin handling of asymmetric or causal padding
- and normalization.
- """
- def __init__(self, in_channels: int, out_channels: int,
- kernel_size: int, stride: int = 1, causal: bool = False,
- norm: str = 'none', trim_right_ratio: float = 1.,
- norm_kwargs: tp.Dict[str, tp.Any] = {}):
- super().__init__()
- self.convtr = NormConvTranspose1d(in_channels, out_channels, kernel_size, stride,
- causal=causal, norm=norm, norm_kwargs=norm_kwargs)
- self.causal = causal
- self.trim_right_ratio = trim_right_ratio
- assert self.causal or self.trim_right_ratio == 1., \
- "`trim_right_ratio` != 1.0 only makes sense for causal convolutions"
- assert self.trim_right_ratio >= 0. and self.trim_right_ratio <= 1.
-
- def forward(self, x):
- kernel_size = self.convtr.convtr.kernel_size[0]
- stride = self.convtr.convtr.stride[0]
- padding_total = kernel_size - stride
-
- y = self.convtr(x)
-
- # We will only trim fixed padding. Extra padding from `pad_for_conv1d` would be
- # removed at the very end, when keeping only the right length for the output,
- # as removing it here would require also passing the length at the matching layer
- # in the encoder.
- if self.causal:
- # Trim the padding on the right according to the specified ratio
- # if trim_right_ratio = 1.0, trim everything from right
- padding_right = math.ceil(padding_total * self.trim_right_ratio)
- padding_left = padding_total - padding_right
- y = unpad1d(y, (padding_left, padding_right))
- else:
- # Asymmetric padding required for odd strides
- padding_right = padding_total // 2
- padding_left = padding_total - padding_right
- y = unpad1d(y, (padding_left, padding_right))
- return y
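-
-# Minimal shape sketch for the streamable convolutions above (illustrative;
-# channel sizes are arbitrary):
-#   conv = StreamableConv1d(1, 16, kernel_size=4, stride=2, causal=True)
-#   y = conv(torch.randn(2, 1, 100))   # -> [2, 16, 50]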
diff --git a/spaces/Marshalls/testmtd/analysis/aistplusplus_api/demos/run_vis.py b/spaces/Marshalls/testmtd/analysis/aistplusplus_api/demos/run_vis.py
deleted file mode 100644
index 55b824fb520d1d5923890d67239b1d4c5ae99119..0000000000000000000000000000000000000000
--- a/spaces/Marshalls/testmtd/analysis/aistplusplus_api/demos/run_vis.py
+++ /dev/null
@@ -1,97 +0,0 @@
-# coding=utf-8
-# Copyright 2020 The Google AI Perception Team Authors.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-"""Test code for running visualizer."""
-import os
-
-from absl import app
-from absl import flags
-from aist_plusplus.loader import AISTDataset
-from aist_plusplus.visualizer import plot_on_video
-from smplx import SMPL
-import torch
-
-FLAGS = flags.FLAGS
-flags.DEFINE_string(
- 'anno_dir',
- '/usr/local/google/home/ruilongli/data/public/aist_plusplus_final/',
- 'input local dictionary for AIST++ annotations.')
-flags.DEFINE_string(
- 'video_dir',
- '/usr/local/google/home/ruilongli/data/AIST_plusplus/refined_10M_all_video/',
- 'input local dictionary for AIST Dance Videos.')
-flags.DEFINE_string(
- 'smpl_dir',
- '/usr/local/google/home/ruilongli/data/SMPL/',
- 'input local dictionary that stores SMPL data.')
-flags.DEFINE_string(
- 'video_name',
- 'gWA_sFM_c01_d27_mWA2_ch21',
- 'input video name to be visualized.')
-flags.DEFINE_string(
- 'save_dir',
- '/usr/local/google/home/ruilongli/data/public/aist_plusplus_final/tmp/',
- 'output local dictionary that stores AIST++ visualization.')
-flags.DEFINE_enum(
- 'mode', '2D', ['2D', '3D', 'SMPL'],
- 'visualize 3D or 2D keypoints, or SMPL joints on image plane.')
-
-
-def main(_):
- # Parsing data info.
- aist_dataset = AISTDataset(FLAGS.anno_dir)
- video_path = os.path.join(FLAGS.video_dir, f'{FLAGS.video_name}.mp4')
- seq_name, view = AISTDataset.get_seq_name(FLAGS.video_name)
- view_idx = AISTDataset.VIEWS.index(view)
-
- # Parsing keypoints.
- if FLAGS.mode == '2D': # raw keypoints detection results.
- keypoints2d, _, _ = AISTDataset.load_keypoint2d(
- aist_dataset.keypoint2d_dir, seq_name)
- keypoints2d = keypoints2d[view_idx, :, :, 0:2]
-
- elif FLAGS.mode == '3D': # 3D keypoints with temporal optimization.
- keypoints3d = AISTDataset.load_keypoint3d(
- aist_dataset.keypoint3d_dir, seq_name, use_optim=True)
- nframes, njoints, _ = keypoints3d.shape
- env_name = aist_dataset.mapping_seq2env[seq_name]
- cgroup = AISTDataset.load_camera_group(aist_dataset.camera_dir, env_name)
- keypoints2d = cgroup.project(keypoints3d)
- keypoints2d = keypoints2d.reshape(9, nframes, njoints, 2)[view_idx]
-
- elif FLAGS.mode == 'SMPL': # SMPL joints
- smpl_poses, smpl_scaling, smpl_trans = AISTDataset.load_motion(
- aist_dataset.motion_dir, seq_name)
- smpl = SMPL(model_path=FLAGS.smpl_dir, gender='MALE', batch_size=1)
- keypoints3d = smpl.forward(
- global_orient=torch.from_numpy(smpl_poses[:, 0:1]).float(),
- body_pose=torch.from_numpy(smpl_poses[:, 1:]).float(),
- transl=torch.from_numpy(smpl_trans).float(),
- scaling=torch.from_numpy(smpl_scaling.reshape(1, 1)).float(),
- ).joints.detach().numpy()
-
- nframes, njoints, _ = keypoints3d.shape
- env_name = aist_dataset.mapping_seq2env[seq_name]
- cgroup = AISTDataset.load_camera_group(aist_dataset.camera_dir, env_name)
- keypoints2d = cgroup.project(keypoints3d)
- keypoints2d = keypoints2d.reshape(9, nframes, njoints, 2)[view_idx]
-
- # Visualize.
- os.makedirs(FLAGS.save_dir, exist_ok=True)
- save_path = os.path.join(FLAGS.save_dir, f'{FLAGS.video_name}.mp4')
- plot_on_video(keypoints2d, video_path, save_path, fps=60)
-
-
-if __name__ == '__main__':
- app.run(main)
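-
-# Example invocation (illustrative; the directory paths are placeholders):
-#   python run_vis.py \
-#     --anno_dir /path/to/aist_plusplus_final \
-#     --video_dir /path/to/aist_videos \
-#     --smpl_dir /path/to/smpl \
-#     --video_name gWA_sFM_c01_d27_mWA2_ch21 \
-#     --save_dir /path/to/output \
-#     --mode SMPL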
diff --git a/spaces/Martlgap/LiveFaceID/README.md b/spaces/Martlgap/LiveFaceID/README.md
deleted file mode 100644
index 6de87596d67944a0ad909ae7bf93951c8a640213..0000000000000000000000000000000000000000
--- a/spaces/Martlgap/LiveFaceID/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: LiveFaceID
-emoji: 🐢
-colorFrom: gray
-colorTo: pink
-sdk: streamlit
-sdk_version: 1.19.0
-app_file: app.py
-pinned: false
-license: mit
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/MisterZee/PIFu-Clothed-Human-Digitization/PIFu/lib/model/BasePIFuNet.py b/spaces/MisterZee/PIFu-Clothed-Human-Digitization/PIFu/lib/model/BasePIFuNet.py
deleted file mode 100644
index cb8423ea7120b09d0627bab40a90bf8ce7d13e14..0000000000000000000000000000000000000000
--- a/spaces/MisterZee/PIFu-Clothed-Human-Digitization/PIFu/lib/model/BasePIFuNet.py
+++ /dev/null
@@ -1,76 +0,0 @@
-import torch
-import torch.nn as nn
-import torch.nn.functional as F
-
-from ..geometry import index, orthogonal, perspective
-
-class BasePIFuNet(nn.Module):
- def __init__(self,
- projection_mode='orthogonal',
- error_term=nn.MSELoss(),
- ):
- """
- :param projection_mode:
- Either orthogonal or perspective.
- It will call the corresponding function for projection.
- :param error_term:
- nn Loss between the predicted [B, Res, N] and the label [B, Res, N]
- """
- super(BasePIFuNet, self).__init__()
- self.name = 'base'
-
- self.error_term = error_term
-
- self.index = index
- self.projection = orthogonal if projection_mode == 'orthogonal' else perspective
-
- self.preds = None
- self.labels = None
-
- def forward(self, points, images, calibs, transforms=None):
- '''
- :param points: [B, 3, N] world space coordinates of points
- :param images: [B, C, H, W] input images
- :param calibs: [B, 3, 4] calibration matrices for each image
- :param transforms: Optional [B, 2, 3] image space coordinate transforms
- :return: [B, Res, N] predictions for each point
- '''
- self.filter(images)
- self.query(points, calibs, transforms)
- return self.get_preds()
-
- def filter(self, images):
- '''
- Filter the input images
- store all intermediate features.
- :param images: [B, C, H, W] input images
- '''
- pass
-
- def query(self, points, calibs, transforms=None, labels=None):
- '''
- Given 3D points, query the network predictions for each point.
- Image features should be pre-computed before this call.
- store all intermediate features.
- query() function may behave differently during training/testing.
- :param points: [B, 3, N] world space coordinates of points
- :param calibs: [B, 3, 4] calibration matrices for each image
- :param transforms: Optional [B, 2, 3] image space coordinate transforms
- :param labels: Optional [B, Res, N] gt labeling
- :return: [B, Res, N] predictions for each point
- '''
- pass
-
- def get_preds(self):
- '''
- Get the predictions from the last query
- :return: [B, Res, N] network prediction for the last query
- '''
- return self.preds
-
- def get_error(self):
- '''
- Get the network loss from the last query
- :return: loss term
- '''
- return self.error_term(self.preds, self.labels)
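-
-# Subclassing sketch (illustrative; `encoder` and `mlp` are hypothetical modules,
-# and real subclasses in this repo implement filter()/query() along these lines):
-#   class MyPIFuNet(BasePIFuNet):
-#       def filter(self, images):
-#           self.im_feat = self.encoder(images)
-#       def query(self, points, calibs, transforms=None, labels=None):
-#           xy = self.projection(points, calibs, transforms)[:, :2, :]
-#           self.labels = labels
-#           self.preds = self.mlp(self.index(self.im_feat, xy))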
diff --git a/spaces/Mountchicken/MAERec-Gradio/mmocr/structures/textrecog_data_sample.py b/spaces/Mountchicken/MAERec-Gradio/mmocr/structures/textrecog_data_sample.py
deleted file mode 100644
index f40572b0282dd82d1bc67734dcfe52c0073fe5d4..0000000000000000000000000000000000000000
--- a/spaces/Mountchicken/MAERec-Gradio/mmocr/structures/textrecog_data_sample.py
+++ /dev/null
@@ -1,82 +0,0 @@
-# Copyright (c) OpenMMLab. All rights reserved.
-from mmengine.structures import BaseDataElement, LabelData
-
-
-class TextRecogDataSample(BaseDataElement):
- """A data structure interface of MMOCR for text recognition. They are used
- as interfaces between different components.
-
- The attributes in ``TextRecogDataSample`` are divided into two parts:
-
- - ``gt_text``(LabelData): Ground truth text.
- - ``pred_text``(LabelData): predictions text.
-
- Examples:
- >>> import torch
- >>> import numpy as np
- >>> from mmengine.structures import LabelData
- >>> from mmocr.data import TextRecogDataSample
- >>> # gt_text
- >>> data_sample = TextRecogDataSample()
- >>> img_meta = dict(img_shape=(800, 1196, 3),
- ... pad_shape=(800, 1216, 3))
- >>> gt_text = LabelData(metainfo=img_meta)
- >>> gt_text.item = 'mmocr'
- >>> data_sample.gt_text = gt_text
- >>> assert 'img_shape' in data_sample.gt_text.metainfo_keys()
- >>> print(data_sample)
-
- <TextRecogDataSample(
- ...
- ) at 0x7f21fb1b9880>
- >>> # pred_text
- >>> pred_text = LabelData(metainfo=img_meta)
- >>> pred_text.item = 'mmocr'
- >>> data_sample = TextRecogDataSample(pred_text=pred_text)
- >>> assert 'pred_text' in data_sample
- >>> data_sample = TextRecogDataSample()
- >>> gt_text_data = dict(item='mmocr')
- >>> gt_text = LabelData(**gt_text_data)
- >>> data_sample.gt_text = gt_text
- >>> assert 'gt_text' in data_sample
- >>> assert 'item' in data_sample.gt_text
- """
-
- @property
- def gt_text(self) -> LabelData:
- """LabelData: ground truth text.
- """
- return self._gt_text
-
- @gt_text.setter
- def gt_text(self, value: LabelData) -> None:
- """gt_text setter."""
- self.set_field(value, '_gt_text', dtype=LabelData)
-
- @gt_text.deleter
- def gt_text(self) -> None:
- """gt_text deleter."""
- del self._gt_text
-
- @property
- def pred_text(self) -> LabelData:
- """LabelData: prediction text.
- """
- return self._pred_text
-
- @pred_text.setter
- def pred_text(self, value: LabelData) -> None:
- """pred_text setter."""
- self.set_field(value, '_pred_text', dtype=LabelData)
-
- @pred_text.deleter
- def pred_text(self) -> None:
- """pred_text deleter."""
- del self._pred_text
diff --git a/spaces/Nee001/bing0/src/lib/hooks/use-copy-to-clipboard.tsx b/spaces/Nee001/bing0/src/lib/hooks/use-copy-to-clipboard.tsx
deleted file mode 100644
index 62f7156dca246c46b213151af003a3a177977ccf..0000000000000000000000000000000000000000
--- a/spaces/Nee001/bing0/src/lib/hooks/use-copy-to-clipboard.tsx
+++ /dev/null
@@ -1,33 +0,0 @@
-'use client'
-
-import * as React from 'react'
-
-export interface useCopyToClipboardProps {
- timeout?: number
-}
-
-export function useCopyToClipboard({
- timeout = 2000
-}: useCopyToClipboardProps) {
- const [isCopied, setIsCopied] = React.useState(false)
-
- const copyToClipboard = (value: string) => {
- if (typeof window === 'undefined' || !navigator.clipboard?.writeText) {
- return
- }
-
- if (!value) {
- return
- }
-
- navigator.clipboard.writeText(value).then(() => {
- setIsCopied(true)
-
- setTimeout(() => {
- setIsCopied(false)
- }, timeout)
- })
- }
-
- return { isCopied, copyToClipboard }
-}
diff --git a/spaces/NoorAzam/model4/app.py b/spaces/NoorAzam/model4/app.py
deleted file mode 100644
index 93416c9454473157cb6838da3f4771fb04d89c81..0000000000000000000000000000000000000000
--- a/spaces/NoorAzam/model4/app.py
+++ /dev/null
@@ -1,8 +0,0 @@
-import gradio as gr
-
-def greet(name):
- return "Hello " + name + "!"
-
-demo = gr.Interface(fn=greet, inputs="text", outputs="text")
-
-demo.launch()
\ No newline at end of file
diff --git a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/examples/textless_nlp/gslm/unit2speech/glow.py b/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/examples/textless_nlp/gslm/unit2speech/glow.py
deleted file mode 100644
index 7a7696403d505afdf0f1606f8220801b0f46152f..0000000000000000000000000000000000000000
--- a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/examples/textless_nlp/gslm/unit2speech/glow.py
+++ /dev/null
@@ -1,311 +0,0 @@
-# *****************************************************************************
-# Copyright (c) 2018, NVIDIA CORPORATION. All rights reserved.
-#
-# Redistribution and use in source and binary forms, with or without
-# modification, are permitted provided that the following conditions are met:
-# * Redistributions of source code must retain the above copyright
-# notice, this list of conditions and the following disclaimer.
-# * Redistributions in binary form must reproduce the above copyright
-# notice, this list of conditions and the following disclaimer in the
-# documentation and/or other materials provided with the distribution.
-# * Neither the name of the NVIDIA CORPORATION nor the
-# names of its contributors may be used to endorse or promote products
-# derived from this software without specific prior written permission.
-#
-# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND
-# ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
-# WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
-# DISCLAIMED. IN NO EVENT SHALL NVIDIA CORPORATION BE LIABLE FOR ANY
-# DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
-# (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
-# LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
-# ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
-# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
-# SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
-#
-# *****************************************************************************
-import copy
-import torch
-from torch.autograd import Variable
-import torch.nn.functional as F
-
-
-@torch.jit.script
-def fused_add_tanh_sigmoid_multiply(input_a, input_b, n_channels):
- n_channels_int = n_channels[0]
- in_act = input_a+input_b
- t_act = torch.tanh(in_act[:, :n_channels_int, :])
- s_act = torch.sigmoid(in_act[:, n_channels_int:, :])
- acts = t_act * s_act
- return acts
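A quick, illustrative check of the fused gated activation above: the two inputs are summed, the first `n_channels` channels go through `tanh`, the remaining channels through `sigmoid`, and the two halves are multiplied. `n_channels` is passed as a 1-element `IntTensor`, exactly as `WN.forward` does further down.

```python
import torch

a = torch.randn(2, 8, 16)   # e.g. an in_layer output: [batch, 2 * n_channels, time]
b = torch.randn(2, 8, 16)   # conditioning slice of the same shape
acts = fused_add_tanh_sigmoid_multiply(a, b, torch.IntTensor([4]))
print(acts.shape)           # torch.Size([2, 4, 16]); only the gated half is kept
```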
-
-
-class WaveGlowLoss(torch.nn.Module):
- def __init__(self, sigma=1.0):
- super(WaveGlowLoss, self).__init__()
- self.sigma = sigma
-
- def forward(self, model_output):
- z, log_s_list, log_det_W_list = model_output
- for i, log_s in enumerate(log_s_list):
- if i == 0:
- log_s_total = torch.sum(log_s)
- log_det_W_total = log_det_W_list[i]
- else:
- log_s_total = log_s_total + torch.sum(log_s)
- log_det_W_total += log_det_W_list[i]
-
- loss = torch.sum(z*z)/(2*self.sigma*self.sigma) - log_s_total - log_det_W_total
- return loss/(z.size(0)*z.size(1)*z.size(2))
-
-
-class Invertible1x1Conv(torch.nn.Module):
- """
- The layer outputs both the convolution, and the log determinant
- of its weight matrix. If reverse=True it does convolution with
- inverse
- """
- def __init__(self, c):
- super(Invertible1x1Conv, self).__init__()
- self.conv = torch.nn.Conv1d(c, c, kernel_size=1, stride=1, padding=0,
- bias=False)
-
- # Sample a random orthonormal matrix to initialize weights
- W = torch.qr(torch.FloatTensor(c, c).normal_())[0]
-
- # Ensure determinant is 1.0 not -1.0
- if torch.det(W) < 0:
- W[:,0] = -1*W[:,0]
- W = W.view(c, c, 1)
- self.conv.weight.data = W
-
- def forward(self, z, reverse=False):
- # shape
- batch_size, group_size, n_of_groups = z.size()
-
- W = self.conv.weight.squeeze()
-
- if reverse:
- if not hasattr(self, 'W_inverse'):
- # Reverse computation
- W_inverse = W.float().inverse()
- W_inverse = Variable(W_inverse[..., None])
- if z.type() == 'torch.cuda.HalfTensor':
- W_inverse = W_inverse.half()
- self.W_inverse = W_inverse
- z = F.conv1d(z, self.W_inverse, bias=None, stride=1, padding=0)
- return z
- else:
- # Forward computation
- log_det_W = batch_size * n_of_groups * torch.logdet(W)
- z = self.conv(z)
- return z, log_det_W
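A small CPU sanity check of the layer's invertibility: running the forward pass and then the `reverse=True` pass recovers the original input up to numerical error, since the 1x1 convolution weight is initialized as an orthonormal matrix.

```python
import torch

conv = Invertible1x1Conv(8)
z = torch.randn(3, 8, 100)          # [batch, channels, groups]
y, log_det_W = conv(z)              # forward: convolution + log-determinant
z_rec = conv(y, reverse=True)       # reverse: convolution with the inverse weight
print(torch.allclose(z, z_rec, atol=1e-4), log_det_W.item())
```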
-
-
-class WN(torch.nn.Module):
- """
- This is the WaveNet like layer for the affine coupling. The primary difference
- from WaveNet is the convolutions need not be causal. There is also no dilation
- size reset. The dilation only doubles on each layer
- """
- def __init__(self, n_in_channels, n_mel_channels, n_layers, n_channels,
- kernel_size):
- super(WN, self).__init__()
- assert(kernel_size % 2 == 1)
- assert(n_channels % 2 == 0)
- self.n_layers = n_layers
- self.n_channels = n_channels
- self.in_layers = torch.nn.ModuleList()
- self.res_skip_layers = torch.nn.ModuleList()
-
- start = torch.nn.Conv1d(n_in_channels, n_channels, 1)
- start = torch.nn.utils.weight_norm(start, name='weight')
- self.start = start
-
- # Initializing last layer to 0 makes the affine coupling layers
- # do nothing at first. This helps with training stability
- end = torch.nn.Conv1d(n_channels, 2*n_in_channels, 1)
- end.weight.data.zero_()
- end.bias.data.zero_()
- self.end = end
-
- cond_layer = torch.nn.Conv1d(n_mel_channels, 2*n_channels*n_layers, 1)
- self.cond_layer = torch.nn.utils.weight_norm(cond_layer, name='weight')
-
- for i in range(n_layers):
- dilation = 2 ** i
- padding = int((kernel_size*dilation - dilation)/2)
- in_layer = torch.nn.Conv1d(n_channels, 2*n_channels, kernel_size,
- dilation=dilation, padding=padding)
- in_layer = torch.nn.utils.weight_norm(in_layer, name='weight')
- self.in_layers.append(in_layer)
-
-
-            # the last layer only needs the skip half of the output
- if i < n_layers - 1:
- res_skip_channels = 2*n_channels
- else:
- res_skip_channels = n_channels
- res_skip_layer = torch.nn.Conv1d(n_channels, res_skip_channels, 1)
- res_skip_layer = torch.nn.utils.weight_norm(res_skip_layer, name='weight')
- self.res_skip_layers.append(res_skip_layer)
-
- def forward(self, forward_input):
- audio, spect = forward_input
- audio = self.start(audio)
- output = torch.zeros_like(audio)
- n_channels_tensor = torch.IntTensor([self.n_channels])
-
- spect = self.cond_layer(spect)
-
- for i in range(self.n_layers):
- spect_offset = i*2*self.n_channels
- acts = fused_add_tanh_sigmoid_multiply(
- self.in_layers[i](audio),
- spect[:,spect_offset:spect_offset+2*self.n_channels,:],
- n_channels_tensor)
-
- res_skip_acts = self.res_skip_layers[i](acts)
- if i < self.n_layers - 1:
- audio = audio + res_skip_acts[:,:self.n_channels,:]
- output = output + res_skip_acts[:,self.n_channels:,:]
- else:
- output = output + res_skip_acts
-
- return self.end(output)
-
-
-class WaveGlow(torch.nn.Module):
- def __init__(self, n_mel_channels, n_flows, n_group, n_early_every,
- n_early_size, WN_config):
- super(WaveGlow, self).__init__()
-
- self.upsample = torch.nn.ConvTranspose1d(n_mel_channels,
- n_mel_channels,
- 1024, stride=256)
- assert(n_group % 2 == 0)
- self.n_flows = n_flows
- self.n_group = n_group
- self.n_early_every = n_early_every
- self.n_early_size = n_early_size
- self.WN = torch.nn.ModuleList()
- self.convinv = torch.nn.ModuleList()
-
- n_half = int(n_group/2)
-
- # Set up layers with the right sizes based on how many dimensions
- # have been output already
- n_remaining_channels = n_group
- for k in range(n_flows):
- if k % self.n_early_every == 0 and k > 0:
- n_half = n_half - int(self.n_early_size/2)
- n_remaining_channels = n_remaining_channels - self.n_early_size
- self.convinv.append(Invertible1x1Conv(n_remaining_channels))
- self.WN.append(WN(n_half, n_mel_channels*n_group, **WN_config))
- self.n_remaining_channels = n_remaining_channels # Useful during inference
-
- def forward(self, forward_input):
- """
- forward_input[0] = mel_spectrogram: batch x n_mel_channels x frames
- forward_input[1] = audio: batch x time
- """
- spect, audio = forward_input
-
- # Upsample spectrogram to size of audio
- spect = self.upsample(spect)
- assert(spect.size(2) >= audio.size(1))
- if spect.size(2) > audio.size(1):
- spect = spect[:, :, :audio.size(1)]
-
- spect = spect.unfold(2, self.n_group, self.n_group).permute(0, 2, 1, 3)
- spect = spect.contiguous().view(spect.size(0), spect.size(1), -1).permute(0, 2, 1)
-
- audio = audio.unfold(1, self.n_group, self.n_group).permute(0, 2, 1)
- output_audio = []
- log_s_list = []
- log_det_W_list = []
-
- for k in range(self.n_flows):
- if k % self.n_early_every == 0 and k > 0:
- output_audio.append(audio[:,:self.n_early_size,:])
- audio = audio[:,self.n_early_size:,:]
-
- audio, log_det_W = self.convinv[k](audio)
- log_det_W_list.append(log_det_W)
-
- n_half = int(audio.size(1)/2)
- audio_0 = audio[:,:n_half,:]
- audio_1 = audio[:,n_half:,:]
-
- output = self.WN[k]((audio_0, spect))
- log_s = output[:, n_half:, :]
- b = output[:, :n_half, :]
- audio_1 = torch.exp(log_s)*audio_1 + b
- log_s_list.append(log_s)
-
- audio = torch.cat([audio_0, audio_1],1)
-
- output_audio.append(audio)
- return torch.cat(output_audio,1), log_s_list, log_det_W_list
-
- def infer(self, spect, sigma=1.0):
- spect = self.upsample(spect)
- # trim conv artifacts. maybe pad spec to kernel multiple
- time_cutoff = self.upsample.kernel_size[0] - self.upsample.stride[0]
- spect = spect[:, :, :-time_cutoff]
-
- spect = spect.unfold(2, self.n_group, self.n_group).permute(0, 2, 1, 3)
- spect = spect.contiguous().view(spect.size(0), spect.size(1), -1).permute(0, 2, 1)
-
- if spect.type() == 'torch.cuda.HalfTensor':
- audio = torch.cuda.HalfTensor(spect.size(0),
- self.n_remaining_channels,
- spect.size(2)).normal_()
- else:
- audio = torch.cuda.FloatTensor(spect.size(0),
- self.n_remaining_channels,
- spect.size(2)).normal_()
-
- audio = torch.autograd.Variable(sigma*audio)
-
- for k in reversed(range(self.n_flows)):
- n_half = int(audio.size(1)/2)
- audio_0 = audio[:,:n_half,:]
- audio_1 = audio[:,n_half:,:]
-
- output = self.WN[k]((audio_0, spect))
-
- s = output[:, n_half:, :]
- b = output[:, :n_half, :]
- audio_1 = (audio_1 - b)/torch.exp(s)
- audio = torch.cat([audio_0, audio_1],1)
-
- audio = self.convinv[k](audio, reverse=True)
-
- if k % self.n_early_every == 0 and k > 0:
- if spect.type() == 'torch.cuda.HalfTensor':
- z = torch.cuda.HalfTensor(spect.size(0), self.n_early_size, spect.size(2)).normal_()
- else:
- z = torch.cuda.FloatTensor(spect.size(0), self.n_early_size, spect.size(2)).normal_()
- audio = torch.cat((sigma*z, audio),1)
-
- audio = audio.permute(0,2,1).contiguous().view(audio.size(0), -1).data
- return audio
-
- @staticmethod
- def remove_weightnorm(model):
- waveglow = model
- for WN in waveglow.WN:
- WN.start = torch.nn.utils.remove_weight_norm(WN.start)
- WN.in_layers = remove(WN.in_layers)
- WN.cond_layer = torch.nn.utils.remove_weight_norm(WN.cond_layer)
- WN.res_skip_layers = remove(WN.res_skip_layers)
- return waveglow
-
-
-def remove(conv_list):
- new_conv_list = torch.nn.ModuleList()
- for old_conv in conv_list:
- old_conv = torch.nn.utils.remove_weight_norm(old_conv)
- new_conv_list.append(old_conv)
- return new_conv_list
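A tiny illustration of the `remove` helper above: after `remove_weight_norm`, the `weight_g`/`weight_v` parametrization is folded back into a plain `weight`, which is what `WaveGlow.remove_weightnorm` relies on for faster inference.

```python
import torch

convs = torch.nn.ModuleList(
    [torch.nn.utils.weight_norm(torch.nn.Conv1d(4, 4, 3)) for _ in range(2)]
)
print(hasattr(convs[0], 'weight_g'))   # True: weight norm is active
convs = remove(convs)
print(hasattr(convs[0], 'weight_g'))   # False: folded back into .weight
```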
diff --git a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/fairseq/benchmark/dummy_model.py b/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/fairseq/benchmark/dummy_model.py
deleted file mode 100644
index ff26e4fe655d8e8d7f9942c4bd3df7cd267405fb..0000000000000000000000000000000000000000
--- a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/fairseq/benchmark/dummy_model.py
+++ /dev/null
@@ -1,96 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-import torch.nn as nn
-import torch.nn.functional as F
-from fairseq.data import Dictionary
-from fairseq.models import (
- FairseqDecoder,
- FairseqLanguageModel,
- register_model,
- register_model_architecture,
-)
-
-
-@register_model("dummy_model")
-class DummyModel(FairseqLanguageModel):
- def __init__(self, args, encoder):
- super().__init__(encoder)
- self.args = args
-
- @staticmethod
- def add_args(parser):
- parser.add_argument("--num-layers", type=int, default=24)
- parser.add_argument("--embed-dim", type=int, default=1024)
-
- @classmethod
- def build_model(cls, args, task):
- encoder = DummyEncoder(
- num_embed=len(task.target_dictionary),
- embed_dim=args.embed_dim,
- num_layers=args.num_layers,
- )
- return cls(args, encoder)
-
- def forward(self, src_tokens, masked_tokens=None, **kwargs):
- return self.decoder(src_tokens, masked_tokens=masked_tokens)
-
-
-class DummyEncoder(FairseqDecoder):
- def __init__(self, num_embed=50000, embed_dim=1024, num_layers=24):
- super().__init__(Dictionary())
- self.embed = nn.Embedding(
- num_embeddings=num_embed, embedding_dim=embed_dim, padding_idx=0
- )
- self.layers_a = nn.ModuleList(
- [
- nn.Sequential(
- nn.LayerNorm(embed_dim),
- nn.Linear(embed_dim, 3 * embed_dim), # q, k, v input projection
- nn.Linear(3 * embed_dim, embed_dim), # skip self-attention
- nn.Linear(embed_dim, embed_dim), # output projection
- nn.Dropout(),
- )
- for i in range(num_layers)
- ]
- )
- self.layers_b = nn.ModuleList(
- [
- nn.Sequential(
- nn.LayerNorm(embed_dim),
- nn.Linear(embed_dim, 4 * embed_dim), # FFN
- nn.ReLU(),
- nn.Linear(4 * embed_dim, embed_dim), # FFN
- nn.Dropout(0.1),
- )
- for i in range(num_layers)
- ]
- )
- self.out_proj = nn.Linear(embed_dim, num_embed)
-
- def forward(self, tokens, masked_tokens=None):
- x = self.embed(tokens)
- for layer_a, layer_b in zip(self.layers_a, self.layers_b):
- x = x + layer_a(x)
- x = x + layer_b(x)
- x = self.out_proj(x)
- if masked_tokens is not None:
- x = x[masked_tokens]
- return (x,)
-
- def max_positions(self):
- return 1024
-
- def get_normalized_probs(self, net_output, log_probs, sample=None):
- logits = net_output[0].float()
- if log_probs:
- return F.log_softmax(logits, dim=-1)
- else:
- return F.softmax(logits, dim=-1)
-
-
-@register_model_architecture("dummy_model", "dummy_model")
-def base_architecture(args):
- pass
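A minimal smoke test of the dummy encoder, assuming `fairseq` is installed so that `Dictionary` and `FairseqDecoder` import correctly; only the tensor shapes are being illustrated here.

```python
import torch

enc = DummyEncoder(num_embed=100, embed_dim=16, num_layers=2)
tokens = torch.randint(1, 100, (2, 5))          # [batch, seq_len]
net_output = enc(tokens)                         # tuple with logits of shape [2, 5, 100]
probs = enc.get_normalized_probs(net_output, log_probs=False)
print(net_output[0].shape, probs.sum(-1))        # probabilities sum to 1 per position
```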
diff --git a/spaces/OFA-Sys/OFA-vqa/fairseq/examples/linformer/linformer_src/models/__init__.py b/spaces/OFA-Sys/OFA-vqa/fairseq/examples/linformer/linformer_src/models/__init__.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/OFA-Sys/OFA-vqa/fairseq/examples/speech_text_joint_to_text/__init__.py b/spaces/OFA-Sys/OFA-vqa/fairseq/examples/speech_text_joint_to_text/__init__.py
deleted file mode 100644
index 239d2e69f9a235095dee1ea7b3a94164a77273f5..0000000000000000000000000000000000000000
--- a/spaces/OFA-Sys/OFA-vqa/fairseq/examples/speech_text_joint_to_text/__init__.py
+++ /dev/null
@@ -1,6 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-from . import tasks, criterions, models # noqa
diff --git a/spaces/OFA-Sys/OFA-vqa/fairseq/examples/textless_nlp/gslm/ulm/sample.py b/spaces/OFA-Sys/OFA-vqa/fairseq/examples/textless_nlp/gslm/ulm/sample.py
deleted file mode 100644
index 77302a6894cacf07588cf34fb1e695dc519d7df5..0000000000000000000000000000000000000000
--- a/spaces/OFA-Sys/OFA-vqa/fairseq/examples/textless_nlp/gslm/ulm/sample.py
+++ /dev/null
@@ -1,174 +0,0 @@
-#!/usr/bin/env python3 -u
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-"""
-Sample from a trained LM; hacked fairseq-interactive
-"""
-from collections import namedtuple
-import os
-import ast
-import numpy as np
-
-from fairseq import checkpoint_utils, options, tasks, utils
-
-import tqdm
-
-Batch = namedtuple('Batch', 'ids src_tokens src_lengths')
-Translation = namedtuple('Translation', 'src_str hypos pos_scores alignments')
-
-
-def make_batches(lines, args, task, max_positions):
- tokens = [
- task.source_dictionary.encode_line(
- src_str, add_if_not_exist=False
- ).long()
- for src_str in lines
- ]
- lengths = [t.numel() for t in tokens]
- itr = task.get_batch_iterator(
- dataset=task.build_dataset_for_inference(tokens, lengths),
- max_tokens=args.dataset.max_tokens,
- max_sentences=args.dataset.batch_size,
- max_positions=max_positions,
- ignore_invalid_inputs=args.dataset.skip_invalid_size_inputs_valid_test
- ).next_epoch_itr(shuffle=False)
- for batch in itr:
- yield Batch(
- ids=batch['id'],
- src_tokens=batch['net_input']['src_tokens'], src_lengths=batch['net_input']['src_lengths'],
- )
-
-
-def main(args):
- arg_prompts = args.prompts
- arg_output = args.output
- arg_debug = args.debug
- arg_sample_size = args.samples_per_prompt
-
- try:
- from fairseq.dataclass.utils import convert_namespace_to_omegaconf
- args = convert_namespace_to_omegaconf(args)
-    except Exception:
-        pass  # fall back to the plain argparse namespace if conversion is unavailable
-
- # if args.max_tokens is None and args.max_sentences is None:
- if args.common.seed is not None:
- np.random.seed(args.common.seed)
- utils.set_torch_seed(args.common.seed)
-
- if args.generation.sampling:
- args.generation.nbest = args.generation.beam = arg_sample_size
-
- task = tasks.setup_task(args.task)
-
- overrides = ast.literal_eval(args.common_eval.model_overrides)
-
- models, _model_args = checkpoint_utils.load_model_ensemble(
- args.common_eval.path.split(os.pathsep),
- arg_overrides=overrides,
- task=task,
- suffix=getattr(args, "checkpoint_suffix", ""),
- )
-
- # Set dictionaries
- src_dict = task.source_dictionary
- tgt_dict = task.target_dictionary
-
- # Optimize ensemble for generation
- for model in models:
- model.prepare_for_inference_(args)
- model.cuda()
-
- # Load alignment dictionary for unknown word replacement
- # (None if no unknown word replacement, empty if no path to align dictionary)
- align_dict = utils.load_align_dict(args.generation.replace_unk)
-
- max_positions = utils.resolve_max_positions(
- task.max_positions(),
- *[model.max_positions() for model in models]
- )
-
- output_file = open(arg_output, 'w')
-
- with open(arg_prompts, 'r') as fin:
- lines = fin.readlines()
-
- split = [x.split('|', 1) for x in lines]
- seq_id = [x[0] for x in split]
- prompts = [x[1] for x in split]
-
- if args.generation.prefix_size >= 0:
- prompts = [' '.join(l.split()[:args.generation.prefix_size])
- for l in prompts]
-
- if arg_debug:
- prompts = prompts[:10]
-
- generator = task.build_generator(models, args.generation)
-
- start_id = 0
- pbar = tqdm.tqdm(total=len(prompts))
- for batch in make_batches(prompts, args, task, max_positions):
- src_tokens = batch.src_tokens
- src_lengths = batch.src_lengths
- src_tokens = src_tokens.cuda()
- src_lengths = src_lengths.cuda()
-
- sample = {
- 'net_input': {
- 'src_tokens': src_tokens,
- 'src_lengths': src_lengths,
- },
- }
-
- results = []
- translations = task.inference_step(generator, models, sample)
- for i, (id, hypos) in enumerate(zip(batch.ids.tolist(), translations)):
- src_tokens_i = utils.strip_pad(src_tokens[i], tgt_dict.pad())
- results.append((i + start_id, src_tokens_i, hypos))
-
- # sort output to match input order
- for id, src_tokens, hypos in sorted(results, key=lambda x: x[0]):
- if src_dict is not None:
- src_str = src_dict.string(
- src_tokens, args.common_eval.post_process)
-
- # Process top predictions
- for hypo_id, hypo in enumerate(hypos):
- _hypo_tokens, hypo_str, _alignment = utils.post_process_prediction(
- hypo_tokens=hypo['tokens'].int().cpu(),
- src_str=src_str,
- alignment=hypo['alignment'],
- align_dict=align_dict,
- tgt_dict=tgt_dict,
- remove_bpe=args.common_eval.post_process,
- )
-
- detok_hypo_str = hypo_str
- utterance = detok_hypo_str
- print(f'{seq_id[id]}__{hypo_id}|{utterance}', file=output_file)
- pbar.update(1)
- start_id += len(results)
-
-    output_file.close()
-
-
-def cli_main():
- parser = options.get_interactive_generation_parser()
- parser.add_argument('--prompts', type=str, default=None, required=True)
- parser.add_argument('--output', type=str, default=None, required=True)
- parser.add_argument('--debug', action='store_true')
- parser.add_argument('--samples-per-prompt', type=int, default=1)
-
- args = options.parse_args_and_arch(parser)
-
- np.random.seed(args.seed)
- utils.set_torch_seed(args.seed)
-
- main(args)
-
-
-if __name__ == '__main__':
- cli_main()
diff --git a/spaces/OFA-Sys/OFA-vqa/fairseq/fairseq/models/nat/nonautoregressive_ensembles.py b/spaces/OFA-Sys/OFA-vqa/fairseq/fairseq/models/nat/nonautoregressive_ensembles.py
deleted file mode 100644
index 705a04fb49658c91114a26efd411b4653c65b943..0000000000000000000000000000000000000000
--- a/spaces/OFA-Sys/OFA-vqa/fairseq/fairseq/models/nat/nonautoregressive_ensembles.py
+++ /dev/null
@@ -1,253 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-import math
-
-import torch
-import torch.nn.functional as F
-from fairseq.models.nat import (
- _apply_del_words,
- _apply_ins_masks,
- _apply_ins_words,
- _fill,
- _skip,
- _skip_encoder_out,
-)
-
-
-class _EnsembleModelEncoder(object):
- def __init__(self, models):
- self.models = models
-
- def reorder_encoder_out(self, encoder_outs, new_order):
- encoder_outs = [
- model.encoder.reorder_encoder_out(encoder_out, new_order)
- for model, encoder_out in zip(self.models, encoder_outs)
- ]
- return encoder_outs
-
-
-class BasicEnsembleModel(torch.nn.Module):
- """A wrapper around an ensemble of models."""
-
- def __init__(self, models):
- super().__init__()
- self.models = torch.nn.ModuleList(models)
- self.bos = self.models[0].decoder.dictionary.bos()
- self.eos = self.models[0].decoder.dictionary.eos()
- self.pad = self.models[0].decoder.dictionary.pad()
- self.unk = self.models[0].decoder.dictionary.unk()
- self.encoder = _EnsembleModelEncoder(self.models)
-
- def has_encoder(self):
- return hasattr(self.models[0], "encoder")
-
- def max_decoder_positions(self):
- return min(m.max_decoder_positions() for m in self.models)
-
- @torch.no_grad()
- def forward_encoder(self, encoder_input):
- if not self.has_encoder():
- return None
- return [model.forward_encoder(encoder_input) for model in self.models]
-
- @torch.no_grad()
- def forward_decoder(self, *inputs):
- raise NotImplementedError
-
- def initialize_output_tokens(self, *inputs):
- raise NotImplementedError
-
-
-class EnsembleLevT(BasicEnsembleModel):
- """A wrapper around an ensemble of models."""
-
- def __init__(self, models):
- super().__init__(models)
-
- @torch.no_grad()
- def forward_decoder(
- self, decoder_out, encoder_outs, eos_penalty=0.0, max_ratio=None, **kwargs
- ):
- # LevT ensembling
- # A pipeline of three steps: deletion, placeholder, and word insertion.
- # We need to average scores in each step in a pipeline way because of dependence.
- # deletion
- output_tokens = decoder_out.output_tokens
- output_scores = decoder_out.output_scores
- attn = decoder_out.attn
-
- bsz = output_tokens.size(0)
- if max_ratio is None:
- max_lens = output_tokens.new().fill_(255)
- else:
- if not encoder_outs[0]["encoder_padding_mask"]:
- src_lens = (
- encoder_outs[0]["encoder_out"][0].new(bsz)
- .fill_(encoder_outs[0]["encoder_out"][0].size(1))
- )
- else:
- src_lens = (~encoder_outs[0]["encoder_padding_mask"][0]).sum(1)
- max_lens = (src_lens * max_ratio).clamp(min=10).long()
-
- # delete words
-        # do not delete tokens if the sequence is just <s> </s>
- can_del_word = output_tokens.ne(self.pad).sum(1) > 2
-        if can_del_word.sum() != 0:  # only run the deletion step if at least one sentence can delete
- output_tokens, output_scores, attn = self.forward_word_del(
- encoder_outs,
- output_tokens,
- output_scores,
- attn,
- can_del_word,
- )
-
- # insert placeholders
- can_ins_mask = output_tokens.ne(self.pad).sum(1) < max_lens
- if can_ins_mask.sum() != 0:
- output_tokens, output_scores = self.forward_mask_ins(
- encoder_outs,
- output_tokens,
- output_scores,
- can_ins_mask,
- eos_penalty,
- max_lens,
- )
-
- # insert words
- can_ins_word = output_tokens.eq(self.unk).sum(1) > 0
- if can_ins_word.sum() != 0:
- output_tokens, output_scores, attn = self.forward_word_ins(
- encoder_outs,
- output_tokens,
- output_scores,
- attn,
- can_ins_word,
- )
-
- # delete some unnecessary paddings
- cut_off = output_tokens.ne(self.pad).sum(1).max()
- output_tokens = output_tokens[:, :cut_off]
- output_scores = output_scores[:, :cut_off]
- attn = None if attn is None else attn[:, :cut_off, :]
- return decoder_out._replace(
- output_tokens=output_tokens,
- output_scores=output_scores,
- attn=attn,
- history=None,
- )
-
- def forward_word_del(
- self, encoder_outs, output_tokens, output_scores, attn, can_del_word
- ):
- word_del_score_avg = []
- word_del_attn_avg = []
- for model, encoder_out in zip(self.models, encoder_outs):
- word_del_out, word_del_attn = model.decoder.forward_word_del(
- _skip(output_tokens, can_del_word),
- _skip_encoder_out(model.encoder, encoder_out, can_del_word),
- )
- word_del_score = F.log_softmax(word_del_out, 2)
- word_del_score_avg.append(word_del_score)
- word_del_attn_avg.append(word_del_attn)
- word_del_score_avg = torch.logsumexp(
- torch.stack(word_del_score_avg, dim=0), dim=0
- ) - math.log(len(self.models))
- word_del_pred = word_del_score_avg.max(-1)[1].bool()
- if word_del_attn_avg[0] is not None:
- word_del_attn_avg = torch.stack(word_del_attn_avg, dim=0) / len(self.models)
- else:
- word_del_attn_avg = None
-
- _tokens, _scores, _attn = _apply_del_words(
- output_tokens[can_del_word],
- output_scores[can_del_word],
- word_del_attn_avg,
- word_del_pred,
- self.pad,
- self.bos,
- self.eos,
- )
- output_tokens = _fill(output_tokens, can_del_word, _tokens, self.pad)
- output_scores = _fill(output_scores, can_del_word, _scores, 0)
- attn = _fill(attn, can_del_word, _attn, 0.0)
- return output_tokens, output_scores, attn
-
- def forward_mask_ins(
- self,
- encoder_outs,
- output_tokens,
- output_scores,
- can_ins_mask,
- eos_penalty,
- max_lens,
- ):
- mask_ins_score_avg = []
- for model, encoder_out in zip(self.models, encoder_outs):
- mask_ins_out, _ = model.decoder.forward_mask_ins(
- _skip(output_tokens, can_ins_mask),
- _skip_encoder_out(model.encoder, encoder_out, can_ins_mask),
- )
- mask_ins_score = F.log_softmax(mask_ins_out, 2)
- if eos_penalty > 0.0:
- mask_ins_score[:, :, 0] -= eos_penalty
- mask_ins_score_avg.append(mask_ins_score)
- mask_ins_score_avg = torch.logsumexp(
- torch.stack(mask_ins_score_avg, dim=0), dim=0
- ) - math.log(len(self.models))
- mask_ins_pred = mask_ins_score_avg.max(-1)[1]
- mask_ins_pred = torch.min(
- mask_ins_pred, max_lens[can_ins_mask, None].expand_as(mask_ins_pred)
- )
- _tokens, _scores = _apply_ins_masks(
- output_tokens[can_ins_mask],
- output_scores[can_ins_mask],
- mask_ins_pred,
- self.pad,
- self.unk,
- self.eos,
- )
- output_tokens = _fill(output_tokens, can_ins_mask, _tokens, self.pad)
- output_scores = _fill(output_scores, can_ins_mask, _scores, 0)
- return output_tokens, output_scores
-
- def forward_word_ins(
- self, encoder_outs, output_tokens, output_scores, attn, can_ins_word
- ):
- word_ins_score_avg = []
- word_ins_attn_avg = []
- for model, encoder_out in zip(self.models, encoder_outs):
- word_ins_out, word_ins_attn = model.decoder.forward_word_ins(
- _skip(output_tokens, can_ins_word),
- _skip_encoder_out(model.encoder, encoder_out, can_ins_word),
- )
- word_ins_score = F.log_softmax(word_ins_out, 2)
- word_ins_score_avg.append(word_ins_score)
- word_ins_attn_avg.append(word_ins_attn)
- word_ins_score_avg = torch.logsumexp(
- torch.stack(word_ins_score_avg, dim=0), dim=0
- ) - math.log(len(self.models))
- if word_ins_attn_avg[0] is not None:
- word_ins_attn_avg = torch.stack(word_ins_attn_avg, dim=0) / len(self.models)
- else:
- word_ins_attn_avg = None
- word_ins_score_max, word_ins_pred = word_ins_score_avg.max(-1)
-
- _tokens, _scores = _apply_ins_words(
- output_tokens[can_ins_word],
- output_scores[can_ins_word],
- word_ins_pred,
- word_ins_score_max,
- self.unk,
- )
-
- output_tokens = _fill(output_tokens, can_ins_word, _tokens, self.pad)
- output_scores = _fill(output_scores, can_ins_word, _scores, 0)
-        attn = _fill(attn, can_ins_word, word_ins_attn_avg, 0.0)  # use the ensemble-averaged attention, not the last model's
- return output_tokens, output_scores, attn
-
- def initialize_output_tokens(self, encoder_outs, src_tokens):
- # LevT doesn't do length prediction.
- return self.models[0].initialize_output_tokens(encoder_outs[0], src_tokens)
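The ensembling above averages the models' probabilities in log space: `logsumexp` over the stacked log-probabilities minus `log(K)` is exactly `log(mean(prob))`. A self-contained sketch with three hypothetical models:

```python
import math
import torch
import torch.nn.functional as F

log_probs = [F.log_softmax(torch.randn(2, 5, 10), dim=-1) for _ in range(3)]
avg = torch.logsumexp(torch.stack(log_probs, dim=0), dim=0) - math.log(len(log_probs))
# the result is still a valid log-distribution over the last dimension
print(torch.allclose(avg.exp().sum(-1), torch.ones(2, 5), atol=1e-5))
```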
diff --git a/spaces/OIUGLK/bingo/tests/parse.ts b/spaces/OIUGLK/bingo/tests/parse.ts
deleted file mode 100644
index 92940fe6315f1d7cb2b267ba5e5a7e26460a1de3..0000000000000000000000000000000000000000
--- a/spaces/OIUGLK/bingo/tests/parse.ts
+++ /dev/null
@@ -1,13 +0,0 @@
-import { promises as fs } from 'fs'
-import { join } from 'path'
-import { parseHeadersFromCurl } from '@/lib/utils'
-
-(async () => {
- const content = await fs.readFile(join(__dirname, './fixtures/curl.txt'), 'utf-8')
- const headers = parseHeadersFromCurl(content)
- console.log(headers)
-
- const cmdContent = await fs.readFile(join(__dirname, './fixtures/cmd.txt'), 'utf-8')
- const cmdHeaders = parseHeadersFromCurl(cmdContent)
- console.log(cmdHeaders)
-})()
diff --git a/spaces/ORI-Muchim/NahidaTTS/models.py b/spaces/ORI-Muchim/NahidaTTS/models.py
deleted file mode 100644
index fe004e94bbe9074ec736f14325268f4515a53420..0000000000000000000000000000000000000000
--- a/spaces/ORI-Muchim/NahidaTTS/models.py
+++ /dev/null
@@ -1,540 +0,0 @@
-import math
-import torch
-from torch import nn
-from torch.nn import functional as F
-
-import commons
-import modules
-import attentions
-import monotonic_align
-
-from torch.nn import Conv1d, ConvTranspose1d, Conv2d
-from torch.nn.utils import weight_norm, remove_weight_norm, spectral_norm
-from commons import init_weights, get_padding
-
-
-class StochasticDurationPredictor(nn.Module):
- def __init__(self, in_channels, filter_channels, kernel_size, p_dropout, n_flows=4, gin_channels=0):
- super().__init__()
-        filter_channels = in_channels  # this override should be removed in a future version
- self.in_channels = in_channels
- self.filter_channels = filter_channels
- self.kernel_size = kernel_size
- self.p_dropout = p_dropout
- self.n_flows = n_flows
- self.gin_channels = gin_channels
-
- self.log_flow = modules.Log()
- self.flows = nn.ModuleList()
- self.flows.append(modules.ElementwiseAffine(2))
- for i in range(n_flows):
- self.flows.append(modules.ConvFlow(2, filter_channels, kernel_size, n_layers=3))
- self.flows.append(modules.Flip())
-
- self.post_pre = nn.Conv1d(1, filter_channels, 1)
- self.post_proj = nn.Conv1d(filter_channels, filter_channels, 1)
- self.post_convs = modules.DDSConv(filter_channels, kernel_size, n_layers=3, p_dropout=p_dropout)
- self.post_flows = nn.ModuleList()
- self.post_flows.append(modules.ElementwiseAffine(2))
- for i in range(4):
- self.post_flows.append(modules.ConvFlow(2, filter_channels, kernel_size, n_layers=3))
- self.post_flows.append(modules.Flip())
-
- self.pre = nn.Conv1d(in_channels, filter_channels, 1)
- self.proj = nn.Conv1d(filter_channels, filter_channels, 1)
- self.convs = modules.DDSConv(filter_channels, kernel_size, n_layers=3, p_dropout=p_dropout)
- if gin_channels != 0:
- self.cond = nn.Conv1d(gin_channels, filter_channels, 1)
-
- def forward(self, x, x_mask, w=None, g=None, reverse=False, noise_scale=1.0):
- x = torch.detach(x)
- x = self.pre(x)
- if g is not None:
- g = torch.detach(g)
- x = x + self.cond(g)
- x = self.convs(x, x_mask)
- x = self.proj(x) * x_mask
-
- if not reverse:
- flows = self.flows
- assert w is not None
-
- logdet_tot_q = 0
- h_w = self.post_pre(w)
- h_w = self.post_convs(h_w, x_mask)
- h_w = self.post_proj(h_w) * x_mask
- e_q = torch.randn(w.size(0), 2, w.size(2)).to(device=x.device, dtype=x.dtype) * x_mask
- z_q = e_q
- for flow in self.post_flows:
- z_q, logdet_q = flow(z_q, x_mask, g=(x + h_w))
- logdet_tot_q += logdet_q
- z_u, z1 = torch.split(z_q, [1, 1], 1)
- u = torch.sigmoid(z_u) * x_mask
- z0 = (w - u) * x_mask
- logdet_tot_q += torch.sum((F.logsigmoid(z_u) + F.logsigmoid(-z_u)) * x_mask, [1, 2])
- logq = torch.sum(-0.5 * (math.log(2 * math.pi) + (e_q ** 2)) * x_mask, [1, 2]) - logdet_tot_q
-
- logdet_tot = 0
- z0, logdet = self.log_flow(z0, x_mask)
- logdet_tot += logdet
- z = torch.cat([z0, z1], 1)
- for flow in flows:
- z, logdet = flow(z, x_mask, g=x, reverse=reverse)
- logdet_tot = logdet_tot + logdet
- nll = torch.sum(0.5 * (math.log(2 * math.pi) + (z ** 2)) * x_mask, [1, 2]) - logdet_tot
- return nll + logq # [b]
- else:
- flows = list(reversed(self.flows))
- flows = flows[:-2] + [flows[-1]] # remove a useless vflow
- z = torch.randn(x.size(0), 2, x.size(2)).to(device=x.device, dtype=x.dtype) * noise_scale
- for flow in flows:
- z = flow(z, x_mask, g=x, reverse=reverse)
- z0, z1 = torch.split(z, [1, 1], 1)
- logw = z0
- return logw
-
-
-class DurationPredictor(nn.Module):
- def __init__(self, in_channels, filter_channels, kernel_size, p_dropout, gin_channels=0):
- super().__init__()
-
- self.in_channels = in_channels
- self.filter_channels = filter_channels
- self.kernel_size = kernel_size
- self.p_dropout = p_dropout
- self.gin_channels = gin_channels
-
- self.drop = nn.Dropout(p_dropout)
- self.conv_1 = nn.Conv1d(in_channels, filter_channels, kernel_size, padding=kernel_size // 2)
- self.norm_1 = modules.LayerNorm(filter_channels)
- self.conv_2 = nn.Conv1d(filter_channels, filter_channels, kernel_size, padding=kernel_size // 2)
- self.norm_2 = modules.LayerNorm(filter_channels)
- self.proj = nn.Conv1d(filter_channels, 1, 1)
-
- if gin_channels != 0:
- self.cond = nn.Conv1d(gin_channels, in_channels, 1)
-
- def forward(self, x, x_mask, g=None):
- x = torch.detach(x)
- if g is not None:
- g = torch.detach(g)
- x = x + self.cond(g)
- x = self.conv_1(x * x_mask)
- x = torch.relu(x)
- x = self.norm_1(x)
- x = self.drop(x)
- x = self.conv_2(x * x_mask)
- x = torch.relu(x)
- x = self.norm_2(x)
- x = self.drop(x)
- x = self.proj(x * x_mask)
- return x * x_mask
-
-
-class TextEncoder(nn.Module):
- def __init__(self,
- n_vocab,
- out_channels,
- hidden_channels,
- filter_channels,
- n_heads,
- n_layers,
- kernel_size,
- p_dropout):
- super().__init__()
- self.n_vocab = n_vocab
- self.out_channels = out_channels
- self.hidden_channels = hidden_channels
- self.filter_channels = filter_channels
- self.n_heads = n_heads
- self.n_layers = n_layers
- self.kernel_size = kernel_size
- self.p_dropout = p_dropout
-
- if self.n_vocab != 0:
- self.emb = nn.Embedding(n_vocab, hidden_channels)
- nn.init.normal_(self.emb.weight, 0.0, hidden_channels ** -0.5)
-
- self.encoder = attentions.Encoder(
- hidden_channels,
- filter_channels,
- n_heads,
- n_layers,
- kernel_size,
- p_dropout)
- self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1)
-
- def forward(self, x, x_lengths):
- if self.n_vocab != 0:
- x = self.emb(x) * math.sqrt(self.hidden_channels) # [b, t, h]
- x = torch.transpose(x, 1, -1) # [b, h, t]
- x_mask = torch.unsqueeze(commons.sequence_mask(x_lengths, x.size(2)), 1).to(x.dtype)
-
- x = self.encoder(x * x_mask, x_mask)
- stats = self.proj(x) * x_mask
-
- m, logs = torch.split(stats, self.out_channels, dim=1)
- return x, m, logs, x_mask
-
-
-class ResidualCouplingBlock(nn.Module):
- def __init__(self,
- channels,
- hidden_channels,
- kernel_size,
- dilation_rate,
- n_layers,
- n_flows=4,
- gin_channels=0):
- super().__init__()
- self.channels = channels
- self.hidden_channels = hidden_channels
- self.kernel_size = kernel_size
- self.dilation_rate = dilation_rate
- self.n_layers = n_layers
- self.n_flows = n_flows
- self.gin_channels = gin_channels
-
- self.flows = nn.ModuleList()
- for i in range(n_flows):
- self.flows.append(
- modules.ResidualCouplingLayer(channels, hidden_channels, kernel_size, dilation_rate, n_layers,
- gin_channels=gin_channels, mean_only=True))
- self.flows.append(modules.Flip())
-
- def forward(self, x, x_mask, g=None, reverse=False):
- if not reverse:
- for flow in self.flows:
- x, _ = flow(x, x_mask, g=g, reverse=reverse)
- else:
- for flow in reversed(self.flows):
- x = flow(x, x_mask, g=g, reverse=reverse)
- return x
-
-
-class PosteriorEncoder(nn.Module):
- def __init__(self,
- in_channels,
- out_channels,
- hidden_channels,
- kernel_size,
- dilation_rate,
- n_layers,
- gin_channels=0):
- super().__init__()
- self.in_channels = in_channels
- self.out_channels = out_channels
- self.hidden_channels = hidden_channels
- self.kernel_size = kernel_size
- self.dilation_rate = dilation_rate
- self.n_layers = n_layers
- self.gin_channels = gin_channels
-
- self.pre = nn.Conv1d(in_channels, hidden_channels, 1)
- self.enc = modules.WN(hidden_channels, kernel_size, dilation_rate, n_layers, gin_channels=gin_channels)
- self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1)
-
- def forward(self, x, x_lengths, g=None):
- x_mask = torch.unsqueeze(commons.sequence_mask(x_lengths, x.size(2)), 1).to(x.dtype)
- x = self.pre(x) * x_mask
- x = self.enc(x, x_mask, g=g)
- stats = self.proj(x) * x_mask
- m, logs = torch.split(stats, self.out_channels, dim=1)
- z = (m + torch.randn_like(m) * torch.exp(logs)) * x_mask
- return z, m, logs, x_mask
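The last line of `forward` above is the usual reparameterization trick: `z` is sampled from a Gaussian with mean `m` and standard deviation `exp(logs)` while staying differentiable with respect to `m` and `logs`. A tiny illustration:

```python
import torch

m = torch.zeros(2, 4, 10)
logs = torch.full((2, 4, 10), -1.0)              # log standard deviation
z = m + torch.randn_like(m) * torch.exp(logs)
print(z.shape, round(z.std().item(), 2))         # std is roughly exp(-1) ≈ 0.37
```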
-
-
-class Generator(torch.nn.Module):
- def __init__(self, initial_channel, resblock, resblock_kernel_sizes, resblock_dilation_sizes, upsample_rates,
- upsample_initial_channel, upsample_kernel_sizes, gin_channels=0):
- super(Generator, self).__init__()
- self.num_kernels = len(resblock_kernel_sizes)
- self.num_upsamples = len(upsample_rates)
- self.conv_pre = Conv1d(initial_channel, upsample_initial_channel, 7, 1, padding=3)
- resblock = modules.ResBlock1 if resblock == '1' else modules.ResBlock2
-
- self.ups = nn.ModuleList()
- for i, (u, k) in enumerate(zip(upsample_rates, upsample_kernel_sizes)):
- self.ups.append(weight_norm(
- ConvTranspose1d(upsample_initial_channel // (2 ** i), upsample_initial_channel // (2 ** (i + 1)),
- k, u, padding=(k - u) // 2)))
-
- self.resblocks = nn.ModuleList()
- for i in range(len(self.ups)):
- ch = upsample_initial_channel // (2 ** (i + 1))
- for j, (k, d) in enumerate(zip(resblock_kernel_sizes, resblock_dilation_sizes)):
- self.resblocks.append(resblock(ch, k, d))
-
- self.conv_post = Conv1d(ch, 1, 7, 1, padding=3, bias=False)
- self.ups.apply(init_weights)
-
- if gin_channels != 0:
- self.cond = nn.Conv1d(gin_channels, upsample_initial_channel, 1)
-
- def forward(self, x, g=None):
- x = self.conv_pre(x)
- if g is not None:
- x = x + self.cond(g)
-
- for i in range(self.num_upsamples):
- x = F.leaky_relu(x, modules.LRELU_SLOPE)
- x = self.ups[i](x)
- xs = None
- for j in range(self.num_kernels):
- if xs is None:
- xs = self.resblocks[i * self.num_kernels + j](x)
- else:
- xs += self.resblocks[i * self.num_kernels + j](x)
- x = xs / self.num_kernels
- x = F.leaky_relu(x)
- x = self.conv_post(x)
- x = torch.tanh(x)
-
- return x
-
- def remove_weight_norm(self):
- print('Removing weight norm...')
- for l in self.ups:
- remove_weight_norm(l)
- for l in self.resblocks:
- l.remove_weight_norm()
-
-
-class DiscriminatorP(torch.nn.Module):
- def __init__(self, period, kernel_size=5, stride=3, use_spectral_norm=False):
- super(DiscriminatorP, self).__init__()
- self.period = period
- self.use_spectral_norm = use_spectral_norm
-        norm_f = weight_norm if not use_spectral_norm else spectral_norm
- self.convs = nn.ModuleList([
- norm_f(Conv2d(1, 32, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))),
- norm_f(Conv2d(32, 128, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))),
- norm_f(Conv2d(128, 512, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))),
- norm_f(Conv2d(512, 1024, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))),
- norm_f(Conv2d(1024, 1024, (kernel_size, 1), 1, padding=(get_padding(kernel_size, 1), 0))),
- ])
- self.conv_post = norm_f(Conv2d(1024, 1, (3, 1), 1, padding=(1, 0)))
-
- def forward(self, x):
- fmap = []
-
- # 1d to 2d
- b, c, t = x.shape
- if t % self.period != 0: # pad first
- n_pad = self.period - (t % self.period)
- x = F.pad(x, (0, n_pad), "reflect")
- t = t + n_pad
- x = x.view(b, c, t // self.period, self.period)
-
- for l in self.convs:
- x = l(x)
- x = F.leaky_relu(x, modules.LRELU_SLOPE)
- fmap.append(x)
- x = self.conv_post(x)
- fmap.append(x)
- x = torch.flatten(x, 1, -1)
-
- return x, fmap
-
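The "1d to 2d" step in `DiscriminatorP.forward` above reflect-pads the waveform so its length is a multiple of the period and then folds it into a 2-D grid, one column per period offset. A standalone sketch of that reshape:

```python
import torch
import torch.nn.functional as F

x = torch.randn(1, 1, 22051)                     # [batch, channels, time]
period = 7
t = x.shape[-1]
if t % period != 0:                               # pad first, as in forward()
    x = F.pad(x, (0, period - t % period), "reflect")
x2d = x.view(1, 1, -1, period)
print(x2d.shape)                                  # torch.Size([1, 1, 3151, 7])
```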
-
-class DiscriminatorS(torch.nn.Module):
- def __init__(self, use_spectral_norm=False):
- super(DiscriminatorS, self).__init__()
-        norm_f = weight_norm if not use_spectral_norm else spectral_norm
- self.convs = nn.ModuleList([
- norm_f(Conv1d(1, 16, 15, 1, padding=7)),
- norm_f(Conv1d(16, 64, 41, 4, groups=4, padding=20)),
- norm_f(Conv1d(64, 256, 41, 4, groups=16, padding=20)),
- norm_f(Conv1d(256, 1024, 41, 4, groups=64, padding=20)),
- norm_f(Conv1d(1024, 1024, 41, 4, groups=256, padding=20)),
- norm_f(Conv1d(1024, 1024, 5, 1, padding=2)),
- ])
- self.conv_post = norm_f(Conv1d(1024, 1, 3, 1, padding=1))
-
- def forward(self, x):
- fmap = []
-
- for l in self.convs:
- x = l(x)
- x = F.leaky_relu(x, modules.LRELU_SLOPE)
- fmap.append(x)
- x = self.conv_post(x)
- fmap.append(x)
- x = torch.flatten(x, 1, -1)
-
- return x, fmap
-
-
-class MultiPeriodDiscriminator(torch.nn.Module):
- def __init__(self, use_spectral_norm=False):
- super(MultiPeriodDiscriminator, self).__init__()
- periods = [2, 3, 5, 7, 11]
-
- discs = [DiscriminatorS(use_spectral_norm=use_spectral_norm)]
- discs = discs + [DiscriminatorP(i, use_spectral_norm=use_spectral_norm) for i in periods]
- self.discriminators = nn.ModuleList(discs)
-
- def forward(self, y, y_hat):
- y_d_rs = []
- y_d_gs = []
- fmap_rs = []
- fmap_gs = []
- for i, d in enumerate(self.discriminators):
- y_d_r, fmap_r = d(y)
- y_d_g, fmap_g = d(y_hat)
- y_d_rs.append(y_d_r)
- y_d_gs.append(y_d_g)
- fmap_rs.append(fmap_r)
- fmap_gs.append(fmap_g)
-
- return y_d_rs, y_d_gs, fmap_rs, fmap_gs
-
-
-class SynthesizerTrn(nn.Module):
- """
- Synthesizer for Training
- """
-
- def __init__(self,
- n_vocab,
- spec_channels,
- segment_size,
- inter_channels,
- hidden_channels,
- filter_channels,
- n_heads,
- n_layers,
- kernel_size,
- p_dropout,
- resblock,
- resblock_kernel_sizes,
- resblock_dilation_sizes,
- upsample_rates,
- upsample_initial_channel,
- upsample_kernel_sizes,
- n_speakers=0,
- gin_channels=0,
- use_sdp=True,
- **kwargs):
-
- super().__init__()
- self.n_vocab = n_vocab
- self.spec_channels = spec_channels
- self.inter_channels = inter_channels
- self.hidden_channels = hidden_channels
- self.filter_channels = filter_channels
- self.n_heads = n_heads
- self.n_layers = n_layers
- self.kernel_size = kernel_size
- self.p_dropout = p_dropout
- self.resblock = resblock
- self.resblock_kernel_sizes = resblock_kernel_sizes
- self.resblock_dilation_sizes = resblock_dilation_sizes
- self.upsample_rates = upsample_rates
- self.upsample_initial_channel = upsample_initial_channel
- self.upsample_kernel_sizes = upsample_kernel_sizes
- self.segment_size = segment_size
- self.n_speakers = n_speakers
- self.gin_channels = gin_channels
-
- self.use_sdp = use_sdp
-
- self.enc_p = TextEncoder(n_vocab,
- inter_channels,
- hidden_channels,
- filter_channels,
- n_heads,
- n_layers,
- kernel_size,
- p_dropout)
- self.dec = Generator(inter_channels, resblock, resblock_kernel_sizes, resblock_dilation_sizes, upsample_rates,
- upsample_initial_channel, upsample_kernel_sizes, gin_channels=gin_channels)
- self.enc_q = PosteriorEncoder(spec_channels, inter_channels, hidden_channels, 5, 1, 16,
- gin_channels=gin_channels)
- self.flow = ResidualCouplingBlock(inter_channels, hidden_channels, 5, 1, 4, gin_channels=gin_channels)
-
- if use_sdp:
- self.dp = StochasticDurationPredictor(hidden_channels, 192, 3, 0.5, 4, gin_channels=gin_channels)
- else:
- self.dp = DurationPredictor(hidden_channels, 256, 3, 0.5, gin_channels=gin_channels)
-
- if n_speakers > 1:
- self.emb_g = nn.Embedding(n_speakers, gin_channels)
-
- def forward(self, x, x_lengths, y, y_lengths, sid=None):
-
- x, m_p, logs_p, x_mask = self.enc_p(x, x_lengths)
- if self.n_speakers > 1:
- g = self.emb_g(sid).unsqueeze(-1) # [b, h, 1]
- else:
- g = None
-
- z, m_q, logs_q, y_mask = self.enc_q(y, y_lengths, g=g)
- z_p = self.flow(z, y_mask, g=g)
-
- with torch.no_grad():
- # negative cross-entropy
- s_p_sq_r = torch.exp(-2 * logs_p) # [b, d, t]
- neg_cent1 = torch.sum(-0.5 * math.log(2 * math.pi) - logs_p, [1], keepdim=True) # [b, 1, t_s]
- neg_cent2 = torch.matmul(-0.5 * (z_p ** 2).transpose(1, 2),
- s_p_sq_r) # [b, t_t, d] x [b, d, t_s] = [b, t_t, t_s]
- neg_cent3 = torch.matmul(z_p.transpose(1, 2), (m_p * s_p_sq_r)) # [b, t_t, d] x [b, d, t_s] = [b, t_t, t_s]
- neg_cent4 = torch.sum(-0.5 * (m_p ** 2) * s_p_sq_r, [1], keepdim=True) # [b, 1, t_s]
- neg_cent = neg_cent1 + neg_cent2 + neg_cent3 + neg_cent4
-
- attn_mask = torch.unsqueeze(x_mask, 2) * torch.unsqueeze(y_mask, -1)
- attn = monotonic_align.maximum_path(neg_cent, attn_mask.squeeze(1)).unsqueeze(1).detach()
-
- w = attn.sum(2)
- if self.use_sdp:
- l_length = self.dp(x, x_mask, w, g=g)
- l_length = l_length / torch.sum(x_mask)
- else:
- logw_ = torch.log(w + 1e-6) * x_mask
- logw = self.dp(x, x_mask, g=g)
- l_length = torch.sum((logw - logw_) ** 2, [1, 2]) / torch.sum(x_mask) # for averaging
-
- # expand prior
- m_p = torch.matmul(attn.squeeze(1), m_p.transpose(1, 2)).transpose(1, 2)
- logs_p = torch.matmul(attn.squeeze(1), logs_p.transpose(1, 2)).transpose(1, 2)
-
- z_slice, ids_slice = commons.rand_slice_segments(z, y_lengths, self.segment_size)
- o = self.dec(z_slice, g=g)
- return o, l_length, attn, ids_slice, x_mask, y_mask, (z, z_p, m_p, logs_p, m_q, logs_q)
-
- def infer(self, x, x_lengths, sid=None, noise_scale=1, length_scale=1, noise_scale_w=1., max_len=None):
- x, m_p, logs_p, x_mask = self.enc_p(x, x_lengths)
- if self.n_speakers > 1:
- g = self.emb_g(sid).unsqueeze(-1) # [b, h, 1]
- else:
- g = None
-
- if self.use_sdp:
- logw = self.dp(x, x_mask, g=g, reverse=True, noise_scale=noise_scale_w)
- else:
- logw = self.dp(x, x_mask, g=g)
- w = torch.exp(logw) * x_mask * length_scale
- w_ceil = torch.ceil(w)
- y_lengths = torch.clamp_min(torch.sum(w_ceil, [1, 2]), 1).long()
- y_mask = torch.unsqueeze(commons.sequence_mask(y_lengths, None), 1).to(x_mask.dtype)
- attn_mask = torch.unsqueeze(x_mask, 2) * torch.unsqueeze(y_mask, -1)
- attn = commons.generate_path(w_ceil, attn_mask)
-
- m_p = torch.matmul(attn.squeeze(1), m_p.transpose(1, 2)).transpose(1, 2) # [b, t', t], [b, t, d] -> [b, d, t']
- logs_p = torch.matmul(attn.squeeze(1), logs_p.transpose(1, 2)).transpose(1,
- 2) # [b, t', t], [b, t, d] -> [b, d, t']
-
- z_p = m_p + torch.randn_like(m_p) * torch.exp(logs_p) * noise_scale
- z = self.flow(z_p, y_mask, g=g, reverse=True)
- o = self.dec((z * y_mask)[:, :, :max_len], g=g)
- return o, attn, y_mask, (z, z_p, m_p, logs_p)
-
- def voice_conversion(self, y, y_lengths, sid_src, sid_tgt):
-        assert self.n_speakers > 1, "n_speakers has to be larger than 1."
- g_src = self.emb_g(sid_src).unsqueeze(-1)
- g_tgt = self.emb_g(sid_tgt).unsqueeze(-1)
- z, m_q, logs_q, y_mask = self.enc_q(y, y_lengths, g=g_src)
- z_p = self.flow(z, y_mask, g=g_src)
- z_hat = self.flow(z_p, y_mask, g=g_tgt, reverse=True)
- o_hat = self.dec(z_hat * y_mask, g=g_tgt)
- return o_hat, y_mask, (z, z_p, z_hat)
diff --git a/spaces/Omnibus/MusicGen/MODEL_CARD.md b/spaces/Omnibus/MusicGen/MODEL_CARD.md
deleted file mode 100644
index 6c2c9f883969eb905e74ad3376966d156cc5ca00..0000000000000000000000000000000000000000
--- a/spaces/Omnibus/MusicGen/MODEL_CARD.md
+++ /dev/null
@@ -1,81 +0,0 @@
-# MusicGen Model Card
-
-## Model details
-
-**Organization developing the model:** The FAIR team of Meta AI.
-
-**Model date:** MusicGen was trained between April 2023 and May 2023.
-
-**Model version:** This is the version 1 of the model.
-
-**Model type:** MusicGen consists of an EnCodec model for audio tokenization and an auto-regressive, transformer-based language model for music modeling. The model comes in different sizes: 300M, 1.5B and 3.3B parameters; and in two variants: a model trained for the text-to-music generation task and a model trained for melody-guided music generation.
-
-**Paper or resources for more information:** More information can be found in the paper [Simple and Controllable Music Generation][arxiv].
-
-**Citation details:** See [our paper][arxiv]
-
-**License:** Code is released under MIT; model weights are released under CC-BY-NC 4.0.
-
-**Where to send questions or comments about the model:** Questions and comments about MusicGen can be sent via the [Github repository](https://github.com/facebookresearch/audiocraft) of the project, or by opening an issue.
-
-## Intended use
-**Primary intended use:** The primary use of MusicGen is research on AI-based music generation, including:
-
-- Research efforts, such as probing and better understanding the limitations of generative models to further improve the state of science
-- Generation of music guided by text or melody to understand current abilities of generative AI models by machine learning amateurs
-
-**Primary intended users:** The primary intended users of the model are researchers in audio, machine learning and artificial intelligence, as well as amateurs seeking to better understand those models.
-
-**Out-of-scope use cases** The model should not be used on downstream applications without further risk evaluation and mitigation. The model should not be used to intentionally create or disseminate music pieces that create hostile or alienating environments for people. This includes generating music that people would foreseeably find disturbing, distressing, or offensive; or content that propagates historical or current stereotypes.
-
-## Metrics
-
-**Model performance measures:** We used the following objective measures to evaluate the model on a standard music benchmark:
-
-- Frechet Audio Distance computed on features extracted from a pre-trained audio classifier (VGGish)
-- Kullback-Leibler Divergence on label distributions extracted from a pre-trained audio classifier (PaSST)
-- CLAP Score between audio embedding and text embedding extracted from a pre-trained CLAP model
-
-Additionally, we run qualitative studies with human participants, evaluating the performance of the model with the following axes:
-
-- Overall quality of the music samples;
-- Text relevance to the provided text input;
-- Adherence to the melody for melody-guided music generation.
-
-More details on performance measures and human studies can be found in the paper.
-
-**Decision thresholds:** Not applicable.
-
-## Evaluation datasets
-
-The model was evaluated on the [MusicCaps benchmark](https://www.kaggle.com/datasets/googleai/musiccaps) and on an in-domain held-out evaluation set, with no artist overlap with the training set.
-
-## Training datasets
-
-The model was trained on licensed data using the following sources: the [Meta Music Initiative Sound Collection](https://www.fb.com/sound), [Shutterstock music collection](https://www.shutterstock.com/music) and the [Pond5 music collection](https://www.pond5.com/). See the paper for more details about the training set and corresponding preprocessing.
-
-## Quantitative analysis
-
-More information can be found in the paper [Simple and Controllable Music Generation][arxiv], in the Experimental Setup section.
-
-## Limitations and biases
-
-**Data:** The data sources used to train the model are created by music professionals and covered by legal agreements with the right holders. The model is trained on 20K hours of data; we believe that scaling the model on larger datasets can further improve its performance.
-
-**Mitigations:** Vocals have been removed from the data source using the corresponding tags, and then with a state-of-the-art music source separation method, namely the open-source [Hybrid Transformer for Music Source Separation](https://github.com/facebookresearch/demucs) (HT-Demucs).
-
-**Limitations:**
-
-- The model is not able to generate realistic vocals.
-- The model has been trained with English descriptions and will not perform as well in other languages.
-- The model does not perform equally well for all music styles and cultures.
-- The model sometimes generates the end of a song, collapsing to silence.
-- It is sometimes difficult to assess what types of text descriptions provide the best generations. Prompt engineering may be required to obtain satisfying results.
-
-**Biases:** The source of data is potentially lacking diversity and all music cultures are not equally represented in the dataset. The model may not perform equally well on the wide variety of music genres that exist. The generated samples from the model will reflect the biases from the training data. Further work on this model should include methods for balanced and just representations of cultures, for example, by scaling the training data to be both diverse and inclusive.
-
-**Risks and harms:** Biases and limitations of the model may lead to generation of samples that may be considered as biased, inappropriate or offensive. We believe that providing the code to reproduce the research and train new models will allow to broaden the application to new and more representative data.
-
-**Use cases:** Users must be aware of the biases, limitations and risks of the model. MusicGen is a model developed for artificial intelligence research on controllable music generation. As such, it should not be used for downstream applications without further investigation and mitigation of risks.
-
-[arxiv]: https://arxiv.org/abs/2306.05284
diff --git a/spaces/OpenGVLab/InternGPT/iGPT/models/grit_src/third_party/CenterNet2/configs/new_baselines/mask_rcnn_regnety_4gf_dds_FPN_400ep_LSJ.py b/spaces/OpenGVLab/InternGPT/iGPT/models/grit_src/third_party/CenterNet2/configs/new_baselines/mask_rcnn_regnety_4gf_dds_FPN_400ep_LSJ.py
deleted file mode 100644
index 7b86ea8c6c5c48f5d26c9e0df7cf96e745b17b34..0000000000000000000000000000000000000000
--- a/spaces/OpenGVLab/InternGPT/iGPT/models/grit_src/third_party/CenterNet2/configs/new_baselines/mask_rcnn_regnety_4gf_dds_FPN_400ep_LSJ.py
+++ /dev/null
@@ -1,14 +0,0 @@
-from .mask_rcnn_regnety_4gf_dds_FPN_100ep_LSJ import (
- dataloader,
- lr_multiplier,
- model,
- optimizer,
- train,
-)
-
-train.max_iter *= 4 # 100ep -> 400ep
-
-lr_multiplier.scheduler.milestones = [
- milestone * 4 for milestone in lr_multiplier.scheduler.milestones
-]
-lr_multiplier.scheduler.num_updates = train.max_iter
diff --git a/spaces/PAIR/PAIR-Diffusion/annotator/OneFormer/oneformer/config.py b/spaces/PAIR/PAIR-Diffusion/annotator/OneFormer/oneformer/config.py
deleted file mode 100644
index 0dc8320dfb8b7e718cf59b31c5a3f4f018c94d9e..0000000000000000000000000000000000000000
--- a/spaces/PAIR/PAIR-Diffusion/annotator/OneFormer/oneformer/config.py
+++ /dev/null
@@ -1,210 +0,0 @@
-# -*- coding: utf-8 -*-
-# Copyright (c) Facebook, Inc. and its affiliates.
-from detectron2.config import CfgNode as CN
-
-__all__ = ["add_common_config", "add_oneformer_config", "add_swin_config",
- "add_dinat_config", "add_convnext_config"]
-
-def add_common_config(cfg):
- """
-    Add common configuration options.
- """
-
- # data config
- # select the dataset mapper
- cfg.INPUT.DATASET_MAPPER_NAME = "oneformer_unified"
- # Color augmentation
- cfg.INPUT.COLOR_AUG_SSD = False
- # We retry random cropping until no single category in semantic segmentation GT occupies more
- # than `SINGLE_CATEGORY_MAX_AREA` part of the crop.
- cfg.INPUT.CROP.SINGLE_CATEGORY_MAX_AREA = 1.0
- # Pad image and segmentation GT in dataset mapper.
- cfg.INPUT.SIZE_DIVISIBILITY = -1
-
- cfg.INPUT.TASK_SEQ_LEN = 77
- cfg.INPUT.MAX_SEQ_LEN = 77
-
- cfg.INPUT.TASK_PROB = CN()
- cfg.INPUT.TASK_PROB.SEMANTIC = 0.33
- cfg.INPUT.TASK_PROB.INSTANCE = 0.66
-
- # test dataset
- cfg.DATASETS.TEST_PANOPTIC = ("",)
- cfg.DATASETS.TEST_INSTANCE = ("",)
- cfg.DATASETS.TEST_SEMANTIC = ("",)
-
- # solver config
- # weight decay on embedding
- cfg.SOLVER.WEIGHT_DECAY_EMBED = 0.0
- # optimizer
- cfg.SOLVER.OPTIMIZER = "ADAMW"
- cfg.SOLVER.BACKBONE_MULTIPLIER = 0.1
-
- # wandb
- cfg.WANDB = CN()
- cfg.WANDB.PROJECT = "OneFormer"
- cfg.WANDB.NAME = None
-
- cfg.MODEL.IS_TRAIN = True
- cfg.MODEL.IS_DEMO = False
-
- # text encoder config
- cfg.MODEL.TEXT_ENCODER = CN()
-
- cfg.MODEL.TEXT_ENCODER.WIDTH = 256
- cfg.MODEL.TEXT_ENCODER.CONTEXT_LENGTH = 77
- cfg.MODEL.TEXT_ENCODER.NUM_LAYERS = 12
- cfg.MODEL.TEXT_ENCODER.VOCAB_SIZE = 49408
- cfg.MODEL.TEXT_ENCODER.PROJ_NUM_LAYERS = 2
- cfg.MODEL.TEXT_ENCODER.N_CTX = 16
-
- # oneformer inference config
- cfg.MODEL.TEST = CN()
- cfg.MODEL.TEST.SEMANTIC_ON = True
- cfg.MODEL.TEST.INSTANCE_ON = False
- cfg.MODEL.TEST.PANOPTIC_ON = False
- cfg.MODEL.TEST.DETECTION_ON = False
- cfg.MODEL.TEST.OBJECT_MASK_THRESHOLD = 0.0
- cfg.MODEL.TEST.OVERLAP_THRESHOLD = 0.0
- cfg.MODEL.TEST.SEM_SEG_POSTPROCESSING_BEFORE_INFERENCE = False
- cfg.MODEL.TEST.TASK = "panoptic"
-
- # TEST AUG Slide
- cfg.TEST.AUG.IS_SLIDE = False
- cfg.TEST.AUG.CROP_SIZE = (640, 640)
- cfg.TEST.AUG.STRIDE = (426, 426)
- cfg.TEST.AUG.SCALE = (2048, 640)
- cfg.TEST.AUG.SETR_MULTI_SCALE = True
- cfg.TEST.AUG.KEEP_RATIO = True
- cfg.TEST.AUG.SIZE_DIVISOR = 32
-
- # pixel decoder config
- cfg.MODEL.SEM_SEG_HEAD.MASK_DIM = 256
- # adding transformer in pixel decoder
- cfg.MODEL.SEM_SEG_HEAD.TRANSFORMER_ENC_LAYERS = 0
- # pixel decoder
- cfg.MODEL.SEM_SEG_HEAD.PIXEL_DECODER_NAME = "BasePixelDecoder"
- cfg.MODEL.SEM_SEG_HEAD.SEM_EMBED_DIM = 256
- cfg.MODEL.SEM_SEG_HEAD.INST_EMBED_DIM = 256
-
- # LSJ aug
- cfg.INPUT.IMAGE_SIZE = 1024
- cfg.INPUT.MIN_SCALE = 0.1
- cfg.INPUT.MAX_SCALE = 2.0
-
- # MSDeformAttn encoder configs
- cfg.MODEL.SEM_SEG_HEAD.DEFORMABLE_TRANSFORMER_ENCODER_IN_FEATURES = ["res3", "res4", "res5"]
- cfg.MODEL.SEM_SEG_HEAD.DEFORMABLE_TRANSFORMER_ENCODER_N_POINTS = 4
- cfg.MODEL.SEM_SEG_HEAD.DEFORMABLE_TRANSFORMER_ENCODER_N_HEADS = 8
-
-def add_oneformer_config(cfg):
- """
- Add config for ONE_FORMER.
- """
-
- # oneformer model config
- cfg.MODEL.ONE_FORMER = CN()
-
- # loss
- cfg.MODEL.ONE_FORMER.DEEP_SUPERVISION = True
- cfg.MODEL.ONE_FORMER.NO_OBJECT_WEIGHT = 0.1
- cfg.MODEL.ONE_FORMER.CLASS_WEIGHT = 1.0
- cfg.MODEL.ONE_FORMER.DICE_WEIGHT = 1.0
- cfg.MODEL.ONE_FORMER.MASK_WEIGHT = 20.0
- cfg.MODEL.ONE_FORMER.CONTRASTIVE_WEIGHT = 0.5
- cfg.MODEL.ONE_FORMER.CONTRASTIVE_TEMPERATURE = 0.07
-
- # transformer config
- cfg.MODEL.ONE_FORMER.NHEADS = 8
- cfg.MODEL.ONE_FORMER.DROPOUT = 0.1
- cfg.MODEL.ONE_FORMER.DIM_FEEDFORWARD = 2048
- cfg.MODEL.ONE_FORMER.ENC_LAYERS = 0
- cfg.MODEL.ONE_FORMER.CLASS_DEC_LAYERS = 2
- cfg.MODEL.ONE_FORMER.DEC_LAYERS = 6
- cfg.MODEL.ONE_FORMER.PRE_NORM = False
-
- cfg.MODEL.ONE_FORMER.HIDDEN_DIM = 256
- cfg.MODEL.ONE_FORMER.NUM_OBJECT_QUERIES = 120
- cfg.MODEL.ONE_FORMER.NUM_OBJECT_CTX = 16
- cfg.MODEL.ONE_FORMER.USE_TASK_NORM = True
-
- cfg.MODEL.ONE_FORMER.TRANSFORMER_IN_FEATURE = "res5"
- cfg.MODEL.ONE_FORMER.ENFORCE_INPUT_PROJ = False
-
- # Sometimes `backbone.size_divisibility` is set to 0 for some backbone (e.g. ResNet)
- # you can use this config to override
- cfg.MODEL.ONE_FORMER.SIZE_DIVISIBILITY = 32
-
- # transformer module
- cfg.MODEL.ONE_FORMER.TRANSFORMER_DECODER_NAME = "ContrastiveMultiScaleMaskedTransformerDecoder"
-
- # point loss configs
- # Number of points sampled during training for a mask point head.
- cfg.MODEL.ONE_FORMER.TRAIN_NUM_POINTS = 112 * 112
- # Oversampling parameter for PointRend point sampling during training. Parameter `k` in the
- # original paper.
- cfg.MODEL.ONE_FORMER.OVERSAMPLE_RATIO = 3.0
-    # Importance sampling parameter for PointRend point sampling during training. Parameter `beta` in
- # the original paper.
- cfg.MODEL.ONE_FORMER.IMPORTANCE_SAMPLE_RATIO = 0.75
-
-def add_swin_config(cfg):
- """
-    Add config for SWIN Backbone.
- """
-
- # swin transformer backbone
- cfg.MODEL.SWIN = CN()
- cfg.MODEL.SWIN.PRETRAIN_IMG_SIZE = 224
- cfg.MODEL.SWIN.PATCH_SIZE = 4
- cfg.MODEL.SWIN.EMBED_DIM = 96
- cfg.MODEL.SWIN.DEPTHS = [2, 2, 6, 2]
- cfg.MODEL.SWIN.NUM_HEADS = [3, 6, 12, 24]
- cfg.MODEL.SWIN.WINDOW_SIZE = 7
- cfg.MODEL.SWIN.MLP_RATIO = 4.0
- cfg.MODEL.SWIN.QKV_BIAS = True
- cfg.MODEL.SWIN.QK_SCALE = None
- cfg.MODEL.SWIN.DROP_RATE = 0.0
- cfg.MODEL.SWIN.ATTN_DROP_RATE = 0.0
- cfg.MODEL.SWIN.DROP_PATH_RATE = 0.3
- cfg.MODEL.SWIN.APE = False
- cfg.MODEL.SWIN.PATCH_NORM = True
- cfg.MODEL.SWIN.OUT_FEATURES = ["res2", "res3", "res4", "res5"]
- cfg.MODEL.SWIN.USE_CHECKPOINT = False
-
-def add_dinat_config(cfg):
- """
-    Add config for DiNAT Backbone.
- """
-
- # DINAT transformer backbone
- cfg.MODEL.DiNAT = CN()
- cfg.MODEL.DiNAT.DEPTHS = [3, 4, 18, 5]
- cfg.MODEL.DiNAT.OUT_FEATURES = ["res2", "res3", "res4", "res5"]
- cfg.MODEL.DiNAT.EMBED_DIM = 64
- cfg.MODEL.DiNAT.MLP_RATIO = 3.0
- cfg.MODEL.DiNAT.NUM_HEADS = [2, 4, 8, 16]
- cfg.MODEL.DiNAT.DROP_PATH_RATE = 0.2
- cfg.MODEL.DiNAT.KERNEL_SIZE = 7
- cfg.MODEL.DiNAT.DILATIONS = [[1, 16, 1], [1, 4, 1, 8], [1, 2, 1, 3, 1, 4], [1, 2, 1, 2, 1]]
- cfg.MODEL.DiNAT.OUT_INDICES = (0, 1, 2, 3)
- cfg.MODEL.DiNAT.QKV_BIAS = True
- cfg.MODEL.DiNAT.QK_SCALE = None
- cfg.MODEL.DiNAT.DROP_RATE = 0
- cfg.MODEL.DiNAT.ATTN_DROP_RATE = 0.
- cfg.MODEL.DiNAT.IN_PATCH_SIZE = 4
-
-def add_convnext_config(cfg):
- """
- Add config for ConvNeXt Backbone.
- """
-
-    # ConvNeXt backbone
- cfg.MODEL.CONVNEXT = CN()
- cfg.MODEL.CONVNEXT.IN_CHANNELS = 3
- cfg.MODEL.CONVNEXT.DEPTHS = [3, 3, 27, 3]
- cfg.MODEL.CONVNEXT.DIMS = [192, 384, 768, 1536]
- cfg.MODEL.CONVNEXT.DROP_PATH_RATE = 0.4
- cfg.MODEL.CONVNEXT.LSIT = 1.0
- cfg.MODEL.CONVNEXT.OUT_INDICES = [0, 1, 2, 3]
- cfg.MODEL.CONVNEXT.OUT_FEATURES = ["res2", "res3", "res4", "res5"]
\ No newline at end of file
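A minimal sketch of how the helpers in this file are typically composed on top of a detectron2 base config (assuming detectron2 is installed; the call order mirrors common detectron2 usage rather than anything mandated by this file):

```python
from detectron2.config import get_cfg

# Start from the detectron2 defaults, then register the extra keys defined above.
cfg = get_cfg()
add_common_config(cfg)
add_oneformer_config(cfg)
add_swin_config(cfg)
add_dinat_config(cfg)
add_convnext_config(cfg)

# Model/dataset-specific YAML can then be merged in as usual, e.g.
# cfg.merge_from_file("oneformer_swin_config.yaml")  # hypothetical path
print(cfg.MODEL.ONE_FORMER.NUM_OBJECT_QUERIES)  # 120 with the defaults above
```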
diff --git a/spaces/PAIR/PAIR-Diffusion/annotator/OneFormer/oneformer/modeling/transformer_decoder/oneformer_transformer_decoder.py b/spaces/PAIR/PAIR-Diffusion/annotator/OneFormer/oneformer/modeling/transformer_decoder/oneformer_transformer_decoder.py
deleted file mode 100644
index a06d586f70131c86604ee0113993b99effaba340..0000000000000000000000000000000000000000
--- a/spaces/PAIR/PAIR-Diffusion/annotator/OneFormer/oneformer/modeling/transformer_decoder/oneformer_transformer_decoder.py
+++ /dev/null
@@ -1,528 +0,0 @@
-# ------------------------------------------------------------------------------
-# Reference: https://github.com/facebookresearch/Mask2Former/blob/main/mask2former/modeling/transformer_decoder/mask2former_transformer_decoder.py
-# Modified by Jitesh Jain (https://github.com/praeclarumjj3)
-# ------------------------------------------------------------------------------
-
-import logging
-import fvcore.nn.weight_init as weight_init
-from typing import Optional
-import torch
-from torch import nn, Tensor
-from torch.nn import functional as F
-
-from detectron2.config import configurable
-from detectron2.layers import Conv2d
-
-from .position_encoding import PositionEmbeddingSine
-from .transformer import Transformer
-
-from detectron2.utils.registry import Registry
-
-
-TRANSFORMER_DECODER_REGISTRY = Registry("TRANSFORMER_MODULE")
-TRANSFORMER_DECODER_REGISTRY.__doc__ = """
-Registry for transformer module in OneFormer.
-"""
-
-
-def build_transformer_decoder(cfg, in_channels, mask_classification=True):
- """
-    Build a transformer decoder from `cfg.MODEL.ONE_FORMER.TRANSFORMER_DECODER_NAME`.
- """
- name = cfg.MODEL.ONE_FORMER.TRANSFORMER_DECODER_NAME
- return TRANSFORMER_DECODER_REGISTRY.get(name)(cfg, in_channels, mask_classification)
-
-
-class SelfAttentionLayer(nn.Module):
-
- def __init__(self, d_model, nhead, dropout=0.0,
- activation="relu", normalize_before=False):
- super().__init__()
- self.self_attn = nn.MultiheadAttention(d_model, nhead, dropout=dropout)
-
- self.norm = nn.LayerNorm(d_model)
- self.dropout = nn.Dropout(dropout)
-
- self.activation = _get_activation_fn(activation)
- self.normalize_before = normalize_before
-
- self._reset_parameters()
-
- def _reset_parameters(self):
- for p in self.parameters():
- if p.dim() > 1:
- nn.init.xavier_uniform_(p)
-
- def with_pos_embed(self, tensor, pos: Optional[Tensor]):
- return tensor if pos is None else tensor + pos
-
- def forward_post(self, tgt,
- tgt_mask: Optional[Tensor] = None,
- tgt_key_padding_mask: Optional[Tensor] = None,
- query_pos: Optional[Tensor] = None):
- q = k = self.with_pos_embed(tgt, query_pos)
- tgt2 = self.self_attn(q, k, value=tgt, attn_mask=tgt_mask,
- key_padding_mask=tgt_key_padding_mask)[0]
- tgt = tgt + self.dropout(tgt2)
- tgt = self.norm(tgt)
-
- return tgt
-
- def forward_pre(self, tgt,
- tgt_mask: Optional[Tensor] = None,
- tgt_key_padding_mask: Optional[Tensor] = None,
- query_pos: Optional[Tensor] = None):
- tgt2 = self.norm(tgt)
- q = k = self.with_pos_embed(tgt2, query_pos)
- tgt2 = self.self_attn(q, k, value=tgt2, attn_mask=tgt_mask,
- key_padding_mask=tgt_key_padding_mask)[0]
- tgt = tgt + self.dropout(tgt2)
-
- return tgt
-
- def forward(self, tgt,
- tgt_mask: Optional[Tensor] = None,
- tgt_key_padding_mask: Optional[Tensor] = None,
- query_pos: Optional[Tensor] = None):
- if self.normalize_before:
- return self.forward_pre(tgt, tgt_mask,
- tgt_key_padding_mask, query_pos)
- return self.forward_post(tgt, tgt_mask,
- tgt_key_padding_mask, query_pos)
-
-
-class CrossAttentionLayer(nn.Module):
-
- def __init__(self, d_model, nhead, dropout=0.0,
- activation="relu", normalize_before=False):
- super().__init__()
- self.multihead_attn = nn.MultiheadAttention(d_model, nhead, dropout=dropout)
-
- self.norm = nn.LayerNorm(d_model)
- self.dropout = nn.Dropout(dropout)
-
- self.activation = _get_activation_fn(activation)
- self.normalize_before = normalize_before
-
- self._reset_parameters()
-
- def _reset_parameters(self):
- for p in self.parameters():
- if p.dim() > 1:
- nn.init.xavier_uniform_(p)
-
- def with_pos_embed(self, tensor, pos: Optional[Tensor]):
- return tensor if pos is None else tensor + pos
-
- def forward_post(self, tgt, memory,
- memory_mask: Optional[Tensor] = None,
- memory_key_padding_mask: Optional[Tensor] = None,
- pos: Optional[Tensor] = None,
- query_pos: Optional[Tensor] = None):
- tgt2 = self.multihead_attn(query=self.with_pos_embed(tgt, query_pos),
- key=self.with_pos_embed(memory, pos),
- value=memory, attn_mask=memory_mask,
- key_padding_mask=memory_key_padding_mask)[0]
- tgt = tgt + self.dropout(tgt2)
- tgt = self.norm(tgt)
-
- return tgt
-
- def forward_pre(self, tgt, memory,
- memory_mask: Optional[Tensor] = None,
- memory_key_padding_mask: Optional[Tensor] = None,
- pos: Optional[Tensor] = None,
- query_pos: Optional[Tensor] = None):
- tgt2 = self.norm(tgt)
- tgt2 = self.multihead_attn(query=self.with_pos_embed(tgt2, query_pos),
- key=self.with_pos_embed(memory, pos),
- value=memory, attn_mask=memory_mask,
- key_padding_mask=memory_key_padding_mask)[0]
- tgt = tgt + self.dropout(tgt2)
-
- return tgt
-
- def forward(self, tgt, memory,
- memory_mask: Optional[Tensor] = None,
- memory_key_padding_mask: Optional[Tensor] = None,
- pos: Optional[Tensor] = None,
- query_pos: Optional[Tensor] = None):
- if self.normalize_before:
- return self.forward_pre(tgt, memory, memory_mask,
- memory_key_padding_mask, pos, query_pos)
- return self.forward_post(tgt, memory, memory_mask,
- memory_key_padding_mask, pos, query_pos)
-
-
-class FFNLayer(nn.Module):
-
- def __init__(self, d_model, dim_feedforward=2048, dropout=0.0,
- activation="relu", normalize_before=False):
- super().__init__()
- # Implementation of Feedforward model
- self.linear1 = nn.Linear(d_model, dim_feedforward)
- self.dropout = nn.Dropout(dropout)
- self.linear2 = nn.Linear(dim_feedforward, d_model)
-
- self.norm = nn.LayerNorm(d_model)
-
- self.activation = _get_activation_fn(activation)
- self.normalize_before = normalize_before
-
- self._reset_parameters()
-
- def _reset_parameters(self):
- for p in self.parameters():
- if p.dim() > 1:
- nn.init.xavier_uniform_(p)
-
- def with_pos_embed(self, tensor, pos: Optional[Tensor]):
- return tensor if pos is None else tensor + pos
-
- def forward_post(self, tgt):
- tgt2 = self.linear2(self.dropout(self.activation(self.linear1(tgt))))
- tgt = tgt + self.dropout(tgt2)
- tgt = self.norm(tgt)
- return tgt
-
- def forward_pre(self, tgt):
- tgt2 = self.norm(tgt)
- tgt2 = self.linear2(self.dropout(self.activation(self.linear1(tgt2))))
- tgt = tgt + self.dropout(tgt2)
- return tgt
-
- def forward(self, tgt):
- if self.normalize_before:
- return self.forward_pre(tgt)
- return self.forward_post(tgt)
-
-
-def _get_activation_fn(activation):
- """Return an activation function given a string"""
- if activation == "relu":
- return F.relu
- if activation == "gelu":
- return F.gelu
- if activation == "glu":
- return F.glu
-    raise RuntimeError(f"activation should be relu/gelu/glu, not {activation}.")
-
-
-class MLP(nn.Module):
- """ Very simple multi-layer perceptron (also called FFN)"""
-
- def __init__(self, input_dim, hidden_dim, output_dim, num_layers):
- super().__init__()
- self.num_layers = num_layers
- h = [hidden_dim] * (num_layers - 1)
- self.layers = nn.ModuleList(nn.Linear(n, k) for n, k in zip([input_dim] + h, h + [output_dim]))
-
- def forward(self, x):
- for i, layer in enumerate(self.layers):
- x = F.relu(layer(x)) if i < self.num_layers - 1 else layer(x)
- return x
-
-
-@TRANSFORMER_DECODER_REGISTRY.register()
-class ContrastiveMultiScaleMaskedTransformerDecoder(nn.Module):
-
- _version = 2
-
- def _load_from_state_dict(
- self, state_dict, prefix, local_metadata, strict, missing_keys, unexpected_keys, error_msgs
- ):
- version = local_metadata.get("version", None)
- if version is None or version < 2:
- # Do not warn if train from scratch
- scratch = True
- logger = logging.getLogger(__name__)
- for k in list(state_dict.keys()):
- newk = k
- if "static_query" in k:
- newk = k.replace("static_query", "query_feat")
- if newk != k:
- state_dict[newk] = state_dict[k]
- del state_dict[k]
- scratch = False
-
- if not scratch:
- logger.warning(
-                    f"Weight format of {self.__class__.__name__} has changed! "
- "Please upgrade your models. Applying automatic conversion now ..."
- )
-
- @configurable
- def __init__(
- self,
- in_channels,
- mask_classification=True,
- *,
- num_classes: int,
- hidden_dim: int,
- num_queries: int,
- nheads: int,
- dropout: float,
- dim_feedforward: int,
- enc_layers: int,
- is_train: bool,
- dec_layers: int,
- class_dec_layers: int,
- pre_norm: bool,
- mask_dim: int,
- enforce_input_project: bool,
- use_task_norm: bool,
- ):
- """
- NOTE: this interface is experimental.
- Args:
- in_channels: channels of the input features
- mask_classification: whether to add mask classifier or not
- num_classes: number of classes
- hidden_dim: Transformer feature dimension
- num_queries: number of queries
- nheads: number of heads
- dim_feedforward: feature dimension in feedforward network
- enc_layers: number of Transformer encoder layers
- dec_layers: number of Transformer decoder layers
- pre_norm: whether to use pre-LayerNorm or not
- mask_dim: mask feature dimension
- enforce_input_project: add input project 1x1 conv even if input
-            channels and hidden dim are identical
- """
- super().__init__()
-
- assert mask_classification, "Only support mask classification model"
- self.mask_classification = mask_classification
- self.is_train = is_train
- self.use_task_norm = use_task_norm
-
- # positional encoding
- N_steps = hidden_dim // 2
- self.pe_layer = PositionEmbeddingSine(N_steps, normalize=True)
-
- self.class_transformer = Transformer(
- d_model=hidden_dim,
- dropout=dropout,
- nhead=nheads,
- dim_feedforward=dim_feedforward,
- num_encoder_layers=enc_layers,
- num_decoder_layers=class_dec_layers,
- normalize_before=pre_norm,
- return_intermediate_dec=False,
- )
-
- # define Transformer decoder here
- self.num_heads = nheads
- self.num_layers = dec_layers
- self.transformer_self_attention_layers = nn.ModuleList()
- self.transformer_cross_attention_layers = nn.ModuleList()
- self.transformer_ffn_layers = nn.ModuleList()
-
- for _ in range(self.num_layers):
- self.transformer_self_attention_layers.append(
- SelfAttentionLayer(
- d_model=hidden_dim,
- nhead=nheads,
- dropout=0.0,
- normalize_before=pre_norm,
- )
- )
-
- self.transformer_cross_attention_layers.append(
- CrossAttentionLayer(
- d_model=hidden_dim,
- nhead=nheads,
- dropout=0.0,
- normalize_before=pre_norm,
- )
- )
-
- self.transformer_ffn_layers.append(
- FFNLayer(
- d_model=hidden_dim,
- dim_feedforward=dim_feedforward,
- dropout=0.0,
- normalize_before=pre_norm,
- )
- )
-
- self.decoder_norm = nn.LayerNorm(hidden_dim)
-
- self.num_queries = num_queries
- # learnable query p.e.
- self.query_embed = nn.Embedding(num_queries, hidden_dim)
-
- # level embedding (we always use 3 scales)
- self.num_feature_levels = 3
- self.level_embed = nn.Embedding(self.num_feature_levels, hidden_dim)
- self.input_proj = nn.ModuleList()
- for _ in range(self.num_feature_levels):
- if in_channels != hidden_dim or enforce_input_project:
- self.input_proj.append(Conv2d(in_channels, hidden_dim, kernel_size=1))
- weight_init.c2_xavier_fill(self.input_proj[-1])
- else:
- self.input_proj.append(nn.Sequential())
-
- self.class_input_proj = Conv2d(in_channels, hidden_dim, kernel_size=1)
- weight_init.c2_xavier_fill(self.class_input_proj)
-
- # output FFNs
- if self.mask_classification:
- self.class_embed = nn.Linear(hidden_dim, num_classes + 1)
- self.mask_embed = MLP(hidden_dim, hidden_dim, mask_dim, 3)
-
- @classmethod
- def from_config(cls, cfg, in_channels, mask_classification):
- ret = {}
- ret["in_channels"] = in_channels
- ret["mask_classification"] = mask_classification
-
- ret["num_classes"] = cfg.MODEL.SEM_SEG_HEAD.NUM_CLASSES
- ret["hidden_dim"] = cfg.MODEL.ONE_FORMER.HIDDEN_DIM
- ret["num_queries"] = cfg.MODEL.ONE_FORMER.NUM_OBJECT_QUERIES
- # Transformer parameters:
- ret["nheads"] = cfg.MODEL.ONE_FORMER.NHEADS
- ret["dim_feedforward"] = cfg.MODEL.ONE_FORMER.DIM_FEEDFORWARD
-
-        # NOTE: because we add learnable query features which require supervision,
-        # we subtract 1 from the number of decoder layers to stay consistent with our
-        # loss implementation: the number of auxiliary losses always equals the number
-        # of decoder layers. With learnable query features, the number of auxiliary
-        # losses equals the number of decoder layers plus 1.
- assert cfg.MODEL.ONE_FORMER.DEC_LAYERS >= 1
- ret["dec_layers"] = cfg.MODEL.ONE_FORMER.DEC_LAYERS - 1
- ret["class_dec_layers"] = cfg.MODEL.ONE_FORMER.CLASS_DEC_LAYERS
- ret["enc_layers"] = cfg.MODEL.ONE_FORMER.ENC_LAYERS
- ret["dropout"] = cfg.MODEL.ONE_FORMER.DROPOUT
- ret["pre_norm"] = cfg.MODEL.ONE_FORMER.PRE_NORM
- ret["enforce_input_project"] = cfg.MODEL.ONE_FORMER.ENFORCE_INPUT_PROJ
- ret["is_train"] = cfg.MODEL.IS_TRAIN
- ret["mask_dim"] = cfg.MODEL.SEM_SEG_HEAD.MASK_DIM
- ret["use_task_norm"] = cfg.MODEL.ONE_FORMER.USE_TASK_NORM
-
- return ret
-
- def forward(self, x, mask_features, tasks, mask = None):
-        # x is a list of multi-scale features
- assert len(x) == self.num_feature_levels
- src = []
- pos = []
- size_list = []
-
- # disable mask, it does not affect performance
- del mask
-
- for i in range(self.num_feature_levels):
- size_list.append(x[i].shape[-2:])
- pos.append(self.pe_layer(x[i], None).flatten(2))
- src.append(self.input_proj[i](x[i]).flatten(2) + self.level_embed.weight[i][None, :, None])
-
- # flatten NxCxHxW to HWxNxC
- pos[-1] = pos[-1].permute(2, 0, 1)
- src[-1] = src[-1].permute(2, 0, 1)
-
- _, bs, _ = src[0].shape
-
- # QxNxC
- query_embed = self.query_embed.weight.unsqueeze(1).repeat(1, bs, 1)
- tasks = tasks.unsqueeze(0)
- if self.use_task_norm:
- tasks = self.decoder_norm(tasks)
-
- feats = self.pe_layer(mask_features, None)
-
- out_t, _ = self.class_transformer(feats, None,
- self.query_embed.weight[:-1],
- self.class_input_proj(mask_features),
- tasks if self.use_task_norm else None)
- out_t = out_t[0].permute(1, 0, 2)
-
- out = torch.cat([out_t, tasks], dim=0)
-
- output = out.clone()
-
- predictions_class = []
- predictions_mask = []
-
- # prediction heads on learnable query features
- outputs_class, outputs_mask, attn_mask = self.forward_prediction_heads(output, mask_features, attn_mask_target_size=size_list[0], i=0)
- predictions_class.append(outputs_class)
- predictions_mask.append(outputs_mask)
-
- for i in range(self.num_layers):
- level_index = i % self.num_feature_levels
- attn_mask[torch.where(attn_mask.sum(-1) == attn_mask.shape[-1])] = False
- # attention: cross-attention first
- output = self.transformer_cross_attention_layers[i](
- output, src[level_index],
- memory_mask=attn_mask,
- memory_key_padding_mask=None, # here we do not apply masking on padded region
- pos=pos[level_index], query_pos=query_embed
- )
-
- output = self.transformer_self_attention_layers[i](
- output, tgt_mask=None,
- tgt_key_padding_mask=None,
- query_pos=query_embed
- )
-
- # FFN
- output = self.transformer_ffn_layers[i](
- output
- )
-
- outputs_class, outputs_mask, attn_mask = self.forward_prediction_heads(output, mask_features, attn_mask_target_size=size_list[(i + 1) % self.num_feature_levels], i=i+1)
- predictions_class.append(outputs_class)
- predictions_mask.append(outputs_mask)
-
- assert len(predictions_class) == self.num_layers + 1
- if self.is_train:
- query_class = out.permute(1, 0, 2)
- else:
- query_class = None
- out = {
- 'contrastive_logits': query_class,
- 'pred_logits': predictions_class[-1],
- 'pred_masks': predictions_mask[-1],
- 'aux_outputs': self._set_aux_loss(
- predictions_class if self.mask_classification else None,
- predictions_mask,
- )
- }
-
- return out
-
- def forward_prediction_heads(self, output, mask_features, attn_mask_target_size, i):
- decoder_output = self.decoder_norm(output)
- decoder_output = decoder_output.transpose(0, 1)
- outputs_class = self.class_embed(decoder_output)
- mask_embed = self.mask_embed(decoder_output)
- outputs_mask = torch.einsum("bqc,bchw->bqhw", mask_embed, mask_features)
-
- # NOTE: prediction is of higher-resolution
- # [B, Q, H, W] -> [B, Q, H*W] -> [B, h, Q, H*W] -> [B*h, Q, HW]
- attn_mask = F.interpolate(outputs_mask, size=attn_mask_target_size, mode="bilinear", align_corners=False)
-
- # save_attn_masks(attn_mask.sigmoid() < 0.5, fname=f'demo/maps/{i}_pre_bool')
-
- # must use bool type
- # If a BoolTensor is provided, positions with ``True`` are not allowed to attend while ``False`` values will be unchanged.
- attn_mask = (attn_mask.sigmoid().flatten(2).unsqueeze(1).repeat(1, self.num_heads, 1, 1).flatten(0, 1) < 0.5).bool()
- attn_mask = attn_mask.detach()
-
- return outputs_class, outputs_mask, attn_mask
-
- @torch.jit.unused
- def _set_aux_loss(self, outputs_class, outputs_seg_masks):
- # this is a workaround to make torchscript happy, as torchscript
- # doesn't support dictionary with non-homogeneous values, such
- # as a dict having both a Tensor and a list.
- if self.mask_classification:
- aux_list = [
- {"pred_logits": a, "pred_masks": b}
- for a, b in zip(outputs_class[:-1], outputs_seg_masks[:-1])
- ]
- else:
-            aux_list = [{"pred_masks": b} for b in outputs_seg_masks[:-1]]
-
- return aux_list
\ No newline at end of file
diff --git a/spaces/PAIR/Text2Video-Zero/annotator/uniformer/mmcv/cnn/bricks/conv.py b/spaces/PAIR/Text2Video-Zero/annotator/uniformer/mmcv/cnn/bricks/conv.py
deleted file mode 100644
index cf54491997a48ac3e7fadc4183ab7bf3e831024c..0000000000000000000000000000000000000000
--- a/spaces/PAIR/Text2Video-Zero/annotator/uniformer/mmcv/cnn/bricks/conv.py
+++ /dev/null
@@ -1,44 +0,0 @@
-# Copyright (c) OpenMMLab. All rights reserved.
-from torch import nn
-
-from .registry import CONV_LAYERS
-
-CONV_LAYERS.register_module('Conv1d', module=nn.Conv1d)
-CONV_LAYERS.register_module('Conv2d', module=nn.Conv2d)
-CONV_LAYERS.register_module('Conv3d', module=nn.Conv3d)
-CONV_LAYERS.register_module('Conv', module=nn.Conv2d)
-
-
-def build_conv_layer(cfg, *args, **kwargs):
- """Build convolution layer.
-
- Args:
- cfg (None or dict): The conv layer config, which should contain:
- - type (str): Layer type.
-            - layer args: Args needed to instantiate a conv layer.
- args (argument list): Arguments passed to the `__init__`
- method of the corresponding conv layer.
- kwargs (keyword arguments): Keyword arguments passed to the `__init__`
- method of the corresponding conv layer.
-
- Returns:
- nn.Module: Created conv layer.
- """
- if cfg is None:
- cfg_ = dict(type='Conv2d')
- else:
- if not isinstance(cfg, dict):
- raise TypeError('cfg must be a dict')
- if 'type' not in cfg:
- raise KeyError('the cfg dict must contain the key "type"')
- cfg_ = cfg.copy()
-
- layer_type = cfg_.pop('type')
- if layer_type not in CONV_LAYERS:
-        raise KeyError(f'Unrecognized conv type {layer_type}')
- else:
- conv_layer = CONV_LAYERS.get(layer_type)
-
- layer = conv_layer(*args, **kwargs, **cfg_)
-
- return layer
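A small usage sketch for the factory above (illustrative; it assumes this module is importable from the bundled mmcv copy):

```python
import torch

# The dict-style config selects the registered layer type; the remaining
# arguments are forwarded to the underlying nn.Conv2d constructor.
conv = build_conv_layer(dict(type='Conv2d'), 16, 32, kernel_size=3, padding=1)
out = conv(torch.randn(1, 16, 8, 8))
print(out.shape)  # torch.Size([1, 32, 8, 8])

# Passing cfg=None falls back to a plain Conv2d.
default_conv = build_conv_layer(None, 3, 8, kernel_size=1)
```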
diff --git a/spaces/PaulHilders/CLIPGroundingExplainability/app.py b/spaces/PaulHilders/CLIPGroundingExplainability/app.py
deleted file mode 100644
index 0732b5ced06d8e39c4340484869ddebe49998461..0000000000000000000000000000000000000000
--- a/spaces/PaulHilders/CLIPGroundingExplainability/app.py
+++ /dev/null
@@ -1,67 +0,0 @@
-import sys
-import gradio as gr
-
-# sys.path.append("../")
-sys.path.append("CLIP_explainability/Transformer-MM-Explainability/")
-
-import torch
-import CLIP.clip as clip
-
-
-from clip_grounding.utils.image import pad_to_square
-from clip_grounding.datasets.png import (
- overlay_relevance_map_on_image,
-)
-from CLIP_explainability.utils import interpret, show_img_heatmap, show_heatmap_on_text
-
-clip.clip._MODELS = {
- "ViT-B/32": "https://openaipublic.azureedge.net/clip/models/40d365715913c9da98579312b702a82c18be219cc2a73407c4526f58eba950af/ViT-B-32.pt",
- "ViT-B/16": "https://openaipublic.azureedge.net/clip/models/5806e77cd80f8b59890b7e101eabd078d9fb84e6937f9e85e4ecb61988df416f/ViT-B-16.pt",
-}
-
-device = "cuda" if torch.cuda.is_available() else "cpu"
-model, preprocess = clip.load("ViT-B/32", device=device, jit=False)
-
-# Gradio Section:
-def run_demo(image, text):
- orig_image = pad_to_square(image)
- img = preprocess(orig_image).unsqueeze(0).to(device)
- text_input = clip.tokenize([text]).to(device)
-
- R_text, R_image = interpret(model=model, image=img, texts=text_input, device=device)
-
- image_relevance = show_img_heatmap(R_image[0], img, orig_image=orig_image, device=device, show=False)
- overlapped = overlay_relevance_map_on_image(image, image_relevance)
-
- text_scores, text_tokens_decoded = show_heatmap_on_text(text, text_input, R_text[0], show=False)
-
- highlighted_text = []
- for i, token in enumerate(text_tokens_decoded):
- highlighted_text.append((str(token), float(text_scores[i])))
-
- return overlapped, highlighted_text
-
-input_img = gr.inputs.Image(type='pil', label="Original Image")
-input_txt = "text"
-inputs = [input_img, input_txt]
-
-outputs = [gr.inputs.Image(type='pil', label="Output Image"), "highlight"]
-
-
-iface = gr.Interface(fn=run_demo,
- inputs=inputs,
- outputs=outputs,
- title="CLIP Grounding Explainability",
- description="A demonstration based on the Generic Attention-model Explainability method for Interpreting Bi-Modal Transformers by Chefer et al. (2021): https://github.com/hila-chefer/Transformer-MM-Explainability.",
- examples=[["example_images/London.png", "London Eye"],
- ["example_images/London.png", "Big Ben"],
- ["example_images/harrypotter.png", "Harry"],
- ["example_images/harrypotter.png", "Hermione"],
- ["example_images/harrypotter.png", "Ron"],
- ["example_images/Amsterdam.png", "Amsterdam canal"],
- ["example_images/Amsterdam.png", "Old buildings"],
- ["example_images/Amsterdam.png", "Pink flowers"],
- ["example_images/dogs_on_bed.png", "Two dogs"],
- ["example_images/dogs_on_bed.png", "Book"],
- ["example_images/dogs_on_bed.png", "Cat"]])
-iface.launch(debug=True)
\ No newline at end of file
diff --git a/spaces/Pinwheel/GLIP-BLIP-Object-Detection-VQA/maskrcnn_benchmark/data/transforms/build.py b/spaces/Pinwheel/GLIP-BLIP-Object-Detection-VQA/maskrcnn_benchmark/data/transforms/build.py
deleted file mode 100644
index 3ed88faea6d328c3ce7e4a9a6361eea6b2646099..0000000000000000000000000000000000000000
--- a/spaces/Pinwheel/GLIP-BLIP-Object-Detection-VQA/maskrcnn_benchmark/data/transforms/build.py
+++ /dev/null
@@ -1,45 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved.
-from . import transforms as T
-
-
-def build_transforms(cfg, is_train=True):
- if is_train:
- if len(cfg.AUGMENT.MULT_MIN_SIZE_TRAIN)>0:
- min_size = cfg.AUGMENT.MULT_MIN_SIZE_TRAIN
- else:
- min_size = cfg.INPUT.MIN_SIZE_TRAIN
- max_size = cfg.INPUT.MAX_SIZE_TRAIN
- flip_horizontal_prob = cfg.AUGMENT.FLIP_PROB_TRAIN
- flip_vertical_prob = cfg.AUGMENT.VERTICAL_FLIP_PROB_TRAIN
- brightness = cfg.AUGMENT.BRIGHTNESS
- contrast = cfg.AUGMENT.CONTRAST
- saturation = cfg.AUGMENT.SATURATION
- hue = cfg.AUGMENT.HUE
-
- crop_prob = cfg.AUGMENT.CROP_PROB
- min_ious = cfg.AUGMENT.CROP_MIN_IOUS
- min_crop_size = cfg.AUGMENT.CROP_MIN_SIZE
-
- else:
- min_size = cfg.INPUT.MIN_SIZE_TEST
- max_size = cfg.INPUT.MAX_SIZE_TEST
- flip_horizontal_prob = 0.0
-
- fix_res = cfg.INPUT.FIX_RES
-    if cfg.INPUT.FORMAT != '':
- input_format = cfg.INPUT.FORMAT
- elif cfg.INPUT.TO_BGR255:
- input_format = 'bgr255'
- normalize_transform = T.Normalize(
- mean=cfg.INPUT.PIXEL_MEAN, std=cfg.INPUT.PIXEL_STD, format=input_format
- )
-
- transform = T.Compose(
- [
- T.Resize(min_size, max_size, restrict=fix_res),
- T.RandomHorizontalFlip(flip_horizontal_prob),
- T.ToTensor(),
- normalize_transform,
- ]
- )
- return transform
diff --git a/spaces/Pinwheel/SuperGlue-Image-Matching/models/superpoint.py b/spaces/Pinwheel/SuperGlue-Image-Matching/models/superpoint.py
deleted file mode 100644
index b837d938f755850180ddc168e957742e874adacd..0000000000000000000000000000000000000000
--- a/spaces/Pinwheel/SuperGlue-Image-Matching/models/superpoint.py
+++ /dev/null
@@ -1,202 +0,0 @@
-# %BANNER_BEGIN%
-# ---------------------------------------------------------------------
-# %COPYRIGHT_BEGIN%
-#
-# Magic Leap, Inc. ("COMPANY") CONFIDENTIAL
-#
-# Unpublished Copyright (c) 2020
-# Magic Leap, Inc., All Rights Reserved.
-#
-# NOTICE: All information contained herein is, and remains the property
-# of COMPANY. The intellectual and technical concepts contained herein
-# are proprietary to COMPANY and may be covered by U.S. and Foreign
-# Patents, patents in process, and are protected by trade secret or
-# copyright law. Dissemination of this information or reproduction of
-# this material is strictly forbidden unless prior written permission is
-# obtained from COMPANY. Access to the source code contained herein is
-# hereby forbidden to anyone except current COMPANY employees, managers
-# or contractors who have executed Confidentiality and Non-disclosure
-# agreements explicitly covering such access.
-#
-# The copyright notice above does not evidence any actual or intended
-# publication or disclosure of this source code, which includes
-# information that is confidential and/or proprietary, and is a trade
-# secret, of COMPANY. ANY REPRODUCTION, MODIFICATION, DISTRIBUTION,
-# PUBLIC PERFORMANCE, OR PUBLIC DISPLAY OF OR THROUGH USE OF THIS
-# SOURCE CODE WITHOUT THE EXPRESS WRITTEN CONSENT OF COMPANY IS
-# STRICTLY PROHIBITED, AND IN VIOLATION OF APPLICABLE LAWS AND
-# INTERNATIONAL TREATIES. THE RECEIPT OR POSSESSION OF THIS SOURCE
-# CODE AND/OR RELATED INFORMATION DOES NOT CONVEY OR IMPLY ANY RIGHTS
-# TO REPRODUCE, DISCLOSE OR DISTRIBUTE ITS CONTENTS, OR TO MANUFACTURE,
-# USE, OR SELL ANYTHING THAT IT MAY DESCRIBE, IN WHOLE OR IN PART.
-#
-# %COPYRIGHT_END%
-# ----------------------------------------------------------------------
-# %AUTHORS_BEGIN%
-#
-# Originating Authors: Paul-Edouard Sarlin
-#
-# %AUTHORS_END%
-# --------------------------------------------------------------------*/
-# %BANNER_END%
-
-from pathlib import Path
-import torch
-from torch import nn
-
-def simple_nms(scores, nms_radius: int):
- """ Fast Non-maximum suppression to remove nearby points """
- assert(nms_radius >= 0)
-
- def max_pool(x):
- return torch.nn.functional.max_pool2d(
- x, kernel_size=nms_radius*2+1, stride=1, padding=nms_radius)
-
- zeros = torch.zeros_like(scores)
- max_mask = scores == max_pool(scores)
- for _ in range(2):
- supp_mask = max_pool(max_mask.float()) > 0
- supp_scores = torch.where(supp_mask, zeros, scores)
- new_max_mask = supp_scores == max_pool(supp_scores)
- max_mask = max_mask | (new_max_mask & (~supp_mask))
- return torch.where(max_mask, scores, zeros)
-
-
-def remove_borders(keypoints, scores, border: int, height: int, width: int):
- """ Removes keypoints too close to the border """
- mask_h = (keypoints[:, 0] >= border) & (keypoints[:, 0] < (height - border))
- mask_w = (keypoints[:, 1] >= border) & (keypoints[:, 1] < (width - border))
- mask = mask_h & mask_w
- return keypoints[mask], scores[mask]
-
-
-def top_k_keypoints(keypoints, scores, k: int):
- if k >= len(keypoints):
- return keypoints, scores
- scores, indices = torch.topk(scores, k, dim=0)
- return keypoints[indices], scores
-
-
-def sample_descriptors(keypoints, descriptors, s: int = 8):
- """ Interpolate descriptors at keypoint locations """
- b, c, h, w = descriptors.shape
- keypoints = keypoints - s / 2 + 0.5
- keypoints /= torch.tensor([(w*s - s/2 - 0.5), (h*s - s/2 - 0.5)],
- ).to(keypoints)[None]
- keypoints = keypoints*2 - 1 # normalize to (-1, 1)
- args = {'align_corners': True} if torch.__version__ >= '1.3' else {}
- descriptors = torch.nn.functional.grid_sample(
- descriptors, keypoints.view(b, 1, -1, 2), mode='bilinear', **args)
- descriptors = torch.nn.functional.normalize(
- descriptors.reshape(b, c, -1), p=2, dim=1)
- return descriptors
-
-
-class SuperPoint(nn.Module):
- """SuperPoint Convolutional Detector and Descriptor
-
- SuperPoint: Self-Supervised Interest Point Detection and
- Description. Daniel DeTone, Tomasz Malisiewicz, and Andrew
- Rabinovich. In CVPRW, 2019. https://arxiv.org/abs/1712.07629
-
- """
- default_config = {
- 'descriptor_dim': 256,
- 'nms_radius': 4,
- 'keypoint_threshold': 0.005,
- 'max_keypoints': -1,
- 'remove_borders': 4,
- }
-
- def __init__(self, config):
- super().__init__()
- self.config = {**self.default_config, **config}
-
- self.relu = nn.ReLU(inplace=True)
- self.pool = nn.MaxPool2d(kernel_size=2, stride=2)
- c1, c2, c3, c4, c5 = 64, 64, 128, 128, 256
-
- self.conv1a = nn.Conv2d(1, c1, kernel_size=3, stride=1, padding=1)
- self.conv1b = nn.Conv2d(c1, c1, kernel_size=3, stride=1, padding=1)
- self.conv2a = nn.Conv2d(c1, c2, kernel_size=3, stride=1, padding=1)
- self.conv2b = nn.Conv2d(c2, c2, kernel_size=3, stride=1, padding=1)
- self.conv3a = nn.Conv2d(c2, c3, kernel_size=3, stride=1, padding=1)
- self.conv3b = nn.Conv2d(c3, c3, kernel_size=3, stride=1, padding=1)
- self.conv4a = nn.Conv2d(c3, c4, kernel_size=3, stride=1, padding=1)
- self.conv4b = nn.Conv2d(c4, c4, kernel_size=3, stride=1, padding=1)
-
- self.convPa = nn.Conv2d(c4, c5, kernel_size=3, stride=1, padding=1)
- self.convPb = nn.Conv2d(c5, 65, kernel_size=1, stride=1, padding=0)
-
- self.convDa = nn.Conv2d(c4, c5, kernel_size=3, stride=1, padding=1)
- self.convDb = nn.Conv2d(
- c5, self.config['descriptor_dim'],
- kernel_size=1, stride=1, padding=0)
-
- path = Path(__file__).parent / 'weights/superpoint_v1.pth'
- self.load_state_dict(torch.load(str(path)))
-
- mk = self.config['max_keypoints']
- if mk == 0 or mk < -1:
- raise ValueError('\"max_keypoints\" must be positive or \"-1\"')
-
- print('Loaded SuperPoint model')
-
- def forward(self, data):
- """ Compute keypoints, scores, descriptors for image """
- # Shared Encoder
- x = self.relu(self.conv1a(data['image']))
- x = self.relu(self.conv1b(x))
- x = self.pool(x)
- x = self.relu(self.conv2a(x))
- x = self.relu(self.conv2b(x))
- x = self.pool(x)
- x = self.relu(self.conv3a(x))
- x = self.relu(self.conv3b(x))
- x = self.pool(x)
- x = self.relu(self.conv4a(x))
- x = self.relu(self.conv4b(x))
-
- # Compute the dense keypoint scores
- cPa = self.relu(self.convPa(x))
- scores = self.convPb(cPa)
- scores = torch.nn.functional.softmax(scores, 1)[:, :-1]
- b, _, h, w = scores.shape
- scores = scores.permute(0, 2, 3, 1).reshape(b, h, w, 8, 8)
- scores = scores.permute(0, 1, 3, 2, 4).reshape(b, h*8, w*8)
- scores = simple_nms(scores, self.config['nms_radius'])
-
- # Extract keypoints
- keypoints = [
- torch.nonzero(s > self.config['keypoint_threshold'])
- for s in scores]
- scores = [s[tuple(k.t())] for s, k in zip(scores, keypoints)]
-
- # Discard keypoints near the image borders
- keypoints, scores = list(zip(*[
- remove_borders(k, s, self.config['remove_borders'], h*8, w*8)
- for k, s in zip(keypoints, scores)]))
-
- # Keep the k keypoints with highest score
- if self.config['max_keypoints'] >= 0:
- keypoints, scores = list(zip(*[
- top_k_keypoints(k, s, self.config['max_keypoints'])
- for k, s in zip(keypoints, scores)]))
-
- # Convert (h, w) to (x, y)
- keypoints = [torch.flip(k, [1]).float() for k in keypoints]
-
- # Compute the dense descriptors
- cDa = self.relu(self.convDa(x))
- descriptors = self.convDb(cDa)
- descriptors = torch.nn.functional.normalize(descriptors, p=2, dim=1)
-
- # Extract descriptors
- descriptors = [sample_descriptors(k[None], d[None], 8)[0]
- for k, d in zip(keypoints, descriptors)]
-
- return {
- 'keypoints': keypoints,
- 'scores': scores,
- 'descriptors': descriptors,
- }
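A short, self-contained sketch of the standalone helpers defined at the top of this file (illustrative only; the score map, threshold and sizes are arbitrary):

```python
import torch

# Fake dense score map: batch of 1, 60x80 grid.
scores = torch.rand(1, 60, 80)

# Keep only local maxima within a (2*4+1)x(2*4+1) window; everything else becomes 0.
nms_scores = simple_nms(scores, nms_radius=4)

# Collect (row, col) positions above a score threshold.
keypoints = torch.nonzero(nms_scores[0] > 0.3)
kp_scores = nms_scores[0][keypoints[:, 0], keypoints[:, 1]]

# Drop detections near the border, then keep the 200 highest-scoring points.
keypoints, kp_scores = remove_borders(keypoints, kp_scores, border=4, height=60, width=80)
keypoints, kp_scores = top_k_keypoints(keypoints, kp_scores, k=200)
```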
diff --git a/spaces/Plachta/VALL-E-X/models/macros.py b/spaces/Plachta/VALL-E-X/models/macros.py
deleted file mode 100644
index cbc54966f43b2ef27d87c3b4bc69cb866d2b8fd0..0000000000000000000000000000000000000000
--- a/spaces/Plachta/VALL-E-X/models/macros.py
+++ /dev/null
@@ -1,11 +0,0 @@
-# Text
-NUM_TEXT_TOKENS = 2048
-
-# Audio
-NUM_AUDIO_TOKENS = 1024 # EnCodec RVQ bins
-NUM_MEL_BINS = 100 # BigVGAN bigvgan_24khz_100band
-
-
-# Speaker
-NUM_SPEAKER_CLASSES = 4096
-SPEAKER_EMBEDDING_DIM = 64
diff --git a/spaces/RMXK/RVC_HFF/lib/uvr5_pack/lib_v5/nets_537238KB.py b/spaces/RMXK/RVC_HFF/lib/uvr5_pack/lib_v5/nets_537238KB.py
deleted file mode 100644
index a1bb530e006482704f234c2e739a695174142941..0000000000000000000000000000000000000000
--- a/spaces/RMXK/RVC_HFF/lib/uvr5_pack/lib_v5/nets_537238KB.py
+++ /dev/null
@@ -1,123 +0,0 @@
-import torch
-import numpy as np
-from torch import nn
-import torch.nn.functional as F
-
-from . import layers_537238KB as layers
-
-
-class BaseASPPNet(nn.Module):
- def __init__(self, nin, ch, dilations=(4, 8, 16)):
- super(BaseASPPNet, self).__init__()
- self.enc1 = layers.Encoder(nin, ch, 3, 2, 1)
- self.enc2 = layers.Encoder(ch, ch * 2, 3, 2, 1)
- self.enc3 = layers.Encoder(ch * 2, ch * 4, 3, 2, 1)
- self.enc4 = layers.Encoder(ch * 4, ch * 8, 3, 2, 1)
-
- self.aspp = layers.ASPPModule(ch * 8, ch * 16, dilations)
-
- self.dec4 = layers.Decoder(ch * (8 + 16), ch * 8, 3, 1, 1)
- self.dec3 = layers.Decoder(ch * (4 + 8), ch * 4, 3, 1, 1)
- self.dec2 = layers.Decoder(ch * (2 + 4), ch * 2, 3, 1, 1)
- self.dec1 = layers.Decoder(ch * (1 + 2), ch, 3, 1, 1)
-
- def __call__(self, x):
- h, e1 = self.enc1(x)
- h, e2 = self.enc2(h)
- h, e3 = self.enc3(h)
- h, e4 = self.enc4(h)
-
- h = self.aspp(h)
-
- h = self.dec4(h, e4)
- h = self.dec3(h, e3)
- h = self.dec2(h, e2)
- h = self.dec1(h, e1)
-
- return h
-
-
-class CascadedASPPNet(nn.Module):
- def __init__(self, n_fft):
- super(CascadedASPPNet, self).__init__()
- self.stg1_low_band_net = BaseASPPNet(2, 64)
- self.stg1_high_band_net = BaseASPPNet(2, 64)
-
- self.stg2_bridge = layers.Conv2DBNActiv(66, 32, 1, 1, 0)
- self.stg2_full_band_net = BaseASPPNet(32, 64)
-
- self.stg3_bridge = layers.Conv2DBNActiv(130, 64, 1, 1, 0)
- self.stg3_full_band_net = BaseASPPNet(64, 128)
-
- self.out = nn.Conv2d(128, 2, 1, bias=False)
- self.aux1_out = nn.Conv2d(64, 2, 1, bias=False)
- self.aux2_out = nn.Conv2d(64, 2, 1, bias=False)
-
- self.max_bin = n_fft // 2
- self.output_bin = n_fft // 2 + 1
-
- self.offset = 128
-
- def forward(self, x, aggressiveness=None):
- mix = x.detach()
- x = x.clone()
-
- x = x[:, :, : self.max_bin]
-
- bandw = x.size()[2] // 2
- aux1 = torch.cat(
- [
- self.stg1_low_band_net(x[:, :, :bandw]),
- self.stg1_high_band_net(x[:, :, bandw:]),
- ],
- dim=2,
- )
-
- h = torch.cat([x, aux1], dim=1)
- aux2 = self.stg2_full_band_net(self.stg2_bridge(h))
-
- h = torch.cat([x, aux1, aux2], dim=1)
- h = self.stg3_full_band_net(self.stg3_bridge(h))
-
- mask = torch.sigmoid(self.out(h))
- mask = F.pad(
- input=mask,
- pad=(0, 0, 0, self.output_bin - mask.size()[2]),
- mode="replicate",
- )
-
- if self.training:
- aux1 = torch.sigmoid(self.aux1_out(aux1))
- aux1 = F.pad(
- input=aux1,
- pad=(0, 0, 0, self.output_bin - aux1.size()[2]),
- mode="replicate",
- )
- aux2 = torch.sigmoid(self.aux2_out(aux2))
- aux2 = F.pad(
- input=aux2,
- pad=(0, 0, 0, self.output_bin - aux2.size()[2]),
- mode="replicate",
- )
- return mask * mix, aux1 * mix, aux2 * mix
- else:
- if aggressiveness:
- mask[:, :, : aggressiveness["split_bin"]] = torch.pow(
- mask[:, :, : aggressiveness["split_bin"]],
- 1 + aggressiveness["value"] / 3,
- )
- mask[:, :, aggressiveness["split_bin"] :] = torch.pow(
- mask[:, :, aggressiveness["split_bin"] :],
- 1 + aggressiveness["value"],
- )
-
- return mask * mix
-
- def predict(self, x_mag, aggressiveness=None):
- h = self.forward(x_mag, aggressiveness)
-
- if self.offset > 0:
- h = h[:, :, :, self.offset : -self.offset]
- assert h.size()[3] > 0
-
- return h
diff --git a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_internal/cli/command_context.py b/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_internal/cli/command_context.py
deleted file mode 100644
index 139995ac3f109a82664e4913f7ebc32ecf7617e1..0000000000000000000000000000000000000000
--- a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_internal/cli/command_context.py
+++ /dev/null
@@ -1,27 +0,0 @@
-from contextlib import ExitStack, contextmanager
-from typing import ContextManager, Generator, TypeVar
-
-_T = TypeVar("_T", covariant=True)
-
-
-class CommandContextMixIn:
- def __init__(self) -> None:
- super().__init__()
- self._in_main_context = False
- self._main_context = ExitStack()
-
- @contextmanager
- def main_context(self) -> Generator[None, None, None]:
- assert not self._in_main_context
-
- self._in_main_context = True
- try:
- with self._main_context:
- yield
- finally:
- self._in_main_context = False
-
- def enter_context(self, context_provider: ContextManager[_T]) -> _T:
- assert self._in_main_context
-
- return self._main_context.enter_context(context_provider)
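A minimal sketch of how this mixin is intended to be used (the command class and resource below are hypothetical, not actual pip internals):

```python
from contextlib import contextmanager

@contextmanager
def open_resource():
    # Stand-in for anything needing deterministic cleanup (temp dirs, sessions, ...).
    print("acquired")
    try:
        yield "resource"
    finally:
        print("released")

class DemoCommand(CommandContextMixIn):
    def run(self) -> None:
        with self.main_context():
            res = self.enter_context(open_resource())
            print("using", res)
        # The ExitStack unwinds here, so "released" prints before run() returns.

DemoCommand().run()
```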
diff --git a/spaces/Realcat/image-matching-webui/third_party/TopicFM/viz/configs/__init__.py b/spaces/Realcat/image-matching-webui/third_party/TopicFM/viz/configs/__init__.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/Reeve/Ohayou_Face/torch_utils/misc.py b/spaces/Reeve/Ohayou_Face/torch_utils/misc.py
deleted file mode 100644
index 0f158cd871e1df433b018a7658ca24dbddc4ea7c..0000000000000000000000000000000000000000
--- a/spaces/Reeve/Ohayou_Face/torch_utils/misc.py
+++ /dev/null
@@ -1,262 +0,0 @@
-# Copyright (c) 2021, NVIDIA CORPORATION. All rights reserved.
-#
-# NVIDIA CORPORATION and its licensors retain all intellectual property
-# and proprietary rights in and to this software, related documentation
-# and any modifications thereto. Any use, reproduction, disclosure or
-# distribution of this software and related documentation without an express
-# license agreement from NVIDIA CORPORATION is strictly prohibited.
-
-import re
-import contextlib
-import numpy as np
-import torch
-import warnings
-import dnnlib
-
-#----------------------------------------------------------------------------
-# Cached construction of constant tensors. Avoids CPU=>GPU copy when the
-# same constant is used multiple times.
-
-_constant_cache = dict()
-
-def constant(value, shape=None, dtype=None, device=None, memory_format=None):
- value = np.asarray(value)
- if shape is not None:
- shape = tuple(shape)
- if dtype is None:
- dtype = torch.get_default_dtype()
- if device is None:
- device = torch.device('cpu')
- if memory_format is None:
- memory_format = torch.contiguous_format
-
- key = (value.shape, value.dtype, value.tobytes(), shape, dtype, device, memory_format)
- tensor = _constant_cache.get(key, None)
- if tensor is None:
- tensor = torch.as_tensor(value.copy(), dtype=dtype, device=device)
- if shape is not None:
- tensor, _ = torch.broadcast_tensors(tensor, torch.empty(shape))
- tensor = tensor.contiguous(memory_format=memory_format)
- _constant_cache[key] = tensor
- return tensor
-
-#----------------------------------------------------------------------------
-# Replace NaN/Inf with specified numerical values.
-
-try:
- nan_to_num = torch.nan_to_num # 1.8.0a0
-except AttributeError:
- def nan_to_num(input, nan=0.0, posinf=None, neginf=None, *, out=None): # pylint: disable=redefined-builtin
- assert isinstance(input, torch.Tensor)
- if posinf is None:
- posinf = torch.finfo(input.dtype).max
- if neginf is None:
- neginf = torch.finfo(input.dtype).min
- assert nan == 0
- return torch.clamp(input.unsqueeze(0).nansum(0), min=neginf, max=posinf, out=out)
-
-#----------------------------------------------------------------------------
-# Symbolic assert.
-
-try:
- symbolic_assert = torch._assert # 1.8.0a0 # pylint: disable=protected-access
-except AttributeError:
- symbolic_assert = torch.Assert # 1.7.0
-
-#----------------------------------------------------------------------------
-# Context manager to suppress known warnings in torch.jit.trace().
-
-class suppress_tracer_warnings(warnings.catch_warnings):
- def __enter__(self):
- super().__enter__()
- warnings.simplefilter('ignore', category=torch.jit.TracerWarning)
- return self
-
-#----------------------------------------------------------------------------
-# Assert that the shape of a tensor matches the given list of integers.
-# None indicates that the size of a dimension is allowed to vary.
-# Performs symbolic assertion when used in torch.jit.trace().
-
-def assert_shape(tensor, ref_shape):
- if tensor.ndim != len(ref_shape):
- raise AssertionError(f'Wrong number of dimensions: got {tensor.ndim}, expected {len(ref_shape)}')
- for idx, (size, ref_size) in enumerate(zip(tensor.shape, ref_shape)):
- if ref_size is None:
- pass
- elif isinstance(ref_size, torch.Tensor):
- with suppress_tracer_warnings(): # as_tensor results are registered as constants
- symbolic_assert(torch.equal(torch.as_tensor(size), ref_size), f'Wrong size for dimension {idx}')
- elif isinstance(size, torch.Tensor):
- with suppress_tracer_warnings(): # as_tensor results are registered as constants
- symbolic_assert(torch.equal(size, torch.as_tensor(ref_size)), f'Wrong size for dimension {idx}: expected {ref_size}')
- elif size != ref_size:
- raise AssertionError(f'Wrong size for dimension {idx}: got {size}, expected {ref_size}')
-
-#----------------------------------------------------------------------------
-# Function decorator that calls torch.autograd.profiler.record_function().
-
-def profiled_function(fn):
- def decorator(*args, **kwargs):
- with torch.autograd.profiler.record_function(fn.__name__):
- return fn(*args, **kwargs)
- decorator.__name__ = fn.__name__
- return decorator
-
-#----------------------------------------------------------------------------
-# Sampler for torch.utils.data.DataLoader that loops over the dataset
-# indefinitely, shuffling items as it goes.
-
-class InfiniteSampler(torch.utils.data.Sampler):
- def __init__(self, dataset, rank=0, num_replicas=1, shuffle=True, seed=0, window_size=0.5):
- assert len(dataset) > 0
- assert num_replicas > 0
- assert 0 <= rank < num_replicas
- assert 0 <= window_size <= 1
- super().__init__(dataset)
- self.dataset = dataset
- self.rank = rank
- self.num_replicas = num_replicas
- self.shuffle = shuffle
- self.seed = seed
- self.window_size = window_size
-
- def __iter__(self):
- order = np.arange(len(self.dataset))
- rnd = None
- window = 0
- if self.shuffle:
- rnd = np.random.RandomState(self.seed)
- rnd.shuffle(order)
- window = int(np.rint(order.size * self.window_size))
-
- idx = 0
- while True:
- i = idx % order.size
- if idx % self.num_replicas == self.rank:
- yield order[i]
- if window >= 2:
- j = (i - rnd.randint(window)) % order.size
- order[i], order[j] = order[j], order[i]
- idx += 1
-
-#----------------------------------------------------------------------------
-# Utilities for operating with torch.nn.Module parameters and buffers.
-
-def params_and_buffers(module):
- assert isinstance(module, torch.nn.Module)
- return list(module.parameters()) + list(module.buffers())
-
-def named_params_and_buffers(module):
- assert isinstance(module, torch.nn.Module)
- return list(module.named_parameters()) + list(module.named_buffers())
-
-def copy_params_and_buffers(src_module, dst_module, require_all=False):
- assert isinstance(src_module, torch.nn.Module)
- assert isinstance(dst_module, torch.nn.Module)
- src_tensors = {name: tensor for name, tensor in named_params_and_buffers(src_module)}
- for name, tensor in named_params_and_buffers(dst_module):
- assert (name in src_tensors) or (not require_all)
- if name in src_tensors:
- tensor.copy_(src_tensors[name].detach()).requires_grad_(tensor.requires_grad)
-
-#----------------------------------------------------------------------------
-# Context manager for easily enabling/disabling DistributedDataParallel
-# synchronization.
-
-@contextlib.contextmanager
-def ddp_sync(module, sync):
- assert isinstance(module, torch.nn.Module)
- if sync or not isinstance(module, torch.nn.parallel.DistributedDataParallel):
- yield
- else:
- with module.no_sync():
- yield
-
-#----------------------------------------------------------------------------
-# Check DistributedDataParallel consistency across processes.
-
-def check_ddp_consistency(module, ignore_regex=None):
- assert isinstance(module, torch.nn.Module)
- for name, tensor in named_params_and_buffers(module):
- fullname = type(module).__name__ + '.' + name
- if ignore_regex is not None and re.fullmatch(ignore_regex, fullname):
- continue
- tensor = tensor.detach()
- other = tensor.clone()
- torch.distributed.broadcast(tensor=other, src=0)
- assert (nan_to_num(tensor) == nan_to_num(other)).all(), fullname
-
-#----------------------------------------------------------------------------
-# Print summary table of module hierarchy.
-
-def print_module_summary(module, inputs, max_nesting=3, skip_redundant=True):
- assert isinstance(module, torch.nn.Module)
- assert not isinstance(module, torch.jit.ScriptModule)
- assert isinstance(inputs, (tuple, list))
-
- # Register hooks.
- entries = []
- nesting = [0]
- def pre_hook(_mod, _inputs):
- nesting[0] += 1
- def post_hook(mod, _inputs, outputs):
- nesting[0] -= 1
- if nesting[0] <= max_nesting:
- outputs = list(outputs) if isinstance(outputs, (tuple, list)) else [outputs]
- outputs = [t for t in outputs if isinstance(t, torch.Tensor)]
- entries.append(dnnlib.EasyDict(mod=mod, outputs=outputs))
- hooks = [mod.register_forward_pre_hook(pre_hook) for mod in module.modules()]
- hooks += [mod.register_forward_hook(post_hook) for mod in module.modules()]
-
- # Run module.
- outputs = module(*inputs)
- for hook in hooks:
- hook.remove()
-
- # Identify unique outputs, parameters, and buffers.
- tensors_seen = set()
- for e in entries:
- e.unique_params = [t for t in e.mod.parameters() if id(t) not in tensors_seen]
- e.unique_buffers = [t for t in e.mod.buffers() if id(t) not in tensors_seen]
- e.unique_outputs = [t for t in e.outputs if id(t) not in tensors_seen]
- tensors_seen |= {id(t) for t in e.unique_params + e.unique_buffers + e.unique_outputs}
-
- # Filter out redundant entries.
- if skip_redundant:
- entries = [e for e in entries if len(e.unique_params) or len(e.unique_buffers) or len(e.unique_outputs)]
-
- # Construct table.
- rows = [[type(module).__name__, 'Parameters', 'Buffers', 'Output shape', 'Datatype']]
- rows += [['---'] * len(rows[0])]
- param_total = 0
- buffer_total = 0
- submodule_names = {mod: name for name, mod in module.named_modules()}
- for e in entries:
- name = '' if e.mod is module else submodule_names[e.mod]
- param_size = sum(t.numel() for t in e.unique_params)
- buffer_size = sum(t.numel() for t in e.unique_buffers)
-        output_shapes = [str(list(t.shape)) for t in e.outputs]
- output_dtypes = [str(t.dtype).split('.')[-1] for t in e.outputs]
- rows += [[
- name + (':0' if len(e.outputs) >= 2 else ''),
- str(param_size) if param_size else '-',
- str(buffer_size) if buffer_size else '-',
- (output_shapes + ['-'])[0],
- (output_dtypes + ['-'])[0],
- ]]
- for idx in range(1, len(e.outputs)):
- rows += [[name + f':{idx}', '-', '-', output_shapes[idx], output_dtypes[idx]]]
- param_total += param_size
- buffer_total += buffer_size
- rows += [['---'] * len(rows[0])]
- rows += [['Total', str(param_total), str(buffer_total), '-', '-']]
-
- # Print table.
- widths = [max(len(cell) for cell in column) for column in zip(*rows)]
- print()
- for row in rows:
- print(' '.join(cell + ' ' * (width - len(cell)) for cell, width in zip(row, widths)))
- print()
- return outputs
-
-#----------------------------------------------------------------------------
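A small sketch of the summary helper above (illustrative; it assumes the module and its `dnnlib` dependency are importable):

```python
import torch

# Toy two-layer network; print_module_summary runs one forward pass and tabulates
# parameter counts, buffer counts and output shapes per submodule.
net = torch.nn.Sequential(
    torch.nn.Conv2d(3, 8, kernel_size=3, padding=1),
    torch.nn.ReLU(),
)
outputs = print_module_summary(net, [torch.zeros(1, 3, 16, 16)])

# assert_shape (also defined above) checks the result; None would allow a free dimension.
assert_shape(outputs, [1, 8, 16, 16])
```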
diff --git a/spaces/Ritori/TTS_Yui/text/numbers.py b/spaces/Ritori/TTS_Yui/text/numbers.py
deleted file mode 100644
index 0d5f7fa818a45ecf132627d240afac653e148070..0000000000000000000000000000000000000000
--- a/spaces/Ritori/TTS_Yui/text/numbers.py
+++ /dev/null
@@ -1,71 +0,0 @@
-""" from https://github.com/keithito/tacotron """
-
-import inflect
-import re
-
-
-_inflect = inflect.engine()
-_comma_number_re = re.compile(r'([0-9][0-9\,]+[0-9])')
-_decimal_number_re = re.compile(r'([0-9]+\.[0-9]+)')
-_pounds_re = re.compile(r'£([0-9\,]*[0-9]+)')
-_dollars_re = re.compile(r'\$([0-9\.\,]*[0-9]+)')
-_ordinal_re = re.compile(r'[0-9]+(st|nd|rd|th)')
-_number_re = re.compile(r'[0-9]+')
-
-
-def _remove_commas(m):
- return m.group(1).replace(',', '')
-
-
-def _expand_decimal_point(m):
- return m.group(1).replace('.', ' point ')
-
-
-def _expand_dollars(m):
- match = m.group(1)
- parts = match.split('.')
- if len(parts) > 2:
- return match + ' dollars' # Unexpected format
- dollars = int(parts[0]) if parts[0] else 0
- cents = int(parts[1]) if len(parts) > 1 and parts[1] else 0
- if dollars and cents:
- dollar_unit = 'dollar' if dollars == 1 else 'dollars'
- cent_unit = 'cent' if cents == 1 else 'cents'
- return '%s %s, %s %s' % (dollars, dollar_unit, cents, cent_unit)
- elif dollars:
- dollar_unit = 'dollar' if dollars == 1 else 'dollars'
- return '%s %s' % (dollars, dollar_unit)
- elif cents:
- cent_unit = 'cent' if cents == 1 else 'cents'
- return '%s %s' % (cents, cent_unit)
- else:
- return 'zero dollars'
-
-
-def _expand_ordinal(m):
- return _inflect.number_to_words(m.group(0))
-
-
-def _expand_number(m):
- num = int(m.group(0))
- if num > 1000 and num < 3000:
- if num == 2000:
- return 'two thousand'
- elif num > 2000 and num < 2010:
- return 'two thousand ' + _inflect.number_to_words(num % 100)
- elif num % 100 == 0:
- return _inflect.number_to_words(num // 100) + ' hundred'
- else:
- return _inflect.number_to_words(num, andword='', zero='oh', group=2).replace(', ', ' ')
- else:
- return _inflect.number_to_words(num, andword='')
-
-
-def normalize_numbers(text):
- text = re.sub(_comma_number_re, _remove_commas, text)
- text = re.sub(_pounds_re, r'\1 pounds', text)
- text = re.sub(_dollars_re, _expand_dollars, text)
- text = re.sub(_decimal_number_re, _expand_decimal_point, text)
- text = re.sub(_ordinal_re, _expand_ordinal, text)
- text = re.sub(_number_re, _expand_number, text)
- return text
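The normalizer above applies its substitutions in a fixed order: commas, pounds, dollars, decimals, ordinals, then plain numbers. A rough usage sketch, assuming `inflect` is installed and the module is importable as `text.numbers` per the repo layout in this diff (exact wording depends on the installed `inflect` version):

```python
# Hedged usage sketch of the deleted normalize_numbers() above.
from text.numbers import normalize_numbers  # assumes the repo layout shown in this diff

print(normalize_numbers("I owe you $3.50 for the 2nd ticket."))
# roughly: "I owe you three dollars, fifty cents for the second ticket."
```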
diff --git a/spaces/RobPruzan/automaticlitassesment/README.md b/spaces/RobPruzan/automaticlitassesment/README.md
deleted file mode 100644
index 8ea880bbb57834038f35cf6efd152e41fff88736..0000000000000000000000000000000000000000
--- a/spaces/RobPruzan/automaticlitassesment/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: Automaticlitassesment
-emoji: ⚡
-colorFrom: indigo
-colorTo: green
-sdk: gradio
-sdk_version: 3.1.4
-app_file: app.py
-pinned: false
-license: afl-3.0
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmcv/runner/hooks/logger/neptune.py b/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmcv/runner/hooks/logger/neptune.py
deleted file mode 100644
index 7a38772b0c93a8608f32c6357b8616e77c139dc9..0000000000000000000000000000000000000000
--- a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmcv/runner/hooks/logger/neptune.py
+++ /dev/null
@@ -1,82 +0,0 @@
-# Copyright (c) OpenMMLab. All rights reserved.
-from ...dist_utils import master_only
-from ..hook import HOOKS
-from .base import LoggerHook
-
-
-@HOOKS.register_module()
-class NeptuneLoggerHook(LoggerHook):
- """Class to log metrics to NeptuneAI.
-
- It requires `neptune-client` to be installed.
-
- Args:
- init_kwargs (dict): a dict contains the initialization keys as below:
- - project (str): Name of a project in a form of
- namespace/project_name. If None, the value of
- NEPTUNE_PROJECT environment variable will be taken.
- - api_token (str): User’s API token.
- If None, the value of NEPTUNE_API_TOKEN environment
- variable will be taken. Note: It is strongly recommended
- to use NEPTUNE_API_TOKEN environment variable rather than
- placing your API token in plain text in your source code.
- - name (str, optional, default is 'Untitled'): Editable name of
- the run. Name is displayed in the run's Details and in
- Runs table as a column.
- Check https://docs.neptune.ai/api-reference/neptune#init for
- more init arguments.
- interval (int): Logging interval (every k iterations).
- ignore_last (bool): Ignore the log of last iterations in each epoch
- if less than `interval`.
- reset_flag (bool): Whether to clear the output buffer after logging
- by_epoch (bool): Whether EpochBasedRunner is used.
-
- .. _NeptuneAI:
- https://docs.neptune.ai/you-should-know/logging-metadata
- """
-
- def __init__(self,
- init_kwargs=None,
- interval=10,
- ignore_last=True,
- reset_flag=True,
- with_step=True,
- by_epoch=True):
-
- super(NeptuneLoggerHook, self).__init__(interval, ignore_last,
- reset_flag, by_epoch)
- self.import_neptune()
- self.init_kwargs = init_kwargs
- self.with_step = with_step
-
- def import_neptune(self):
- try:
- import neptune.new as neptune
- except ImportError:
- raise ImportError(
- 'Please run "pip install neptune-client" to install neptune')
- self.neptune = neptune
- self.run = None
-
- @master_only
- def before_run(self, runner):
- if self.init_kwargs:
- self.run = self.neptune.init(**self.init_kwargs)
- else:
- self.run = self.neptune.init()
-
- @master_only
- def log(self, runner):
- tags = self.get_loggable_tags(runner)
- if tags:
- for tag_name, tag_value in tags.items():
- if self.with_step:
- self.run[tag_name].log(
- tag_value, step=self.get_iter(runner))
- else:
- tags['global_step'] = self.get_iter(runner)
- self.run[tag_name].log(tags)
-
- @master_only
- def after_run(self, runner):
- self.run.stop()
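In mmcv-style training configs this hook is normally enabled through `log_config` rather than constructed by hand. A hedged sketch follows; the project name and interval are placeholders, not values taken from this repo:

```python
# Hypothetical mmcv config fragment enabling the NeptuneLoggerHook defined above.
# The API token is expected in the NEPTUNE_API_TOKEN environment variable, as the
# docstring recommends, so it is not placed in init_kwargs here.
log_config = dict(
    interval=10,
    hooks=[
        dict(
            type='NeptuneLoggerHook',
            init_kwargs=dict(project='my-workspace/my-project'),  # placeholder project
            interval=10,
            with_step=True,
        ),
    ],
)
```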
diff --git a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet/models/backbones/resnet.py b/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet/models/backbones/resnet.py
deleted file mode 100644
index 3826815a6d94fdc4c54001d4c186d10ca3380e80..0000000000000000000000000000000000000000
--- a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet/models/backbones/resnet.py
+++ /dev/null
@@ -1,663 +0,0 @@
-import torch.nn as nn
-import torch.utils.checkpoint as cp
-from mmcv.cnn import (build_conv_layer, build_norm_layer, build_plugin_layer,
- constant_init, kaiming_init)
-from mmcv.runner import load_checkpoint
-from torch.nn.modules.batchnorm import _BatchNorm
-
-from mmdet.utils import get_root_logger
-from ..builder import BACKBONES
-from ..utils import ResLayer
-
-
-class BasicBlock(nn.Module):
- expansion = 1
-
- def __init__(self,
- inplanes,
- planes,
- stride=1,
- dilation=1,
- downsample=None,
- style='pytorch',
- with_cp=False,
- conv_cfg=None,
- norm_cfg=dict(type='BN'),
- dcn=None,
- plugins=None):
- super(BasicBlock, self).__init__()
- assert dcn is None, 'Not implemented yet.'
- assert plugins is None, 'Not implemented yet.'
-
- self.norm1_name, norm1 = build_norm_layer(norm_cfg, planes, postfix=1)
- self.norm2_name, norm2 = build_norm_layer(norm_cfg, planes, postfix=2)
-
- self.conv1 = build_conv_layer(
- conv_cfg,
- inplanes,
- planes,
- 3,
- stride=stride,
- padding=dilation,
- dilation=dilation,
- bias=False)
- self.add_module(self.norm1_name, norm1)
- self.conv2 = build_conv_layer(
- conv_cfg, planes, planes, 3, padding=1, bias=False)
- self.add_module(self.norm2_name, norm2)
-
- self.relu = nn.ReLU(inplace=True)
- self.downsample = downsample
- self.stride = stride
- self.dilation = dilation
- self.with_cp = with_cp
-
- @property
- def norm1(self):
- """nn.Module: normalization layer after the first convolution layer"""
- return getattr(self, self.norm1_name)
-
- @property
- def norm2(self):
- """nn.Module: normalization layer after the second convolution layer"""
- return getattr(self, self.norm2_name)
-
- def forward(self, x):
- """Forward function."""
-
- def _inner_forward(x):
- identity = x
-
- out = self.conv1(x)
- out = self.norm1(out)
- out = self.relu(out)
-
- out = self.conv2(out)
- out = self.norm2(out)
-
- if self.downsample is not None:
- identity = self.downsample(x)
-
- out += identity
-
- return out
-
- if self.with_cp and x.requires_grad:
- out = cp.checkpoint(_inner_forward, x)
- else:
- out = _inner_forward(x)
-
- out = self.relu(out)
-
- return out
-
-
-class Bottleneck(nn.Module):
- expansion = 4
-
- def __init__(self,
- inplanes,
- planes,
- stride=1,
- dilation=1,
- downsample=None,
- style='pytorch',
- with_cp=False,
- conv_cfg=None,
- norm_cfg=dict(type='BN'),
- dcn=None,
- plugins=None):
- """Bottleneck block for ResNet.
-
- If style is "pytorch", the stride-two layer is the 3x3 conv layer, if
- it is "caffe", the stride-two layer is the first 1x1 conv layer.
- """
- super(Bottleneck, self).__init__()
- assert style in ['pytorch', 'caffe']
- assert dcn is None or isinstance(dcn, dict)
- assert plugins is None or isinstance(plugins, list)
- if plugins is not None:
- allowed_position = ['after_conv1', 'after_conv2', 'after_conv3']
- assert all(p['position'] in allowed_position for p in plugins)
-
- self.inplanes = inplanes
- self.planes = planes
- self.stride = stride
- self.dilation = dilation
- self.style = style
- self.with_cp = with_cp
- self.conv_cfg = conv_cfg
- self.norm_cfg = norm_cfg
- self.dcn = dcn
- self.with_dcn = dcn is not None
- self.plugins = plugins
- self.with_plugins = plugins is not None
-
- if self.with_plugins:
- # collect plugins for conv1/conv2/conv3
- self.after_conv1_plugins = [
- plugin['cfg'] for plugin in plugins
- if plugin['position'] == 'after_conv1'
- ]
- self.after_conv2_plugins = [
- plugin['cfg'] for plugin in plugins
- if plugin['position'] == 'after_conv2'
- ]
- self.after_conv3_plugins = [
- plugin['cfg'] for plugin in plugins
- if plugin['position'] == 'after_conv3'
- ]
-
- if self.style == 'pytorch':
- self.conv1_stride = 1
- self.conv2_stride = stride
- else:
- self.conv1_stride = stride
- self.conv2_stride = 1
-
- self.norm1_name, norm1 = build_norm_layer(norm_cfg, planes, postfix=1)
- self.norm2_name, norm2 = build_norm_layer(norm_cfg, planes, postfix=2)
- self.norm3_name, norm3 = build_norm_layer(
- norm_cfg, planes * self.expansion, postfix=3)
-
- self.conv1 = build_conv_layer(
- conv_cfg,
- inplanes,
- planes,
- kernel_size=1,
- stride=self.conv1_stride,
- bias=False)
- self.add_module(self.norm1_name, norm1)
- fallback_on_stride = False
- if self.with_dcn:
- fallback_on_stride = dcn.pop('fallback_on_stride', False)
- if not self.with_dcn or fallback_on_stride:
- self.conv2 = build_conv_layer(
- conv_cfg,
- planes,
- planes,
- kernel_size=3,
- stride=self.conv2_stride,
- padding=dilation,
- dilation=dilation,
- bias=False)
- else:
- assert self.conv_cfg is None, 'conv_cfg must be None for DCN'
- self.conv2 = build_conv_layer(
- dcn,
- planes,
- planes,
- kernel_size=3,
- stride=self.conv2_stride,
- padding=dilation,
- dilation=dilation,
- bias=False)
-
- self.add_module(self.norm2_name, norm2)
- self.conv3 = build_conv_layer(
- conv_cfg,
- planes,
- planes * self.expansion,
- kernel_size=1,
- bias=False)
- self.add_module(self.norm3_name, norm3)
-
- self.relu = nn.ReLU(inplace=True)
- self.downsample = downsample
-
- if self.with_plugins:
- self.after_conv1_plugin_names = self.make_block_plugins(
- planes, self.after_conv1_plugins)
- self.after_conv2_plugin_names = self.make_block_plugins(
- planes, self.after_conv2_plugins)
- self.after_conv3_plugin_names = self.make_block_plugins(
- planes * self.expansion, self.after_conv3_plugins)
-
- def make_block_plugins(self, in_channels, plugins):
- """make plugins for block.
-
- Args:
- in_channels (int): Input channels of plugin.
- plugins (list[dict]): List of plugins cfg to build.
-
- Returns:
- list[str]: List of the names of plugin.
- """
- assert isinstance(plugins, list)
- plugin_names = []
- for plugin in plugins:
- plugin = plugin.copy()
- name, layer = build_plugin_layer(
- plugin,
- in_channels=in_channels,
- postfix=plugin.pop('postfix', ''))
- assert not hasattr(self, name), f'duplicate plugin {name}'
- self.add_module(name, layer)
- plugin_names.append(name)
- return plugin_names
-
- def forward_plugin(self, x, plugin_names):
- out = x
- for name in plugin_names:
- out = getattr(self, name)(out)
- return out
-
- @property
- def norm1(self):
- """nn.Module: normalization layer after the first convolution layer"""
- return getattr(self, self.norm1_name)
-
- @property
- def norm2(self):
- """nn.Module: normalization layer after the second convolution layer"""
- return getattr(self, self.norm2_name)
-
- @property
- def norm3(self):
- """nn.Module: normalization layer after the third convolution layer"""
- return getattr(self, self.norm3_name)
-
- def forward(self, x):
- """Forward function."""
-
- def _inner_forward(x):
- identity = x
- out = self.conv1(x)
- out = self.norm1(out)
- out = self.relu(out)
-
- if self.with_plugins:
- out = self.forward_plugin(out, self.after_conv1_plugin_names)
-
- out = self.conv2(out)
- out = self.norm2(out)
- out = self.relu(out)
-
- if self.with_plugins:
- out = self.forward_plugin(out, self.after_conv2_plugin_names)
-
- out = self.conv3(out)
- out = self.norm3(out)
-
- if self.with_plugins:
- out = self.forward_plugin(out, self.after_conv3_plugin_names)
-
- if self.downsample is not None:
- identity = self.downsample(x)
-
- out += identity
-
- return out
-
- if self.with_cp and x.requires_grad:
- out = cp.checkpoint(_inner_forward, x)
- else:
- out = _inner_forward(x)
-
- out = self.relu(out)
-
- return out
-
-
-@BACKBONES.register_module()
-class ResNet(nn.Module):
- """ResNet backbone.
-
- Args:
- depth (int): Depth of resnet, from {18, 34, 50, 101, 152}.
- stem_channels (int | None): Number of stem channels. If not specified,
- it will be the same as `base_channels`. Default: None.
- base_channels (int): Number of base channels of res layer. Default: 64.
- in_channels (int): Number of input image channels. Default: 3.
- num_stages (int): Resnet stages. Default: 4.
- strides (Sequence[int]): Strides of the first block of each stage.
- dilations (Sequence[int]): Dilation of each stage.
- out_indices (Sequence[int]): Output from which stages.
- style (str): `pytorch` or `caffe`. If set to "pytorch", the stride-two
- layer is the 3x3 conv layer, otherwise the stride-two layer is
- the first 1x1 conv layer.
- deep_stem (bool): Replace 7x7 conv in input stem with 3 3x3 conv
- avg_down (bool): Use AvgPool instead of stride conv when
- downsampling in the bottleneck.
- frozen_stages (int): Stages to be frozen (stop grad and set eval mode).
- -1 means not freezing any parameters.
- norm_cfg (dict): Dictionary to construct and config norm layer.
- norm_eval (bool): Whether to set norm layers to eval mode, namely,
- freeze running stats (mean and var). Note: Effect on Batch Norm
- and its variants only.
- plugins (list[dict]): List of plugins for stages, each dict contains:
-
- - cfg (dict, required): Cfg dict to build plugin.
- - position (str, required): Position inside block to insert
- plugin, options are 'after_conv1', 'after_conv2', 'after_conv3'.
- - stages (tuple[bool], optional): Stages to apply plugin, length
- should be same as 'num_stages'.
- with_cp (bool): Use checkpoint or not. Using checkpoint will save some
- memory while slowing down the training speed.
- zero_init_residual (bool): Whether to use zero init for last norm layer
- in resblocks to let them behave as identity.
-
- Example:
- >>> from mmdet.models import ResNet
- >>> import torch
- >>> self = ResNet(depth=18)
- >>> self.eval()
- >>> inputs = torch.rand(1, 3, 32, 32)
- >>> level_outputs = self.forward(inputs)
- >>> for level_out in level_outputs:
- ... print(tuple(level_out.shape))
- (1, 64, 8, 8)
- (1, 128, 4, 4)
- (1, 256, 2, 2)
- (1, 512, 1, 1)
- """
-
- arch_settings = {
- 18: (BasicBlock, (2, 2, 2, 2)),
- 34: (BasicBlock, (3, 4, 6, 3)),
- 50: (Bottleneck, (3, 4, 6, 3)),
- 101: (Bottleneck, (3, 4, 23, 3)),
- 152: (Bottleneck, (3, 8, 36, 3))
- }
-
- def __init__(self,
- depth,
- in_channels=3,
- stem_channels=None,
- base_channels=64,
- num_stages=4,
- strides=(1, 2, 2, 2),
- dilations=(1, 1, 1, 1),
- out_indices=(0, 1, 2, 3),
- style='pytorch',
- deep_stem=False,
- avg_down=False,
- frozen_stages=-1,
- conv_cfg=None,
- norm_cfg=dict(type='BN', requires_grad=True),
- norm_eval=True,
- dcn=None,
- stage_with_dcn=(False, False, False, False),
- plugins=None,
- with_cp=False,
- zero_init_residual=True):
- super(ResNet, self).__init__()
- if depth not in self.arch_settings:
- raise KeyError(f'invalid depth {depth} for resnet')
- self.depth = depth
- if stem_channels is None:
- stem_channels = base_channels
- self.stem_channels = stem_channels
- self.base_channels = base_channels
- self.num_stages = num_stages
- assert num_stages >= 1 and num_stages <= 4
- self.strides = strides
- self.dilations = dilations
- assert len(strides) == len(dilations) == num_stages
- self.out_indices = out_indices
- assert max(out_indices) < num_stages
- self.style = style
- self.deep_stem = deep_stem
- self.avg_down = avg_down
- self.frozen_stages = frozen_stages
- self.conv_cfg = conv_cfg
- self.norm_cfg = norm_cfg
- self.with_cp = with_cp
- self.norm_eval = norm_eval
- self.dcn = dcn
- self.stage_with_dcn = stage_with_dcn
- if dcn is not None:
- assert len(stage_with_dcn) == num_stages
- self.plugins = plugins
- self.zero_init_residual = zero_init_residual
- self.block, stage_blocks = self.arch_settings[depth]
- self.stage_blocks = stage_blocks[:num_stages]
- self.inplanes = stem_channels
-
- self._make_stem_layer(in_channels, stem_channels)
-
- self.res_layers = []
- for i, num_blocks in enumerate(self.stage_blocks):
- stride = strides[i]
- dilation = dilations[i]
- dcn = self.dcn if self.stage_with_dcn[i] else None
- if plugins is not None:
- stage_plugins = self.make_stage_plugins(plugins, i)
- else:
- stage_plugins = None
- planes = base_channels * 2**i
- res_layer = self.make_res_layer(
- block=self.block,
- inplanes=self.inplanes,
- planes=planes,
- num_blocks=num_blocks,
- stride=stride,
- dilation=dilation,
- style=self.style,
- avg_down=self.avg_down,
- with_cp=with_cp,
- conv_cfg=conv_cfg,
- norm_cfg=norm_cfg,
- dcn=dcn,
- plugins=stage_plugins)
- self.inplanes = planes * self.block.expansion
- layer_name = f'layer{i + 1}'
- self.add_module(layer_name, res_layer)
- self.res_layers.append(layer_name)
-
- self._freeze_stages()
-
- self.feat_dim = self.block.expansion * base_channels * 2**(
- len(self.stage_blocks) - 1)
-
- def make_stage_plugins(self, plugins, stage_idx):
- """Make plugins for ResNet ``stage_idx`` th stage.
-
- Currently we support to insert ``context_block``,
- ``empirical_attention_block``, ``nonlocal_block`` into the backbone
- like ResNet/ResNeXt. They could be inserted after conv1/conv2/conv3 of
- Bottleneck.
-
- An example of plugins format could be:
-
- Examples:
- >>> plugins=[
- ... dict(cfg=dict(type='xxx', arg1='xxx'),
- ... stages=(False, True, True, True),
- ... position='after_conv2'),
- ... dict(cfg=dict(type='yyy'),
- ... stages=(True, True, True, True),
- ... position='after_conv3'),
- ... dict(cfg=dict(type='zzz', postfix='1'),
- ... stages=(True, True, True, True),
- ... position='after_conv3'),
- ... dict(cfg=dict(type='zzz', postfix='2'),
- ... stages=(True, True, True, True),
- ... position='after_conv3')
- ... ]
- >>> self = ResNet(depth=18)
- >>> stage_plugins = self.make_stage_plugins(plugins, 0)
- >>> assert len(stage_plugins) == 3
-
- Suppose ``stage_idx=0``, the structure of blocks in the stage would be:
-
- .. code-block:: none
-
- conv1-> conv2->conv3->yyy->zzz1->zzz2
-
- Suppose 'stage_idx=1', the structure of blocks in the stage would be:
-
- .. code-block:: none
-
- conv1-> conv2->xxx->conv3->yyy->zzz1->zzz2
-
- If stages is missing, the plugin would be applied to all stages.
-
- Args:
- plugins (list[dict]): List of plugins cfg to build. The postfix is
- required if multiple same type plugins are inserted.
- stage_idx (int): Index of stage to build
-
- Returns:
- list[dict]: Plugins for current stage
- """
- stage_plugins = []
- for plugin in plugins:
- plugin = plugin.copy()
- stages = plugin.pop('stages', None)
- assert stages is None or len(stages) == self.num_stages
- # whether to insert plugin into current stage
- if stages is None or stages[stage_idx]:
- stage_plugins.append(plugin)
-
- return stage_plugins
-
- def make_res_layer(self, **kwargs):
- """Pack all blocks in a stage into a ``ResLayer``."""
- return ResLayer(**kwargs)
-
- @property
- def norm1(self):
- """nn.Module: the normalization layer named "norm1" """
- return getattr(self, self.norm1_name)
-
- def _make_stem_layer(self, in_channels, stem_channels):
- if self.deep_stem:
- self.stem = nn.Sequential(
- build_conv_layer(
- self.conv_cfg,
- in_channels,
- stem_channels // 2,
- kernel_size=3,
- stride=2,
- padding=1,
- bias=False),
- build_norm_layer(self.norm_cfg, stem_channels // 2)[1],
- nn.ReLU(inplace=True),
- build_conv_layer(
- self.conv_cfg,
- stem_channels // 2,
- stem_channels // 2,
- kernel_size=3,
- stride=1,
- padding=1,
- bias=False),
- build_norm_layer(self.norm_cfg, stem_channels // 2)[1],
- nn.ReLU(inplace=True),
- build_conv_layer(
- self.conv_cfg,
- stem_channels // 2,
- stem_channels,
- kernel_size=3,
- stride=1,
- padding=1,
- bias=False),
- build_norm_layer(self.norm_cfg, stem_channels)[1],
- nn.ReLU(inplace=True))
- else:
- self.conv1 = build_conv_layer(
- self.conv_cfg,
- in_channels,
- stem_channels,
- kernel_size=7,
- stride=2,
- padding=3,
- bias=False)
- self.norm1_name, norm1 = build_norm_layer(
- self.norm_cfg, stem_channels, postfix=1)
- self.add_module(self.norm1_name, norm1)
- self.relu = nn.ReLU(inplace=True)
- self.maxpool = nn.MaxPool2d(kernel_size=3, stride=2, padding=1)
-
- def _freeze_stages(self):
- if self.frozen_stages >= 0:
- if self.deep_stem:
- self.stem.eval()
- for param in self.stem.parameters():
- param.requires_grad = False
- else:
- self.norm1.eval()
- for m in [self.conv1, self.norm1]:
- for param in m.parameters():
- param.requires_grad = False
-
- for i in range(1, self.frozen_stages + 1):
- m = getattr(self, f'layer{i}')
- m.eval()
- for param in m.parameters():
- param.requires_grad = False
-
- def init_weights(self, pretrained=None):
- """Initialize the weights in backbone.
-
- Args:
- pretrained (str, optional): Path to pre-trained weights.
- Defaults to None.
- """
- if isinstance(pretrained, str):
- logger = get_root_logger()
- load_checkpoint(self, pretrained, strict=False, logger=logger)
- elif pretrained is None:
- for m in self.modules():
- if isinstance(m, nn.Conv2d):
- kaiming_init(m)
- elif isinstance(m, (_BatchNorm, nn.GroupNorm)):
- constant_init(m, 1)
-
- if self.dcn is not None:
- for m in self.modules():
- if isinstance(m, Bottleneck) and hasattr(
- m.conv2, 'conv_offset'):
- constant_init(m.conv2.conv_offset, 0)
-
- if self.zero_init_residual:
- for m in self.modules():
- if isinstance(m, Bottleneck):
- constant_init(m.norm3, 0)
- elif isinstance(m, BasicBlock):
- constant_init(m.norm2, 0)
- else:
- raise TypeError('pretrained must be a str or None')
-
- def forward(self, x):
- """Forward function."""
- if self.deep_stem:
- x = self.stem(x)
- else:
- x = self.conv1(x)
- x = self.norm1(x)
- x = self.relu(x)
- x = self.maxpool(x)
- outs = []
- for i, layer_name in enumerate(self.res_layers):
- res_layer = getattr(self, layer_name)
- x = res_layer(x)
- if i in self.out_indices:
- outs.append(x)
- return tuple(outs)
-
- def train(self, mode=True):
- """Convert the model into training mode while keep normalization layer
- freezed."""
- super(ResNet, self).train(mode)
- self._freeze_stages()
- if mode and self.norm_eval:
- for m in self.modules():
- # trick: eval have effect on BatchNorm only
- if isinstance(m, _BatchNorm):
- m.eval()
-
-
-@BACKBONES.register_module()
-class ResNetV1d(ResNet):
- r"""ResNetV1d variant described in `Bag of Tricks
- <https://arxiv.org/abs/1812.01187>`_.
-
- Compared with default ResNet(ResNetV1b), ResNetV1d replaces the 7x7 conv in
- the input stem with three 3x3 convs. And in the downsampling block, a 2x2
- avg_pool with stride 2 is added before conv, whose stride is changed to 1.
- """
-
- def __init__(self, **kwargs):
- super(ResNetV1d, self).__init__(
- deep_stem=True, avg_down=True, **kwargs)
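Beyond the doctest in the class docstring, the backbone is usually built directly from the constructor arguments documented above. A hedged sketch; the input size and `frozen_stages` value are arbitrary choices for illustration, not defaults from this repo:

```python
import torch
from mmdet.models import ResNet  # same import as in the docstring example above

# ResNet-50 with the stem and first stage frozen, all four stages returned.
backbone = ResNet(
    depth=50,
    num_stages=4,
    out_indices=(0, 1, 2, 3),
    frozen_stages=1,
    norm_cfg=dict(type='BN', requires_grad=True),
    norm_eval=True,
    style='pytorch')
backbone.init_weights(pretrained=None)  # Kaiming init for convs, constants for norms
backbone.eval()

feats = backbone(torch.rand(1, 3, 64, 64))
print([tuple(f.shape) for f in feats])  # four maps with 256, 512, 1024, 2048 channels
```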
diff --git a/spaces/Rongjiehuang/ProDiff/data_gen/tts/base_preprocess.py b/spaces/Rongjiehuang/ProDiff/data_gen/tts/base_preprocess.py
deleted file mode 100644
index 6c0b2cda06076d32b4eda800b134415e20d0f730..0000000000000000000000000000000000000000
--- a/spaces/Rongjiehuang/ProDiff/data_gen/tts/base_preprocess.py
+++ /dev/null
@@ -1,245 +0,0 @@
-import json
-import os
-import random
-import re
-import traceback
-from collections import Counter
-from functools import partial
-
-import librosa
-from tqdm import tqdm
-from data_gen.tts.txt_processors.base_text_processor import get_txt_processor_cls
-from data_gen.tts.wav_processors.base_processor import get_wav_processor_cls
-from utils.hparams import hparams
-from utils.multiprocess_utils import multiprocess_run_tqdm
-from utils.os_utils import link_file, move_file, remove_file
-from data_gen.tts.data_gen_utils import is_sil_phoneme, build_token_encoder
-
-
-class BasePreprocessor:
- def __init__(self):
- self.preprocess_args = hparams['preprocess_args']
- txt_processor = self.preprocess_args['txt_processor']
- self.txt_processor = get_txt_processor_cls(txt_processor)
- self.raw_data_dir = hparams['raw_data_dir']
- self.processed_dir = hparams['processed_data_dir']
- self.spk_map_fn = f"{self.processed_dir}/spk_map.json"
-
- def meta_data(self):
- """
- :return: {'item_name': Str, 'wav_fn': Str, 'txt': Str, 'spk_name': Str, 'txt_loader': None or Func}
- """
- raise NotImplementedError
-
- def process(self):
- processed_dir = self.processed_dir
- wav_processed_tmp_dir = f'{processed_dir}/processed_tmp'
- remove_file(wav_processed_tmp_dir)
- os.makedirs(wav_processed_tmp_dir, exist_ok=True)
- wav_processed_dir = f'{processed_dir}/{self.wav_processed_dirname}'
- remove_file(wav_processed_dir)
- os.makedirs(wav_processed_dir, exist_ok=True)
-
- meta_data = list(tqdm(self.meta_data(), desc='Load meta data'))
- item_names = [d['item_name'] for d in meta_data]
- assert len(item_names) == len(set(item_names)), 'Key `item_name` should be Unique.'
-
- # preprocess data
- phone_list = []
- word_list = []
- spk_names = set()
- process_item = partial(self.preprocess_first_pass,
- txt_processor=self.txt_processor,
- wav_processed_dir=wav_processed_dir,
- wav_processed_tmp=wav_processed_tmp_dir,
- preprocess_args=self.preprocess_args)
- items = []
- args = [{
- 'item_name': item_raw['item_name'],
- 'txt_raw': item_raw['txt'],
- 'wav_fn': item_raw['wav_fn'],
- 'txt_loader': item_raw.get('txt_loader'),
- 'others': item_raw.get('others', None)
- } for item_raw in meta_data]
- for item_, (item_id, item) in zip(meta_data, multiprocess_run_tqdm(process_item, args, desc='Preprocess')):
- if item is not None:
- item_.update(item)
- item = item_
- if 'txt_loader' in item:
- del item['txt_loader']
- item['id'] = item_id
- item['spk_name'] = item.get('spk_name', '')
- item['others'] = item.get('others', None)
- phone_list += item['ph'].split(" ")
- word_list += item['word'].split(" ")
- spk_names.add(item['spk_name'])
- items.append(item)
-
- # add encoded tokens
- ph_encoder, word_encoder = self._phone_encoder(phone_list), self._word_encoder(word_list)
- spk_map = self.build_spk_map(spk_names)
- args = [{
- 'ph': item['ph'], 'word': item['word'], 'spk_name': item['spk_name'],
- 'word_encoder': word_encoder, 'ph_encoder': ph_encoder, 'spk_map': spk_map
- } for item in items]
- for idx, item_new_kv in multiprocess_run_tqdm(self.preprocess_second_pass, args, desc='Add encoded tokens'):
- items[idx].update(item_new_kv)
-
- # build mfa data
- if self.preprocess_args['use_mfa']:
- mfa_dict = set()
- mfa_input_dir = f'{processed_dir}/mfa_inputs'
- remove_file(mfa_input_dir)
- # group MFA inputs for better parallelism
- mfa_groups = [i // self.preprocess_args['nsample_per_mfa_group'] for i in range(len(items))]
- if self.preprocess_args['mfa_group_shuffle']:
- random.seed(hparams['seed'])
- random.shuffle(mfa_groups)
- args = [{
- 'item': item, 'mfa_input_dir': mfa_input_dir,
- 'mfa_group': mfa_group, 'wav_processed_tmp': wav_processed_tmp_dir,
- 'preprocess_args': self.preprocess_args
- } for item, mfa_group in zip(items, mfa_groups)]
- for i, (ph_gb_word_nosil, new_wav_align_fn) in multiprocess_run_tqdm(
- self.build_mfa_inputs, args, desc='Build MFA data'):
- items[i]['wav_align_fn'] = new_wav_align_fn
- for w in ph_gb_word_nosil.split(" "):
- mfa_dict.add(f"{w} {w.replace('_', ' ')}")
- mfa_dict = sorted(mfa_dict)
- with open(f'{processed_dir}/mfa_dict.txt', 'w') as f:
- f.writelines([f'{l}\n' for l in mfa_dict])
- with open(f"{processed_dir}/{self.meta_csv_filename}.json", 'w') as f:
- f.write(re.sub(r'\n\s+([\d+\]])', r'\1', json.dumps(items, ensure_ascii=False, sort_keys=False, indent=1)))
- remove_file(wav_processed_tmp_dir)
-
- @classmethod
- def preprocess_first_pass(cls, item_name, txt_raw, txt_processor,
- wav_fn, wav_processed_dir, wav_processed_tmp,
- preprocess_args, txt_loader=None, others=None):
- try:
- if txt_loader is not None:
- txt_raw = txt_loader(txt_raw)
- ph, txt, word, ph2word, ph_gb_word = cls.txt_to_ph(txt_processor, txt_raw, preprocess_args)
- wav_fn, wav_align_fn = cls.process_wav(
- item_name, wav_fn,
- hparams['processed_data_dir'],
- wav_processed_tmp, preprocess_args)
-
- # wav for binarization
- ext = os.path.splitext(wav_fn)[1]
- os.makedirs(wav_processed_dir, exist_ok=True)
- new_wav_fn = f"{wav_processed_dir}/{item_name}{ext}"
- move_link_func = move_file if os.path.dirname(wav_fn) == wav_processed_tmp else link_file
- move_link_func(wav_fn, new_wav_fn)
- return {
- 'txt': txt, 'txt_raw': txt_raw, 'ph': ph,
- 'word': word, 'ph2word': ph2word, 'ph_gb_word': ph_gb_word,
- 'wav_fn': new_wav_fn, 'wav_align_fn': wav_align_fn,
- 'others': others
- }
- except:
- traceback.print_exc()
- print(f"| Error is caught. item_name: {item_name}.")
- return None
-
- @staticmethod
- def txt_to_ph(txt_processor, txt_raw, preprocess_args):
- txt_struct, txt = txt_processor.process(txt_raw, preprocess_args)
- ph = [p for w in txt_struct for p in w[1]]
- ph_gb_word = ["_".join(w[1]) for w in txt_struct]
- words = [w[0] for w in txt_struct]
- # word_id 0 is reserved for padding; ph2word maps each phoneme to its 1-based word index
- ph2word = [w_id + 1 for w_id, w in enumerate(txt_struct) for _ in range(len(w[1]))]
- return " ".join(ph), txt, " ".join(words), ph2word, " ".join(ph_gb_word)
-
- @staticmethod
- def process_wav(item_name, wav_fn, processed_dir, wav_processed_tmp, preprocess_args):
- processors = [get_wav_processor_cls(v) for v in preprocess_args['wav_processors']]
- processors = [k() for k in processors if k is not None]
- if len(processors) >= 1:
- sr_file = librosa.core.get_samplerate(wav_fn)
- output_fn_for_align = None
- ext = os.path.splitext(wav_fn)[1]
- input_fn = f"{wav_processed_tmp}/{item_name}{ext}"
- link_file(wav_fn, input_fn)
- for p in processors:
- outputs = p.process(input_fn, sr_file, wav_processed_tmp, processed_dir, item_name, preprocess_args)
- if len(outputs) == 3:
- input_fn, sr, output_fn_for_align = outputs
- else:
- input_fn, sr = outputs
- return input_fn, output_fn_for_align
- else:
- return wav_fn, wav_fn
-
- def _phone_encoder(self, ph_set):
- ph_set_fn = f"{self.processed_dir}/phone_set.json"
- if self.preprocess_args['reset_phone_dict'] or not os.path.exists(ph_set_fn):
- ph_set = sorted(set(ph_set))
- json.dump(ph_set, open(ph_set_fn, 'w'), ensure_ascii=False)
- print("| Build phone set: ", ph_set)
- else:
- ph_set = json.load(open(ph_set_fn, 'r'))
- print("| Load phone set: ", ph_set)
- return build_token_encoder(ph_set_fn)
-
- def _word_encoder(self, word_set):
- word_set_fn = f"{self.processed_dir}/word_set.json"
- if self.preprocess_args['reset_word_dict']:
- word_set = Counter(word_set)
- total_words = sum(word_set.values())
- word_set = word_set.most_common(hparams['word_dict_size'])
- num_unk_words = total_words - sum([x[1] for x in word_set])
- word_set = ['<BOS>', '<EOS>'] + [x[0] for x in word_set]
- word_set = sorted(set(word_set))
- json.dump(word_set, open(word_set_fn, 'w'), ensure_ascii=False)
- print(f"| Build word set. Size: {len(word_set)}, #total words: {total_words},"
- f" #unk_words: {num_unk_words}, word_set[:10]:, {word_set[:10]}.")
- else:
- word_set = json.load(open(word_set_fn, 'r'))
- print("| Load word set. Size: ", len(word_set), word_set[:10])
- return build_token_encoder(word_set_fn)
-
- @classmethod
- def preprocess_second_pass(cls, word, ph, spk_name, word_encoder, ph_encoder, spk_map):
- word_token = word_encoder.encode(word)
- ph_token = ph_encoder.encode(ph)
- spk_id = spk_map[spk_name]
- return {'word_token': word_token, 'ph_token': ph_token, 'spk_id': spk_id}
-
- def build_spk_map(self, spk_names):
- spk_map = {x: i for i, x in enumerate(sorted(list(spk_names)))}
- assert len(spk_map) == 0 or len(spk_map) <= hparams['num_spk'], len(spk_map)
- print(f"| Number of spks: {len(spk_map)}, spk_map: {spk_map}")
- json.dump(spk_map, open(self.spk_map_fn, 'w'), ensure_ascii=False)
- return spk_map
-
- @classmethod
- def build_mfa_inputs(cls, item, mfa_input_dir, mfa_group, wav_processed_tmp, preprocess_args):
- item_name = item['item_name']
- wav_align_fn = item['wav_align_fn']
- ph_gb_word = item['ph_gb_word']
- ext = os.path.splitext(wav_align_fn)[1]
- mfa_input_group_dir = f'{mfa_input_dir}/{mfa_group}'
- os.makedirs(mfa_input_group_dir, exist_ok=True)
- new_wav_align_fn = f"{mfa_input_group_dir}/{item_name}{ext}"
- move_link_func = move_file if os.path.dirname(wav_align_fn) == wav_processed_tmp else link_file
- move_link_func(wav_align_fn, new_wav_align_fn)
- ph_gb_word_nosil = " ".join(["_".join([p for p in w.split("_") if not is_sil_phoneme(p)])
- for w in ph_gb_word.split(" ") if not is_sil_phoneme(w)])
- with open(f'{mfa_input_group_dir}/{item_name}.lab', 'w') as f_txt:
- f_txt.write(ph_gb_word_nosil)
- return ph_gb_word_nosil, new_wav_align_fn
-
- def load_spk_map(self, base_dir):
- spk_map_fn = f"{base_dir}/spk_map.json"
- spk_map = json.load(open(spk_map_fn, 'r'))
- return spk_map
-
- def load_dict(self, base_dir):
- ph_encoder = build_token_encoder(f'{base_dir}/phone_set.json')
- return ph_encoder
-
- @property
- def meta_csv_filename(self):
- return 'metadata'
-
- @property
- def wav_processed_dirname(self):
- return 'wav_processed'
\ No newline at end of file
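`BasePreprocessor` is abstract; a subclass only has to implement `meta_data()` with the dictionary format described in its docstring. A hedged sketch of a minimal single-speaker subclass, where the directory layout and speaker name are invented for illustration:

```python
import glob
import os

from data_gen.tts.base_preprocess import BasePreprocessor


class MyDatasetPreprocessor(BasePreprocessor):
    """Hypothetical layout: <raw_data_dir>/wavs/*.wav plus <raw_data_dir>/txt/<name>.txt."""

    def meta_data(self):
        for wav_fn in sorted(glob.glob(f'{self.raw_data_dir}/wavs/*.wav')):
            item_name = os.path.splitext(os.path.basename(wav_fn))[0]
            with open(f'{self.raw_data_dir}/txt/{item_name}.txt') as f:
                txt = f.read().strip()
            yield {'item_name': item_name, 'wav_fn': wav_fn, 'txt': txt, 'spk_name': 'SPK1'}


# MyDatasetPreprocessor().process() would then build the phone/word sets, spk_map
# and, if preprocess_args['use_mfa'] is set, the grouped MFA input folders.
```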
diff --git a/spaces/Rongjiehuang/ProDiff/usr/diffspeech_task.py b/spaces/Rongjiehuang/ProDiff/usr/diffspeech_task.py
deleted file mode 100644
index e4fca7e9e46fc378468188d58fc42bc989df824c..0000000000000000000000000000000000000000
--- a/spaces/Rongjiehuang/ProDiff/usr/diffspeech_task.py
+++ /dev/null
@@ -1,122 +0,0 @@
-import torch
-
-import utils
-from utils.hparams import hparams
-from .diff.net import DiffNet
-from .diff.shallow_diffusion_tts import GaussianDiffusion
-from .task import DiffFsTask
-from vocoders.base_vocoder import get_vocoder_cls, BaseVocoder
-from utils.pitch_utils import denorm_f0
-from tasks.tts.fs2_utils import FastSpeechDataset
-
-DIFF_DECODERS = {
- 'wavenet': lambda hp: DiffNet(hp['audio_num_mel_bins']),
-}
-
-
-class DiffSpeechTask(DiffFsTask):
- def __init__(self):
- super(DiffSpeechTask, self).__init__()
- self.dataset_cls = FastSpeechDataset
- self.vocoder: BaseVocoder = get_vocoder_cls(hparams)()
-
- def build_tts_model(self):
- mel_bins = hparams['audio_num_mel_bins']
- self.model = GaussianDiffusion(
- phone_encoder=self.phone_encoder,
- out_dims=mel_bins, denoise_fn=DIFF_DECODERS[hparams['diff_decoder_type']](hparams),
- timesteps=hparams['timesteps'],
- K_step=hparams['K_step'],
- loss_type=hparams['diff_loss_type'],
- spec_min=hparams['spec_min'], spec_max=hparams['spec_max'],
- )
- if hparams['fs2_ckpt'] != '':
- utils.load_ckpt(self.model.fs2, hparams['fs2_ckpt'], 'model', strict=True)
- # self.model.fs2.decoder = None
- for k, v in self.model.fs2.named_parameters():
- if not 'predictor' in k:
- v.requires_grad = False
-
- def build_optimizer(self, model):
- self.optimizer = optimizer = torch.optim.AdamW(
- filter(lambda p: p.requires_grad, model.parameters()),
- lr=hparams['lr'],
- betas=(hparams['optimizer_adam_beta1'], hparams['optimizer_adam_beta2']),
- weight_decay=hparams['weight_decay'])
- return optimizer
-
- def run_model(self, model, sample, return_output=False, infer=False):
- txt_tokens = sample['txt_tokens'] # [B, T_t]
- target = sample['mels'] # [B, T_s, 80]
- # mel2ph = sample['mel2ph'] if hparams['use_gt_dur'] else None # [B, T_s]
- mel2ph = sample['mel2ph']
- f0 = sample['f0']
- uv = sample['uv']
- energy = sample['energy']
- # fs2_mel = sample['fs2_mels']
- spk_embed = sample.get('spk_embed') if not hparams['use_spk_id'] else sample.get('spk_ids')
- if hparams['pitch_type'] == 'cwt':
- cwt_spec = sample[f'cwt_spec']
- f0_mean = sample['f0_mean']
- f0_std = sample['f0_std']
- sample['f0_cwt'] = f0 = model.cwt2f0_norm(cwt_spec, f0_mean, f0_std, mel2ph)
-
- output = model(txt_tokens, mel2ph=mel2ph, spk_embed=spk_embed,
- ref_mels=target, f0=f0, uv=uv, energy=energy, infer=infer)
-
- losses = {}
- if 'diff_loss' in output:
- losses['mel'] = output['diff_loss']
- self.add_dur_loss(output['dur'], mel2ph, txt_tokens, losses=losses)
- if hparams['use_pitch_embed']:
- self.add_pitch_loss(output, sample, losses)
- if hparams['use_energy_embed']:
- self.add_energy_loss(output['energy_pred'], energy, losses)
- if not return_output:
- return losses
- else:
- return losses, output
-
- def validation_step(self, sample, batch_idx):
- outputs = {}
- txt_tokens = sample['txt_tokens'] # [B, T_t]
-
- energy = sample['energy']
- spk_embed = sample.get('spk_embed') if not hparams['use_spk_id'] else sample.get('spk_ids')
- mel2ph = sample['mel2ph']
- f0 = sample['f0']
- uv = sample['uv']
-
- outputs['losses'] = {}
-
- outputs['losses'], model_out = self.run_model(self.model, sample, return_output=True, infer=False)
-
-
- outputs['total_loss'] = sum(outputs['losses'].values())
- outputs['nsamples'] = sample['nsamples']
- outputs = utils.tensors_to_scalars(outputs)
- if batch_idx < hparams['num_valid_plots']:
- # model_out = self.model(
- # txt_tokens, spk_embed=spk_embed, mel2ph=None, f0=None, uv=None, energy=None, ref_mels=None, inference=True)
- # self.plot_mel(batch_idx, model_out['mel_out'], model_out['fs2_mel'], name=f'diffspeech_vs_fs2_{batch_idx}')
- model_out = self.model(
- txt_tokens, spk_embed=spk_embed, mel2ph=mel2ph, f0=f0, uv=uv, energy=energy, ref_mels=None, infer=True)
- gt_f0 = denorm_f0(sample['f0'], sample['uv'], hparams)
- self.plot_wav(batch_idx, sample['mels'], model_out['mel_out'], is_mel=True, gt_f0=gt_f0, f0=model_out.get('f0_denorm'))
- self.plot_mel(batch_idx, sample['mels'], model_out['mel_out'])
- return outputs
-
- ############
- # validation plots
- ############
- def plot_wav(self, batch_idx, gt_wav, wav_out, is_mel=False, gt_f0=None, f0=None, name=None):
- gt_wav = gt_wav[0].cpu().numpy()
- wav_out = wav_out[0].cpu().numpy()
- gt_f0 = gt_f0[0].cpu().numpy()
- f0 = f0[0].cpu().numpy()
- if is_mel:
- gt_wav = self.vocoder.spec2wav(gt_wav, f0=gt_f0)
- wav_out = self.vocoder.spec2wav(wav_out, f0=f0)
- self.logger.experiment.add_audio(f'gt_{batch_idx}', gt_wav, sample_rate=hparams['audio_sample_rate'], global_step=self.global_step)
- self.logger.experiment.add_audio(f'wav_{batch_idx}', wav_out, sample_rate=hparams['audio_sample_rate'], global_step=self.global_step)
-
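Everything this task reads comes from the global `hparams` dict; the keys it touches are visible in the code above. A hedged sketch of that subset with illustrative placeholder values (not the ProDiff defaults):

```python
# Illustrative subset of hparams consumed by DiffSpeechTask above; values are placeholders.
hparams_sketch = dict(
    audio_num_mel_bins=80,            # mel channels fed to GaussianDiffusion / DiffNet
    diff_decoder_type='wavenet',      # key into DIFF_DECODERS
    timesteps=100, K_step=71, diff_loss_type='l1',
    spec_min=[-6.0], spec_max=[0.0],
    fs2_ckpt='',                      # optional FastSpeech 2 checkpoint to warm-start from
    lr=2e-4, optimizer_adam_beta1=0.9, optimizer_adam_beta2=0.98, weight_decay=0.0,
    use_spk_id=False, use_pitch_embed=True, use_energy_embed=False, pitch_type='frame',
    num_valid_plots=5, audio_sample_rate=22050,
)
```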
diff --git a/spaces/Rothfeld/stable-diffusion-mat-outpainting-primer/datasets/dataset_256_val.py b/spaces/Rothfeld/stable-diffusion-mat-outpainting-primer/datasets/dataset_256_val.py
deleted file mode 100644
index 26b619986ce380a88da88ff5792cb11166cf7e6d..0000000000000000000000000000000000000000
--- a/spaces/Rothfeld/stable-diffusion-mat-outpainting-primer/datasets/dataset_256_val.py
+++ /dev/null
@@ -1,282 +0,0 @@
-# Copyright (c) 2021, NVIDIA CORPORATION. All rights reserved.
-#
-# NVIDIA CORPORATION and its licensors retain all intellectual property
-# and proprietary rights in and to this software, related documentation
-# and any modifications thereto. Any use, reproduction, disclosure or
-# distribution of this software and related documentation without an express
-# license agreement from NVIDIA CORPORATION is strictly prohibited.
-
-import os
-import numpy as np
-import zipfile
-import PIL.Image
-import cv2
-import json
-import torch
-import dnnlib
-import glob
-
-try:
- import pyspng
-except ImportError:
- pyspng = None
-
-from datasets.mask_generator_256 import RandomMask
-
-#----------------------------------------------------------------------------
-
-class Dataset(torch.utils.data.Dataset):
- def __init__(self,
- name, # Name of the dataset.
- raw_shape, # Shape of the raw image data (NCHW).
- max_size = None, # Artificially limit the size of the dataset. None = no limit. Applied before xflip.
- use_labels = False, # Enable conditioning labels? False = label dimension is zero.
- xflip = False, # Artificially double the size of the dataset via x-flips. Applied after max_size.
- random_seed = 0, # Random seed to use when applying max_size.
- ):
- self._name = name
- self._raw_shape = list(raw_shape)
- self._use_labels = use_labels
- self._raw_labels = None
- self._label_shape = None
-
- # Apply max_size.
- self._raw_idx = np.arange(self._raw_shape[0], dtype=np.int64)
- if (max_size is not None) and (self._raw_idx.size > max_size):
- np.random.RandomState(random_seed).shuffle(self._raw_idx)
- self._raw_idx = np.sort(self._raw_idx[:max_size])
-
- # Apply xflip.
- self._xflip = np.zeros(self._raw_idx.size, dtype=np.uint8)
- if xflip:
- self._raw_idx = np.tile(self._raw_idx, 2)
- self._xflip = np.concatenate([self._xflip, np.ones_like(self._xflip)])
-
- def _get_raw_labels(self):
- if self._raw_labels is None:
- self._raw_labels = self._load_raw_labels() if self._use_labels else None
- if self._raw_labels is None:
- self._raw_labels = np.zeros([self._raw_shape[0], 0], dtype=np.float32)
- assert isinstance(self._raw_labels, np.ndarray)
- assert self._raw_labels.shape[0] == self._raw_shape[0]
- assert self._raw_labels.dtype in [np.float32, np.int64]
- if self._raw_labels.dtype == np.int64:
- assert self._raw_labels.ndim == 1
- assert np.all(self._raw_labels >= 0)
- return self._raw_labels
-
- def close(self): # to be overridden by subclass
- pass
-
- def _load_raw_image(self, raw_idx): # to be overridden by subclass
- raise NotImplementedError
-
- def _load_raw_labels(self): # to be overridden by subclass
- raise NotImplementedError
-
- def __getstate__(self):
- return dict(self.__dict__, _raw_labels=None)
-
- def __del__(self):
- try:
- self.close()
- except:
- pass
-
- def __len__(self):
- return self._raw_idx.size
-
- def __getitem__(self, idx):
- image = self._load_raw_image(self._raw_idx[idx])
- assert isinstance(image, np.ndarray)
- assert list(image.shape) == self.image_shape
- assert image.dtype == np.uint8
- if self._xflip[idx]:
- assert image.ndim == 3 # CHW
- image = image[:, :, ::-1]
- return image.copy(), self.get_label(idx)
-
- def get_label(self, idx):
- label = self._get_raw_labels()[self._raw_idx[idx]]
- if label.dtype == np.int64:
- onehot = np.zeros(self.label_shape, dtype=np.float32)
- onehot[label] = 1
- label = onehot
- return label.copy()
-
- def get_details(self, idx):
- d = dnnlib.EasyDict()
- d.raw_idx = int(self._raw_idx[idx])
- d.xflip = (int(self._xflip[idx]) != 0)
- d.raw_label = self._get_raw_labels()[d.raw_idx].copy()
- return d
-
- @property
- def name(self):
- return self._name
-
- @property
- def image_shape(self):
- return list(self._raw_shape[1:])
-
- @property
- def num_channels(self):
- assert len(self.image_shape) == 3 # CHW
- return self.image_shape[0]
-
- @property
- def resolution(self):
- assert len(self.image_shape) == 3 # CHW
- assert self.image_shape[1] == self.image_shape[2]
- return self.image_shape[1]
-
- @property
- def label_shape(self):
- if self._label_shape is None:
- raw_labels = self._get_raw_labels()
- if raw_labels.dtype == np.int64:
- self._label_shape = [int(np.max(raw_labels)) + 1]
- else:
- self._label_shape = raw_labels.shape[1:]
- return list(self._label_shape)
-
- @property
- def label_dim(self):
- assert len(self.label_shape) == 1
- return self.label_shape[0]
-
- @property
- def has_labels(self):
- return any(x != 0 for x in self.label_shape)
-
- @property
- def has_onehot_labels(self):
- return self._get_raw_labels().dtype == np.int64
-
-
-#----------------------------------------------------------------------------
-
-
-class ImageFolderMaskDataset(Dataset):
- def __init__(self,
- path, # Path to directory or zip.
- resolution = None, # Ensure specific resolution, None = highest available.
- hole_range=[0,1],
- **super_kwargs, # Additional arguments for the Dataset base class.
- ):
- self._path = path
- self._zipfile = None
- self._hole_range = hole_range
-
- if os.path.isdir(self._path):
- self._type = 'dir'
- self._all_fnames = {os.path.relpath(os.path.join(root, fname), start=self._path) for root, _dirs, files in os.walk(self._path) for fname in files}
- elif self._file_ext(self._path) == '.zip':
- self._type = 'zip'
- self._all_fnames = set(self._get_zipfile().namelist())
- else:
- raise IOError('Path must point to a directory or zip')
-
- PIL.Image.init()
- self._image_fnames = sorted(fname for fname in self._all_fnames if self._file_ext(fname) in PIL.Image.EXTENSION)
- if len(self._image_fnames) == 0:
- raise IOError('No image files found in the specified path')
-
- name = os.path.splitext(os.path.basename(self._path))[0]
- raw_shape = [len(self._image_fnames)] + list(self._load_raw_image(0).shape)
- if resolution is not None and (raw_shape[2] != resolution or raw_shape[3] != resolution):
- raise IOError('Image files do not match the specified resolution')
- self._load_mask()
- super().__init__(name=name, raw_shape=raw_shape, **super_kwargs)
-
- def _load_mask(self, mpath='/data/liwenbo/datasets/Places365/standard/masks_val_256_eval'):
- self.masks = sorted(glob.glob(mpath + '/*.png'))
-
- @staticmethod
- def _file_ext(fname):
- return os.path.splitext(fname)[1].lower()
-
- def _get_zipfile(self):
- assert self._type == 'zip'
- if self._zipfile is None:
- self._zipfile = zipfile.ZipFile(self._path)
- return self._zipfile
-
- def _open_file(self, fname):
- if self._type == 'dir':
- return open(os.path.join(self._path, fname), 'rb')
- if self._type == 'zip':
- return self._get_zipfile().open(fname, 'r')
- return None
-
- def close(self):
- try:
- if self._zipfile is not None:
- self._zipfile.close()
- finally:
- self._zipfile = None
-
- def __getstate__(self):
- return dict(super().__getstate__(), _zipfile=None)
-
- def _load_raw_image(self, raw_idx):
- fname = self._image_fnames[raw_idx]
- with self._open_file(fname) as f:
- if pyspng is not None and self._file_ext(fname) == '.png':
- image = pyspng.load(f.read())
- else:
- image = np.array(PIL.Image.open(f))
- if image.ndim == 2:
- image = image[:, :, np.newaxis] # HW => HWC
-
- # for grayscale image
- if image.shape[2] == 1:
- image = np.repeat(image, 3, axis=2)
-
- # restricted to 256x256
- res = 256
- H, W, C = image.shape
- if H < res or W < res:
- top = 0
- bottom = max(0, res - H)
- left = 0
- right = max(0, res - W)
- image = cv2.copyMakeBorder(image, top, bottom, left, right, cv2.BORDER_REFLECT)
- H, W, C = image.shape
- h = (H - res) // 2
- w = (W - res) // 2
- image = image[h:h+res, w:w+res, :]
-
- image = np.ascontiguousarray(image.transpose(2, 0, 1)) # HWC => CHW
- return image
-
- def _load_raw_labels(self):
- fname = 'labels.json'
- if fname not in self._all_fnames:
- return None
- with self._open_file(fname) as f:
- labels = json.load(f)['labels']
- if labels is None:
- return None
- labels = dict(labels)
- labels = [labels[fname.replace('\\', '/')] for fname in self._image_fnames]
- labels = np.array(labels)
- labels = labels.astype({1: np.int64, 2: np.float32}[labels.ndim])
- return labels
-
- def __getitem__(self, idx):
- image = self._load_raw_image(self._raw_idx[idx])
-
- # for grayscale image
- if image.shape[0] == 1:
- image = np.repeat(image, 3, axis=0)
-
- assert isinstance(image, np.ndarray)
- assert list(image.shape) == self.image_shape
- assert image.dtype == np.uint8
- if self._xflip[idx]:
- assert image.ndim == 3 # CHW
- image = image[:, :, ::-1]
- # mask = RandomMask(image.shape[-1], hole_range=self._hole_range) # hole as 0, reserved as 1
- mask = cv2.imread(self.masks[idx], cv2.IMREAD_GRAYSCALE).astype(np.float32)[np.newaxis, :, :] / 255.0
- return image.copy(), mask, self.get_label(idx)
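`__getitem__` returns an `(image, mask, label)` triple, so the dataset drops straight into a standard `DataLoader`. A hedged sketch; the image directory is a placeholder, and `_load_mask` above reads masks from a hard-coded path that would have to exist or be changed:

```python
import torch

# Hypothetical use of the validation dataset defined above.
dataset = ImageFolderMaskDataset(path='/path/to/places_val_256', resolution=256)
loader = torch.utils.data.DataLoader(dataset, batch_size=4, shuffle=False, num_workers=2)

for images, masks, labels in loader:
    # images: uint8 CHW tensors in [0, 255]; masks: float32 1xHxW in [0, 1]
    # (the commented-out RandomMask line treats holes as 0 and kept pixels as 1)
    images = images.to(torch.float32) / 127.5 - 1.0  # common [-1, 1] normalization
    break
```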
diff --git a/spaces/SAAZIZI/SummarizeAV/query_service/__init__.py b/spaces/SAAZIZI/SummarizeAV/query_service/__init__.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/SeViLA/SeViLA/lavis/models/clip_models/__init__.py b/spaces/SeViLA/SeViLA/lavis/models/clip_models/__init__.py
deleted file mode 100644
index 325e25255550a00fdd082deb82a8a0da567cadb0..0000000000000000000000000000000000000000
--- a/spaces/SeViLA/SeViLA/lavis/models/clip_models/__init__.py
+++ /dev/null
@@ -1,14 +0,0 @@
-"""
- Copyright (c) 2022, salesforce.com, inc.
- All rights reserved.
- SPDX-License-Identifier: BSD-3-Clause
- For full license text, see the LICENSE file in the repo root or https://opensource.org/licenses/BSD-3-Clause
-
- Based on https://github.com/mlfoundations/open_clip
-"""
-
-""" OpenAI pretrained model functions
-Adapted from https://github.com/mlfoundations/open_clip and https://github.com/openai/CLIP.
-
-Originally MIT License, Copyright (c) 2021 OpenAI.
-"""
diff --git a/spaces/Shredder/CONBERT-3/fin_readability_sustainability.py b/spaces/Shredder/CONBERT-3/fin_readability_sustainability.py
deleted file mode 100644
index 53ea0c60eab0dd27868f9bdc6d4652ea0ddc71b9..0000000000000000000000000000000000000000
--- a/spaces/Shredder/CONBERT-3/fin_readability_sustainability.py
+++ /dev/null
@@ -1,110 +0,0 @@
-import torch
-import transformers
-from torch.utils.data import Dataset, DataLoader
-from transformers import RobertaModel, RobertaTokenizer, BertModel, BertTokenizer
-import pandas as pd
-
-device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
-
-MAX_LEN = 128
-BATCH_SIZE = 20
-text_col_name = 'sentence'
-
-def scoring_data_prep(dataset):
- out = []
- target = []
- mask = []
-
- for i in range(len(dataset)):
- rec = dataset[i]
- out.append(rec['ids'].reshape(-1,MAX_LEN))
- mask.append(rec['mask'].reshape(-1,MAX_LEN))
-
- out_stack = torch.cat(out, dim = 0)
- mask_stack = torch.cat(mask, dim =0 )
- out_stack = out_stack.to(device, dtype = torch.long)
- mask_stack = mask_stack.to(device, dtype = torch.long)
-
- return out_stack, mask_stack
-
-class Triage(Dataset):
- """
- This is a subclass of torch packages Dataset class. It processes input to create ids, masks and targets required for model training.
- """
-
- def __init__(self, dataframe, tokenizer, max_len, text_col_name):
- self.len = len(dataframe)
- self.data = dataframe
- self.tokenizer = tokenizer
- self.max_len = max_len
- self.text_col_name = text_col_name
-
-
- def __getitem__(self, index):
- title = str(self.data[self.text_col_name][index])
- title = " ".join(title.split())
- inputs = self.tokenizer.encode_plus(
- title,
- None,
- add_special_tokens=True,
- max_length=self.max_len,
- pad_to_max_length=True, #padding='max_length' #For future version use `padding='max_length'`
- return_token_type_ids=True,
- truncation=True,
- )
- ids = inputs["input_ids"]
- mask = inputs["attention_mask"]
-
- return {
- "ids": torch.tensor(ids, dtype=torch.long),
- "mask": torch.tensor(mask, dtype=torch.long),
-
- }
-
- def __len__(self):
- return self.len
-
-class BERTClass(torch.nn.Module):
- def __init__(self, num_class, task):
- super(BERTClass, self).__init__()
- self.num_class = num_class
- if task =="sustanability":
- self.l1 = RobertaModel.from_pretrained("roberta-base")
- else:
- self.l1 = BertModel.from_pretrained("ProsusAI/finbert")
- self.pre_classifier = torch.nn.Linear(768, 768)
- self.dropout = torch.nn.Dropout(0.3)
- self.classifier = torch.nn.Linear(768, self.num_class)
- self.history = dict()
-
- def forward(self, input_ids, attention_mask):
- output_1 = self.l1(input_ids=input_ids, attention_mask=attention_mask)
- hidden_state = output_1[0]
- pooler = hidden_state[:, 0]
- pooler = self.pre_classifier(pooler)
- pooler = torch.nn.ReLU()(pooler)
- pooler = self.dropout(pooler)
- output = self.classifier(pooler)
- return output
-
-def do_predict(model, tokenizer, test_df):
- test_set = Triage(test_df, tokenizer, MAX_LEN, text_col_name)
- test_params = {'batch_size' : BATCH_SIZE, 'shuffle': False, 'num_workers':0}
- test_loader = DataLoader(test_set, **test_params)
- out_stack, mask_stack = scoring_data_prep(dataset = test_set)
- n = 0
- combined_output = []
- model.eval()
- with torch.no_grad():
- while n < test_df.shape[0]:
- output = model(out_stack[n:n+BATCH_SIZE,:],mask_stack[n:n+BATCH_SIZE,:])
- n = n + BATCH_SIZE
- combined_output.append(output)
- combined_output = torch.cat(combined_output, dim = 0)
- preds = torch.argsort(combined_output, axis = 1, descending = True)
- preds = preds.to('cpu')
- actual_predictions = [i[0] for i in preds.tolist()]
- combined_output = combined_output.to('cpu')
- prob_predictions= [i[1] for i in combined_output.tolist()]
- return (actual_predictions, prob_predictions)
-
\ No newline at end of file
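`do_predict` only needs a trained `BERTClass`, a matching tokenizer, and a DataFrame with a `sentence` column (the hard-coded `text_col_name` above). A hedged sketch; the checkpoint path and class count are placeholders:

```python
import pandas as pd
import torch
from transformers import BertTokenizer

from fin_readability_sustainability import BERTClass, do_predict

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Hypothetical 3-class readability model restored from a local checkpoint.
tokenizer = BertTokenizer.from_pretrained("ProsusAI/finbert")
model = BERTClass(num_class=3, task="readability").to(device)
model.load_state_dict(torch.load("readability_model.bin", map_location=device))

test_df = pd.DataFrame({"sentence": [
    "Revenue grew 12% year over year.",
    "EBITDA margin compression reflects unfavourable operating leverage.",
]})
labels, scores = do_predict(model, tokenizer, test_df)
print(labels, scores)
```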
diff --git a/spaces/Singularity666/RadiXGPT_/app.py b/spaces/Singularity666/RadiXGPT_/app.py
deleted file mode 100644
index f67a828cddf75e6d1cca19e109a16fbef2a8f855..0000000000000000000000000000000000000000
--- a/spaces/Singularity666/RadiXGPT_/app.py
+++ /dev/null
@@ -1,149 +0,0 @@
-import torch
-from PIL import Image
-import streamlit as st
-import numpy as np
-import pandas as pd
-from main import predict_caption, CLIPModel, get_text_embeddings
-import openai
-import base64
-from docx import Document
-from docx.enum.text import WD_PARAGRAPH_ALIGNMENT
-from io import BytesIO
-import os
-import re
-
-openai.api_key = "sk-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"
-
-st.markdown(
- """
-
- """,
- unsafe_allow_html=True,
-)
-
-
-
-device = torch.device("cpu")
-
-testing_df = pd.read_csv("testing_df.csv")
-model = CLIPModel() # Create an instance of CLIPModel
-# Load the model
-state_dict = torch.load("weights.pt", map_location=torch.device('cpu'))
-print("Loaded State Dict Keys:", state_dict.keys())
-
-# Create an instance of CLIPModel
-model = CLIPModel().to(device)
-print("Model Keys:", model.state_dict().keys())
-
-# Load the state_dict into the model
-model.load_state_dict(state_dict, strict=False) # Set strict=False to ignore unexpected keys
-
-text_embeddings = torch.load('saved_text_embeddings.pt', map_location=device)
-
-def download_link(content, filename, link_text):
- b64 = base64.b64encode(content).decode()
- href = f'<a href="data:application/octet-stream;base64,{b64}" download="{filename}">{link_text}</a>'
- return href
-
-def show_predicted_caption(image, top_k=8):
- matches = predict_caption(
- image, model, text_embeddings, testing_df["caption"]
- )[:top_k]
- cleaned_matches = [re.sub(r'\s\(ROCO_\d+\)', '', match) for match in matches] # Add this line to clean the matches
- return cleaned_matches # Return the cleaned_matches instead of matches
-
-def generate_radiology_report(prompt):
- response = openai.Completion.create(
- engine="text-davinci-003",
- prompt=prompt,
- max_tokens=800,
- n=1,
- stop=None,
- temperature=1,
- )
- report = response.choices[0].text.strip()
- # Remove reference string from the report
- report = re.sub(r'\(ROCO_\d+\)', '', report).strip()
- return report
-
-
-def save_as_docx(text, filename):
- document = Document()
- document.add_paragraph(text)
- with BytesIO() as output:
- document.save(output)
- output.seek(0)
- return output.getvalue()
-
-st.title("RadiXGPT: An Evolution of machine doctors towards Radiology")
-
-
-# Collect user's personal information
-st.subheader("Personal Information")
-first_name = st.text_input("First Name")
-last_name = st.text_input("Last Name")
-age = st.number_input("Age", min_value=0, max_value=120, value=25, step=1)
-gender = st.selectbox("Gender", ["Male", "Female", "Other"])
-
-st.write("Upload Scan to get Radiological Report:")
-uploaded_file = st.file_uploader("Choose an image...", type=["jpg", "png", "jpeg"])
-if uploaded_file is not None:
- image = Image.open(uploaded_file)
- if st.button("Generate Caption"):
- with st.spinner("Generating caption..."):
- image_np = np.array(image)
- caption = show_predicted_caption(image_np)[0]
-
- st.success(f"Caption: {caption}")
-
- # Generate the radiology report
- radiology_report = generate_radiology_report(f"Write Complete Radiology Report for this with clinical info, subjective, Assessment, Finding, Impressions, Conclusion and more in proper order : {caption}")
-
- # Add personal information to the radiology report
- radiology_report_with_personal_info = f"Patient Name: {first_name} {last_name}\nAge: {age}\nGender: {gender}\n\n{radiology_report}"
-
- st.header("Radiology Report")
- st.write(radiology_report_with_personal_info)
- st.markdown(download_link(save_as_docx(radiology_report_with_personal_info, "radiology_report.docx"), "radiology_report.docx", "Download Report as DOCX"), unsafe_allow_html=True)
-
- feedback_options = ["Satisfied", "Not Satisfied"]
- selected_feedback = st.radio("Please provide feedback on the generated report:", feedback_options)
-
- if selected_feedback == "Not Satisfied":
- if st.button("Regenerate Report"):
- with st.spinner("Regenerating report..."):
- alternative_caption = get_alternative_caption(image_np, model, text_embeddings, testing_df["caption"])
- regenerated_radiology_report = generate_radiology_report(f"Write Complete Radiology Report for this with clinical info, subjective, Assessment, Finding, Impressions, Conclusion and more in proper order : {alternative_caption}")
-
- regenerated_radiology_report_with_personal_info = f"Patient Name: {first_name} {last_name}\nAge: {age}\nGender: {gender}\n\n{regenerated_radiology_report}"
-
- st.header("Regenerated Radiology Report")
- st.write(regenerated_radiology_report_with_personal_info)
- st.markdown(download_link(save_as_docx(regenerated_radiology_report_with_personal_info, "regenerated_radiology_report.docx"), "regenerated_radiology_report.docx", "Download Regenerated Report as DOCX"), unsafe_allow_html=True)
\ No newline at end of file
diff --git a/spaces/Smotto/Vocal-Isolator/src/constants.py b/spaces/Smotto/Vocal-Isolator/src/constants.py
deleted file mode 100644
index 6845f8a7ababcb635ae7fb4f1a6ab68d2d16810a..0000000000000000000000000000000000000000
--- a/spaces/Smotto/Vocal-Isolator/src/constants.py
+++ /dev/null
@@ -1,9 +0,0 @@
-# Third-party
-import torch
-
-# Global Variables
-COMPUTATION_DEVICE = "cuda" if torch.cuda.is_available() else "cpu"
-EXECUTION_PROVIDER_LIST = ["CUDAExecutionProvider", "CPUExecutionProvider"]
-ONNX_MODEL_PATH = "./pretrained_models/MDX_net/Kim_Vocal.onnx"
-INPUT_FOLDER = "./datasets/input"
-OUTPUT_FOLDER = "./datasets/output"
diff --git a/spaces/Sultannn/YOLOX-Demo/README.md b/spaces/Sultannn/YOLOX-Demo/README.md
deleted file mode 100644
index c2554d23cda278a5563ab897aa06a12e5e2e5573..0000000000000000000000000000000000000000
--- a/spaces/Sultannn/YOLOX-Demo/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: YOLOX Demo
-emoji: 🖼
-colorFrom: blue
-colorTo: yellow
-sdk: gradio
-sdk_version: 2.8.10
-app_file: app.py
-pinned: false
-license: apache-2.0
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces#reference
diff --git a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/_distutils_hack/override.py b/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/_distutils_hack/override.py
deleted file mode 100644
index 2cc433a4a55e3b41fa31089918fb62096092f89f..0000000000000000000000000000000000000000
--- a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/_distutils_hack/override.py
+++ /dev/null
@@ -1 +0,0 @@
-__import__('_distutils_hack').do_override()
diff --git a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/aiohttp/web_fileresponse.py b/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/aiohttp/web_fileresponse.py
deleted file mode 100644
index f41ed3fd0a9c1e0d5e45ce1e97b99bfef8361cac..0000000000000000000000000000000000000000
--- a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/aiohttp/web_fileresponse.py
+++ /dev/null
@@ -1,288 +0,0 @@
-import asyncio
-import mimetypes
-import os
-import pathlib
-import sys
-from typing import ( # noqa
- IO,
- TYPE_CHECKING,
- Any,
- Awaitable,
- Callable,
- Iterator,
- List,
- Optional,
- Tuple,
- Union,
- cast,
-)
-
-from . import hdrs
-from .abc import AbstractStreamWriter
-from .helpers import ETAG_ANY, ETag
-from .typedefs import Final, LooseHeaders
-from .web_exceptions import (
- HTTPNotModified,
- HTTPPartialContent,
- HTTPPreconditionFailed,
- HTTPRequestRangeNotSatisfiable,
-)
-from .web_response import StreamResponse
-
-__all__ = ("FileResponse",)
-
-if TYPE_CHECKING: # pragma: no cover
- from .web_request import BaseRequest
-
-
-_T_OnChunkSent = Optional[Callable[[bytes], Awaitable[None]]]
-
-
-NOSENDFILE: Final[bool] = bool(os.environ.get("AIOHTTP_NOSENDFILE"))
-
-
-class FileResponse(StreamResponse):
- """A response object can be used to send files."""
-
- def __init__(
- self,
- path: Union[str, pathlib.Path],
- chunk_size: int = 256 * 1024,
- status: int = 200,
- reason: Optional[str] = None,
- headers: Optional[LooseHeaders] = None,
- ) -> None:
- super().__init__(status=status, reason=reason, headers=headers)
-
- if isinstance(path, str):
- path = pathlib.Path(path)
-
- self._path = path
- self._chunk_size = chunk_size
-
- async def _sendfile_fallback(
- self, writer: AbstractStreamWriter, fobj: IO[Any], offset: int, count: int
- ) -> AbstractStreamWriter:
-        # To keep memory usage low, fobj is transferred in chunks
- # controlled by the constructor's chunk_size argument.
-
- chunk_size = self._chunk_size
- loop = asyncio.get_event_loop()
-
- await loop.run_in_executor(None, fobj.seek, offset)
-
- chunk = await loop.run_in_executor(None, fobj.read, chunk_size)
- while chunk:
- await writer.write(chunk)
- count = count - chunk_size
- if count <= 0:
- break
- chunk = await loop.run_in_executor(None, fobj.read, min(chunk_size, count))
-
- await writer.drain()
- return writer
-
- async def _sendfile(
- self, request: "BaseRequest", fobj: IO[Any], offset: int, count: int
- ) -> AbstractStreamWriter:
- writer = await super().prepare(request)
- assert writer is not None
-
- if NOSENDFILE or sys.version_info < (3, 7) or self.compression:
- return await self._sendfile_fallback(writer, fobj, offset, count)
-
- loop = request._loop
- transport = request.transport
- assert transport is not None
-
- try:
- await loop.sendfile(transport, fobj, offset, count)
- except NotImplementedError:
- return await self._sendfile_fallback(writer, fobj, offset, count)
-
- await super().write_eof()
- return writer
-
- @staticmethod
- def _strong_etag_match(etag_value: str, etags: Tuple[ETag, ...]) -> bool:
- if len(etags) == 1 and etags[0].value == ETAG_ANY:
- return True
- return any(etag.value == etag_value for etag in etags if not etag.is_weak)
-
- async def _not_modified(
- self, request: "BaseRequest", etag_value: str, last_modified: float
- ) -> Optional[AbstractStreamWriter]:
- self.set_status(HTTPNotModified.status_code)
- self._length_check = False
- self.etag = etag_value # type: ignore[assignment]
- self.last_modified = last_modified # type: ignore[assignment]
-        # Delete any Content-Length headers provided by the user. An HTTP 304
-        # response should always have an empty body.
- return await super().prepare(request)
-
- async def _precondition_failed(
- self, request: "BaseRequest"
- ) -> Optional[AbstractStreamWriter]:
- self.set_status(HTTPPreconditionFailed.status_code)
- self.content_length = 0
- return await super().prepare(request)
-
- async def prepare(self, request: "BaseRequest") -> Optional[AbstractStreamWriter]:
- filepath = self._path
-
- gzip = False
- if "gzip" in request.headers.get(hdrs.ACCEPT_ENCODING, ""):
- gzip_path = filepath.with_name(filepath.name + ".gz")
-
- if gzip_path.is_file():
- filepath = gzip_path
- gzip = True
-
- loop = asyncio.get_event_loop()
- st: os.stat_result = await loop.run_in_executor(None, filepath.stat)
-
- etag_value = f"{st.st_mtime_ns:x}-{st.st_size:x}"
- last_modified = st.st_mtime
-
- # https://tools.ietf.org/html/rfc7232#section-6
- ifmatch = request.if_match
- if ifmatch is not None and not self._strong_etag_match(etag_value, ifmatch):
- return await self._precondition_failed(request)
-
- unmodsince = request.if_unmodified_since
- if (
- unmodsince is not None
- and ifmatch is None
- and st.st_mtime > unmodsince.timestamp()
- ):
- return await self._precondition_failed(request)
-
- ifnonematch = request.if_none_match
- if ifnonematch is not None and self._strong_etag_match(etag_value, ifnonematch):
- return await self._not_modified(request, etag_value, last_modified)
-
- modsince = request.if_modified_since
- if (
- modsince is not None
- and ifnonematch is None
- and st.st_mtime <= modsince.timestamp()
- ):
- return await self._not_modified(request, etag_value, last_modified)
-
- if hdrs.CONTENT_TYPE not in self.headers:
- ct, encoding = mimetypes.guess_type(str(filepath))
- if not ct:
- ct = "application/octet-stream"
- should_set_ct = True
- else:
- encoding = "gzip" if gzip else None
- should_set_ct = False
-
- status = self._status
- file_size = st.st_size
- count = file_size
-
- start = None
-
- ifrange = request.if_range
- if ifrange is None or st.st_mtime <= ifrange.timestamp():
- # If-Range header check:
- # condition = cached date >= last modification date
- # return 206 if True else 200.
- # if False:
- # Range header would not be processed, return 200
- # if True but Range header missing
- # return 200
- try:
- rng = request.http_range
- start = rng.start
- end = rng.stop
- except ValueError:
- # https://tools.ietf.org/html/rfc7233:
- # A server generating a 416 (Range Not Satisfiable) response to
- # a byte-range request SHOULD send a Content-Range header field
- # with an unsatisfied-range value.
- # The complete-length in a 416 response indicates the current
- # length of the selected representation.
- #
- # Will do the same below. Many servers ignore this and do not
- # send a Content-Range header with HTTP 416
- self.headers[hdrs.CONTENT_RANGE] = f"bytes */{file_size}"
- self.set_status(HTTPRequestRangeNotSatisfiable.status_code)
- return await super().prepare(request)
-
- # If a range request has been made, convert start, end slice
- # notation into file pointer offset and count
- if start is not None or end is not None:
- if start < 0 and end is None: # return tail of file
- start += file_size
- if start < 0:
- # if Range:bytes=-1000 in request header but file size
- # is only 200, there would be trouble without this
- start = 0
- count = file_size - start
- else:
- # rfc7233:If the last-byte-pos value is
- # absent, or if the value is greater than or equal to
- # the current length of the representation data,
- # the byte range is interpreted as the remainder
- # of the representation (i.e., the server replaces the
- # value of last-byte-pos with a value that is one less than
- # the current length of the selected representation).
- count = (
- min(end if end is not None else file_size, file_size) - start
- )
-
- if start >= file_size:
- # HTTP 416 should be returned in this case.
- #
- # According to https://tools.ietf.org/html/rfc7233:
- # If a valid byte-range-set includes at least one
- # byte-range-spec with a first-byte-pos that is less than
- # the current length of the representation, or at least one
- # suffix-byte-range-spec with a non-zero suffix-length,
- # then the byte-range-set is satisfiable. Otherwise, the
- # byte-range-set is unsatisfiable.
- self.headers[hdrs.CONTENT_RANGE] = f"bytes */{file_size}"
- self.set_status(HTTPRequestRangeNotSatisfiable.status_code)
- return await super().prepare(request)
-
- status = HTTPPartialContent.status_code
- # Even though you are sending the whole file, you should still
-            # return an HTTP 206 for a Range request.
- self.set_status(status)
-
- if should_set_ct:
- self.content_type = ct # type: ignore[assignment]
- if encoding:
- self.headers[hdrs.CONTENT_ENCODING] = encoding
- if gzip:
- self.headers[hdrs.VARY] = hdrs.ACCEPT_ENCODING
-
- self.etag = etag_value # type: ignore[assignment]
- self.last_modified = st.st_mtime # type: ignore[assignment]
- self.content_length = count
-
- self.headers[hdrs.ACCEPT_RANGES] = "bytes"
-
- real_start = cast(int, start)
-
- if status == HTTPPartialContent.status_code:
- self.headers[hdrs.CONTENT_RANGE] = "bytes {}-{}/{}".format(
- real_start, real_start + count - 1, file_size
- )
-
- # If we are sending 0 bytes calling sendfile() will throw a ValueError
- if count == 0 or request.method == hdrs.METH_HEAD or self.status in [204, 304]:
- return await super().prepare(request)
-
- fobj = await loop.run_in_executor(None, filepath.open, "rb")
- if start: # be aware that start could be None or int=0 here.
- offset = start
- else:
- offset = 0
-
- try:
- return await self._sendfile(request, fobj, offset, count)
- finally:
- await loop.run_in_executor(None, fobj.close)
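The start/end handling in prepare() above converts an HTTP Range header into a file offset and a byte count. The same arithmetic as a standalone sketch (simplified, with illustrative names, not aiohttp's API; end is exclusive, as in request.http_range):

def range_to_offset_count(start, end, file_size):
    # Mirrors the logic above: returns (offset, count), or None for an HTTP 416.
    if start is None and end is None:
        return 0, file_size                       # no Range header: send whole file
    if start is not None and start < 0 and end is None:
        start = max(file_size + start, 0)         # suffix range, e.g. bytes=-1000
        return start, file_size - start
    start = start or 0
    if start >= file_size:
        return None                               # start beyond EOF: unsatisfiable
    return start, min(end if end is not None else file_size, file_size) - start

assert range_to_offset_count(None, None, 200) == (0, 200)
assert range_to_offset_count(-1000, None, 200) == (0, 200)   # tail larger than file
assert range_to_offset_count(50, 120, 200) == (50, 70)
assert range_to_offset_count(250, None, 200) is None          # -> 416
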
diff --git a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/anyio/abc/_sockets.py b/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/anyio/abc/_sockets.py
deleted file mode 100644
index 6aac5f7c22395759ebe3d5633d2adcf1f4ff1fe5..0000000000000000000000000000000000000000
--- a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/anyio/abc/_sockets.py
+++ /dev/null
@@ -1,160 +0,0 @@
-from __future__ import annotations
-
-import socket
-from abc import abstractmethod
-from contextlib import AsyncExitStack
-from io import IOBase
-from ipaddress import IPv4Address, IPv6Address
-from socket import AddressFamily
-from typing import (
- Any,
- Callable,
- Collection,
- Mapping,
- Tuple,
- TypeVar,
- Union,
-)
-
-from .._core._tasks import create_task_group
-from .._core._typedattr import (
- TypedAttributeProvider,
- TypedAttributeSet,
- typed_attribute,
-)
-from ._streams import ByteStream, Listener, UnreliableObjectStream
-from ._tasks import TaskGroup
-
-IPAddressType = Union[str, IPv4Address, IPv6Address]
-IPSockAddrType = Tuple[str, int]
-SockAddrType = Union[IPSockAddrType, str]
-UDPPacketType = Tuple[bytes, IPSockAddrType]
-T_Retval = TypeVar("T_Retval")
-
-
-class SocketAttribute(TypedAttributeSet):
- #: the address family of the underlying socket
- family: AddressFamily = typed_attribute()
- #: the local socket address of the underlying socket
- local_address: SockAddrType = typed_attribute()
- #: for IP addresses, the local port the underlying socket is bound to
- local_port: int = typed_attribute()
- #: the underlying stdlib socket object
- raw_socket: socket.socket = typed_attribute()
- #: the remote address the underlying socket is connected to
- remote_address: SockAddrType = typed_attribute()
- #: for IP addresses, the remote port the underlying socket is connected to
- remote_port: int = typed_attribute()
-
-
-class _SocketProvider(TypedAttributeProvider):
- @property
- def extra_attributes(self) -> Mapping[Any, Callable[[], Any]]:
- from .._core._sockets import convert_ipv6_sockaddr as convert
-
- attributes: dict[Any, Callable[[], Any]] = {
- SocketAttribute.family: lambda: self._raw_socket.family,
- SocketAttribute.local_address: lambda: convert(
- self._raw_socket.getsockname()
- ),
- SocketAttribute.raw_socket: lambda: self._raw_socket,
- }
- try:
- peername: tuple[str, int] | None = convert(self._raw_socket.getpeername())
- except OSError:
- peername = None
-
- # Provide the remote address for connected sockets
- if peername is not None:
- attributes[SocketAttribute.remote_address] = lambda: peername
-
- # Provide local and remote ports for IP based sockets
- if self._raw_socket.family in (AddressFamily.AF_INET, AddressFamily.AF_INET6):
- attributes[
- SocketAttribute.local_port
- ] = lambda: self._raw_socket.getsockname()[1]
- if peername is not None:
- remote_port = peername[1]
- attributes[SocketAttribute.remote_port] = lambda: remote_port
-
- return attributes
-
- @property
- @abstractmethod
- def _raw_socket(self) -> socket.socket:
- pass
-
-
-class SocketStream(ByteStream, _SocketProvider):
- """
- Transports bytes over a socket.
-
- Supports all relevant extra attributes from :class:`~SocketAttribute`.
- """
-
-
-class UNIXSocketStream(SocketStream):
- @abstractmethod
- async def send_fds(self, message: bytes, fds: Collection[int | IOBase]) -> None:
- """
- Send file descriptors along with a message to the peer.
-
- :param message: a non-empty bytestring
- :param fds: a collection of files (either numeric file descriptors or open file or socket
- objects)
- """
-
- @abstractmethod
- async def receive_fds(self, msglen: int, maxfds: int) -> tuple[bytes, list[int]]:
- """
- Receive file descriptors along with a message from the peer.
-
- :param msglen: length of the message to expect from the peer
- :param maxfds: maximum number of file descriptors to expect from the peer
- :return: a tuple of (message, file descriptors)
- """
-
-
-class SocketListener(Listener[SocketStream], _SocketProvider):
- """
- Listens to incoming socket connections.
-
- Supports all relevant extra attributes from :class:`~SocketAttribute`.
- """
-
- @abstractmethod
- async def accept(self) -> SocketStream:
- """Accept an incoming connection."""
-
- async def serve(
- self,
- handler: Callable[[SocketStream], Any],
- task_group: TaskGroup | None = None,
- ) -> None:
- async with AsyncExitStack() as exit_stack:
- if task_group is None:
- task_group = await exit_stack.enter_async_context(create_task_group())
-
- while True:
- stream = await self.accept()
- task_group.start_soon(handler, stream)
-
-
-class UDPSocket(UnreliableObjectStream[UDPPacketType], _SocketProvider):
- """
- Represents an unconnected UDP socket.
-
- Supports all relevant extra attributes from :class:`~SocketAttribute`.
- """
-
- async def sendto(self, data: bytes, host: str, port: int) -> None:
- """Alias for :meth:`~.UnreliableObjectSendStream.send` ((data, (host, port)))."""
- return await self.send((data, (host, port)))
-
-
-class ConnectedUDPSocket(UnreliableObjectStream[bytes], _SocketProvider):
- """
-    Represents a connected UDP socket.
-
- Supports all relevant extra attributes from :class:`~SocketAttribute`.
- """
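SocketListener.serve above accepts connections in a loop and hands each stream to the handler inside a task group. A minimal usage sketch (illustrative only, assuming the anyio package; the port is arbitrary):

import anyio

async def echo_handler(stream):
    # Echo every received chunk back until the peer closes the connection.
    async with stream:
        async for chunk in stream:
            await stream.send(chunk)

async def main():
    listener = await anyio.create_tcp_listener(local_port=9999)
    await listener.serve(echo_handler)  # runs until cancelled

# anyio.run(main)  # uncomment to actually start the echo server
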
diff --git a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/debugpy/_vendored/pydevd/pydevd_plugins/extensions/types/__init__.py b/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/debugpy/_vendored/pydevd/pydevd_plugins/extensions/types/__init__.py
deleted file mode 100644
index 274d7bc4fcf5f780858b55a14c6c4ac85f2a7f0d..0000000000000000000000000000000000000000
--- a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/debugpy/_vendored/pydevd/pydevd_plugins/extensions/types/__init__.py
+++ /dev/null
@@ -1,7 +0,0 @@
-import warnings
-with warnings.catch_warnings():
- try:
- __import__('pkg_resources').declare_namespace(__name__)
- except ImportError:
- import pkgutil
- __path__ = pkgutil.extend_path(__path__, __name__)
diff --git a/spaces/Superlang/ImageProcessor/annotator/openpose/face.py b/spaces/Superlang/ImageProcessor/annotator/openpose/face.py
deleted file mode 100644
index f3c46d77664aa9fa91c63785a1485a396f05cacc..0000000000000000000000000000000000000000
--- a/spaces/Superlang/ImageProcessor/annotator/openpose/face.py
+++ /dev/null
@@ -1,362 +0,0 @@
-import logging
-import numpy as np
-from torchvision.transforms import ToTensor, ToPILImage
-import torch
-import torch.nn.functional as F
-import cv2
-
-from . import util
-from torch.nn import Conv2d, Module, ReLU, MaxPool2d, init
-
-
-class FaceNet(Module):
- """Model the cascading heatmaps. """
- def __init__(self):
- super(FaceNet, self).__init__()
- # cnn to make feature map
- self.relu = ReLU()
- self.max_pooling_2d = MaxPool2d(kernel_size=2, stride=2)
- self.conv1_1 = Conv2d(in_channels=3, out_channels=64,
- kernel_size=3, stride=1, padding=1)
- self.conv1_2 = Conv2d(
- in_channels=64, out_channels=64, kernel_size=3, stride=1,
- padding=1)
- self.conv2_1 = Conv2d(
- in_channels=64, out_channels=128, kernel_size=3, stride=1,
- padding=1)
- self.conv2_2 = Conv2d(
- in_channels=128, out_channels=128, kernel_size=3, stride=1,
- padding=1)
- self.conv3_1 = Conv2d(
- in_channels=128, out_channels=256, kernel_size=3, stride=1,
- padding=1)
- self.conv3_2 = Conv2d(
- in_channels=256, out_channels=256, kernel_size=3, stride=1,
- padding=1)
- self.conv3_3 = Conv2d(
- in_channels=256, out_channels=256, kernel_size=3, stride=1,
- padding=1)
- self.conv3_4 = Conv2d(
- in_channels=256, out_channels=256, kernel_size=3, stride=1,
- padding=1)
- self.conv4_1 = Conv2d(
- in_channels=256, out_channels=512, kernel_size=3, stride=1,
- padding=1)
- self.conv4_2 = Conv2d(
- in_channels=512, out_channels=512, kernel_size=3, stride=1,
- padding=1)
- self.conv4_3 = Conv2d(
- in_channels=512, out_channels=512, kernel_size=3, stride=1,
- padding=1)
- self.conv4_4 = Conv2d(
- in_channels=512, out_channels=512, kernel_size=3, stride=1,
- padding=1)
- self.conv5_1 = Conv2d(
- in_channels=512, out_channels=512, kernel_size=3, stride=1,
- padding=1)
- self.conv5_2 = Conv2d(
- in_channels=512, out_channels=512, kernel_size=3, stride=1,
- padding=1)
- self.conv5_3_CPM = Conv2d(
- in_channels=512, out_channels=128, kernel_size=3, stride=1,
- padding=1)
-
- # stage1
- self.conv6_1_CPM = Conv2d(
- in_channels=128, out_channels=512, kernel_size=1, stride=1,
- padding=0)
- self.conv6_2_CPM = Conv2d(
- in_channels=512, out_channels=71, kernel_size=1, stride=1,
- padding=0)
-
- # stage2
- self.Mconv1_stage2 = Conv2d(
- in_channels=199, out_channels=128, kernel_size=7, stride=1,
- padding=3)
- self.Mconv2_stage2 = Conv2d(
- in_channels=128, out_channels=128, kernel_size=7, stride=1,
- padding=3)
- self.Mconv3_stage2 = Conv2d(
- in_channels=128, out_channels=128, kernel_size=7, stride=1,
- padding=3)
- self.Mconv4_stage2 = Conv2d(
- in_channels=128, out_channels=128, kernel_size=7, stride=1,
- padding=3)
- self.Mconv5_stage2 = Conv2d(
- in_channels=128, out_channels=128, kernel_size=7, stride=1,
- padding=3)
- self.Mconv6_stage2 = Conv2d(
- in_channels=128, out_channels=128, kernel_size=1, stride=1,
- padding=0)
- self.Mconv7_stage2 = Conv2d(
- in_channels=128, out_channels=71, kernel_size=1, stride=1,
- padding=0)
-
- # stage3
- self.Mconv1_stage3 = Conv2d(
- in_channels=199, out_channels=128, kernel_size=7, stride=1,
- padding=3)
- self.Mconv2_stage3 = Conv2d(
- in_channels=128, out_channels=128, kernel_size=7, stride=1,
- padding=3)
- self.Mconv3_stage3 = Conv2d(
- in_channels=128, out_channels=128, kernel_size=7, stride=1,
- padding=3)
- self.Mconv4_stage3 = Conv2d(
- in_channels=128, out_channels=128, kernel_size=7, stride=1,
- padding=3)
- self.Mconv5_stage3 = Conv2d(
- in_channels=128, out_channels=128, kernel_size=7, stride=1,
- padding=3)
- self.Mconv6_stage3 = Conv2d(
- in_channels=128, out_channels=128, kernel_size=1, stride=1,
- padding=0)
- self.Mconv7_stage3 = Conv2d(
- in_channels=128, out_channels=71, kernel_size=1, stride=1,
- padding=0)
-
- # stage4
- self.Mconv1_stage4 = Conv2d(
- in_channels=199, out_channels=128, kernel_size=7, stride=1,
- padding=3)
- self.Mconv2_stage4 = Conv2d(
- in_channels=128, out_channels=128, kernel_size=7, stride=1,
- padding=3)
- self.Mconv3_stage4 = Conv2d(
- in_channels=128, out_channels=128, kernel_size=7, stride=1,
- padding=3)
- self.Mconv4_stage4 = Conv2d(
- in_channels=128, out_channels=128, kernel_size=7, stride=1,
- padding=3)
- self.Mconv5_stage4 = Conv2d(
- in_channels=128, out_channels=128, kernel_size=7, stride=1,
- padding=3)
- self.Mconv6_stage4 = Conv2d(
- in_channels=128, out_channels=128, kernel_size=1, stride=1,
- padding=0)
- self.Mconv7_stage4 = Conv2d(
- in_channels=128, out_channels=71, kernel_size=1, stride=1,
- padding=0)
-
- # stage5
- self.Mconv1_stage5 = Conv2d(
- in_channels=199, out_channels=128, kernel_size=7, stride=1,
- padding=3)
- self.Mconv2_stage5 = Conv2d(
- in_channels=128, out_channels=128, kernel_size=7, stride=1,
- padding=3)
- self.Mconv3_stage5 = Conv2d(
- in_channels=128, out_channels=128, kernel_size=7, stride=1,
- padding=3)
- self.Mconv4_stage5 = Conv2d(
- in_channels=128, out_channels=128, kernel_size=7, stride=1,
- padding=3)
- self.Mconv5_stage5 = Conv2d(
- in_channels=128, out_channels=128, kernel_size=7, stride=1,
- padding=3)
- self.Mconv6_stage5 = Conv2d(
- in_channels=128, out_channels=128, kernel_size=1, stride=1,
- padding=0)
- self.Mconv7_stage5 = Conv2d(
- in_channels=128, out_channels=71, kernel_size=1, stride=1,
- padding=0)
-
- # stage6
- self.Mconv1_stage6 = Conv2d(
- in_channels=199, out_channels=128, kernel_size=7, stride=1,
- padding=3)
- self.Mconv2_stage6 = Conv2d(
- in_channels=128, out_channels=128, kernel_size=7, stride=1,
- padding=3)
- self.Mconv3_stage6 = Conv2d(
- in_channels=128, out_channels=128, kernel_size=7, stride=1,
- padding=3)
- self.Mconv4_stage6 = Conv2d(
- in_channels=128, out_channels=128, kernel_size=7, stride=1,
- padding=3)
- self.Mconv5_stage6 = Conv2d(
- in_channels=128, out_channels=128, kernel_size=7, stride=1,
- padding=3)
- self.Mconv6_stage6 = Conv2d(
- in_channels=128, out_channels=128, kernel_size=1, stride=1,
- padding=0)
- self.Mconv7_stage6 = Conv2d(
- in_channels=128, out_channels=71, kernel_size=1, stride=1,
- padding=0)
-
- for m in self.modules():
- if isinstance(m, Conv2d):
- init.constant_(m.bias, 0)
-
- def forward(self, x):
- """Return a list of heatmaps."""
- heatmaps = []
-
- h = self.relu(self.conv1_1(x))
- h = self.relu(self.conv1_2(h))
- h = self.max_pooling_2d(h)
- h = self.relu(self.conv2_1(h))
- h = self.relu(self.conv2_2(h))
- h = self.max_pooling_2d(h)
- h = self.relu(self.conv3_1(h))
- h = self.relu(self.conv3_2(h))
- h = self.relu(self.conv3_3(h))
- h = self.relu(self.conv3_4(h))
- h = self.max_pooling_2d(h)
- h = self.relu(self.conv4_1(h))
- h = self.relu(self.conv4_2(h))
- h = self.relu(self.conv4_3(h))
- h = self.relu(self.conv4_4(h))
- h = self.relu(self.conv5_1(h))
- h = self.relu(self.conv5_2(h))
- h = self.relu(self.conv5_3_CPM(h))
- feature_map = h
-
- # stage1
- h = self.relu(self.conv6_1_CPM(h))
- h = self.conv6_2_CPM(h)
- heatmaps.append(h)
-
- # stage2
- h = torch.cat([h, feature_map], dim=1) # channel concat
- h = self.relu(self.Mconv1_stage2(h))
- h = self.relu(self.Mconv2_stage2(h))
- h = self.relu(self.Mconv3_stage2(h))
- h = self.relu(self.Mconv4_stage2(h))
- h = self.relu(self.Mconv5_stage2(h))
- h = self.relu(self.Mconv6_stage2(h))
- h = self.Mconv7_stage2(h)
- heatmaps.append(h)
-
- # stage3
- h = torch.cat([h, feature_map], dim=1) # channel concat
- h = self.relu(self.Mconv1_stage3(h))
- h = self.relu(self.Mconv2_stage3(h))
- h = self.relu(self.Mconv3_stage3(h))
- h = self.relu(self.Mconv4_stage3(h))
- h = self.relu(self.Mconv5_stage3(h))
- h = self.relu(self.Mconv6_stage3(h))
- h = self.Mconv7_stage3(h)
- heatmaps.append(h)
-
- # stage4
- h = torch.cat([h, feature_map], dim=1) # channel concat
- h = self.relu(self.Mconv1_stage4(h))
- h = self.relu(self.Mconv2_stage4(h))
- h = self.relu(self.Mconv3_stage4(h))
- h = self.relu(self.Mconv4_stage4(h))
- h = self.relu(self.Mconv5_stage4(h))
- h = self.relu(self.Mconv6_stage4(h))
- h = self.Mconv7_stage4(h)
- heatmaps.append(h)
-
- # stage5
- h = torch.cat([h, feature_map], dim=1) # channel concat
- h = self.relu(self.Mconv1_stage5(h))
- h = self.relu(self.Mconv2_stage5(h))
- h = self.relu(self.Mconv3_stage5(h))
- h = self.relu(self.Mconv4_stage5(h))
- h = self.relu(self.Mconv5_stage5(h))
- h = self.relu(self.Mconv6_stage5(h))
- h = self.Mconv7_stage5(h)
- heatmaps.append(h)
-
- # stage6
- h = torch.cat([h, feature_map], dim=1) # channel concat
- h = self.relu(self.Mconv1_stage6(h))
- h = self.relu(self.Mconv2_stage6(h))
- h = self.relu(self.Mconv3_stage6(h))
- h = self.relu(self.Mconv4_stage6(h))
- h = self.relu(self.Mconv5_stage6(h))
- h = self.relu(self.Mconv6_stage6(h))
- h = self.Mconv7_stage6(h)
- heatmaps.append(h)
-
- return heatmaps
-
-
-LOG = logging.getLogger(__name__)
-TOTEN = ToTensor()
-TOPIL = ToPILImage()
-
-
-params = {
- 'gaussian_sigma': 2.5,
- 'inference_img_size': 736, # 368, 736, 1312
- 'heatmap_peak_thresh': 0.1,
- 'crop_scale': 1.5,
- 'line_indices': [
- [0, 1], [1, 2], [2, 3], [3, 4], [4, 5], [5, 6],
- [6, 7], [7, 8], [8, 9], [9, 10], [10, 11], [11, 12], [12, 13],
- [13, 14], [14, 15], [15, 16],
- [17, 18], [18, 19], [19, 20], [20, 21],
- [22, 23], [23, 24], [24, 25], [25, 26],
- [27, 28], [28, 29], [29, 30],
- [31, 32], [32, 33], [33, 34], [34, 35],
- [36, 37], [37, 38], [38, 39], [39, 40], [40, 41], [41, 36],
- [42, 43], [43, 44], [44, 45], [45, 46], [46, 47], [47, 42],
- [48, 49], [49, 50], [50, 51], [51, 52], [52, 53], [53, 54],
- [54, 55], [55, 56], [56, 57], [57, 58], [58, 59], [59, 48],
- [60, 61], [61, 62], [62, 63], [63, 64], [64, 65], [65, 66],
- [66, 67], [67, 60]
- ],
-}
-
-
-class Face(object):
- """
- The OpenPose face landmark detector model.
-
- Args:
-        inference_size: size of the inference image, suggested:
-            368, 736, or 1312; default 736
- gaussian_sigma: blur the heatmaps, default 2.5
- heatmap_peak_thresh: return landmark if over threshold, default 0.1
-
- """
- def __init__(self, face_model_path,
- inference_size=None,
- gaussian_sigma=None,
- heatmap_peak_thresh=None):
- self.inference_size = inference_size or params["inference_img_size"]
- self.sigma = gaussian_sigma or params['gaussian_sigma']
- self.threshold = heatmap_peak_thresh or params["heatmap_peak_thresh"]
- self.model = FaceNet()
- self.model.load_state_dict(torch.load(face_model_path))
- # if torch.cuda.is_available():
- # self.model = self.model.cuda()
- # print('cuda')
- self.model.eval()
-
- def __call__(self, face_img):
- H, W, C = face_img.shape
-
- w_size = 384
- x_data = torch.from_numpy(util.smart_resize(face_img, (w_size, w_size))).permute([2, 0, 1]) / 256.0 - 0.5
-
- x_data = x_data.to(self.cn_device)
-
- with torch.no_grad():
- hs = self.model(x_data[None, ...])
- heatmaps = F.interpolate(
- hs[-1],
- (H, W),
- mode='bilinear', align_corners=True).cpu().numpy()[0]
- return heatmaps
-
- def compute_peaks_from_heatmaps(self, heatmaps):
- all_peaks = []
- for part in range(heatmaps.shape[0]):
- map_ori = heatmaps[part].copy()
- binary = np.ascontiguousarray(map_ori > 0.05, dtype=np.uint8)
-
- if np.sum(binary) == 0:
- continue
-
- positions = np.where(binary > 0.5)
- intensities = map_ori[positions]
- mi = np.argmax(intensities)
- y, x = positions[0][mi], positions[1][mi]
- all_peaks.append([x, y])
-
- return np.array(all_peaks)
\ No newline at end of file
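compute_peaks_from_heatmaps above keeps, for each landmark channel, the location of the strongest response above a fixed cutoff. The same idea on a toy heatmap (a self-contained numpy sketch, not the annotator's code):

import numpy as np

heatmap = np.zeros((1, 5, 5), dtype=np.float32)   # one landmark channel, 5x5 map
heatmap[0, 3, 1] = 0.9                            # strongest response at row 3, col 1

peaks = []
for part in range(heatmap.shape[0]):
    m = heatmap[part]
    if not (m > 0.05).any():                      # same 0.05 cutoff as above
        continue
    y, x = np.unravel_index(np.argmax(m), m.shape)
    peaks.append([int(x), int(y)])                # note the (x, y) ordering

assert peaks == [[1, 3]]
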
diff --git a/spaces/Superlang/ImageProcessor/annotator/uniformer/mmcv/ops/roi_align_rotated.py b/spaces/Superlang/ImageProcessor/annotator/uniformer/mmcv/ops/roi_align_rotated.py
deleted file mode 100644
index 0ce4961a3555d4da8bc3e32f1f7d5ad50036587d..0000000000000000000000000000000000000000
--- a/spaces/Superlang/ImageProcessor/annotator/uniformer/mmcv/ops/roi_align_rotated.py
+++ /dev/null
@@ -1,177 +0,0 @@
-# Copyright (c) OpenMMLab. All rights reserved.
-import torch.nn as nn
-from torch.autograd import Function
-
-from ..utils import ext_loader
-
-ext_module = ext_loader.load_ext(
- '_ext', ['roi_align_rotated_forward', 'roi_align_rotated_backward'])
-
-
-class RoIAlignRotatedFunction(Function):
-
- @staticmethod
- def symbolic(g, features, rois, out_size, spatial_scale, sample_num,
- aligned, clockwise):
- if isinstance(out_size, int):
- out_h = out_size
- out_w = out_size
- elif isinstance(out_size, tuple):
- assert len(out_size) == 2
- assert isinstance(out_size[0], int)
- assert isinstance(out_size[1], int)
- out_h, out_w = out_size
- else:
- raise TypeError(
- '"out_size" must be an integer or tuple of integers')
- return g.op(
- 'mmcv::MMCVRoIAlignRotated',
- features,
- rois,
- output_height_i=out_h,
-            output_width_i=out_w,
- spatial_scale_f=spatial_scale,
- sampling_ratio_i=sample_num,
- aligned_i=aligned,
- clockwise_i=clockwise)
-
- @staticmethod
- def forward(ctx,
- features,
- rois,
- out_size,
- spatial_scale,
- sample_num=0,
- aligned=True,
- clockwise=False):
- if isinstance(out_size, int):
- out_h = out_size
- out_w = out_size
- elif isinstance(out_size, tuple):
- assert len(out_size) == 2
- assert isinstance(out_size[0], int)
- assert isinstance(out_size[1], int)
- out_h, out_w = out_size
- else:
- raise TypeError(
- '"out_size" must be an integer or tuple of integers')
- ctx.spatial_scale = spatial_scale
- ctx.sample_num = sample_num
- ctx.aligned = aligned
- ctx.clockwise = clockwise
- ctx.save_for_backward(rois)
- ctx.feature_size = features.size()
-
- batch_size, num_channels, data_height, data_width = features.size()
- num_rois = rois.size(0)
-
- output = features.new_zeros(num_rois, num_channels, out_h, out_w)
- ext_module.roi_align_rotated_forward(
- features,
- rois,
- output,
- pooled_height=out_h,
- pooled_width=out_w,
- spatial_scale=spatial_scale,
- sample_num=sample_num,
- aligned=aligned,
- clockwise=clockwise)
- return output
-
- @staticmethod
- def backward(ctx, grad_output):
- feature_size = ctx.feature_size
- spatial_scale = ctx.spatial_scale
- aligned = ctx.aligned
- clockwise = ctx.clockwise
- sample_num = ctx.sample_num
- rois = ctx.saved_tensors[0]
- assert feature_size is not None
- batch_size, num_channels, data_height, data_width = feature_size
-
- out_w = grad_output.size(3)
- out_h = grad_output.size(2)
-
- grad_input = grad_rois = None
-
- if ctx.needs_input_grad[0]:
- grad_input = rois.new_zeros(batch_size, num_channels, data_height,
- data_width)
- ext_module.roi_align_rotated_backward(
- grad_output.contiguous(),
- rois,
- grad_input,
- pooled_height=out_h,
- pooled_width=out_w,
- spatial_scale=spatial_scale,
- sample_num=sample_num,
- aligned=aligned,
- clockwise=clockwise)
- return grad_input, grad_rois, None, None, None, None, None
-
-
-roi_align_rotated = RoIAlignRotatedFunction.apply
-
-
-class RoIAlignRotated(nn.Module):
- """RoI align pooling layer for rotated proposals.
-
- It accepts a feature map of shape (N, C, H, W) and rois with shape
- (n, 6) with each roi decoded as (batch_index, center_x, center_y,
- w, h, angle). The angle is in radian.
-
- Args:
- out_size (tuple): h, w
- spatial_scale (float): scale the input boxes by this number
- sample_num (int): number of inputs samples to take for each
- output sample. 0 to take samples densely for current models.
- aligned (bool): if False, use the legacy implementation in
-            MMDetection. If True, align the results more precisely.
- Default: True.
- clockwise (bool): If True, the angle in each proposal follows a
- clockwise fashion in image space, otherwise, the angle is
- counterclockwise. Default: False.
-
- Note:
- The implementation of RoIAlign when aligned=True is modified from
- https://github.com/facebookresearch/detectron2/
-
- The meaning of aligned=True:
-
- Given a continuous coordinate c, its two neighboring pixel
- indices (in our pixel model) are computed by floor(c - 0.5) and
- ceil(c - 0.5). For example, c=1.3 has pixel neighbors with discrete
- indices [0] and [1] (which are sampled from the underlying signal
- at continuous coordinates 0.5 and 1.5). But the original roi_align
- (aligned=False) does not subtract the 0.5 when computing
- neighboring pixel indices and therefore it uses pixels with a
- slightly incorrect alignment (relative to our pixel model) when
- performing bilinear interpolation.
-
- With `aligned=True`,
- we first appropriately scale the ROI and then shift it by -0.5
-        prior to calling roi_align. This produces the correct neighbors.
-
-        This difference does not affect the model's performance if
-        RoIAlign is used together with conv layers.
- """
-
- def __init__(self,
- out_size,
- spatial_scale,
- sample_num=0,
- aligned=True,
- clockwise=False):
- super(RoIAlignRotated, self).__init__()
-
- self.out_size = out_size
- self.spatial_scale = float(spatial_scale)
- self.sample_num = int(sample_num)
- self.aligned = aligned
- self.clockwise = clockwise
-
- def forward(self, features, rois):
- return RoIAlignRotatedFunction.apply(features, rois, self.out_size,
- self.spatial_scale,
- self.sample_num, self.aligned,
- self.clockwise)
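The aligned=True note in the RoIAlignRotated docstring hinges on a half-pixel shift: for a continuous coordinate c, the neighbouring pixel indices are floor(c - 0.5) and ceil(c - 0.5). The docstring's example, checked directly (illustration only):

import math

def neighbors(c):
    # Neighbouring pixel indices of continuous coordinate c in the aligned model.
    return math.floor(c - 0.5), math.ceil(c - 0.5)

# c = 1.3 samples pixels 0 and 1, whose centres sit at continuous coordinates 0.5 and 1.5.
assert neighbors(1.3) == (0, 1)
# The legacy (aligned=False) behaviour skips the -0.5 shift and would use pixels 1 and 2,
# a half-pixel misalignment relative to the pixel-centre model.
assert (math.floor(1.3), math.ceil(1.3)) == (1, 2)
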
diff --git a/spaces/TabbyML/tabby-template-space/README.md b/spaces/TabbyML/tabby-template-space/README.md
deleted file mode 100644
index 9346d32f16fd9787bbb1a957171223be775bf3a5..0000000000000000000000000000000000000000
--- a/spaces/TabbyML/tabby-template-space/README.md
+++ /dev/null
@@ -1,15 +0,0 @@
----
-title: Tabby Template Space
-emoji: 🏷️
-colorFrom: gray
-colorTo: purple
-sdk: docker
-app_port: 8080
-fullWidth: true
-suggested_storage: small
-suggested_hardware: t4-medium
-tags:
- - tabby
----
-
-This is the Tabby Space template, which you can use to deploy and run your own instance of Tabby on the Hugging Face Hub.
diff --git a/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/setuptools/_vendor/packaging/_elffile.py b/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/setuptools/_vendor/packaging/_elffile.py
deleted file mode 100644
index 6fb19b30bb53c18f38a9ef02dd7c4478670fb962..0000000000000000000000000000000000000000
--- a/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/setuptools/_vendor/packaging/_elffile.py
+++ /dev/null
@@ -1,108 +0,0 @@
-"""
-ELF file parser.
-
-This provides a class ``ELFFile`` that parses an ELF executable in a similar
-interface to ``ZipFile``. Only the read interface is implemented.
-
-Based on: https://gist.github.com/lyssdod/f51579ae8d93c8657a5564aefc2ffbca
-ELF header: https://refspecs.linuxfoundation.org/elf/gabi4+/ch4.eheader.html
-"""
-
-import enum
-import os
-import struct
-from typing import IO, Optional, Tuple
-
-
-class ELFInvalid(ValueError):
- pass
-
-
-class EIClass(enum.IntEnum):
- C32 = 1
- C64 = 2
-
-
-class EIData(enum.IntEnum):
- Lsb = 1
- Msb = 2
-
-
-class EMachine(enum.IntEnum):
- I386 = 3
- S390 = 22
- Arm = 40
- X8664 = 62
- AArc64 = 183
-
-
-class ELFFile:
- """
- Representation of an ELF executable.
- """
-
- def __init__(self, f: IO[bytes]) -> None:
- self._f = f
-
- try:
- ident = self._read("16B")
- except struct.error:
- raise ELFInvalid("unable to parse identification")
- magic = bytes(ident[:4])
- if magic != b"\x7fELF":
- raise ELFInvalid(f"invalid magic: {magic!r}")
-
- self.capacity = ident[4] # Format for program header (bitness).
- self.encoding = ident[5] # Data structure encoding (endianness).
-
- try:
- # e_fmt: Format for program header.
- # p_fmt: Format for section header.
- # p_idx: Indexes to find p_type, p_offset, and p_filesz.
- e_fmt, self._p_fmt, self._p_idx = {
-                (1, 1): ("<HHIIIIIHHH", "<IIIIIIII", (0, 1, 4)),  # 32-bit LSB.
-                (1, 2): (">HHIIIIIHHH", ">IIIIIIII", (0, 1, 4)),  # 32-bit MSB.
-                (2, 1): ("<HHIQQQIHHH", "<IIQQQQQQ", (0, 2, 5)),  # 64-bit LSB.
-                (2, 2): (">HHIQQQIHHH", ">IIQQQQQQ", (0, 2, 5)),  # 64-bit MSB.
- }[(self.capacity, self.encoding)]
- except KeyError:
- raise ELFInvalid(
- f"unrecognized capacity ({self.capacity}) or "
- f"encoding ({self.encoding})"
- )
-
- try:
- (
- _,
- self.machine, # Architecture type.
- _,
- _,
- self._e_phoff, # Offset of program header.
- _,
- self.flags, # Processor-specific flags.
- _,
- self._e_phentsize, # Size of section.
- self._e_phnum, # Number of sections.
- ) = self._read(e_fmt)
- except struct.error as e:
- raise ELFInvalid("unable to parse machine and section information") from e
-
- def _read(self, fmt: str) -> Tuple[int, ...]:
- return struct.unpack(fmt, self._f.read(struct.calcsize(fmt)))
-
- @property
- def interpreter(self) -> Optional[str]:
- """
- The path recorded in the ``PT_INTERP`` section header.
- """
- for index in range(self._e_phnum):
- self._f.seek(self._e_phoff + self._e_phentsize * index)
- try:
- data = self._read(self._p_fmt)
- except struct.error:
- continue
- if data[self._p_idx[0]] != 3: # Not PT_INTERP.
- continue
- self._f.seek(data[self._p_idx[1]])
- return os.fsdecode(self._f.read(data[self._p_idx[2]])).strip("\0")
- return None
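The ELFFile constructor above first validates the 16-byte identification block before decoding the header. That magic/class/endianness check in isolation (a standalone sketch, not packaging's public API):

import struct

def elf_ident(raw: bytes):
    # Parse the e_ident block: return (is_elf, capacity, encoding).
    ident = struct.unpack("16B", raw[:16])
    return bytes(ident[:4]) == b"\x7fELF", ident[4], ident[5]

# A minimal 64-bit little-endian identification block.
sample = b"\x7fELF" + bytes([2, 1, 1]) + b"\x00" * 9
assert elf_ident(sample) == (True, 2, 1)   # valid magic, EIClass.C64, EIData.Lsb
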
diff --git a/spaces/Tape/yoga/openpose/model.py b/spaces/Tape/yoga/openpose/model.py
deleted file mode 100644
index 5dfc80de827a17beccb9b0f3f7588545be78c9de..0000000000000000000000000000000000000000
--- a/spaces/Tape/yoga/openpose/model.py
+++ /dev/null
@@ -1,219 +0,0 @@
-import torch
-from collections import OrderedDict
-
-import torch
-import torch.nn as nn
-
-def make_layers(block, no_relu_layers):
- layers = []
- for layer_name, v in block.items():
- if 'pool' in layer_name:
- layer = nn.MaxPool2d(kernel_size=v[0], stride=v[1],
- padding=v[2])
- layers.append((layer_name, layer))
- else:
- conv2d = nn.Conv2d(in_channels=v[0], out_channels=v[1],
- kernel_size=v[2], stride=v[3],
- padding=v[4])
- layers.append((layer_name, conv2d))
- if layer_name not in no_relu_layers:
- layers.append(('relu_'+layer_name, nn.ReLU(inplace=True)))
-
- return nn.Sequential(OrderedDict(layers))
-
-class bodypose_model(nn.Module):
- def __init__(self):
- super(bodypose_model, self).__init__()
-
- # these layers have no relu layer
- no_relu_layers = ['conv5_5_CPM_L1', 'conv5_5_CPM_L2', 'Mconv7_stage2_L1',\
- 'Mconv7_stage2_L2', 'Mconv7_stage3_L1', 'Mconv7_stage3_L2',\
- 'Mconv7_stage4_L1', 'Mconv7_stage4_L2', 'Mconv7_stage5_L1',\
-                          'Mconv7_stage5_L2', 'Mconv7_stage6_L1', 'Mconv7_stage6_L2']
- blocks = {}
- block0 = OrderedDict([
- ('conv1_1', [3, 64, 3, 1, 1]),
- ('conv1_2', [64, 64, 3, 1, 1]),
- ('pool1_stage1', [2, 2, 0]),
- ('conv2_1', [64, 128, 3, 1, 1]),
- ('conv2_2', [128, 128, 3, 1, 1]),
- ('pool2_stage1', [2, 2, 0]),
- ('conv3_1', [128, 256, 3, 1, 1]),
- ('conv3_2', [256, 256, 3, 1, 1]),
- ('conv3_3', [256, 256, 3, 1, 1]),
- ('conv3_4', [256, 256, 3, 1, 1]),
- ('pool3_stage1', [2, 2, 0]),
- ('conv4_1', [256, 512, 3, 1, 1]),
- ('conv4_2', [512, 512, 3, 1, 1]),
- ('conv4_3_CPM', [512, 256, 3, 1, 1]),
- ('conv4_4_CPM', [256, 128, 3, 1, 1])
- ])
-
-
- # Stage 1
- block1_1 = OrderedDict([
- ('conv5_1_CPM_L1', [128, 128, 3, 1, 1]),
- ('conv5_2_CPM_L1', [128, 128, 3, 1, 1]),
- ('conv5_3_CPM_L1', [128, 128, 3, 1, 1]),
- ('conv5_4_CPM_L1', [128, 512, 1, 1, 0]),
- ('conv5_5_CPM_L1', [512, 38, 1, 1, 0])
- ])
-
- block1_2 = OrderedDict([
- ('conv5_1_CPM_L2', [128, 128, 3, 1, 1]),
- ('conv5_2_CPM_L2', [128, 128, 3, 1, 1]),
- ('conv5_3_CPM_L2', [128, 128, 3, 1, 1]),
- ('conv5_4_CPM_L2', [128, 512, 1, 1, 0]),
- ('conv5_5_CPM_L2', [512, 19, 1, 1, 0])
- ])
- blocks['block1_1'] = block1_1
- blocks['block1_2'] = block1_2
-
- self.model0 = make_layers(block0, no_relu_layers)
-
- # Stages 2 - 6
- for i in range(2, 7):
- blocks['block%d_1' % i] = OrderedDict([
- ('Mconv1_stage%d_L1' % i, [185, 128, 7, 1, 3]),
- ('Mconv2_stage%d_L1' % i, [128, 128, 7, 1, 3]),
- ('Mconv3_stage%d_L1' % i, [128, 128, 7, 1, 3]),
- ('Mconv4_stage%d_L1' % i, [128, 128, 7, 1, 3]),
- ('Mconv5_stage%d_L1' % i, [128, 128, 7, 1, 3]),
- ('Mconv6_stage%d_L1' % i, [128, 128, 1, 1, 0]),
- ('Mconv7_stage%d_L1' % i, [128, 38, 1, 1, 0])
- ])
-
- blocks['block%d_2' % i] = OrderedDict([
- ('Mconv1_stage%d_L2' % i, [185, 128, 7, 1, 3]),
- ('Mconv2_stage%d_L2' % i, [128, 128, 7, 1, 3]),
- ('Mconv3_stage%d_L2' % i, [128, 128, 7, 1, 3]),
- ('Mconv4_stage%d_L2' % i, [128, 128, 7, 1, 3]),
- ('Mconv5_stage%d_L2' % i, [128, 128, 7, 1, 3]),
- ('Mconv6_stage%d_L2' % i, [128, 128, 1, 1, 0]),
- ('Mconv7_stage%d_L2' % i, [128, 19, 1, 1, 0])
- ])
-
- for k in blocks.keys():
- blocks[k] = make_layers(blocks[k], no_relu_layers)
-
- self.model1_1 = blocks['block1_1']
- self.model2_1 = blocks['block2_1']
- self.model3_1 = blocks['block3_1']
- self.model4_1 = blocks['block4_1']
- self.model5_1 = blocks['block5_1']
- self.model6_1 = blocks['block6_1']
-
- self.model1_2 = blocks['block1_2']
- self.model2_2 = blocks['block2_2']
- self.model3_2 = blocks['block3_2']
- self.model4_2 = blocks['block4_2']
- self.model5_2 = blocks['block5_2']
- self.model6_2 = blocks['block6_2']
-
-
- def forward(self, x):
-
- out1 = self.model0(x)
-
- out1_1 = self.model1_1(out1)
- out1_2 = self.model1_2(out1)
- out2 = torch.cat([out1_1, out1_2, out1], 1)
-
- out2_1 = self.model2_1(out2)
- out2_2 = self.model2_2(out2)
- out3 = torch.cat([out2_1, out2_2, out1], 1)
-
- out3_1 = self.model3_1(out3)
- out3_2 = self.model3_2(out3)
- out4 = torch.cat([out3_1, out3_2, out1], 1)
-
- out4_1 = self.model4_1(out4)
- out4_2 = self.model4_2(out4)
- out5 = torch.cat([out4_1, out4_2, out1], 1)
-
- out5_1 = self.model5_1(out5)
- out5_2 = self.model5_2(out5)
- out6 = torch.cat([out5_1, out5_2, out1], 1)
-
- out6_1 = self.model6_1(out6)
- out6_2 = self.model6_2(out6)
-
- return out6_1, out6_2
-
-class handpose_model(nn.Module):
- def __init__(self):
- super(handpose_model, self).__init__()
-
- # these layers have no relu layer
- no_relu_layers = ['conv6_2_CPM', 'Mconv7_stage2', 'Mconv7_stage3',\
- 'Mconv7_stage4', 'Mconv7_stage5', 'Mconv7_stage6']
- # stage 1
- block1_0 = OrderedDict([
- ('conv1_1', [3, 64, 3, 1, 1]),
- ('conv1_2', [64, 64, 3, 1, 1]),
- ('pool1_stage1', [2, 2, 0]),
- ('conv2_1', [64, 128, 3, 1, 1]),
- ('conv2_2', [128, 128, 3, 1, 1]),
- ('pool2_stage1', [2, 2, 0]),
- ('conv3_1', [128, 256, 3, 1, 1]),
- ('conv3_2', [256, 256, 3, 1, 1]),
- ('conv3_3', [256, 256, 3, 1, 1]),
- ('conv3_4', [256, 256, 3, 1, 1]),
- ('pool3_stage1', [2, 2, 0]),
- ('conv4_1', [256, 512, 3, 1, 1]),
- ('conv4_2', [512, 512, 3, 1, 1]),
- ('conv4_3', [512, 512, 3, 1, 1]),
- ('conv4_4', [512, 512, 3, 1, 1]),
- ('conv5_1', [512, 512, 3, 1, 1]),
- ('conv5_2', [512, 512, 3, 1, 1]),
- ('conv5_3_CPM', [512, 128, 3, 1, 1])
- ])
-
- block1_1 = OrderedDict([
- ('conv6_1_CPM', [128, 512, 1, 1, 0]),
- ('conv6_2_CPM', [512, 22, 1, 1, 0])
- ])
-
- blocks = {}
- blocks['block1_0'] = block1_0
- blocks['block1_1'] = block1_1
-
- # stage 2-6
- for i in range(2, 7):
- blocks['block%d' % i] = OrderedDict([
- ('Mconv1_stage%d' % i, [150, 128, 7, 1, 3]),
- ('Mconv2_stage%d' % i, [128, 128, 7, 1, 3]),
- ('Mconv3_stage%d' % i, [128, 128, 7, 1, 3]),
- ('Mconv4_stage%d' % i, [128, 128, 7, 1, 3]),
- ('Mconv5_stage%d' % i, [128, 128, 7, 1, 3]),
- ('Mconv6_stage%d' % i, [128, 128, 1, 1, 0]),
- ('Mconv7_stage%d' % i, [128, 22, 1, 1, 0])
- ])
-
- for k in blocks.keys():
- blocks[k] = make_layers(blocks[k], no_relu_layers)
-
- self.model1_0 = blocks['block1_0']
- self.model1_1 = blocks['block1_1']
- self.model2 = blocks['block2']
- self.model3 = blocks['block3']
- self.model4 = blocks['block4']
- self.model5 = blocks['block5']
- self.model6 = blocks['block6']
-
- def forward(self, x):
- out1_0 = self.model1_0(x)
- out1_1 = self.model1_1(out1_0)
- concat_stage2 = torch.cat([out1_1, out1_0], 1)
- out_stage2 = self.model2(concat_stage2)
- concat_stage3 = torch.cat([out_stage2, out1_0], 1)
- out_stage3 = self.model3(concat_stage3)
- concat_stage4 = torch.cat([out_stage3, out1_0], 1)
- out_stage4 = self.model4(concat_stage4)
- concat_stage5 = torch.cat([out_stage4, out1_0], 1)
- out_stage5 = self.model5(concat_stage5)
- concat_stage6 = torch.cat([out_stage5, out1_0], 1)
- out_stage6 = self.model6(concat_stage6)
- return out_stage6
-
-
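The 185- and 150-channel inputs of the Mconv1 layers above follow from the concatenations in forward(): the 38-channel L1 branch (part affinity fields) and the 19-channel L2 branch (keypoint heatmaps) are stacked with the 128-channel conv4_4_CPM feature map, while the hand model stacks its 22-channel stage-1 output with the 128-channel conv5_3_CPM features. Spelled out (illustrative arithmetic only):

# bodypose: PAF branch (38) + heatmap branch (19) + backbone feature map (128)
assert 38 + 19 + 128 == 185   # input channels of Mconv1_stage*_L1 / _L2
# handpose: stage-1 heatmaps (22) + backbone feature map (128)
assert 22 + 128 == 150        # input channels of Mconv1_stage*
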
diff --git a/spaces/TencentARC/VLog/models/grit_src/third_party/CenterNet2/detectron2/modeling/proposal_generator/rpn.py b/spaces/TencentARC/VLog/models/grit_src/third_party/CenterNet2/detectron2/modeling/proposal_generator/rpn.py
deleted file mode 100644
index 99cd536d2f9880d2049390c45f73eb22335e1b82..0000000000000000000000000000000000000000
--- a/spaces/TencentARC/VLog/models/grit_src/third_party/CenterNet2/detectron2/modeling/proposal_generator/rpn.py
+++ /dev/null
@@ -1,533 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-from typing import Dict, List, Optional, Tuple, Union
-import torch
-import torch.nn.functional as F
-from torch import nn
-
-from detectron2.config import configurable
-from detectron2.layers import Conv2d, ShapeSpec, cat
-from detectron2.structures import Boxes, ImageList, Instances, pairwise_iou
-from detectron2.utils.events import get_event_storage
-from detectron2.utils.memory import retry_if_cuda_oom
-from detectron2.utils.registry import Registry
-
-from ..anchor_generator import build_anchor_generator
-from ..box_regression import Box2BoxTransform, _dense_box_regression_loss
-from ..matcher import Matcher
-from ..sampling import subsample_labels
-from .build import PROPOSAL_GENERATOR_REGISTRY
-from .proposal_utils import find_top_rpn_proposals
-
-RPN_HEAD_REGISTRY = Registry("RPN_HEAD")
-RPN_HEAD_REGISTRY.__doc__ = """
-Registry for RPN heads, which take feature maps and perform
-objectness classification and bounding box regression for anchors.
-
-The registered object will be called with `obj(cfg, input_shape)`.
-The call should return a `nn.Module` object.
-"""
-
-
-"""
-Shape shorthand in this module:
-
- N: number of images in the minibatch
- L: number of feature maps per image on which RPN is run
- A: number of cell anchors (must be the same for all feature maps)
- Hi, Wi: height and width of the i-th feature map
- B: size of the box parameterization
-
-Naming convention:
-
- objectness: refers to the binary classification of an anchor as object vs. not object.
-
- deltas: refers to the 4-d (dx, dy, dw, dh) deltas that parameterize the box2box
- transform (see :class:`box_regression.Box2BoxTransform`), or 5d for rotated boxes.
-
- pred_objectness_logits: predicted objectness scores in [-inf, +inf]; use
- sigmoid(pred_objectness_logits) to estimate P(object).
-
- gt_labels: ground-truth binary classification labels for objectness
-
- pred_anchor_deltas: predicted box2box transform deltas
-
- gt_anchor_deltas: ground-truth box2box transform deltas
-"""
-
-
-def build_rpn_head(cfg, input_shape):
- """
- Build an RPN head defined by `cfg.MODEL.RPN.HEAD_NAME`.
- """
- name = cfg.MODEL.RPN.HEAD_NAME
- return RPN_HEAD_REGISTRY.get(name)(cfg, input_shape)
-
-
-@RPN_HEAD_REGISTRY.register()
-class StandardRPNHead(nn.Module):
- """
- Standard RPN classification and regression heads described in :paper:`Faster R-CNN`.
- Uses a 3x3 conv to produce a shared hidden state from which one 1x1 conv predicts
- objectness logits for each anchor and a second 1x1 conv predicts bounding-box deltas
- specifying how to deform each anchor into an object proposal.
- """
-
- @configurable
- def __init__(
- self, *, in_channels: int, num_anchors: int, box_dim: int = 4, conv_dims: List[int] = (-1,)
- ):
- """
- NOTE: this interface is experimental.
-
- Args:
- in_channels (int): number of input feature channels. When using multiple
- input features, they must have the same number of channels.
- num_anchors (int): number of anchors to predict for *each spatial position*
- on the feature map. The total number of anchors for each
- feature map will be `num_anchors * H * W`.
- box_dim (int): dimension of a box, which is also the number of box regression
- predictions to make for each anchor. An axis aligned box has
- box_dim=4, while a rotated box has box_dim=5.
- conv_dims (list[int]): a list of integers representing the output channels
- of N conv layers. Set it to -1 to use the same number of output channels
- as input channels.
- """
- super().__init__()
- cur_channels = in_channels
-        # Keeping the old variable names and structure for backwards compatibility.
- # Otherwise the old checkpoints will fail to load.
- if len(conv_dims) == 1:
- out_channels = cur_channels if conv_dims[0] == -1 else conv_dims[0]
- # 3x3 conv for the hidden representation
- self.conv = self._get_rpn_conv(cur_channels, out_channels)
- cur_channels = out_channels
- else:
- self.conv = nn.Sequential()
- for k, conv_dim in enumerate(conv_dims):
- out_channels = cur_channels if conv_dim == -1 else conv_dim
- if out_channels <= 0:
- raise ValueError(
- f"Conv output channels should be greater than 0. Got {out_channels}"
- )
- conv = self._get_rpn_conv(cur_channels, out_channels)
- self.conv.add_module(f"conv{k}", conv)
- cur_channels = out_channels
- # 1x1 conv for predicting objectness logits
- self.objectness_logits = nn.Conv2d(cur_channels, num_anchors, kernel_size=1, stride=1)
- # 1x1 conv for predicting box2box transform deltas
- self.anchor_deltas = nn.Conv2d(cur_channels, num_anchors * box_dim, kernel_size=1, stride=1)
-
-        # Keeping the order of weights initialization the same for backwards compatibility.
- for layer in self.modules():
- if isinstance(layer, nn.Conv2d):
- nn.init.normal_(layer.weight, std=0.01)
- nn.init.constant_(layer.bias, 0)
-
- def _get_rpn_conv(self, in_channels, out_channels):
- return Conv2d(
- in_channels,
- out_channels,
- kernel_size=3,
- stride=1,
- padding=1,
- activation=nn.ReLU(),
- )
-
- @classmethod
- def from_config(cls, cfg, input_shape):
- # Standard RPN is shared across levels:
- in_channels = [s.channels for s in input_shape]
- assert len(set(in_channels)) == 1, "Each level must have the same channel!"
- in_channels = in_channels[0]
-
- # RPNHead should take the same input as anchor generator
- # NOTE: it assumes that creating an anchor generator does not have unwanted side effect.
- anchor_generator = build_anchor_generator(cfg, input_shape)
- num_anchors = anchor_generator.num_anchors
- box_dim = anchor_generator.box_dim
- assert (
- len(set(num_anchors)) == 1
- ), "Each level must have the same number of anchors per spatial position"
- return {
- "in_channels": in_channels,
- "num_anchors": num_anchors[0],
- "box_dim": box_dim,
- "conv_dims": cfg.MODEL.RPN.CONV_DIMS,
- }
-
- def forward(self, features: List[torch.Tensor]):
- """
- Args:
- features (list[Tensor]): list of feature maps
-
- Returns:
- list[Tensor]: A list of L elements.
- Element i is a tensor of shape (N, A, Hi, Wi) representing
- the predicted objectness logits for all anchors. A is the number of cell anchors.
- list[Tensor]: A list of L elements. Element i is a tensor of shape
- (N, A*box_dim, Hi, Wi) representing the predicted "deltas" used to transform anchors
- to proposals.
- """
- pred_objectness_logits = []
- pred_anchor_deltas = []
- for x in features:
- t = self.conv(x)
- pred_objectness_logits.append(self.objectness_logits(t))
- pred_anchor_deltas.append(self.anchor_deltas(t))
- return pred_objectness_logits, pred_anchor_deltas
-
-
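The shape contract described in StandardRPNHead.forward above (per feature level: objectness logits of shape (N, A, Hi, Wi) and deltas of shape (N, A*box_dim, Hi, Wi)) can be checked with a tiny stand-alone sketch using plain torch layers (an illustration of the documented shapes, not detectron2 code):

import torch
import torch.nn as nn

N, C, A, box_dim = 2, 256, 3, 4
feature = torch.randn(N, C, 32, 32)                  # one feature level, Hi = Wi = 32

hidden = nn.Conv2d(C, C, kernel_size=3, padding=1)   # shared 3x3 conv
objectness = nn.Conv2d(C, A, kernel_size=1)          # 1x1 conv -> objectness logits
deltas = nn.Conv2d(C, A * box_dim, kernel_size=1)    # 1x1 conv -> box deltas

t = torch.relu(hidden(feature))
assert objectness(t).shape == (N, A, 32, 32)
assert deltas(t).shape == (N, A * box_dim, 32, 32)
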
-@PROPOSAL_GENERATOR_REGISTRY.register()
-class RPN(nn.Module):
- """
- Region Proposal Network, introduced by :paper:`Faster R-CNN`.
- """
-
- @configurable
- def __init__(
- self,
- *,
- in_features: List[str],
- head: nn.Module,
- anchor_generator: nn.Module,
- anchor_matcher: Matcher,
- box2box_transform: Box2BoxTransform,
- batch_size_per_image: int,
- positive_fraction: float,
- pre_nms_topk: Tuple[float, float],
- post_nms_topk: Tuple[float, float],
- nms_thresh: float = 0.7,
- min_box_size: float = 0.0,
- anchor_boundary_thresh: float = -1.0,
- loss_weight: Union[float, Dict[str, float]] = 1.0,
- box_reg_loss_type: str = "smooth_l1",
- smooth_l1_beta: float = 0.0,
- ):
- """
- NOTE: this interface is experimental.
-
- Args:
- in_features (list[str]): list of names of input features to use
- head (nn.Module): a module that predicts logits and regression deltas
- for each level from a list of per-level features
- anchor_generator (nn.Module): a module that creates anchors from a
- list of features. Usually an instance of :class:`AnchorGenerator`
- anchor_matcher (Matcher): label the anchors by matching them with ground truth.
- box2box_transform (Box2BoxTransform): defines the transform from anchors boxes to
- instance boxes
- batch_size_per_image (int): number of anchors per image to sample for training
- positive_fraction (float): fraction of foreground anchors to sample for training
- pre_nms_topk (tuple[float]): (train, test) that represents the
- number of top k proposals to select before NMS, in
- training and testing.
- post_nms_topk (tuple[float]): (train, test) that represents the
- number of top k proposals to select after NMS, in
- training and testing.
- nms_thresh (float): NMS threshold used to de-duplicate the predicted proposals
- min_box_size (float): remove proposal boxes with any side smaller than this threshold,
- in the unit of input image pixels
- anchor_boundary_thresh (float): legacy option
- loss_weight (float|dict): weights to use for losses. Can be single float for weighting
- all rpn losses together, or a dict of individual weightings. Valid dict keys are:
- "loss_rpn_cls" - applied to classification loss
- "loss_rpn_loc" - applied to box regression loss
- box_reg_loss_type (str): Loss type to use. Supported losses: "smooth_l1", "giou".
- smooth_l1_beta (float): beta parameter for the smooth L1 regression loss. Default to
- use L1 loss. Only used when `box_reg_loss_type` is "smooth_l1"
- """
- super().__init__()
- self.in_features = in_features
- self.rpn_head = head
- self.anchor_generator = anchor_generator
- self.anchor_matcher = anchor_matcher
- self.box2box_transform = box2box_transform
- self.batch_size_per_image = batch_size_per_image
- self.positive_fraction = positive_fraction
- # Map from self.training state to train/test settings
- self.pre_nms_topk = {True: pre_nms_topk[0], False: pre_nms_topk[1]}
- self.post_nms_topk = {True: post_nms_topk[0], False: post_nms_topk[1]}
- self.nms_thresh = nms_thresh
- self.min_box_size = float(min_box_size)
- self.anchor_boundary_thresh = anchor_boundary_thresh
- if isinstance(loss_weight, float):
- loss_weight = {"loss_rpn_cls": loss_weight, "loss_rpn_loc": loss_weight}
- self.loss_weight = loss_weight
- self.box_reg_loss_type = box_reg_loss_type
- self.smooth_l1_beta = smooth_l1_beta
-
- @classmethod
- def from_config(cls, cfg, input_shape: Dict[str, ShapeSpec]):
- in_features = cfg.MODEL.RPN.IN_FEATURES
- ret = {
- "in_features": in_features,
- "min_box_size": cfg.MODEL.PROPOSAL_GENERATOR.MIN_SIZE,
- "nms_thresh": cfg.MODEL.RPN.NMS_THRESH,
- "batch_size_per_image": cfg.MODEL.RPN.BATCH_SIZE_PER_IMAGE,
- "positive_fraction": cfg.MODEL.RPN.POSITIVE_FRACTION,
- "loss_weight": {
- "loss_rpn_cls": cfg.MODEL.RPN.LOSS_WEIGHT,
- "loss_rpn_loc": cfg.MODEL.RPN.BBOX_REG_LOSS_WEIGHT * cfg.MODEL.RPN.LOSS_WEIGHT,
- },
- "anchor_boundary_thresh": cfg.MODEL.RPN.BOUNDARY_THRESH,
- "box2box_transform": Box2BoxTransform(weights=cfg.MODEL.RPN.BBOX_REG_WEIGHTS),
- "box_reg_loss_type": cfg.MODEL.RPN.BBOX_REG_LOSS_TYPE,
- "smooth_l1_beta": cfg.MODEL.RPN.SMOOTH_L1_BETA,
- }
-
- ret["pre_nms_topk"] = (cfg.MODEL.RPN.PRE_NMS_TOPK_TRAIN, cfg.MODEL.RPN.PRE_NMS_TOPK_TEST)
- ret["post_nms_topk"] = (cfg.MODEL.RPN.POST_NMS_TOPK_TRAIN, cfg.MODEL.RPN.POST_NMS_TOPK_TEST)
-
- ret["anchor_generator"] = build_anchor_generator(cfg, [input_shape[f] for f in in_features])
- ret["anchor_matcher"] = Matcher(
- cfg.MODEL.RPN.IOU_THRESHOLDS, cfg.MODEL.RPN.IOU_LABELS, allow_low_quality_matches=True
- )
- ret["head"] = build_rpn_head(cfg, [input_shape[f] for f in in_features])
- return ret
-
- def _subsample_labels(self, label):
- """
- Randomly sample a subset of positive and negative examples, and overwrite
- the label vector to the ignore value (-1) for all elements that are not
- included in the sample.
-
- Args:
- labels (Tensor): a vector of -1, 0, 1. Will be modified in-place and returned.
- """
- pos_idx, neg_idx = subsample_labels(
- label, self.batch_size_per_image, self.positive_fraction, 0
- )
- # Fill with the ignore label (-1), then set positive and negative labels
- label.fill_(-1)
- label.scatter_(0, pos_idx, 1)
- label.scatter_(0, neg_idx, 0)
- return label
-
- @torch.jit.unused
- @torch.no_grad()
- def label_and_sample_anchors(
- self, anchors: List[Boxes], gt_instances: List[Instances]
- ) -> Tuple[List[torch.Tensor], List[torch.Tensor]]:
- """
- Args:
- anchors (list[Boxes]): anchors for each feature map.
- gt_instances: the ground-truth instances for each image.
-
- Returns:
- list[Tensor]:
- List of #img tensors. i-th element is a vector of labels whose length is
- the total number of anchors across all feature maps R = sum(Hi * Wi * A).
- Label values are in {-1, 0, 1}, with meanings: -1 = ignore; 0 = negative
- class; 1 = positive class.
- list[Tensor]:
- i-th element is a Rx4 tensor. The values are the matched gt boxes for each
- anchor. Values are undefined for those anchors not labeled as 1.
- """
- anchors = Boxes.cat(anchors)
-
- gt_boxes = [x.gt_boxes for x in gt_instances]
- image_sizes = [x.image_size for x in gt_instances]
- del gt_instances
-
- gt_labels = []
- matched_gt_boxes = []
- for image_size_i, gt_boxes_i in zip(image_sizes, gt_boxes):
- """
- image_size_i: (h, w) for the i-th image
- gt_boxes_i: ground-truth boxes for i-th image
- """
-
- match_quality_matrix = retry_if_cuda_oom(pairwise_iou)(gt_boxes_i, anchors)
- matched_idxs, gt_labels_i = retry_if_cuda_oom(self.anchor_matcher)(match_quality_matrix)
- # Matching is memory-expensive and may result in CPU tensors. But the result is small
- gt_labels_i = gt_labels_i.to(device=gt_boxes_i.device)
- del match_quality_matrix
-
- if self.anchor_boundary_thresh >= 0:
- # Discard anchors that go out of the boundaries of the image
- # NOTE: This is legacy functionality that is turned off by default in Detectron2
- anchors_inside_image = anchors.inside_box(image_size_i, self.anchor_boundary_thresh)
- gt_labels_i[~anchors_inside_image] = -1
-
- # A vector of labels (-1, 0, 1) for each anchor
- gt_labels_i = self._subsample_labels(gt_labels_i)
-
- if len(gt_boxes_i) == 0:
- # These values won't be used anyway since the anchor is labeled as background
- matched_gt_boxes_i = torch.zeros_like(anchors.tensor)
- else:
- # TODO wasted indexing computation for ignored boxes
- matched_gt_boxes_i = gt_boxes_i[matched_idxs].tensor
-
- gt_labels.append(gt_labels_i) # N,AHW
- matched_gt_boxes.append(matched_gt_boxes_i)
- return gt_labels, matched_gt_boxes
-
- @torch.jit.unused
- def losses(
- self,
- anchors: List[Boxes],
- pred_objectness_logits: List[torch.Tensor],
- gt_labels: List[torch.Tensor],
- pred_anchor_deltas: List[torch.Tensor],
- gt_boxes: List[torch.Tensor],
- ) -> Dict[str, torch.Tensor]:
- """
- Return the losses from a set of RPN predictions and their associated ground-truth.
-
- Args:
- anchors (list[Boxes or RotatedBoxes]): anchors for each feature map, each
- has shape (Hi*Wi*A, B), where B is box dimension (4 or 5).
- pred_objectness_logits (list[Tensor]): A list of L elements.
- Element i is a tensor of shape (N, Hi*Wi*A) representing
- the predicted objectness logits for all anchors.
- gt_labels (list[Tensor]): Output of :meth:`label_and_sample_anchors`.
- pred_anchor_deltas (list[Tensor]): A list of L elements. Element i is a tensor of shape
- (N, Hi*Wi*A, 4 or 5) representing the predicted "deltas" used to transform anchors
- to proposals.
- gt_boxes (list[Tensor]): Output of :meth:`label_and_sample_anchors`.
-
- Returns:
- dict[loss name -> loss value]: A dict mapping from loss name to loss value.
- Loss names are: `loss_rpn_cls` for objectness classification and
- `loss_rpn_loc` for proposal localization.
- """
- num_images = len(gt_labels)
- gt_labels = torch.stack(gt_labels) # (N, sum(Hi*Wi*Ai))
-
- # Log the number of positive/negative anchors per-image that's used in training
- pos_mask = gt_labels == 1
- num_pos_anchors = pos_mask.sum().item()
- num_neg_anchors = (gt_labels == 0).sum().item()
- storage = get_event_storage()
- storage.put_scalar("rpn/num_pos_anchors", num_pos_anchors / num_images)
- storage.put_scalar("rpn/num_neg_anchors", num_neg_anchors / num_images)
-
- localization_loss = _dense_box_regression_loss(
- anchors,
- self.box2box_transform,
- pred_anchor_deltas,
- gt_boxes,
- pos_mask,
- box_reg_loss_type=self.box_reg_loss_type,
- smooth_l1_beta=self.smooth_l1_beta,
- )
-
- valid_mask = gt_labels >= 0
- objectness_loss = F.binary_cross_entropy_with_logits(
- cat(pred_objectness_logits, dim=1)[valid_mask],
- gt_labels[valid_mask].to(torch.float32),
- reduction="sum",
- )
- normalizer = self.batch_size_per_image * num_images
- losses = {
- "loss_rpn_cls": objectness_loss / normalizer,
- # The original Faster R-CNN paper uses a slightly different normalizer
- # for loc loss. But it doesn't matter in practice
- "loss_rpn_loc": localization_loss / normalizer,
- }
- losses = {k: v * self.loss_weight.get(k, 1.0) for k, v in losses.items()}
- return losses
-
- def forward(
- self,
- images: ImageList,
- features: Dict[str, torch.Tensor],
- gt_instances: Optional[List[Instances]] = None,
- ):
- """
- Args:
- images (ImageList): input images of length `N`
- features (dict[str, Tensor]): input data as a mapping from feature
- map name to tensor. Axis 0 represents the number of images `N` in
- the input data; axes 1-3 are channels, height, and width, which may
- vary between feature maps (e.g., if a feature pyramid is used).
- gt_instances (list[Instances], optional): a length `N` list of `Instances`s.
- Each `Instances` stores ground-truth instances for the corresponding image.
-
- Returns:
- proposals: list[Instances]: contains fields "proposal_boxes", "objectness_logits"
- loss: dict[Tensor] or None
- """
- features = [features[f] for f in self.in_features]
- anchors = self.anchor_generator(features)
-
- pred_objectness_logits, pred_anchor_deltas = self.rpn_head(features)
- # Transpose the Hi*Wi*A dimension to the middle:
- pred_objectness_logits = [
- # (N, A, Hi, Wi) -> (N, Hi, Wi, A) -> (N, Hi*Wi*A)
- score.permute(0, 2, 3, 1).flatten(1)
- for score in pred_objectness_logits
- ]
- pred_anchor_deltas = [
- # (N, A*B, Hi, Wi) -> (N, A, B, Hi, Wi) -> (N, Hi, Wi, A, B) -> (N, Hi*Wi*A, B)
- x.view(x.shape[0], -1, self.anchor_generator.box_dim, x.shape[-2], x.shape[-1])
- .permute(0, 3, 4, 1, 2)
- .flatten(1, -2)
- for x in pred_anchor_deltas
- ]
-
- if self.training:
- assert gt_instances is not None, "RPN requires gt_instances in training!"
- gt_labels, gt_boxes = self.label_and_sample_anchors(anchors, gt_instances)
- losses = self.losses(
- anchors, pred_objectness_logits, gt_labels, pred_anchor_deltas, gt_boxes
- )
- else:
- losses = {}
- proposals = self.predict_proposals(
- anchors, pred_objectness_logits, pred_anchor_deltas, images.image_sizes
- )
- return proposals, losses
-
- def predict_proposals(
- self,
- anchors: List[Boxes],
- pred_objectness_logits: List[torch.Tensor],
- pred_anchor_deltas: List[torch.Tensor],
- image_sizes: List[Tuple[int, int]],
- ):
- """
- Decode all the predicted box regression deltas to proposals. Find the top proposals
- by applying NMS and removing boxes that are too small.
-
- Returns:
- proposals (list[Instances]): list of N Instances. The i-th Instances
- stores post_nms_topk object proposals for image i, sorted by their
- objectness score in descending order.
- """
- # The proposals are treated as fixed for joint training with roi heads.
- # This approach ignores the derivative w.r.t. the proposal boxes’ coordinates that
- # are also network responses.
- with torch.no_grad():
- pred_proposals = self._decode_proposals(anchors, pred_anchor_deltas)
- return find_top_rpn_proposals(
- pred_proposals,
- pred_objectness_logits,
- image_sizes,
- self.nms_thresh,
- self.pre_nms_topk[self.training],
- self.post_nms_topk[self.training],
- self.min_box_size,
- self.training,
- )
-
- def _decode_proposals(self, anchors: List[Boxes], pred_anchor_deltas: List[torch.Tensor]):
- """
- Transform anchors into proposals by applying the predicted anchor deltas.
-
- Returns:
- proposals (list[Tensor]): A list of L tensors. Tensor i has shape
- (N, Hi*Wi*A, B)
- """
- N = pred_anchor_deltas[0].shape[0]
- proposals = []
- # For each feature map
- for anchors_i, pred_anchor_deltas_i in zip(anchors, pred_anchor_deltas):
- B = anchors_i.tensor.size(1)
- pred_anchor_deltas_i = pred_anchor_deltas_i.reshape(-1, B)
- # Expand anchors to shape (N*Hi*Wi*A, B)
- anchors_i = anchors_i.tensor.unsqueeze(0).expand(N, -1, -1).reshape(-1, B)
- proposals_i = self.box2box_transform.apply_deltas(pred_anchor_deltas_i, anchors_i)
- # Append feature map proposals with shape (N, Hi*Wi*A, B)
- proposals.append(proposals_i.view(N, -1, B))
- return proposals
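The `_decode_proposals` method above delegates the actual box arithmetic to `Box2BoxTransform.apply_deltas`, which is not part of this diff. As a rough sketch of the standard Faster R-CNN decoding it performs (assuming unit regression weights and XYXY anchors; illustrative only, not the deleted implementation):

```python
import torch

def decode_deltas(deltas: torch.Tensor, anchors: torch.Tensor) -> torch.Tensor:
    """Sketch of Faster R-CNN delta decoding (unit weights assumed).

    deltas:  (N, 4) tensor of (dx, dy, dw, dh) predicted by the RPN head.
    anchors: (N, 4) tensor of (x1, y1, x2, y2) anchor boxes.
    """
    widths = anchors[:, 2] - anchors[:, 0]
    heights = anchors[:, 3] - anchors[:, 1]
    ctr_x = anchors[:, 0] + 0.5 * widths
    ctr_y = anchors[:, 1] + 0.5 * heights

    dx, dy, dw, dh = deltas.unbind(dim=1)
    # Shift the anchor center and rescale its size in log-space.
    pred_ctr_x = dx * widths + ctr_x
    pred_ctr_y = dy * heights + ctr_y
    pred_w = torch.exp(dw) * widths
    pred_h = torch.exp(dh) * heights

    # Back to corner coordinates (x1, y1, x2, y2).
    return torch.stack(
        [pred_ctr_x - 0.5 * pred_w, pred_ctr_y - 0.5 * pred_h,
         pred_ctr_x + 0.5 * pred_w, pred_ctr_y + 0.5 * pred_h], dim=1)
```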
diff --git a/spaces/TencentARC/VLog/models/grit_src/third_party/CenterNet2/tests/config/test_lazy_config.py b/spaces/TencentARC/VLog/models/grit_src/third_party/CenterNet2/tests/config/test_lazy_config.py
deleted file mode 100644
index 6ff5b6dc117744a9d978e0aff324bddeb496409b..0000000000000000000000000000000000000000
--- a/spaces/TencentARC/VLog/models/grit_src/third_party/CenterNet2/tests/config/test_lazy_config.py
+++ /dev/null
@@ -1,79 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-import os
-import unittest
-import tempfile
-from itertools import count
-
-from detectron2.config import LazyConfig, LazyCall as L
-from omegaconf import DictConfig
-
-
-class TestLazyPythonConfig(unittest.TestCase):
- def setUp(self):
- self.root_filename = os.path.join(os.path.dirname(__file__), "root_cfg.py")
-
- def test_load(self):
- cfg = LazyConfig.load(self.root_filename)
-
- self.assertEqual(cfg.dir1a_dict.a, "modified")
- self.assertEqual(cfg.dir1b_dict.a, 1)
- self.assertEqual(cfg.lazyobj.x, "base_a_1")
-
- cfg.lazyobj.x = "new_x"
- # reload
- cfg = LazyConfig.load(self.root_filename)
- self.assertEqual(cfg.lazyobj.x, "base_a_1")
-
- def test_save_load(self):
- cfg = LazyConfig.load(self.root_filename)
- with tempfile.TemporaryDirectory(prefix="detectron2") as d:
- fname = os.path.join(d, "test_config.yaml")
- LazyConfig.save(cfg, fname)
- cfg2 = LazyConfig.load(fname)
-
- self.assertEqual(cfg2.lazyobj._target_, "itertools.count")
- self.assertEqual(cfg.lazyobj._target_, count)
- cfg2.lazyobj.pop("_target_")
- cfg.lazyobj.pop("_target_")
- # the rest are equal
- self.assertEqual(cfg, cfg2)
-
- def test_failed_save(self):
- cfg = DictConfig({"x": lambda: 3}, flags={"allow_objects": True})
- with tempfile.TemporaryDirectory(prefix="detectron2") as d:
- fname = os.path.join(d, "test_config.yaml")
- LazyConfig.save(cfg, fname)
- self.assertTrue(os.path.exists(fname))
- self.assertTrue(os.path.exists(fname + ".pkl"))
-
- def test_overrides(self):
- cfg = LazyConfig.load(self.root_filename)
- LazyConfig.apply_overrides(cfg, ["lazyobj.x=123", 'dir1b_dict.a="123"'])
- self.assertEqual(cfg.dir1b_dict.a, "123")
- self.assertEqual(cfg.lazyobj.x, 123)
-
- def test_invalid_overrides(self):
- cfg = LazyConfig.load(self.root_filename)
- with self.assertRaises(KeyError):
- LazyConfig.apply_overrides(cfg, ["lazyobj.x.xxx=123"])
-
- def test_to_py(self):
- cfg = LazyConfig.load(self.root_filename)
- cfg.lazyobj.x = {"a": 1, "b": 2, "c": L(count)(x={"r": "a", "s": 2.4, "t": [1, 2, 3, "z"]})}
- cfg.list = ["a", 1, "b", 3.2]
- py_str = LazyConfig.to_py(cfg)
- expected = """cfg.dir1a_dict.a = "modified"
-cfg.dir1a_dict.b = 2
-cfg.dir1b_dict.a = 1
-cfg.dir1b_dict.b = 2
-cfg.lazyobj = itertools.count(
- x={
- "a": 1,
- "b": 2,
- "c": itertools.count(x={"r": "a", "s": 2.4, "t": [1, 2, 3, "z"]}),
- },
- y="base_a_1_from_b",
-)
-cfg.list = ["a", 1, "b", 3.2]
-"""
- self.assertEqual(py_str, expected)
diff --git a/spaces/Tuana/find-the-animal/utils/config.py b/spaces/Tuana/find-the-animal/utils/config.py
deleted file mode 100644
index 2017963f8c761a645cfcc452001690f5979b78d7..0000000000000000000000000000000000000000
--- a/spaces/Tuana/find-the-animal/utils/config.py
+++ /dev/null
@@ -1 +0,0 @@
-INDEX_DIR = "data/index"
\ No newline at end of file
diff --git a/spaces/Vern0n/pls_work/Dockerfile b/spaces/Vern0n/pls_work/Dockerfile
deleted file mode 100644
index 6c01c09373883afcb4ea34ae2d316cd596e1737b..0000000000000000000000000000000000000000
--- a/spaces/Vern0n/pls_work/Dockerfile
+++ /dev/null
@@ -1,21 +0,0 @@
-FROM node:18-bullseye-slim
-
-RUN apt-get update && \
-    apt-get install -y git
-
-RUN git clone https://gitgud.io/khanon/oai-reverse-proxy.git /app
-
-WORKDIR /app
-
-RUN npm install
-
-COPY Dockerfile greeting.md* .env* ./
-
-RUN npm run build
-
-EXPOSE 7860
-
-ENV NODE_ENV=production
-
-CMD [ "npm", "start" ]
\ No newline at end of file
diff --git a/spaces/Vinnybustacap/Gryphe-MythoLogic-13b/app.py b/spaces/Vinnybustacap/Gryphe-MythoLogic-13b/app.py
deleted file mode 100644
index 366af980d86a1b9d026ed59fe4d987a4ee0e61c2..0000000000000000000000000000000000000000
--- a/spaces/Vinnybustacap/Gryphe-MythoLogic-13b/app.py
+++ /dev/null
@@ -1,3 +0,0 @@
-import gradio as gr
-
-gr.Interface.load("models/Gryphe/MythoLogic-13b").launch()
\ No newline at end of file
diff --git a/spaces/XzJosh/Nana7mi-Bert-VITS2/README.md b/spaces/XzJosh/Nana7mi-Bert-VITS2/README.md
deleted file mode 100644
index af1d898ea1f7d6675042e50d5874d5fcec8744c8..0000000000000000000000000000000000000000
--- a/spaces/XzJosh/Nana7mi-Bert-VITS2/README.md
+++ /dev/null
@@ -1,5 +0,0 @@
----
-license: mit
-sdk: gradio
-title: AI七海
----
\ No newline at end of file
diff --git a/spaces/YeOldHermit/Super-Resolution-Anime-Diffusion/Waifu2x/utils/__init__.py b/spaces/YeOldHermit/Super-Resolution-Anime-Diffusion/Waifu2x/utils/__init__.py
deleted file mode 100644
index fb1623a14865e1d1b1e79275a3d5595642f92d9b..0000000000000000000000000000000000000000
--- a/spaces/YeOldHermit/Super-Resolution-Anime-Diffusion/Waifu2x/utils/__init__.py
+++ /dev/null
@@ -1,8 +0,0 @@
-# -*- coding: utf-8 -*-
-# file: __init__.py
-# time: 05/12/2022
-# author: yangheng
-# github: https://github.com/yangheng95
-# huggingface: https://huggingface.co/yangheng
-# google scholar: https://scholar.google.com/citations?user=NPq5a_0AAAAJ&hl=en
-# Copyright (C) 2021. All Rights Reserved.
diff --git a/spaces/YeOldHermit/Super-Resolution-Anime-Diffusion/diffusers/pipelines/ddpm/__init__.py b/spaces/YeOldHermit/Super-Resolution-Anime-Diffusion/diffusers/pipelines/ddpm/__init__.py
deleted file mode 100644
index 8889bdae1224e91916e0f8454bafba0ee566f3b9..0000000000000000000000000000000000000000
--- a/spaces/YeOldHermit/Super-Resolution-Anime-Diffusion/diffusers/pipelines/ddpm/__init__.py
+++ /dev/null
@@ -1,2 +0,0 @@
-# flake8: noqa
-from .pipeline_ddpm import DDPMPipeline
diff --git a/spaces/YotamNitzan/domain-expansion/README.md b/spaces/YotamNitzan/domain-expansion/README.md
deleted file mode 100644
index 9dc5ab00f325c2428c4c6fdac3ce3c7526bede3a..0000000000000000000000000000000000000000
--- a/spaces/YotamNitzan/domain-expansion/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: Domain Expansion
-emoji: 👁
-colorFrom: pink
-colorTo: yellow
-sdk: docker
-pinned: false
-tags:
-- making-demos
-duplicated_from: alvanlii/domain-expansion
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/YuAnthony/Audio-Caption/data_handling/clotho_test_set.py b/spaces/YuAnthony/Audio-Caption/data_handling/clotho_test_set.py
deleted file mode 100644
index c272a11b3aed808112c4e9b94ba7c6ba9d3fcfd9..0000000000000000000000000000000000000000
--- a/spaces/YuAnthony/Audio-Caption/data_handling/clotho_test_set.py
+++ /dev/null
@@ -1,17 +0,0 @@
-import glob
-import numpy as np
-import os
-from torch.utils.data import Dataset
-
-
-class ClothoTestset(Dataset):
- def __init__(self, data_dir):
- super(ClothoTestset, self).__init__()
- self.data_dir = data_dir
- self.data = glob.glob(f'{data_dir}/*.npy')
-
- def __len__(self):
- return len(self.data)
-
-    def __getitem__(self, item): # return: mel, filename (without extension)
- return np.load(self.data[item]), os.path.splitext(os.path.basename(self.data[item]))[0]
\ No newline at end of file
diff --git a/spaces/Yusin/ChatGPT-Speech/text/mandarin.py b/spaces/Yusin/ChatGPT-Speech/text/mandarin.py
deleted file mode 100644
index 8bc31aea94e1abe111f9bb78c878c1c71e55d4ba..0000000000000000000000000000000000000000
--- a/spaces/Yusin/ChatGPT-Speech/text/mandarin.py
+++ /dev/null
@@ -1,170 +0,0 @@
-import os
-import re
-import sys
-
-import jieba
-import cn2an
-import logging
-from pypinyin import lazy_pinyin, BOPOMOFO
-
-# logging.getLogger('jieba').setLevel(logging.WARNING)
-# jieba.set_dictionary(os.path.dirname(sys.argv[0]) + '/jieba/dict.txt')
-
-# List of (Latin alphabet, bopomofo) pairs:
-_latin_to_bopomofo = [(re.compile('%s' % x[0], re.IGNORECASE), x[1]) for x in [
- ('a', 'ㄟˉ'),
- ('b', 'ㄅㄧˋ'),
- ('c', 'ㄙㄧˉ'),
- ('d', 'ㄉㄧˋ'),
- ('e', 'ㄧˋ'),
- ('f', 'ㄝˊㄈㄨˋ'),
- ('g', 'ㄐㄧˋ'),
- ('h', 'ㄝˇㄑㄩˋ'),
- ('i', 'ㄞˋ'),
- ('j', 'ㄐㄟˋ'),
- ('k', 'ㄎㄟˋ'),
- ('l', 'ㄝˊㄛˋ'),
- ('m', 'ㄝˊㄇㄨˋ'),
- ('n', 'ㄣˉ'),
- ('o', 'ㄡˉ'),
- ('p', 'ㄆㄧˉ'),
- ('q', 'ㄎㄧㄡˉ'),
- ('r', 'ㄚˋ'),
- ('s', 'ㄝˊㄙˋ'),
- ('t', 'ㄊㄧˋ'),
- ('u', 'ㄧㄡˉ'),
- ('v', 'ㄨㄧˉ'),
- ('w', 'ㄉㄚˋㄅㄨˋㄌㄧㄡˋ'),
- ('x', 'ㄝˉㄎㄨˋㄙˋ'),
- ('y', 'ㄨㄞˋ'),
- ('z', 'ㄗㄟˋ')
-]]
-
-# List of (bopomofo, romaji) pairs:
-_bopomofo_to_romaji = [(re.compile('%s' % x[0]), x[1]) for x in [
- ('ㄅㄛ', 'p⁼wo'),
- ('ㄆㄛ', 'pʰwo'),
- ('ㄇㄛ', 'mwo'),
- ('ㄈㄛ', 'fwo'),
- ('ㄅ', 'p⁼'),
- ('ㄆ', 'pʰ'),
- ('ㄇ', 'm'),
- ('ㄈ', 'f'),
- ('ㄉ', 't⁼'),
- ('ㄊ', 'tʰ'),
- ('ㄋ', 'n'),
- ('ㄌ', 'l'),
- ('ㄍ', 'k⁼'),
- ('ㄎ', 'kʰ'),
- ('ㄏ', 'h'),
- ('ㄐ', 'ʧ⁼'),
- ('ㄑ', 'ʧʰ'),
- ('ㄒ', 'ʃ'),
- ('ㄓ', 'ʦ`⁼'),
- ('ㄔ', 'ʦ`ʰ'),
- ('ㄕ', 's`'),
- ('ㄖ', 'ɹ`'),
- ('ㄗ', 'ʦ⁼'),
- ('ㄘ', 'ʦʰ'),
- ('ㄙ', 's'),
- ('ㄚ', 'a'),
- ('ㄛ', 'o'),
- ('ㄜ', 'ə'),
- ('ㄝ', 'e'),
- ('ㄞ', 'ai'),
- ('ㄟ', 'ei'),
- ('ㄠ', 'au'),
- ('ㄡ', 'ou'),
- ('ㄧㄢ', 'yeNN'),
- ('ㄢ', 'aNN'),
- ('ㄧㄣ', 'iNN'),
- ('ㄣ', 'əNN'),
- ('ㄤ', 'aNg'),
- ('ㄧㄥ', 'iNg'),
- ('ㄨㄥ', 'uNg'),
- ('ㄩㄥ', 'yuNg'),
- ('ㄥ', 'əNg'),
- ('ㄦ', 'əɻ'),
- ('ㄧ', 'i'),
- ('ㄨ', 'u'),
- ('ㄩ', 'ɥ'),
- ('ˉ', '→'),
- ('ˊ', '↑'),
- ('ˇ', '↓↑'),
- ('ˋ', '↓'),
- ('˙', ''),
- (',', ','),
- ('。', '.'),
- ('!', '!'),
- ('?', '?'),
- ('—', '-')
-]]
-
-# List of (romaji, ipa) pairs:
-_romaji_to_ipa = [(re.compile('%s' % x[0], re.IGNORECASE), x[1]) for x in [
- ('ʃy', 'ʃ'),
- ('ʧʰy', 'ʧʰ'),
- ('ʧ⁼y', 'ʧ⁼'),
- ('NN', 'n'),
- ('Ng', 'ŋ'),
- ('y', 'j'),
- ('h', 'x')
-]]
-
-
-def number_to_chinese(text):
- numbers = re.findall(r'\d+(?:\.?\d+)?', text)
- for number in numbers:
- text = text.replace(number, cn2an.an2cn(number), 1)
- return text
-
-
-def chinese_to_bopomofo(text):
- text = text.replace('、', ',').replace(';', ',').replace(':', ',')
- words = jieba.lcut(text, cut_all=False)
- text = ''
- for word in words:
- bopomofos = lazy_pinyin(word, BOPOMOFO)
- if not re.search('[\u4e00-\u9fff]', word):
- text += word
- continue
- for i in range(len(bopomofos)):
- if re.match('[\u3105-\u3129]', bopomofos[i][-1]):
- bopomofos[i] += 'ˉ'
- if text != '':
- text += ' '
- text += ''.join(bopomofos)
- return text
-
-
-def latin_to_bopomofo(text):
- for regex, replacement in _latin_to_bopomofo:
- text = re.sub(regex, replacement, text)
- return text
-
-
-def bopomofo_to_romaji(text):
- for regex, replacement in _bopomofo_to_romaji:
- text = re.sub(regex, replacement, text)
- return text
-
-
-def chinese_to_romaji(text):
- text = number_to_chinese(text)
- text = chinese_to_bopomofo(text)
- text = latin_to_bopomofo(text)
- text = bopomofo_to_romaji(text)
- text = re.sub('i[aoe]', lambda x: 'y' + x.group(0)[1:], text)
- text = re.sub('u[aoəe]', lambda x: 'w' + x.group(0)[1:], text)
- text = re.sub('([ʦsɹ]`[⁼ʰ]?)([→↓↑ ]+|$)', lambda x: x.group(1) +
- 'ɹ`' + x.group(2), text).replace('ɻ', 'ɹ`')
- text = re.sub('([ʦs][⁼ʰ]?)([→↓↑ ]+|$)',
- lambda x: x.group(1) + 'ɹ' + x.group(2), text)
- return text
-
-
-def chinese_to_lazy_ipa(text):
- text = chinese_to_romaji(text)
- for regex, replacement in _romaji_to_ipa:
- text = re.sub(regex, replacement, text)
- return text
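Taken together, the helpers above form a single normalization pipeline: Arabic numerals are rewritten as Chinese numerals, the text is segmented and converted to bopomofo, stray Latin letters are mapped to bopomofo spellings, and the result is rewritten as tone-annotated romaji or lazy IPA. A minimal usage sketch (assuming `jieba`, `pypinyin`, and `cn2an` are installed; shows only the call shape, not an exact transcription):

```python
# Illustrative usage of the deleted module's public helpers.
ipa = chinese_to_lazy_ipa('我有2只猫')
print(ipa)  # a tone-annotated lazy-IPA string; exact output depends on jieba segmentation
```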
diff --git a/spaces/abhishek/sketch-to-image/annotator/uniformer/mmdet/models/builder.py b/spaces/abhishek/sketch-to-image/annotator/uniformer/mmdet/models/builder.py
deleted file mode 100644
index 81c927e507a7c1625ffb114de10e93c94927af25..0000000000000000000000000000000000000000
--- a/spaces/abhishek/sketch-to-image/annotator/uniformer/mmdet/models/builder.py
+++ /dev/null
@@ -1,77 +0,0 @@
-import warnings
-
-from mmcv.utils import Registry, build_from_cfg
-from torch import nn
-
-BACKBONES = Registry('backbone')
-NECKS = Registry('neck')
-ROI_EXTRACTORS = Registry('roi_extractor')
-SHARED_HEADS = Registry('shared_head')
-HEADS = Registry('head')
-LOSSES = Registry('loss')
-DETECTORS = Registry('detector')
-
-
-def build(cfg, registry, default_args=None):
- """Build a module.
-
- Args:
-        cfg (dict, list[dict]): The config of modules, it is either a dict
- or a list of configs.
- registry (:obj:`Registry`): A registry the module belongs to.
- default_args (dict, optional): Default arguments to build the module.
- Defaults to None.
-
- Returns:
- nn.Module: A built nn module.
- """
- if isinstance(cfg, list):
- modules = [
- build_from_cfg(cfg_, registry, default_args) for cfg_ in cfg
- ]
- return nn.Sequential(*modules)
- else:
- return build_from_cfg(cfg, registry, default_args)
-
-
-def build_backbone(cfg):
- """Build backbone."""
- return build(cfg, BACKBONES)
-
-
-def build_neck(cfg):
- """Build neck."""
- return build(cfg, NECKS)
-
-
-def build_roi_extractor(cfg):
- """Build roi extractor."""
- return build(cfg, ROI_EXTRACTORS)
-
-
-def build_shared_head(cfg):
- """Build shared head."""
- return build(cfg, SHARED_HEADS)
-
-
-def build_head(cfg):
- """Build head."""
- return build(cfg, HEADS)
-
-
-def build_loss(cfg):
- """Build loss."""
- return build(cfg, LOSSES)
-
-
-def build_detector(cfg, train_cfg=None, test_cfg=None):
- """Build detector."""
- if train_cfg is not None or test_cfg is not None:
- warnings.warn(
-            'train_cfg and test_cfg are deprecated, '
- 'please specify them in model', UserWarning)
- assert cfg.get('train_cfg') is None or train_cfg is None, \
- 'train_cfg specified in both outer field and model field '
- assert cfg.get('test_cfg') is None or test_cfg is None, \
- 'test_cfg specified in both outer field and model field '
- return build(cfg, DETECTORS, dict(train_cfg=train_cfg, test_cfg=test_cfg))
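Each `build_*` helper above is a thin wrapper around `build_from_cfg`, which instantiates whatever class the `type` key names from the corresponding registry. A minimal sketch of that pattern (the `TOY` registry and `ConvBlock` module are hypothetical; assumes `mmcv` and `torch` are installed):

```python
from mmcv.utils import Registry, build_from_cfg
from torch import nn

TOY = Registry('toy')

@TOY.register_module()
class ConvBlock(nn.Module):
    """Hypothetical module used only to illustrate the registry pattern."""
    def __init__(self, in_channels, out_channels):
        super().__init__()
        self.conv = nn.Conv2d(in_channels, out_channels, 3, padding=1)

    def forward(self, x):
        return self.conv(x)

# A config dict names the registered class via `type`; remaining keys become kwargs.
cfg = dict(type='ConvBlock', in_channels=3, out_channels=16)
block = build_from_cfg(cfg, TOY)

# A list of such configs would be wrapped in nn.Sequential by `build` above.
```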
diff --git a/spaces/abhishek/sketch-to-image/annotator/uniformer_base/mmseg/core/evaluation/class_names.py b/spaces/abhishek/sketch-to-image/annotator/uniformer_base/mmseg/core/evaluation/class_names.py
deleted file mode 100644
index ffae816cf980ce4b03e491cc0c4298cb823797e6..0000000000000000000000000000000000000000
--- a/spaces/abhishek/sketch-to-image/annotator/uniformer_base/mmseg/core/evaluation/class_names.py
+++ /dev/null
@@ -1,152 +0,0 @@
-import annotator.uniformer.mmcv as mmcv
-
-
-def cityscapes_classes():
- """Cityscapes class names for external use."""
- return [
- 'road', 'sidewalk', 'building', 'wall', 'fence', 'pole',
- 'traffic light', 'traffic sign', 'vegetation', 'terrain', 'sky',
- 'person', 'rider', 'car', 'truck', 'bus', 'train', 'motorcycle',
- 'bicycle'
- ]
-
-
-def ade_classes():
- """ADE20K class names for external use."""
- return [
- 'wall', 'building', 'sky', 'floor', 'tree', 'ceiling', 'road', 'bed ',
- 'windowpane', 'grass', 'cabinet', 'sidewalk', 'person', 'earth',
- 'door', 'table', 'mountain', 'plant', 'curtain', 'chair', 'car',
- 'water', 'painting', 'sofa', 'shelf', 'house', 'sea', 'mirror', 'rug',
- 'field', 'armchair', 'seat', 'fence', 'desk', 'rock', 'wardrobe',
- 'lamp', 'bathtub', 'railing', 'cushion', 'base', 'box', 'column',
- 'signboard', 'chest of drawers', 'counter', 'sand', 'sink',
- 'skyscraper', 'fireplace', 'refrigerator', 'grandstand', 'path',
- 'stairs', 'runway', 'case', 'pool table', 'pillow', 'screen door',
- 'stairway', 'river', 'bridge', 'bookcase', 'blind', 'coffee table',
- 'toilet', 'flower', 'book', 'hill', 'bench', 'countertop', 'stove',
- 'palm', 'kitchen island', 'computer', 'swivel chair', 'boat', 'bar',
- 'arcade machine', 'hovel', 'bus', 'towel', 'light', 'truck', 'tower',
- 'chandelier', 'awning', 'streetlight', 'booth', 'television receiver',
- 'airplane', 'dirt track', 'apparel', 'pole', 'land', 'bannister',
- 'escalator', 'ottoman', 'bottle', 'buffet', 'poster', 'stage', 'van',
- 'ship', 'fountain', 'conveyer belt', 'canopy', 'washer', 'plaything',
- 'swimming pool', 'stool', 'barrel', 'basket', 'waterfall', 'tent',
- 'bag', 'minibike', 'cradle', 'oven', 'ball', 'food', 'step', 'tank',
- 'trade name', 'microwave', 'pot', 'animal', 'bicycle', 'lake',
- 'dishwasher', 'screen', 'blanket', 'sculpture', 'hood', 'sconce',
- 'vase', 'traffic light', 'tray', 'ashcan', 'fan', 'pier', 'crt screen',
- 'plate', 'monitor', 'bulletin board', 'shower', 'radiator', 'glass',
- 'clock', 'flag'
- ]
-
-
-def voc_classes():
- """Pascal VOC class names for external use."""
- return [
- 'background', 'aeroplane', 'bicycle', 'bird', 'boat', 'bottle', 'bus',
- 'car', 'cat', 'chair', 'cow', 'diningtable', 'dog', 'horse',
- 'motorbike', 'person', 'pottedplant', 'sheep', 'sofa', 'train',
- 'tvmonitor'
- ]
-
-
-def cityscapes_palette():
- """Cityscapes palette for external use."""
- return [[128, 64, 128], [244, 35, 232], [70, 70, 70], [102, 102, 156],
- [190, 153, 153], [153, 153, 153], [250, 170, 30], [220, 220, 0],
- [107, 142, 35], [152, 251, 152], [70, 130, 180], [220, 20, 60],
- [255, 0, 0], [0, 0, 142], [0, 0, 70], [0, 60, 100], [0, 80, 100],
- [0, 0, 230], [119, 11, 32]]
-
-
-def ade_palette():
- """ADE20K palette for external use."""
- return [[120, 120, 120], [180, 120, 120], [6, 230, 230], [80, 50, 50],
- [4, 200, 3], [120, 120, 80], [140, 140, 140], [204, 5, 255],
- [230, 230, 230], [4, 250, 7], [224, 5, 255], [235, 255, 7],
- [150, 5, 61], [120, 120, 70], [8, 255, 51], [255, 6, 82],
- [143, 255, 140], [204, 255, 4], [255, 51, 7], [204, 70, 3],
- [0, 102, 200], [61, 230, 250], [255, 6, 51], [11, 102, 255],
- [255, 7, 71], [255, 9, 224], [9, 7, 230], [220, 220, 220],
- [255, 9, 92], [112, 9, 255], [8, 255, 214], [7, 255, 224],
- [255, 184, 6], [10, 255, 71], [255, 41, 10], [7, 255, 255],
- [224, 255, 8], [102, 8, 255], [255, 61, 6], [255, 194, 7],
- [255, 122, 8], [0, 255, 20], [255, 8, 41], [255, 5, 153],
- [6, 51, 255], [235, 12, 255], [160, 150, 20], [0, 163, 255],
- [140, 140, 140], [250, 10, 15], [20, 255, 0], [31, 255, 0],
- [255, 31, 0], [255, 224, 0], [153, 255, 0], [0, 0, 255],
- [255, 71, 0], [0, 235, 255], [0, 173, 255], [31, 0, 255],
- [11, 200, 200], [255, 82, 0], [0, 255, 245], [0, 61, 255],
- [0, 255, 112], [0, 255, 133], [255, 0, 0], [255, 163, 0],
- [255, 102, 0], [194, 255, 0], [0, 143, 255], [51, 255, 0],
- [0, 82, 255], [0, 255, 41], [0, 255, 173], [10, 0, 255],
- [173, 255, 0], [0, 255, 153], [255, 92, 0], [255, 0, 255],
- [255, 0, 245], [255, 0, 102], [255, 173, 0], [255, 0, 20],
- [255, 184, 184], [0, 31, 255], [0, 255, 61], [0, 71, 255],
- [255, 0, 204], [0, 255, 194], [0, 255, 82], [0, 10, 255],
- [0, 112, 255], [51, 0, 255], [0, 194, 255], [0, 122, 255],
- [0, 255, 163], [255, 153, 0], [0, 255, 10], [255, 112, 0],
- [143, 255, 0], [82, 0, 255], [163, 255, 0], [255, 235, 0],
- [8, 184, 170], [133, 0, 255], [0, 255, 92], [184, 0, 255],
- [255, 0, 31], [0, 184, 255], [0, 214, 255], [255, 0, 112],
- [92, 255, 0], [0, 224, 255], [112, 224, 255], [70, 184, 160],
- [163, 0, 255], [153, 0, 255], [71, 255, 0], [255, 0, 163],
- [255, 204, 0], [255, 0, 143], [0, 255, 235], [133, 255, 0],
- [255, 0, 235], [245, 0, 255], [255, 0, 122], [255, 245, 0],
- [10, 190, 212], [214, 255, 0], [0, 204, 255], [20, 0, 255],
- [255, 255, 0], [0, 153, 255], [0, 41, 255], [0, 255, 204],
- [41, 0, 255], [41, 255, 0], [173, 0, 255], [0, 245, 255],
- [71, 0, 255], [122, 0, 255], [0, 255, 184], [0, 92, 255],
- [184, 255, 0], [0, 133, 255], [255, 214, 0], [25, 194, 194],
- [102, 255, 0], [92, 0, 255]]
-
-
-def voc_palette():
- """Pascal VOC palette for external use."""
- return [[0, 0, 0], [128, 0, 0], [0, 128, 0], [128, 128, 0], [0, 0, 128],
- [128, 0, 128], [0, 128, 128], [128, 128, 128], [64, 0, 0],
- [192, 0, 0], [64, 128, 0], [192, 128, 0], [64, 0, 128],
- [192, 0, 128], [64, 128, 128], [192, 128, 128], [0, 64, 0],
- [128, 64, 0], [0, 192, 0], [128, 192, 0], [0, 64, 128]]
-
-
-dataset_aliases = {
- 'cityscapes': ['cityscapes'],
- 'ade': ['ade', 'ade20k'],
- 'voc': ['voc', 'pascal_voc', 'voc12', 'voc12aug']
-}
-
-
-def get_classes(dataset):
- """Get class names of a dataset."""
- alias2name = {}
- for name, aliases in dataset_aliases.items():
- for alias in aliases:
- alias2name[alias] = name
-
- if mmcv.is_str(dataset):
- if dataset in alias2name:
- labels = eval(alias2name[dataset] + '_classes()')
- else:
- raise ValueError(f'Unrecognized dataset: {dataset}')
- else:
-        raise TypeError(f'dataset must be a str, but got {type(dataset)}')
- return labels
-
-
-def get_palette(dataset):
- """Get class palette (RGB) of a dataset."""
- alias2name = {}
- for name, aliases in dataset_aliases.items():
- for alias in aliases:
- alias2name[alias] = name
-
- if mmcv.is_str(dataset):
- if dataset in alias2name:
- labels = eval(alias2name[dataset] + '_palette()')
- else:
- raise ValueError(f'Unrecognized dataset: {dataset}')
- else:
-        raise TypeError(f'dataset must be a str, but got {type(dataset)}')
- return labels
diff --git a/spaces/abrar-lohia/text-2-character-anim/VQTrans/GPT_eval_multi.py b/spaces/abrar-lohia/text-2-character-anim/VQTrans/GPT_eval_multi.py
deleted file mode 100644
index b5e3ebcb1199e42cf16748e60863b554a0046f00..0000000000000000000000000000000000000000
--- a/spaces/abrar-lohia/text-2-character-anim/VQTrans/GPT_eval_multi.py
+++ /dev/null
@@ -1,121 +0,0 @@
-import os
-import torch
-import numpy as np
-from torch.utils.tensorboard import SummaryWriter
-import json
-import clip
-
-import options.option_transformer as option_trans
-import models.vqvae as vqvae
-import utils.utils_model as utils_model
-import utils.eval_trans as eval_trans
-from dataset import dataset_TM_eval
-import models.t2m_trans as trans
-from options.get_eval_option import get_opt
-from models.evaluator_wrapper import EvaluatorModelWrapper
-import warnings
-warnings.filterwarnings('ignore')
-
-##### ---- Exp dirs ---- #####
-args = option_trans.get_args_parser()
-torch.manual_seed(args.seed)
-
-args.out_dir = os.path.join(args.out_dir, f'{args.exp_name}')
-os.makedirs(args.out_dir, exist_ok = True)
-
-##### ---- Logger ---- #####
-logger = utils_model.get_logger(args.out_dir)
-writer = SummaryWriter(args.out_dir)
-logger.info(json.dumps(vars(args), indent=4, sort_keys=True))
-
-from utils.word_vectorizer import WordVectorizer
-w_vectorizer = WordVectorizer('./glove', 'our_vab')
-val_loader = dataset_TM_eval.DATALoader(args.dataname, True, 32, w_vectorizer)
-
-dataset_opt_path = 'checkpoints/kit/Comp_v6_KLD005/opt.txt' if args.dataname == 'kit' else 'checkpoints/t2m/Comp_v6_KLD005/opt.txt'
-
-wrapper_opt = get_opt(dataset_opt_path, torch.device('cuda'))
-eval_wrapper = EvaluatorModelWrapper(wrapper_opt)
-
-##### ---- Network ---- #####
-
-## load clip model and datasets
-clip_model, clip_preprocess = clip.load("ViT-B/32", device=torch.device('cuda'), jit=False, download_root='/apdcephfs_cq2/share_1290939/maelyszhang/.cache/clip') # Must set jit=False for training
-clip.model.convert_weights(clip_model) # Actually this line is unnecessary since CLIP is already in float16 by default
-clip_model.eval()
-for p in clip_model.parameters():
- p.requires_grad = False
-
-net = vqvae.HumanVQVAE(args, ## use args to define different parameters in different quantizers
- args.nb_code,
- args.code_dim,
- args.output_emb_width,
- args.down_t,
- args.stride_t,
- args.width,
- args.depth,
- args.dilation_growth_rate)
-
-
-trans_encoder = trans.Text2Motion_Transformer(num_vq=args.nb_code,
- embed_dim=args.embed_dim_gpt,
- clip_dim=args.clip_dim,
- block_size=args.block_size,
- num_layers=args.num_layers,
- n_head=args.n_head_gpt,
- drop_out_rate=args.drop_out_rate,
- fc_rate=args.ff_rate)
-
-
-print ('loading checkpoint from {}'.format(args.resume_pth))
-ckpt = torch.load(args.resume_pth, map_location='cpu')
-net.load_state_dict(ckpt['net'], strict=True)
-net.eval()
-net.cuda()
-
-if args.resume_trans is not None:
- print ('loading transformer checkpoint from {}'.format(args.resume_trans))
- ckpt = torch.load(args.resume_trans, map_location='cpu')
- trans_encoder.load_state_dict(ckpt['trans'], strict=True)
-trans_encoder.train()
-trans_encoder.cuda()
-
-
-fid = []
-div = []
-top1 = []
-top2 = []
-top3 = []
-matching = []
-multi = []
-repeat_time = 20
-
-
-for i in range(repeat_time):
- best_fid, best_iter, best_div, best_top1, best_top2, best_top3, best_matching, best_multi, writer, logger = eval_trans.evaluation_transformer_test(args.out_dir, val_loader, net, trans_encoder, logger, writer, 0, best_fid=1000, best_iter=0, best_div=100, best_top1=0, best_top2=0, best_top3=0, best_matching=100, best_multi=0, clip_model=clip_model, eval_wrapper=eval_wrapper, draw=False, savegif=False, save=False, savenpy=(i==0))
- fid.append(best_fid)
- div.append(best_div)
- top1.append(best_top1)
- top2.append(best_top2)
- top3.append(best_top3)
- matching.append(best_matching)
- multi.append(best_multi)
-
-print('final result:')
-print('fid: ', sum(fid)/repeat_time)
-print('div: ', sum(div)/repeat_time)
-print('top1: ', sum(top1)/repeat_time)
-print('top2: ', sum(top2)/repeat_time)
-print('top3: ', sum(top3)/repeat_time)
-print('matching: ', sum(matching)/repeat_time)
-print('multi: ', sum(multi)/repeat_time)
-
-fid = np.array(fid)
-div = np.array(div)
-top1 = np.array(top1)
-top2 = np.array(top2)
-top3 = np.array(top3)
-matching = np.array(matching)
-multi = np.array(multi)
-msg_final = f"FID. {np.mean(fid):.3f}, conf. {np.std(fid)*1.96/np.sqrt(repeat_time):.3f}, Diversity. {np.mean(div):.3f}, conf. {np.std(div)*1.96/np.sqrt(repeat_time):.3f}, TOP1. {np.mean(top1):.3f}, conf. {np.std(top1)*1.96/np.sqrt(repeat_time):.3f}, TOP2. {np.mean(top2):.3f}, conf. {np.std(top2)*1.96/np.sqrt(repeat_time):.3f}, TOP3. {np.mean(top3):.3f}, conf. {np.std(top3)*1.96/np.sqrt(repeat_time):.3f}, Matching. {np.mean(matching):.3f}, conf. {np.std(matching)*1.96/np.sqrt(repeat_time):.3f}, Multi. {np.mean(multi):.3f}, conf. {np.std(multi)*1.96/np.sqrt(repeat_time):.3f}"
-logger.info(msg_final)
\ No newline at end of file
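The `conf.` values reported in `msg_final` above are the half-widths of a normal-approximation 95% confidence interval over the `repeat_time = 20` evaluation runs:

$$\bar{x} \;\pm\; 1.96\,\frac{\sigma}{\sqrt{n}}, \qquad n = 20$$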
diff --git a/spaces/abrar-lohia/text-2-character-anim/VQTrans/utils/utils_model.py b/spaces/abrar-lohia/text-2-character-anim/VQTrans/utils/utils_model.py
deleted file mode 100644
index b3653a47ddb96f2ba27aae73b4eef8be904e9bf0..0000000000000000000000000000000000000000
--- a/spaces/abrar-lohia/text-2-character-anim/VQTrans/utils/utils_model.py
+++ /dev/null
@@ -1,66 +0,0 @@
-import numpy as np
-import torch
-import torch.optim as optim
-import logging
-import os
-import sys
-
-def getCi(accLog):
-
- mean = np.mean(accLog)
- std = np.std(accLog)
- ci95 = 1.96*std/np.sqrt(len(accLog))
-
- return mean, ci95
-
-def get_logger(out_dir):
- logger = logging.getLogger('Exp')
- logger.setLevel(logging.INFO)
- formatter = logging.Formatter("%(asctime)s %(levelname)s %(message)s")
-
- file_path = os.path.join(out_dir, "run.log")
- file_hdlr = logging.FileHandler(file_path)
- file_hdlr.setFormatter(formatter)
-
- strm_hdlr = logging.StreamHandler(sys.stdout)
- strm_hdlr.setFormatter(formatter)
-
- logger.addHandler(file_hdlr)
- logger.addHandler(strm_hdlr)
- return logger
-
-## Optimizer
-def initial_optim(decay_option, lr, weight_decay, net, optimizer) :
-
- if optimizer == 'adamw' :
- optimizer_adam_family = optim.AdamW
- elif optimizer == 'adam' :
- optimizer_adam_family = optim.Adam
- if decay_option == 'all':
- #optimizer = optimizer_adam_family(net.parameters(), lr=lr, betas=(0.9, 0.999), weight_decay=weight_decay)
- optimizer = optimizer_adam_family(net.parameters(), lr=lr, betas=(0.5, 0.9), weight_decay=weight_decay)
-
- elif decay_option == 'noVQ':
- all_params = set(net.parameters())
- no_decay = set([net.vq_layer])
-
- decay = all_params - no_decay
- optimizer = optimizer_adam_family([
- {'params': list(no_decay), 'weight_decay': 0},
- {'params': list(decay), 'weight_decay' : weight_decay}], lr=lr)
-
- return optimizer
-
-
-def get_motion_with_trans(motion, velocity) :
- '''
- motion : torch.tensor, shape (batch_size, T, 72), with the global translation = 0
-    velocity : torch.tensor, shape (batch_size, T, 3), contains the per-frame root velocity used to recover the global translation
-
- '''
- trans = torch.cumsum(velocity, dim=1)
- trans = trans - trans[:, :1] ## the first root is initialized at 0 (just for visualization)
- trans = trans.repeat((1, 1, 21))
- motion_with_trans = motion + trans
- return motion_with_trans
-
\ No newline at end of file
diff --git a/spaces/adirik/stylemc-demo/encoder4editing/datasets/__init__.py b/spaces/adirik/stylemc-demo/encoder4editing/datasets/__init__.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/adirik/stylemc-demo/torch_utils/ops/upfirdn2d.cpp b/spaces/adirik/stylemc-demo/torch_utils/ops/upfirdn2d.cpp
deleted file mode 100644
index 2d7177fc60040751d20e9a8da0301fa3ab64968a..0000000000000000000000000000000000000000
--- a/spaces/adirik/stylemc-demo/torch_utils/ops/upfirdn2d.cpp
+++ /dev/null
@@ -1,103 +0,0 @@
-// Copyright (c) 2021, NVIDIA CORPORATION. All rights reserved.
-//
-// NVIDIA CORPORATION and its licensors retain all intellectual property
-// and proprietary rights in and to this software, related documentation
-// and any modifications thereto. Any use, reproduction, disclosure or
-// distribution of this software and related documentation without an express
-// license agreement from NVIDIA CORPORATION is strictly prohibited.
-
-#include <torch/extension.h>
-#include <ATen/cuda/CUDAContext.h>
-#include <c10/cuda/CUDAGuard.h>
-#include "upfirdn2d.h"
-
-//------------------------------------------------------------------------
-
-static torch::Tensor upfirdn2d(torch::Tensor x, torch::Tensor f, int upx, int upy, int downx, int downy, int padx0, int padx1, int pady0, int pady1, bool flip, float gain)
-{
- // Validate arguments.
- TORCH_CHECK(x.is_cuda(), "x must reside on CUDA device");
- TORCH_CHECK(f.device() == x.device(), "f must reside on the same device as x");
- TORCH_CHECK(f.dtype() == torch::kFloat, "f must be float32");
- TORCH_CHECK(x.numel() <= INT_MAX, "x is too large");
- TORCH_CHECK(f.numel() <= INT_MAX, "f is too large");
- TORCH_CHECK(x.dim() == 4, "x must be rank 4");
- TORCH_CHECK(f.dim() == 2, "f must be rank 2");
- TORCH_CHECK(f.size(0) >= 1 && f.size(1) >= 1, "f must be at least 1x1");
- TORCH_CHECK(upx >= 1 && upy >= 1, "upsampling factor must be at least 1");
- TORCH_CHECK(downx >= 1 && downy >= 1, "downsampling factor must be at least 1");
-
- // Create output tensor.
- const at::cuda::OptionalCUDAGuard device_guard(device_of(x));
- int outW = ((int)x.size(3) * upx + padx0 + padx1 - (int)f.size(1) + downx) / downx;
- int outH = ((int)x.size(2) * upy + pady0 + pady1 - (int)f.size(0) + downy) / downy;
- TORCH_CHECK(outW >= 1 && outH >= 1, "output must be at least 1x1");
- torch::Tensor y = torch::empty({x.size(0), x.size(1), outH, outW}, x.options(), x.suggest_memory_format());
- TORCH_CHECK(y.numel() <= INT_MAX, "output is too large");
-
- // Initialize CUDA kernel parameters.
- upfirdn2d_kernel_params p;
- p.x = x.data_ptr();
- p.f = f.data_ptr();
- p.y = y.data_ptr();
- p.up = make_int2(upx, upy);
- p.down = make_int2(downx, downy);
- p.pad0 = make_int2(padx0, pady0);
- p.flip = (flip) ? 1 : 0;
- p.gain = gain;
- p.inSize = make_int4((int)x.size(3), (int)x.size(2), (int)x.size(1), (int)x.size(0));
- p.inStride = make_int4((int)x.stride(3), (int)x.stride(2), (int)x.stride(1), (int)x.stride(0));
- p.filterSize = make_int2((int)f.size(1), (int)f.size(0));
- p.filterStride = make_int2((int)f.stride(1), (int)f.stride(0));
- p.outSize = make_int4((int)y.size(3), (int)y.size(2), (int)y.size(1), (int)y.size(0));
- p.outStride = make_int4((int)y.stride(3), (int)y.stride(2), (int)y.stride(1), (int)y.stride(0));
- p.sizeMajor = (p.inStride.z == 1) ? p.inSize.w : p.inSize.w * p.inSize.z;
- p.sizeMinor = (p.inStride.z == 1) ? p.inSize.z : 1;
-
- // Choose CUDA kernel.
- upfirdn2d_kernel_spec spec;
- AT_DISPATCH_FLOATING_TYPES_AND_HALF(x.scalar_type(), "upfirdn2d_cuda", [&]
- {
- spec = choose_upfirdn2d_kernel(p);
- });
-
- // Set looping options.
- p.loopMajor = (p.sizeMajor - 1) / 16384 + 1;
- p.loopMinor = spec.loopMinor;
- p.loopX = spec.loopX;
- p.launchMinor = (p.sizeMinor - 1) / p.loopMinor + 1;
- p.launchMajor = (p.sizeMajor - 1) / p.loopMajor + 1;
-
- // Compute grid size.
- dim3 blockSize, gridSize;
- if (spec.tileOutW < 0) // large
- {
- blockSize = dim3(4, 32, 1);
- gridSize = dim3(
- ((p.outSize.y - 1) / blockSize.x + 1) * p.launchMinor,
- (p.outSize.x - 1) / (blockSize.y * p.loopX) + 1,
- p.launchMajor);
- }
- else // small
- {
- blockSize = dim3(256, 1, 1);
- gridSize = dim3(
- ((p.outSize.y - 1) / spec.tileOutH + 1) * p.launchMinor,
- (p.outSize.x - 1) / (spec.tileOutW * p.loopX) + 1,
- p.launchMajor);
- }
-
- // Launch CUDA kernel.
- void* args[] = {&p};
- AT_CUDA_CHECK(cudaLaunchKernel(spec.kernel, gridSize, blockSize, args, 0, at::cuda::getCurrentCUDAStream()));
- return y;
-}
-
-//------------------------------------------------------------------------
-
-PYBIND11_MODULE(TORCH_EXTENSION_NAME, m)
-{
- m.def("upfirdn2d", &upfirdn2d);
-}
-
-//------------------------------------------------------------------------
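The output-size check above implements the usual upfirdn sizing rule (with integer division):

$$W_{\text{out}} = \left\lfloor \frac{W_{\text{in}}\,u_x + p_{x0} + p_{x1} - W_f + d_x}{d_x} \right\rfloor, \qquad
H_{\text{out}} = \left\lfloor \frac{H_{\text{in}}\,u_y + p_{y0} + p_{y1} - H_f + d_y}{d_y} \right\rfloor$$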
diff --git a/spaces/afffffdf/QSign/unidbg-fetch-qsign/bin/unidbg-fetch-qsign.bat b/spaces/afffffdf/QSign/unidbg-fetch-qsign/bin/unidbg-fetch-qsign.bat
deleted file mode 100644
index 4e44bab8aa65d16e35e935f1273de2e98ce80cf9..0000000000000000000000000000000000000000
--- a/spaces/afffffdf/QSign/unidbg-fetch-qsign/bin/unidbg-fetch-qsign.bat
+++ /dev/null
@@ -1,89 +0,0 @@
-@rem
-@rem Copyright 2015 the original author or authors.
-@rem
-@rem Licensed under the Apache License, Version 2.0 (the "License");
-@rem you may not use this file except in compliance with the License.
-@rem You may obtain a copy of the License at
-@rem
-@rem https://www.apache.org/licenses/LICENSE-2.0
-@rem
-@rem Unless required by applicable law or agreed to in writing, software
-@rem distributed under the License is distributed on an "AS IS" BASIS,
-@rem WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-@rem See the License for the specific language governing permissions and
-@rem limitations under the License.
-@rem
-
-@if "%DEBUG%" == "" @echo off
-@rem ##########################################################################
-@rem
-@rem unidbg-fetch-qsign startup script for Windows
-@rem
-@rem ##########################################################################
-
-@rem Set local scope for the variables with windows NT shell
-if "%OS%"=="Windows_NT" setlocal
-
-set DIRNAME=%~dp0
-if "%DIRNAME%" == "" set DIRNAME=.
-set APP_BASE_NAME=%~n0
-set APP_HOME=%DIRNAME%..
-
-@rem Resolve any "." and ".." in APP_HOME to make it shorter.
-for %%i in ("%APP_HOME%") do set APP_HOME=%%~fi
-
-@rem Add default JVM options here. You can also use JAVA_OPTS and UNIDBG_FETCH_QSIGN_OPTS to pass JVM options to this script.
-set DEFAULT_JVM_OPTS=
-
-@rem Find java.exe
-if defined JAVA_HOME goto findJavaFromJavaHome
-
-set JAVA_EXE=java.exe
-%JAVA_EXE% -version >NUL 2>&1
-if "%ERRORLEVEL%" == "0" goto execute
-
-echo.
-echo ERROR: JAVA_HOME is not set and no 'java' command could be found in your PATH.
-echo.
-echo Please set the JAVA_HOME variable in your environment to match the
-echo location of your Java installation.
-
-goto fail
-
-:findJavaFromJavaHome
-set JAVA_HOME=%JAVA_HOME:"=%
-set JAVA_EXE=%JAVA_HOME%/bin/java.exe
-
-if exist "%JAVA_EXE%" goto execute
-
-echo.
-echo ERROR: JAVA_HOME is set to an invalid directory: %JAVA_HOME%
-echo.
-echo Please set the JAVA_HOME variable in your environment to match the
-echo location of your Java installation.
-
-goto fail
-
-:execute
-@rem Setup the command line
-
-set CLASSPATH=%APP_HOME%\lib\unidbg-fetch-qsign-1.1.0.jar;%APP_HOME%\lib\unidbg-fix.jar;%APP_HOME%\lib\ktor-server-content-negotiation-jvm-2.3.1.jar;%APP_HOME%\lib\ktor-serialization-kotlinx-json-jvm-2.3.1.jar;%APP_HOME%\lib\ktor-server-netty-jvm-2.3.1.jar;%APP_HOME%\lib\ktor-server-host-common-jvm-2.3.1.jar;%APP_HOME%\lib\ktor-server-core-jvm-2.3.1.jar;%APP_HOME%\lib\ktor-serialization-kotlinx-jvm-2.3.1.jar;%APP_HOME%\lib\ktor-serialization-jvm-2.3.1.jar;%APP_HOME%\lib\ktor-events-jvm-2.3.1.jar;%APP_HOME%\lib\ktor-websockets-jvm-2.3.1.jar;%APP_HOME%\lib\ktor-http-cio-jvm-2.3.1.jar;%APP_HOME%\lib\ktor-http-jvm-2.3.1.jar;%APP_HOME%\lib\ktor-network-jvm-2.3.1.jar;%APP_HOME%\lib\ktor-utils-jvm-2.3.1.jar;%APP_HOME%\lib\ktor-io-jvm-2.3.1.jar;%APP_HOME%\lib\kotlin-stdlib-jdk8-1.8.22.jar;%APP_HOME%\lib\kotlinx-serialization-json-jvm-1.5.1.jar;%APP_HOME%\lib\kotlinx-serialization-protobuf-jvm-1.5.1.jar;%APP_HOME%\lib\kotlinx-serialization-core-jvm-1.5.1.jar;%APP_HOME%\lib\logback-classic-1.2.11.jar;%APP_HOME%\lib\kotlinx-coroutines-jdk8-1.7.1.jar;%APP_HOME%\lib\kotlinx-coroutines-core-jvm-1.7.1.jar;%APP_HOME%\lib\kotlin-stdlib-jdk7-1.8.22.jar;%APP_HOME%\lib\kotlin-reflect-1.8.10.jar;%APP_HOME%\lib\kotlin-stdlib-1.8.22.jar;%APP_HOME%\lib\slf4j-api-1.7.36.jar;%APP_HOME%\lib\kotlin-stdlib-common-1.8.22.jar;%APP_HOME%\lib\config-1.4.2.jar;%APP_HOME%\lib\jansi-2.4.0.jar;%APP_HOME%\lib\netty-codec-http2-4.1.92.Final.jar;%APP_HOME%\lib\alpn-api-1.1.3.v20160715.jar;%APP_HOME%\lib\netty-transport-native-kqueue-4.1.92.Final.jar;%APP_HOME%\lib\netty-transport-native-epoll-4.1.92.Final.jar;%APP_HOME%\lib\logback-core-1.2.11.jar;%APP_HOME%\lib\annotations-23.0.0.jar;%APP_HOME%\lib\netty-codec-http-4.1.92.Final.jar;%APP_HOME%\lib\netty-handler-4.1.92.Final.jar;%APP_HOME%\lib\netty-codec-4.1.92.Final.jar;%APP_HOME%\lib\netty-transport-classes-kqueue-4.1.92.Final.jar;%APP_HOME%\lib\netty-transport-classes-epoll-4.1.92.Final.jar;%APP_HOME%\lib\netty-transport-native-unix-common-4.1.92.Final.jar;%APP_HOME%\lib\netty-transport-4.1.92.Final.jar;%APP_HOME%\lib\netty-buffer-4.1.92.Final.jar;%APP_HOME%\lib\netty-resolver-4.1.92.Final.jar;%APP_HOME%\lib\netty-common-4.1.92.Final.jar
-
-
-@rem Execute unidbg-fetch-qsign
-"%JAVA_EXE%" %DEFAULT_JVM_OPTS% %JAVA_OPTS% %UNIDBG_FETCH_QSIGN_OPTS% -classpath "%CLASSPATH%" MainKt %*
-
-:end
-@rem End local scope for the variables with windows NT shell
-if "%ERRORLEVEL%"=="0" goto mainEnd
-
-:fail
-rem Set variable UNIDBG_FETCH_QSIGN_EXIT_CONSOLE if you need the _script_ return code instead of
-rem the _cmd.exe /c_ return code!
-if not "" == "%UNIDBG_FETCH_QSIGN_EXIT_CONSOLE%" exit 1
-exit /b 1
-
-:mainEnd
-if "%OS%"=="Windows_NT" endlocal
-
-:omega
diff --git a/spaces/agunes/ChatGPT4/app.py b/spaces/agunes/ChatGPT4/app.py
deleted file mode 100644
index 7e09e57ef928fd2451fd0ed1295d0994ca75d026..0000000000000000000000000000000000000000
--- a/spaces/agunes/ChatGPT4/app.py
+++ /dev/null
@@ -1,193 +0,0 @@
-import gradio as gr
-import os
-import json
-import requests
-
-#Streaming endpoint
-API_URL = "https://api.openai.com/v1/chat/completions" #os.getenv("API_URL") + "/generate_stream"
-
-#Huggingface provided GPT4 OpenAI API Key
-OPENAI_API_KEY = os.getenv("OPENAI_API_KEY")
-
-#Inference function
-def predict(system_msg, inputs, top_p, temperature, chat_counter, chatbot=[], history=[]):
-
- headers = {
- "Content-Type": "application/json",
- "Authorization": f"Bearer {OPENAI_API_KEY}"
- }
- print(f"system message is ^^ {system_msg}")
- if system_msg.strip() == '':
- initial_message = [{"role": "user", "content": f"{inputs}"},]
- multi_turn_message = []
- else:
- initial_message= [{"role": "system", "content": system_msg},
- {"role": "user", "content": f"{inputs}"},]
- multi_turn_message = [{"role": "system", "content": system_msg},]
-
- if chat_counter == 0 :
- payload = {
- "model": "gpt-4",
- "messages": initial_message ,
- "temperature" : 1.0,
- "top_p":1.0,
- "n" : 1,
- "stream": True,
- "presence_penalty":0,
- "frequency_penalty":0,
- }
- print(f"chat_counter - {chat_counter}")
- else: #if chat_counter != 0 :
- messages=multi_turn_message # Of the type of - [{"role": "system", "content": system_msg},]
- for data in chatbot:
- user = {}
- user["role"] = "user"
- user["content"] = data[0]
- assistant = {}
- assistant["role"] = "assistant"
- assistant["content"] = data[1]
- messages.append(user)
- messages.append(assistant)
- temp = {}
- temp["role"] = "user"
- temp["content"] = inputs
- messages.append(temp)
- #messages
- payload = {
- "model": "gpt-4",
- "messages": messages, # Of the type of [{"role": "user", "content": f"{inputs}"}],
- "temperature" : temperature, #1.0,
- "top_p": top_p, #1.0,
- "n" : 1,
- "stream": True,
- "presence_penalty":0,
- "frequency_penalty":0,}
-
- chat_counter+=1
-
- history.append(inputs)
- print(f"Logging : payload is - {payload}")
- # make a POST request to the API endpoint using the requests.post method, passing in stream=True
- response = requests.post(API_URL, headers=headers, json=payload, stream=True)
- print(f"Logging : response code - {response}")
- token_counter = 0
- partial_words = ""
-
- counter=0
- for chunk in response.iter_lines():
- #Skipping first chunk
- if counter == 0:
- counter+=1
- continue
- # check whether each line is non-empty
- if chunk.decode() :
- chunk = chunk.decode()
- # decode each line as response data is in bytes
- if len(chunk) > 12 and "content" in json.loads(chunk[6:])['choices'][0]['delta']:
- partial_words = partial_words + json.loads(chunk[6:])['choices'][0]["delta"]["content"]
- if token_counter == 0:
- history.append(" " + partial_words)
- else:
- history[-1] = partial_words
- chat = [(history[i], history[i + 1]) for i in range(0, len(history) - 1, 2) ] # convert to tuples of list
- token_counter+=1
- yield chat, history, chat_counter, response # resembles {chatbot: chat, state: history}
-
-#Resetting to blank
-def reset_textbox():
- return gr.update(value='')
-
-#to set a component as visible=False
-def set_visible_false():
- return gr.update(visible=False)
-
-#to set a component as visible=True
-def set_visible_true():
- return gr.update(visible=True)
-
-title = """
🔥GPT4 with ChatCompletions API +🚀Gradio-Streaming
"""
-
-#display message for themes feature
-theme_addon_msg = """
🌟 Discover Gradio Themes with this Demo, featuring v3.22.0! Gradio v3.23.0 also enables seamless Theme sharing. You can develop or modify a theme, and send it to the hub using simple theme.push_to_hub().
- 🏆Participate in Gradio's Theme Building Hackathon to exhibit your creative flair and win fabulous rewards! Join here - Gradio-Themes-Party🎨 🏆
-"""
-
-#Using info to add additional information about System message in GPT4
-system_msg_info = """A conversation could begin with a system message to gently instruct the assistant.
-System message helps set the behavior of the AI Assistant. For example, the assistant could be instructed with 'You are a helpful assistant.'"""
-
-#Modifying existing Gradio Theme
-theme = gr.themes.Soft(primary_hue="zinc", secondary_hue="green", neutral_hue="green",
- text_size=gr.themes.sizes.text_lg)
-
-with gr.Blocks(css = """#col_container { margin-left: auto; margin-right: auto;} #chatbot {height: 520px; overflow: auto;}""",
- theme=theme) as demo:
- gr.HTML(title)
-    gr.HTML("""🔥This Huggingface Gradio Demo provides you full access to GPT4 API (4096 token limit). 🎉🥳🎉You don't need any OPENAI API key🙌""")
- gr.HTML(theme_addon_msg)
-    gr.HTML('''Duplicate the Space and run securely with your OpenAI API Key''')
-
- with gr.Column(elem_id = "col_container"):
- #GPT4 API Key is provided by Huggingface
- with gr.Accordion(label="System message:", open=False):
-            system_msg = gr.Textbox(label="Instruct the AI Assistant to set its behaviour", info = system_msg_info, value="")
- accordion_msg = gr.HTML(value="🚧 To set System message you will have to refresh the app", visible=False)
- chatbot = gr.Chatbot(label='GPT4', elem_id="chatbot")
- inputs = gr.Textbox(placeholder= "Hi there!", label= "Type an input and press Enter")
- state = gr.State([])
- with gr.Row():
- with gr.Column(scale=7):
- b1 = gr.Button().style(full_width=True)
- with gr.Column(scale=3):
- server_status_code = gr.Textbox(label="Status code from OpenAI server", )
-
- #top_p, temperature
- with gr.Accordion("Parameters", open=False):
- top_p = gr.Slider( minimum=-0, maximum=1.0, value=1.0, step=0.05, interactive=True, label="Top-p (nucleus sampling)",)
- temperature = gr.Slider( minimum=-0, maximum=5.0, value=1.0, step=0.1, interactive=True, label="Temperature",)
- chat_counter = gr.Number(value=0, visible=False, precision=0)
-
- #Event handling
- inputs.submit( predict, [system_msg, inputs, top_p, temperature, chat_counter, chatbot, state], [chatbot, state, chat_counter, server_status_code],) #openai_api_key
- b1.click( predict, [system_msg, inputs, top_p, temperature, chat_counter, chatbot, state], [chatbot, state, chat_counter, server_status_code],) #openai_api_key
-
- inputs.submit(set_visible_false, [], [system_msg])
- b1.click(set_visible_false, [], [system_msg])
- inputs.submit(set_visible_true, [], [accordion_msg])
- b1.click(set_visible_true, [], [accordion_msg])
-
- b1.click(reset_textbox, [], [inputs])
- inputs.submit(reset_textbox, [], [inputs])
-
- #Examples
- with gr.Accordion(label="Examples for System message:", open=False):
- gr.Examples(
- examples = [["""You are an AI programming assistant.
-
- - Follow the user's requirements carefully and to the letter.
- - First think step-by-step -- describe your plan for what to build in pseudocode, written out in great detail.
- - Then output the code in a single code block.
- - Minimize any other prose."""], ["""You are ComedianGPT who is a helpful assistant. You answer everything with a joke and witty replies."""],
- ["You are ChefGPT, a helpful assistant who answers questions with culinary expertise and a pinch of humor."],
- ["You are FitnessGuruGPT, a fitness expert who shares workout tips and motivation with a playful twist."],
- ["You are SciFiGPT, an AI assistant who discusses science fiction topics with a blend of knowledge and wit."],
- ["You are PhilosopherGPT, a thoughtful assistant who responds to inquiries with philosophical insights and a touch of humor."],
- ["You are EcoWarriorGPT, a helpful assistant who shares environment-friendly advice with a lighthearted approach."],
- ["You are MusicMaestroGPT, a knowledgeable AI who discusses music and its history with a mix of facts and playful banter."],
- ["You are SportsFanGPT, an enthusiastic assistant who talks about sports and shares amusing anecdotes."],
- ["You are TechWhizGPT, a tech-savvy AI who can help users troubleshoot issues and answer questions with a dash of humor."],
- ["You are FashionistaGPT, an AI fashion expert who shares style advice and trends with a sprinkle of wit."],
- ["You are ArtConnoisseurGPT, an AI assistant who discusses art and its history with a blend of knowledge and playful commentary."],
- ["You are a helpful assistant that provides detailed and accurate information."],
- ["You are an assistant that speaks like Shakespeare."],
- ["You are a friendly assistant who uses casual language and humor."],
- ["You are a financial advisor who gives expert advice on investments and budgeting."],
- ["You are a health and fitness expert who provides advice on nutrition and exercise."],
- ["You are a travel consultant who offers recommendations for destinations, accommodations, and attractions."],
- ["You are a movie critic who shares insightful opinions on films and their themes."],
- ["You are a history enthusiast who loves to discuss historical events and figures."],
- ["You are a tech-savvy assistant who can help users troubleshoot issues and answer questions about gadgets and software."],
- ["You are an AI poet who can compose creative and evocative poems on any given topic."],],
- inputs = system_msg,)
-
-demo.queue(max_size=99, concurrency_count=20).launch(debug=True)
\ No newline at end of file
diff --git a/spaces/akhaliq/openlm-research-open_llama_13b/README.md b/spaces/akhaliq/openlm-research-open_llama_13b/README.md
deleted file mode 100644
index ec7b1fb4f350749f64dbe94dff5791c7cbf8f3fb..0000000000000000000000000000000000000000
--- a/spaces/akhaliq/openlm-research-open_llama_13b/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: Openlm-research-open Llama 13b
-emoji: 🚀
-colorFrom: purple
-colorTo: gray
-sdk: gradio
-sdk_version: 3.35.2
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/alamin655/Personas/styles.css b/spaces/alamin655/Personas/styles.css
deleted file mode 100644
index ea5f28e61617064df2b252f61c599ec3775cb3a5..0000000000000000000000000000000000000000
--- a/spaces/alamin655/Personas/styles.css
+++ /dev/null
@@ -1,56 +0,0 @@
-/* Style the overall app container */
-div.css-k1ih3n.egzxvld4 {
- padding: 1rem 1rem 1rem;
- display: flex;
- overflow: visible;
- flex-grow: 1; /* This allows the chat window to be anchored at the bottom */
-}
-/* Hide the streamlit injected data-iframe-height div */
-div.css-qcqlej.egzxvld3 {
- display: none;
-}
-.css-ocqkz7 {
- flex-grow: 0;
-}
-
-/* Style the app so the scrollbar is anchored to the bottom */
-section.css-k1vhr4.egzxvld5 {
- display: flex;
-}
-
-/* Style prompt_json_view_placeholder header so it is aligned. */
-div.css-k1ih3n.egzxvld4 > div:nth-child(1) > div:nth-child(1) > div:nth-child(7) > div:nth-child(1) p {
- margin-left: 8px;
-}
-
-/* Style prompt_json_view_placeholder so overflow is scrollable */
-div.css-k1ih3n.egzxvld4 > div:nth-child(1) > div:nth-child(1) > div:nth-child(7) > div:nth-child(1) > div:nth-child(1) > div:nth-child(1) > div:nth-child(2) > div:nth-child(1) > div:nth-child(2) {
- overflow-x: hidden;
- overflow-y: scroll;
- max-height: 955px;
- margin-top: 8px;
- margin-left: 8px;
-}
-
-/* Style prompt_string_placeholder so overflow is scrollable */
-div.css-k1ih3n.egzxvld4 > div:nth-child(1) > div:nth-child(1) > div:nth-child(7) > div:nth-child(2) .stCodeBlock {
- overflow-x: hidden;
- overflow-y: scroll;
- max-height: 955px;
- margin-top: 8px;
-}
-
-/* Remove "Press enter to apply" from text input */
-.stTextInput div.css-1if5ada.effi0qh1 {
- visibility: hidden;
-}
-
-/* Make markdown code wrapped */
-code.language-markdown {
- white-space: pre-wrap !important ;
-}
-
-/* Make padding smaller on st.sidebar */
-div.css-1vq4p4l.e1fqkh3o4 {
- padding-top: 2rem;
-}
\ No newline at end of file
diff --git a/spaces/aleloved02/Salesforce-codet5-large/README.md b/spaces/aleloved02/Salesforce-codet5-large/README.md
deleted file mode 100644
index 33a4af76aa96c569bdfbf50cdc8568fd5e37464a..0000000000000000000000000000000000000000
--- a/spaces/aleloved02/Salesforce-codet5-large/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: Salesforce Codet5 Large
-emoji: 📈
-colorFrom: gray
-colorTo: gray
-sdk: gradio
-sdk_version: 3.17.0
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/alex-mindspace/gpt-agents/swarmai/__init__.py b/spaces/alex-mindspace/gpt-agents/swarmai/__init__.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/allknowingroger/Image-Models-Test196/app.py b/spaces/allknowingroger/Image-Models-Test196/app.py
deleted file mode 100644
index 50eff76e0dbe1fc399444043ceb71330e6585eff..0000000000000000000000000000000000000000
--- a/spaces/allknowingroger/Image-Models-Test196/app.py
+++ /dev/null
@@ -1,144 +0,0 @@
-import gradio as gr
-# import os
-# import sys
-# from pathlib import Path
-import time
-
-models =[
- "theexcitedgirl/my-garden-flowers-nxt",
- "Danxie/lora-trained-xl-colab",
- "VHDSDK/tcoaalt2",
- "meowXin/lora-trained-xl-colab",
- "hjhjhqw/my-lion",
- "artificialguybr/TshirtDesignRedmond-V2",
- "artificialguybr/ColoringBookRedmond",
- "KyriaAnnwyn/lora-trained-NoahSanchez_baseRVsamples_long-xl",
- "theexcitedgirl/my-pet-dog",
-]
-
-
-model_functions = {}
-model_idx = 1
-for model_path in models:
- try:
- model_functions[model_idx] = gr.Interface.load(f"models/{model_path}", live=False, preprocess=True, postprocess=False)
- except Exception as error:
- def the_fn(txt):
- return None
- model_functions[model_idx] = gr.Interface(fn=the_fn, inputs=["text"], outputs=["image"])
- model_idx+=1
-
-
-def send_it_idx(idx):
- def send_it_fn(prompt):
-        output = (model_functions.get(idx) or model_functions.get(1))(prompt)
- return output
- return send_it_fn
-
-def get_prompts(prompt_text):
- return prompt_text
-
-def clear_it(val):
- if int(val) != 0:
- val = 0
- else:
- val = 0
- pass
- return val
-
-def all_task_end(cnt,t_stamp):
- to = t_stamp + 60
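-    # give the runs a 60-second window after the start timestamp; once it elapses the toggle flips, which cancels any still-pending runs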
- et = time.time()
- if et > to and t_stamp != 0:
- d = gr.update(value=0)
- tog = gr.update(value=1)
- #print(f'to: {to} et: {et}')
- else:
- if cnt != 0:
- d = gr.update(value=et)
- else:
- d = gr.update(value=0)
- tog = gr.update(value=0)
- #print (f'passing: to: {to} et: {et}')
- pass
- return d, tog
-
-def all_task_start():
- print("\n\n\n\n\n\n\n")
- t = time.gmtime()
- t_stamp = time.time()
- current_time = time.strftime("%H:%M:%S", t)
- return gr.update(value=t_stamp), gr.update(value=t_stamp), gr.update(value=0)
-
-def clear_fn():
- nn = len(models)
- return tuple([None, *[None for _ in range(nn)]])
-
-
-
-with gr.Blocks(title="SD Models") as my_interface:
- with gr.Column(scale=12):
- # with gr.Row():
-        # gr.Markdown("""- Primary prompt: what you want to draw (English words, e.g. a cat; separating terms with English commas works better; click the Improve button to refine it)\n- Real prompt: the refined prompt; once it appears, click the Run button on the right to start""")
- with gr.Row():
- with gr.Row(scale=6):
- primary_prompt=gr.Textbox(label="Prompt", value="")
- # real_prompt=gr.Textbox(label="Real prompt")
- with gr.Row(scale=6):
- # improve_prompts_btn=gr.Button("Improve")
- with gr.Row():
- run=gr.Button("Run",variant="primary")
- clear_btn=gr.Button("Clear")
- with gr.Row():
- sd_outputs = {}
- model_idx = 1
- for model_path in models:
- with gr.Column(scale=3, min_width=320):
- with gr.Box():
- sd_outputs[model_idx] = gr.Image(label=model_path)
- pass
- model_idx += 1
- pass
- pass
-
- with gr.Row(visible=False):
- start_box=gr.Number(interactive=False)
- end_box=gr.Number(interactive=False)
- tog_box=gr.Textbox(value=0,interactive=False)
-
- start_box.change(
- all_task_end,
- [start_box, end_box],
- [start_box, tog_box],
- every=1,
- show_progress=False)
-
- primary_prompt.submit(all_task_start, None, [start_box, end_box, tog_box])
- run.click(all_task_start, None, [start_box, end_box, tog_box])
- runs_dict = {}
- model_idx = 1
- for model_path in models:
- runs_dict[model_idx] = run.click(model_functions[model_idx], inputs=[primary_prompt], outputs=[sd_outputs[model_idx]])
- model_idx += 1
- pass
- pass
-
- # improve_prompts_btn_clicked=improve_prompts_btn.click(
- # get_prompts,
- # inputs=[primary_prompt],
- # outputs=[primary_prompt],
- # cancels=list(runs_dict.values()))
- clear_btn.click(
- clear_fn,
- None,
- [primary_prompt, *list(sd_outputs.values())],
- cancels=[*list(runs_dict.values())])
- tog_box.change(
- clear_it,
- tog_box,
- tog_box,
- cancels=[*list(runs_dict.values())])
-
-my_interface.queue(concurrency_count=600, status_update_rate=1)
-my_interface.launch(inline=True, show_api=False)
-
\ No newline at end of file
diff --git a/spaces/andreped/AeroPath/demo/src/convert.py b/spaces/andreped/AeroPath/demo/src/convert.py
deleted file mode 100644
index d4b97da96035bdd6ce953f5f0e0ebddd03d42ad7..0000000000000000000000000000000000000000
--- a/spaces/andreped/AeroPath/demo/src/convert.py
+++ /dev/null
@@ -1,35 +0,0 @@
-import nibabel as nib
-from nibabel.processing import resample_to_output
-from skimage.measure import marching_cubes
-
-
-def nifti_to_obj(path, output="prediction.obj"):
- # load NIFTI into numpy array
- image = nib.load(path)
- resampled = resample_to_output(image, [1, 1, 1], order=1)
- data = resampled.get_fdata().astype("uint8")
-
- # Create a material with a red diffuse color (RGB value)
- red_material = "newmtl RedMaterial\nKd 1 0 0" # Red diffuse color (RGB)
-
- # extract surface
- verts, faces, normals, values = marching_cubes(data, 0)
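-    # marching_cubes returns 0-based vertex indices; OBJ faces are 1-based, hence the offset below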
- faces += 1
-
- with open(output, "w") as thefile:
- # Write the material definition to the OBJ file
- thefile.write(red_material + "\n")
-
- for item in verts:
- # thefile.write('usemtl RedMaterial\n')
- thefile.write("v {0} {1} {2}\n".format(item[0], item[1], item[2]))
-
- for item in normals:
- thefile.write("vn {0} {1} {2}\n".format(item[0], item[1], item[2]))
-
- for item in faces:
- thefile.write(
- "f {0}//{0} {1}//{1} {2}//{2}\n".format(
- item[0], item[1], item[2]
- )
- )
diff --git a/spaces/animeartstudio/AnimeArtmodels2/app.py b/spaces/animeartstudio/AnimeArtmodels2/app.py
deleted file mode 100644
index 6370af2203740a7ca018e2e68c35ea71ad7b68aa..0000000000000000000000000000000000000000
--- a/spaces/animeartstudio/AnimeArtmodels2/app.py
+++ /dev/null
@@ -1,216 +0,0 @@
-import gradio as gr
-import os
-import sys
-from pathlib import Path
-
-models = [
- {"name": "Anything 5.0", "url": "stablediffusionapi/anything-v5"},
- {"name": "Counterfeit 2.0", "url": "gsdf/Counterfeit-V2.0"},
- {"name": "Counterfeit 3.0", "url": "stablediffusionapi/counterfeit-v30"},
- {"name": "Dreamlike Anime", "url": "dreamlike-art/dreamlike-anime-1.0"},
- {"name": "MeinaMix 7", "url": "Nacholmo/meinamixv7-diffusers"},
- {"name": "Openjourney 4", "url": "prompthero/openjourney-v4"},
- {"name": "OpenNiji", "url": "Korakoe/OpenNiji"},
- {"name": "Pastel Mix", "url": "andite/pastel-mix"},
- {"name": "Picasso Diffusion 1.1", "url": "aipicasso/picasso-diffusion-1-1"},
- {"name": "Rev Anim", "url": "stablediffusionapi/rev-anim"},
- {"name": "TMND mix", "url": "stablediffusionapi/tmnd-mix"},
- {"name": "Waifu Diffusion", "url": "hakurei/waifu-diffusion"},
- {"name": "-------- TOP MODELS -------", "url": "WarriorMama777/AbyssOrangeMix"},
- {"name": "Abyss Orange Mix 2", "url": "WarriorMama777/AbyssOrangeMix2"},
- {"name": "Anything 3.0", "url": "Linaqruf/anything-v3.0"},
- {"name": "Anything 3.1", "url": "cag/anything-v3-1"},
- {"name": "Anything 3X", "url": "iZELX1/Anything-V3-X"},
- {"name": "Anything 4.0", "url": "andite/anything-v4.0"},
- {"name": "Anything 5.0", "url": "stablediffusionapi/anything-v5"},
- {"name": "Chillout App Factory","url": "stablediffusionapi/chillout-app-factory"},
- {"name": "Classic Anime", "url": "nitrosocke/classic-anim-diffusion"},
- {"name": "Cool Japan Diffusion 2.1.2", "url": "aipicasso/cool-japan-diffusion-2-1-2"},
- {"name": "Counterfeit 2.0", "url": "gsdf/Counterfeit-V2.0"},
- {"name": "Counterfeit 3.0", "url": "stablediffusionapi/counterfeit-v30"},
- {"name": "CyberPunk Anime", "url": "DGSpitzer/Cyberpunk-Anime-Diffusion"},
- {"name": "Dark Sushi Mix", "url": "stablediffusionapi/dark-sushi-mix"},
- {"name": "Dreamlike Anime", "url": "dreamlike-art/dreamlike-anime-1.0"},
- {"name": "Eimis Anime Diffusion", "url": "eimiss/EimisAnimeDiffusion_1.0v"},
- {"name": "Ghibli Diffusion", "url": "nitrosocke/Ghibli-Diffusion"},
- {"name": "GrapeFruit", "url": "iZELX1/Grapefruit"},
- {"name": "GuoFeng 3", "url": "xiaolxl/GuoFeng3"},
- {"name": "Meina Pastel", "url": "stablediffusionapi/meinapastel"},
- {"name": "MeinaMix 7", "url": "Nacholmo/meinamixv7-diffusers"},
- {"name": "Openjourney 4", "url": "prompthero/openjourney-v4"},
- {"name": "OpenNiji", "url": "Korakoe/OpenNiji"},
- {"name": "Pastel Mix", "url": "andite/pastel-mix"},
- {"name": "Picasso Diffusion 1.1", "url": "aipicasso/picasso-diffusion-1-1"},
- {"name": "Protogen 2.2", "url": "darkstorm2150/Protogen_v2.2_Official_Release"},
- {"name": "Protogen Infinity", "url": "darkstorm2150/Protogen_Infinity_Official_Release"},
- {"name": "Protogen X 3.4", "url": "darkstorm2150/Protogen_x3.4_Official_Release"},
- {"name": "Rev Anim", "url": "stablediffusionapi/rev-anim"},
- {"name": "TMND mix", "url": "stablediffusionapi/tmnd-mix"},
- {"name": "Waifu Diffusion", "url": "hakurei/waifu-diffusion"},
- {"name": "-------- ALL ANIME MODELS -------", "url": "WarriorMama777/AbyssOrangeMix"},
- {"name": "7 Pa", "url": "AIARTCHAN/7pa"},
- {"name": "A Certain Model", "url": "JosephusCheung/ACertainModel"},
- {"name": "A Certain Thing", "url": "JosephusCheung/ACertainThing"},
- {"name": "A Certainity", "url": "JosephusCheung/ACertainty"},
- {"name": "Abyss Hell Hero", "url": "AIARTCHAN/AbyssHellHero"},
- {"name": "Abyss Maple 3", "url": "AIARTCHAN/AbyssMapleVer3"},
- {"name": "Abyss Orange Mix 2", "url": "WarriorMama777/AbyssOrangeMix2"},
- {"name": "Abyss Orange Mix", "url": "WarriorMama777/AbyssOrangeMix"},
- {"name": "AbyssHell 3", "url": "AIARTCHAN/AbyssHellVer3"},
- {"name": "All 526 Animated", "url": "stablediffusionapi/all-526-animated"},
- {"name": "Anidosmix 3", "url": "AIARTCHAN/anidosmixV2"},
- {"name": "Anime Kawai Diffusion", "url": "Ojimi/anime-kawai-diffusion"},
- {"name": "Anireal 3D V2", "url": "circulus/sd-anireal-3d-v2"},
- {"name": "AnyLORA", "url": "kubanemil/AnyLORA"},
- {"name": "Anything 2.1", "url": "swl-models/anything-v2.1"},
- {"name": "Anything 3.0 Light", "url": "mm00/anything-v3.0-light"},
- {"name": "Anything 3.0", "url": "Linaqruf/anything-v3.0"},
- {"name": "Anything 3.1", "url": "cag/anything-v3-1"},
- {"name": "Anything 3X", "url": "iZELX1/Anything-V3-X"},
- {"name": "Anything 4.0", "url": "andite/anything-v4.0"},
- {"name": "Anything 5.0", "url": "stablediffusionapi/anything-v5"},
- {"name": "Anything Else 4", "url": "stablediffusionapi/anythingelse-v4"},
- {"name": "Anything Else 5", "url": "stablediffusionapi/anything-v5"},
- {"name": "Arcane Diffusion", "url": "nitrosocke/Arcane-Diffusion"},
- {"name": "Archer Diffusion", "url": "nitrosocke/archer-diffusion"},
- {"name": "Asian Mix", "url": "D1b4l4p/AsianMix"},
- {"name": "Blood Orange Mix", "url": "WarriorMama777/BloodOrangeMix"},
- {"name": "CamelliaMix 2.5D","url": "stablediffusionapi/camelliamix25d"},
- {"name": "CamelliaMix Line","url": "stablediffusionapi/camelliamixline"},
- {"name": "CamelliaMix","url": "Powidl43/CamelliaMix"},
- {"name": "Cetusmix", "url": "stablediffusionapi/cetusmix"},
- {"name": "Chik Mix", "url": "stablediffusionapi/chikmix"},
- {"name": "Chikmix", "url": "stablediffusionapi/chikmix"},
- {"name": "Chillout App Factory","url": "stablediffusionapi/chillout-app-factory"},
- {"name": "Classic Anime", "url": "nitrosocke/classic-anim-diffusion"},
- {"name": "Cool Japan Diffusion 2.1.2", "url": "aipicasso/cool-japan-diffusion-2-1-2"},
- {"name": "Cosmic Babes", "url": "stablediffusionapi/cosmic-babes"},
- {"name": "Counterfeit 1.0", "url": "gsdf/counterfeit-v1.0"},
- {"name": "Counterfeit 2", "url": "gsdf/Counterfeit-V2.0"},
- {"name": "Counterfeit 2.0", "url": "gsdf/Counterfeit-V2.0"},
- {"name": "Counterfeit 3.0", "url": "stablediffusionapi/counterfeit-v30"},
- {"name": "CuteSexyRobutts", "url": "andite/cutesexyrobutts-diffusion"},
- {"name": "CyberPunk Anime", "url": "DGSpitzer/Cyberpunk-Anime-Diffusion"},
- {"name": "Dark Sushi Mix", "url": "stablediffusionapi/dark-sushi-mix"},
- {"name": "Dash Sushi 25d", "url": "stablediffusionapi/dark-sushi-25d"},
- {"name": "Dreamlike Anime", "url": "dreamlike-art/dreamlike-anime-1.0"},
- {"name": "DucHaiten Anime", "url": "DucHaiten/DucHaitenAnime"},
- {"name": "Eerie Orange Mix", "url": "WarriorMama777/EerieOrangeMix"},
- {"name": "Eimis Anime Diffusion", "url": "eimiss/EimisAnimeDiffusion_1.0v"},
- {"name": "Ghibli Diffusion", "url": "nitrosocke/Ghibli-Diffusion"},
- {"name": "GrapeFruit", "url": "iZELX1/Grapefruit"},
- {"name": "GuoFeng 3", "url": "xiaolxl/GuoFeng3"},
- {"name": "Guweiz Diffusion", "url": "andite/guweiz-diffusion"},
- {"name": "Hiten Diffusion", "url": "andite/hiten-diffusion"},
- {"name": "Icomix 2", "url": "stablediffusionapi/icomix-2"},
- {"name": "InkPunk Diffusion", "url": "Envvi/Inkpunk-Diffusion"},
- {"name": "Mama Orange Mixs", "url": "WarriorMama777/OrangeMixs"},
- {"name": "Mashuu Diffusion", "url": "andite/mashuu-diffusion"},
- {"name": "Meina Alter", "url": "stablediffusionapi/meinaalter"},
- {"name": "Meina Pastel", "url": "stablediffusionapi/meinapastel"},
- {"name": "MeinaMix 7", "url": "Nacholmo/meinamixv7-diffusers"},
- {"name": "Mignon Diffusion", "url": "andite/mignon-diffusion"},
- {"name": "MikaPikazo Diffusion", "url": "andite/mikapikazo-diffusion"},
- {"name": "Mikapikazo", "url": "andite/mikapikazo-diffusion"},
- {"name": "Mix Pro V4", "url": "AIARTCHAN/MIX-Pro-V4"},
- {"name": "NeverEnding-Dream", "url": "Lykon/NeverEnding-Dream"},
- {"name": "Openjourney 4", "url": "prompthero/openjourney-v4"},
- {"name": "OpenNiji", "url": "Korakoe/OpenNiji"},
- {"name": "Pastel Mix", "url": "andite/pastel-mix"},
- {"name": "Picasso Diffusion 1.1", "url": "aipicasso/picasso-diffusion-1-1"},
- {"name": "Piromizu Diffusion", "url": "andite/piromizu-diffusion"},
- {"name": "Protogen 2.2", "url": "darkstorm2150/Protogen_v2.2_Official_Release"},
- {"name": "Protogen Infinity", "url": "darkstorm2150/Protogen_Infinity_Official_Release"},
- {"name": "Protogen X 3.4", "url": "darkstorm2150/Protogen_x3.4_Official_Release"},
- {"name": "Rev Anim", "url": "stablediffusionapi/rev-anim"},
- {"name": "Rev Animated", "url": "coreml/coreml-ReV-Animated"},
- {"name": "Rev Animated", "url": "LottePeisch/RevAnimated-Diffusers"},
- {"name": "Something V 2.2","url": "NoCrypt/SomethingV2_2"},
- {"name": "Something V2","url": "NoCrypt/SomethingV2"},
- {"name": "Three Delicacy", "url": "stablediffusionapi/three-delicacy"},
- {"name": "Three Delicacy wonto", "url": "stablediffusionapi/three-delicacy-wonto"},
- {"name": "TMND mix", "url": "stablediffusionapi/tmnd-mix"},
- {"name": "Waifu Diffusion", "url": "hakurei/waifu-diffusion"}
-]
-
-current_model = models[0]
-
-text_gen = gr.Interface.load("spaces/daspartho/prompt-extend")
-
-models2 = []
-for model in models:
- model_url = f"models/{model['url']}"
- loaded_model = gr.Interface.load(model_url, live=True, preprocess=True)
- models2.append(loaded_model)
-
-
-def text_it(inputs, text_gen=text_gen):
- return text_gen(inputs)
-
-
-def set_model(current_model_index):
- global current_model
- current_model = models[current_model_index]
- return gr.update(value=f"{current_model['name']}")
-
-
-def send_it(inputs, model_choice):
- proc = models2[model_choice]
- return proc(inputs)
-
-
-with gr.Blocks() as myface:
- gr.HTML(
-
- )
-
- with gr.Row():
- with gr.Row():
- input_text = gr.Textbox(label="Prompt idea", placeholder="Eg. Cyberpunk anime princess", lines=1)
- # Model selection dropdown
- model_name1 = gr.Dropdown(
- label="Choose Model",
- choices=[m["name"] for m in models],
- type="index",
- value=current_model["name"],
- interactive=True,
- )
- with gr.Row():
- see_prompts = gr.Button("Generate Prompts")
- run = gr.Button("Generate Images", variant="primary")
-
- with gr.Row():
- output1 = gr.Image(label="")
- output2 = gr.Image(label="")
- output3 = gr.Image(label="")
- with gr.Row():
- magic1 = gr.Textbox(label="Generated Prompt", lines=2)
- magic2 = gr.Textbox(label="Generated Prompt", lines=2)
- magic3 = gr.Textbox(label="Generated Prompt", lines=2)
- with gr.Row():
- output4 = gr.Image(label="")
- output5 = gr.Image(label="")
- output6 = gr.Image(label="")
- with gr.Row():
- magic4 = gr.Textbox(label="Generated Prompt", lines=2)
- magic5 = gr.Textbox(label="Generated Prompt", lines=2)
- magic6 = gr.Textbox(label="Generated Prompt", lines=2)
-
- model_name1.change(set_model, inputs=model_name1, outputs=[output1, output2, output3, output4, output5, output6])
-
- run.click(send_it, inputs=[magic1, model_name1], outputs=[output1])
- run.click(send_it, inputs=[magic2, model_name1], outputs=[output2])
- run.click(send_it, inputs=[magic3, model_name1], outputs=[output3])
- run.click(send_it, inputs=[magic4, model_name1], outputs=[output4])
- run.click(send_it, inputs=[magic5, model_name1], outputs=[output5])
- run.click(send_it, inputs=[magic6, model_name1], outputs=[output6])
-
- see_prompts.click(text_it, inputs=[input_text], outputs=[magic1])
- see_prompts.click(text_it, inputs=[input_text], outputs=[magic2])
- see_prompts.click(text_it, inputs=[input_text], outputs=[magic3])
- see_prompts.click(text_it, inputs=[input_text], outputs=[magic4])
- see_prompts.click(text_it, inputs=[input_text], outputs=[magic5])
- see_prompts.click(text_it, inputs=[input_text], outputs=[magic6])
-
-myface.queue(concurrency_count=200)
-myface.launch(inline=True, show_api=False, max_threads=400)
\ No newline at end of file
diff --git a/spaces/antonovmaxim/text-generation-webui-space/docs/README.md b/spaces/antonovmaxim/text-generation-webui-space/docs/README.md
deleted file mode 100644
index 65dadd7cfc906247d9c6995896ba6a144a0d31c1..0000000000000000000000000000000000000000
--- a/spaces/antonovmaxim/text-generation-webui-space/docs/README.md
+++ /dev/null
@@ -1,19 +0,0 @@
-# text-generation-webui documentation
-
-## Table of contents
-
-* [GPTQ models (4 bit mode)](GPTQ-models-(4-bit-mode).md)
-* [LLaMA model](LLaMA-model.md)
-* [Using LoRAs](Using-LoRAs.md)
-* [llama.cpp models](llama.cpp-models.md)
-* [RWKV model](RWKV-model.md)
-* [Extensions](Extensions.md)
-* [Chat mode](Chat-mode.md)
-* [DeepSpeed](DeepSpeed.md)
-* [FlexGen](FlexGen.md)
-* [Spell book](Spell-book.md)
-* [Low-VRAM-guide](Low-VRAM-guide.md)
-* [System requirements](System-requirements.md)
-* [Windows installation guide](Windows-installation-guide.md)
-* [WSL installation guide](WSL-installation-guide.md)
-* [Docker Compose](Docker.md)
diff --git a/spaces/aodianyun/stable-diffusion-webui/modules/sd_vae_approx.py b/spaces/aodianyun/stable-diffusion-webui/modules/sd_vae_approx.py
deleted file mode 100644
index ea4c4a3a72941c31a654a29ce90cf8d9c82ce674..0000000000000000000000000000000000000000
--- a/spaces/aodianyun/stable-diffusion-webui/modules/sd_vae_approx.py
+++ /dev/null
@@ -1,58 +0,0 @@
-import os
-
-import torch
-from torch import nn
-from modules import devices, paths
-
-sd_vae_approx_model = None
-
-
-class VAEApprox(nn.Module):
- def __init__(self):
- super(VAEApprox, self).__init__()
- self.conv1 = nn.Conv2d(4, 8, (7, 7))
- self.conv2 = nn.Conv2d(8, 16, (5, 5))
- self.conv3 = nn.Conv2d(16, 32, (3, 3))
- self.conv4 = nn.Conv2d(32, 64, (3, 3))
- self.conv5 = nn.Conv2d(64, 32, (3, 3))
- self.conv6 = nn.Conv2d(32, 16, (3, 3))
- self.conv7 = nn.Conv2d(16, 8, (3, 3))
- self.conv8 = nn.Conv2d(8, 3, (3, 3))
-
- def forward(self, x):
- extra = 11
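-        # the eight valid (unpadded) convs below trim 11 px per side in total, so pre-pad by 11 to keep the 2x-upscaled size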
- x = nn.functional.interpolate(x, (x.shape[2] * 2, x.shape[3] * 2))
- x = nn.functional.pad(x, (extra, extra, extra, extra))
-
- for layer in [self.conv1, self.conv2, self.conv3, self.conv4, self.conv5, self.conv6, self.conv7, self.conv8, ]:
- x = layer(x)
- x = nn.functional.leaky_relu(x, 0.1)
-
- return x
-
-
-def model():
- global sd_vae_approx_model
-
- if sd_vae_approx_model is None:
- sd_vae_approx_model = VAEApprox()
- sd_vae_approx_model.load_state_dict(torch.load(os.path.join(paths.models_path, "VAE-approx", "model.pt"), map_location='cpu' if devices.device.type != 'cuda' else None))
- sd_vae_approx_model.eval()
- sd_vae_approx_model.to(devices.device, devices.dtype)
-
- return sd_vae_approx_model
-
-
-def cheap_approximation(sample):
- # https://discuss.huggingface.co/t/decoding-latents-to-rgb-without-upscaling/23204/2
-
- coefs = torch.tensor([
- [0.298, 0.207, 0.208],
- [0.187, 0.286, 0.173],
- [-0.158, 0.189, 0.264],
- [-0.184, -0.271, -0.473],
- ]).to(sample.device)
-
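-    # einsum "lxy,lr -> rxy" projects the 4 latent channels (l) onto 3 RGB channels (r) at every pixel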
- x_sample = torch.einsum("lxy,lr -> rxy", sample, coefs)
-
- return x_sample
diff --git a/spaces/arbml/Ashaar/poetry_diacritizer/modules/layers.py b/spaces/arbml/Ashaar/poetry_diacritizer/modules/layers.py
deleted file mode 100644
index 64d7d68f5d3a7d58c2615939220168a94bbd4475..0000000000000000000000000000000000000000
--- a/spaces/arbml/Ashaar/poetry_diacritizer/modules/layers.py
+++ /dev/null
@@ -1,70 +0,0 @@
-import torch
-from torch import nn
-from typing import Any
-
-
-class BatchNormConv1d(nn.Module):
- """
-    An nn.Conv1d followed by an optional activation function and an nn.BatchNorm1d
- """
-
- def __init__(
- self,
- in_dim: int,
- out_dim: int,
- kernel_size: int,
- stride: int,
- padding: int,
- activation: Any = None,
- ):
- super().__init__()
- self.conv1d = nn.Conv1d(
- in_dim,
- out_dim,
- kernel_size=kernel_size,
- stride=stride,
- padding=padding,
- bias=False,
- )
- self.bn = nn.BatchNorm1d(out_dim)
- self.activation = activation
-
- def forward(self, x: Any):
- x = self.conv1d(x)
- if self.activation is not None:
- x = self.activation(x)
- return self.bn(x)
-
-
-class LinearNorm(torch.nn.Module):
- def __init__(self, in_dim, out_dim, bias=True, w_init_gain='linear'):
- super().__init__()
- self.linear_layer = torch.nn.Linear(in_dim, out_dim, bias=bias)
-
- torch.nn.init.xavier_uniform_(
- self.linear_layer.weight,
- gain=torch.nn.init.calculate_gain(w_init_gain))
-
- def forward(self, x):
- return self.linear_layer(x)
-
-
-class ConvNorm(torch.nn.Module):
- def __init__(self, in_channels, out_channels, kernel_size=1, stride=1,
- padding=None, dilation=1, bias=True, w_init_gain='linear'):
- super().__init__()
- if padding is None:
- assert(kernel_size % 2 == 1)
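-            # "same" padding for an odd kernel size, taking dilation into account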
- padding = int(dilation * (kernel_size - 1) / 2)
-
- self.conv = torch.nn.Conv1d(in_channels, out_channels,
- kernel_size=kernel_size, stride=stride,
- padding=padding, dilation=dilation,
- bias=bias)
-
- torch.nn.init.xavier_uniform_(
- self.conv.weight, gain=torch.nn.init.calculate_gain(w_init_gain))
-
- def forward(self, signal):
- conv_signal = self.conv(signal)
- return conv_signal
diff --git a/spaces/arnavkartikeya/SCRIPture-final/transform/randaugment.py b/spaces/arnavkartikeya/SCRIPture-final/transform/randaugment.py
deleted file mode 100644
index 094d9f4cacc93146d2bab7311d9dc04feb07032c..0000000000000000000000000000000000000000
--- a/spaces/arnavkartikeya/SCRIPture-final/transform/randaugment.py
+++ /dev/null
@@ -1,340 +0,0 @@
-import cv2
-import numpy as np
-
-
-## aug functions
-def identity_func(img):
- return img
-
-
-def autocontrast_func(img, cutoff=0):
- '''
- same output as PIL.ImageOps.autocontrast
- '''
- n_bins = 256
-
- def tune_channel(ch):
- n = ch.size
- cut = cutoff * n // 100
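-        # ignore `cut` pixels at each tail of the histogram before stretching the remaining range to 0..255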
- if cut == 0:
- high, low = ch.max(), ch.min()
- else:
- hist = cv2.calcHist([ch], [0], None, [n_bins], [0, n_bins])
- low = np.argwhere(np.cumsum(hist) > cut)
- low = 0 if low.shape[0] == 0 else low[0]
- high = np.argwhere(np.cumsum(hist[::-1]) > cut)
- high = n_bins - 1 if high.shape[0] == 0 else n_bins - 1 - high[0]
- if high <= low:
- table = np.arange(n_bins)
- else:
- scale = (n_bins - 1) / (high - low)
- offset = -low * scale
- table = np.arange(n_bins) * scale + offset
- table[table < 0] = 0
- table[table > n_bins - 1] = n_bins - 1
- table = table.clip(0, 255).astype(np.uint8)
- return table[ch]
-
- channels = [tune_channel(ch) for ch in cv2.split(img)]
- out = cv2.merge(channels)
- return out
-
-
-def equalize_func(img):
- '''
- same output as PIL.ImageOps.equalize
- PIL's implementation is different from cv2.equalize
- '''
- n_bins = 256
-
- def tune_channel(ch):
- hist = cv2.calcHist([ch], [0], None, [n_bins], [0, n_bins])
- non_zero_hist = hist[hist != 0].reshape(-1)
- step = np.sum(non_zero_hist[:-1]) // (n_bins - 1)
- if step == 0: return ch
- n = np.empty_like(hist)
- n[0] = step // 2
- n[1:] = hist[:-1]
- table = (np.cumsum(n) // step).clip(0, 255).astype(np.uint8)
- return table[ch]
-
- channels = [tune_channel(ch) for ch in cv2.split(img)]
- out = cv2.merge(channels)
- return out
-
-
-def rotate_func(img, degree, fill=(0, 0, 0)):
- '''
- like PIL, rotate by degree, not radians
- '''
- H, W = img.shape[0], img.shape[1]
- center = W / 2, H / 2
- M = cv2.getRotationMatrix2D(center, degree, 1)
- out = cv2.warpAffine(img, M, (W, H), borderValue=fill)
- return out
-
-
-def solarize_func(img, thresh=128):
- '''
-    same output as PIL.ImageOps.solarize
- '''
- table = np.array([el if el < thresh else 255 - el for el in range(256)])
- table = table.clip(0, 255).astype(np.uint8)
- out = table[img]
- return out
-
-
-def color_func(img, factor):
- '''
- same output as PIL.ImageEnhance.Color
- '''
- ## implementation according to PIL definition, quite slow
- # degenerate = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)[:, :, np.newaxis]
- # out = blend(degenerate, img, factor)
- # M = (
- # np.eye(3) * factor
- # + np.float32([0.114, 0.587, 0.299]).reshape(3, 1) * (1. - factor)
- # )[np.newaxis, np.newaxis, :]
- M = (
- np.float32([
- [0.886, -0.114, -0.114],
- [-0.587, 0.413, -0.587],
- [-0.299, -0.299, 0.701]]) * factor
- + np.float32([[0.114], [0.587], [0.299]])
- )
- out = np.matmul(img, M).clip(0, 255).astype(np.uint8)
- return out
-
-
-def contrast_func(img, factor):
- """
- same output as PIL.ImageEnhance.Contrast
- """
- mean = np.sum(np.mean(img, axis=(0, 1)) * np.array([0.114, 0.587, 0.299]))
- table = np.array([(
- el - mean) * factor + mean
- for el in range(256)
- ]).clip(0, 255).astype(np.uint8)
- out = table[img]
- return out
-
-
-def brightness_func(img, factor):
- '''
-    same output as PIL.ImageEnhance.Brightness
- '''
- table = (np.arange(256, dtype=np.float32) * factor).clip(0, 255).astype(np.uint8)
- out = table[img]
- return out
-
-
-def sharpness_func(img, factor):
- '''
-    The differences between this result and PIL's are all on the 4 boundaries; the center
-    areas are the same
- '''
- kernel = np.ones((3, 3), dtype=np.float32)
- kernel[1][1] = 5
- kernel /= 13
- degenerate = cv2.filter2D(img, -1, kernel)
- if factor == 0.0:
- out = degenerate
- elif factor == 1.0:
- out = img
- else:
- out = img.astype(np.float32)
- degenerate = degenerate.astype(np.float32)[1:-1, 1:-1, :]
- out[1:-1, 1:-1, :] = degenerate + factor * (out[1:-1, 1:-1, :] - degenerate)
- out = out.astype(np.uint8)
- return out
-
-
-def shear_x_func(img, factor, fill=(0, 0, 0)):
- H, W = img.shape[0], img.shape[1]
- M = np.float32([[1, factor, 0], [0, 1, 0]])
- out = cv2.warpAffine(img, M, (W, H), borderValue=fill, flags=cv2.INTER_LINEAR).astype(np.uint8)
- return out
-
-
-def translate_x_func(img, offset, fill=(0, 0, 0)):
- '''
- same output as PIL.Image.transform
- '''
- H, W = img.shape[0], img.shape[1]
- M = np.float32([[1, 0, -offset], [0, 1, 0]])
- out = cv2.warpAffine(img, M, (W, H), borderValue=fill, flags=cv2.INTER_LINEAR).astype(np.uint8)
- return out
-
-
-def translate_y_func(img, offset, fill=(0, 0, 0)):
- '''
- same output as PIL.Image.transform
- '''
- H, W = img.shape[0], img.shape[1]
- M = np.float32([[1, 0, 0], [0, 1, -offset]])
- out = cv2.warpAffine(img, M, (W, H), borderValue=fill, flags=cv2.INTER_LINEAR).astype(np.uint8)
- return out
-
-
-def posterize_func(img, bits):
- '''
- same output as PIL.ImageOps.posterize
- '''
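-    # keep only the top `bits` bits of each channel by masking with 255 << (8 - bits)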
- out = np.bitwise_and(img, np.uint8(255 << (8 - bits)))
- return out
-
-
-def shear_y_func(img, factor, fill=(0, 0, 0)):
- H, W = img.shape[0], img.shape[1]
- M = np.float32([[1, 0, 0], [factor, 1, 0]])
- out = cv2.warpAffine(img, M, (W, H), borderValue=fill, flags=cv2.INTER_LINEAR).astype(np.uint8)
- return out
-
-
-def cutout_func(img, pad_size, replace=(0, 0, 0)):
- replace = np.array(replace, dtype=np.uint8)
- H, W = img.shape[0], img.shape[1]
- rh, rw = np.random.random(2)
- pad_size = pad_size // 2
- ch, cw = int(rh * H), int(rw * W)
- x1, x2 = max(ch - pad_size, 0), min(ch + pad_size, H)
- y1, y2 = max(cw - pad_size, 0), min(cw + pad_size, W)
- out = img.copy()
- out[x1:x2, y1:y2, :] = replace
- return out
-
-
-### level to args
-def enhance_level_to_args(MAX_LEVEL):
- def level_to_args(level):
- return ((level / MAX_LEVEL) * 1.8 + 0.1,)
- return level_to_args
-
-
-def shear_level_to_args(MAX_LEVEL, replace_value):
- def level_to_args(level):
- level = (level / MAX_LEVEL) * 0.3
- if np.random.random() > 0.5: level = -level
- return (level, replace_value)
-
- return level_to_args
-
-
-def translate_level_to_args(translate_const, MAX_LEVEL, replace_value):
- def level_to_args(level):
- level = (level / MAX_LEVEL) * float(translate_const)
- if np.random.random() > 0.5: level = -level
- return (level, replace_value)
-
- return level_to_args
-
-
-def cutout_level_to_args(cutout_const, MAX_LEVEL, replace_value):
- def level_to_args(level):
- level = int((level / MAX_LEVEL) * cutout_const)
- return (level, replace_value)
-
- return level_to_args
-
-
-def solarize_level_to_args(MAX_LEVEL):
- def level_to_args(level):
- level = int((level / MAX_LEVEL) * 256)
- return (level, )
- return level_to_args
-
-
-def none_level_to_args(level):
- return ()
-
-
-def posterize_level_to_args(MAX_LEVEL):
- def level_to_args(level):
- level = int((level / MAX_LEVEL) * 4)
- return (level, )
- return level_to_args
-
-
-def rotate_level_to_args(MAX_LEVEL, replace_value):
- def level_to_args(level):
- level = (level / MAX_LEVEL) * 30
- if np.random.random() < 0.5:
- level = -level
- return (level, replace_value)
-
- return level_to_args
-
-
-func_dict = {
- 'Identity': identity_func,
- 'AutoContrast': autocontrast_func,
- 'Equalize': equalize_func,
- 'Rotate': rotate_func,
- 'Solarize': solarize_func,
- 'Color': color_func,
- 'Contrast': contrast_func,
- 'Brightness': brightness_func,
- 'Sharpness': sharpness_func,
- 'ShearX': shear_x_func,
- 'TranslateX': translate_x_func,
- 'TranslateY': translate_y_func,
- 'Posterize': posterize_func,
- 'ShearY': shear_y_func,
-}
-
-translate_const = 10
-MAX_LEVEL = 10
-replace_value = (128, 128, 128)
-arg_dict = {
- 'Identity': none_level_to_args,
- 'AutoContrast': none_level_to_args,
- 'Equalize': none_level_to_args,
- 'Rotate': rotate_level_to_args(MAX_LEVEL, replace_value),
- 'Solarize': solarize_level_to_args(MAX_LEVEL),
- 'Color': enhance_level_to_args(MAX_LEVEL),
- 'Contrast': enhance_level_to_args(MAX_LEVEL),
- 'Brightness': enhance_level_to_args(MAX_LEVEL),
- 'Sharpness': enhance_level_to_args(MAX_LEVEL),
- 'ShearX': shear_level_to_args(MAX_LEVEL, replace_value),
- 'TranslateX': translate_level_to_args(
- translate_const, MAX_LEVEL, replace_value
- ),
- 'TranslateY': translate_level_to_args(
- translate_const, MAX_LEVEL, replace_value
- ),
- 'Posterize': posterize_level_to_args(MAX_LEVEL),
- 'ShearY': shear_level_to_args(MAX_LEVEL, replace_value),
-}
-
-
-class RandomAugment(object):
-
- def __init__(self, N=2, M=10, isPIL=False, augs=[]):
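-        # N: number of ops sampled per call, M: magnitude level (0..MAX_LEVEL) passed to each op, applied with probability 0.5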
- self.N = N
- self.M = M
- self.isPIL = isPIL
- if augs:
- self.augs = augs
- else:
- self.augs = list(arg_dict.keys())
-
- def get_random_ops(self):
- sampled_ops = np.random.choice(self.augs, self.N)
- return [(op, 0.5, self.M) for op in sampled_ops]
-
- def __call__(self, img):
- if self.isPIL:
- img = np.array(img)
- ops = self.get_random_ops()
- for name, prob, level in ops:
- if np.random.random() > prob:
- continue
- args = arg_dict[name](level)
- img = func_dict[name](img, *args)
- return img
-
-
-if __name__ == '__main__':
- a = RandomAugment()
- img = np.random.randn(32, 32, 3)
- a(img)
\ No newline at end of file
diff --git a/spaces/artificialguybr/OPENHERMES-V2.5-DEMO/app.py b/spaces/artificialguybr/OPENHERMES-V2.5-DEMO/app.py
deleted file mode 100644
index 7cec91212a2384e8968c46c64be4143e3b557be8..0000000000000000000000000000000000000000
--- a/spaces/artificialguybr/OPENHERMES-V2.5-DEMO/app.py
+++ /dev/null
@@ -1,155 +0,0 @@
-import gradio as gr
-import re
-from transformers import AutoModelForCausalLM, AutoTokenizer
-import torch
-import bitsandbytes
-import accelerate
-model_name_or_path = "teknium/OpenHermes-2.5-Mistral-7B"
-dtype = torch.bfloat16
-model = AutoModelForCausalLM.from_pretrained(model_name_or_path,
- device_map="auto",
- torch_dtype=dtype,
- trust_remote_code=False,
- load_in_4bit=True,
- revision="main")
-tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True)
-
-BASE_SYSTEM_MESSAGE = "I carefully provide accurate, factual, thoughtful, nuanced answers and am brilliant at reasoning."
-
-def clear_chat(chat_history_state, chat_message):
- chat_history_state = []
- chat_message = ''
- return chat_history_state, chat_message
-
-def user(message, history):
- history = history or []
- history.append([message, ""])
- return "", history
-
-def regenerate(chatbot, chat_history_state, system_msg, max_tokens, temperature, top_p, top_k, repetition_penalty):
- print("Regenerate function called") # Debug print
-
- if not chat_history_state:
- print("Chat history is empty") # Debug print
- return chatbot, chat_history_state, ""
-
- # Remove only the last assistant's message from the chat history
- if len(chat_history_state) > 0:
- print(f"Before: {chat_history_state[-1]}") # Debug print
- chat_history_state[-1][1] = ""
- print(f"After: {chat_history_state[-1]}") # Debug print
-
- # Re-run the chat function
- new_history, _, _ = chat(chat_history_state, system_msg, max_tokens, temperature, top_p, top_k, repetition_penalty)
- print(f"New history: {new_history}") # Debug print
-
- return new_history, new_history, ""
-
-
-def chat(history, system_message, max_tokens, temperature, top_p, top_k, repetition_penalty):
- print(f"Chat function called with history: {history}")
- history = history or []
-
- # Use BASE_SYSTEM_MESSAGE if system_message is empty
- system_message_to_use = system_message if system_message.strip() else BASE_SYSTEM_MESSAGE
-
-    # The user's most recent message
- user_prompt = history[-1][0] if history else ""
- print(f"User prompt used for generation: {user_prompt}") # Debug print
-    # Prepare the input for the model (ChatML prompt format)
-    prompt_template = f'''<|im_start|>system
-{system_message_to_use.strip()}<|im_end|>
-<|im_start|>user
-{user_prompt}<|im_end|>
-<|im_start|>assistant
-'''
- input_ids = tokenizer(prompt_template, return_tensors='pt').input_ids.cuda()
-
-    # Generate the output
- output = model.generate(
- input_ids=input_ids,
- max_length=max_tokens,
- temperature=temperature,
- top_p=top_p,
- top_k=top_k,
- repetition_penalty=repetition_penalty
- )
-
-    # Decode the output
-    decoded_output = tokenizer.decode(output[0], skip_special_tokens=True)
-    assistant_response = decoded_output.split('assistant')[-1].strip()  # keep only the assistant's latest reply
- print(f"Generated assistant response: {assistant_response}") # Debug print
-    # Update the history
- if history:
- history[-1][1] += assistant_response
- else:
- history.append(["", assistant_response])
-
- print(f"Updated history: {history}")
- return history, history, ""
-
-
-start_message = ""
-
-with gr.Blocks() as demo:
- with gr.Row():
- with gr.Column():
- gr.Markdown("""
- ## OpenHermes-V2.5 Finetuned on Mistral 7B
- **Space created by [@artificialguybr](https://twitter.com/artificialguybr). Model by [@Teknium1](https://twitter.com/Teknium1). Thanks HF for GPU!**
- **OpenHermes-V2.5 is currently SOTA in some benchmarks for 7B models.**
-            **The Hermes 2 model was trained on 900,000 instructions, and surpasses all previous versions of Hermes 13B and below, and matches 70B on some benchmarks! Hermes 2 changes the game with strong multiturn chat skills, system prompt capabilities, and uses ChatML format. Its quality, diversity and scale are unmatched in the current OS LM landscape. Not only does it do well in benchmarks, but also in unmeasured capabilities, like Roleplaying, Tasks, and more.**
- """)
- with gr.Row():
- #chatbot = gr.Chatbot().style(height=500)
- chatbot = gr.Chatbot(elem_id="chatbot")
- with gr.Row():
- message = gr.Textbox(
- label="What do you want to chat about?",
- placeholder="Ask me anything.",
- lines=3,
- )
- with gr.Row():
- submit = gr.Button(value="Send message", variant="secondary", scale=1)
- clear = gr.Button(value="New topic", variant="secondary", scale=0)
- stop = gr.Button(value="Stop", variant="secondary", scale=0)
- regen_btn = gr.Button(value="Regenerate", variant="secondary", scale=0)
- with gr.Accordion("Show Model Parameters", open=False):
- with gr.Row():
- with gr.Column():
- max_tokens = gr.Slider(20, 512, label="Max Tokens", step=20, value=500)
- temperature = gr.Slider(0.0, 2.0, label="Temperature", step=0.1, value=0.7)
- top_p = gr.Slider(0.0, 1.0, label="Top P", step=0.05, value=0.95)
- top_k = gr.Slider(1, 100, label="Top K", step=1, value=40)
- repetition_penalty = gr.Slider(1.0, 2.0, label="Repetition Penalty", step=0.1, value=1.1)
-
- system_msg = gr.Textbox(
- start_message, label="System Message", interactive=True, visible=True, placeholder="System prompt. Provide instructions which you want the model to remember.", lines=5)
-
- chat_history_state = gr.State()
- clear.click(clear_chat, inputs=[chat_history_state, message], outputs=[chat_history_state, message], queue=False)
- clear.click(lambda: None, None, chatbot, queue=False)
-
- submit_click_event = submit.click(
- fn=user, inputs=[message, chat_history_state], outputs=[message, chat_history_state], queue=True
- ).then(
- fn=chat, inputs=[chat_history_state, system_msg, max_tokens, temperature, top_p, top_k, repetition_penalty], outputs=[chatbot, chat_history_state, message], queue=True
- )
-
- # Corrected the clear button click event
- clear.click(
- fn=clear_chat, inputs=[chat_history_state, message], outputs=[chat_history_state, message], queue=False
- )
-
- # Stop button remains the same
- stop.click(fn=None, inputs=None, outputs=None, cancels=[submit_click_event], queue=False)
- regen_click_event = regen_btn.click(
- fn=regenerate,
- inputs=[chatbot, chat_history_state, system_msg, max_tokens, temperature, top_p, top_k, repetition_penalty],
- outputs=[chatbot, chat_history_state, message],
- queue=True
- )
-
-
-demo.queue(max_size=128, concurrency_count=2)
-demo.launch()
\ No newline at end of file
diff --git a/spaces/artificialguybr/video-dubbing/TTS/recipes/ljspeech/xtts_v1/train_gpt_xtts.py b/spaces/artificialguybr/video-dubbing/TTS/recipes/ljspeech/xtts_v1/train_gpt_xtts.py
deleted file mode 100644
index 7d8f4064c5f510295d5698869acdbdd57a9faeff..0000000000000000000000000000000000000000
--- a/spaces/artificialguybr/video-dubbing/TTS/recipes/ljspeech/xtts_v1/train_gpt_xtts.py
+++ /dev/null
@@ -1,176 +0,0 @@
-import os
-
-from trainer import Trainer, TrainerArgs
-
-from TTS.config.shared_configs import BaseDatasetConfig
-from TTS.tts.datasets import load_tts_samples
-from TTS.tts.layers.xtts.trainer.gpt_trainer import GPTArgs, GPTTrainer, GPTTrainerConfig, XttsAudioConfig
-from TTS.utils.manage import ModelManager
-
-# Logging parameters
-RUN_NAME = "GPT_XTTS_LJSpeech_FT"
-PROJECT_NAME = "XTTS_trainer"
-DASHBOARD_LOGGER = "tensorboard"
-LOGGER_URI = None
-
-# Set here the path where the checkpoints will be saved. Default: ./run/training/
-OUT_PATH = os.path.join(os.path.dirname(os.path.abspath(__file__)), "run", "training")
-
-# Training Parameters
-OPTIMIZER_WD_ONLY_ON_WEIGHTS = True # for multi-gpu training please make it False
-START_WITH_EVAL = True # if True it will start with evaluation
-BATCH_SIZE = 3 # set here the batch size
-GRAD_ACUMM_STEPS = 84 # set here the grad accumulation steps
-# Note: we recommend that BATCH_SIZE * GRAD_ACUMM_STEPS needs to be at least 252 for more efficient training. You can increase/decrease BATCH_SIZE, but then set GRAD_ACUMM_STEPS accordingly.
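-# With the defaults above, BATCH_SIZE * GRAD_ACUMM_STEPS = 3 * 84 = 252, which meets that recommendation.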
-
-# Define here the dataset that you want to use for the fine-tuning on.
-config_dataset = BaseDatasetConfig(
- formatter="ljspeech",
- dataset_name="ljspeech",
- path="/raid/datasets/LJSpeech-1.1_24khz/",
- meta_file_train="/raid/datasets/LJSpeech-1.1_24khz/metadata.csv",
- language="en",
-)
-
-# Add here the configs of the datasets
-DATASETS_CONFIG_LIST = [config_dataset]
-
-# Define the path where XTTS v1.1.1 files will be downloaded
-CHECKPOINTS_OUT_PATH = os.path.join(OUT_PATH, "XTTS_v1.1_original_model_files/")
-os.makedirs(CHECKPOINTS_OUT_PATH, exist_ok=True)
-
-
-# DVAE files
-DVAE_CHECKPOINT_LINK = "https://coqui.gateway.scarf.sh/hf-coqui/XTTS-v1/v1.1.2/dvae.pth"
-MEL_NORM_LINK = "https://coqui.gateway.scarf.sh/hf-coqui/XTTS-v1/v1.1.2/mel_stats.pth"
-
-# Set the path to the downloaded files
-DVAE_CHECKPOINT = os.path.join(CHECKPOINTS_OUT_PATH, DVAE_CHECKPOINT_LINK.split("/")[-1])
-MEL_NORM_FILE = os.path.join(CHECKPOINTS_OUT_PATH, MEL_NORM_LINK.split("/")[-1])
-
-# download DVAE files if needed
-if not os.path.isfile(DVAE_CHECKPOINT) or not os.path.isfile(MEL_NORM_FILE):
- print(" > Downloading DVAE files!")
- ModelManager._download_model_files([MEL_NORM_LINK, DVAE_CHECKPOINT_LINK], CHECKPOINTS_OUT_PATH, progress_bar=True)
-
-
-# Download XTTS v1.1 checkpoint if needed
-TOKENIZER_FILE_LINK = "https://coqui.gateway.scarf.sh/hf-coqui/XTTS-v1/v1.1.2/vocab.json"
-XTTS_CHECKPOINT_LINK = "https://coqui.gateway.scarf.sh/hf-coqui/XTTS-v1/v1.1.2/model.pth"
-
-# XTTS transfer learning parameters: you need to provide the paths of the XTTS model checkpoint that you want to fine-tune.
-TOKENIZER_FILE = os.path.join(CHECKPOINTS_OUT_PATH, TOKENIZER_FILE_LINK.split("/")[-1]) # vocab.json file
-XTTS_CHECKPOINT = os.path.join(CHECKPOINTS_OUT_PATH, XTTS_CHECKPOINT_LINK.split("/")[-1]) # model.pth file
-
-# download XTTS v1.1 files if needed
-if not os.path.isfile(TOKENIZER_FILE) or not os.path.isfile(XTTS_CHECKPOINT):
- print(" > Downloading XTTS v1.1 files!")
- ModelManager._download_model_files(
- [TOKENIZER_FILE_LINK, XTTS_CHECKPOINT_LINK], CHECKPOINTS_OUT_PATH, progress_bar=True
- )
-
-
-# Training sentences generations
-SPEAKER_REFERENCE = [
- "./tests/data/ljspeech/wavs/LJ001-0002.wav" # speaker reference to be used in training test sentences
-]
-LANGUAGE = config_dataset.language
-
-
-def main():
- # init args and config
- model_args = GPTArgs(
- max_conditioning_length=132300, # 6 secs
- min_conditioning_length=66150, # 3 secs
- debug_loading_failures=False,
- max_wav_length=255995, # ~11.6 seconds
- max_text_length=200,
- mel_norm_file=MEL_NORM_FILE,
- dvae_checkpoint=DVAE_CHECKPOINT,
- # tokenizer_file="/raid/datasets/xtts_models/vocab.json", # vocab path of the model that you want to fine-tune
- # xtts_checkpoint="https://huggingface.co/coqui/XTTS-v1/resolve/hifigan/model.pth",
- xtts_checkpoint=XTTS_CHECKPOINT, # checkpoint path of the model that you want to fine-tune
- tokenizer_file=TOKENIZER_FILE,
- gpt_num_audio_tokens=8194,
- gpt_start_audio_token=8192,
- gpt_stop_audio_token=8193,
- )
- # define audio config
- audio_config = XttsAudioConfig(sample_rate=22050, dvae_sample_rate=22050, output_sample_rate=24000)
- # training parameters config
- config = GPTTrainerConfig(
- output_path=OUT_PATH,
- model_args=model_args,
- run_name=RUN_NAME,
- project_name=PROJECT_NAME,
- run_description="""
- GPT XTTS training
- """,
- dashboard_logger=DASHBOARD_LOGGER,
- logger_uri=LOGGER_URI,
- audio=audio_config,
- batch_size=BATCH_SIZE,
- batch_group_size=48,
- eval_batch_size=BATCH_SIZE,
- num_loader_workers=8,
- eval_split_max_size=256,
- print_step=50,
- plot_step=100,
- log_model_step=1000,
- save_step=10000,
- save_n_checkpoints=1,
- save_checkpoints=True,
- # target_loss="loss",
- print_eval=False,
- # Optimizer values like tortoise, pytorch implementation with modifications to not apply WD to non-weight parameters.
- optimizer="AdamW",
- optimizer_wd_only_on_weights=OPTIMIZER_WD_ONLY_ON_WEIGHTS,
- optimizer_params={"betas": [0.9, 0.96], "eps": 1e-8, "weight_decay": 1e-2},
- lr=5e-06, # learning rate
- lr_scheduler="MultiStepLR",
-        # it was adjusted accordingly for the new step scheme
- lr_scheduler_params={"milestones": [50000 * 18, 150000 * 18, 300000 * 18], "gamma": 0.5, "last_epoch": -1},
- test_sentences=[
- {
- "text": "It took me quite a long time to develop a voice, and now that I have it I'm not going to be silent.",
- "speaker_wav": SPEAKER_REFERENCE,
- "language": LANGUAGE,
- },
- {
- "text": "This cake is great. It's so delicious and moist.",
- "speaker_wav": SPEAKER_REFERENCE,
- "language": LANGUAGE,
- },
- ],
- )
-
- # init the model from config
- model = GPTTrainer.init_from_config(config)
-
- # load training samples
- train_samples, eval_samples = load_tts_samples(
- DATASETS_CONFIG_LIST,
- eval_split=True,
- eval_split_max_size=config.eval_split_max_size,
- eval_split_size=config.eval_split_size,
- )
-
- # init the trainer and 🚀
- trainer = Trainer(
- TrainerArgs(
-            restore_path=None,  # xtts checkpoint is restored via xtts_checkpoint key, so no need to restore it using the Trainer restore_path parameter
- skip_train_epoch=False,
- start_with_eval=START_WITH_EVAL,
- grad_accum_steps=GRAD_ACUMM_STEPS,
- ),
- config,
- output_path=OUT_PATH,
- model=model,
- train_samples=train_samples,
- eval_samples=eval_samples,
- )
- trainer.fit()
-
-
-if __name__ == "__main__":
- main()
diff --git a/spaces/arxify/RVC-beta-v2-0618/infer_pack/models_onnx.py b/spaces/arxify/RVC-beta-v2-0618/infer_pack/models_onnx.py
deleted file mode 100644
index b0ed4a7847b419beef014f9afa1048400a829ebe..0000000000000000000000000000000000000000
--- a/spaces/arxify/RVC-beta-v2-0618/infer_pack/models_onnx.py
+++ /dev/null
@@ -1,819 +0,0 @@
-import math, pdb, os
-from time import time as ttime
-import torch
-from torch import nn
-from torch.nn import functional as F
-from infer_pack import modules
-from infer_pack import attentions
-from infer_pack import commons
-from infer_pack.commons import init_weights, get_padding
-from torch.nn import Conv1d, ConvTranspose1d, AvgPool1d, Conv2d
-from torch.nn.utils import weight_norm, remove_weight_norm, spectral_norm
-from infer_pack.commons import init_weights
-import numpy as np
-from infer_pack import commons
-
-
-class TextEncoder256(nn.Module):
- def __init__(
- self,
- out_channels,
- hidden_channels,
- filter_channels,
- n_heads,
- n_layers,
- kernel_size,
- p_dropout,
- f0=True,
- ):
- super().__init__()
- self.out_channels = out_channels
- self.hidden_channels = hidden_channels
- self.filter_channels = filter_channels
- self.n_heads = n_heads
- self.n_layers = n_layers
- self.kernel_size = kernel_size
- self.p_dropout = p_dropout
- self.emb_phone = nn.Linear(256, hidden_channels)
- self.lrelu = nn.LeakyReLU(0.1, inplace=True)
- if f0 == True:
- self.emb_pitch = nn.Embedding(256, hidden_channels) # pitch 256
- self.encoder = attentions.Encoder(
- hidden_channels, filter_channels, n_heads, n_layers, kernel_size, p_dropout
- )
- self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1)
-
- def forward(self, phone, pitch, lengths):
-        if pitch is None:
- x = self.emb_phone(phone)
- else:
- x = self.emb_phone(phone) + self.emb_pitch(pitch)
- x = x * math.sqrt(self.hidden_channels) # [b, t, h]
- x = self.lrelu(x)
- x = torch.transpose(x, 1, -1) # [b, h, t]
- x_mask = torch.unsqueeze(commons.sequence_mask(lengths, x.size(2)), 1).to(
- x.dtype
- )
- x = self.encoder(x * x_mask, x_mask)
- stats = self.proj(x) * x_mask
-
- m, logs = torch.split(stats, self.out_channels, dim=1)
- return m, logs, x_mask
-
-
-class TextEncoder768(nn.Module):
- def __init__(
- self,
- out_channels,
- hidden_channels,
- filter_channels,
- n_heads,
- n_layers,
- kernel_size,
- p_dropout,
- f0=True,
- ):
- super().__init__()
- self.out_channels = out_channels
- self.hidden_channels = hidden_channels
- self.filter_channels = filter_channels
- self.n_heads = n_heads
- self.n_layers = n_layers
- self.kernel_size = kernel_size
- self.p_dropout = p_dropout
- self.emb_phone = nn.Linear(768, hidden_channels)
- self.lrelu = nn.LeakyReLU(0.1, inplace=True)
- if f0 == True:
- self.emb_pitch = nn.Embedding(256, hidden_channels) # pitch 256
- self.encoder = attentions.Encoder(
- hidden_channels, filter_channels, n_heads, n_layers, kernel_size, p_dropout
- )
- self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1)
-
- def forward(self, phone, pitch, lengths):
-        if pitch is None:
- x = self.emb_phone(phone)
- else:
- x = self.emb_phone(phone) + self.emb_pitch(pitch)
- x = x * math.sqrt(self.hidden_channels) # [b, t, h]
- x = self.lrelu(x)
- x = torch.transpose(x, 1, -1) # [b, h, t]
- x_mask = torch.unsqueeze(commons.sequence_mask(lengths, x.size(2)), 1).to(
- x.dtype
- )
- x = self.encoder(x * x_mask, x_mask)
- stats = self.proj(x) * x_mask
-
- m, logs = torch.split(stats, self.out_channels, dim=1)
- return m, logs, x_mask
-
-
-class ResidualCouplingBlock(nn.Module):
- def __init__(
- self,
- channels,
- hidden_channels,
- kernel_size,
- dilation_rate,
- n_layers,
- n_flows=4,
- gin_channels=0,
- ):
- super().__init__()
- self.channels = channels
- self.hidden_channels = hidden_channels
- self.kernel_size = kernel_size
- self.dilation_rate = dilation_rate
- self.n_layers = n_layers
- self.n_flows = n_flows
- self.gin_channels = gin_channels
-
- self.flows = nn.ModuleList()
- for i in range(n_flows):
- self.flows.append(
- modules.ResidualCouplingLayer(
- channels,
- hidden_channels,
- kernel_size,
- dilation_rate,
- n_layers,
- gin_channels=gin_channels,
- mean_only=True,
- )
- )
- self.flows.append(modules.Flip())
-
- def forward(self, x, x_mask, g=None, reverse=False):
- if not reverse:
- for flow in self.flows:
- x, _ = flow(x, x_mask, g=g, reverse=reverse)
- else:
- for flow in reversed(self.flows):
- x = flow(x, x_mask, g=g, reverse=reverse)
- return x
-
- def remove_weight_norm(self):
- for i in range(self.n_flows):
- self.flows[i * 2].remove_weight_norm()
-
-
-class PosteriorEncoder(nn.Module):
- def __init__(
- self,
- in_channels,
- out_channels,
- hidden_channels,
- kernel_size,
- dilation_rate,
- n_layers,
- gin_channels=0,
- ):
- super().__init__()
- self.in_channels = in_channels
- self.out_channels = out_channels
- self.hidden_channels = hidden_channels
- self.kernel_size = kernel_size
- self.dilation_rate = dilation_rate
- self.n_layers = n_layers
- self.gin_channels = gin_channels
-
- self.pre = nn.Conv1d(in_channels, hidden_channels, 1)
- self.enc = modules.WN(
- hidden_channels,
- kernel_size,
- dilation_rate,
- n_layers,
- gin_channels=gin_channels,
- )
- self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1)
-
- def forward(self, x, x_lengths, g=None):
- x_mask = torch.unsqueeze(commons.sequence_mask(x_lengths, x.size(2)), 1).to(
- x.dtype
- )
- x = self.pre(x) * x_mask
- x = self.enc(x, x_mask, g=g)
- stats = self.proj(x) * x_mask
- m, logs = torch.split(stats, self.out_channels, dim=1)
- z = (m + torch.randn_like(m) * torch.exp(logs)) * x_mask
- return z, m, logs, x_mask
-
- def remove_weight_norm(self):
- self.enc.remove_weight_norm()
-
-
-class Generator(torch.nn.Module):
- def __init__(
- self,
- initial_channel,
- resblock,
- resblock_kernel_sizes,
- resblock_dilation_sizes,
- upsample_rates,
- upsample_initial_channel,
- upsample_kernel_sizes,
- gin_channels=0,
- ):
- super(Generator, self).__init__()
- self.num_kernels = len(resblock_kernel_sizes)
- self.num_upsamples = len(upsample_rates)
- self.conv_pre = Conv1d(
- initial_channel, upsample_initial_channel, 7, 1, padding=3
- )
- resblock = modules.ResBlock1 if resblock == "1" else modules.ResBlock2
-
- self.ups = nn.ModuleList()
- for i, (u, k) in enumerate(zip(upsample_rates, upsample_kernel_sizes)):
- self.ups.append(
- weight_norm(
- ConvTranspose1d(
- upsample_initial_channel // (2**i),
- upsample_initial_channel // (2 ** (i + 1)),
- k,
- u,
- padding=(k - u) // 2,
- )
- )
- )
-
- self.resblocks = nn.ModuleList()
- for i in range(len(self.ups)):
- ch = upsample_initial_channel // (2 ** (i + 1))
- for j, (k, d) in enumerate(
- zip(resblock_kernel_sizes, resblock_dilation_sizes)
- ):
- self.resblocks.append(resblock(ch, k, d))
-
- self.conv_post = Conv1d(ch, 1, 7, 1, padding=3, bias=False)
- self.ups.apply(init_weights)
-
- if gin_channels != 0:
- self.cond = nn.Conv1d(gin_channels, upsample_initial_channel, 1)
-
- def forward(self, x, g=None):
- x = self.conv_pre(x)
- if g is not None:
- x = x + self.cond(g)
-
- for i in range(self.num_upsamples):
- x = F.leaky_relu(x, modules.LRELU_SLOPE)
- x = self.ups[i](x)
- xs = None
- for j in range(self.num_kernels):
- if xs is None:
- xs = self.resblocks[i * self.num_kernels + j](x)
- else:
- xs += self.resblocks[i * self.num_kernels + j](x)
- x = xs / self.num_kernels
- x = F.leaky_relu(x)
- x = self.conv_post(x)
- x = torch.tanh(x)
-
- return x
-
- def remove_weight_norm(self):
- for l in self.ups:
- remove_weight_norm(l)
- for l in self.resblocks:
- l.remove_weight_norm()
-
-
-class SineGen(torch.nn.Module):
- """Definition of sine generator
- SineGen(samp_rate, harmonic_num = 0,
- sine_amp = 0.1, noise_std = 0.003,
- voiced_threshold = 0,
- flag_for_pulse=False)
- samp_rate: sampling rate in Hz
- harmonic_num: number of harmonic overtones (default 0)
-    sine_amp: amplitude of sine waveform (default 0.1)
-    noise_std: std of Gaussian noise (default 0.003)
-    voiced_threshold: F0 threshold for U/V classification (default 0)
-    flag_for_pulse: this SineGen is used inside PulseGen (default False)
- Note: when flag_for_pulse is True, the first time step of a voiced
- segment is always sin(np.pi) or cos(0)
- """
-
- def __init__(
- self,
- samp_rate,
- harmonic_num=0,
- sine_amp=0.1,
- noise_std=0.003,
- voiced_threshold=0,
- flag_for_pulse=False,
- ):
- super(SineGen, self).__init__()
- self.sine_amp = sine_amp
- self.noise_std = noise_std
- self.harmonic_num = harmonic_num
- self.dim = self.harmonic_num + 1
- self.sampling_rate = samp_rate
- self.voiced_threshold = voiced_threshold
-
- def _f02uv(self, f0):
- # generate uv signal
- uv = torch.ones_like(f0)
- uv = uv * (f0 > self.voiced_threshold)
- return uv
-
- def forward(self, f0, upp):
- """sine_tensor, uv = forward(f0)
- input F0: tensor(batchsize=1, length, dim=1)
- f0 for unvoiced steps should be 0
- output sine_tensor: tensor(batchsize=1, length, dim)
- output uv: tensor(batchsize=1, length, 1)
- """
- with torch.no_grad():
- f0 = f0[:, None].transpose(1, 2)
- f0_buf = torch.zeros(f0.shape[0], f0.shape[1], self.dim, device=f0.device)
- # fundamental component
- f0_buf[:, :, 0] = f0[:, :, 0]
- for idx in np.arange(self.harmonic_num):
- f0_buf[:, :, idx + 1] = f0_buf[:, :, 0] * (
- idx + 2
- ) # idx + 2: the (idx+1)-th overtone, (idx+2)-th harmonic
-            rad_values = (f0_buf / self.sampling_rate) % 1  ### the % 1 means the products over n_har cannot be optimized away in post-processing
- rand_ini = torch.rand(
- f0_buf.shape[0], f0_buf.shape[2], device=f0_buf.device
- )
- rand_ini[:, 0] = 0
- rad_values[:, 0, :] = rad_values[:, 0, :] + rand_ini
-            tmp_over_one = torch.cumsum(rad_values, 1)  # % 1  ##### applying % 1 here would mean the cumsum below could no longer be optimized
- tmp_over_one *= upp
- tmp_over_one = F.interpolate(
- tmp_over_one.transpose(2, 1),
- scale_factor=upp,
- mode="linear",
- align_corners=True,
- ).transpose(2, 1)
- rad_values = F.interpolate(
- rad_values.transpose(2, 1), scale_factor=upp, mode="nearest"
- ).transpose(
- 2, 1
- ) #######
- tmp_over_one %= 1
- tmp_over_one_idx = (tmp_over_one[:, 1:, :] - tmp_over_one[:, :-1, :]) < 0
- cumsum_shift = torch.zeros_like(rad_values)
- cumsum_shift[:, 1:, :] = tmp_over_one_idx * -1.0
- sine_waves = torch.sin(
- torch.cumsum(rad_values + cumsum_shift, dim=1) * 2 * np.pi
- )
- sine_waves = sine_waves * self.sine_amp
- uv = self._f02uv(f0)
- uv = F.interpolate(
- uv.transpose(2, 1), scale_factor=upp, mode="nearest"
- ).transpose(2, 1)
- noise_amp = uv * self.noise_std + (1 - uv) * self.sine_amp / 3
- noise = noise_amp * torch.randn_like(sine_waves)
- sine_waves = sine_waves * uv + noise
- return sine_waves, uv, noise
-
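-# Illustrative usage sketch (editorial addition, not part of the original module).
-# It assumes a hypothetical 40 kHz model whose frame hop corresponds to an
-# upsampling factor `upp` of 400 samples per frame; SineGen then expands the
-# frame-level F0 track into a sample-level sine excitation plus a voiced mask:
-#
-#   sine_gen = SineGen(samp_rate=40000, harmonic_num=0)
-#   f0 = torch.full((1, 100), 220.0)                 # 100 frames at 220 Hz
-#   sine_waves, uv, noise = sine_gen(f0, upp=400)
-#   # sine_waves, uv and noise all have shape (1, 100 * 400, 1)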
-
-class SourceModuleHnNSF(torch.nn.Module):
- """SourceModule for hn-nsf
- SourceModule(sampling_rate, harmonic_num=0, sine_amp=0.1,
- add_noise_std=0.003, voiced_threshod=0)
- sampling_rate: sampling_rate in Hz
- harmonic_num: number of harmonic above F0 (default: 0)
- sine_amp: amplitude of sine source signal (default: 0.1)
- add_noise_std: std of additive Gaussian noise (default: 0.003)
- note that amplitude of noise in unvoiced is decided
- by sine_amp
- voiced_threshold: threshold to set U/V given F0 (default: 0)
- Sine_source, noise_source = SourceModuleHnNSF(F0_sampled)
- F0_sampled (batchsize, length, 1)
- Sine_source (batchsize, length, 1)
- noise_source (batchsize, length, 1)
- uv (batchsize, length, 1)
- """
-
- def __init__(
- self,
- sampling_rate,
- harmonic_num=0,
- sine_amp=0.1,
- add_noise_std=0.003,
- voiced_threshod=0,
- is_half=True,
- ):
- super(SourceModuleHnNSF, self).__init__()
-
- self.sine_amp = sine_amp
- self.noise_std = add_noise_std
- self.is_half = is_half
- # to produce sine waveforms
- self.l_sin_gen = SineGen(
- sampling_rate, harmonic_num, sine_amp, add_noise_std, voiced_threshod
- )
-
- # to merge source harmonics into a single excitation
- self.l_linear = torch.nn.Linear(harmonic_num + 1, 1)
- self.l_tanh = torch.nn.Tanh()
-
- def forward(self, x, upp=None):
- sine_wavs, uv, _ = self.l_sin_gen(x, upp)
- if self.is_half:
- sine_wavs = sine_wavs.half()
- sine_merge = self.l_tanh(self.l_linear(sine_wavs))
- return sine_merge, None, None # noise, uv
-
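-# Illustrative sketch (editorial addition, not part of the original module):
-# SourceModuleHnNSF wraps SineGen and collapses its (harmonic_num + 1) sine
-# channels into a single excitation with a learned linear layer and tanh.
-# The 40 kHz rate and upp=400 below are assumed values, as above:
-#
-#   m_source = SourceModuleHnNSF(sampling_rate=40000, harmonic_num=0, is_half=False)
-#   f0 = torch.full((1, 100), 220.0)
-#   sine_merge, _, _ = m_source(f0, upp=400)         # shape (1, 100 * 400, 1)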
-
-class GeneratorNSF(torch.nn.Module):
- def __init__(
- self,
- initial_channel,
- resblock,
- resblock_kernel_sizes,
- resblock_dilation_sizes,
- upsample_rates,
- upsample_initial_channel,
- upsample_kernel_sizes,
- gin_channels,
- sr,
- is_half=False,
- ):
- super(GeneratorNSF, self).__init__()
- self.num_kernels = len(resblock_kernel_sizes)
- self.num_upsamples = len(upsample_rates)
-
- self.f0_upsamp = torch.nn.Upsample(scale_factor=np.prod(upsample_rates))
- self.m_source = SourceModuleHnNSF(
- sampling_rate=sr, harmonic_num=0, is_half=is_half
- )
- self.noise_convs = nn.ModuleList()
- self.conv_pre = Conv1d(
- initial_channel, upsample_initial_channel, 7, 1, padding=3
- )
- resblock = modules.ResBlock1 if resblock == "1" else modules.ResBlock2
-
- self.ups = nn.ModuleList()
- for i, (u, k) in enumerate(zip(upsample_rates, upsample_kernel_sizes)):
- c_cur = upsample_initial_channel // (2 ** (i + 1))
- self.ups.append(
- weight_norm(
- ConvTranspose1d(
- upsample_initial_channel // (2**i),
- upsample_initial_channel // (2 ** (i + 1)),
- k,
- u,
- padding=(k - u) // 2,
- )
- )
- )
- if i + 1 < len(upsample_rates):
- stride_f0 = np.prod(upsample_rates[i + 1 :])
- self.noise_convs.append(
- Conv1d(
- 1,
- c_cur,
- kernel_size=stride_f0 * 2,
- stride=stride_f0,
- padding=stride_f0 // 2,
- )
- )
- else:
- self.noise_convs.append(Conv1d(1, c_cur, kernel_size=1))
-
- self.resblocks = nn.ModuleList()
- for i in range(len(self.ups)):
- ch = upsample_initial_channel // (2 ** (i + 1))
- for j, (k, d) in enumerate(
- zip(resblock_kernel_sizes, resblock_dilation_sizes)
- ):
- self.resblocks.append(resblock(ch, k, d))
-
- self.conv_post = Conv1d(ch, 1, 7, 1, padding=3, bias=False)
- self.ups.apply(init_weights)
-
- if gin_channels != 0:
- self.cond = nn.Conv1d(gin_channels, upsample_initial_channel, 1)
-
- self.upp = np.prod(upsample_rates)
-
- def forward(self, x, f0, g=None):
- har_source, noi_source, uv = self.m_source(f0, self.upp)
- har_source = har_source.transpose(1, 2)
- x = self.conv_pre(x)
- if g is not None:
- x = x + self.cond(g)
-
- for i in range(self.num_upsamples):
- x = F.leaky_relu(x, modules.LRELU_SLOPE)
- x = self.ups[i](x)
- x_source = self.noise_convs[i](har_source)
- x = x + x_source
- xs = None
- for j in range(self.num_kernels):
- if xs is None:
- xs = self.resblocks[i * self.num_kernels + j](x)
- else:
- xs += self.resblocks[i * self.num_kernels + j](x)
- x = xs / self.num_kernels
- x = F.leaky_relu(x)
- x = self.conv_post(x)
- x = torch.tanh(x)
- return x
-
- def remove_weight_norm(self):
- for l in self.ups:
- remove_weight_norm(l)
- for l in self.resblocks:
- l.remove_weight_norm()
-
-
-sr2sr = {
- "32k": 32000,
- "40k": 40000,
- "48k": 48000,
-}
-
-
-class SynthesizerTrnMsNSFsidM(nn.Module):
- def __init__(
- self,
- spec_channels,
- segment_size,
- inter_channels,
- hidden_channels,
- filter_channels,
- n_heads,
- n_layers,
- kernel_size,
- p_dropout,
- resblock,
- resblock_kernel_sizes,
- resblock_dilation_sizes,
- upsample_rates,
- upsample_initial_channel,
- upsample_kernel_sizes,
- spk_embed_dim,
- gin_channels,
- sr,
- version,
- **kwargs
- ):
- super().__init__()
- if isinstance(sr, str):
- sr = sr2sr[sr]
- self.spec_channels = spec_channels
- self.inter_channels = inter_channels
- self.hidden_channels = hidden_channels
- self.filter_channels = filter_channels
- self.n_heads = n_heads
- self.n_layers = n_layers
- self.kernel_size = kernel_size
- self.p_dropout = p_dropout
- self.resblock = resblock
- self.resblock_kernel_sizes = resblock_kernel_sizes
- self.resblock_dilation_sizes = resblock_dilation_sizes
- self.upsample_rates = upsample_rates
- self.upsample_initial_channel = upsample_initial_channel
- self.upsample_kernel_sizes = upsample_kernel_sizes
- self.segment_size = segment_size
- self.gin_channels = gin_channels
- # self.hop_length = hop_length#
- self.spk_embed_dim = spk_embed_dim
- if version == "v1":
- self.enc_p = TextEncoder256(
- inter_channels,
- hidden_channels,
- filter_channels,
- n_heads,
- n_layers,
- kernel_size,
- p_dropout,
- )
- else:
- self.enc_p = TextEncoder768(
- inter_channels,
- hidden_channels,
- filter_channels,
- n_heads,
- n_layers,
- kernel_size,
- p_dropout,
- )
- self.dec = GeneratorNSF(
- inter_channels,
- resblock,
- resblock_kernel_sizes,
- resblock_dilation_sizes,
- upsample_rates,
- upsample_initial_channel,
- upsample_kernel_sizes,
- gin_channels=gin_channels,
- sr=sr,
- is_half=kwargs["is_half"],
- )
- self.enc_q = PosteriorEncoder(
- spec_channels,
- inter_channels,
- hidden_channels,
- 5,
- 1,
- 16,
- gin_channels=gin_channels,
- )
- self.flow = ResidualCouplingBlock(
- inter_channels, hidden_channels, 5, 1, 3, gin_channels=gin_channels
- )
- self.emb_g = nn.Embedding(self.spk_embed_dim, gin_channels)
- self.speaker_map = None
- print("gin_channels:", gin_channels, "self.spk_embed_dim:", self.spk_embed_dim)
-
- def remove_weight_norm(self):
- self.dec.remove_weight_norm()
- self.flow.remove_weight_norm()
- self.enc_q.remove_weight_norm()
-
- def construct_spkmixmap(self, n_speaker):
- self.speaker_map = torch.zeros((n_speaker, 1, 1, self.gin_channels))
- for i in range(n_speaker):
- self.speaker_map[i] = self.emb_g(torch.LongTensor([[i]]))
- self.speaker_map = self.speaker_map.unsqueeze(0)
-
- def forward(self, phone, phone_lengths, pitch, nsff0, g, rnd, max_len=None):
- if self.speaker_map is not None: # [N, S] * [S, B, 1, H]
- g = g.reshape((g.shape[0], g.shape[1], 1, 1, 1)) # [N, S, B, 1, 1]
- g = g * self.speaker_map # [N, S, B, 1, H]
- g = torch.sum(g, dim=1) # [N, 1, B, 1, H]
- g = g.transpose(0, -1).transpose(0, -2).squeeze(0) # [B, H, N]
- else:
- g = g.unsqueeze(0)
- g = self.emb_g(g).transpose(1, 2)
-
- m_p, logs_p, x_mask = self.enc_p(phone, pitch, phone_lengths)
- z_p = (m_p + torch.exp(logs_p) * rnd) * x_mask
- z = self.flow(z_p, x_mask, g=g, reverse=True)
- o = self.dec((z * x_mask)[:, :, :max_len], nsff0, g=g)
- return o
-
-
-class MultiPeriodDiscriminator(torch.nn.Module):
- def __init__(self, use_spectral_norm=False):
- super(MultiPeriodDiscriminator, self).__init__()
- periods = [2, 3, 5, 7, 11, 17]
- # periods = [3, 5, 7, 11, 17, 23, 37]
-
- discs = [DiscriminatorS(use_spectral_norm=use_spectral_norm)]
- discs = discs + [
- DiscriminatorP(i, use_spectral_norm=use_spectral_norm) for i in periods
- ]
- self.discriminators = nn.ModuleList(discs)
-
- def forward(self, y, y_hat):
- y_d_rs = [] #
- y_d_gs = []
- fmap_rs = []
- fmap_gs = []
- for i, d in enumerate(self.discriminators):
- y_d_r, fmap_r = d(y)
- y_d_g, fmap_g = d(y_hat)
- # for j in range(len(fmap_r)):
- # print(i,j,y.shape,y_hat.shape,fmap_r[j].shape,fmap_g[j].shape)
- y_d_rs.append(y_d_r)
- y_d_gs.append(y_d_g)
- fmap_rs.append(fmap_r)
- fmap_gs.append(fmap_g)
-
- return y_d_rs, y_d_gs, fmap_rs, fmap_gs
-
-
-class MultiPeriodDiscriminatorV2(torch.nn.Module):
- def __init__(self, use_spectral_norm=False):
- super(MultiPeriodDiscriminatorV2, self).__init__()
- # periods = [2, 3, 5, 7, 11, 17]
- periods = [2, 3, 5, 7, 11, 17, 23, 37]
-
- discs = [DiscriminatorS(use_spectral_norm=use_spectral_norm)]
- discs = discs + [
- DiscriminatorP(i, use_spectral_norm=use_spectral_norm) for i in periods
- ]
- self.discriminators = nn.ModuleList(discs)
-
- def forward(self, y, y_hat):
- y_d_rs = [] #
- y_d_gs = []
- fmap_rs = []
- fmap_gs = []
- for i, d in enumerate(self.discriminators):
- y_d_r, fmap_r = d(y)
- y_d_g, fmap_g = d(y_hat)
- # for j in range(len(fmap_r)):
- # print(i,j,y.shape,y_hat.shape,fmap_r[j].shape,fmap_g[j].shape)
- y_d_rs.append(y_d_r)
- y_d_gs.append(y_d_g)
- fmap_rs.append(fmap_r)
- fmap_gs.append(fmap_g)
-
- return y_d_rs, y_d_gs, fmap_rs, fmap_gs
-
-
-class DiscriminatorS(torch.nn.Module):
- def __init__(self, use_spectral_norm=False):
- super(DiscriminatorS, self).__init__()
- norm_f = spectral_norm if use_spectral_norm else weight_norm
- self.convs = nn.ModuleList(
- [
- norm_f(Conv1d(1, 16, 15, 1, padding=7)),
- norm_f(Conv1d(16, 64, 41, 4, groups=4, padding=20)),
- norm_f(Conv1d(64, 256, 41, 4, groups=16, padding=20)),
- norm_f(Conv1d(256, 1024, 41, 4, groups=64, padding=20)),
- norm_f(Conv1d(1024, 1024, 41, 4, groups=256, padding=20)),
- norm_f(Conv1d(1024, 1024, 5, 1, padding=2)),
- ]
- )
- self.conv_post = norm_f(Conv1d(1024, 1, 3, 1, padding=1))
-
- def forward(self, x):
- fmap = []
-
- for l in self.convs:
- x = l(x)
- x = F.leaky_relu(x, modules.LRELU_SLOPE)
- fmap.append(x)
- x = self.conv_post(x)
- fmap.append(x)
- x = torch.flatten(x, 1, -1)
-
- return x, fmap
-
-
-class DiscriminatorP(torch.nn.Module):
- def __init__(self, period, kernel_size=5, stride=3, use_spectral_norm=False):
- super(DiscriminatorP, self).__init__()
- self.period = period
- self.use_spectral_norm = use_spectral_norm
- norm_f = spectral_norm if use_spectral_norm else weight_norm
- self.convs = nn.ModuleList(
- [
- norm_f(
- Conv2d(
- 1,
- 32,
- (kernel_size, 1),
- (stride, 1),
- padding=(get_padding(kernel_size, 1), 0),
- )
- ),
- norm_f(
- Conv2d(
- 32,
- 128,
- (kernel_size, 1),
- (stride, 1),
- padding=(get_padding(kernel_size, 1), 0),
- )
- ),
- norm_f(
- Conv2d(
- 128,
- 512,
- (kernel_size, 1),
- (stride, 1),
- padding=(get_padding(kernel_size, 1), 0),
- )
- ),
- norm_f(
- Conv2d(
- 512,
- 1024,
- (kernel_size, 1),
- (stride, 1),
- padding=(get_padding(kernel_size, 1), 0),
- )
- ),
- norm_f(
- Conv2d(
- 1024,
- 1024,
- (kernel_size, 1),
- 1,
- padding=(get_padding(kernel_size, 1), 0),
- )
- ),
- ]
- )
- self.conv_post = norm_f(Conv2d(1024, 1, (3, 1), 1, padding=(1, 0)))
-
- def forward(self, x):
- fmap = []
-
- # 1d to 2d
- b, c, t = x.shape
- if t % self.period != 0: # pad first
- n_pad = self.period - (t % self.period)
- x = F.pad(x, (0, n_pad), "reflect")
- t = t + n_pad
- x = x.view(b, c, t // self.period, self.period)
-
- for l in self.convs:
- x = l(x)
- x = F.leaky_relu(x, modules.LRELU_SLOPE)
- fmap.append(x)
- x = self.conv_post(x)
- fmap.append(x)
- x = torch.flatten(x, 1, -1)
-
- return x, fmap
diff --git a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/altair/expr/core.py b/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/altair/expr/core.py
deleted file mode 100644
index 264c5a1956fbc90a3322f3d69a2f555d416a9d95..0000000000000000000000000000000000000000
--- a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/altair/expr/core.py
+++ /dev/null
@@ -1,197 +0,0 @@
-from ..utils import SchemaBase
-
-
-class DatumType(object):
- """An object to assist in building Vega-Lite Expressions"""
-
- def __repr__(self):
- return "datum"
-
- def __getattr__(self, attr):
- if attr.startswith("__") and attr.endswith("__"):
- raise AttributeError(attr)
- return GetAttrExpression("datum", attr)
-
- def __getitem__(self, attr):
- return GetItemExpression("datum", attr)
-
- def __call__(self, datum, **kwargs):
- """Specify a datum for use in an encoding"""
- return dict(datum=datum, **kwargs)
-
-
-datum = DatumType()
-
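-# Illustrative sketch (editorial addition, not part of the original module):
-# attribute access on the module-level ``datum`` object builds expression nodes
-# rather than evaluating anything, so the result can later be rendered as a
-# Vega-Lite expression string:
-#
-#   expr = datum.weight        # GetAttrExpression("datum", "weight")
-#   repr(expr)                 # -> "datum.weight"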
-
-def _js_repr(val):
- """Return a javascript-safe string representation of val"""
- if val is True:
- return "true"
- elif val is False:
- return "false"
- elif val is None:
- return "null"
- else:
- return repr(val)
-
-
-class Expression(SchemaBase):
- """Expression
-
- Base object for enabling build-up of Javascript expressions using
- a Python syntax. Calling ``repr(obj)`` will return a Javascript
- representation of the object and the operations it encodes.
- """
-
- _schema = {"type": "string"}
-
- def to_dict(self, *args, **kwargs):
- return repr(self)
-
- def __setattr__(self, attr, val):
- # We don't need the setattr magic defined in SchemaBase
- return object.__setattr__(self, attr, val)
-
- def __add__(self, other):
- return BinaryExpression("+", self, other)
-
- def __radd__(self, other):
- return BinaryExpression("+", other, self)
-
- def __sub__(self, other):
- return BinaryExpression("-", self, other)
-
- def __rsub__(self, other):
- return BinaryExpression("-", other, self)
-
- def __mul__(self, other):
- return BinaryExpression("*", self, other)
-
- def __rmul__(self, other):
- return BinaryExpression("*", other, self)
-
- def __truediv__(self, other):
- return BinaryExpression("/", self, other)
-
- def __rtruediv__(self, other):
- return BinaryExpression("/", other, self)
-
- __div__ = __truediv__
-
- __rdiv__ = __rtruediv__
-
- def __mod__(self, other):
- return BinaryExpression("%", self, other)
-
- def __rmod__(self, other):
- return BinaryExpression("%", other, self)
-
- def __pow__(self, other):
- # "**" Javascript operator is not supported in all browsers
- return FunctionExpression("pow", (self, other))
-
- def __rpow__(self, other):
- # "**" Javascript operator is not supported in all browsers
- return FunctionExpression("pow", (other, self))
-
- def __neg__(self):
- return UnaryExpression("-", self)
-
- def __pos__(self):
- return UnaryExpression("+", self)
-
- # comparison operators
-
- def __eq__(self, other):
- return BinaryExpression("===", self, other)
-
- def __ne__(self, other):
- return BinaryExpression("!==", self, other)
-
- def __gt__(self, other):
- return BinaryExpression(">", self, other)
-
- def __lt__(self, other):
- return BinaryExpression("<", self, other)
-
- def __ge__(self, other):
- return BinaryExpression(">=", self, other)
-
- def __le__(self, other):
- return BinaryExpression("<=", self, other)
-
- def __abs__(self):
- return FunctionExpression("abs", (self,))
-
- # logical operators
-
- def __and__(self, other):
- return BinaryExpression("&&", self, other)
-
- def __rand__(self, other):
- return BinaryExpression("&&", other, self)
-
- def __or__(self, other):
- return BinaryExpression("||", self, other)
-
- def __ror__(self, other):
- return BinaryExpression("||", other, self)
-
- def __invert__(self):
- return UnaryExpression("!", self)
-
- # item access
- def __getitem__(self, val):
- return GetItemExpression(self, val)
-
-
-class UnaryExpression(Expression):
- def __init__(self, op, val):
- super(UnaryExpression, self).__init__(op=op, val=val)
-
- def __repr__(self):
- return "({op}{val})".format(op=self.op, val=_js_repr(self.val))
-
-
-class BinaryExpression(Expression):
- def __init__(self, op, lhs, rhs):
- super(BinaryExpression, self).__init__(op=op, lhs=lhs, rhs=rhs)
-
- def __repr__(self):
- return "({lhs} {op} {rhs})".format(
- op=self.op, lhs=_js_repr(self.lhs), rhs=_js_repr(self.rhs)
- )
-
-
-class FunctionExpression(Expression):
- def __init__(self, name, args):
- super(FunctionExpression, self).__init__(name=name, args=args)
-
- def __repr__(self):
- args = ",".join(_js_repr(arg) for arg in self.args)
- return "{name}({args})".format(name=self.name, args=args)
-
-
-class ConstExpression(Expression):
- def __init__(self, name, doc):
- self.__doc__ = """{}: {}""".format(name, doc)
- super(ConstExpression, self).__init__(name=name, doc=doc)
-
- def __repr__(self):
- return str(self.name)
-
-
-class GetAttrExpression(Expression):
- def __init__(self, group, name):
- super(GetAttrExpression, self).__init__(group=group, name=name)
-
- def __repr__(self):
- return "{}.{}".format(self.group, self.name)
-
-
-class GetItemExpression(Expression):
- def __init__(self, group, name):
- super(GetItemExpression, self).__init__(group=group, name=name)
-
- def __repr__(self):
- return "{}[{!r}]".format(self.group, self.name)
diff --git a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/altair/vegalite/v3/schema/core.py b/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/altair/vegalite/v3/schema/core.py
deleted file mode 100644
index 7c101a7cc372bdc572f483732e26a0860762fcdc..0000000000000000000000000000000000000000
--- a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/altair/vegalite/v3/schema/core.py
+++ /dev/null
@@ -1,16039 +0,0 @@
-# The contents of this file are automatically written by
-# tools/generate_schema_wrapper.py. Do not modify directly.
-
-from altair.utils.schemapi import SchemaBase, Undefined, _subclasses
-
-import pkgutil
-import json
-
-def load_schema():
- """Load the json schema associated with this module's functions"""
- return json.loads(pkgutil.get_data(__name__, 'vega-lite-schema.json').decode('utf-8'))
-
-
-class VegaLiteSchema(SchemaBase):
- _rootschema = load_schema()
- @classmethod
- def _default_wrapper_classes(cls):
- return _subclasses(VegaLiteSchema)
-
-
-class Root(VegaLiteSchema):
- """Root schema wrapper
-
- anyOf(:class:`TopLevelUnitSpec`, :class:`TopLevelFacetSpec`, :class:`TopLevelLayerSpec`,
- :class:`TopLevelRepeatSpec`, :class:`TopLevelConcatSpec`, :class:`TopLevelVConcatSpec`,
- :class:`TopLevelHConcatSpec`)
- A Vega-Lite top-level specification.
- This is the root class for all Vega-Lite specifications.
- (The json schema is generated from this type.)
- """
- _schema = VegaLiteSchema._rootschema
-
- def __init__(self, *args, **kwds):
- super(Root, self).__init__(*args, **kwds)
-
-
-class Aggregate(VegaLiteSchema):
- """Aggregate schema wrapper
-
- anyOf(:class:`AggregateOp`, :class:`ArgmaxDef`, :class:`ArgminDef`)
- """
- _schema = {'$ref': '#/definitions/Aggregate'}
-
- def __init__(self, *args, **kwds):
- super(Aggregate, self).__init__(*args, **kwds)
-
-
-class AggregateOp(Aggregate):
- """AggregateOp schema wrapper
-
- enum('argmax', 'argmin', 'average', 'count', 'distinct', 'max', 'mean', 'median', 'min',
- 'missing', 'q1', 'q3', 'ci0', 'ci1', 'stderr', 'stdev', 'stdevp', 'sum', 'valid', 'values',
- 'variance', 'variancep')
- """
- _schema = {'$ref': '#/definitions/AggregateOp'}
-
- def __init__(self, *args):
- super(AggregateOp, self).__init__(*args)
-
-
-class AggregatedFieldDef(VegaLiteSchema):
- """AggregatedFieldDef schema wrapper
-
- Mapping(required=[op, as])
-
- Attributes
- ----------
-
- op : :class:`AggregateOp`
- The aggregation operation to apply to the fields (e.g., sum, average or count).
- See the `full list of supported aggregation operations
- `__
- for more information.
- field : :class:`FieldName`
- The data field for which to compute aggregate function. This is required for all
- aggregation operations except ``"count"``.
- as : :class:`FieldName`
- The output field names to use for each aggregated field.
- """
- _schema = {'$ref': '#/definitions/AggregatedFieldDef'}
-
- def __init__(self, op=Undefined, field=Undefined, **kwds):
- super(AggregatedFieldDef, self).__init__(op=op, field=field, **kwds)
-
-
-class Align(VegaLiteSchema):
- """Align schema wrapper
-
- enum('left', 'center', 'right')
- """
- _schema = {'$ref': '#/definitions/Align'}
-
- def __init__(self, *args):
- super(Align, self).__init__(*args)
-
-
-class AnyMark(VegaLiteSchema):
- """AnyMark schema wrapper
-
- anyOf(:class:`CompositeMark`, :class:`CompositeMarkDef`, :class:`Mark`, :class:`MarkDef`)
- """
- _schema = {'$ref': '#/definitions/AnyMark'}
-
- def __init__(self, *args, **kwds):
- super(AnyMark, self).__init__(*args, **kwds)
-
-
-class AreaConfig(VegaLiteSchema):
- """AreaConfig schema wrapper
-
- Mapping(required=[])
-
- Attributes
- ----------
-
- align : :class:`Align`
- The horizontal alignment of the text. One of ``"left"``, ``"right"``, ``"center"``.
- angle : float
- The rotation angle of the text, in degrees.
- baseline : :class:`TextBaseline`
- The vertical alignment of the text. One of ``"top"``, ``"middle"``, ``"bottom"``.
-
- **Default value:** ``"middle"``
- color : :class:`Color`
- Default color. Note that ``fill`` and ``stroke`` have higher precedence than
- ``color`` and will override ``color``.
-
- **Default value:** :raw-html:`■`
- ``"#4682b4"``
-
- **Note:** This property cannot be used in a `style config
- `__.
- cornerRadius : float
- The radius in pixels of rounded rectangle corners.
-
- **Default value:** ``0``
- cursor : :class:`Cursor`
- The mouse cursor used over the mark. Any valid `CSS cursor type
- `__ can be used.
- dir : :class:`Dir`
- The direction of the text. One of ``"ltr"`` (left-to-right) or ``"rtl"``
- (right-to-left). This property determines on which side is truncated in response to
- the limit parameter.
-
- **Default value:** ``"ltr"``
- dx : float
- The horizontal offset, in pixels, between the text label and its anchor point. The
- offset is applied after rotation by the *angle* property.
- dy : float
- The vertical offset, in pixels, between the text label and its anchor point. The
- offset is applied after rotation by the *angle* property.
- ellipsis : string
- The ellipsis string for text truncated in response to the limit parameter.
-
- **Default value:** ``"…"``
- fill : :class:`Color`
- Default Fill Color. This has higher precedence than ``config.color``
-
- **Default value:** (None)
- fillOpacity : float
- The fill opacity (value between [0,1]).
-
- **Default value:** ``1``
- filled : boolean
- Whether the mark's color should be used as fill color instead of stroke color.
-
- **Default value:** ``false`` for ``point``, ``line`` and ``rule`` ; otherwise,
- ``true``.
-
- **Note:** This property cannot be used in a `style config
- `__.
- font : string
- The typeface to set the text in (e.g., ``"Helvetica Neue"`` ).
- fontSize : float
- The font size, in pixels.
- fontStyle : :class:`FontStyle`
- The font style (e.g., ``"italic"`` ).
- fontWeight : :class:`FontWeight`
- The font weight.
- This can be either a string (e.g ``"bold"``, ``"normal"`` ) or a number ( ``100``,
- ``200``, ``300``, ..., ``900`` where ``"normal"`` = ``400`` and ``"bold"`` = ``700``
- ).
- height : float
- Height of the marks.
- href : string
- A URL to load upon mouse click. If defined, the mark acts as a hyperlink.
- interpolate : :class:`Interpolate`
- The line interpolation method to use for line and area marks. One of the following:
-
-
- * ``"linear"`` : piecewise linear segments, as in a polyline.
- * ``"linear-closed"`` : close the linear segments to form a polygon.
- * ``"step"`` : alternate between horizontal and vertical segments, as in a step
- function.
- * ``"step-before"`` : alternate between vertical and horizontal segments, as in a
- step function.
- * ``"step-after"`` : alternate between horizontal and vertical segments, as in a
- step function.
- * ``"basis"`` : a B-spline, with control point duplication on the ends.
- * ``"basis-open"`` : an open B-spline; may not intersect the start or end.
- * ``"basis-closed"`` : a closed B-spline, as in a loop.
- * ``"cardinal"`` : a Cardinal spline, with control point duplication on the ends.
- * ``"cardinal-open"`` : an open Cardinal spline; may not intersect the start or end,
- but will intersect other control points.
- * ``"cardinal-closed"`` : a closed Cardinal spline, as in a loop.
- * ``"bundle"`` : equivalent to basis, except the tension parameter is used to
- straighten the spline.
- * ``"monotone"`` : cubic interpolation that preserves monotonicity in y.
- limit : float
- The maximum length of the text mark in pixels. The text value will be automatically
- truncated if the rendered size exceeds the limit.
-
- **Default value:** ``0``, indicating no limit
- line : anyOf(boolean, :class:`OverlayMarkDef`)
- A flag for overlaying line on top of area marks, or an object defining the
- properties of the overlayed lines.
-
-
- If this value is an empty object ( ``{}`` ) or ``true``, lines with default
- properties will be used.
-
- If this value is ``false``, no lines would be automatically added to area marks.
-
- **Default value:** ``false``.
- opacity : float
- The overall opacity (value between [0,1]).
-
- **Default value:** ``0.7`` for non-aggregate plots with ``point``, ``tick``,
- ``circle``, or ``square`` marks or layered ``bar`` charts and ``1`` otherwise.
- order : anyOf(None, boolean)
- For line and trail marks, this ``order`` property can be set to ``null`` or
- ``false`` to make the lines use the original order in the data sources.
- orient : :class:`Orientation`
- The orientation of non-stacked bar, tick, area, and line charts.
- The value is either horizontal (default) or vertical.
-
-
- * For bar, rule and tick, this determines whether the size of the bar and tick
- should be applied to x or y dimension.
- * For area, this property determines the orient property of the Vega output.
- * For line and trail marks, this property determines the sort order of the points in
- the line
- if ``config.sortLineBy`` is not specified.
- For stacked charts, this is always determined by the orientation of the stack;
- therefore an explicitly specified value will be ignored.
- point : anyOf(boolean, :class:`OverlayMarkDef`, enum('transparent'))
- A flag for overlaying points on top of line or area marks, or an object defining the
- properties of the overlayed points.
-
-
- If this property is ``"transparent"``, transparent points will be used (for
- enhancing tooltips and selections).
-
- If this property is an empty object ( ``{}`` ) or ``true``, filled points with
- default properties will be used.
-
- If this property is ``false``, no points would be automatically added to line or
- area marks.
-
- **Default value:** ``false``.
- radius : float
- Polar coordinate radial offset, in pixels, of the text label from the origin
- determined by the ``x`` and ``y`` properties.
- shape : string
- Shape of the point marks. Supported values include:
-
-
- * plotting shapes: ``"circle"``, ``"square"``, ``"cross"``, ``"diamond"``,
- ``"triangle-up"``, ``"triangle-down"``, ``"triangle-right"``, or
- ``"triangle-left"``.
- * the line symbol ``"stroke"``
- * centered directional shapes ``"arrow"``, ``"wedge"``, or ``"triangle"``
- * a custom `SVG path string
- `__ (For correct
- sizing, custom shape paths should be defined within a square bounding box with
- coordinates ranging from -1 to 1 along both the x and y dimensions.)
-
- **Default value:** ``"circle"``
- size : float
- Default size for marks.
-
-
- * For ``point`` / ``circle`` / ``square``, this represents the pixel area of the
- marks. For example: in the case of circles, the radius is determined in part by
- the square root of the size value.
- * For ``bar``, this represents the band size of the bar, in pixels.
- * For ``text``, this represents the font size, in pixels.
-
- **Default value:** ``30`` for point, circle, square marks; ``rangeStep`` - 1 for bar
- marks with discrete dimensions; ``5`` for bar marks with continuous dimensions;
- ``11`` for text marks.
- stroke : :class:`Color`
- Default Stroke Color. This has higher precedence than ``config.color``
-
- **Default value:** (None)
- strokeCap : :class:`StrokeCap`
- The stroke cap for line ending style. One of ``"butt"``, ``"round"``, or
- ``"square"``.
-
- **Default value:** ``"square"``
- strokeDash : List(float)
- An array of alternating stroke, space lengths for creating dashed or dotted lines.
- strokeDashOffset : float
- The offset (in pixels) into which to begin drawing with the stroke dash array.
- strokeJoin : :class:`StrokeJoin`
- The stroke line join method. One of ``"miter"``, ``"round"`` or ``"bevel"``.
-
- **Default value:** ``"miter"``
- strokeMiterLimit : float
- The miter limit at which to bevel a line join.
- strokeOpacity : float
- The stroke opacity (value between [0,1]).
-
- **Default value:** ``1``
- strokeWidth : float
- The stroke width, in pixels.
- tension : float
- Depending on the interpolation type, sets the tension parameter (for line and area
- marks).
- text : string
- Placeholder text if the ``text`` channel is not specified
- theta : float
- Polar coordinate angle, in radians, of the text label from the origin determined by
- the ``x`` and ``y`` properties. Values for ``theta`` follow the same convention of
- ``arc`` mark ``startAngle`` and ``endAngle`` properties: angles are measured in
- radians, with ``0`` indicating "north".
- tooltip : anyOf(:class:`Value`, :class:`TooltipContent`, None)
- The tooltip text string to show upon mouse hover or an object defining which fields
- should the tooltip be derived from.
-
-
- * If ``tooltip`` is ``{"content": "encoding"}``, then all fields from ``encoding``
- will be used.
- * If ``tooltip`` is ``{"content": "data"}``, then all fields that appear in the
- highlighted data point will be used.
- * If set to ``null``, then no tooltip will be used.
- width : float
- Width of the marks.
- x : anyOf(float, enum('width'))
- X coordinates of the marks, or width of horizontal ``"bar"`` and ``"area"`` without
- specified ``x2`` or ``width``.
-
- The ``value`` of this channel can be a number or a string ``"width"`` for the width
- of the plot.
- x2 : anyOf(float, enum('width'))
- X2 coordinates for ranged ``"area"``, ``"bar"``, ``"rect"``, and ``"rule"``.
-
- The ``value`` of this channel can be a number or a string ``"width"`` for the width
- of the plot.
- y : anyOf(float, enum('height'))
- Y coordinates of the marks, or height of vertical ``"bar"`` and ``"area"`` without
- specified ``y2`` or ``height``.
-
- The ``value`` of this channel can be a number or a string ``"height"`` for the
- height of the plot.
- y2 : anyOf(float, enum('height'))
- Y2 coordinates for ranged ``"area"``, ``"bar"``, ``"rect"``, and ``"rule"``.
-
- The ``value`` of this channel can be a number or a string ``"height"`` for the
- height of the plot.
- """
- _schema = {'$ref': '#/definitions/AreaConfig'}
-
- def __init__(self, align=Undefined, angle=Undefined, baseline=Undefined, color=Undefined,
- cornerRadius=Undefined, cursor=Undefined, dir=Undefined, dx=Undefined, dy=Undefined,
- ellipsis=Undefined, fill=Undefined, fillOpacity=Undefined, filled=Undefined,
- font=Undefined, fontSize=Undefined, fontStyle=Undefined, fontWeight=Undefined,
- height=Undefined, href=Undefined, interpolate=Undefined, limit=Undefined,
- line=Undefined, opacity=Undefined, order=Undefined, orient=Undefined, point=Undefined,
- radius=Undefined, shape=Undefined, size=Undefined, stroke=Undefined,
- strokeCap=Undefined, strokeDash=Undefined, strokeDashOffset=Undefined,
- strokeJoin=Undefined, strokeMiterLimit=Undefined, strokeOpacity=Undefined,
- strokeWidth=Undefined, tension=Undefined, text=Undefined, theta=Undefined,
- tooltip=Undefined, width=Undefined, x=Undefined, x2=Undefined, y=Undefined,
- y2=Undefined, **kwds):
- super(AreaConfig, self).__init__(align=align, angle=angle, baseline=baseline, color=color,
- cornerRadius=cornerRadius, cursor=cursor, dir=dir, dx=dx,
- dy=dy, ellipsis=ellipsis, fill=fill, fillOpacity=fillOpacity,
- filled=filled, font=font, fontSize=fontSize,
- fontStyle=fontStyle, fontWeight=fontWeight, height=height,
- href=href, interpolate=interpolate, limit=limit, line=line,
- opacity=opacity, order=order, orient=orient, point=point,
- radius=radius, shape=shape, size=size, stroke=stroke,
- strokeCap=strokeCap, strokeDash=strokeDash,
- strokeDashOffset=strokeDashOffset, strokeJoin=strokeJoin,
- strokeMiterLimit=strokeMiterLimit, strokeOpacity=strokeOpacity,
- strokeWidth=strokeWidth, tension=tension, text=text,
- theta=theta, tooltip=tooltip, width=width, x=x, x2=x2, y=y,
- y2=y2, **kwds)
-
-
-class ArgmaxDef(Aggregate):
- """ArgmaxDef schema wrapper
-
- Mapping(required=[argmax])
-
- Attributes
- ----------
-
- argmax : string
-
- """
- _schema = {'$ref': '#/definitions/ArgmaxDef'}
-
- def __init__(self, argmax=Undefined, **kwds):
- super(ArgmaxDef, self).__init__(argmax=argmax, **kwds)
-
-
-class ArgminDef(Aggregate):
- """ArgminDef schema wrapper
-
- Mapping(required=[argmin])
-
- Attributes
- ----------
-
- argmin : string
-
- """
- _schema = {'$ref': '#/definitions/ArgminDef'}
-
- def __init__(self, argmin=Undefined, **kwds):
- super(ArgminDef, self).__init__(argmin=argmin, **kwds)
-
-
-class AutoSizeParams(VegaLiteSchema):
- """AutoSizeParams schema wrapper
-
- Mapping(required=[])
-
- Attributes
- ----------
-
- contains : enum('content', 'padding')
- Determines how size calculation should be performed, one of ``"content"`` or
- ``"padding"``. The default setting ( ``"content"`` ) interprets the width and height
- settings as the data rectangle (plotting) dimensions, to which padding is then
- added. In contrast, the ``"padding"`` setting includes the padding within the view
- size calculations, such that the width and height settings indicate the **total**
- intended size of the view.
-
- **Default value** : ``"content"``
- resize : boolean
- A boolean flag indicating if autosize layout should be re-calculated on every view
- update.
-
- **Default value** : ``false``
- type : :class:`AutosizeType`
- The sizing format type. One of ``"pad"``, ``"fit"`` or ``"none"``. See the `autosize
- type `__ documentation for
- descriptions of each.
-
- **Default value** : ``"pad"``
- """
- _schema = {'$ref': '#/definitions/AutoSizeParams'}
-
- def __init__(self, contains=Undefined, resize=Undefined, type=Undefined, **kwds):
- super(AutoSizeParams, self).__init__(contains=contains, resize=resize, type=type, **kwds)
-
-
-class AutosizeType(VegaLiteSchema):
- """AutosizeType schema wrapper
-
- enum('pad', 'fit', 'none')
- """
- _schema = {'$ref': '#/definitions/AutosizeType'}
-
- def __init__(self, *args):
- super(AutosizeType, self).__init__(*args)
-
-
-class Axis(VegaLiteSchema):
- """Axis schema wrapper
-
- Mapping(required=[])
-
- Attributes
- ----------
-
- bandPosition : float
- An interpolation fraction indicating where, for ``band`` scales, axis ticks should
- be positioned. A value of ``0`` places ticks at the left edge of their bands. A
- value of ``0.5`` places ticks in the middle of their bands.
-
- **Default value:** ``0.5``
- domain : boolean
- A boolean flag indicating if the domain (the axis baseline) should be included as
- part of the axis.
-
- **Default value:** ``true``
- domainColor : :class:`Color`
- Color of axis domain line.
-
- **Default value:** ``"gray"``.
- domainDash : List(float)
- An array of alternating [stroke, space] lengths for dashed domain lines.
- domainDashOffset : float
- The pixel offset at which to start drawing with the domain dash array.
- domainOpacity : float
- Opacity of the axis domain line.
- domainWidth : float
- Stroke width of axis domain line
-
- **Default value:** ``1``
- format : string
- The text formatting pattern for labels of guides (axes, legends, headers) and text
- marks.
-
-
- * If the format type is ``"number"`` (e.g., for quantitative fields), this is D3's
- `number format pattern `__.
- * If the format type is ``"time"`` (e.g., for temporal fields), this is D3's `time
- format pattern `__.
-
- See the `format documentation `__
- for more examples.
-
- **Default value:** Derived from `numberFormat
- `__ config for number
- format and from `timeFormat
- `__ config for time
- format.
- formatType : enum('number', 'time')
- The format type for labels ( ``"number"`` or ``"time"`` ).
-
- **Default value:**
-
-
- * ``"time"`` for temporal fields and ordinal and nomimal fields with ``timeUnit``.
- * ``"number"`` for quantitative fields as well as ordinal and nomimal fields without
- ``timeUnit``.
- grid : boolean
- A boolean flag indicating if grid lines should be included as part of the axis
-
- **Default value:** ``true`` for `continuous scales
- `__ that are not
- binned; otherwise, ``false``.
- gridColor : :class:`Color`
- Color of gridlines.
-
- **Default value:** ``"lightGray"``.
- gridDash : List(float)
- An array of alternating [stroke, space] lengths for dashed grid lines.
- gridDashOffset : float
- The pixel offset at which to start drawing with the grid dash array.
- gridOpacity : float
- The stroke opacity of grid (value between [0,1])
-
- **Default value:** ``1``
- gridWidth : float
- The grid width, in pixels.
-
- **Default value:** ``1``
- labelAlign : :class:`Align`
- Horizontal text alignment of axis tick labels, overriding the default setting for
- the current axis orientation.
- labelAngle : float
- The rotation angle of the axis labels.
-
- **Default value:** ``-90`` for nominal and ordinal fields; ``0`` otherwise.
- labelBaseline : :class:`TextBaseline`
- Vertical text baseline of axis tick labels, overriding the default setting for the
- current axis orientation. Can be ``"top"``, ``"middle"``, ``"bottom"``, or
- ``"alphabetic"``.
- labelBound : anyOf(float, boolean)
- Indicates if labels should be hidden if they exceed the axis range. If ``false``
- (the default) no bounds overlap analysis is performed. If ``true``, labels will be
- hidden if they exceed the axis range by more than 1 pixel. If this property is a
- number, it specifies the pixel tolerance: the maximum amount by which a label
- bounding box may exceed the axis range.
-
- **Default value:** ``false``.
- labelColor : :class:`Color`
- The color of the tick label, can be in hex color code or regular color name.
- labelFlush : anyOf(boolean, float)
- Indicates if the first and last axis labels should be aligned flush with the scale
- range. Flush alignment for a horizontal axis will left-align the first label and
- right-align the last label. For vertical axes, bottom and top text baselines are
- applied instead. If this property is a number, it also indicates the number of
- pixels by which to offset the first and last labels; for example, a value of 2 will
- flush-align the first and last labels and also push them 2 pixels outward from the
- center of the axis. The additional adjustment can sometimes help the labels better
- visually group with corresponding axis ticks.
-
- **Default value:** ``true`` for axis of a continuous x-scale. Otherwise, ``false``.
- labelFlushOffset : float
- Indicates the number of pixels by which to offset flush-adjusted labels. For
- example, a value of ``2`` will push flush-adjusted labels 2 pixels outward from the
- center of the axis. Offsets can help the labels better visually group with
- corresponding axis ticks.
-
- **Default value:** ``0``.
- labelFont : string
- The font of the tick label.
- labelFontSize : float
- The font size of the label, in pixels.
- labelFontStyle : :class:`FontStyle`
- Font style of the tick labels.
- labelFontWeight : :class:`FontWeight`
- Font weight of axis tick labels.
- labelLimit : float
- Maximum allowed pixel width of axis tick labels.
-
- **Default value:** ``180``
- labelOpacity : float
- The opacity of the labels.
- labelOverlap : :class:`LabelOverlap`
- The strategy to use for resolving overlap of axis labels. If ``false`` (the
- default), no overlap reduction is attempted. If set to ``true`` or ``"parity"``, a
- strategy of removing every other label is used (this works well for standard linear
- axes). If set to ``"greedy"``, a linear scan of the labels is performed, removing
- any labels that overlap with the last visible label (this often works better for
- log-scaled axes).
-
- **Default value:** ``true`` for non-nominal fields with non-log scales; ``"greedy"``
- for log scales; otherwise ``false``.
- labelPadding : float
- The padding, in pixels, between axis and text labels.
-
- **Default value:** ``2``
- labelSeparation : float
- The minimum separation that must be between label bounding boxes for them to be
- considered non-overlapping (default ``0`` ). This property is ignored if
- *labelOverlap* resolution is not enabled.
- labels : boolean
- A boolean flag indicating if labels should be included as part of the axis.
-
- **Default value:** ``true``.
- maxExtent : float
- The maximum extent in pixels that axis ticks and labels should use. This determines
- a maximum offset value for axis titles.
-
- **Default value:** ``undefined``.
- minExtent : float
- The minimum extent in pixels that axis ticks and labels should use. This determines
- a minimum offset value for axis titles.
-
- **Default value:** ``30`` for y-axis; ``undefined`` for x-axis.
- offset : float
- The offset, in pixels, by which to displace the axis from the edge of the enclosing
- group or data rectangle.
-
- **Default value:** derived from the `axis config
- `__ 's
- ``offset`` ( ``0`` by default)
- orient : :class:`AxisOrient`
- The orientation of the axis. One of ``"top"``, ``"bottom"``, ``"left"`` or
- ``"right"``. The orientation can be used to further specialize the axis type (e.g.,
- a y-axis oriented towards the right edge of the chart).
-
- **Default value:** ``"bottom"`` for x-axes and ``"left"`` for y-axes.
- position : float
- The anchor position of the axis in pixels. For x-axes with top or bottom
- orientation, this sets the axis group x coordinate. For y-axes with left or right
- orientation, this sets the axis group y coordinate.
-
- **Default value** : ``0``
- tickColor : :class:`Color`
- The color of the axis's tick.
-
- **Default value:** ``"gray"``
- tickCount : float
- A desired number of ticks, for axes visualizing quantitative scales. The resulting
- number may be different so that values are "nice" (multiples of 2, 5, 10) and lie
- within the underlying scale's range.
- tickDash : List(float)
- An array of alternating [stroke, space] lengths for dashed tick mark lines.
- tickDashOffset : float
- The pixel offset at which to start drawing with the tick mark dash array.
- tickExtra : boolean
- Boolean flag indicating if an extra axis tick should be added for the initial
- position of the axis. This flag is useful for styling axes for ``band`` scales such
- that ticks are placed on band boundaries rather than in the middle of a band. Use in
- conjunction with ``"bandPosition": 1`` and an axis ``"padding"`` value of ``0``.
- tickMinStep : float
- The minimum desired step between axis ticks, in terms of scale domain values. For
- example, a value of ``1`` indicates that ticks should not be less than 1 unit apart.
- If ``tickMinStep`` is specified, the ``tickCount`` value will be adjusted, if
- necessary, to enforce the minimum step value.
-
- **Default value** : ``undefined``
- tickOffset : float
- Position offset in pixels to apply to ticks, labels, and gridlines.
- tickOpacity : float
- Opacity of the ticks.
- tickRound : boolean
- Boolean flag indicating if pixel position values should be rounded to the nearest
- integer.
-
- **Default value:** ``true``
- tickSize : float
- The size in pixels of axis ticks.
-
- **Default value:** ``5``
- tickWidth : float
- The width, in pixels, of ticks.
-
- **Default value:** ``1``
- ticks : boolean
- Boolean value that determines whether the axis should include ticks.
-
- **Default value:** ``true``
- title : anyOf(string, None)
- A title for the field. If ``null``, the title will be removed.
-
- **Default value:** derived from the field's name and transformation function (
- ``aggregate``, ``bin`` and ``timeUnit`` ). If the field has an aggregate function,
- the function is displayed as part of the title (e.g., ``"Sum of Profit"`` ). If the
- field is binned or has a time unit applied, the applied function is shown in
- parentheses (e.g., ``"Profit (binned)"``, ``"Transaction Date (year-month)"`` ).
- Otherwise, the title is simply the field name.
-
- **Notes** :
-
- 1) You can customize the default field title format by providing the `fieldTitle
- `__ property in
- the `config `__ or `fieldTitle
- function via the compile function's options
- `__.
-
- 2) If both field definition's ``title`` and axis, header, or legend ``title`` are
- defined, axis/header/legend title will be used.
- titleAlign : :class:`Align`
- Horizontal text alignment of axis titles.
- titleAnchor : :class:`TitleAnchor`
- Text anchor position for placing axis titles.
- titleAngle : float
- Angle in degrees of axis titles.
- titleBaseline : :class:`TextBaseline`
- Vertical text baseline for axis titles.
- titleColor : :class:`Color`
- Color of the title, can be in hex color code or regular color name.
- titleFont : string
- Font of the title. (e.g., ``"Helvetica Neue"`` ).
- titleFontSize : float
- Font size of the title.
- titleFontStyle : :class:`FontStyle`
- Font style of the title.
- titleFontWeight : :class:`FontWeight`
- Font weight of the title.
- This can be either a string (e.g ``"bold"``, ``"normal"`` ) or a number ( ``100``,
- ``200``, ``300``, ..., ``900`` where ``"normal"`` = ``400`` and ``"bold"`` = ``700``
- ).
- titleLimit : float
- Maximum allowed pixel width of axis titles.
- titleOpacity : float
- Opacity of the axis title.
- titlePadding : float
- The padding, in pixels, between title and axis.
- titleX : float
- X-coordinate of the axis title relative to the axis group.
- titleY : float
- Y-coordinate of the axis title relative to the axis group.
- values : anyOf(List(float), List(string), List(boolean), List(:class:`DateTime`))
- Explicitly set the visible axis tick values.
- zindex : float
- A non-negative integer indicating the z-index of the axis.
- If zindex is 0, axes should be drawn behind all chart elements.
- To put them in front, use ``"zindex = 1"``.
-
- **Default value:** ``1`` (in front of the marks) for actual axis and ``0`` (behind
- the marks) for grids.
- """
- _schema = {'$ref': '#/definitions/Axis'}
-
- def __init__(self, bandPosition=Undefined, domain=Undefined, domainColor=Undefined,
- domainDash=Undefined, domainDashOffset=Undefined, domainOpacity=Undefined,
- domainWidth=Undefined, format=Undefined, formatType=Undefined, grid=Undefined,
- gridColor=Undefined, gridDash=Undefined, gridDashOffset=Undefined,
- gridOpacity=Undefined, gridWidth=Undefined, labelAlign=Undefined, labelAngle=Undefined,
- labelBaseline=Undefined, labelBound=Undefined, labelColor=Undefined,
- labelFlush=Undefined, labelFlushOffset=Undefined, labelFont=Undefined,
- labelFontSize=Undefined, labelFontStyle=Undefined, labelFontWeight=Undefined,
- labelLimit=Undefined, labelOpacity=Undefined, labelOverlap=Undefined,
- labelPadding=Undefined, labelSeparation=Undefined, labels=Undefined,
- maxExtent=Undefined, minExtent=Undefined, offset=Undefined, orient=Undefined,
- position=Undefined, tickColor=Undefined, tickCount=Undefined, tickDash=Undefined,
- tickDashOffset=Undefined, tickExtra=Undefined, tickMinStep=Undefined,
- tickOffset=Undefined, tickOpacity=Undefined, tickRound=Undefined, tickSize=Undefined,
- tickWidth=Undefined, ticks=Undefined, title=Undefined, titleAlign=Undefined,
- titleAnchor=Undefined, titleAngle=Undefined, titleBaseline=Undefined,
- titleColor=Undefined, titleFont=Undefined, titleFontSize=Undefined,
- titleFontStyle=Undefined, titleFontWeight=Undefined, titleLimit=Undefined,
- titleOpacity=Undefined, titlePadding=Undefined, titleX=Undefined, titleY=Undefined,
- values=Undefined, zindex=Undefined, **kwds):
- super(Axis, self).__init__(bandPosition=bandPosition, domain=domain, domainColor=domainColor,
- domainDash=domainDash, domainDashOffset=domainDashOffset,
- domainOpacity=domainOpacity, domainWidth=domainWidth, format=format,
- formatType=formatType, grid=grid, gridColor=gridColor,
- gridDash=gridDash, gridDashOffset=gridDashOffset,
- gridOpacity=gridOpacity, gridWidth=gridWidth, labelAlign=labelAlign,
- labelAngle=labelAngle, labelBaseline=labelBaseline,
- labelBound=labelBound, labelColor=labelColor, labelFlush=labelFlush,
- labelFlushOffset=labelFlushOffset, labelFont=labelFont,
- labelFontSize=labelFontSize, labelFontStyle=labelFontStyle,
- labelFontWeight=labelFontWeight, labelLimit=labelLimit,
- labelOpacity=labelOpacity, labelOverlap=labelOverlap,
- labelPadding=labelPadding, labelSeparation=labelSeparation,
- labels=labels, maxExtent=maxExtent, minExtent=minExtent,
- offset=offset, orient=orient, position=position, tickColor=tickColor,
- tickCount=tickCount, tickDash=tickDash,
- tickDashOffset=tickDashOffset, tickExtra=tickExtra,
- tickMinStep=tickMinStep, tickOffset=tickOffset,
- tickOpacity=tickOpacity, tickRound=tickRound, tickSize=tickSize,
- tickWidth=tickWidth, ticks=ticks, title=title, titleAlign=titleAlign,
- titleAnchor=titleAnchor, titleAngle=titleAngle,
- titleBaseline=titleBaseline, titleColor=titleColor,
- titleFont=titleFont, titleFontSize=titleFontSize,
- titleFontStyle=titleFontStyle, titleFontWeight=titleFontWeight,
- titleLimit=titleLimit, titleOpacity=titleOpacity,
- titlePadding=titlePadding, titleX=titleX, titleY=titleY,
- values=values, zindex=zindex, **kwds)
-
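-# Illustrative sketch (editorial addition, not part of the original module):
-# schema wrapper classes such as Axis validate their keyword arguments against
-# the bundled Vega-Lite JSON schema and serialize back to plain dictionaries,
-# roughly as follows:
-#
-#   ax = Axis(title="Horsepower", grid=False, tickCount=5)
-#   ax.to_dict()   # -> {'grid': False, 'tickCount': 5, 'title': 'Horsepower'}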
-
-class AxisConfig(VegaLiteSchema):
- """AxisConfig schema wrapper
-
- Mapping(required=[])
-
- Attributes
- ----------
-
- bandPosition : float
- An interpolation fraction indicating where, for ``band`` scales, axis ticks should
- be positioned. A value of ``0`` places ticks at the left edge of their bands. A
- value of ``0.5`` places ticks in the middle of their bands.
-
- **Default value:** ``0.5``
- domain : boolean
- A boolean flag indicating if the domain (the axis baseline) should be included as
- part of the axis.
-
- **Default value:** ``true``
- domainColor : :class:`Color`
- Color of axis domain line.
-
- **Default value:** ``"gray"``.
- domainDash : List(float)
- An array of alternating [stroke, space] lengths for dashed domain lines.
- domainDashOffset : float
- The pixel offset at which to start drawing with the domain dash array.
- domainOpacity : float
- Opacity of the axis domain line.
- domainWidth : float
- Stroke width of axis domain line
-
- **Default value:** ``1``
- grid : boolean
- A boolean flag indicating if grid lines should be included as part of the axis
-
- **Default value:** ``true`` for `continuous scales
- `__ that are not
- binned; otherwise, ``false``.
- gridColor : :class:`Color`
- Color of gridlines.
-
- **Default value:** ``"lightGray"``.
- gridDash : List(float)
- An array of alternating [stroke, space] lengths for dashed grid lines.
- gridDashOffset : float
- The pixel offset at which to start drawing with the grid dash array.
- gridOpacity : float
- The stroke opacity of grid (value between [0,1])
-
- **Default value:** ``1``
- gridWidth : float
- The grid width, in pixels.
-
- **Default value:** ``1``
- labelAlign : :class:`Align`
- Horizontal text alignment of axis tick labels, overriding the default setting for
- the current axis orientation.
- labelAngle : float
- The rotation angle of the axis labels.
-
- **Default value:** ``-90`` for nominal and ordinal fields; ``0`` otherwise.
- labelBaseline : :class:`TextBaseline`
- Vertical text baseline of axis tick labels, overriding the default setting for the
- current axis orientation. Can be ``"top"``, ``"middle"``, ``"bottom"``, or
- ``"alphabetic"``.
- labelBound : anyOf(float, boolean)
- Indicates if labels should be hidden if they exceed the axis range. If ``false``
- (the default) no bounds overlap analysis is performed. If ``true``, labels will be
- hidden if they exceed the axis range by more than 1 pixel. If this property is a
- number, it specifies the pixel tolerance: the maximum amount by which a label
- bounding box may exceed the axis range.
-
- **Default value:** ``false``.
- labelColor : :class:`Color`
- The color of the tick label, can be in hex color code or regular color name.
- labelFlush : anyOf(boolean, float)
- Indicates if the first and last axis labels should be aligned flush with the scale
- range. Flush alignment for a horizontal axis will left-align the first label and
- right-align the last label. For vertical axes, bottom and top text baselines are
- applied instead. If this property is a number, it also indicates the number of
- pixels by which to offset the first and last labels; for example, a value of 2 will
- flush-align the first and last labels and also push them 2 pixels outward from the
- center of the axis. The additional adjustment can sometimes help the labels better
- visually group with corresponding axis ticks.
-
- **Default value:** ``true`` for axis of a continuous x-scale. Otherwise, ``false``.
- labelFlushOffset : float
- Indicates the number of pixels by which to offset flush-adjusted labels. For
- example, a value of ``2`` will push flush-adjusted labels 2 pixels outward from the
- center of the axis. Offsets can help the labels better visually group with
- corresponding axis ticks.
-
- **Default value:** ``0``.
- labelFont : string
- The font of the tick label.
- labelFontSize : float
- The font size of the label, in pixels.
- labelFontStyle : :class:`FontStyle`
- Font style of the tick labels.
- labelFontWeight : :class:`FontWeight`
- Font weight of axis tick labels.
- labelLimit : float
- Maximum allowed pixel width of axis tick labels.
-
- **Default value:** ``180``
- labelOpacity : float
- The opacity of the labels.
- labelOverlap : :class:`LabelOverlap`
- The strategy to use for resolving overlap of axis labels. If ``false`` (the
- default), no overlap reduction is attempted. If set to ``true`` or ``"parity"``, a
- strategy of removing every other label is used (this works well for standard linear
- axes). If set to ``"greedy"``, a linear scan of the labels is performed, removing
- any labels that overlap with the last visible label (this often works better for
- log-scaled axes).
-
- **Default value:** ``true`` for non-nominal fields with non-log scales; ``"greedy"``
- for log scales; otherwise ``false``.
- labelPadding : float
- The padding, in pixels, between axis and text labels.
-
- **Default value:** ``2``
- labelSeparation : float
- The minimum separation that must be between label bounding boxes for them to be
- considered non-overlapping (default ``0`` ). This property is ignored if
- *labelOverlap* resolution is not enabled.
- labels : boolean
- A boolean flag indicating if labels should be included as part of the axis.
-
- **Default value:** ``true``.
- maxExtent : float
- The maximum extent in pixels that axis ticks and labels should use. This determines
- a maximum offset value for axis titles.
-
- **Default value:** ``undefined``.
- minExtent : float
- The minimum extent in pixels that axis ticks and labels should use. This determines
- a minimum offset value for axis titles.
-
- **Default value:** ``30`` for y-axis; ``undefined`` for x-axis.
- orient : :class:`AxisOrient`
- The orientation of the axis. One of ``"top"``, ``"bottom"``, ``"left"`` or
- ``"right"``. The orientation can be used to further specialize the axis type (e.g.,
- a y-axis oriented towards the right edge of the chart).
-
- **Default value:** ``"bottom"`` for x-axes and ``"left"`` for y-axes.
- shortTimeLabels : boolean
- Whether month names and weekday names should be abbreviated.
-
- **Default value:** ``false``
- tickColor : :class:`Color`
- The color of the axis's tick.
-
- **Default value:** ``"gray"``
- tickDash : List(float)
- An array of alternating [stroke, space] lengths for dashed tick mark lines.
- tickDashOffset : float
- The pixel offset at which to start drawing with the tick mark dash array.
- tickExtra : boolean
- Boolean flag indicating if an extra axis tick should be added for the initial
- position of the axis. This flag is useful for styling axes for ``band`` scales such
- that ticks are placed on band boundaries rather than in the middle of a band. Use in
- conjunction with ``"bandPosition": 1`` and an axis ``"padding"`` value of ``0``.
- tickOffset : float
- Position offset in pixels to apply to ticks, labels, and gridlines.
- tickOpacity : float
- Opacity of the ticks.
- tickRound : boolean
- Boolean flag indicating if pixel position values should be rounded to the nearest
- integer.
-
- **Default value:** ``true``
- tickSize : float
- The size in pixels of axis ticks.
-
- **Default value:** ``5``
- tickWidth : float
- The width, in pixels, of ticks.
-
- **Default value:** ``1``
- ticks : boolean
- Boolean value that determines whether the axis should include ticks.
-
- **Default value:** ``true``
- title : None
- Set to null to disable title for the axis, legend, or header.
- titleAlign : :class:`Align`
- Horizontal text alignment of axis titles.
- titleAnchor : :class:`TitleAnchor`
- Text anchor position for placing axis titles.
- titleAngle : float
- Angle in degrees of axis titles.
- titleBaseline : :class:`TextBaseline`
- Vertical text baseline for axis titles.
- titleColor : :class:`Color`
- Color of the title, can be in hex color code or regular color name.
- titleFont : string
- Font of the title. (e.g., ``"Helvetica Neue"`` ).
- titleFontSize : float
- Font size of the title.
- titleFontStyle : :class:`FontStyle`
- Font style of the title.
- titleFontWeight : :class:`FontWeight`
- Font weight of the title.
- This can be either a string (e.g., ``"bold"``, ``"normal"`` ) or a number ( ``100``,
- ``200``, ``300``, ..., ``900`` where ``"normal"`` = ``400`` and ``"bold"`` = ``700``
- ).
- titleLimit : float
- Maximum allowed pixel width of axis titles.
- titleOpacity : float
- Opacity of the axis title.
- titlePadding : float
- The padding, in pixels, between title and axis.
- titleX : float
- X-coordinate of the axis title relative to the axis group.
- titleY : float
- Y-coordinate of the axis title relative to the axis group.
- """
- _schema = {'$ref': '#/definitions/AxisConfig'}
-
- def __init__(self, bandPosition=Undefined, domain=Undefined, domainColor=Undefined,
- domainDash=Undefined, domainDashOffset=Undefined, domainOpacity=Undefined,
- domainWidth=Undefined, grid=Undefined, gridColor=Undefined, gridDash=Undefined,
- gridDashOffset=Undefined, gridOpacity=Undefined, gridWidth=Undefined,
- labelAlign=Undefined, labelAngle=Undefined, labelBaseline=Undefined,
- labelBound=Undefined, labelColor=Undefined, labelFlush=Undefined,
- labelFlushOffset=Undefined, labelFont=Undefined, labelFontSize=Undefined,
- labelFontStyle=Undefined, labelFontWeight=Undefined, labelLimit=Undefined,
- labelOpacity=Undefined, labelOverlap=Undefined, labelPadding=Undefined,
- labelSeparation=Undefined, labels=Undefined, maxExtent=Undefined, minExtent=Undefined,
- orient=Undefined, shortTimeLabels=Undefined, tickColor=Undefined, tickDash=Undefined,
- tickDashOffset=Undefined, tickExtra=Undefined, tickOffset=Undefined,
- tickOpacity=Undefined, tickRound=Undefined, tickSize=Undefined, tickWidth=Undefined,
- ticks=Undefined, title=Undefined, titleAlign=Undefined, titleAnchor=Undefined,
- titleAngle=Undefined, titleBaseline=Undefined, titleColor=Undefined,
- titleFont=Undefined, titleFontSize=Undefined, titleFontStyle=Undefined,
- titleFontWeight=Undefined, titleLimit=Undefined, titleOpacity=Undefined,
- titlePadding=Undefined, titleX=Undefined, titleY=Undefined, **kwds):
- super(AxisConfig, self).__init__(bandPosition=bandPosition, domain=domain,
- domainColor=domainColor, domainDash=domainDash,
- domainDashOffset=domainDashOffset, domainOpacity=domainOpacity,
- domainWidth=domainWidth, grid=grid, gridColor=gridColor,
- gridDash=gridDash, gridDashOffset=gridDashOffset,
- gridOpacity=gridOpacity, gridWidth=gridWidth,
- labelAlign=labelAlign, labelAngle=labelAngle,
- labelBaseline=labelBaseline, labelBound=labelBound,
- labelColor=labelColor, labelFlush=labelFlush,
- labelFlushOffset=labelFlushOffset, labelFont=labelFont,
- labelFontSize=labelFontSize, labelFontStyle=labelFontStyle,
- labelFontWeight=labelFontWeight, labelLimit=labelLimit,
- labelOpacity=labelOpacity, labelOverlap=labelOverlap,
- labelPadding=labelPadding, labelSeparation=labelSeparation,
- labels=labels, maxExtent=maxExtent, minExtent=minExtent,
- orient=orient, shortTimeLabels=shortTimeLabels,
- tickColor=tickColor, tickDash=tickDash,
- tickDashOffset=tickDashOffset, tickExtra=tickExtra,
- tickOffset=tickOffset, tickOpacity=tickOpacity,
- tickRound=tickRound, tickSize=tickSize, tickWidth=tickWidth,
- ticks=ticks, title=title, titleAlign=titleAlign,
- titleAnchor=titleAnchor, titleAngle=titleAngle,
- titleBaseline=titleBaseline, titleColor=titleColor,
- titleFont=titleFont, titleFontSize=titleFontSize,
- titleFontStyle=titleFontStyle, titleFontWeight=titleFontWeight,
- titleLimit=titleLimit, titleOpacity=titleOpacity,
- titlePadding=titlePadding, titleX=titleX, titleY=titleY, **kwds)
-
-
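In Altair, these axis defaults are normally set through the ``configure_axis`` chart method rather than by instantiating ``AxisConfig`` directly. A minimal sketch, assuming a placeholder data frame and field names:

import altair as alt
import pandas as pd

df = pd.DataFrame({'x': range(10), 'y': [v * v for v in range(10)]})  # placeholder data

chart = (
    alt.Chart(df)
    .mark_line()
    .encode(x='x:Q', y='y:Q')
    # keyword arguments are forwarded to the AxisConfig properties documented above
    .configure_axis(labelFontSize=11, labelAngle=0, grid=False, tickSize=5, titlePadding=10)
)
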
-class AxisOrient(VegaLiteSchema):
- """AxisOrient schema wrapper
-
- enum('top', 'bottom', 'left', 'right')
- """
- _schema = {'$ref': '#/definitions/AxisOrient'}
-
- def __init__(self, *args):
- super(AxisOrient, self).__init__(*args)
-
-
-class AxisResolveMap(VegaLiteSchema):
- """AxisResolveMap schema wrapper
-
- Mapping(required=[])
-
- Attributes
- ----------
-
- x : :class:`ResolveMode`
-
- y : :class:`ResolveMode`
-
- """
- _schema = {'$ref': '#/definitions/AxisResolveMap'}
-
- def __init__(self, x=Undefined, y=Undefined, **kwds):
- super(AxisResolveMap, self).__init__(x=x, y=y, **kwds)
-
-
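The ``x``/``y`` entries of this resolve map are usually set with ``resolve_axis`` on a layered or concatenated chart. A sketch, assuming two placeholder layers built from the same data:

import altair as alt
import pandas as pd

df = pd.DataFrame({'x': range(10), 'y': range(10), 'z': [v * 100 for v in range(10)]})  # placeholder data

lines = alt.Chart(df).mark_line().encode(x='x:Q', y='y:Q')
points = alt.Chart(df).mark_point(color='firebrick').encode(x='x:Q', y='z:Q')

# share the x axis between layers, but draw a separate y axis for each
layered = alt.layer(lines, points).resolve_axis(x='shared', y='independent')
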
-class BaseLegendLayout(VegaLiteSchema):
- """BaseLegendLayout schema wrapper
-
- Mapping(required=[])
-
- Attributes
- ----------
-
- anchor : :class:`TitleAnchor`
- The anchor point for legend orient group layout.
- bounds : :class:`LayoutBounds`
- The bounds calculation to use for legend orient group layout.
- center : anyOf(boolean, :class:`SignalRef`)
- A flag to center legends within a shared orient group.
- direction : anyOf(:class:`Orientation`, :class:`SignalRef`)
- The layout direction for legend orient group layout.
- margin : anyOf(float, :class:`SignalRef`)
- The pixel margin between legends within a orient group.
- offset : anyOf(float, :class:`SignalRef`)
- The pixel offset from the chart body for a legend orient group.
- """
- _schema = {'$ref': '#/definitions/BaseLegendLayout'}
-
- def __init__(self, anchor=Undefined, bounds=Undefined, center=Undefined, direction=Undefined,
- margin=Undefined, offset=Undefined, **kwds):
- super(BaseLegendLayout, self).__init__(anchor=anchor, bounds=bounds, center=center,
- direction=direction, margin=margin, offset=offset, **kwds)
-
-
-class BaseMarkConfig(VegaLiteSchema):
- """BaseMarkConfig schema wrapper
-
- Mapping(required=[])
-
- Attributes
- ----------
-
- align : :class:`Align`
- The horizontal alignment of the text. One of ``"left"``, ``"right"``, ``"center"``.
- angle : float
- The rotation angle of the text, in degrees.
- baseline : :class:`TextBaseline`
- The vertical alignment of the text. One of ``"top"``, ``"middle"``, ``"bottom"``.
-
- **Default value:** ``"middle"``
- cornerRadius : float
- The radius in pixels of rounded rectangle corners.
-
- **Default value:** ``0``
- cursor : :class:`Cursor`
- The mouse cursor used over the mark. Any valid `CSS cursor type
- `__ can be used.
- dir : :class:`Dir`
- The direction of the text. One of ``"ltr"`` (left-to-right) or ``"rtl"``
- (right-to-left). This property determines on which side is truncated in response to
- the limit parameter.
-
- **Default value:** ``"ltr"``
- dx : float
- The horizontal offset, in pixels, between the text label and its anchor point. The
- offset is applied after rotation by the *angle* property.
- dy : float
- The vertical offset, in pixels, between the text label and its anchor point. The
- offset is applied after rotation by the *angle* property.
- ellipsis : string
- The ellipsis string for text truncated in response to the limit parameter.
-
- **Default value:** ``"…"``
- fill : :class:`Color`
- Default Fill Color. This has higher precedence than ``config.color``
-
- **Default value:** (None)
- fillOpacity : float
- The fill opacity (value between [0,1]).
-
- **Default value:** ``1``
- font : string
- The typeface to set the text in (e.g., ``"Helvetica Neue"`` ).
- fontSize : float
- The font size, in pixels.
- fontStyle : :class:`FontStyle`
- The font style (e.g., ``"italic"`` ).
- fontWeight : :class:`FontWeight`
- The font weight.
- This can be either a string (e.g., ``"bold"``, ``"normal"`` ) or a number ( ``100``,
- ``200``, ``300``, ..., ``900`` where ``"normal"`` = ``400`` and ``"bold"`` = ``700``
- ).
- height : float
- Height of the marks.
- href : string
- A URL to load upon mouse click. If defined, the mark acts as a hyperlink.
- interpolate : :class:`Interpolate`
- The line interpolation method to use for line and area marks. One of the following:
-
-
- * ``"linear"`` : piecewise linear segments, as in a polyline.
- * ``"linear-closed"`` : close the linear segments to form a polygon.
- * ``"step"`` : alternate between horizontal and vertical segments, as in a step
- function.
- * ``"step-before"`` : alternate between vertical and horizontal segments, as in a
- step function.
- * ``"step-after"`` : alternate between horizontal and vertical segments, as in a
- step function.
- * ``"basis"`` : a B-spline, with control point duplication on the ends.
- * ``"basis-open"`` : an open B-spline; may not intersect the start or end.
- * ``"basis-closed"`` : a closed B-spline, as in a loop.
- * ``"cardinal"`` : a Cardinal spline, with control point duplication on the ends.
- * ``"cardinal-open"`` : an open Cardinal spline; may not intersect the start or end,
- but will intersect other control points.
- * ``"cardinal-closed"`` : a closed Cardinal spline, as in a loop.
- * ``"bundle"`` : equivalent to basis, except the tension parameter is used to
- straighten the spline.
- * ``"monotone"`` : cubic interpolation that preserves monotonicity in y.
- limit : float
- The maximum length of the text mark in pixels. The text value will be automatically
- truncated if the rendered size exceeds the limit.
-
- **Default value:** ``0``, indicating no limit
- opacity : float
- The overall opacity (value between [0,1]).
-
- **Default value:** ``0.7`` for non-aggregate plots with ``point``, ``tick``,
- ``circle``, or ``square`` marks or layered ``bar`` charts and ``1`` otherwise.
- orient : :class:`Orientation`
- The orientation of non-stacked bar, tick, area, and line charts.
- The value is either horizontal (default) or vertical.
-
-
- * For bar, rule and tick, this determines whether the size of the bar and tick
- should be applied to x or y dimension.
- * For area, this property determines the orient property of the Vega output.
- * For line and trail marks, this property determines the sort order of the points in
- the line
- if ``config.sortLineBy`` is not specified.
- For stacked charts, this is always determined by the orientation of the stack;
- therefore an explicitly specified value will be ignored.
- radius : float
- Polar coordinate radial offset, in pixels, of the text label from the origin
- determined by the ``x`` and ``y`` properties.
- shape : string
- Shape of the point marks. Supported values include:
-
-
- * plotting shapes: ``"circle"``, ``"square"``, ``"cross"``, ``"diamond"``,
- ``"triangle-up"``, ``"triangle-down"``, ``"triangle-right"``, or
- ``"triangle-left"``.
- * the line symbol ``"stroke"``
- * centered directional shapes ``"arrow"``, ``"wedge"``, or ``"triangle"``
- * a custom `SVG path string
- `__ (For correct
- sizing, custom shape paths should be defined within a square bounding box with
- coordinates ranging from -1 to 1 along both the x and y dimensions.)
-
- **Default value:** ``"circle"``
- size : float
- The pixel area of each point/circle/square.
- For example: in the case of circles, the radius is determined in part by the square
- root of the size value.
-
- **Default value:** ``30``
- stroke : :class:`Color`
- Default Stroke Color. This has higher precedence than ``config.color``
-
- **Default value:** (None)
- strokeCap : :class:`StrokeCap`
- The stroke cap for line ending style. One of ``"butt"``, ``"round"``, or
- ``"square"``.
-
- **Default value:** ``"square"``
- strokeDash : List(float)
- An array of alternating stroke, space lengths for creating dashed or dotted lines.
- strokeDashOffset : float
- The offset (in pixels) at which to begin drawing the stroke dash array.
- strokeJoin : :class:`StrokeJoin`
- The stroke line join method. One of ``"miter"``, ``"round"`` or ``"bevel"``.
-
- **Default value:** ``"miter"``
- strokeMiterLimit : float
- The miter limit at which to bevel a line join.
- strokeOpacity : float
- The stroke opacity (value between [0,1]).
-
- **Default value:** ``1``
- strokeWidth : float
- The stroke width, in pixels.
- tension : float
- Depending on the interpolation type, sets the tension parameter (for line and area
- marks).
- text : string
- Placeholder text if the ``text`` channel is not specified.
- theta : float
- Polar coordinate angle, in radians, of the text label from the origin determined by
- the ``x`` and ``y`` properties. Values for ``theta`` follow the same convention of
- ``arc`` mark ``startAngle`` and ``endAngle`` properties: angles are measured in
- radians, with ``0`` indicating "north".
- tooltip : Any
- The tooltip text to show upon mouse hover.
- width : float
- Width of the marks.
- x : anyOf(float, enum('width'))
- X coordinates of the marks, or width of horizontal ``"bar"`` and ``"area"`` without
- specified ``x2`` or ``width``.
-
- The ``value`` of this channel can be a number or a string ``"width"`` for the width
- of the plot.
- x2 : anyOf(float, enum('width'))
- X2 coordinates for ranged ``"area"``, ``"bar"``, ``"rect"``, and ``"rule"``.
-
- The ``value`` of this channel can be a number or a string ``"width"`` for the width
- of the plot.
- y : anyOf(float, enum('height'))
- Y coordinates of the marks, or height of vertical ``"bar"`` and ``"area"`` without
- specified ``y2`` or ``height``.
-
- The ``value`` of this channel can be a number or a string ``"height"`` for the
- height of the plot.
- y2 : anyOf(float, enum('height'))
- Y2 coordinates for ranged ``"area"``, ``"bar"``, ``"rect"``, and ``"rule"``.
-
- The ``value`` of this channel can be a number or a string ``"height"`` for the
- height of the plot.
- """
- _schema = {'$ref': '#/definitions/BaseMarkConfig'}
-
- def __init__(self, align=Undefined, angle=Undefined, baseline=Undefined, cornerRadius=Undefined,
- cursor=Undefined, dir=Undefined, dx=Undefined, dy=Undefined, ellipsis=Undefined,
- fill=Undefined, fillOpacity=Undefined, font=Undefined, fontSize=Undefined,
- fontStyle=Undefined, fontWeight=Undefined, height=Undefined, href=Undefined,
- interpolate=Undefined, limit=Undefined, opacity=Undefined, orient=Undefined,
- radius=Undefined, shape=Undefined, size=Undefined, stroke=Undefined,
- strokeCap=Undefined, strokeDash=Undefined, strokeDashOffset=Undefined,
- strokeJoin=Undefined, strokeMiterLimit=Undefined, strokeOpacity=Undefined,
- strokeWidth=Undefined, tension=Undefined, text=Undefined, theta=Undefined,
- tooltip=Undefined, width=Undefined, x=Undefined, x2=Undefined, y=Undefined,
- y2=Undefined, **kwds):
- super(BaseMarkConfig, self).__init__(align=align, angle=angle, baseline=baseline,
- cornerRadius=cornerRadius, cursor=cursor, dir=dir, dx=dx,
- dy=dy, ellipsis=ellipsis, fill=fill,
- fillOpacity=fillOpacity, font=font, fontSize=fontSize,
- fontStyle=fontStyle, fontWeight=fontWeight, height=height,
- href=href, interpolate=interpolate, limit=limit,
- opacity=opacity, orient=orient, radius=radius, shape=shape,
- size=size, stroke=stroke, strokeCap=strokeCap,
- strokeDash=strokeDash, strokeDashOffset=strokeDashOffset,
- strokeJoin=strokeJoin, strokeMiterLimit=strokeMiterLimit,
- strokeOpacity=strokeOpacity, strokeWidth=strokeWidth,
- tension=tension, text=text, theta=theta, tooltip=tooltip,
- width=width, x=x, x2=x2, y=y, y2=y2, **kwds)
-
-
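Most of these base mark properties can be passed straight to the ``mark_*`` methods; the text-specific ones (``align``, ``dx``, ``dy``, ``fontSize``, ...) are commonly used with ``mark_text``. A sketch with placeholder data and field names:

import altair as alt
import pandas as pd

df = pd.DataFrame({'x': range(5), 'y': [2, 5, 3, 8, 6]})  # placeholder data

labels = (
    alt.Chart(df)
    # align/dx/dy/fontSize map onto the BaseMarkConfig properties above
    .mark_text(align='left', dx=5, dy=-5, fontSize=11)
    .encode(x='x:Q', y='y:Q', text='y:Q')
)
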
-class BaseTitleConfig(VegaLiteSchema):
- """BaseTitleConfig schema wrapper
-
- Mapping(required=[])
-
- Attributes
- ----------
-
- align : :class:`Align`
-
- anchor : :class:`TitleAnchor`
- The anchor position for placing the title. One of ``"start"``, ``"middle"``, or
- ``"end"``. For example, with an orientation of top these anchor positions map to a
- left-, center-, or right-aligned title.
- angle : float
- Angle in degrees of title text.
- baseline : :class:`TextBaseline`
- Vertical text baseline for title text. One of ``"top"``, ``"middle"``, ``"bottom"``,
- or ``"alphabetic"``.
- color : :class:`Color`
- Text color for title text.
- dx : float
- Delta offset for title text x-coordinate.
- dy : float
- Delta offset for title text y-coordinate.
- font : string
- Font name for title text.
- fontSize : float
- Font size in pixels for title text.
-
- **Default value:** ``10``.
- fontStyle : :class:`FontStyle`
- Font style for title text.
- fontWeight : :class:`FontWeight`
- Font weight for title text.
- This can be either a string (e.g., ``"bold"``, ``"normal"`` ) or a number ( ``100``,
- ``200``, ``300``, ..., ``900`` where ``"normal"`` = ``400`` and ``"bold"`` = ``700``
- ).
- frame : :class:`TitleFrame`
- The reference frame for the anchor position, one of ``"bounds"`` (to anchor relative
- to the full bounding box) or ``"group"`` (to anchor relative to the group width or
- height).
- limit : float
- The maximum allowed length in pixels of legend labels.
- offset : float
- The orthogonal offset in pixels by which to displace the title from its position
- along the edge of the chart.
- orient : :class:`TitleOrient`
- Default title orientation ( ``"top"``, ``"bottom"``, ``"left"``, or ``"right"`` )
- """
- _schema = {'$ref': '#/definitions/BaseTitleConfig'}
-
- def __init__(self, align=Undefined, anchor=Undefined, angle=Undefined, baseline=Undefined,
- color=Undefined, dx=Undefined, dy=Undefined, font=Undefined, fontSize=Undefined,
- fontStyle=Undefined, fontWeight=Undefined, frame=Undefined, limit=Undefined,
- offset=Undefined, orient=Undefined, **kwds):
- super(BaseTitleConfig, self).__init__(align=align, anchor=anchor, angle=angle,
- baseline=baseline, color=color, dx=dx, dy=dy, font=font,
- fontSize=fontSize, fontStyle=fontStyle,
- fontWeight=fontWeight, frame=frame, limit=limit,
- offset=offset, orient=orient, **kwds)
-
-
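Chart-wide title defaults corresponding to these properties are typically set with ``configure_title``. A minimal sketch with placeholder data:

import altair as alt
import pandas as pd

df = pd.DataFrame({'x': list('abcde'), 'y': [2, 5, 3, 8, 6]})  # placeholder data

chart = (
    alt.Chart(df)
    .mark_bar()
    .encode(x='x:O', y='y:Q')
    .properties(title='Placeholder title')
    # fontSize/anchor/color map onto the BaseTitleConfig properties above
    .configure_title(fontSize=16, anchor='start', color='gray')
)
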
-class BinParams(VegaLiteSchema):
- """BinParams schema wrapper
-
- Mapping(required=[])
- Binning properties or boolean flag for determining whether to bin data or not.
-
- Attributes
- ----------
-
- anchor : float
- A value in the binned domain at which to anchor the bins, shifting the bin
- boundaries if necessary to ensure that a boundary aligns with the anchor value.
-
- **Default Value:** the minimum bin extent value
- base : float
- The number base to use for automatic bin determination (default is base 10).
-
- **Default value:** ``10``
- binned : boolean
- When set to true, Vega-Lite treats the input data as already binned.
- divide : List(float)
- Scale factors indicating allowable subdivisions. The default value is [5, 2], which
- indicates that for base 10 numbers (the default base), the method may consider
- dividing bin sizes by 5 and/or 2. For example, for an initial step size of 10, the
- method can check if bin sizes of 2 (= 10/5), 5 (= 10/2), or 1 (= 10/(5*2)) might
- also satisfy the given constraints.
-
- **Default value:** ``[5, 2]``
- extent : List(float)
- A two-element ( ``[min, max]`` ) array indicating the range of desired bin values.
- maxbins : float
- Maximum number of bins.
-
- **Default value:** ``6`` for ``row``, ``column`` and ``shape`` channels; ``10`` for
- other channels
- minstep : float
- A minimum allowable step size (particularly useful for integer values).
- nice : boolean
- If true (the default), attempts to make the bin boundaries use human-friendly
- boundaries, such as multiples of ten.
- step : float
- An exact step size to use between bins.
-
- **Note:** If provided, options such as maxbins will be ignored.
- steps : List(float)
- An array of allowable step sizes to choose from.
- """
- _schema = {'$ref': '#/definitions/BinParams'}
-
- def __init__(self, anchor=Undefined, base=Undefined, binned=Undefined, divide=Undefined,
- extent=Undefined, maxbins=Undefined, minstep=Undefined, nice=Undefined, step=Undefined,
- steps=Undefined, **kwds):
- super(BinParams, self).__init__(anchor=anchor, base=base, binned=binned, divide=divide,
- extent=extent, maxbins=maxbins, minstep=minstep, nice=nice,
- step=step, steps=steps, **kwds)
-
-
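These binning parameters are usually supplied through ``alt.Bin`` inside an encoding rather than as a bare mapping. A sketch with a placeholder quantitative field:

import altair as alt
import pandas as pd

df = pd.DataFrame({'value': [1, 2, 2, 3, 5, 8, 13, 21, 34, 55]})  # placeholder data

hist = (
    alt.Chart(df)
    .mark_bar()
    .encode(
        x=alt.X('value:Q', bin=alt.Bin(maxbins=20, nice=True)),  # BinParams via alt.Bin
        y='count()',
    )
)
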
-class Binding(VegaLiteSchema):
- """Binding schema wrapper
-
- anyOf(:class:`BindCheckbox`, :class:`BindRadioSelect`, :class:`BindRange`,
- :class:`InputBinding`)
- """
- _schema = {'$ref': '#/definitions/Binding'}
-
- def __init__(self, *args, **kwds):
- super(Binding, self).__init__(*args, **kwds)
-
-
-class BindCheckbox(Binding):
- """BindCheckbox schema wrapper
-
- Mapping(required=[input])
-
- Attributes
- ----------
-
- input : enum('checkbox')
-
- debounce : float
-
- element : :class:`Element`
-
- name : string
-
- type : string
-
- """
- _schema = {'$ref': '#/definitions/BindCheckbox'}
-
- def __init__(self, input=Undefined, debounce=Undefined, element=Undefined, name=Undefined,
- type=Undefined, **kwds):
- super(BindCheckbox, self).__init__(input=input, debounce=debounce, element=element, name=name,
- type=type, **kwds)
-
-
-class BindRadioSelect(Binding):
- """BindRadioSelect schema wrapper
-
- Mapping(required=[input, options])
-
- Attributes
- ----------
-
- input : enum('radio', 'select')
-
- options : List(Any)
-
- debounce : float
-
- element : :class:`Element`
-
- name : string
-
- type : string
-
- """
- _schema = {'$ref': '#/definitions/BindRadioSelect'}
-
- def __init__(self, input=Undefined, options=Undefined, debounce=Undefined, element=Undefined,
- name=Undefined, type=Undefined, **kwds):
- super(BindRadioSelect, self).__init__(input=input, options=options, debounce=debounce,
- element=element, name=name, type=type, **kwds)
-
-
-class BindRange(Binding):
- """BindRange schema wrapper
-
- Mapping(required=[input])
-
- Attributes
- ----------
-
- input : enum('range')
-
- debounce : float
-
- element : :class:`Element`
-
- max : float
-
- min : float
-
- name : string
-
- step : float
-
- type : string
-
- """
- _schema = {'$ref': '#/definitions/BindRange'}
-
- def __init__(self, input=Undefined, debounce=Undefined, element=Undefined, max=Undefined,
- min=Undefined, name=Undefined, step=Undefined, type=Undefined, **kwds):
- super(BindRange, self).__init__(input=input, debounce=debounce, element=element, max=max,
- min=min, name=name, step=step, type=type, **kwds)
-
-
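The binding classes above back the ``alt.binding_checkbox``, ``alt.binding_radio`` / ``alt.binding_select`` and ``alt.binding_range`` helpers, which are normally attached to a selection. A sketch, assuming a hypothetical ``category`` column:

import altair as alt
import pandas as pd

df = pd.DataFrame({'x': range(6), 'y': [3, 1, 4, 1, 5, 9],
                   'category': ['A', 'B', 'C', 'A', 'B', 'C']})  # placeholder data

dropdown = alt.binding_select(options=['A', 'B', 'C'], name='Category ')
pick = alt.selection_single(fields=['category'], bind=dropdown)

chart = (
    alt.Chart(df)
    .mark_point(size=80)
    .encode(
        x='x:Q',
        y='y:Q',
        color=alt.condition(pick, 'category:N', alt.value('lightgray')),
    )
    .add_selection(pick)
)
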
-class BoxPlotConfig(VegaLiteSchema):
- """BoxPlotConfig schema wrapper
-
- Mapping(required=[])
-
- Attributes
- ----------
-
- box : anyOf(boolean, :class:`MarkConfig`)
-
- extent : anyOf(enum('min-max'), float)
- The extent of the whiskers. Available options include:
-
-
- * ``"min-max"`` : min and max are the lower and upper whiskers respectively.
- * A number representing a multiple of the interquartile range. This number will be
- multiplied by the IQR to determine whisker boundary, which spans from the smallest
- data to the largest data within the range *[Q1 - k * IQR, Q3 + k * IQR]* where
- *Q1* and *Q3* are the first and third quartiles while *IQR* is the interquartile
- range ( *Q3-Q1* ).
-
- **Default value:** ``1.5``.
- median : anyOf(boolean, :class:`MarkConfig`)
-
- outliers : anyOf(boolean, :class:`MarkConfig`)
-
- rule : anyOf(boolean, :class:`MarkConfig`)
-
- size : float
- Size of the box and median tick of a box plot
- ticks : anyOf(boolean, :class:`MarkConfig`)
-
- """
- _schema = {'$ref': '#/definitions/BoxPlotConfig'}
-
- def __init__(self, box=Undefined, extent=Undefined, median=Undefined, outliers=Undefined,
- rule=Undefined, size=Undefined, ticks=Undefined, **kwds):
- super(BoxPlotConfig, self).__init__(box=box, extent=extent, median=median, outliers=outliers,
- rule=rule, size=size, ticks=ticks, **kwds)
-
-
-class BrushConfig(VegaLiteSchema):
- """BrushConfig schema wrapper
-
- Mapping(required=[])
-
- Attributes
- ----------
-
- fill : :class:`Color`
- The fill color of the interval mark.
-
- **Default value:** ``#333333``
- fillOpacity : float
- The fill opacity of the interval mark (a value between 0 and 1).
-
- **Default value:** ``0.125``
- stroke : :class:`Color`
- The stroke color of the interval mark.
-
- **Default value:** ``#ffffff``
- strokeDash : List(float)
- An array of alternating stroke and space lengths,
- for creating dashed or dotted lines.
- strokeDashOffset : float
- The offset (in pixels) at which to begin drawing the stroke dash array.
- strokeOpacity : float
- The stroke opacity of the interval mark (a value between 0 and 1).
- strokeWidth : float
- The stroke width of the interval mark.
- """
- _schema = {'$ref': '#/definitions/BrushConfig'}
-
- def __init__(self, fill=Undefined, fillOpacity=Undefined, stroke=Undefined, strokeDash=Undefined,
- strokeDashOffset=Undefined, strokeOpacity=Undefined, strokeWidth=Undefined, **kwds):
- super(BrushConfig, self).__init__(fill=fill, fillOpacity=fillOpacity, stroke=stroke,
- strokeDash=strokeDash, strokeDashOffset=strokeDashOffset,
- strokeOpacity=strokeOpacity, strokeWidth=strokeWidth, **kwds)
-
-
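These properties style the rectangle drawn by an interval selection; in Altair they can be passed through the ``mark`` argument of ``selection_interval``. A minimal sketch with placeholder data:

import altair as alt
import pandas as pd

df = pd.DataFrame({'x': range(10), 'y': [v % 4 for v in range(10)]})  # placeholder data

brush = alt.selection_interval(
    encodings=['x'],
    mark=alt.BrushConfig(fill='firebrick', fillOpacity=0.2, stroke='firebrick'),
)

chart = alt.Chart(df).mark_point().encode(x='x:Q', y='y:Q').add_selection(brush)
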
-class Color(VegaLiteSchema):
- """Color schema wrapper
-
- anyOf(:class:`ColorName`, :class:`HexColor`, string)
- """
- _schema = {'$ref': '#/definitions/Color'}
-
- def __init__(self, *args, **kwds):
- super(Color, self).__init__(*args, **kwds)
-
-
-class ColorName(Color):
- """ColorName schema wrapper
-
- enum('black', 'silver', 'gray', 'white', 'maroon', 'red', 'purple', 'fuchsia', 'green',
- 'lime', 'olive', 'yellow', 'navy', 'blue', 'teal', 'aqua', 'orange', 'aliceblue',
- 'antiquewhite', 'aquamarine', 'azure', 'beige', 'bisque', 'blanchedalmond', 'blueviolet',
- 'brown', 'burlywood', 'cadetblue', 'chartreuse', 'chocolate', 'coral', 'cornflowerblue',
- 'cornsilk', 'crimson', 'cyan', 'darkblue', 'darkcyan', 'darkgoldenrod', 'darkgray',
- 'darkgreen', 'darkgrey', 'darkkhaki', 'darkmagenta', 'darkolivegreen', 'darkorange',
- 'darkorchid', 'darkred', 'darksalmon', 'darkseagreen', 'darkslateblue', 'darkslategray',
- 'darkslategrey', 'darkturquoise', 'darkviolet', 'deeppink', 'deepskyblue', 'dimgray',
- 'dimgrey', 'dodgerblue', 'firebrick', 'floralwhite', 'forestgreen', 'gainsboro',
- 'ghostwhite', 'gold', 'goldenrod', 'greenyellow', 'grey', 'honeydew', 'hotpink',
- 'indianred', 'indigo', 'ivory', 'khaki', 'lavender', 'lavenderblush', 'lawngreen',
- 'lemonchiffon', 'lightblue', 'lightcoral', 'lightcyan', 'lightgoldenrodyellow', 'lightgray',
- 'lightgreen', 'lightgrey', 'lightpink', 'lightsalmon', 'lightseagreen', 'lightskyblue',
- 'lightslategray', 'lightslategrey', 'lightsteelblue', 'lightyellow', 'limegreen', 'linen',
- 'magenta', 'mediumaquamarine', 'mediumblue', 'mediumorchid', 'mediumpurple',
- 'mediumseagreen', 'mediumslateblue', 'mediumspringgreen', 'mediumturquoise',
- 'mediumvioletred', 'midnightblue', 'mintcream', 'mistyrose', 'moccasin', 'navajowhite',
- 'oldlace', 'olivedrab', 'orangered', 'orchid', 'palegoldenrod', 'palegreen',
- 'paleturquoise', 'palevioletred', 'papayawhip', 'peachpuff', 'peru', 'pink', 'plum',
- 'powderblue', 'rosybrown', 'royalblue', 'saddlebrown', 'salmon', 'sandybrown', 'seagreen',
- 'seashell', 'sienna', 'skyblue', 'slateblue', 'slategray', 'slategrey', 'snow',
- 'springgreen', 'steelblue', 'tan', 'thistle', 'tomato', 'turquoise', 'violet', 'wheat',
- 'whitesmoke', 'yellowgreen', 'rebeccapurple')
- """
- _schema = {'$ref': '#/definitions/ColorName'}
-
- def __init__(self, *args):
- super(ColorName, self).__init__(*args)
-
-
-class CompositeMark(AnyMark):
- """CompositeMark schema wrapper
-
- anyOf(:class:`BoxPlot`, :class:`ErrorBar`, :class:`ErrorBand`)
- """
- _schema = {'$ref': '#/definitions/CompositeMark'}
-
- def __init__(self, *args, **kwds):
- super(CompositeMark, self).__init__(*args, **kwds)
-
-
-class BoxPlot(CompositeMark):
- """BoxPlot schema wrapper
-
- enum('boxplot')
- """
- _schema = {'$ref': '#/definitions/BoxPlot'}
-
- def __init__(self, *args):
- super(BoxPlot, self).__init__(*args)
-
-
-class CompositeMarkDef(AnyMark):
- """CompositeMarkDef schema wrapper
-
- anyOf(:class:`BoxPlotDef`, :class:`ErrorBarDef`, :class:`ErrorBandDef`)
- """
- _schema = {'$ref': '#/definitions/CompositeMarkDef'}
-
- def __init__(self, *args, **kwds):
- super(CompositeMarkDef, self).__init__(*args, **kwds)
-
-
-class BoxPlotDef(CompositeMarkDef):
- """BoxPlotDef schema wrapper
-
- Mapping(required=[type])
-
- Attributes
- ----------
-
- type : :class:`BoxPlot`
- The mark type. This could be a primitive mark type
- (one of ``"bar"``, ``"circle"``, ``"square"``, ``"tick"``, ``"line"``,
- ``"area"``, ``"point"``, ``"geoshape"``, ``"rule"``, and ``"text"`` )
- or a composite mark type ( ``"boxplot"``, ``"errorband"``, ``"errorbar"`` ).
- box : anyOf(boolean, :class:`MarkConfig`)
-
- clip : boolean
- Whether a composite mark should be clipped to the enclosing group’s width and height.
- color : :class:`Color`
- Default color. Note that ``fill`` and ``stroke`` have higher precedence than
- ``color`` and will override ``color``.
-
- **Default value:** ``"#4682b4"``
-
- **Note:** This property cannot be used in a `style config
- `__.
- extent : anyOf(enum('min-max'), float)
- The extent of the whiskers. Available options include:
-
-
- * ``"min-max"`` : min and max are the lower and upper whiskers respectively.
- * A number representing a multiple of the interquartile range. This number will be
- multiplied by the IQR to determine whisker boundary, which spans from the smallest
- data to the largest data within the range *[Q1 - k * IQR, Q3 + k * IQR]* where
- *Q1* and *Q3* are the first and third quartiles while *IQR* is the interquartile
- range ( *Q3-Q1* ).
-
- **Default value:** ``1.5``.
- median : anyOf(boolean, :class:`MarkConfig`)
-
- opacity : float
- The opacity (value between [0,1]) of the mark.
- orient : :class:`Orientation`
- Orientation of the box plot. This is normally automatically determined based on
- types of fields on x and y channels. However, an explicit ``orient`` can be specified
- when the orientation is ambiguous.
-
- **Default value:** ``"vertical"``.
- outliers : anyOf(boolean, :class:`MarkConfig`)
-
- rule : anyOf(boolean, :class:`MarkConfig`)
-
- size : float
- Size of the box and median tick of a box plot
- ticks : anyOf(boolean, :class:`MarkConfig`)
-
- """
- _schema = {'$ref': '#/definitions/BoxPlotDef'}
-
- def __init__(self, type=Undefined, box=Undefined, clip=Undefined, color=Undefined, extent=Undefined,
- median=Undefined, opacity=Undefined, orient=Undefined, outliers=Undefined,
- rule=Undefined, size=Undefined, ticks=Undefined, **kwds):
- super(BoxPlotDef, self).__init__(type=type, box=box, clip=clip, color=color, extent=extent,
- median=median, opacity=opacity, orient=orient,
- outliers=outliers, rule=rule, size=size, ticks=ticks, **kwds)
-
-
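The box-plot composite mark is exposed as ``mark_boxplot``, and the ``BoxPlotDef`` properties above become its keyword arguments. A sketch with placeholder fields:

import altair as alt
import pandas as pd

df = pd.DataFrame({'group': list('aabbcc'), 'value': [1, 4, 2, 8, 5, 7]})  # placeholder data

box = (
    alt.Chart(df)
    # extent=1.5 draws whiskers at 1.5 * IQR; size controls the box width in pixels
    .mark_boxplot(extent=1.5, size=20, color='teal')
    .encode(x='group:O', y='value:Q')
)
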
-class CompositionConfig(VegaLiteSchema):
- """CompositionConfig schema wrapper
-
- Mapping(required=[])
-
- Attributes
- ----------
-
- columns : float
- The number of columns to include in the view composition layout.
-
- **Default value** : ``undefined`` -- An infinite number of columns (a single row)
- will be assumed. This is equivalent to
- ``hconcat`` (for ``concat`` ) and to using the ``column`` channel (for ``facet`` and
- ``repeat`` ).
-
- **Note** :
-
- 1) This property is only for:
-
-
- * the general (wrappable) ``concat`` operator (not ``hconcat`` / ``vconcat`` )
- * the ``facet`` and ``repeat`` operator with one field/repetition definition
- (without row/column nesting)
-
- 2) Setting the ``columns`` to ``1`` is equivalent to ``vconcat`` (for ``concat`` )
- and to using the ``row`` channel (for ``facet`` and ``repeat`` ).
- spacing : float
- The default spacing in pixels between composed sub-views.
-
- **Default value** : ``20``
- """
- _schema = {'$ref': '#/definitions/CompositionConfig'}
-
- def __init__(self, columns=Undefined, spacing=Undefined, **kwds):
- super(CompositionConfig, self).__init__(columns=columns, spacing=spacing, **kwds)
-
-
-class ConditionalMarkPropFieldDef(VegaLiteSchema):
- """ConditionalMarkPropFieldDef schema wrapper
-
- anyOf(:class:`ConditionalPredicateMarkPropFieldDef`,
- :class:`ConditionalSelectionMarkPropFieldDef`)
- """
- _schema = {'$ref': '#/definitions/ConditionalMarkPropFieldDef'}
-
- def __init__(self, *args, **kwds):
- super(ConditionalMarkPropFieldDef, self).__init__(*args, **kwds)
-
-
-class ConditionalMarkPropFieldDefTypeForShape(VegaLiteSchema):
- """ConditionalMarkPropFieldDefTypeForShape schema wrapper
-
- anyOf(:class:`ConditionalPredicateMarkPropFieldDefTypeForShape`,
- :class:`ConditionalSelectionMarkPropFieldDefTypeForShape`)
- """
- _schema = {'$ref': '#/definitions/ConditionalMarkPropFieldDef'}
-
- def __init__(self, *args, **kwds):
- super(ConditionalMarkPropFieldDefTypeForShape, self).__init__(*args, **kwds)
-
-
-class ConditionalNumberValueDef(VegaLiteSchema):
- """ConditionalNumberValueDef schema wrapper
-
- anyOf(:class:`ConditionalPredicateNumberValueDef`,
- :class:`ConditionalSelectionNumberValueDef`)
- """
- _schema = {'$ref': '#/definitions/ConditionalNumberValueDef'}
-
- def __init__(self, *args, **kwds):
- super(ConditionalNumberValueDef, self).__init__(*args, **kwds)
-
-
-class ConditionalPredicateMarkPropFieldDef(ConditionalMarkPropFieldDef):
- """ConditionalPredicateMarkPropFieldDef schema wrapper
-
- Mapping(required=[test, type])
-
- Attributes
- ----------
-
- test : :class:`LogicalOperandPredicate`
- Predicate for triggering the condition
- type : :class:`StandardType`
- The encoded field's type of measurement ( ``"quantitative"``, ``"temporal"``,
- ``"ordinal"``, or ``"nominal"`` ).
- It can also be a ``"geojson"`` type for encoding `'geoshape'
- `__.
-
- **Note:**
-
-
- * Data values for a temporal field can be either a date-time string (e.g.,
- ``"2015-03-07 12:32:17"``, ``"17:01"``, ``"2015-03-16"``. ``"2015"`` ) or a
- timestamp number (e.g., ``1552199579097`` ).
- * Data ``type`` describes the semantics of the data rather than the primitive data
- types ( ``number``, ``string``, etc.). The same primitive data type can have
- different types of measurement. For example, numeric data can represent
- quantitative, ordinal, or nominal data.
- * When using with `bin `__, the
- ``type`` property can be either ``"quantitative"`` (for using a linear bin scale)
- or `"ordinal" (for using an ordinal bin scale)
- `__.
- * When using with `timeUnit
- `__, the ``type`` property
- can be either ``"temporal"`` (for using a temporal scale) or `"ordinal" (for using
- an ordinal scale) `__.
- * When using with `aggregate
- `__, the ``type`` property
- refers to the post-aggregation data type. For example, we can calculate count
- ``distinct`` of a categorical field ``"cat"`` using ``{"aggregate": "distinct",
- "field": "cat", "type": "quantitative"}``. The ``"type"`` of the aggregate output
- is ``"quantitative"``.
- * Secondary channels (e.g., ``x2``, ``y2``, ``xError``, ``yError`` ) do not have
- ``type`` as they have exactly the same type as their primary channels (e.g.,
- ``x``, ``y`` ).
-
- **See also:** `type `__
- documentation.
- aggregate : :class:`Aggregate`
- Aggregation function for the field
- (e.g., ``mean``, ``sum``, ``median``, ``min``, ``max``, ``count`` ).
-
- **Default value:** ``undefined`` (None)
-
- **See also:** `aggregate `__
- documentation.
- bin : anyOf(boolean, :class:`BinParams`, None)
- A flag for binning a ``quantitative`` field, `an object defining binning parameters
- `__, or indicating that the
- data for ``x`` or ``y`` channel are binned before they are imported into Vega-Lite (
- ``"binned"`` ).
-
-
- If ``true``, default `binning parameters
- `__ will be applied.
-
- If ``"binned"``, this indicates that the data for the ``x`` (or ``y`` ) channel are
- already binned. You can map the bin-start field to ``x`` (or ``y`` ) and the bin-end
- field to ``x2`` (or ``y2`` ). The scale and axis will be formatted similar to
- binning in Vega-lite. To adjust the axis ticks based on the bin step, you can also
- set the axis's `tickMinStep
- `__ property.
-
- **Default value:** ``false``
-
- **See also:** `bin `__
- documentation.
- field : :class:`Field`
- **Required.** A string defining the name of the field from which to pull a data
- value
- or an object defining iterated values from the `repeat
- `__ operator.
-
- **See also:** `field `__
- documentation.
-
- **Notes:**
- 1) Dots ( ``.`` ) and brackets ( ``[`` and ``]`` ) can be used to access nested
- objects (e.g., ``"field": "foo.bar"`` and ``"field": "foo['bar']"`` ).
- If field names contain dots or brackets but are not nested, you can use ``\\`` to
- escape dots and brackets (e.g., ``"a\\.b"`` and ``"a\\[0\\]"`` ).
- See more details about escaping in the `field documentation
- `__.
- 2) ``field`` is not required if ``aggregate`` is ``count``.
- legend : anyOf(:class:`Legend`, None)
- An object defining properties of the legend.
- If ``null``, the legend for the encoding channel will be removed.
-
- **Default value:** If undefined, default `legend properties
- `__ are applied.
-
- **See also:** `legend `__
- documentation.
- scale : anyOf(:class:`Scale`, None)
- An object defining properties of the channel's scale, which is the function that
- transforms values in the data domain (numbers, dates, strings, etc) to visual values
- (pixels, colors, sizes) of the encoding channels.
-
- If ``null``, the scale will be `disabled and the data value will be directly encoded
- `__.
-
- **Default value:** If undefined, default `scale properties
- `__ are applied.
-
- **See also:** `scale `__
- documentation.
- sort : :class:`Sort`
- Sort order for the encoded field.
-
- For continuous fields (quantitative or temporal), ``sort`` can be either
- ``"ascending"`` or ``"descending"``.
-
- For discrete fields, ``sort`` can be one of the following:
-
-
- * ``"ascending"`` or ``"descending"`` -- for sorting by the values' natural order in
- Javascript.
- * `A sort-by-encoding definition
- `__ for sorting
- by another encoding channel. (This type of sort definition is not available for
- ``row`` and ``column`` channels.)
- * `A sort field definition
- `__ for sorting by
- another field.
- * `An array specifying the field values in preferred order
- `__. In this case, the
- sort order will obey the values in the array, followed by any unspecified values
- in their original order. For a discrete time field, values in the sort array can be
- `date-time definition objects `__. In addition, for time units
- ``"month"`` and ``"day"``, the values can be the month or day names (case
- insensitive) or their 3-letter initials (e.g., ``"Mon"``, ``"Tue"`` ).
- * ``null`` indicating no sort.
-
- **Default value:** ``"ascending"``
-
- **Note:** ``null`` is not supported for ``row`` and ``column``.
-
- **See also:** `sort `__
- documentation.
- timeUnit : :class:`TimeUnit`
- Time unit (e.g., ``year``, ``yearmonth``, ``month``, ``hours`` ) for a temporal
- field, or `a temporal field that gets cast as ordinal
- `__.
-
- **Default value:** ``undefined`` (None)
-
- **See also:** `timeUnit `__
- documentation.
- title : anyOf(string, None)
- A title for the field. If ``null``, the title will be removed.
-
- **Default value:** derived from the field's name and transformation function (
- ``aggregate``, ``bin`` and ``timeUnit`` ). If the field has an aggregate function,
- the function is displayed as part of the title (e.g., ``"Sum of Profit"`` ). If the
- field is binned or has a time unit applied, the applied function is shown in
- parentheses (e.g., ``"Profit (binned)"``, ``"Transaction Date (year-month)"`` ).
- Otherwise, the title is simply the field name.
-
- **Notes** :
-
- 1) You can customize the default field title format by providing the `fieldTitle
- `__ property in
- the `config `__ or `fieldTitle
- function via the compile function's options
- `__.
-
- 2) If both field definition's ``title`` and axis, header, or legend ``title`` are
- defined, axis/header/legend title will be used.
- """
- _schema = {'$ref': '#/definitions/ConditionalPredicate'}
-
- def __init__(self, test=Undefined, type=Undefined, aggregate=Undefined, bin=Undefined,
- field=Undefined, legend=Undefined, scale=Undefined, sort=Undefined, timeUnit=Undefined,
- title=Undefined, **kwds):
- super(ConditionalPredicateMarkPropFieldDef, self).__init__(test=test, type=type,
- aggregate=aggregate, bin=bin,
- field=field, legend=legend,
- scale=scale, sort=sort,
- timeUnit=timeUnit, title=title,
- **kwds)
-
-
-class ConditionalPredicateMarkPropFieldDefTypeForShape(ConditionalMarkPropFieldDefTypeForShape):
- """ConditionalPredicateMarkPropFieldDefTypeForShape schema wrapper
-
- Mapping(required=[test, type])
-
- Attributes
- ----------
-
- test : :class:`LogicalOperandPredicate`
- Predicate for triggering the condition
- type : :class:`TypeForShape`
- The encoded field's type of measurement ( ``"quantitative"``, ``"temporal"``,
- ``"ordinal"``, or ``"nominal"`` ).
- It can also be a ``"geojson"`` type for encoding `'geoshape'
- `__.
-
- **Note:**
-
-
- * Data values for a temporal field can be either a date-time string (e.g.,
- ``"2015-03-07 12:32:17"``, ``"17:01"``, ``"2015-03-16"``. ``"2015"`` ) or a
- timestamp number (e.g., ``1552199579097`` ).
- * Data ``type`` describes the semantics of the data rather than the primitive data
- types ( ``number``, ``string``, etc.). The same primitive data type can have
- different types of measurement. For example, numeric data can represent
- quantitative, ordinal, or nominal data.
- * When using with `bin `__, the
- ``type`` property can be either ``"quantitative"`` (for using a linear bin scale)
- or `"ordinal" (for using an ordinal bin scale)
- `__.
- * When using with `timeUnit
- `__, the ``type`` property
- can be either ``"temporal"`` (for using a temporal scale) or `"ordinal" (for using
- an ordinal scale) `__.
- * When using with `aggregate
- `__, the ``type`` property
- refers to the post-aggregation data type. For example, we can calculate count
- ``distinct`` of a categorical field ``"cat"`` using ``{"aggregate": "distinct",
- "field": "cat", "type": "quantitative"}``. The ``"type"`` of the aggregate output
- is ``"quantitative"``.
- * Secondary channels (e.g., ``x2``, ``y2``, ``xError``, ``yError`` ) do not have
- ``type`` as they have exactly the same type as their primary channels (e.g.,
- ``x``, ``y`` ).
-
- **See also:** `type `__
- documentation.
- aggregate : :class:`Aggregate`
- Aggregation function for the field
- (e.g., ``mean``, ``sum``, ``median``, ``min``, ``max``, ``count`` ).
-
- **Default value:** ``undefined`` (None)
-
- **See also:** `aggregate `__
- documentation.
- bin : anyOf(boolean, :class:`BinParams`, None)
- A flag for binning a ``quantitative`` field, `an object defining binning parameters
- `__, or indicating that the
- data for ``x`` or ``y`` channel are binned before they are imported into Vega-Lite (
- ``"binned"`` ).
-
-
- If ``true``, default `binning parameters
- `__ will be applied.
-
- If ``"binned"``, this indicates that the data for the ``x`` (or ``y`` ) channel are
- already binned. You can map the bin-start field to ``x`` (or ``y`` ) and the bin-end
- field to ``x2`` (or ``y2`` ). The scale and axis will be formatted similar to
- binning in Vega-lite. To adjust the axis ticks based on the bin step, you can also
- set the axis's `tickMinStep
- `__ property.
-
- **Default value:** ``false``
-
- **See also:** `bin `__
- documentation.
- field : :class:`Field`
- **Required.** A string defining the name of the field from which to pull a data
- value
- or an object defining iterated values from the `repeat
- `__ operator.
-
- **See also:** `field `__
- documentation.
-
- **Notes:**
- 1) Dots ( ``.`` ) and brackets ( ``[`` and ``]`` ) can be used to access nested
- objects (e.g., ``"field": "foo.bar"`` and ``"field": "foo['bar']"`` ).
- If field names contain dots or brackets but are not nested, you can use ``\\`` to
- escape dots and brackets (e.g., ``"a\\.b"`` and ``"a\\[0\\]"`` ).
- See more details about escaping in the `field documentation
- `__.
- 2) ``field`` is not required if ``aggregate`` is ``count``.
- legend : anyOf(:class:`Legend`, None)
- An object defining properties of the legend.
- If ``null``, the legend for the encoding channel will be removed.
-
- **Default value:** If undefined, default `legend properties
- `__ are applied.
-
- **See also:** `legend `__
- documentation.
- scale : anyOf(:class:`Scale`, None)
- An object defining properties of the channel's scale, which is the function that
- transforms values in the data domain (numbers, dates, strings, etc) to visual values
- (pixels, colors, sizes) of the encoding channels.
-
- If ``null``, the scale will be `disabled and the data value will be directly encoded
- `__.
-
- **Default value:** If undefined, default `scale properties
- `__ are applied.
-
- **See also:** `scale `__
- documentation.
- sort : :class:`Sort`
- Sort order for the encoded field.
-
- For continuous fields (quantitative or temporal), ``sort`` can be either
- ``"ascending"`` or ``"descending"``.
-
- For discrete fields, ``sort`` can be one of the following:
-
-
- * ``"ascending"`` or ``"descending"`` -- for sorting by the values' natural order in
- Javascript.
- * `A sort-by-encoding definition
- `__ for sorting
- by another encoding channel. (This type of sort definition is not available for
- ``row`` and ``column`` channels.)
- * `A sort field definition
- `__ for sorting by
- another field.
- * `An array specifying the field values in preferred order
- `__. In this case, the
- sort order will obey the values in the array, followed by any unspecified values
- in their original order. For a discrete time field, values in the sort array can be
- `date-time definition objects `__. In addition, for time units
- ``"month"`` and ``"day"``, the values can be the month or day names (case
- insensitive) or their 3-letter initials (e.g., ``"Mon"``, ``"Tue"`` ).
- * ``null`` indicating no sort.
-
- **Default value:** ``"ascending"``
-
- **Note:** ``null`` is not supported for ``row`` and ``column``.
-
- **See also:** `sort `__
- documentation.
- timeUnit : :class:`TimeUnit`
- Time unit (e.g., ``year``, ``yearmonth``, ``month``, ``hours`` ) for a temporal
- field, or `a temporal field that gets cast as ordinal
- `__.
-
- **Default value:** ``undefined`` (None)
-
- **See also:** `timeUnit `__
- documentation.
- title : anyOf(string, None)
- A title for the field. If ``null``, the title will be removed.
-
- **Default value:** derived from the field's name and transformation function (
- ``aggregate``, ``bin`` and ``timeUnit`` ). If the field has an aggregate function,
- the function is displayed as part of the title (e.g., ``"Sum of Profit"`` ). If the
- field is binned or has a time unit applied, the applied function is shown in
- parentheses (e.g., ``"Profit (binned)"``, ``"Transaction Date (year-month)"`` ).
- Otherwise, the title is simply the field name.
-
- **Notes** :
-
- 1) You can customize the default field title format by providing the `fieldTitle
- `__ property in
- the `config `__ or `fieldTitle
- function via the compile function's options
- `__.
-
- 2) If both field definition's ``title`` and axis, header, or legend ``title`` are
- defined, axis/header/legend title will be used.
- """
- _schema = {'$ref': '#/definitions/ConditionalPredicate>'}
-
- def __init__(self, test=Undefined, type=Undefined, aggregate=Undefined, bin=Undefined,
- field=Undefined, legend=Undefined, scale=Undefined, sort=Undefined, timeUnit=Undefined,
- title=Undefined, **kwds):
- super(ConditionalPredicateMarkPropFieldDefTypeForShape, self).__init__(test=test, type=type,
- aggregate=aggregate,
- bin=bin, field=field,
- legend=legend,
- scale=scale, sort=sort,
- timeUnit=timeUnit,
- title=title, **kwds)
-
-
-class ConditionalPredicateNumberValueDef(ConditionalNumberValueDef):
- """ConditionalPredicateNumberValueDef schema wrapper
-
- Mapping(required=[test, value])
-
- Attributes
- ----------
-
- test : :class:`LogicalOperandPredicate`
- Predicate for triggering the condition
- value : float
- A constant value in the visual domain (e.g., ``"red"`` / "#0099ff" for color, values
- between ``0`` and ``1`` for opacity).
- """
- _schema = {'$ref': '#/definitions/ConditionalPredicate'}
-
- def __init__(self, test=Undefined, value=Undefined, **kwds):
- super(ConditionalPredicateNumberValueDef, self).__init__(test=test, value=value, **kwds)
-
-
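A predicate-based condition that resolves to a number (an ``opacity`` value, say) is typically written with ``alt.condition`` and ``alt.datum``. A minimal sketch with placeholder data:

import altair as alt
import pandas as pd

df = pd.DataFrame({'x': range(10), 'y': range(0, 100, 10)})  # placeholder data

chart = (
    alt.Chart(df)
    .mark_point(size=80)
    .encode(
        x='x:Q',
        y='y:Q',
        # the test maps to LogicalOperandPredicate; the alt.value calls are number value defs
        opacity=alt.condition(alt.datum.y > 50, alt.value(1.0), alt.value(0.3)),
    )
)
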
-class ConditionalSelectionMarkPropFieldDef(ConditionalMarkPropFieldDef):
- """ConditionalSelectionMarkPropFieldDef schema wrapper
-
- Mapping(required=[selection, type])
-
- Attributes
- ----------
-
- selection : :class:`SelectionOperand`
- A `selection name `__, or a
- series of `composed selections
- `__.
- type : :class:`StandardType`
- The encoded field's type of measurement ( ``"quantitative"``, ``"temporal"``,
- ``"ordinal"``, or ``"nominal"`` ).
- It can also be a ``"geojson"`` type for encoding `'geoshape'
- `__.
-
- **Note:**
-
-
- * Data values for a temporal field can be either a date-time string (e.g.,
- ``"2015-03-07 12:32:17"``, ``"17:01"``, ``"2015-03-16"``. ``"2015"`` ) or a
- timestamp number (e.g., ``1552199579097`` ).
- * Data ``type`` describes the semantics of the data rather than the primitive data
- types ( ``number``, ``string``, etc.). The same primitive data type can have
- different types of measurement. For example, numeric data can represent
- quantitative, ordinal, or nominal data.
- * When using with `bin `__, the
- ``type`` property can be either ``"quantitative"`` (for using a linear bin scale)
- or `"ordinal" (for using an ordinal bin scale)
- `__.
- * When using with `timeUnit
- `__, the ``type`` property
- can be either ``"temporal"`` (for using a temporal scale) or `"ordinal" (for using
- an ordinal scale) `__.
- * When using with `aggregate
- `__, the ``type`` property
- refers to the post-aggregation data type. For example, we can calculate count
- ``distinct`` of a categorical field ``"cat"`` using ``{"aggregate": "distinct",
- "field": "cat", "type": "quantitative"}``. The ``"type"`` of the aggregate output
- is ``"quantitative"``.
- * Secondary channels (e.g., ``x2``, ``y2``, ``xError``, ``yError`` ) do not have
- ``type`` as they have exactly the same type as their primary channels (e.g.,
- ``x``, ``y`` ).
-
- **See also:** `type `__
- documentation.
- aggregate : :class:`Aggregate`
- Aggregation function for the field
- (e.g., ``mean``, ``sum``, ``median``, ``min``, ``max``, ``count`` ).
-
- **Default value:** ``undefined`` (None)
-
- **See also:** `aggregate `__
- documentation.
- bin : anyOf(boolean, :class:`BinParams`, None)
- A flag for binning a ``quantitative`` field, `an object defining binning parameters
- `__, or indicating that the
- data for ``x`` or ``y`` channel are binned before they are imported into Vega-Lite (
- ``"binned"`` ).
-
-
- If ``true``, default `binning parameters
- `__ will be applied.
-
- If ``"binned"``, this indicates that the data for the ``x`` (or ``y`` ) channel are
- already binned. You can map the bin-start field to ``x`` (or ``y`` ) and the bin-end
- field to ``x2`` (or ``y2`` ). The scale and axis will be formatted similar to
- binning in Vega-lite. To adjust the axis ticks based on the bin step, you can also
- set the axis's `tickMinStep
- `__ property.
-
- **Default value:** ``false``
-
- **See also:** `bin `__
- documentation.
- field : :class:`Field`
- **Required.** A string defining the name of the field from which to pull a data
- value
- or an object defining iterated values from the `repeat
- `__ operator.
-
- **See also:** `field `__
- documentation.
-
- **Notes:**
- 1) Dots ( ``.`` ) and brackets ( ``[`` and ``]`` ) can be used to access nested
- objects (e.g., ``"field": "foo.bar"`` and ``"field": "foo['bar']"`` ).
- If field names contain dots or brackets but are not nested, you can use ``\\`` to
- escape dots and brackets (e.g., ``"a\\.b"`` and ``"a\\[0\\]"`` ).
- See more details about escaping in the `field documentation
- `__.
- 2) ``field`` is not required if ``aggregate`` is ``count``.
- legend : anyOf(:class:`Legend`, None)
- An object defining properties of the legend.
- If ``null``, the legend for the encoding channel will be removed.
-
- **Default value:** If undefined, default `legend properties
- `__ are applied.
-
- **See also:** `legend `__
- documentation.
- scale : anyOf(:class:`Scale`, None)
- An object defining properties of the channel's scale, which is the function that
- transforms values in the data domain (numbers, dates, strings, etc) to visual values
- (pixels, colors, sizes) of the encoding channels.
-
- If ``null``, the scale will be `disabled and the data value will be directly encoded
- `__.
-
- **Default value:** If undefined, default `scale properties
- `__ are applied.
-
- **See also:** `scale `__
- documentation.
- sort : :class:`Sort`
- Sort order for the encoded field.
-
- For continuous fields (quantitative or temporal), ``sort`` can be either
- ``"ascending"`` or ``"descending"``.
-
- For discrete fields, ``sort`` can be one of the following:
-
-
- * ``"ascending"`` or ``"descending"`` -- for sorting by the values' natural order in
- Javascript.
- * `A sort-by-encoding definition
- `__ for sorting
- by another encoding channel. (This type of sort definition is not available for
- ``row`` and ``column`` channels.)
- * `A sort field definition
- `__ for sorting by
- another field.
- * `An array specifying the field values in preferred order
- `__. In this case, the
- sort order will obey the values in the array, followed by any unspecified values
- in their original order. For a discrete time field, values in the sort array can be
- `date-time definition objects `__. In addition, for time units
- ``"month"`` and ``"day"``, the values can be the month or day names (case
- insensitive) or their 3-letter initials (e.g., ``"Mon"``, ``"Tue"`` ).
- * ``null`` indicating no sort.
-
- **Default value:** ``"ascending"``
-
- **Note:** ``null`` is not supported for ``row`` and ``column``.
-
- **See also:** `sort `__
- documentation.
- timeUnit : :class:`TimeUnit`
- Time unit (e.g., ``year``, ``yearmonth``, ``month``, ``hours`` ) for a temporal
- field, or `a temporal field that gets cast as ordinal
- `__.
-
- **Default value:** ``undefined`` (None)
-
- **See also:** `timeUnit `__
- documentation.
- title : anyOf(string, None)
- A title for the field. If ``null``, the title will be removed.
-
- **Default value:** derived from the field's name and transformation function (
- ``aggregate``, ``bin`` and ``timeUnit`` ). If the field has an aggregate function,
- the function is displayed as part of the title (e.g., ``"Sum of Profit"`` ). If the
- field is binned or has a time unit applied, the applied function is shown in
- parentheses (e.g., ``"Profit (binned)"``, ``"Transaction Date (year-month)"`` ).
- Otherwise, the title is simply the field name.
-
- **Notes** :
-
- 1) You can customize the default field title format by providing the `fieldTitle
- `__ property in
- the `config `__ or `fieldTitle
- function via the compile function's options
- `__.
-
- 2) If both field definition's ``title`` and axis, header, or legend ``title`` are
- defined, axis/header/legend title will be used.
- """
- _schema = {'$ref': '#/definitions/ConditionalSelection'}
-
- def __init__(self, selection=Undefined, type=Undefined, aggregate=Undefined, bin=Undefined,
- field=Undefined, legend=Undefined, scale=Undefined, sort=Undefined, timeUnit=Undefined,
- title=Undefined, **kwds):
- super(ConditionalSelectionMarkPropFieldDef, self).__init__(selection=selection, type=type,
- aggregate=aggregate, bin=bin,
- field=field, legend=legend,
- scale=scale, sort=sort,
- timeUnit=timeUnit, title=title,
- **kwds)
-
-
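The selection-based variant maps a field when the selection holds and falls back to a constant otherwise, which is the usual interactive-highlight pattern. A sketch, assuming a hypothetical ``category`` column:

import altair as alt
import pandas as pd

df = pd.DataFrame({'x': range(6), 'y': [3, 1, 4, 1, 5, 9],
                   'category': ['A', 'B', 'C', 'A', 'B', 'C']})  # placeholder data

hover = alt.selection_single(on='mouseover', empty='none')

chart = (
    alt.Chart(df)
    .mark_point(size=100)
    .encode(
        x='x:Q',
        y='y:Q',
        # field definition when hovered, constant gray value otherwise
        color=alt.condition(hover, 'category:N', alt.value('lightgray')),
    )
    .add_selection(hover)
)
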
-class ConditionalSelectionMarkPropFieldDefTypeForShape(ConditionalMarkPropFieldDefTypeForShape):
- """ConditionalSelectionMarkPropFieldDefTypeForShape schema wrapper
-
- Mapping(required=[selection, type])
-
- Attributes
- ----------
-
- selection : :class:`SelectionOperand`
- A `selection name `__, or a
- series of `composed selections
- `__.
- type : :class:`TypeForShape`
- The encoded field's type of measurement ( ``"quantitative"``, ``"temporal"``,
- ``"ordinal"``, or ``"nominal"`` ).
- It can also be a ``"geojson"`` type for encoding `'geoshape'
- `__.
-
- **Note:**
-
-
- * Data values for a temporal field can be either a date-time string (e.g.,
- ``"2015-03-07 12:32:17"``, ``"17:01"``, ``"2015-03-16"``. ``"2015"`` ) or a
- timestamp number (e.g., ``1552199579097`` ).
- * Data ``type`` describes the semantics of the data rather than the primitive data
- types ( ``number``, ``string``, etc.). The same primitive data type can have
- different types of measurement. For example, numeric data can represent
- quantitative, ordinal, or nominal data.
- * When using with `bin `__, the
- ``type`` property can be either ``"quantitative"`` (for using a linear bin scale)
- or `"ordinal" (for using an ordinal bin scale)
- `__.
- * When using with `timeUnit
- `__, the ``type`` property
- can be either ``"temporal"`` (for using a temporal scale) or `"ordinal" (for using
- an ordinal scale) `__.
- * When using with `aggregate
- `__, the ``type`` property
- refers to the post-aggregation data type. For example, we can calculate count
- ``distinct`` of a categorical field ``"cat"`` using ``{"aggregate": "distinct",
- "field": "cat", "type": "quantitative"}``. The ``"type"`` of the aggregate output
- is ``"quantitative"``.
- * Secondary channels (e.g., ``x2``, ``y2``, ``xError``, ``yError`` ) do not have
- ``type`` as they have exactly the same type as their primary channels (e.g.,
- ``x``, ``y`` ).
-
- **See also:** `type `__
- documentation.
- aggregate : :class:`Aggregate`
- Aggregation function for the field
- (e.g., ``mean``, ``sum``, ``median``, ``min``, ``max``, ``count`` ).
-
- **Default value:** ``undefined`` (None)
-
- **See also:** `aggregate `__
- documentation.
- bin : anyOf(boolean, :class:`BinParams`, None)
- A flag for binning a ``quantitative`` field, `an object defining binning parameters
- `__, or indicating that the
- data for ``x`` or ``y`` channel are binned before they are imported into Vega-Lite (
- ``"binned"`` ).
-
-
- If ``true``, default `binning parameters
- `__ will be applied.
-
- If ``"binned"``, this indicates that the data for the ``x`` (or ``y`` ) channel are
- already binned. You can map the bin-start field to ``x`` (or ``y`` ) and the bin-end
- field to ``x2`` (or ``y2`` ). The scale and axis will be formatted similar to
- binning in Vega-Lite. To adjust the axis ticks based on the bin step, you can also
- set the axis's `tickMinStep
- `__ property.
-
- **Default value:** ``false``
-
- **See also:** `bin `__
- documentation.
- field : :class:`Field`
- **Required.** A string defining the name of the field from which to pull a data
- value
- or an object defining iterated values from the `repeat
- `__ operator.
-
- **See also:** `field `__
- documentation.
-
- **Notes:**
- 1) Dots ( ``.`` ) and brackets ( ``[`` and ``]`` ) can be used to access nested
- objects (e.g., ``"field": "foo.bar"`` and ``"field": "foo['bar']"`` ).
- If field names contain dots or brackets but are not nested, you can use ``\\`` to
- escape dots and brackets (e.g., ``"a\\.b"`` and ``"a\\[0\\]"`` ).
- See more details about escaping in the `field documentation
- `__.
- 2) ``field`` is not required if ``aggregate`` is ``count``.
- legend : anyOf(:class:`Legend`, None)
- An object defining properties of the legend.
- If ``null``, the legend for the encoding channel will be removed.
-
- **Default value:** If undefined, default `legend properties
- `__ are applied.
-
- **See also:** `legend `__
- documentation.
- scale : anyOf(:class:`Scale`, None)
- An object defining properties of the channel's scale, which is the function that
- transforms values in the data domain (numbers, dates, strings, etc) to visual values
- (pixels, colors, sizes) of the encoding channels.
-
- If ``null``, the scale will be `disabled and the data value will be directly encoded
- `__.
-
- **Default value:** If undefined, default `scale properties
- `__ are applied.
-
- **See also:** `scale `__
- documentation.
- sort : :class:`Sort`
- Sort order for the encoded field.
-
- For continuous fields (quantitative or temporal), ``sort`` can be either
- ``"ascending"`` or ``"descending"``.
-
- For discrete fields, ``sort`` can be one of the following:
-
-
- * ``"ascending"`` or ``"descending"`` -- for sorting by the values' natural order in
- JavaScript.
- * `A sort-by-encoding definition
- `__ for sorting
- by another encoding channel. (This type of sort definition is not available for
- ``row`` and ``column`` channels.)
- * `A sort field definition
- `__ for sorting by
- another field.
- * `An array specifying the field values in preferred order
- `__. In this case, the
- sort order will obey the values in the array, followed by any unspecified values
- in their original order. For discrete time field, values in the sort array can be
- `date-time definition objects `__. In addition, for time units
- ``"month"`` and ``"day"``, the values can be the month or day names (case
- insensitive) or their 3-letter initials (e.g., ``"Mon"``, ``"Tue"`` ).
- * ``null`` indicating no sort.
-
- **Default value:** ``"ascending"``
-
- **Note:** ``null`` is not supported for ``row`` and ``column``.
-
- **See also:** `sort `__
- documentation.
- timeUnit : :class:`TimeUnit`
- Time unit (e.g., ``year``, ``yearmonth``, ``month``, ``hours`` ) for a temporal
- field, or for `a temporal field that is cast as ordinal
- `__.
-
- **Default value:** ``undefined`` (None)
-
- **See also:** `timeUnit `__
- documentation.
- title : anyOf(string, None)
- A title for the field. If ``null``, the title will be removed.
-
- **Default value:** derived from the field's name and transformation function (
- ``aggregate``, ``bin`` and ``timeUnit`` ). If the field has an aggregate function,
- the function is displayed as part of the title (e.g., ``"Sum of Profit"`` ). If the
- field is binned or has a time unit applied, the applied function is shown in
- parentheses (e.g., ``"Profit (binned)"``, ``"Transaction Date (year-month)"`` ).
- Otherwise, the title is simply the field name.
-
- **Notes** :
-
- 1) You can customize the default field title format by providing the `fieldTitle
- `__ property in
- the `config `__ or `fieldTitle
- function via the compile function's options
- `__.
-
- 2) If both field definition's ``title`` and axis, header, or legend ``title`` are
- defined, axis/header/legend title will be used.
- """
- _schema = {'$ref': '#/definitions/ConditionalSelection<MarkPropFieldDef<TypeForShape>>'}
-
- def __init__(self, selection=Undefined, type=Undefined, aggregate=Undefined, bin=Undefined,
- field=Undefined, legend=Undefined, scale=Undefined, sort=Undefined, timeUnit=Undefined,
- title=Undefined, **kwds):
- super(ConditionalSelectionMarkPropFieldDefTypeForShape, self).__init__(selection=selection,
- type=type,
- aggregate=aggregate,
- bin=bin, field=field,
- legend=legend,
- scale=scale, sort=sort,
- timeUnit=timeUnit,
- title=title, **kwds)
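-
-# --- Illustrative usage sketch (continues the ``df``/``sel`` example above;
-# not part of the generated schema module). ---
-# The shape channel uses the ``TypeForShape`` variant, which additionally
-# permits ``"geojson"`` as the field type:
-#
-#     chart = alt.Chart(df).mark_point().encode(
-#         shape=alt.condition(sel, alt.Shape('cat:N'), alt.value('circle')),
-#     ).add_selection(sel)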
-
-
-class ConditionalSelectionNumberValueDef(ConditionalNumberValueDef):
- """ConditionalSelectionNumberValueDef schema wrapper
-
- Mapping(required=[selection, value])
-
- Attributes
- ----------
-
- selection : :class:`SelectionOperand`
- A `selection name `__, or a
- series of `composed selections
- `__.
- value : float
- A constant value in visual domain (e.g., ``"red"`` / ``"#0099ff"`` for color, values
- between ``0`` and ``1`` for opacity).
- """
- _schema = {'$ref': '#/definitions/ConditionalSelection<NumberValueDef>'}
-
- def __init__(self, selection=Undefined, value=Undefined, **kwds):
- super(ConditionalSelectionNumberValueDef, self).__init__(selection=selection, value=value,
- **kwds)
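-
-# --- Illustrative usage sketch (continues the example above; not part of the
-# generated schema module). ---
-# Passing two constant numbers to ``alt.condition`` on a numeric channel such
-# as opacity compiles to conditional number value definitions like this one:
-#
-#     chart = alt.Chart(df).mark_point().encode(
-#         opacity=alt.condition(sel, alt.value(1.0), alt.value(0.2)),
-#     ).add_selection(sel)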
-
-
-class ConditionalStringValueDef(VegaLiteSchema):
- """ConditionalStringValueDef schema wrapper
-
- anyOf(:class:`ConditionalPredicateStringValueDef`,
- :class:`ConditionalSelectionStringValueDef`)
- """
- _schema = {'$ref': '#/definitions/ConditionalStringValueDef'}
-
- def __init__(self, *args, **kwds):
- super(ConditionalStringValueDef, self).__init__(*args, **kwds)
-
-
-class ConditionalPredicateStringValueDef(ConditionalStringValueDef):
- """ConditionalPredicateStringValueDef schema wrapper
-
- Mapping(required=[test, value])
-
- Attributes
- ----------
-
- test : :class:`LogicalOperandPredicate`
- Predicate for triggering the condition
- value : anyOf(string, None)
- A constant value in visual domain (e.g., ``"red"`` / ``"#0099ff"`` for color, values
- between ``0`` and ``1`` for opacity).
- """
- _schema = {'$ref': '#/definitions/ConditionalPredicate<StringValueDef>'}
-
- def __init__(self, test=Undefined, value=Undefined, **kwds):
- super(ConditionalPredicateStringValueDef, self).__init__(test=test, value=value, **kwds)
-
-
-class ConditionalSelectionStringValueDef(ConditionalStringValueDef):
- """ConditionalSelectionStringValueDef schema wrapper
-
- Mapping(required=[selection, value])
-
- Attributes
- ----------
-
- selection : :class:`SelectionOperand`
- A `selection name `__, or a
- series of `composed selections
- `__.
- value : anyOf(string, None)
- A constant value in visual domain (e.g., ``"red"`` / ``"#0099ff"`` for color, values
- between ``0`` and ``1`` for opacity).
- """
- _schema = {'$ref': '#/definitions/ConditionalSelection<StringValueDef>'}
-
- def __init__(self, selection=Undefined, value=Undefined, **kwds):
- super(ConditionalSelectionStringValueDef, self).__init__(selection=selection, value=value,
- **kwds)
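-
-# --- Illustrative usage sketch (continues the example above; not part of the
-# generated schema module). ---
-# Constant string values (e.g. CSS color names) can likewise be switched on a
-# selection:
-#
-#     chart = alt.Chart(df).mark_point().encode(
-#         color=alt.condition(sel, alt.value('firebrick'), alt.value('lightgray')),
-#     ).add_selection(sel)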
-
-
-class ConditionalTextFieldDef(VegaLiteSchema):
- """ConditionalTextFieldDef schema wrapper
-
- anyOf(:class:`ConditionalPredicateTextFieldDef`, :class:`ConditionalSelectionTextFieldDef`)
- """
- _schema = {'$ref': '#/definitions/ConditionalTextFieldDef'}
-
- def __init__(self, *args, **kwds):
- super(ConditionalTextFieldDef, self).__init__(*args, **kwds)
-
-
-class ConditionalPredicateTextFieldDef(ConditionalTextFieldDef):
- """ConditionalPredicateTextFieldDef schema wrapper
-
- Mapping(required=[test, type])
-
- Attributes
- ----------
-
- test : :class:`LogicalOperandPredicate`
- Predicate for triggering the condition
- type : :class:`StandardType`
- The encoded field's type of measurement ( ``"quantitative"``, ``"temporal"``,
- ``"ordinal"``, or ``"nominal"`` ).
- It can also be a ``"geojson"`` type for encoding `'geoshape'
- `__.
-
- **Note:**
-
-
- * Data values for a temporal field can be either a date-time string (e.g.,
- ``"2015-03-07 12:32:17"``, ``"17:01"``, ``"2015-03-16"``. ``"2015"`` ) or a
- timestamp number (e.g., ``1552199579097`` ).
- * Data ``type`` describes the semantics of the data rather than the primitive data
- types ( ``number``, ``string``, etc.). The same primitive data type can have
- different types of measurement. For example, numeric data can represent
- quantitative, ordinal, or nominal data.
- * When using with `bin `__, the
- ``type`` property can be either ``"quantitative"`` (for using a linear bin scale)
- or `"ordinal" (for using an ordinal bin scale)
- `__.
- * When using with `timeUnit
- `__, the ``type`` property
- can be either ``"temporal"`` (for using a temporal scale) or `"ordinal" (for using
- an ordinal scale) `__.
- * When using with `aggregate
- `__, the ``type`` property
- refers to the post-aggregation data type. For example, we can calculate count
- ``distinct`` of a categorical field ``"cat"`` using ``{"aggregate": "distinct",
- "field": "cat", "type": "quantitative"}``. The ``"type"`` of the aggregate output
- is ``"quantitative"``.
- * Secondary channels (e.g., ``x2``, ``y2``, ``xError``, ``yError`` ) do not have
- ``type`` as they have exactly the same type as their primary channels (e.g.,
- ``x``, ``y`` ).
-
- **See also:** `type `__
- documentation.
- aggregate : :class:`Aggregate`
- Aggregation function for the field
- (e.g., ``mean``, ``sum``, ``median``, ``min``, ``max``, ``count`` ).
-
- **Default value:** ``undefined`` (None)
-
- **See also:** `aggregate `__
- documentation.
- bin : anyOf(boolean, :class:`BinParams`, enum('binned'), None)
- A flag for binning a ``quantitative`` field, `an object defining binning parameters
- `__, or indicating that the
- data for ``x`` or ``y`` channel are binned before they are imported into Vega-Lite (
- ``"binned"`` ).
-
-
- If ``true``, default `binning parameters
- `__ will be applied.
-
- If ``"binned"``, this indicates that the data for the ``x`` (or ``y`` ) channel are
- already binned. You can map the bin-start field to ``x`` (or ``y`` ) and the bin-end
- field to ``x2`` (or ``y2`` ). The scale and axis will be formatted similar to
- binning in Vega-Lite. To adjust the axis ticks based on the bin step, you can also
- set the axis's `tickMinStep
- `__ property.
-
- **Default value:** ``false``
-
- **See also:** `bin `__
- documentation.
- field : :class:`Field`
- **Required.** A string defining the name of the field from which to pull a data
- value
- or an object defining iterated values from the `repeat
- `__ operator.
-
- **See also:** `field `__
- documentation.
-
- **Notes:**
- 1) Dots ( ``.`` ) and brackets ( ``[`` and ``]`` ) can be used to access nested
- objects (e.g., ``"field": "foo.bar"`` and ``"field": "foo['bar']"`` ).
- If field names contain dots or brackets but are not nested, you can use ``\\`` to
- escape dots and brackets (e.g., ``"a\\.b"`` and ``"a\\[0\\]"`` ).
- See more details about escaping in the `field documentation
- `__.
- 2) ``field`` is not required if ``aggregate`` is ``count``.
- format : string
- The text formatting pattern for labels of guides (axes, legends, headers) and text
- marks.
-
-
- * If the format type is ``"number"`` (e.g., for quantitative fields), this is D3's
- `number format pattern `__.
- * If the format type is ``"time"`` (e.g., for temporal fields), this is D3's `time
- format pattern `__.
-
- See the `format documentation `__
- for more examples.
-
- **Default value:** Derived from `numberFormat
- `__ config for number
- format and from `timeFormat
- `__ config for time
- format.
- formatType : enum('number', 'time')
- The format type for labels ( ``"number"`` or ``"time"`` ).
-
- **Default value:**
-
-
- * ``"time"`` for temporal fields and ordinal and nomimal fields with ``timeUnit``.
- * ``"number"`` for quantitative fields as well as ordinal and nomimal fields without
- ``timeUnit``.
- timeUnit : :class:`TimeUnit`
- Time unit (e.g., ``year``, ``yearmonth``, ``month``, ``hours`` ) for a temporal
- field, or for `a temporal field that is cast as ordinal
- `__.
-
- **Default value:** ``undefined`` (None)
-
- **See also:** `timeUnit `__
- documentation.
- title : anyOf(string, None)
- A title for the field. If ``null``, the title will be removed.
-
- **Default value:** derived from the field's name and transformation function (
- ``aggregate``, ``bin`` and ``timeUnit`` ). If the field has an aggregate function,
- the function is displayed as part of the title (e.g., ``"Sum of Profit"`` ). If the
- field is binned or has a time unit applied, the applied function is shown in
- parentheses (e.g., ``"Profit (binned)"``, ``"Transaction Date (year-month)"`` ).
- Otherwise, the title is simply the field name.
-
- **Notes** :
-
- 1) You can customize the default field title format by providing the `fieldTitle
- `__ property in
- the `config `__ or `fieldTitle
- function via the compile function's options
- `__.
-
- 2) If both field definition's ``title`` and axis, header, or legend ``title`` are
- defined, axis/header/legend title will be used.
- """
- _schema = {'$ref': '#/definitions/ConditionalPredicate<TextFieldDef>'}
-
- def __init__(self, test=Undefined, type=Undefined, aggregate=Undefined, bin=Undefined,
- field=Undefined, format=Undefined, formatType=Undefined, timeUnit=Undefined,
- title=Undefined, **kwds):
- super(ConditionalPredicateTextFieldDef, self).__init__(test=test, type=type,
- aggregate=aggregate, bin=bin,
- field=field, format=format,
- formatType=formatType, timeUnit=timeUnit,
- title=title, **kwds)
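-
-# --- Illustrative usage sketch (continues the example above; not part of the
-# generated schema module). ---
-# Supplying a test predicate (here built with ``alt.datum``) instead of a
-# selection yields the ``ConditionalPredicate...`` variants; for the text
-# channel the conditional branch can be a full field definition with a
-# ``format`` pattern:
-#
-#     chart = alt.Chart(df).mark_text().encode(
-#         x='x:Q',
-#         y='y:Q',
-#         text=alt.condition(alt.datum.y > 3, alt.Text('y:Q', format='.1f'), alt.value('')),
-#     )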
-
-
-class ConditionalSelectionTextFieldDef(ConditionalTextFieldDef):
- """ConditionalSelectionTextFieldDef schema wrapper
-
- Mapping(required=[selection, type])
-
- Attributes
- ----------
-
- selection : :class:`SelectionOperand`
- A `selection name `__, or a
- series of `composed selections
- `__.
- type : :class:`StandardType`
- The encoded field's type of measurement ( ``"quantitative"``, ``"temporal"``,
- ``"ordinal"``, or ``"nominal"`` ).
- It can also be a ``"geojson"`` type for encoding `'geoshape'
- `__.
-
- **Note:**
-
-
- * Data values for a temporal field can be either a date-time string (e.g.,
- ``"2015-03-07 12:32:17"``, ``"17:01"``, ``"2015-03-16"``. ``"2015"`` ) or a
- timestamp number (e.g., ``1552199579097`` ).
- * Data ``type`` describes the semantics of the data rather than the primitive data
- types ( ``number``, ``string``, etc.). The same primitive data type can have
- different types of measurement. For example, numeric data can represent
- quantitative, ordinal, or nominal data.
- * When using with `bin `__, the
- ``type`` property can be either ``"quantitative"`` (for using a linear bin scale)
- or `"ordinal" (for using an ordinal bin scale)
- `__.
- * When using with `timeUnit
- `__, the ``type`` property
- can be either ``"temporal"`` (for using a temporal scale) or `"ordinal" (for using
- an ordinal scale) `__.
- * When using with `aggregate
- `__, the ``type`` property
- refers to the post-aggregation data type. For example, we can calculate count
- ``distinct`` of a categorical field ``"cat"`` using ``{"aggregate": "distinct",
- "field": "cat", "type": "quantitative"}``. The ``"type"`` of the aggregate output
- is ``"quantitative"``.
- * Secondary channels (e.g., ``x2``, ``y2``, ``xError``, ``yError`` ) do not have
- ``type`` as they have exactly the same type as their primary channels (e.g.,
- ``x``, ``y`` ).
-
- **See also:** `type `__
- documentation.
- aggregate : :class:`Aggregate`
- Aggregation function for the field
- (e.g., ``mean``, ``sum``, ``median``, ``min``, ``max``, ``count`` ).
-
- **Default value:** ``undefined`` (None)
-
- **See also:** `aggregate `__
- documentation.
- bin : anyOf(boolean, :class:`BinParams`, enum('binned'), None)
- A flag for binning a ``quantitative`` field, `an object defining binning parameters
- `__, or indicating that the
- data for ``x`` or ``y`` channel are binned before they are imported into Vega-Lite (
- ``"binned"`` ).
-
-
- If ``true``, default `binning parameters
- `__ will be applied.
-
- If ``"binned"``, this indicates that the data for the ``x`` (or ``y`` ) channel are
- already binned. You can map the bin-start field to ``x`` (or ``y`` ) and the bin-end
- field to ``x2`` (or ``y2`` ). The scale and axis will be formatted similar to
- binning in Vega-Lite. To adjust the axis ticks based on the bin step, you can also
- set the axis's `tickMinStep
- `__ property.
-
- **Default value:** ``false``
-
- **See also:** `bin `__
- documentation.
- field : :class:`Field`
- **Required.** A string defining the name of the field from which to pull a data
- value
- or an object defining iterated values from the `repeat
- `__ operator.
-
- **See also:** `field `__
- documentation.
-
- **Notes:**
- 1) Dots ( ``.`` ) and brackets ( ``[`` and ``]`` ) can be used to access nested
- objects (e.g., ``"field": "foo.bar"`` and ``"field": "foo['bar']"`` ).
- If field names contain dots or brackets but are not nested, you can use ``\\`` to
- escape dots and brackets (e.g., ``"a\\.b"`` and ``"a\\[0\\]"`` ).
- See more details about escaping in the `field documentation
- `__.
- 2) ``field`` is not required if ``aggregate`` is ``count``.
- format : string
- The text formatting pattern for labels of guides (axes, legends, headers) and text
- marks.
-
-
- * If the format type is ``"number"`` (e.g., for quantitative fields), this is D3's
- `number format pattern `__.
- * If the format type is ``"time"`` (e.g., for temporal fields), this is D3's `time
- format pattern `__.
-
- See the `format documentation `__
- for more examples.
-
- **Default value:** Derived from `numberFormat
- `__ config for number
- format and from `timeFormat
-