-
-Use Control + F to find your desired software. If your program wasn’t listed, then it is most likely not a PDF downloader but a shareware program.
-
-If you’re looking for a free PDF downloader, or software that lets you download files from websites for free, then you are in the right place. On this page you will find the best programs for the job!
-
-The Free PDF Downloader Programs
-
-There are a lot of PDF downloader programs to choose from, but most of them are expensive, so we’ve put together a list of the best free ones!
-
-The Best Free PDF Downloaders & Software
-
-#1 DownloadPipe
-
-DownloadPipe is a free download manager that supports multiple platforms, including Windows, Mac, and Linux. It supports multiple protocols, including HTTPS, FTPS, and FTP, to secure the download process. You can quickly download more than 100 of your favorite programs.
-
-With this program you can download almost anything for free: PDF files, documents, movies, songs, games, software, and more. It has a simple design and an intuitive user interface, which makes DownloadPipe extremely easy to use.
-
-To download a PDF file, go to the “Download” menu in the top-right corner and select the “Save as” option. You can then specify where you want to save the file.
-
-#2 Zipeg
-
-Zipeg is a free PDF downloader that lets you download any file from a website. It is a standalone downloader, so it doesn’t require a web browser.
-
-This makes it a good choice for users who want to download a PDF file without opening a web browser. It is also portable: you can use the program without installing it.
-
-For example, if you have a PDF file that you need to download, just open the Zipeg app and start the download. Zipeg will prompt you for the link, the file name, and other details.
-
-Besides PDF files, Zipeg can download HTML files and any other file from any web page, and the program is very simple to use.
-
-
-
diff --git a/spaces/1gistliPinn/ChatGPT4/Examples/Dark Souls 2 Save Editor Fix.md b/spaces/1gistliPinn/ChatGPT4/Examples/Dark Souls 2 Save Editor Fix.md
deleted file mode 100644
index 30e3d3be11dfe586c20d0116d9721cb6e61da70f..0000000000000000000000000000000000000000
--- a/spaces/1gistliPinn/ChatGPT4/Examples/Dark Souls 2 Save Editor Fix.md
+++ /dev/null
@@ -1,18 +0,0 @@
-
-
-Mar 04, 2017 · 2. Come to the hearts of your viewers and make them feel like they're actually at the show. Be able to produce audio and video content that is consistent in quality.
-
-Satisfaction Guarantee. If you aren't completely satisfied, return the item. We've got it. This top rated casino has been around for many years and is a site full of interesting games. This online poker room offers a good welcome bonus for newcomers, a great welcome bonus for repeat players, and a wide selection of unique tournaments. We are an affiliate of the best online poker room in the world.
-
-The first bet is 10. As you can see on this animation, the next bet will be 10 more, for a total bet of 20. The player is at this point committed to the second play.
-
-Hello mate! This is Renato from the Mexican Casino Club website. Let me introduce ourselves; We are the world's largest online gambling and gaming website that works with an excellent selection of online casinos from around the world.
-
-Have you ever considered what life would be like if you could control every moment, and be able to touch, hear, and taste anything that was around you? It's a fascinating concept, and we think you'd be interested in taking the next step in your experience. Perhaps we should use more of our time to come up with better ways to be at peace with ourselves, our family, and our world, and stop obsessing about the little stuff.
-
-Instead of trying to fix the 'symptoms', why not try to 'get rid of the disease'? After all, most people would rather cut off a hand than cut off the
-
-
-
diff --git a/spaces/1phancelerku/anime-remove-background/Aprenda a baixar Stick War 3 com dinheiro infinito e desbloquear todos os recursos.md b/spaces/1phancelerku/anime-remove-background/Aprenda a baixar Stick War 3 com dinheiro infinito e desbloquear todos os recursos.md
deleted file mode 100644
index aff188dfacd0654a80580fbc92db7c7236be76fb..0000000000000000000000000000000000000000
--- a/spaces/1phancelerku/anime-remove-background/Aprenda a baixar Stick War 3 com dinheiro infinito e desbloquear todos os recursos.md
+++ /dev/null
@@ -1,81 +0,0 @@
-
-
Stick War 3: How to Download and Play the Ultimate Strategy Game
-
If you are a fan of strategy games, you have probably heard of Stick War 3, one of the most popular and addictive games in the genre. Stick War 3 is a game where you can create your own army, fight against other players or AI opponents, and conquer the world of Inamorta. Whether you prefer single player or multiplayer modes, Stick War 3 has something for everyone. In this article, we will show you how to download and install Stick War 3 on your device, how to play the different modes and features of the game, and how to improve your skills with some tips and tricks.
-
PVP Matches
-
One of the main attractions of Stick War 3 is its real-time multiplayer strategy mode, where you can team up with your friends or battle against strangers from around the world. You can choose from 1v1 or 2v2 matches, and use any deck that you have created or unlocked. The goal is to destroy your enemy's statue before they destroy yours, using your units, spells, enchantments, and strategies.
One of the coolest features of Stick War 3 is that you can take control of any unit at any time, giving you more flexibility and control over your army. You can also use spells such as a giant bubble that blocks incoming projectiles, or snow squall that freezes entire legions. You can also use enchantments such as the rune of reanimation that will cause any poisoned enemy units to respawn as zombies.
-
Another way to make your battles more fun and personalized is to customize your battlefield with skins, statues, voice-lines, and emotes. You can change the appearance of your units, your statue, your tower, and even your voice commands. You can also use emotes to communicate with your allies or taunt your enemies.
-
Single Player Modes
-
If you prefer playing solo or offline, Stick War 3 has plenty of options for you as well. You can play the huge, ever-expanding campaign mode, where you will follow an epic story across multiple chapters, with fully animated comic-book-style cut scenes and sweeping storylines. You will explore the world of Inamorta, where weapons are religion and nations are constantly at war. You will encounter different factions, allies, enemies, secrets, and challenges along the way.
-
You can also practice your strategies against AI opponents in different scenarios in the proving grounds mode. You can choose from various selectable decks and situations to test your skills and learn new tactics. You can also challenge yourself with daily battles, where you will face a special scenario with fixed decks and other special conditions that do not appear in normal gameplay. You can earn gem rewards for completing each difficulty level.
-
Custom Armies
-
One of the most important aspects of Stick War 3 is building your own battle decks with a variety of army types and upgrades. You can collect and unlock new cards from a growing selection of over 40 different nations, each with their own unique units, abilities, and bonuses. You can also research new upgrades and technologies to make your army stronger and more versatile. You can create up to 10 different decks, each with a maximum of 12 cards, and switch between them before each battle.
-
-
Another way to customize your army is to use generals of each nation, who have their own unique abilities and effects. You can choose one general for each deck, and use their power once per battle. For example, you can use the general of the Order Empire, who can summon a giant sword that deals massive damage to enemies in front of him. Or you can use the general of the Chaos Empire, who can transform into a powerful demon that can fly and shoot fireballs.
-
Tips and Tricks
-
Stick War 3 is a game that requires skill, strategy, and creativity to master. Here are some tips and tricks that can help you improve your gameplay and win more battles.
-
-
Learn the strengths and weaknesses of each unit type and nation. For example, archers are good at dealing damage from a distance, but are vulnerable to melee attacks. Speartons are good at defending and blocking enemy units, but are slow and expensive. The Order Empire is good at balanced and versatile strategies, but lacks specialization. The Chaos Empire is good at aggressive and chaotic strategies, but lacks defense and stability.
-
Use the right units for the right situations. For example, use miners to gather gold and mana, which are essential for building your army and using spells. Use swordwraths to rush your enemy in the early game or flank them in the late game. Use magikills to cast powerful spells that can turn the tide of the battle.
-
Use your spells and enchantments wisely. For example, use heal to restore the health of your units or your statue. Use poison to deal damage over time to enemy units or their statue. Use shield to protect your units or your statue from enemy attacks.
-
Take control of your units when necessary. For example, take control of an archer to aim more accurately or avoid enemy fire. Take control of a spearton to block enemy units or charge at them. Take control of a magikill to cast spells more precisely or escape from danger.
-
Avoid common mistakes and pitfalls in the game. For example, do not overextend your army or leave your statue undefended. Do not waste your gold or mana on unnecessary units or spells. Do not underestimate your enemy or overestimate yourself.
-
-
Conclusion
-
Stick War 3 is a game that will keep you entertained for hours with its amazing graphics, gameplay, and features. Whether you want to play online with other players or offline by yourself, you will find something that suits your taste and style. You can download and install Stick War 3 on your device for free from the official website or from the app store of your choice. You can also follow the game on social media for more news and updates. If you are looking for a fun and challenging strategy game, you should definitely give Stick War 3 a try.
-
FAQs
-
-
Q: How do I download Stick War 3?
-
A: You can download Stick War 3 from the official website or from the app store of your choice. You will need an internet connection to play online modes, but you can play offline modes without it.
-
Q: How do I unlock new cards and generals?
-
A: You can unlock new cards and generals by playing the campaign mode, completing daily battles, opening chests, or buying them with gems.
-
Q: How do I earn gems?
-
A: You can earn gems by playing the campaign mode, completing daily battles, watching ads, or buying them with real money.
-
Q: How do I play with my friends?
-
A: You can play with your friends by inviting them to join your team in PVP matches, or by creating a private room with a code that they can enter.
-
Q: How do I contact the developers?
-
A: You can contact the developers by sending them an email at support@stickwar.com or by filling out a form on their website.
-
-
-
\ No newline at end of file
diff --git a/spaces/1phancelerku/anime-remove-background/Download FF Advance Server APK Juli 2021 How to Register and Play.md b/spaces/1phancelerku/anime-remove-background/Download FF Advance Server APK Juli 2021 How to Register and Play.md
deleted file mode 100644
index a03bc9732935b1d71d0e0bcdc9f0510d97532ab5..0000000000000000000000000000000000000000
--- a/spaces/1phancelerku/anime-remove-background/Download FF Advance Server APK Juli 2021 How to Register and Play.md
+++ /dev/null
@@ -1,91 +0,0 @@
-
-
How to Join and Play Free Fire Advance Server in July 2023
-
Free Fire is one of the most popular and exciting battle royale games on mobile, with millions of players around the world. But did you know that there is a special server where you can try out new features and updates before they are released to the public? This server is called Free Fire Advance Server, and it is a great opportunity for you to experience the latest developments in the game, as well as to help the developers improve the game by reporting bugs and providing feedback.
In this article, we will tell you everything you need to know about Free Fire Advance Server, including what it is, how to register, how to download and install, how to play, and how to enjoy it. So, if you are a fan of Free Fire and want to join the exclusive club of advanced players, read on!
-
What is Free Fire Advance Server?
-
Free Fire Advance Server is a test server that is created by Garena, the developer of Free Fire, for experienced players who want to test new features and items that are not yet available on the regular server. The goal of this server is to allow players to explore and experiment with the upcoming updates, as well as to help the developers identify and fix any bugs or issues that may arise.
-
By joining Free Fire Advance Server, you will be able to access new weapons, characters, skins, modes, maps, events, and more before anyone else. You will also be able to provide your feedback and suggestions directly to the developers, which may influence the final version of the updates. Moreover, you will be rewarded with diamonds for finding and reporting bugs on the server.
-
However, there are some differences between Free Fire Advance Server and the regular server that you should be aware of. First of all, not everyone can join Free Fire Advance Server. You need to register and get an activation code from Garena, which is limited in number. Secondly, Free Fire Advance Server is not always open. It only opens for a certain period of time before each major update. Thirdly, your progress and data on Free Fire Advance Server are not linked to your regular account. You will start from scratch on the test server, and you will not be able to transfer anything back to your regular account.
-
How to Register for Free Fire Advance Server?
-
If you are interested in joining Free Fire Advance Server, you need to register first. The registration process is simple and easy, but you need to act fast because there are only a limited number of activation codes available. Here are the steps you need to follow:
Go to the official Free Fire Advance Server website at ff-advance.ff.garena.com and click or tap on the “Login Facebook” button to sign up using your Facebook account. Make sure that your Facebook account is linked to your Free Fire or FF MAX game account.
-
Enter your personal information, such as name, email address, and phone number. Make sure that your email address and phone number are active.
-
Click or tap on the "Submit" button to complete your registration.
-
Wait for an email from Garena with your activation code and the download link for the Free Fire Advance Server APK file. Note that not everyone who registers will receive an activation code, as they are limited in number and given on a first-come, first-served basis.
-
-
If you are lucky enough to get an activation code, you can proceed to download and install the Free Fire Advance Server APK file on your Android device.
-
How to Download and Install Free Fire Advance Server APK?
-
Once you have received your activation code and the download link for the Free Fire Advance Server APK file, you can follow these steps to download and install it on your Android device:
-
-
Click or tap on the download link in the email to download the Free Fire Advance Server APK file. The file size is about 700 MB, so make sure you have enough storage space and a stable internet connection.
-
After the download is complete, locate the APK file on your device and tap on it to install it. You may need to enable the "Install from unknown sources" option in your device settings if you haven't done so before.
-
Once the installation is done, open the Free Fire Advance Server app and log in using your Facebook account that you used to register for the Advance Server.
-
Enter your activation code when prompted and tap on "Confirm". You will then be able to access the Free Fire Advance Server and enjoy the new features and updates.
-
-
Note that the Free Fire Advance Server is only open for a limited period of time, usually a few days before each major update. You can check the official website of Free Fire Advance Server at ff-advance.ff.garena.com to see when the server is open and when it will close. You will not be able to play on the Advance Server once it is closed, so make sure you make the most of it while it is open.
-
How to Play and Enjoy Free Fire Advance Server?
-
Playing on Free Fire Advance Server is similar to playing on the regular server, except that you will have access to new features and updates that are not yet available to the public. You will also start from scratch on the Advance Server, meaning that you will not have any of your previous progress, items, or data from your regular account. You will also not be able to transfer anything from the Advance Server back to your regular account.
-
-
However, this also means that you will have more freedom and fun to explore and experiment with the new features and updates without worrying about losing anything. You will also be able to provide your feedback and suggestions directly to the developers, as well as report any bugs or issues that you encounter on the server. By doing so, you will help improve the game and also earn rewards such as diamonds for your contribution.
-
To play and enjoy Free Fire Advance Server, here are some tips and tricks that you can follow:
-
-
Check out the new weapons, characters, skins, modes, maps, events, and more that are available on the Advance Server. Try them out and see how they work and how they affect your gameplay.
-
Be prepared for some glitches, errors, or crashes that may occur on the Advance Server. Remember that this is a test server and not everything is perfect or stable. If you encounter any problems, report them using the "Report" button on the game screen.
-
Give your honest feedback and suggestions on the new features and updates using the "Feedback" button on the game screen. Tell the developers what you like, what you don't like, what you think can be improved, or what you think is missing.
-
Have fun and enjoy playing with other advanced players who share your passion and enthusiasm for Free Fire. You can also invite your friends who have registered for the Advance Server to join you in testing out the new features and updates.
-
-
Conclusion
-
Free Fire Advance Server is a great opportunity for advanced players who want to experience new features and updates before they are released to the public. By joining Free Fire Advance Server, you will be able to access new weapons, characters, skins, modes, maps, events, and more before anyone else. You will also be able to provide your feedback and suggestions directly to the developers, which may influence the final version of the updates. Moreover, you will be rewarded with diamonds for finding and reporting bugs on the server.
-
If you are a fan of Free Fire and want to join the exclusive club of advanced players, don't miss this chance to register and download Free Fire Advance Server as soon as possible. The registration process is simple and easy, but you need to act fast because there are only a limited number of activation codes available. The download and installation process is also straightforward, but you need an Android device and a stable internet connection. Playing on the Advance Server is similar to playing on the regular server, but with more freedom to explore and experiment with the new features and updates. We hope that this article has helped you understand how to join and play Free Fire Advance Server in July 2023. If you have any questions or comments, feel free to leave them below. And don't forget to share this article with your friends who are also fans of Free Fire. Happy gaming!
FAQs
-
Here are some of the frequently asked questions and answers about Free Fire Advance Server:
-
-
What is the difference between Free Fire Advance Server and Free Fire MAX?
-
Free Fire Advance Server is a test server that is only open for a limited period of time before each major update. It allows players to try out new features and updates that are not yet available on the regular server. Free Fire MAX is an enhanced version of Free Fire that offers higher graphics quality, smoother performance, and exclusive content. It is compatible with the regular server and can be played anytime.
-
How can I get more diamonds on Free Fire Advance Server?
-
You can get more diamonds on Free Fire Advance Server by finding and reporting bugs on the server using the "Report" button on the game screen. You will be rewarded with diamonds for each bug that you report, depending on the severity and validity of the bug. You can also get diamonds by providing your feedback and suggestions on the new features and updates using the "Feedback" button on the game screen.
-
Can I play with my friends on Free Fire Advance Server?
-
You can play with your friends on Free Fire Advance Server if they have also registered for the Advance Server and have received an activation code from Garena. You can invite them to join you in testing out the new features and updates on the server. However, you will not be able to play with your friends who are on the regular server, as the two servers are not connected.
-
Will my progress and data on Free Fire Advance Server be saved or transferred to my regular account?
-
No, your progress and data on Free Fire Advance Server will not be saved or transferred to your regular account. You will start from scratch on the Advance Server, and you will not have any of your previous items or data from your regular account. You will also not be able to transfer anything from the Advance Server back to your regular account.
-
When will the new features and updates on Free Fire Advance Server be released to the public?
-
The new features and updates on Free Fire Advance Server will be released to the public after they have been tested and improved by the developers based on the feedback and suggestions from the players on the Advance Server. The exact date of release may vary depending on the update, but it is usually within a few weeks after the closure of the Advance Server.
-
-
\ No newline at end of file
diff --git a/spaces/1phancelerku/anime-remove-background/Download The Baby In Yellow APK for Android - Free Horror Game.md b/spaces/1phancelerku/anime-remove-background/Download The Baby In Yellow APK for Android - Free Horror Game.md
deleted file mode 100644
index e75937313753e28ccad80115fd599f8a33a26ab5..0000000000000000000000000000000000000000
--- a/spaces/1phancelerku/anime-remove-background/Download The Baby In Yellow APK for Android - Free Horror Game.md
+++ /dev/null
@@ -1,133 +0,0 @@
-
-
The Baby In Yellow: A Horror Game That Will Make You Think Twice About Babysitting
-
Introduction
-
If you are looking for a horror game that will challenge your nerves and make you jump out of your seat, you might want to try The Baby In Yellow. This is a first-person horror game developed by Team Terrible, where you will simulate the life of a babysitter. However, the baby you will be looking after is more sinister than he first appears. The Baby In Yellow follows the same premise as the PC game, but it is now available for Android devices. In this article, we will tell you what The Baby In Yellow is about, why it is a horror game, and how you can download and play it on your Android device.
The Baby In Yellow: A Game That Will Test Your Nerves
-
The premise and the gameplay of The Baby In Yellow
-
In The Baby In Yellow, you are a babysitter who has to take care of a baby in a yellow onesie. Sounds easy, right? Well, not quite. The baby is not a normal baby, but a demonic entity that can do strange things. He can teleport, levitate, laugh maniacally, stare at you with glowing eyes, and even summon fire. He can also escape from his crib, his room, or even his house. Your job is to follow the instructions on the screen and try to survive the night. You will have to feed him, change his diaper, put him to bed, and deal with his mischief. But be careful, because he might not like what you do.
-
The graphics and the sound effects of The Baby In Yellow
The game has a low-poly style that creates a contrast between the cute and the creepy, and a dark and eerie atmosphere that builds up tension and suspense. It also has realistic and disturbing sound effects that add to the horror. You will hear the baby's cries, laughs, whispers, and screams, as well as the creaking of doors, the flickering of lights, and the thumping of footsteps.
-
How to Download and Play The Baby In Yellow on Your Android Device
-
The requirements and the compatibility of The Baby In Yellow
-
The game requires Android 4.4 or higher and 136 MB of free space. The game is compatible with most Android devices, but some may experience performance issues. The game is free to download and play, but it may contain ads or in-app purchases.
-
The steps to download and install The Baby In Yellow
-
To download and play The Baby In Yellow on your Android device, you need to follow these steps:
-
-
-
| Step | Instruction |
| --- | --- |
| 1 | Go to one of the trusted sources that offer the APK file of The Baby In Yellow, such as Softonic, Tamindir, or APKCombo. |
| 2 | Tap on the download button and wait for the file to be downloaded. |
| 3 | Go to your device settings and enable the installation of apps from unknown sources. |
| 4 | Locate the downloaded file in your file manager and tap on it to install it. |
| 5 | Launch the game and enjoy the horror. |
-
Conclusion
-
The Baby In Yellow is a horror game that will make you think twice about babysitting. It is a game that will test your nerves and make you jump out of your seat. It is a game that has a low-poly style, a dark and eerie atmosphere, and realistic and disturbing sound effects. It is a game that is available for Android devices and can be downloaded and played for free. If you are looking for a horror game that will challenge you and scare you, you might want to try The Baby In Yellow. But be warned, this is not a game for the faint-hearted.
-
FAQs
-
Here are some frequently asked questions related to The Baby In Yellow:
-
-
Is The Baby In Yellow based on a true story?
-
No, The Baby In Yellow is not based on a true story. It is a fictional horror game inspired by a short film called The Thing in the Apartment Chapter 2, which was directed by John William Ross.
-
Is The Baby In Yellow safe to play?
-
The Baby In Yellow is safe to play as long as you are aware that it is a horror game that contains scary and violent scenes. It is not recommended for children or people who are sensitive to horror or gore. It is also advisable to play it in a well-lit room and with someone else nearby.
-
How long does it take to finish The Baby In Yellow?
-
The Baby In Yellow is a short game that can be finished in about 15 minutes. However, it has multiple endings depending on your choices and actions. You can replay the game to see different outcomes and discover more secrets.
-
-
What are some tips and tricks to play The Baby In Yellow?
-
Some tips and tricks to play The Baby In Yellow are:
-
-
Pay attention to the instructions on the screen and follow them carefully.
-
Use the flashlight to see better in the dark.
-
Avoid looking at the baby's eyes or touching him when he is angry.
-
Hide in the closet or under the bed if you hear something suspicious.
-
Don't let the baby escape from his room or his house.
-
Don't trust everything you see or hear.
-
-
Where can I find more games like The Baby In Yellow?
-
If you enjoyed playing The Baby In Yellow, you might also like these games:
-
-
Five Nights at Freddy's: A horror game where you have to survive five nights in a pizzeria haunted by animatronic animals.
-
Slendrina: The Cellar: A horror game where you have to explore a cellar and avoid a ghostly woman.
-
Eyes: The Horror Game: A horror game where you have to collect valuables in a haunted house and avoid a monster.
-
Hello Neighbor: A stealth horror game where you have to sneak into your neighbor's house and discover his secrets.
-
-
-
I hope you enjoyed reading this article and learned something new. If you have any questions or comments, feel free to leave them below. And if you want to play The Baby In Yellow, don't forget to download it from one of the sources mentioned above. But be careful, because this game is not for the faint-hearted.
-
-
\ No newline at end of file
diff --git a/spaces/1phancelerku/anime-remove-background/Epic War 6 How to Conquer Every Spot on the Board.md b/spaces/1phancelerku/anime-remove-background/Epic War 6 How to Conquer Every Spot on the Board.md
deleted file mode 100644
index 808cd6406055ad106b131e08b65ae938d25d5751..0000000000000000000000000000000000000000
--- a/spaces/1phancelerku/anime-remove-background/Epic War 6 How to Conquer Every Spot on the Board.md
+++ /dev/null
@@ -1,134 +0,0 @@
-
-
Epic War 6 APK: A Thrilling Battle Game for Android
-
If you are looking for a game that combines strategy, action, and fantasy, then you should check out Epic War 6 APK. This is a game that lets you command legendary heroes and a strong army in epic battles against powerful enemies. You can choose from six unique heroes, each with their own strengths, weaknesses, and skills. You can also train and upgrade over 40 battle units, from archers and knights to dragons and giants. You can also challenge and defeat huge titans that will test your skills and strategy. And if you want to compete with other players from around the world, you can enter the PVP Arena and show how epic you are.
In this article, we will tell you everything you need to know about Epic War 6 APK, including its features, how to download and install it, how to play it, how it compares with other games, what are its pros and cons, and what is its review. By the end of this article, you will have a clear idea of whether this game is worth playing or not.
-
Features of Epic War 6 APK
-
Epic War 6 APK has a lot of features that make it a fun and exciting game to play. Here are some of them:
-
-
6 unique heroes: You can choose from six different heroes, each with their own personality, backstory, and abilities. Some of them are based on famous characters from mythology or history, such as Thor, Hercules, or Joan of Arc. Each hero has a special skill that can change the outcome of the battle, such as summoning thunderstorms, healing allies, or boosting morale.
Over 40 battle units: You can train and upgrade a variety of units to fight for you in the battlefield. You can choose from different classes, such as infantry, cavalry, ranged, magic, or special. Each class has its own advantages and disadvantages, and you need to balance your army composition according to the situation. You can also unlock new units as you progress in the game, such as ninjas, samurais, or angels.
-
10 powerful titans: You can face and defeat 10 massive titans that will pose a great challenge to your skills and strategy. These titans are based on mythical creatures, such as dragons, hydras, or krakens. They have different abilities and weaknesses, and you need to find the best way to exploit them. You can also use your hero's skill to deal extra damage or gain an edge in the fight.
-
PVP Arena: You can compete with other players from around the world in the PVP Arena mode. You can choose your hero and units and enter a random match against another player. You can also join a clan and participate in clan wars, where you can cooperate with your clan members and fight against other clans. You can earn rewards and rank up in the leaderboards by winning matches and wars.
-
-
How to Download and Install Epic War 6 APK
-
If you want to play Epic War 6 APK on your Android device, you need to download and install it first. Here are the steps that you need to follow:
-
-
Go to the official website of mob.org: This is one of the best sources for downloading free Android games. You can access it by typing mob.org in your browser or clicking on this link.
-
Search for Epic War 6 APK: Once you are on the website, you can use the search bar to look for Epic War 6 APK. You can also browse through the categories or genres to find it. Alternatively, you can use this direct link to go to the download page of Epic War 6 APK.
-
Click on the download button: When you find the game that you want, you can click on the green download button that says "Download Epic War 6". This will start the download process and you will see a progress bar on your screen.
-
Enable unknown sources on your device settings: Before you can install the APK file that you downloaded, you need to allow your device to install apps from unknown sources. To do this, go to your device settings and look for security or privacy options. Then, find the option that says "Unknown sources" or "Allow installation of apps from unknown sources" and enable it.
-
Install the APK file: After enabling unknown sources, you can go to your file manager or downloads folder and find the APK file that you downloaded. Tap on it and follow the instructions on your screen to install it.
-
Launch the game and enjoy the epic battles: Once the installation is complete, you can launch the game by tapping on its icon on your home screen or app drawer. You can then start playing the game and enjoy the epic battles.
-
Gameplay Tips and Tricks for Epic War 6 APK
-
Epic War 6 APK is a game that requires strategy, skill, and patience. You need to plan your moves carefully and use your resources wisely. Here are some tips and tricks that can help you improve your gameplay and win more battles:
-
-
Choose your hero wisely: Each hero has a different skill that can affect the battle in various ways. For example, Thor can summon thunderstorms that deal damage to all enemies, Hercules can heal all allies and boost their morale, and Joan of Arc can increase the attack and defense of all units. You need to choose the hero that suits your playstyle and strategy, and use their skill at the right time and place.
-
Use spells and skills at the right time and place: Apart from your hero's skill, you can also use spells that you can buy from the shop or earn from quests. These spells can have different effects, such as healing, damaging, freezing, or stunning. You need to use them wisely and strategically, as they have a cooldown time and a limited number of uses. You also need to aim them well, as some of them have a specific target or area of effect.
-
Upgrade your units and heroes regularly: As you progress in the game, you will face stronger enemies and tougher challenges. You need to upgrade your units and heroes regularly to increase their power and performance. You can upgrade them by using gold and gems that you can earn from battles, quests, or achievements. You can also equip them with items that you can buy from the shop or find in chests. These items can enhance their stats or give them special abilities.
-
Experiment with different combinations of units and heroes: There are many possible combinations of units and heroes that you can use in the game. You can mix and match different classes, such as infantry, cavalry, ranged, magic, or special. You can also try different heroes with different skills and abilities. You need to experiment with different combinations to find the best synergy and balance for your army.
-
-
Comparison of Epic War 6 APK with Other Games
-
Epic War 6 APK is not the only game that offers strategy and action in a fantasy setting. There are many other games that have similar or different features and gameplay. Here are some of them and how they compare with Epic War 6 APK:
-
-
-
| Game | Similarities | Differences |
| --- | --- | --- |
| Epic War Saga | Same developer as Epic War 6 APK; similar gameplay but with more RPG elements; same genre of strategy and action | Fewer heroes, units, and titans than Epic War 6 APK; more quests, missions, and achievements; different graphics style and theme |
| Kingdom Rush | Same genre of strategy and action; similar gameplay but with tower defense elements; same theme of fantasy and mythology | Different developer than Epic War 6 APK; fewer heroes and units; no titans or PVP mode |
| Clash of Clans | Same genre of strategy and action; similar gameplay but with base building and army management elements; same theme of fantasy and mythology | Different developer than Epic War 6 APK; more online multiplayer features; different graphics style and tone |
-
Pros and Cons of Epic War 6 APK
-
Epic War 6 APK is a game that has many positive and negative aspects. Here are some of them:
-
Pros
-
-
High-quality graphics: The game has impressive graphics that create a realistic and immersive experience. The heroes, units, and titans are well-designed and animated. The backgrounds and environments are detailed and colorful. The effects and sounds are also realistic and captivating.
-
Addictive gameplay: The game has a simple but engaging gameplay that keeps you hooked for hours. The battles are fast-paced and thrilling, with a lot of strategy and action involved. The game also has a lot of content and features to explore, such as quests, achievements, items, and PVP mode.
-
Diverse heroes and units: The game has a lot of variety and diversity in terms of heroes and units. You can choose from six different heroes, each with their own skills and abilities. You can also train and upgrade over 40 battle units, from archers and knights to dragons and giants. You can also unlock new units as you progress in the game, such as ninjas, samurais, or angels.
-
Online PVP mode: The game has an online PVP mode that lets you compete with other players from around the world. You can choose your hero and units and enter a random match against another player. You can also join a clan and participate in clan wars, where you can cooperate with your clan members and fight against other clans. You can earn rewards and rank up in the leaderboards by winning matches and wars.
-
Free to play: The game is free to download and play on your Android device. You do not need to pay anything to enjoy the game. You can also play the game offline without an internet connection.
-
-
Cons
-
-
High learning curve: The game is not very easy to learn or master. You need to understand the mechanics and strategies of the game, such as how to use your hero's skill, how to upgrade your units, how to use spells, how to defeat titans, etc. You also need to practice a lot to improve your skills and performance.
-
Requires internet connection: The game requires an internet connection to access some of its features, such as PVP mode, clan wars, quests, achievements, etc. If you do not have a stable or fast internet connection, you may experience lagging or crashing issues.
-
May have bugs and glitches: The game may have some bugs and glitches that can affect your gameplay or experience. For example, some users have reported that the game freezes or crashes randomly, that the game does not save their progress or data, that the game does not load properly, etc.
-
May consume battery and storage space: The game may consume a lot of battery power and storage space on your device. This is because the game has high-quality graphics, sounds, and effects that require a lot of resources. You may need to charge your device frequently or clear some space on your device to play the game smoothly.
-
-
Review of Epic War 6 APK
-
Epic War 6 APK is a game that deserves a positive review from us. We think that it is a great game for fans of strategy and action games, with a lot of content and features to enjoy. We like the graphics, the gameplay, the diversity, and the online mode of the game. We think that it is a fun and exciting game to play.
-
However, we also acknowledge that the game has some flaws that need to be fixed or improved. We think that the game is not very easy to learn or master, that it requires an internet connection for some features, that it may have some bugs and glitches, and that it may consume a lot of battery power and storage space on your device.
-
Therefore, we give Epic War 6 APK a rating of 4.5 out of 5 stars based on our experience and feedback from other users. We think that it is a game worth playing if you like strategy and action games.
-
-
Conclusion
-
In conclusion, Epic War 6 APK is a thrilling battle game for Android devices that lets you command legendary heroes and a strong army in epic battles against powerful enemies. You can choose from six unique heroes, each with their own skills and abilities. You can also train and upgrade over 40 battle units, from archers and knights to dragons and giants. You can also challenge and defeat huge titans that will test your skills and strategy. And if you want to compete with other players from around the world, you can enter the PVP Arena and show how epic you are.
-
We have also told you how to download and install Epic War 6 APK on your device, how to play it, how it compares with other games, what are its pros and cons, and what is its review. We hope that this article has been helpful and informative for you.
-
If you are interested in playing Epic War 6 APK, you can download it from the official website of mob.org or use this direct link. You can also visit the official Facebook page of the game for more updates and news. You can also watch this video for a preview of the game.
-
Thank you for reading this article and we hope that you enjoy playing Epic War 6 APK. Have fun and good luck!
-
FAQs
-
Here are some frequently asked questions about Epic War 6 APK:
-
-
What are the requirements to play Epic War 6 APK?
-You need an Android device with Android 4.1 or higher and at least 100 MB of free storage space to play Epic War 6 APK. You also need an internet connection to access some features of the game, such as PVP mode, clan wars, quests, achievements, etc.
-
Is Epic War 6 APK safe to download and install?
-Yes, Epic War 6 APK is safe to download and install on your device. It does not contain any viruses, malware, or spyware that can harm your device or data. However, you need to make sure that you download it from a trusted source, such as mob.org or the direct link that we provided in this article.
-
How can I get more gold and gems in Epic War 6 APK?
-You can get more gold and gems in Epic War 6 APK by winning battles, completing quests, achieving goals, opening chests, watching ads, or buying them with real money. You can use gold and gems to upgrade your units and heroes, buy items and spells, or unlock new features and content.
-
How can I join or create a clan in Epic War 6 APK?
-You can join or create a clan in Epic War 6 APK by going to the clan menu in the game. You can either search for an existing clan that suits your preferences and apply to join it, or create your own clan by choosing a name, a logo, and a description. You can also invite your friends or other players to join your clan. You can participate in clan wars, chat with your clan members, and share resources and tips with them.
-
How can I contact the developer of Epic War 6 APK?
-You can contact the developer of Epic War 6 APK by sending an email to epicwar@artlogicgames.com or by visiting their website at www.artlogicgames.com. You can also follow them on Facebook at www.facebook.com/epicwargames. You can send them your feedback, suggestions, questions, or complaints about the game.
-
-
-
\ No newline at end of file
diff --git a/spaces/1toTree/lora_test/ppdiffusers/pipelines/latent_diffusion_uncond/__init__.py b/spaces/1toTree/lora_test/ppdiffusers/pipelines/latent_diffusion_uncond/__init__.py
deleted file mode 100644
index 3286d84f41f239bbd3662100aaa85257c47cbab5..0000000000000000000000000000000000000000
--- a/spaces/1toTree/lora_test/ppdiffusers/pipelines/latent_diffusion_uncond/__init__.py
+++ /dev/null
@@ -1,17 +0,0 @@
-# Copyright (c) 2022 PaddlePaddle Authors. All Rights Reserved.
-# Copyright 2022 The HuggingFace Team. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-# flake8: noqa
-from .pipeline_latent_diffusion_uncond import LDMPipeline
diff --git a/spaces/AIFILMS/generate_human_motion/app.py b/spaces/AIFILMS/generate_human_motion/app.py
deleted file mode 100644
index 58c1cc635a5a4e8e6e00680a2ab5413668bdbe20..0000000000000000000000000000000000000000
--- a/spaces/AIFILMS/generate_human_motion/app.py
+++ /dev/null
@@ -1,319 +0,0 @@
-import sys
-import os
-import OpenGL.GL as gl
-os.environ["PYOPENGL_PLATFORM"] = "egl"
-os.environ["MESA_GL_VERSION_OVERRIDE"] = "4.1"
-os.system('pip install /home/user/app/pyrender')
-
-sys.argv = ['VQ-Trans/GPT_eval_multi.py']
-os.chdir('VQ-Trans')
-
-sys.path.append('/home/user/app/VQ-Trans')
-sys.path.append('/home/user/app/pyrender')
-
-import options.option_transformer as option_trans
-from huggingface_hub import snapshot_download
-model_path = snapshot_download(repo_id="vumichien/T2M-GPT")
-
-args = option_trans.get_args_parser()
-
-args.dataname = 't2m'
-args.resume_pth = f'{model_path}/VQVAE/net_last.pth'
-args.resume_trans = f'{model_path}/VQTransformer_corruption05/net_best_fid.pth'
-args.down_t = 2
-args.depth = 3
-args.block_size = 51
-
-import clip
-import torch
-import numpy as np
-import models.vqvae as vqvae
-import models.t2m_trans as trans
-from utils.motion_process import recover_from_ric
-import visualization.plot_3d_global as plot_3d
-from models.rotation2xyz import Rotation2xyz
-import numpy as np
-from trimesh import Trimesh
-import gc
-
-import torch
-from visualize.simplify_loc2rot import joints2smpl
-import pyrender
-# import matplotlib.pyplot as plt
-
-import io
-import imageio
-from shapely import geometry
-import trimesh
-from pyrender.constants import RenderFlags
-import math
-# import ffmpeg
-# from PIL import Image
-import hashlib
-import gradio as gr
-import moviepy.editor as mp
-
-## load clip model and datasets
-is_cuda = torch.cuda.is_available()
-device = torch.device("cuda" if is_cuda else "cpu")
-print(device)
-clip_model, clip_preprocess = clip.load("ViT-B/32", device=device, jit=False, download_root='./') # Must set jit=False for training
-
-if is_cuda:
- clip.model.convert_weights(clip_model)
-
-clip_model.eval()
-for p in clip_model.parameters():
- p.requires_grad = False
-
-net = vqvae.HumanVQVAE(args, ## use args to define different parameters in different quantizers
- args.nb_code,
- args.code_dim,
- args.output_emb_width,
- args.down_t,
- args.stride_t,
- args.width,
- args.depth,
- args.dilation_growth_rate)
-
-
-trans_encoder = trans.Text2Motion_Transformer(num_vq=args.nb_code,
- embed_dim=1024,
- clip_dim=args.clip_dim,
- block_size=args.block_size,
- num_layers=9,
- n_head=16,
- drop_out_rate=args.drop_out_rate,
- fc_rate=args.ff_rate)
-
-
-print('loading checkpoint from {}'.format(args.resume_pth))
-ckpt = torch.load(args.resume_pth, map_location='cpu')
-net.load_state_dict(ckpt['net'], strict=True)
-net.eval()
-
-print('loading transformer checkpoint from {}'.format(args.resume_trans))
-ckpt = torch.load(args.resume_trans, map_location='cpu')
-trans_encoder.load_state_dict(ckpt['trans'], strict=True)
-trans_encoder.eval()
-
-mean = torch.from_numpy(np.load(f'{model_path}/meta/mean.npy'))
-std = torch.from_numpy(np.load(f'{model_path}/meta/std.npy'))
-
-if is_cuda:
- net.cuda()
- trans_encoder.cuda()
- mean = mean.cuda()
- std = std.cuda()
-
-def render(motions, device_id=0, name='test_vis'):
- frames, njoints, nfeats = motions.shape
- MINS = motions.min(axis=0).min(axis=0)
- MAXS = motions.max(axis=0).max(axis=0)
-
- height_offset = MINS[1]
- motions[:, :, 1] -= height_offset
- trajec = motions[:, 0, [0, 2]]
- is_cuda = torch.cuda.is_available()
- # device = torch.device("cuda" if is_cuda else "cpu")
- j2s = joints2smpl(num_frames=frames, device_id=0, cuda=is_cuda)
- rot2xyz = Rotation2xyz(device=device)
- faces = rot2xyz.smpl_model.faces
-
- if not os.path.exists(f'output/{name}_pred.pt'):
- print(f'Running SMPLify, it may take a few minutes.')
- motion_tensor, opt_dict = j2s.joint2smpl(motions) # [nframes, njoints, 3]
-
- vertices = rot2xyz(torch.tensor(motion_tensor).clone(), mask=None,
- pose_rep='rot6d', translation=True, glob=True,
- jointstype='vertices',
- vertstrans=True)
- vertices = vertices.detach().cpu()
- torch.save(vertices, f'output/{name}_pred.pt')
- else:
- vertices = torch.load(f'output/{name}_pred.pt')
- frames = vertices.shape[3] # shape: 1, nb_frames, 3, nb_joints
- print(vertices.shape)
- MINS = torch.min(torch.min(vertices[0], axis=0)[0], axis=1)[0]
- MAXS = torch.max(torch.max(vertices[0], axis=0)[0], axis=1)[0]
-
- out_list = []
-
- minx = MINS[0] - 0.5
- maxx = MAXS[0] + 0.5
- minz = MINS[2] - 0.5
- maxz = MAXS[2] + 0.5
- polygon = geometry.Polygon([[minx, minz], [minx, maxz], [maxx, maxz], [maxx, minz]])
- polygon_mesh = trimesh.creation.extrude_polygon(polygon, 1e-5)
-
- vid = []
- for i in range(frames):
- if i % 10 == 0:
- print(i)
-
- mesh = Trimesh(vertices=vertices[0, :, :, i].squeeze().tolist(), faces=faces)
-
- base_color = (0.11, 0.53, 0.8, 0.5)
- ## OPAQUE rendering without alpha
- ## BLEND rendering consider alpha
- material = pyrender.MetallicRoughnessMaterial(
- metallicFactor=0.7,
- alphaMode='OPAQUE',
- baseColorFactor=base_color
- )
-
-
- mesh = pyrender.Mesh.from_trimesh(mesh, material=material)
-
- polygon_mesh.visual.face_colors = [0, 0, 0, 0.21]
- polygon_render = pyrender.Mesh.from_trimesh(polygon_mesh, smooth=False)
-
- bg_color = [1, 1, 1, 0.8]
- scene = pyrender.Scene(bg_color=bg_color, ambient_light=(0.4, 0.4, 0.4))
-
- sx, sy, tx, ty = [0.75, 0.75, 0, 0.10]
-
- camera = pyrender.PerspectiveCamera(yfov=(np.pi / 3.0))
-
- light = pyrender.DirectionalLight(color=[1,1,1], intensity=300)
-
- scene.add(mesh)
-
- c = np.pi / 2
-
- scene.add(polygon_render, pose=np.array([[ 1, 0, 0, 0],
-
- [ 0, np.cos(c), -np.sin(c), MINS[1].cpu().numpy()],
-
- [ 0, np.sin(c), np.cos(c), 0],
-
- [ 0, 0, 0, 1]]))
-
- light_pose = np.eye(4)
- light_pose[:3, 3] = [0, -1, 1]
- scene.add(light, pose=light_pose.copy())
-
- light_pose[:3, 3] = [0, 1, 1]
- scene.add(light, pose=light_pose.copy())
-
- light_pose[:3, 3] = [1, 1, 2]
- scene.add(light, pose=light_pose.copy())
-
-
- c = -np.pi / 6
-
- scene.add(camera, pose=[[ 1, 0, 0, (minx+maxx).cpu().numpy()/2],
-
- [ 0, np.cos(c), -np.sin(c), 1.5],
-
- [ 0, np.sin(c), np.cos(c), max(4, minz.cpu().numpy()+(1.5-MINS[1].cpu().numpy())*2, (maxx-minx).cpu().numpy())],
-
- [ 0, 0, 0, 1]
- ])
-
- # render scene
- r = pyrender.OffscreenRenderer(960, 960)
-
- color, _ = r.render(scene, flags=RenderFlags.RGBA)
- # Image.fromarray(color).save(outdir+'/'+name+'_'+str(i)+'.png')
-
- vid.append(color)
-
- r.delete()
-
- out = np.stack(vid, axis=0)
- imageio.mimwrite(f'output/results.gif', out, fps=20)
- out_video = mp.VideoFileClip(f'output/results.gif')
- out_video.write_videofile("output/results.mp4")
- del out, vertices
- return f'output/results.mp4'
-
-def predict(clip_text, method='fast'):
- gc.collect()
- if torch.cuda.is_available():
- text = clip.tokenize([clip_text], truncate=True).cuda()
- else:
- text = clip.tokenize([clip_text], truncate=True)
- feat_clip_text = clip_model.encode_text(text).float()
- index_motion = trans_encoder.sample(feat_clip_text[0:1], False)
- pred_pose = net.forward_decoder(index_motion)
- pred_xyz = recover_from_ric((pred_pose*std+mean).float(), 22)
- output_name = hashlib.md5(clip_text.encode()).hexdigest()
- if method == 'fast':
- xyz = pred_xyz.reshape(1, -1, 22, 3)
- pose_vis = plot_3d.draw_to_batch(xyz.detach().cpu().numpy(), title_batch=None, outname=[f'output/results.gif'])
- out_video = mp.VideoFileClip("output/results.gif")
- out_video.write_videofile("output/results.mp4")
- return f'output/results.mp4'
- elif method == 'slow':
- output_path = render(pred_xyz.detach().cpu().numpy().squeeze(axis=0), device_id=0, name=output_name)
- return output_path
-
-
-# ---- Gradio Layout -----
-text_prompt = gr.Textbox(label="Text prompt", lines=1, interactive=True)
-video_out = gr.Video(label="Motion", mirror_webcam=False, interactive=False)
-demo = gr.Blocks()
-demo.encrypt = False
-
-with demo:
- gr.Markdown('''
-
-
Generating Human Motion from Textual Descriptions (T2M-GPT)
-        This space uses T2M-GPT models based on Vector Quantised-Variational AutoEncoder (VQ-VAE) and Generative Pre-trained Transformer (GPT) for human motion generation from textual descriptions🤗
-
- ''')
- with gr.Row():
- with gr.Column():
- gr.Markdown('''
-
-
-            a man starts off in an upright position with both arms extended out by his sides, he then brings his arms down to his body and claps his hands together. after this he walks down and to the left where he proceeds to sit on a seat
-
-
- ''')
- with gr.Column():
- gr.Markdown('''
-
-
- a person puts their hands together, leans forwards slightly then swings the arms from right to left
-
-
- ''')
- with gr.Column():
- gr.Markdown('''
-
-
- a man is practicing the waltz with a partner
-
-
- ''')
- with gr.Row():
- with gr.Column():
- gr.Markdown('''
-                    ### Generate human motion with **T2M-GPT**
-                    ##### Step 1. Enter a text prompt describing the human motion
-                    ##### Step 2. Choose a rendering method (Fast: skeleton sketch; Slow: SMPL mesh, which requires a GPU and takes around 2 minutes)
-                    ##### Step 3. Generate the output and enjoy
- ''')
- with gr.Column():
- with gr.Row():
- text_prompt.render()
- method = gr.Dropdown(["slow", "fast"], label="Method", value="slow")
- with gr.Row():
- generate_btn = gr.Button("Generate")
- generate_btn.click(predict, [text_prompt, method], [video_out], api_name="generate")
- print(video_out)
- with gr.Row():
- video_out.render()
- with gr.Row():
- gr.Markdown('''
- ### You can test by following examples:
- ''')
- examples = gr.Examples(examples=
- [ "a person jogs in place, slowly at first, then increases speed. they then back up and squat down.",
- "a man steps forward and does a handstand",
- "a man rises from the ground, walks in a circle and sits back down on the ground"],
- label="Examples", inputs=[text_prompt])
-
-demo.launch(debug=True)
diff --git a/spaces/AIZerotoHero-Health4All/02-ClinicalTerminology/README.md b/spaces/AIZerotoHero-Health4All/02-ClinicalTerminology/README.md
deleted file mode 100644
index 28796cea638944008464739ccfd3773687e64b3b..0000000000000000000000000000000000000000
--- a/spaces/AIZerotoHero-Health4All/02-ClinicalTerminology/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: 02 ClinicalTerminology
-emoji: 🐠
-colorFrom: red
-colorTo: green
-sdk: gradio
-sdk_version: 3.12.0
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/ASJMO/freegpt/client/css/buttons.css b/spaces/ASJMO/freegpt/client/css/buttons.css
deleted file mode 100644
index e13f52d9a0414daaa80518bd205913a645a29563..0000000000000000000000000000000000000000
--- a/spaces/ASJMO/freegpt/client/css/buttons.css
+++ /dev/null
@@ -1,4 +0,0 @@
-.buttons {
- display: flex;
- justify-content: left;
-}
diff --git a/spaces/AchyuthGamer/OpenGPT/g4f/Provider/Providers/GptGod.py b/spaces/AchyuthGamer/OpenGPT/g4f/Provider/Providers/GptGod.py
deleted file mode 100644
index 662884ddbec5ebffa03aae98a36727ff2cb6c366..0000000000000000000000000000000000000000
--- a/spaces/AchyuthGamer/OpenGPT/g4f/Provider/Providers/GptGod.py
+++ /dev/null
@@ -1,51 +0,0 @@
-from __future__ import annotations
-import secrets, json
-from aiohttp import ClientSession
-from typing import AsyncGenerator
-from .base_provider import AsyncGeneratorProvider
-from .helper import format_prompt
-
-class GptGod(AsyncGeneratorProvider):
- url = "https://gptgod.site"
- supports_gpt_35_turbo = True
- working = True
-
- @classmethod
- async def create_async_generator(
- cls,
- model: str,
- messages: list[dict[str, str]],
- **kwargs
- ) -> AsyncGenerator:
- headers = {
- "User-Agent": "Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:109.0) Gecko/20100101 Firefox/118.0",
- "Accept": "text/event-stream",
- "Accept-Language": "de,en-US;q=0.7,en;q=0.3",
- "Accept-Encoding": "gzip, deflate, br",
- "Alt-Used": "gptgod.site",
- "Connection": "keep-alive",
- "Referer": "https://gptgod.site/",
- "Sec-Fetch-Dest": "empty",
- "Sec-Fetch-Mode": "cors",
- "Sec-Fetch-Site": "same-origin",
- "Pragma": "no-cache",
- "Cache-Control": "no-cache",
- }
- async with ClientSession(headers=headers) as session:
- prompt = format_prompt(messages)
- data = {
- "content": prompt,
- "id": secrets.token_hex(16).zfill(32)
- }
- async with session.get(f"{cls.url}/api/session/free/gpt3p5", params=data) as response:
- response.raise_for_status()
- event = None
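-                # Parse the server-sent event stream: an "event:" line names the following "data:" payload; stop at the "done" event.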
- async for line in response.content:
- if line.startswith(b'event: '):
- event = line[7:-1]
- elif event == b"data" and line.startswith(b"data: "):
- data = json.loads(line[6:-1])
- if data:
- yield data
- elif event == b"done":
- break
\ No newline at end of file
diff --git a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/maker/builders/Builders.js b/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/maker/builders/Builders.js
deleted file mode 100644
index bedfa7a49c502236aa2dbb9f26cdfd45b98b8cd1..0000000000000000000000000000000000000000
--- a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/maker/builders/Builders.js
+++ /dev/null
@@ -1,79 +0,0 @@
-import CreateImage from './CreateImage.js';
-import CreateSprite from './CreateSprite.js';
-import CreateVideo from './CreateVideo.js';
-import CreateText from './CreateText.js';
-import CreateBBCodeText from './CreateBBCodeText.js';
-import CreateRoundRectangle from './CreateRoundRectangle.js';
-import CreateNinePatch from './CreateNinePatch.js';
-import CreateNinePatch2 from './CreateNinePatch2.js';
-import CreateCanvas from './CreateCanvas.js';
-import CreateCircleMaskImage from './CreateCircleMaskImage.js';
-import CreateSpace from './CreateSpace.js';
-
-import CreateSizer from './CreateSizer.js';
-import CreateFixWidthSizer from './CreateFixWidthSizer.js';
-import CreateGridSizer from './CreateGridSizer.js';
-import CreateOverlapSizer from './CreateOverlapSizer.js';
-
-import CreateButtons from './CreateButtons.js';
-import CreateFixWidthButtons from './CreateFixWidthButtons.js';
-import CreateGridButtons from './CreateGridButtons.js';
-
-import CreateLabel from './CreateLabel.js';
-import CreateBadgeLabel from './CreateBadgeLabel.js';
-import CreateDialog from './CreateDialog.js';
-import CreateTextBox from './CreateTextBox.js';
-import CreateSlider from './CreateSlider.js';
-import CreateNumberBar from './CreateNumberBar.js';
-import CreateScrollBar from './CreateScrollBar.js';
-import CreateTextArea from './CreateTextArea.js';
-import CreatePages from './CreatePages.js';
-import CreateToast from './CreateToast.js';
-import CreateKnob from './CreateKnob.js';
-import CreateHolyGrail from './CreateHolyGrail.js';
-import CreateMenu from './CreateMenu.js';
-
-var Builders = {
- Image: CreateImage,
- Sprite: CreateSprite,
- Video: CreateVideo,
- Text: CreateText,
- BBCodeText: CreateBBCodeText,
- RoundRectangle: CreateRoundRectangle,
- Ninepatch: CreateNinePatch,
- Ninepatch2: CreateNinePatch2,
- Canvas: CreateCanvas,
- CircleMaskImage: CreateCircleMaskImage,
- Space: CreateSpace,
-
- Sizer: CreateSizer,
- FixWidthSizer: CreateFixWidthSizer,
- GridSizer: CreateGridSizer,
- OverlapSizer: CreateOverlapSizer,
-
- Buttons: CreateButtons,
- FixWidthButtons: CreateFixWidthButtons,
- GridButtons: CreateGridButtons,
-
- Label: CreateLabel,
- BadgeLabel: CreateBadgeLabel,
- Dialog: CreateDialog,
- TextBox: CreateTextBox,
- Slider: CreateSlider,
- NumberBar: CreateNumberBar,
- ScrollBar: CreateScrollBar,
- TextArea: CreateTextArea,
- Pages: CreatePages,
- Toast: CreateToast,
- Knob: CreateKnob,
- HolyGrail: CreateHolyGrail,
- Menu: CreateMenu,
-};
-
-/*
-function(scene, data, view, styles, customBuilders) {
- return gameObject;
-}
-*/
-
-export default Builders;
diff --git a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/maker/builders/CreateLabel.js b/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/maker/builders/CreateLabel.js
deleted file mode 100644
index 8c20f7f845f90be917a21a9cc0596c3cd8afabe5..0000000000000000000000000000000000000000
--- a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/maker/builders/CreateLabel.js
+++ /dev/null
@@ -1,8 +0,0 @@
-import CreateAnyLabel from './utils/CreateAnyLabel.js';
-import Label from '../../label/Label.js';
-
-var CreateLabel = function (scene, data, view, styles, customBuilders) {
- return CreateAnyLabel(scene, data, view, styles, customBuilders, Label);
-}
-
-export default CreateLabel;
\ No newline at end of file
diff --git a/spaces/Allie7/Nose/Dockerfile b/spaces/Allie7/Nose/Dockerfile
deleted file mode 100644
index e903078eb67547d100c8e5548b2d7959ce565413..0000000000000000000000000000000000000000
--- a/spaces/Allie7/Nose/Dockerfile
+++ /dev/null
@@ -1,21 +0,0 @@
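-# Builds khanon's oai-reverse-proxy from source and serves it on port 7860, the port Hugging Face Spaces expects.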
-FROM node:18-bullseye-slim
-
-RUN apt-get update && \
-    apt-get install -y git
-
-RUN git clone https://gitgud.io/khanon/oai-reverse-proxy.git /app
-
-WORKDIR /app
-
-RUN npm install
-
-COPY Dockerfile greeting.md env* ./
-
-RUN npm run build
-
-EXPOSE 7860
-
-ENV NODE_ENV=production
-
-CMD ["npm", "start"]
\ No newline at end of file
diff --git a/spaces/AnTo2209/3D_Zeroshot_Neural_Style_Transfer/src/model/__init__.py b/spaces/AnTo2209/3D_Zeroshot_Neural_Style_Transfer/src/model/__init__.py
deleted file mode 100644
index b6602d66834efa27a8b88c5eb92ed901389bd9ca..0000000000000000000000000000000000000000
--- a/spaces/AnTo2209/3D_Zeroshot_Neural_Style_Transfer/src/model/__init__.py
+++ /dev/null
@@ -1,6 +0,0 @@
-from src.model.styleRF import StyleRF
-from src.utils.registry import Registry
-
-MODEL_REGISTRY = Registry("MODEL")
-
-MODEL_REGISTRY.register(StyleRF)
\ No newline at end of file
diff --git a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/examples/community/stable_diffusion_tensorrt_img2img.py b/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/examples/community/stable_diffusion_tensorrt_img2img.py
deleted file mode 100644
index 67c7c2d00fbf53f26e42aa96dc5e049ea3b3d796..0000000000000000000000000000000000000000
--- a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/examples/community/stable_diffusion_tensorrt_img2img.py
+++ /dev/null
@@ -1,1055 +0,0 @@
-#
-# Copyright 2023 The HuggingFace Inc. team.
-# SPDX-FileCopyrightText: Copyright (c) 1993-2023 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
-# SPDX-License-Identifier: Apache-2.0
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-import gc
-import os
-from collections import OrderedDict
-from copy import copy
-from typing import List, Optional, Union
-
-import numpy as np
-import onnx
-import onnx_graphsurgeon as gs
-import PIL
-import tensorrt as trt
-import torch
-from huggingface_hub import snapshot_download
-from onnx import shape_inference
-from polygraphy import cuda
-from polygraphy.backend.common import bytes_from_path
-from polygraphy.backend.onnx.loader import fold_constants
-from polygraphy.backend.trt import (
- CreateConfig,
- Profile,
- engine_from_bytes,
- engine_from_network,
- network_from_onnx_path,
- save_engine,
-)
-from polygraphy.backend.trt import util as trt_util
-from transformers import CLIPFeatureExtractor, CLIPTextModel, CLIPTokenizer
-
-from diffusers.models import AutoencoderKL, UNet2DConditionModel
-from diffusers.pipelines.stable_diffusion import (
- StableDiffusionImg2ImgPipeline,
- StableDiffusionPipelineOutput,
- StableDiffusionSafetyChecker,
-)
-from diffusers.schedulers import DDIMScheduler
-from diffusers.utils import DIFFUSERS_CACHE, logging
-
-
-"""
-Installation instructions
-python3 -m pip install --upgrade transformers diffusers>=0.16.0
-python3 -m pip install --upgrade tensorrt>=8.6.1
-python3 -m pip install --upgrade polygraphy>=0.47.0 onnx-graphsurgeon --extra-index-url https://pypi.ngc.nvidia.com
-python3 -m pip install onnxruntime
-"""
-
-TRT_LOGGER = trt.Logger(trt.Logger.ERROR)
-logger = logging.get_logger(__name__) # pylint: disable=invalid-name
-
-# Map of numpy dtype -> torch dtype
-numpy_to_torch_dtype_dict = {
- np.uint8: torch.uint8,
- np.int8: torch.int8,
- np.int16: torch.int16,
- np.int32: torch.int32,
- np.int64: torch.int64,
- np.float16: torch.float16,
- np.float32: torch.float32,
- np.float64: torch.float64,
- np.complex64: torch.complex64,
- np.complex128: torch.complex128,
-}
-if np.version.full_version >= "1.24.0":
- numpy_to_torch_dtype_dict[np.bool_] = torch.bool
-else:
- numpy_to_torch_dtype_dict[np.bool] = torch.bool
-
-# Map of torch dtype -> numpy dtype
-torch_to_numpy_dtype_dict = {value: key for (key, value) in numpy_to_torch_dtype_dict.items()}
-
-
-def device_view(t):
- return cuda.DeviceView(ptr=t.data_ptr(), shape=t.shape, dtype=torch_to_numpy_dtype_dict[t.dtype])
-
-
-def preprocess_image(image):
- """
-    Resize a PIL image to a multiple of 32, rescale it to [-1, 1], and return it as an NCHW torch tensor.
- """
- w, h = image.size
- w, h = (x - x % 32 for x in (w, h)) # resize to integer multiple of 32
- image = image.resize((w, h))
- image = np.array(image).astype(np.float32) / 255.0
- image = image[None].transpose(0, 3, 1, 2)
- image = torch.from_numpy(image).contiguous()
- return 2.0 * image - 1.0
-
-
-class Engine:
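-    """Thin wrapper around a TensorRT engine: build it from an ONNX file, load and activate it, allocate I/O buffers, and run inference."""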
- def __init__(self, engine_path):
- self.engine_path = engine_path
- self.engine = None
- self.context = None
- self.buffers = OrderedDict()
- self.tensors = OrderedDict()
-
- def __del__(self):
- [buf.free() for buf in self.buffers.values() if isinstance(buf, cuda.DeviceArray)]
- del self.engine
- del self.context
- del self.buffers
- del self.tensors
-
- def build(
- self,
- onnx_path,
- fp16,
- input_profile=None,
- enable_preview=False,
- enable_all_tactics=False,
- timing_cache=None,
- workspace_size=0,
- ):
- logger.warning(f"Building TensorRT engine for {onnx_path}: {self.engine_path}")
- p = Profile()
- if input_profile:
- for name, dims in input_profile.items():
- assert len(dims) == 3
- p.add(name, min=dims[0], opt=dims[1], max=dims[2])
-
- config_kwargs = {}
-
- config_kwargs["preview_features"] = [trt.PreviewFeature.DISABLE_EXTERNAL_TACTIC_SOURCES_FOR_CORE_0805]
- if enable_preview:
- # Faster dynamic shapes made optional since it increases engine build time.
- config_kwargs["preview_features"].append(trt.PreviewFeature.FASTER_DYNAMIC_SHAPES_0805)
- if workspace_size > 0:
- config_kwargs["memory_pool_limits"] = {trt.MemoryPoolType.WORKSPACE: workspace_size}
- if not enable_all_tactics:
- config_kwargs["tactic_sources"] = []
-
- engine = engine_from_network(
- network_from_onnx_path(onnx_path, flags=[trt.OnnxParserFlag.NATIVE_INSTANCENORM]),
- config=CreateConfig(fp16=fp16, profiles=[p], load_timing_cache=timing_cache, **config_kwargs),
- save_timing_cache=timing_cache,
- )
- save_engine(engine, path=self.engine_path)
-
- def load(self):
- logger.warning(f"Loading TensorRT engine: {self.engine_path}")
- self.engine = engine_from_bytes(bytes_from_path(self.engine_path))
-
- def activate(self):
- self.context = self.engine.create_execution_context()
-
- def allocate_buffers(self, shape_dict=None, device="cuda"):
- for idx in range(trt_util.get_bindings_per_profile(self.engine)):
- binding = self.engine[idx]
- if shape_dict and binding in shape_dict:
- shape = shape_dict[binding]
- else:
- shape = self.engine.get_binding_shape(binding)
- dtype = trt.nptype(self.engine.get_binding_dtype(binding))
- if self.engine.binding_is_input(binding):
- self.context.set_binding_shape(idx, shape)
- tensor = torch.empty(tuple(shape), dtype=numpy_to_torch_dtype_dict[dtype]).to(device=device)
- self.tensors[binding] = tensor
- self.buffers[binding] = cuda.DeviceView(ptr=tensor.data_ptr(), shape=shape, dtype=dtype)
-
- def infer(self, feed_dict, stream):
- start_binding, end_binding = trt_util.get_active_profile_bindings(self.context)
- # shallow copy of ordered dict
- device_buffers = copy(self.buffers)
- for name, buf in feed_dict.items():
- assert isinstance(buf, cuda.DeviceView)
- device_buffers[name] = buf
- bindings = [0] * start_binding + [buf.ptr for buf in device_buffers.values()]
- noerror = self.context.execute_async_v2(bindings=bindings, stream_handle=stream.ptr)
- if not noerror:
- raise ValueError("ERROR: inference failed.")
-
- return self.tensors
-
-
-class Optimizer:
- def __init__(self, onnx_graph):
- self.graph = gs.import_onnx(onnx_graph)
-
- def cleanup(self, return_onnx=False):
- self.graph.cleanup().toposort()
- if return_onnx:
- return gs.export_onnx(self.graph)
-
- def select_outputs(self, keep, names=None):
- self.graph.outputs = [self.graph.outputs[o] for o in keep]
- if names:
- for i, name in enumerate(names):
- self.graph.outputs[i].name = name
-
- def fold_constants(self, return_onnx=False):
- onnx_graph = fold_constants(gs.export_onnx(self.graph), allow_onnxruntime_shape_inference=True)
- self.graph = gs.import_onnx(onnx_graph)
- if return_onnx:
- return onnx_graph
-
- def infer_shapes(self, return_onnx=False):
- onnx_graph = gs.export_onnx(self.graph)
- if onnx_graph.ByteSize() > 2147483648:
- raise TypeError("ERROR: model size exceeds supported 2GB limit")
- else:
- onnx_graph = shape_inference.infer_shapes(onnx_graph)
-
- self.graph = gs.import_onnx(onnx_graph)
- if return_onnx:
- return onnx_graph
-
-
-class BaseModel:
- def __init__(self, model, fp16=False, device="cuda", max_batch_size=16, embedding_dim=768, text_maxlen=77):
- self.model = model
- self.name = "SD Model"
- self.fp16 = fp16
- self.device = device
-
- self.min_batch = 1
- self.max_batch = max_batch_size
- self.min_image_shape = 256 # min image resolution: 256x256
- self.max_image_shape = 1024 # max image resolution: 1024x1024
- self.min_latent_shape = self.min_image_shape // 8
- self.max_latent_shape = self.max_image_shape // 8
-
- self.embedding_dim = embedding_dim
- self.text_maxlen = text_maxlen
-
- def get_model(self):
- return self.model
-
- def get_input_names(self):
- pass
-
- def get_output_names(self):
- pass
-
- def get_dynamic_axes(self):
- return None
-
- def get_sample_input(self, batch_size, image_height, image_width):
- pass
-
- def get_input_profile(self, batch_size, image_height, image_width, static_batch, static_shape):
- return None
-
- def get_shape_dict(self, batch_size, image_height, image_width):
- return None
-
- def optimize(self, onnx_graph):
- opt = Optimizer(onnx_graph)
- opt.cleanup()
- opt.fold_constants()
- opt.infer_shapes()
- onnx_opt_graph = opt.cleanup(return_onnx=True)
- return onnx_opt_graph
-
- def check_dims(self, batch_size, image_height, image_width):
- assert batch_size >= self.min_batch and batch_size <= self.max_batch
-        assert image_height % 8 == 0 and image_width % 8 == 0
- latent_height = image_height // 8
- latent_width = image_width // 8
- assert latent_height >= self.min_latent_shape and latent_height <= self.max_latent_shape
- assert latent_width >= self.min_latent_shape and latent_width <= self.max_latent_shape
- return (latent_height, latent_width)
-
- def get_minmax_dims(self, batch_size, image_height, image_width, static_batch, static_shape):
- min_batch = batch_size if static_batch else self.min_batch
- max_batch = batch_size if static_batch else self.max_batch
- latent_height = image_height // 8
- latent_width = image_width // 8
- min_image_height = image_height if static_shape else self.min_image_shape
- max_image_height = image_height if static_shape else self.max_image_shape
- min_image_width = image_width if static_shape else self.min_image_shape
- max_image_width = image_width if static_shape else self.max_image_shape
- min_latent_height = latent_height if static_shape else self.min_latent_shape
- max_latent_height = latent_height if static_shape else self.max_latent_shape
- min_latent_width = latent_width if static_shape else self.min_latent_shape
- max_latent_width = latent_width if static_shape else self.max_latent_shape
- return (
- min_batch,
- max_batch,
- min_image_height,
- max_image_height,
- min_image_width,
- max_image_width,
- min_latent_height,
- max_latent_height,
- min_latent_width,
- max_latent_width,
- )
-
-
-def getOnnxPath(model_name, onnx_dir, opt=True):
- return os.path.join(onnx_dir, model_name + (".opt" if opt else "") + ".onnx")
-
-
-def getEnginePath(model_name, engine_dir):
- return os.path.join(engine_dir, model_name + ".plan")
-
-
-def build_engines(
- models: dict,
- engine_dir,
- onnx_dir,
- onnx_opset,
- opt_image_height,
- opt_image_width,
- opt_batch_size=1,
- force_engine_rebuild=False,
- static_batch=False,
- static_shape=True,
- enable_preview=False,
- enable_all_tactics=False,
- timing_cache=None,
- max_workspace_size=0,
-):
- built_engines = {}
- if not os.path.isdir(onnx_dir):
- os.makedirs(onnx_dir)
- if not os.path.isdir(engine_dir):
- os.makedirs(engine_dir)
-
- # Export models to ONNX
- for model_name, model_obj in models.items():
- engine_path = getEnginePath(model_name, engine_dir)
- if force_engine_rebuild or not os.path.exists(engine_path):
- logger.warning("Building Engines...")
- logger.warning("Engine build can take a while to complete")
- onnx_path = getOnnxPath(model_name, onnx_dir, opt=False)
- onnx_opt_path = getOnnxPath(model_name, onnx_dir)
- if force_engine_rebuild or not os.path.exists(onnx_opt_path):
- if force_engine_rebuild or not os.path.exists(onnx_path):
- logger.warning(f"Exporting model: {onnx_path}")
- model = model_obj.get_model()
- with torch.inference_mode(), torch.autocast("cuda"):
- inputs = model_obj.get_sample_input(opt_batch_size, opt_image_height, opt_image_width)
- torch.onnx.export(
- model,
- inputs,
- onnx_path,
- export_params=True,
- opset_version=onnx_opset,
- do_constant_folding=True,
- input_names=model_obj.get_input_names(),
- output_names=model_obj.get_output_names(),
- dynamic_axes=model_obj.get_dynamic_axes(),
- )
- del model
- torch.cuda.empty_cache()
- gc.collect()
- else:
- logger.warning(f"Found cached model: {onnx_path}")
-
- # Optimize onnx
- if force_engine_rebuild or not os.path.exists(onnx_opt_path):
- logger.warning(f"Generating optimizing model: {onnx_opt_path}")
- onnx_opt_graph = model_obj.optimize(onnx.load(onnx_path))
- onnx.save(onnx_opt_graph, onnx_opt_path)
- else:
- logger.warning(f"Found cached optimized model: {onnx_opt_path} ")
-
- # Build TensorRT engines
- for model_name, model_obj in models.items():
- engine_path = getEnginePath(model_name, engine_dir)
- engine = Engine(engine_path)
- onnx_path = getOnnxPath(model_name, onnx_dir, opt=False)
- onnx_opt_path = getOnnxPath(model_name, onnx_dir)
-
- if force_engine_rebuild or not os.path.exists(engine.engine_path):
- engine.build(
- onnx_opt_path,
- fp16=True,
- input_profile=model_obj.get_input_profile(
- opt_batch_size,
- opt_image_height,
- opt_image_width,
- static_batch=static_batch,
- static_shape=static_shape,
- ),
- enable_preview=enable_preview,
- timing_cache=timing_cache,
- workspace_size=max_workspace_size,
- )
- built_engines[model_name] = engine
-
- # Load and activate TensorRT engines
- for model_name, model_obj in models.items():
- engine = built_engines[model_name]
- engine.load()
- engine.activate()
-
- return built_engines
-
-
-def runEngine(engine, feed_dict, stream):
- return engine.infer(feed_dict, stream)
-
-
-class CLIP(BaseModel):
- def __init__(self, model, device, max_batch_size, embedding_dim):
- super(CLIP, self).__init__(
- model=model, device=device, max_batch_size=max_batch_size, embedding_dim=embedding_dim
- )
- self.name = "CLIP"
-
- def get_input_names(self):
- return ["input_ids"]
-
- def get_output_names(self):
- return ["text_embeddings", "pooler_output"]
-
- def get_dynamic_axes(self):
- return {"input_ids": {0: "B"}, "text_embeddings": {0: "B"}}
-
- def get_input_profile(self, batch_size, image_height, image_width, static_batch, static_shape):
- self.check_dims(batch_size, image_height, image_width)
- min_batch, max_batch, _, _, _, _, _, _, _, _ = self.get_minmax_dims(
- batch_size, image_height, image_width, static_batch, static_shape
- )
- return {
- "input_ids": [(min_batch, self.text_maxlen), (batch_size, self.text_maxlen), (max_batch, self.text_maxlen)]
- }
-
- def get_shape_dict(self, batch_size, image_height, image_width):
- self.check_dims(batch_size, image_height, image_width)
- return {
- "input_ids": (batch_size, self.text_maxlen),
- "text_embeddings": (batch_size, self.text_maxlen, self.embedding_dim),
- }
-
- def get_sample_input(self, batch_size, image_height, image_width):
- self.check_dims(batch_size, image_height, image_width)
- return torch.zeros(batch_size, self.text_maxlen, dtype=torch.int32, device=self.device)
-
- def optimize(self, onnx_graph):
- opt = Optimizer(onnx_graph)
- opt.select_outputs([0]) # delete graph output#1
- opt.cleanup()
- opt.fold_constants()
- opt.infer_shapes()
- opt.select_outputs([0], names=["text_embeddings"]) # rename network output
- opt_onnx_graph = opt.cleanup(return_onnx=True)
- return opt_onnx_graph
-
-
-def make_CLIP(model, device, max_batch_size, embedding_dim, inpaint=False):
- return CLIP(model, device=device, max_batch_size=max_batch_size, embedding_dim=embedding_dim)
-
-
-class UNet(BaseModel):
- def __init__(
- self, model, fp16=False, device="cuda", max_batch_size=16, embedding_dim=768, text_maxlen=77, unet_dim=4
- ):
- super(UNet, self).__init__(
- model=model,
- fp16=fp16,
- device=device,
- max_batch_size=max_batch_size,
- embedding_dim=embedding_dim,
- text_maxlen=text_maxlen,
- )
- self.unet_dim = unet_dim
- self.name = "UNet"
-
- def get_input_names(self):
- return ["sample", "timestep", "encoder_hidden_states"]
-
- def get_output_names(self):
- return ["latent"]
-
- def get_dynamic_axes(self):
- return {
- "sample": {0: "2B", 2: "H", 3: "W"},
- "encoder_hidden_states": {0: "2B"},
- "latent": {0: "2B", 2: "H", 3: "W"},
- }
-
- def get_input_profile(self, batch_size, image_height, image_width, static_batch, static_shape):
- latent_height, latent_width = self.check_dims(batch_size, image_height, image_width)
- (
- min_batch,
- max_batch,
- _,
- _,
- _,
- _,
- min_latent_height,
- max_latent_height,
- min_latent_width,
- max_latent_width,
- ) = self.get_minmax_dims(batch_size, image_height, image_width, static_batch, static_shape)
- return {
- "sample": [
- (2 * min_batch, self.unet_dim, min_latent_height, min_latent_width),
- (2 * batch_size, self.unet_dim, latent_height, latent_width),
- (2 * max_batch, self.unet_dim, max_latent_height, max_latent_width),
- ],
- "encoder_hidden_states": [
- (2 * min_batch, self.text_maxlen, self.embedding_dim),
- (2 * batch_size, self.text_maxlen, self.embedding_dim),
- (2 * max_batch, self.text_maxlen, self.embedding_dim),
- ],
- }
-
- def get_shape_dict(self, batch_size, image_height, image_width):
- latent_height, latent_width = self.check_dims(batch_size, image_height, image_width)
- return {
- "sample": (2 * batch_size, self.unet_dim, latent_height, latent_width),
- "encoder_hidden_states": (2 * batch_size, self.text_maxlen, self.embedding_dim),
- "latent": (2 * batch_size, 4, latent_height, latent_width),
- }
-
- def get_sample_input(self, batch_size, image_height, image_width):
- latent_height, latent_width = self.check_dims(batch_size, image_height, image_width)
- dtype = torch.float16 if self.fp16 else torch.float32
- return (
- torch.randn(
- 2 * batch_size, self.unet_dim, latent_height, latent_width, dtype=torch.float32, device=self.device
- ),
- torch.tensor([1.0], dtype=torch.float32, device=self.device),
- torch.randn(2 * batch_size, self.text_maxlen, self.embedding_dim, dtype=dtype, device=self.device),
- )
-
-
-def make_UNet(model, device, max_batch_size, embedding_dim, inpaint=False):
- return UNet(
- model,
- fp16=True,
- device=device,
- max_batch_size=max_batch_size,
- embedding_dim=embedding_dim,
- unet_dim=(9 if inpaint else 4),
- )
-
-
-class VAE(BaseModel):
- def __init__(self, model, device, max_batch_size, embedding_dim):
- super(VAE, self).__init__(
- model=model, device=device, max_batch_size=max_batch_size, embedding_dim=embedding_dim
- )
- self.name = "VAE decoder"
-
- def get_input_names(self):
- return ["latent"]
-
- def get_output_names(self):
- return ["images"]
-
- def get_dynamic_axes(self):
- return {"latent": {0: "B", 2: "H", 3: "W"}, "images": {0: "B", 2: "8H", 3: "8W"}}
-
- def get_input_profile(self, batch_size, image_height, image_width, static_batch, static_shape):
- latent_height, latent_width = self.check_dims(batch_size, image_height, image_width)
- (
- min_batch,
- max_batch,
- _,
- _,
- _,
- _,
- min_latent_height,
- max_latent_height,
- min_latent_width,
- max_latent_width,
- ) = self.get_minmax_dims(batch_size, image_height, image_width, static_batch, static_shape)
- return {
- "latent": [
- (min_batch, 4, min_latent_height, min_latent_width),
- (batch_size, 4, latent_height, latent_width),
- (max_batch, 4, max_latent_height, max_latent_width),
- ]
- }
-
- def get_shape_dict(self, batch_size, image_height, image_width):
- latent_height, latent_width = self.check_dims(batch_size, image_height, image_width)
- return {
- "latent": (batch_size, 4, latent_height, latent_width),
- "images": (batch_size, 3, image_height, image_width),
- }
-
- def get_sample_input(self, batch_size, image_height, image_width):
- latent_height, latent_width = self.check_dims(batch_size, image_height, image_width)
- return torch.randn(batch_size, 4, latent_height, latent_width, dtype=torch.float32, device=self.device)
-
-
-def make_VAE(model, device, max_batch_size, embedding_dim, inpaint=False):
- return VAE(model, device=device, max_batch_size=max_batch_size, embedding_dim=embedding_dim)
-
-
-class TorchVAEEncoder(torch.nn.Module):
- def __init__(self, model):
- super().__init__()
- self.vae_encoder = model
-
- def forward(self, x):
- return self.vae_encoder.encode(x).latent_dist.sample()
-
-
-class VAEEncoder(BaseModel):
- def __init__(self, model, device, max_batch_size, embedding_dim):
- super(VAEEncoder, self).__init__(
- model=model, device=device, max_batch_size=max_batch_size, embedding_dim=embedding_dim
- )
- self.name = "VAE encoder"
-
- def get_model(self):
- vae_encoder = TorchVAEEncoder(self.model)
- return vae_encoder
-
- def get_input_names(self):
- return ["images"]
-
- def get_output_names(self):
- return ["latent"]
-
- def get_dynamic_axes(self):
- return {"images": {0: "B", 2: "8H", 3: "8W"}, "latent": {0: "B", 2: "H", 3: "W"}}
-
- def get_input_profile(self, batch_size, image_height, image_width, static_batch, static_shape):
- assert batch_size >= self.min_batch and batch_size <= self.max_batch
- min_batch = batch_size if static_batch else self.min_batch
- max_batch = batch_size if static_batch else self.max_batch
- self.check_dims(batch_size, image_height, image_width)
- (
- min_batch,
- max_batch,
- min_image_height,
- max_image_height,
- min_image_width,
- max_image_width,
- _,
- _,
- _,
- _,
- ) = self.get_minmax_dims(batch_size, image_height, image_width, static_batch, static_shape)
-
- return {
- "images": [
- (min_batch, 3, min_image_height, min_image_width),
- (batch_size, 3, image_height, image_width),
- (max_batch, 3, max_image_height, max_image_width),
- ]
- }
-
- def get_shape_dict(self, batch_size, image_height, image_width):
- latent_height, latent_width = self.check_dims(batch_size, image_height, image_width)
- return {
- "images": (batch_size, 3, image_height, image_width),
- "latent": (batch_size, 4, latent_height, latent_width),
- }
-
- def get_sample_input(self, batch_size, image_height, image_width):
- self.check_dims(batch_size, image_height, image_width)
- return torch.randn(batch_size, 3, image_height, image_width, dtype=torch.float32, device=self.device)
-
-
-def make_VAEEncoder(model, device, max_batch_size, embedding_dim, inpaint=False):
- return VAEEncoder(model, device=device, max_batch_size=max_batch_size, embedding_dim=embedding_dim)
-
-
-class TensorRTStableDiffusionImg2ImgPipeline(StableDiffusionImg2ImgPipeline):
- r"""
- Pipeline for image-to-image generation using TensorRT accelerated Stable Diffusion.
-
- This model inherits from [`StableDiffusionImg2ImgPipeline`]. Check the superclass documentation for the generic methods the
- library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.)
-
- Args:
- vae ([`AutoencoderKL`]):
- Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations.
- text_encoder ([`CLIPTextModel`]):
- Frozen text-encoder. Stable Diffusion uses the text portion of
- [CLIP](https://huggingface.co/docs/transformers/model_doc/clip#transformers.CLIPTextModel), specifically
- the [clip-vit-large-patch14](https://huggingface.co/openai/clip-vit-large-patch14) variant.
- tokenizer (`CLIPTokenizer`):
- Tokenizer of class
- [CLIPTokenizer](https://huggingface.co/docs/transformers/v4.21.0/en/model_doc/clip#transformers.CLIPTokenizer).
- unet ([`UNet2DConditionModel`]): Conditional U-Net architecture to denoise the encoded image latents.
- scheduler ([`SchedulerMixin`]):
- A scheduler to be used in combination with `unet` to denoise the encoded image latents. Can be one of
- [`DDIMScheduler`], [`LMSDiscreteScheduler`], or [`PNDMScheduler`].
- safety_checker ([`StableDiffusionSafetyChecker`]):
- Classification module that estimates whether generated images could be considered offensive or harmful.
- Please, refer to the [model card](https://huggingface.co/runwayml/stable-diffusion-v1-5) for details.
- feature_extractor ([`CLIPFeatureExtractor`]):
- Model that extracts features from generated images to be used as inputs for the `safety_checker`.
- """
-
- def __init__(
- self,
- vae: AutoencoderKL,
- text_encoder: CLIPTextModel,
- tokenizer: CLIPTokenizer,
- unet: UNet2DConditionModel,
- scheduler: DDIMScheduler,
- safety_checker: StableDiffusionSafetyChecker,
- feature_extractor: CLIPFeatureExtractor,
- requires_safety_checker: bool = True,
- stages=["clip", "unet", "vae", "vae_encoder"],
- image_height: int = 512,
- image_width: int = 512,
- max_batch_size: int = 16,
- # ONNX export parameters
- onnx_opset: int = 17,
- onnx_dir: str = "onnx",
- # TensorRT engine build parameters
- engine_dir: str = "engine",
- build_preview_features: bool = True,
- force_engine_rebuild: bool = False,
- timing_cache: str = "timing_cache",
- ):
- super().__init__(
- vae, text_encoder, tokenizer, unet, scheduler, safety_checker, feature_extractor, requires_safety_checker
- )
-
- self.vae.forward = self.vae.decode
-
- self.stages = stages
- self.image_height, self.image_width = image_height, image_width
- self.inpaint = False
- self.onnx_opset = onnx_opset
- self.onnx_dir = onnx_dir
- self.engine_dir = engine_dir
- self.force_engine_rebuild = force_engine_rebuild
- self.timing_cache = timing_cache
- self.build_static_batch = False
- self.build_dynamic_shape = False
- self.build_preview_features = build_preview_features
-
- self.max_batch_size = max_batch_size
- # TODO: Restrict batch size to 4 for larger image dimensions as a WAR for TensorRT limitation.
- if self.build_dynamic_shape or self.image_height > 512 or self.image_width > 512:
- self.max_batch_size = 4
-
- self.stream = None # loaded in loadResources()
- self.models = {} # loaded in __loadModels()
- self.engine = {} # loaded in build_engines()
-
- def __loadModels(self):
- # Load pipeline models
- self.embedding_dim = self.text_encoder.config.hidden_size
- models_args = {
- "device": self.torch_device,
- "max_batch_size": self.max_batch_size,
- "embedding_dim": self.embedding_dim,
- "inpaint": self.inpaint,
- }
- if "clip" in self.stages:
- self.models["clip"] = make_CLIP(self.text_encoder, **models_args)
- if "unet" in self.stages:
- self.models["unet"] = make_UNet(self.unet, **models_args)
- if "vae" in self.stages:
- self.models["vae"] = make_VAE(self.vae, **models_args)
- if "vae_encoder" in self.stages:
- self.models["vae_encoder"] = make_VAEEncoder(self.vae, **models_args)
-
- @classmethod
- def set_cached_folder(cls, pretrained_model_name_or_path: Optional[Union[str, os.PathLike]], **kwargs):
- cache_dir = kwargs.pop("cache_dir", DIFFUSERS_CACHE)
- resume_download = kwargs.pop("resume_download", False)
- proxies = kwargs.pop("proxies", None)
- local_files_only = kwargs.pop("local_files_only", False)
- use_auth_token = kwargs.pop("use_auth_token", None)
- revision = kwargs.pop("revision", None)
-
- cls.cached_folder = (
- pretrained_model_name_or_path
- if os.path.isdir(pretrained_model_name_or_path)
- else snapshot_download(
- pretrained_model_name_or_path,
- cache_dir=cache_dir,
- resume_download=resume_download,
- proxies=proxies,
- local_files_only=local_files_only,
- use_auth_token=use_auth_token,
- revision=revision,
- )
- )
-
- def to(self, torch_device: Optional[Union[str, torch.device]] = None, silence_dtype_warnings: bool = False):
- super().to(torch_device, silence_dtype_warnings=silence_dtype_warnings)
-
- self.onnx_dir = os.path.join(self.cached_folder, self.onnx_dir)
- self.engine_dir = os.path.join(self.cached_folder, self.engine_dir)
- self.timing_cache = os.path.join(self.cached_folder, self.timing_cache)
-
- # set device
- self.torch_device = self._execution_device
- logger.warning(f"Running inference on device: {self.torch_device}")
-
- # load models
- self.__loadModels()
-
- # build engines
- self.engine = build_engines(
- self.models,
- self.engine_dir,
- self.onnx_dir,
- self.onnx_opset,
- opt_image_height=self.image_height,
- opt_image_width=self.image_width,
- force_engine_rebuild=self.force_engine_rebuild,
- static_batch=self.build_static_batch,
- static_shape=not self.build_dynamic_shape,
- enable_preview=self.build_preview_features,
- timing_cache=self.timing_cache,
- )
-
- return self
-
- def __initialize_timesteps(self, timesteps, strength):
- self.scheduler.set_timesteps(timesteps)
- offset = self.scheduler.steps_offset if hasattr(self.scheduler, "steps_offset") else 0
- init_timestep = int(timesteps * strength) + offset
- init_timestep = min(init_timestep, timesteps)
- t_start = max(timesteps - init_timestep + offset, 0)
- timesteps = self.scheduler.timesteps[t_start:].to(self.torch_device)
- return timesteps, t_start
-
- def __preprocess_images(self, batch_size, images=()):
- init_images = []
- for image in images:
- image = image.to(self.torch_device).float()
- image = image.repeat(batch_size, 1, 1, 1)
- init_images.append(image)
- return tuple(init_images)
-
- def __encode_image(self, init_image):
- init_latents = runEngine(self.engine["vae_encoder"], {"images": device_view(init_image)}, self.stream)[
- "latent"
- ]
- init_latents = 0.18215 * init_latents
- return init_latents
-
- def __encode_prompt(self, prompt, negative_prompt):
- r"""
- Encodes the prompt into text encoder hidden states.
-
- Args:
- prompt (`str` or `List[str]`, *optional*):
- prompt to be encoded
- negative_prompt (`str` or `List[str]`, *optional*):
- The prompt or prompts not to guide the image generation. If not defined, one has to pass
-                `negative_prompt_embeds` instead.
- Ignored when not using guidance (i.e., ignored if `guidance_scale` is less than `1`).
- """
- # Tokenize prompt
- text_input_ids = (
- self.tokenizer(
- prompt,
- padding="max_length",
- max_length=self.tokenizer.model_max_length,
- truncation=True,
- return_tensors="pt",
- )
- .input_ids.type(torch.int32)
- .to(self.torch_device)
- )
-
- text_input_ids_inp = device_view(text_input_ids)
- # NOTE: output tensor for CLIP must be cloned because it will be overwritten when called again for negative prompt
- text_embeddings = runEngine(self.engine["clip"], {"input_ids": text_input_ids_inp}, self.stream)[
- "text_embeddings"
- ].clone()
-
- # Tokenize negative prompt
- uncond_input_ids = (
- self.tokenizer(
- negative_prompt,
- padding="max_length",
- max_length=self.tokenizer.model_max_length,
- truncation=True,
- return_tensors="pt",
- )
- .input_ids.type(torch.int32)
- .to(self.torch_device)
- )
- uncond_input_ids_inp = device_view(uncond_input_ids)
- uncond_embeddings = runEngine(self.engine["clip"], {"input_ids": uncond_input_ids_inp}, self.stream)[
- "text_embeddings"
- ]
-
- # Concatenate the unconditional and text embeddings into a single batch to avoid doing two forward passes for classifier free guidance
- text_embeddings = torch.cat([uncond_embeddings, text_embeddings]).to(dtype=torch.float16)
-
- return text_embeddings
-
- def __denoise_latent(
- self, latents, text_embeddings, timesteps=None, step_offset=0, mask=None, masked_image_latents=None
- ):
- if not isinstance(timesteps, torch.Tensor):
- timesteps = self.scheduler.timesteps
- for step_index, timestep in enumerate(timesteps):
- # Expand the latents if we are doing classifier free guidance
- latent_model_input = torch.cat([latents] * 2)
- latent_model_input = self.scheduler.scale_model_input(latent_model_input, timestep)
- if isinstance(mask, torch.Tensor):
- latent_model_input = torch.cat([latent_model_input, mask, masked_image_latents], dim=1)
-
- # Predict the noise residual
- timestep_float = timestep.float() if timestep.dtype != torch.float32 else timestep
-
- sample_inp = device_view(latent_model_input)
- timestep_inp = device_view(timestep_float)
- embeddings_inp = device_view(text_embeddings)
- noise_pred = runEngine(
- self.engine["unet"],
- {"sample": sample_inp, "timestep": timestep_inp, "encoder_hidden_states": embeddings_inp},
- self.stream,
- )["latent"]
-
- # Perform guidance
- noise_pred_uncond, noise_pred_text = noise_pred.chunk(2)
- noise_pred = noise_pred_uncond + self.guidance_scale * (noise_pred_text - noise_pred_uncond)
-
- latents = self.scheduler.step(noise_pred, timestep, latents).prev_sample
-
- latents = 1.0 / 0.18215 * latents
- return latents
-
- def __decode_latent(self, latents):
- images = runEngine(self.engine["vae"], {"latent": device_view(latents)}, self.stream)["images"]
- images = (images / 2 + 0.5).clamp(0, 1)
- return images.cpu().permute(0, 2, 3, 1).float().numpy()
-
- def __loadResources(self, image_height, image_width, batch_size):
- self.stream = cuda.Stream()
-
- # Allocate buffers for TensorRT engine bindings
- for model_name, obj in self.models.items():
- self.engine[model_name].allocate_buffers(
- shape_dict=obj.get_shape_dict(batch_size, image_height, image_width), device=self.torch_device
- )
-
- @torch.no_grad()
- def __call__(
- self,
- prompt: Union[str, List[str]] = None,
- image: Union[torch.FloatTensor, PIL.Image.Image] = None,
- strength: float = 0.8,
- num_inference_steps: int = 50,
- guidance_scale: float = 7.5,
- negative_prompt: Optional[Union[str, List[str]]] = None,
- generator: Optional[Union[torch.Generator, List[torch.Generator]]] = None,
- ):
- r"""
- Function invoked when calling the pipeline for generation.
-
- Args:
- prompt (`str` or `List[str]`, *optional*):
- The prompt or prompts to guide the image generation. If not defined, one has to pass `prompt_embeds`.
- instead.
- image (`PIL.Image.Image`):
-                `Image` or tensor representing an image batch to be used as the starting point for image-to-image
-                generation; it is encoded to latents, noised according to `strength`, and then denoised.
- strength (`float`, *optional*, defaults to 0.8):
- Conceptually, indicates how much to transform the reference `image`. Must be between 0 and 1. `image`
- will be used as a starting point, adding more noise to it the larger the `strength`. The number of
- denoising steps depends on the amount of noise initially added. When `strength` is 1, added noise will
- be maximum and the denoising process will run for the full number of iterations specified in
- `num_inference_steps`. A value of 1, therefore, essentially ignores `image`.
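-                For example, with `num_inference_steps=50` and `strength=0.8`, roughly the last 40 scheduler
-                steps are run, starting from a noised version of `image`.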
- num_inference_steps (`int`, *optional*, defaults to 50):
- The number of denoising steps. More denoising steps usually lead to a higher quality image at the
- expense of slower inference.
- guidance_scale (`float`, *optional*, defaults to 7.5):
- Guidance scale as defined in [Classifier-Free Diffusion Guidance](https://arxiv.org/abs/2207.12598).
- `guidance_scale` is defined as `w` of equation 2. of [Imagen
- Paper](https://arxiv.org/pdf/2205.11487.pdf). Guidance scale is enabled by setting `guidance_scale >
- 1`. Higher guidance scale encourages to generate images that are closely linked to the text `prompt`,
- usually at the expense of lower image quality.
- negative_prompt (`str` or `List[str]`, *optional*):
- The prompt or prompts not to guide the image generation. If not defined, one has to pass
-                `negative_prompt_embeds` instead.
- Ignored when not using guidance (i.e., ignored if `guidance_scale` is less than `1`).
- generator (`torch.Generator` or `List[torch.Generator]`, *optional*):
- One or a list of [torch generator(s)](https://pytorch.org/docs/stable/generated/torch.Generator.html)
- to make generation deterministic.
-
- """
- self.generator = generator
- self.denoising_steps = num_inference_steps
- self.guidance_scale = guidance_scale
-
- # Pre-compute latent input scales and linear multistep coefficients
- self.scheduler.set_timesteps(self.denoising_steps, device=self.torch_device)
-
- # Define call parameters
- if prompt is not None and isinstance(prompt, str):
- batch_size = 1
- prompt = [prompt]
- elif prompt is not None and isinstance(prompt, list):
- batch_size = len(prompt)
- else:
- raise ValueError(f"Expected prompt to be of type list or str but got {type(prompt)}")
-
- if negative_prompt is None:
- negative_prompt = [""] * batch_size
-
- if negative_prompt is not None and isinstance(negative_prompt, str):
- negative_prompt = [negative_prompt]
-
- assert len(prompt) == len(negative_prompt)
-
- if batch_size > self.max_batch_size:
- raise ValueError(
- f"Batch size {len(prompt)} is larger than allowed {self.max_batch_size}. If dynamic shape is used, then maximum batch size is 4"
- )
-
- # load resources
- self.__loadResources(self.image_height, self.image_width, batch_size)
-
- with torch.inference_mode(), torch.autocast("cuda"), trt.Runtime(TRT_LOGGER):
- # Initialize timesteps
- timesteps, t_start = self.__initialize_timesteps(self.denoising_steps, strength)
- latent_timestep = timesteps[:1].repeat(batch_size)
-
- # Pre-process input image
- if isinstance(image, PIL.Image.Image):
- image = preprocess_image(image)
- init_image = self.__preprocess_images(batch_size, (image,))[0]
-
- # VAE encode init image
- init_latents = self.__encode_image(init_image)
-
- # Add noise to latents using timesteps
- noise = torch.randn(
- init_latents.shape, generator=self.generator, device=self.torch_device, dtype=torch.float32
- )
- latents = self.scheduler.add_noise(init_latents, noise, latent_timestep)
-
- # CLIP text encoder
- text_embeddings = self.__encode_prompt(prompt, negative_prompt)
-
- # UNet denoiser
- latents = self.__denoise_latent(latents, text_embeddings, timesteps=timesteps, step_offset=t_start)
-
- # VAE decode latent
- images = self.__decode_latent(latents)
-
- images = self.numpy_to_pil(images)
- return StableDiffusionPipelineOutput(images=images, nsfw_content_detected=None)
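The private `__initialize_timesteps` helper used above is not shown in this diff. As a hedged, self-contained sketch (assuming it follows the usual diffusers img2img convention, the same one the public `get_timesteps` helpers later in this diff use), the strength-to-timestep mapping looks roughly like this:

```python
# Hedged sketch: how `strength` typically selects the starting timestep in a
# diffusers-style img2img pipeline. `scheduler.set_timesteps()` must already
# have been called; strength=1.0 gives t_start=0, so the full schedule runs
# and the input image is essentially ignored.
def initialize_timesteps(scheduler, num_inference_steps: int, strength: float):
    # Number of denoising steps actually applied to the noised image latents.
    init_timestep = min(int(num_inference_steps * strength), num_inference_steps)
    # Index of the first scheduler timestep to use.
    t_start = max(num_inference_steps - init_timestep, 0)
    timesteps = scheduler.timesteps[t_start:]
    return timesteps, t_start
```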
diff --git a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/scripts/convert_models_diffuser_to_diffusers.py b/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/scripts/convert_models_diffuser_to_diffusers.py
deleted file mode 100644
index cc5321e33fe088c652f6014c6dab813bb8d5f246..0000000000000000000000000000000000000000
--- a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/scripts/convert_models_diffuser_to_diffusers.py
+++ /dev/null
@@ -1,100 +0,0 @@
-import json
-import os
-
-import torch
-
-from diffusers import UNet1DModel
-
-
-os.makedirs("hub/hopper-medium-v2/unet/hor32", exist_ok=True)
-os.makedirs("hub/hopper-medium-v2/unet/hor128", exist_ok=True)
-
-os.makedirs("hub/hopper-medium-v2/value_function", exist_ok=True)
-
-
-def unet(hor):
- if hor == 128:
- down_block_types = ("DownResnetBlock1D", "DownResnetBlock1D", "DownResnetBlock1D")
- block_out_channels = (32, 128, 256)
- up_block_types = ("UpResnetBlock1D", "UpResnetBlock1D")
-
- elif hor == 32:
- down_block_types = ("DownResnetBlock1D", "DownResnetBlock1D", "DownResnetBlock1D", "DownResnetBlock1D")
- block_out_channels = (32, 64, 128, 256)
- up_block_types = ("UpResnetBlock1D", "UpResnetBlock1D", "UpResnetBlock1D")
- model = torch.load(f"/Users/bglickenhaus/Documents/diffuser/temporal_unet-hopper-mediumv2-hor{hor}.torch")
- state_dict = model.state_dict()
- config = {
- "down_block_types": down_block_types,
- "block_out_channels": block_out_channels,
- "up_block_types": up_block_types,
- "layers_per_block": 1,
- "use_timestep_embedding": True,
- "out_block_type": "OutConv1DBlock",
- "norm_num_groups": 8,
- "downsample_each_block": False,
- "in_channels": 14,
- "out_channels": 14,
- "extra_in_channels": 0,
- "time_embedding_type": "positional",
- "flip_sin_to_cos": False,
- "freq_shift": 1,
- "sample_size": 65536,
- "mid_block_type": "MidResTemporalBlock1D",
- "act_fn": "mish",
- }
- hf_value_function = UNet1DModel(**config)
- print(f"length of state dict: {len(state_dict.keys())}")
- print(f"length of value function dict: {len(hf_value_function.state_dict().keys())}")
- mapping = dict(zip(model.state_dict().keys(), hf_value_function.state_dict().keys()))
- for k, v in mapping.items():
- state_dict[v] = state_dict.pop(k)
- hf_value_function.load_state_dict(state_dict)
-
- torch.save(hf_value_function.state_dict(), f"hub/hopper-medium-v2/unet/hor{hor}/diffusion_pytorch_model.bin")
- with open(f"hub/hopper-medium-v2/unet/hor{hor}/config.json", "w") as f:
- json.dump(config, f)
-
-
-def value_function():
- config = {
- "in_channels": 14,
- "down_block_types": ("DownResnetBlock1D", "DownResnetBlock1D", "DownResnetBlock1D", "DownResnetBlock1D"),
- "up_block_types": (),
- "out_block_type": "ValueFunction",
- "mid_block_type": "ValueFunctionMidBlock1D",
- "block_out_channels": (32, 64, 128, 256),
- "layers_per_block": 1,
- "downsample_each_block": True,
- "sample_size": 65536,
- "out_channels": 14,
- "extra_in_channels": 0,
- "time_embedding_type": "positional",
- "use_timestep_embedding": True,
- "flip_sin_to_cos": False,
- "freq_shift": 1,
- "norm_num_groups": 8,
- "act_fn": "mish",
- }
-
- model = torch.load("/Users/bglickenhaus/Documents/diffuser/value_function-hopper-mediumv2-hor32.torch")
- state_dict = model
- hf_value_function = UNet1DModel(**config)
- print(f"length of state dict: {len(state_dict.keys())}")
- print(f"length of value function dict: {len(hf_value_function.state_dict().keys())}")
-
- mapping = dict(zip(state_dict.keys(), hf_value_function.state_dict().keys()))
- for k, v in mapping.items():
- state_dict[v] = state_dict.pop(k)
-
- hf_value_function.load_state_dict(state_dict)
-
- torch.save(hf_value_function.state_dict(), "hub/hopper-medium-v2/value_function/diffusion_pytorch_model.bin")
- with open("hub/hopper-medium-v2/value_function/config.json", "w") as f:
- json.dump(config, f)
-
-
-if __name__ == "__main__":
- unet(32)
- # unet(128)
- value_function()
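Both conversion functions above rely on the same positional trick: `dict(zip(old_keys, new_keys))` assumes the source and target state dicts enumerate their parameters in the same order, so the i-th old key is renamed to the i-th new key. A minimal, self-contained illustration of that remapping pattern (module and attribute names here are hypothetical):

```python
import torch.nn as nn

class OldNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc_in, self.fc_out = nn.Linear(4, 8), nn.Linear(8, 2)

class NewNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.proj_in, self.proj_out = nn.Linear(4, 8), nn.Linear(8, 2)

old_sd = OldNet().state_dict()              # fc_in.weight, fc_in.bias, ...
new_net = NewNet()                          # proj_in.weight, proj_in.bias, ...
# Positional remap: relies on both dicts listing parameters in matching order.
mapping = dict(zip(old_sd.keys(), new_net.state_dict().keys()))
new_net.load_state_dict({mapping[k]: v for k, v in old_sd.items()})
```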
diff --git a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/__init__.py b/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/__init__.py
deleted file mode 100644
index c860b95f609c5c94d327df5d5f6541b87cd44488..0000000000000000000000000000000000000000
--- a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/__init__.py
+++ /dev/null
@@ -1,291 +0,0 @@
-__version__ = "0.19.3"
-
-from .configuration_utils import ConfigMixin
-from .utils import (
- OptionalDependencyNotAvailable,
- is_flax_available,
- is_inflect_available,
- is_invisible_watermark_available,
- is_k_diffusion_available,
- is_k_diffusion_version,
- is_librosa_available,
- is_note_seq_available,
- is_onnx_available,
- is_scipy_available,
- is_torch_available,
- is_torchsde_available,
- is_transformers_available,
- is_transformers_version,
- is_unidecode_available,
- logging,
-)
-
-
-try:
- if not is_onnx_available():
- raise OptionalDependencyNotAvailable()
-except OptionalDependencyNotAvailable:
- from .utils.dummy_onnx_objects import * # noqa F403
-else:
- from .pipelines import OnnxRuntimeModel
-
-try:
- if not is_torch_available():
- raise OptionalDependencyNotAvailable()
-except OptionalDependencyNotAvailable:
- from .utils.dummy_pt_objects import * # noqa F403
-else:
- from .models import (
- AsymmetricAutoencoderKL,
- AutoencoderKL,
- ControlNetModel,
- ModelMixin,
- MultiAdapter,
- PriorTransformer,
- T2IAdapter,
- T5FilmDecoder,
- Transformer2DModel,
- UNet1DModel,
- UNet2DConditionModel,
- UNet2DModel,
- UNet3DConditionModel,
- VQModel,
- )
- from .optimization import (
- get_constant_schedule,
- get_constant_schedule_with_warmup,
- get_cosine_schedule_with_warmup,
- get_cosine_with_hard_restarts_schedule_with_warmup,
- get_linear_schedule_with_warmup,
- get_polynomial_decay_schedule_with_warmup,
- get_scheduler,
- )
- from .pipelines import (
- AudioPipelineOutput,
- AutoPipelineForImage2Image,
- AutoPipelineForInpainting,
- AutoPipelineForText2Image,
- ConsistencyModelPipeline,
- DanceDiffusionPipeline,
- DDIMPipeline,
- DDPMPipeline,
- DiffusionPipeline,
- DiTPipeline,
- ImagePipelineOutput,
- KarrasVePipeline,
- LDMPipeline,
- LDMSuperResolutionPipeline,
- PNDMPipeline,
- RePaintPipeline,
- ScoreSdeVePipeline,
- )
- from .schedulers import (
- CMStochasticIterativeScheduler,
- DDIMInverseScheduler,
- DDIMParallelScheduler,
- DDIMScheduler,
- DDPMParallelScheduler,
- DDPMScheduler,
- DEISMultistepScheduler,
- DPMSolverMultistepInverseScheduler,
- DPMSolverMultistepScheduler,
- DPMSolverSinglestepScheduler,
- EulerAncestralDiscreteScheduler,
- EulerDiscreteScheduler,
- HeunDiscreteScheduler,
- IPNDMScheduler,
- KarrasVeScheduler,
- KDPM2AncestralDiscreteScheduler,
- KDPM2DiscreteScheduler,
- PNDMScheduler,
- RePaintScheduler,
- SchedulerMixin,
- ScoreSdeVeScheduler,
- UnCLIPScheduler,
- UniPCMultistepScheduler,
- VQDiffusionScheduler,
- )
- from .training_utils import EMAModel
-
-try:
- if not (is_torch_available() and is_scipy_available()):
- raise OptionalDependencyNotAvailable()
-except OptionalDependencyNotAvailable:
- from .utils.dummy_torch_and_scipy_objects import * # noqa F403
-else:
- from .schedulers import LMSDiscreteScheduler
-
-try:
- if not (is_torch_available() and is_torchsde_available()):
- raise OptionalDependencyNotAvailable()
-except OptionalDependencyNotAvailable:
- from .utils.dummy_torch_and_torchsde_objects import * # noqa F403
-else:
- from .schedulers import DPMSolverSDEScheduler
-
-try:
- if not (is_torch_available() and is_transformers_available()):
- raise OptionalDependencyNotAvailable()
-except OptionalDependencyNotAvailable:
- from .utils.dummy_torch_and_transformers_objects import * # noqa F403
-else:
- from .pipelines import (
- AltDiffusionImg2ImgPipeline,
- AltDiffusionPipeline,
- AudioLDMPipeline,
- CycleDiffusionPipeline,
- IFImg2ImgPipeline,
- IFImg2ImgSuperResolutionPipeline,
- IFInpaintingPipeline,
- IFInpaintingSuperResolutionPipeline,
- IFPipeline,
- IFSuperResolutionPipeline,
- ImageTextPipelineOutput,
- KandinskyCombinedPipeline,
- KandinskyImg2ImgCombinedPipeline,
- KandinskyImg2ImgPipeline,
- KandinskyInpaintCombinedPipeline,
- KandinskyInpaintPipeline,
- KandinskyPipeline,
- KandinskyPriorPipeline,
- KandinskyV22CombinedPipeline,
- KandinskyV22ControlnetImg2ImgPipeline,
- KandinskyV22ControlnetPipeline,
- KandinskyV22Img2ImgCombinedPipeline,
- KandinskyV22Img2ImgPipeline,
- KandinskyV22InpaintCombinedPipeline,
- KandinskyV22InpaintPipeline,
- KandinskyV22Pipeline,
- KandinskyV22PriorEmb2EmbPipeline,
- KandinskyV22PriorPipeline,
- LDMTextToImagePipeline,
- PaintByExamplePipeline,
- SemanticStableDiffusionPipeline,
- ShapEImg2ImgPipeline,
- ShapEPipeline,
- StableDiffusionAdapterPipeline,
- StableDiffusionAttendAndExcitePipeline,
- StableDiffusionControlNetImg2ImgPipeline,
- StableDiffusionControlNetInpaintPipeline,
- StableDiffusionControlNetPipeline,
- StableDiffusionDepth2ImgPipeline,
- StableDiffusionDiffEditPipeline,
- StableDiffusionImageVariationPipeline,
- StableDiffusionImg2ImgPipeline,
- StableDiffusionInpaintPipeline,
- StableDiffusionInpaintPipelineLegacy,
- StableDiffusionInstructPix2PixPipeline,
- StableDiffusionLatentUpscalePipeline,
- StableDiffusionLDM3DPipeline,
- StableDiffusionModelEditingPipeline,
- StableDiffusionPanoramaPipeline,
- StableDiffusionParadigmsPipeline,
- StableDiffusionPipeline,
- StableDiffusionPipelineSafe,
- StableDiffusionPix2PixZeroPipeline,
- StableDiffusionSAGPipeline,
- StableDiffusionUpscalePipeline,
- StableDiffusionXLControlNetPipeline,
- StableDiffusionXLImg2ImgPipeline,
- StableDiffusionXLInpaintPipeline,
- StableDiffusionXLInstructPix2PixPipeline,
- StableDiffusionXLPipeline,
- StableUnCLIPImg2ImgPipeline,
- StableUnCLIPPipeline,
- TextToVideoSDPipeline,
- TextToVideoZeroPipeline,
- UnCLIPImageVariationPipeline,
- UnCLIPPipeline,
- UniDiffuserModel,
- UniDiffuserPipeline,
- UniDiffuserTextDecoder,
- VersatileDiffusionDualGuidedPipeline,
- VersatileDiffusionImageVariationPipeline,
- VersatileDiffusionPipeline,
- VersatileDiffusionTextToImagePipeline,
- VideoToVideoSDPipeline,
- VQDiffusionPipeline,
- )
-
-try:
- if not (is_torch_available() and is_transformers_available() and is_k_diffusion_available()):
- raise OptionalDependencyNotAvailable()
-except OptionalDependencyNotAvailable:
- from .utils.dummy_torch_and_transformers_and_k_diffusion_objects import * # noqa F403
-else:
- from .pipelines import StableDiffusionKDiffusionPipeline
-
-try:
- if not (is_torch_available() and is_transformers_available() and is_onnx_available()):
- raise OptionalDependencyNotAvailable()
-except OptionalDependencyNotAvailable:
- from .utils.dummy_torch_and_transformers_and_onnx_objects import * # noqa F403
-else:
- from .pipelines import (
- OnnxStableDiffusionImg2ImgPipeline,
- OnnxStableDiffusionInpaintPipeline,
- OnnxStableDiffusionInpaintPipelineLegacy,
- OnnxStableDiffusionPipeline,
- OnnxStableDiffusionUpscalePipeline,
- StableDiffusionOnnxPipeline,
- )
-
-try:
- if not (is_torch_available() and is_librosa_available()):
- raise OptionalDependencyNotAvailable()
-except OptionalDependencyNotAvailable:
- from .utils.dummy_torch_and_librosa_objects import * # noqa F403
-else:
- from .pipelines import AudioDiffusionPipeline, Mel
-
-try:
- if not (is_transformers_available() and is_torch_available() and is_note_seq_available()):
- raise OptionalDependencyNotAvailable()
-except OptionalDependencyNotAvailable:
- from .utils.dummy_transformers_and_torch_and_note_seq_objects import * # noqa F403
-else:
- from .pipelines import SpectrogramDiffusionPipeline
-
-try:
- if not is_flax_available():
- raise OptionalDependencyNotAvailable()
-except OptionalDependencyNotAvailable:
- from .utils.dummy_flax_objects import * # noqa F403
-else:
- from .models.controlnet_flax import FlaxControlNetModel
- from .models.modeling_flax_utils import FlaxModelMixin
- from .models.unet_2d_condition_flax import FlaxUNet2DConditionModel
- from .models.vae_flax import FlaxAutoencoderKL
- from .pipelines import FlaxDiffusionPipeline
- from .schedulers import (
- FlaxDDIMScheduler,
- FlaxDDPMScheduler,
- FlaxDPMSolverMultistepScheduler,
- FlaxKarrasVeScheduler,
- FlaxLMSDiscreteScheduler,
- FlaxPNDMScheduler,
- FlaxSchedulerMixin,
- FlaxScoreSdeVeScheduler,
- )
-
-
-try:
- if not (is_flax_available() and is_transformers_available()):
- raise OptionalDependencyNotAvailable()
-except OptionalDependencyNotAvailable:
- from .utils.dummy_flax_and_transformers_objects import * # noqa F403
-else:
- from .pipelines import (
- FlaxStableDiffusionControlNetPipeline,
- FlaxStableDiffusionImg2ImgPipeline,
- FlaxStableDiffusionInpaintPipeline,
- FlaxStableDiffusionPipeline,
- )
-
-try:
- if not (is_note_seq_available()):
- raise OptionalDependencyNotAvailable()
-except OptionalDependencyNotAvailable:
- from .utils.dummy_note_seq_objects import * # noqa F403
-else:
- from .pipelines import MidiProcessor
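Every `try/except OptionalDependencyNotAvailable` block above follows the same pattern: probe for a backend, and if it is missing, re-export dummy placeholders so `from diffusers import X` still succeeds and only fails when `X` is actually used. A stripped-down sketch of that pattern (the probe, the dummy class, and the "real" class below are simplified stand-ins, not the library's own helpers):

```python
import importlib.util

class OptionalDependencyNotAvailable(Exception):
    pass

def is_torch_available() -> bool:
    # Stand-in for the library's is_torch_available() helper.
    return importlib.util.find_spec("torch") is not None

class _DummyUNet2DModel:
    # Placeholder exported when torch is missing; it fails only on use.
    def __init__(self, *args, **kwargs):
        raise ImportError("UNet2DModel requires `torch` to be installed.")

try:
    if not is_torch_available():
        raise OptionalDependencyNotAvailable()
except OptionalDependencyNotAvailable:
    UNet2DModel = _DummyUNet2DModel
else:
    import torch.nn as nn

    class UNet2DModel(nn.Module):  # toy stand-in for the real model class
        def __init__(self):
            super().__init__()
            self.conv = nn.Conv2d(3, 3, 3, padding=1)
```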
diff --git a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/pipelines/kandinsky2_2/pipeline_kandinsky2_2_controlnet_img2img.py b/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/pipelines/kandinsky2_2/pipeline_kandinsky2_2_controlnet_img2img.py
deleted file mode 100644
index 53918fede7c2d4e9aaec8c7549630811c21e5bb7..0000000000000000000000000000000000000000
--- a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/pipelines/kandinsky2_2/pipeline_kandinsky2_2_controlnet_img2img.py
+++ /dev/null
@@ -1,409 +0,0 @@
-# Copyright 2023 The HuggingFace Team. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-from typing import Callable, List, Optional, Union
-
-import numpy as np
-import PIL
-import torch
-from PIL import Image
-
-from ...models import UNet2DConditionModel, VQModel
-from ...schedulers import DDPMScheduler
-from ...utils import (
- is_accelerate_available,
- is_accelerate_version,
- logging,
- randn_tensor,
- replace_example_docstring,
-)
-from ..pipeline_utils import DiffusionPipeline, ImagePipelineOutput
-
-
-logger = logging.get_logger(__name__) # pylint: disable=invalid-name
-
-EXAMPLE_DOC_STRING = """
- Examples:
- ```py
- >>> import torch
- >>> import numpy as np
-
- >>> from diffusers import KandinskyV22PriorEmb2EmbPipeline, KandinskyV22ControlnetImg2ImgPipeline
- >>> from transformers import pipeline
- >>> from diffusers.utils import load_image
-
-
- >>> def make_hint(image, depth_estimator):
- ... image = depth_estimator(image)["depth"]
- ... image = np.array(image)
- ... image = image[:, :, None]
- ... image = np.concatenate([image, image, image], axis=2)
- ... detected_map = torch.from_numpy(image).float() / 255.0
- ... hint = detected_map.permute(2, 0, 1)
- ... return hint
-
-
- >>> depth_estimator = pipeline("depth-estimation")
-
- >>> pipe_prior = KandinskyV22PriorEmb2EmbPipeline.from_pretrained(
- ... "kandinsky-community/kandinsky-2-2-prior", torch_dtype=torch.float16
- ... )
- >>> pipe_prior = pipe_prior.to("cuda")
-
- >>> pipe = KandinskyV22ControlnetImg2ImgPipeline.from_pretrained(
- ... "kandinsky-community/kandinsky-2-2-controlnet-depth", torch_dtype=torch.float16
- ... )
- >>> pipe = pipe.to("cuda")
-
- >>> img = load_image(
- ... "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main"
- ... "/kandinsky/cat.png"
- ... ).resize((768, 768))
-
-
- >>> hint = make_hint(img, depth_estimator).unsqueeze(0).half().to("cuda")
-
- >>> prompt = "A robot, 4k photo"
- >>> negative_prior_prompt = "lowres, text, error, cropped, worst quality, low quality, jpeg artifacts, ugly, duplicate, morbid, mutilated, out of frame, extra fingers, mutated hands, poorly drawn hands, poorly drawn face, mutation, deformed, blurry, dehydrated, bad anatomy, bad proportions, extra limbs, cloned face, disfigured, gross proportions, malformed limbs, missing arms, missing legs, extra arms, extra legs, fused fingers, too many fingers, long neck, username, watermark, signature"
-
- >>> generator = torch.Generator(device="cuda").manual_seed(43)
-
- >>> img_emb = pipe_prior(prompt=prompt, image=img, strength=0.85, generator=generator)
- >>> negative_emb = pipe_prior(prompt=negative_prior_prompt, image=img, strength=1, generator=generator)
-
- >>> images = pipe(
- ... image=img,
- ... strength=0.5,
- ... image_embeds=img_emb.image_embeds,
- ... negative_image_embeds=negative_emb.image_embeds,
- ... hint=hint,
- ... num_inference_steps=50,
- ... generator=generator,
- ... height=768,
- ... width=768,
- ... ).images
-
- >>> images[0].save("robot_cat.png")
- ```
-"""
-
-
-# Copied from diffusers.pipelines.kandinsky2_2.pipeline_kandinsky2_2.downscale_height_and_width
-def downscale_height_and_width(height, width, scale_factor=8):
- new_height = height // scale_factor**2
- if height % scale_factor**2 != 0:
- new_height += 1
- new_width = width // scale_factor**2
- if width % scale_factor**2 != 0:
- new_width += 1
- return new_height * scale_factor, new_width * scale_factor
-
-
-# Copied from diffusers.pipelines.kandinsky.pipeline_kandinsky_img2img.prepare_image
-def prepare_image(pil_image, w=512, h=512):
- pil_image = pil_image.resize((w, h), resample=Image.BICUBIC, reducing_gap=1)
- arr = np.array(pil_image.convert("RGB"))
- arr = arr.astype(np.float32) / 127.5 - 1
- arr = np.transpose(arr, [2, 0, 1])
- image = torch.from_numpy(arr).unsqueeze(0)
- return image
-
-
-class KandinskyV22ControlnetImg2ImgPipeline(DiffusionPipeline):
- """
- Pipeline for image-to-image generation using Kandinsky
-
- This model inherits from [`DiffusionPipeline`]. Check the superclass documentation for the generic methods the
- library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.)
-
- Args:
- scheduler ([`DDIMScheduler`]):
- A scheduler to be used in combination with `unet` to generate image latents.
- unet ([`UNet2DConditionModel`]):
- Conditional U-Net architecture to denoise the image embedding.
- movq ([`VQModel`]):
- MoVQ Decoder to generate the image from the latents.
- """
-
- def __init__(
- self,
- unet: UNet2DConditionModel,
- scheduler: DDPMScheduler,
- movq: VQModel,
- ):
- super().__init__()
-
- self.register_modules(
- unet=unet,
- scheduler=scheduler,
- movq=movq,
- )
- self.movq_scale_factor = 2 ** (len(self.movq.config.block_out_channels) - 1)
-
- # Copied from diffusers.pipelines.kandinsky.pipeline_kandinsky_img2img.KandinskyImg2ImgPipeline.get_timesteps
- def get_timesteps(self, num_inference_steps, strength, device):
- # get the original timestep using init_timestep
- init_timestep = min(int(num_inference_steps * strength), num_inference_steps)
-
- t_start = max(num_inference_steps - init_timestep, 0)
- timesteps = self.scheduler.timesteps[t_start:]
-
- return timesteps, num_inference_steps - t_start
-
- # Copied from diffusers.pipelines.kandinsky2_2.pipeline_kandinsky2_2_img2img.KandinskyV22Img2ImgPipeline.prepare_latents
- def prepare_latents(self, image, timestep, batch_size, num_images_per_prompt, dtype, device, generator=None):
- if not isinstance(image, (torch.Tensor, PIL.Image.Image, list)):
- raise ValueError(
- f"`image` has to be of type `torch.Tensor`, `PIL.Image.Image` or list but is {type(image)}"
- )
-
- image = image.to(device=device, dtype=dtype)
-
- batch_size = batch_size * num_images_per_prompt
-
- if image.shape[1] == 4:
- init_latents = image
-
- else:
- if isinstance(generator, list) and len(generator) != batch_size:
- raise ValueError(
- f"You have passed a list of generators of length {len(generator)}, but requested an effective batch"
- f" size of {batch_size}. Make sure the batch size matches the length of the generators."
- )
-
- elif isinstance(generator, list):
- init_latents = [
- self.movq.encode(image[i : i + 1]).latent_dist.sample(generator[i]) for i in range(batch_size)
- ]
- init_latents = torch.cat(init_latents, dim=0)
- else:
- init_latents = self.movq.encode(image).latent_dist.sample(generator)
-
- init_latents = self.movq.config.scaling_factor * init_latents
-
- init_latents = torch.cat([init_latents], dim=0)
-
- shape = init_latents.shape
- noise = randn_tensor(shape, generator=generator, device=device, dtype=dtype)
-
- # get latents
- init_latents = self.scheduler.add_noise(init_latents, noise, timestep)
-
- latents = init_latents
-
- return latents
-
- # Copied from diffusers.pipelines.kandinsky2_2.pipeline_kandinsky2_2.KandinskyV22Pipeline.enable_model_cpu_offload
- def enable_model_cpu_offload(self, gpu_id=0):
- r"""
- Offloads all models to CPU using accelerate, reducing memory usage with a low impact on performance. Compared
- to `enable_sequential_cpu_offload`, this method moves one whole model at a time to the GPU when its `forward`
- method is called, and the model remains in GPU until the next model runs. Memory savings are lower than with
- `enable_sequential_cpu_offload`, but performance is much better due to the iterative execution of the `unet`.
- """
- if is_accelerate_available() and is_accelerate_version(">=", "0.17.0.dev0"):
- from accelerate import cpu_offload_with_hook
- else:
- raise ImportError("`enable_model_cpu_offload` requires `accelerate v0.17.0` or higher.")
-
- device = torch.device(f"cuda:{gpu_id}")
-
- if self.device.type != "cpu":
- self.to("cpu", silence_dtype_warnings=True)
- torch.cuda.empty_cache() # otherwise we don't see the memory savings (but they probably exist)
-
- hook = None
- for cpu_offloaded_model in [self.unet, self.movq]:
- _, hook = cpu_offload_with_hook(cpu_offloaded_model, device, prev_module_hook=hook)
-
- # We'll offload the last model manually.
- self.final_offload_hook = hook
-
- @torch.no_grad()
- @replace_example_docstring(EXAMPLE_DOC_STRING)
- def __call__(
- self,
- image_embeds: Union[torch.FloatTensor, List[torch.FloatTensor]],
- image: Union[torch.FloatTensor, PIL.Image.Image, List[torch.FloatTensor], List[PIL.Image.Image]],
- negative_image_embeds: Union[torch.FloatTensor, List[torch.FloatTensor]],
- hint: torch.FloatTensor,
- height: int = 512,
- width: int = 512,
- num_inference_steps: int = 100,
- guidance_scale: float = 4.0,
- strength: float = 0.3,
- num_images_per_prompt: int = 1,
- generator: Optional[Union[torch.Generator, List[torch.Generator]]] = None,
- output_type: Optional[str] = "pil",
- callback: Optional[Callable[[int, int, torch.FloatTensor], None]] = None,
- callback_steps: int = 1,
- return_dict: bool = True,
- ):
- """
- Function invoked when calling the pipeline for generation.
-
- Args:
- image_embeds (`torch.FloatTensor` or `List[torch.FloatTensor]`):
-                The CLIP image embeddings for the text prompt, which will be used to condition the image generation.
- image (`torch.FloatTensor`, `PIL.Image.Image`, `np.ndarray`, `List[torch.FloatTensor]`, `List[PIL.Image.Image]`, or `List[np.ndarray]`):
-                `Image`, or tensor representing an image batch, that will be used as the starting point for the
-                process. Can also accept image latents as `image`; if latents are passed directly, they will not be
-                encoded again.
-            strength (`float`, *optional*, defaults to 0.3):
- Conceptually, indicates how much to transform the reference `image`. Must be between 0 and 1. `image`
- will be used as a starting point, adding more noise to it the larger the `strength`. The number of
- denoising steps depends on the amount of noise initially added. When `strength` is 1, added noise will
- be maximum and the denoising process will run for the full number of iterations specified in
- `num_inference_steps`. A value of 1, therefore, essentially ignores `image`.
- hint (`torch.FloatTensor`):
- The controlnet condition.
- negative_image_embeds (`torch.FloatTensor` or `List[torch.FloatTensor]`):
-                The CLIP image embeddings for the negative text prompt, which will be used to condition the image
-                generation.
- height (`int`, *optional*, defaults to 512):
- The height in pixels of the generated image.
- width (`int`, *optional*, defaults to 512):
- The width in pixels of the generated image.
- num_inference_steps (`int`, *optional*, defaults to 100):
- The number of denoising steps. More denoising steps usually lead to a higher quality image at the
- expense of slower inference.
- guidance_scale (`float`, *optional*, defaults to 4.0):
- Guidance scale as defined in [Classifier-Free Diffusion Guidance](https://arxiv.org/abs/2207.12598).
-                `guidance_scale` is defined as `w` of equation 2 of the [Imagen
-                Paper](https://arxiv.org/pdf/2205.11487.pdf). Guidance scale is enabled by setting `guidance_scale >
-                1`. A higher guidance scale encourages the model to generate images that are closely linked to the
-                text `prompt`, usually at the expense of lower image quality.
- num_images_per_prompt (`int`, *optional*, defaults to 1):
- The number of images to generate per prompt.
- generator (`torch.Generator` or `List[torch.Generator]`, *optional*):
- One or a list of [torch generator(s)](https://pytorch.org/docs/stable/generated/torch.Generator.html)
- to make generation deterministic.
- output_type (`str`, *optional*, defaults to `"pil"`):
-                The output format of the generated image. Choose between: `"pil"` (`PIL.Image.Image`), `"np"`
- (`np.array`) or `"pt"` (`torch.Tensor`).
- callback (`Callable`, *optional*):
-                A function that will be called every `callback_steps` steps during inference. The function is called
-                with the following arguments: `callback(step: int, timestep: int, latents: torch.FloatTensor)`.
- callback_steps (`int`, *optional*, defaults to 1):
- The frequency at which the `callback` function is called. If not specified, the callback is called at
- every step.
- return_dict (`bool`, *optional*, defaults to `True`):
- Whether or not to return a [`~pipelines.ImagePipelineOutput`] instead of a plain tuple.
-
- Examples:
-
- Returns:
- [`~pipelines.ImagePipelineOutput`] or `tuple`
- """
- device = self._execution_device
-
- do_classifier_free_guidance = guidance_scale > 1.0
-
- if isinstance(image_embeds, list):
- image_embeds = torch.cat(image_embeds, dim=0)
- if isinstance(negative_image_embeds, list):
- negative_image_embeds = torch.cat(negative_image_embeds, dim=0)
- if isinstance(hint, list):
- hint = torch.cat(hint, dim=0)
-
- batch_size = image_embeds.shape[0]
-
- if do_classifier_free_guidance:
- image_embeds = image_embeds.repeat_interleave(num_images_per_prompt, dim=0)
- negative_image_embeds = negative_image_embeds.repeat_interleave(num_images_per_prompt, dim=0)
- hint = hint.repeat_interleave(num_images_per_prompt, dim=0)
-
- image_embeds = torch.cat([negative_image_embeds, image_embeds], dim=0).to(
- dtype=self.unet.dtype, device=device
- )
- hint = torch.cat([hint, hint], dim=0).to(dtype=self.unet.dtype, device=device)
-
- if not isinstance(image, list):
- image = [image]
- if not all(isinstance(i, (PIL.Image.Image, torch.Tensor)) for i in image):
- raise ValueError(
- f"Input is in incorrect format: {[type(i) for i in image]}. Currently, we only support PIL image and pytorch tensor"
- )
-
- image = torch.cat([prepare_image(i, width, height) for i in image], dim=0)
- image = image.to(dtype=image_embeds.dtype, device=device)
-
- latents = self.movq.encode(image)["latents"]
- latents = latents.repeat_interleave(num_images_per_prompt, dim=0)
- self.scheduler.set_timesteps(num_inference_steps, device=device)
- timesteps, num_inference_steps = self.get_timesteps(num_inference_steps, strength, device)
- latent_timestep = timesteps[:1].repeat(batch_size * num_images_per_prompt)
- height, width = downscale_height_and_width(height, width, self.movq_scale_factor)
- latents = self.prepare_latents(
- latents, latent_timestep, batch_size, num_images_per_prompt, image_embeds.dtype, device, generator
- )
- for i, t in enumerate(self.progress_bar(timesteps)):
- # expand the latents if we are doing classifier free guidance
- latent_model_input = torch.cat([latents] * 2) if do_classifier_free_guidance else latents
-
- added_cond_kwargs = {"image_embeds": image_embeds, "hint": hint}
- noise_pred = self.unet(
- sample=latent_model_input,
- timestep=t,
- encoder_hidden_states=None,
- added_cond_kwargs=added_cond_kwargs,
- return_dict=False,
- )[0]
-
- if do_classifier_free_guidance:
- noise_pred, variance_pred = noise_pred.split(latents.shape[1], dim=1)
- noise_pred_uncond, noise_pred_text = noise_pred.chunk(2)
- _, variance_pred_text = variance_pred.chunk(2)
- noise_pred = noise_pred_uncond + guidance_scale * (noise_pred_text - noise_pred_uncond)
- noise_pred = torch.cat([noise_pred, variance_pred_text], dim=1)
-
- if not (
- hasattr(self.scheduler.config, "variance_type")
- and self.scheduler.config.variance_type in ["learned", "learned_range"]
- ):
- noise_pred, _ = noise_pred.split(latents.shape[1], dim=1)
-
- # compute the previous noisy sample x_t -> x_t-1
-
- latents = self.scheduler.step(
- noise_pred,
- t,
- latents,
- generator=generator,
- )[0]
-
- if callback is not None and i % callback_steps == 0:
- callback(i, t, latents)
-
- # post-processing
- image = self.movq.decode(latents, force_not_quantize=True)["sample"]
-
- # Offload last model to CPU
- if hasattr(self, "final_offload_hook") and self.final_offload_hook is not None:
- self.final_offload_hook.offload()
-
- if output_type not in ["pt", "np", "pil"]:
-            raise ValueError(f"Only the output types `pt`, `pil` and `np` are supported, not output_type={output_type}")
-
- if output_type in ["np", "pil"]:
- image = image * 0.5 + 0.5
- image = image.clamp(0, 1)
- image = image.cpu().permute(0, 2, 3, 1).float().numpy()
-
- if output_type == "pil":
- image = self.numpy_to_pil(image)
-
- if not return_dict:
- return (image,)
-
- return ImagePipelineOutput(images=image)
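For reference, the classifier-free guidance arithmetic inside the denoising loop above can be isolated into a few lines. A minimal sketch with dummy tensors (in the real pipeline `noise_pred` comes from the UNet, and the extra channels are its learned variance):

```python
import torch

guidance_scale = 4.0
latents = torch.randn(1, 4, 64, 64)
# UNet output for the doubled batch: [unconditional, text-conditioned],
# with 8 channels = 4 noise channels + 4 learned-variance channels.
noise_pred = torch.randn(2, 8, 64, 64)

noise_only, variance_pred = noise_pred.split(latents.shape[1], dim=1)
noise_uncond, noise_text = noise_only.chunk(2)
_, variance_text = variance_pred.chunk(2)

guided = noise_uncond + guidance_scale * (noise_text - noise_uncond)
noise_pred = torch.cat([guided, variance_text], dim=1)  # (1, 8, 64, 64), fed to scheduler.step
```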
diff --git a/spaces/Andy1621/uniformer_image_detection/configs/dcn/mask_rcnn_r50_fpn_mdconv_c3-c5_1x_coco.py b/spaces/Andy1621/uniformer_image_detection/configs/dcn/mask_rcnn_r50_fpn_mdconv_c3-c5_1x_coco.py
deleted file mode 100644
index 5ca2a67cde62bff078b7c4c0d696a585265e4c3a..0000000000000000000000000000000000000000
--- a/spaces/Andy1621/uniformer_image_detection/configs/dcn/mask_rcnn_r50_fpn_mdconv_c3-c5_1x_coco.py
+++ /dev/null
@@ -1,5 +0,0 @@
-_base_ = '../mask_rcnn/mask_rcnn_r50_fpn_1x_coco.py'
-model = dict(
- backbone=dict(
- dcn=dict(type='DCNv2', deform_groups=1, fallback_on_stride=False),
- stage_with_dcn=(False, True, True, True)))
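The override above only touches the backbone: `stage_with_dcn` carries one flag per ResNet stage (c2 through c5), so `(False, True, True, True)` enables modulated deformable convolutions from c3 onward. The same fragment can be grafted onto other MMDetection base configs; a hedged sketch (the base config path here is illustrative):

```python
_base_ = '../cascade_rcnn/cascade_rcnn_r50_fpn_1x_coco.py'  # hypothetical base config
model = dict(
    backbone=dict(
        dcn=dict(type='DCNv2', deform_groups=1, fallback_on_stride=False),
        stage_with_dcn=(False, True, True, True)))
```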
diff --git a/spaces/Andy1621/uniformer_image_detection/mmdet/models/dense_heads/anchor_free_head.py b/spaces/Andy1621/uniformer_image_detection/mmdet/models/dense_heads/anchor_free_head.py
deleted file mode 100644
index 1814a0cc4f577f470f74f025440073a0aaa1ebd0..0000000000000000000000000000000000000000
--- a/spaces/Andy1621/uniformer_image_detection/mmdet/models/dense_heads/anchor_free_head.py
+++ /dev/null
@@ -1,340 +0,0 @@
-from abc import abstractmethod
-
-import torch
-import torch.nn as nn
-from mmcv.cnn import ConvModule, bias_init_with_prob, normal_init
-from mmcv.runner import force_fp32
-
-from mmdet.core import multi_apply
-from ..builder import HEADS, build_loss
-from .base_dense_head import BaseDenseHead
-from .dense_test_mixins import BBoxTestMixin
-
-
-@HEADS.register_module()
-class AnchorFreeHead(BaseDenseHead, BBoxTestMixin):
- """Anchor-free head (FCOS, Fovea, RepPoints, etc.).
-
- Args:
- num_classes (int): Number of categories excluding the background
- category.
- in_channels (int): Number of channels in the input feature map.
- feat_channels (int): Number of hidden channels. Used in child classes.
- stacked_convs (int): Number of stacking convs of the head.
- strides (tuple): Downsample factor of each feature map.
- dcn_on_last_conv (bool): If true, use dcn in the last layer of
- towers. Default: False.
- conv_bias (bool | str): If specified as `auto`, it will be decided by
- the norm_cfg. Bias of conv will be set as True if `norm_cfg` is
- None, otherwise False. Default: "auto".
- loss_cls (dict): Config of classification loss.
- loss_bbox (dict): Config of localization loss.
- conv_cfg (dict): Config dict for convolution layer. Default: None.
- norm_cfg (dict): Config dict for normalization layer. Default: None.
- train_cfg (dict): Training config of anchor head.
- test_cfg (dict): Testing config of anchor head.
- """ # noqa: W605
-
- _version = 1
-
- def __init__(self,
- num_classes,
- in_channels,
- feat_channels=256,
- stacked_convs=4,
- strides=(4, 8, 16, 32, 64),
- dcn_on_last_conv=False,
- conv_bias='auto',
- loss_cls=dict(
- type='FocalLoss',
- use_sigmoid=True,
- gamma=2.0,
- alpha=0.25,
- loss_weight=1.0),
- loss_bbox=dict(type='IoULoss', loss_weight=1.0),
- conv_cfg=None,
- norm_cfg=None,
- train_cfg=None,
- test_cfg=None):
- super(AnchorFreeHead, self).__init__()
- self.num_classes = num_classes
- self.cls_out_channels = num_classes
- self.in_channels = in_channels
- self.feat_channels = feat_channels
- self.stacked_convs = stacked_convs
- self.strides = strides
- self.dcn_on_last_conv = dcn_on_last_conv
- assert conv_bias == 'auto' or isinstance(conv_bias, bool)
- self.conv_bias = conv_bias
- self.loss_cls = build_loss(loss_cls)
- self.loss_bbox = build_loss(loss_bbox)
- self.train_cfg = train_cfg
- self.test_cfg = test_cfg
- self.conv_cfg = conv_cfg
- self.norm_cfg = norm_cfg
- self.fp16_enabled = False
-
- self._init_layers()
-
- def _init_layers(self):
- """Initialize layers of the head."""
- self._init_cls_convs()
- self._init_reg_convs()
- self._init_predictor()
-
- def _init_cls_convs(self):
- """Initialize classification conv layers of the head."""
- self.cls_convs = nn.ModuleList()
- for i in range(self.stacked_convs):
- chn = self.in_channels if i == 0 else self.feat_channels
- if self.dcn_on_last_conv and i == self.stacked_convs - 1:
- conv_cfg = dict(type='DCNv2')
- else:
- conv_cfg = self.conv_cfg
- self.cls_convs.append(
- ConvModule(
- chn,
- self.feat_channels,
- 3,
- stride=1,
- padding=1,
- conv_cfg=conv_cfg,
- norm_cfg=self.norm_cfg,
- bias=self.conv_bias))
-
- def _init_reg_convs(self):
- """Initialize bbox regression conv layers of the head."""
- self.reg_convs = nn.ModuleList()
- for i in range(self.stacked_convs):
- chn = self.in_channels if i == 0 else self.feat_channels
- if self.dcn_on_last_conv and i == self.stacked_convs - 1:
- conv_cfg = dict(type='DCNv2')
- else:
- conv_cfg = self.conv_cfg
- self.reg_convs.append(
- ConvModule(
- chn,
- self.feat_channels,
- 3,
- stride=1,
- padding=1,
- conv_cfg=conv_cfg,
- norm_cfg=self.norm_cfg,
- bias=self.conv_bias))
-
- def _init_predictor(self):
- """Initialize predictor layers of the head."""
- self.conv_cls = nn.Conv2d(
- self.feat_channels, self.cls_out_channels, 3, padding=1)
- self.conv_reg = nn.Conv2d(self.feat_channels, 4, 3, padding=1)
-
- def init_weights(self):
- """Initialize weights of the head."""
- for m in self.cls_convs:
- if isinstance(m.conv, nn.Conv2d):
- normal_init(m.conv, std=0.01)
- for m in self.reg_convs:
- if isinstance(m.conv, nn.Conv2d):
- normal_init(m.conv, std=0.01)
- bias_cls = bias_init_with_prob(0.01)
- normal_init(self.conv_cls, std=0.01, bias=bias_cls)
- normal_init(self.conv_reg, std=0.01)
-
- def _load_from_state_dict(self, state_dict, prefix, local_metadata, strict,
- missing_keys, unexpected_keys, error_msgs):
- """Hack some keys of the model state dict so that can load checkpoints
- of previous version."""
- version = local_metadata.get('version', None)
- if version is None:
- # the key is different in early versions
- # for example, 'fcos_cls' become 'conv_cls' now
- bbox_head_keys = [
- k for k in state_dict.keys() if k.startswith(prefix)
- ]
- ori_predictor_keys = []
- new_predictor_keys = []
- # e.g. 'fcos_cls' or 'fcos_reg'
- for key in bbox_head_keys:
- ori_predictor_keys.append(key)
- key = key.split('.')
- conv_name = None
- if key[1].endswith('cls'):
- conv_name = 'conv_cls'
- elif key[1].endswith('reg'):
- conv_name = 'conv_reg'
- elif key[1].endswith('centerness'):
- conv_name = 'conv_centerness'
- else:
- assert NotImplementedError
- if conv_name is not None:
- key[1] = conv_name
- new_predictor_keys.append('.'.join(key))
- else:
- ori_predictor_keys.pop(-1)
- for i in range(len(new_predictor_keys)):
- state_dict[new_predictor_keys[i]] = state_dict.pop(
- ori_predictor_keys[i])
- super()._load_from_state_dict(state_dict, prefix, local_metadata,
- strict, missing_keys, unexpected_keys,
- error_msgs)
-
- def forward(self, feats):
- """Forward features from the upstream network.
-
- Args:
- feats (tuple[Tensor]): Features from the upstream network, each is
- a 4D-tensor.
-
- Returns:
- tuple: Usually contain classification scores and bbox predictions.
- cls_scores (list[Tensor]): Box scores for each scale level,
- each is a 4D-tensor, the channel number is
- num_points * num_classes.
- bbox_preds (list[Tensor]): Box energies / deltas for each scale
- level, each is a 4D-tensor, the channel number is
- num_points * 4.
- """
- return multi_apply(self.forward_single, feats)[:2]
-
- def forward_single(self, x):
- """Forward features of a single scale level.
-
- Args:
- x (Tensor): FPN feature maps of the specified stride.
-
- Returns:
- tuple: Scores for each class, bbox predictions, features
- after classification and regression conv layers, some
- models needs these features like FCOS.
- """
- cls_feat = x
- reg_feat = x
-
- for cls_layer in self.cls_convs:
- cls_feat = cls_layer(cls_feat)
- cls_score = self.conv_cls(cls_feat)
-
- for reg_layer in self.reg_convs:
- reg_feat = reg_layer(reg_feat)
- bbox_pred = self.conv_reg(reg_feat)
- return cls_score, bbox_pred, cls_feat, reg_feat
-
- @abstractmethod
- @force_fp32(apply_to=('cls_scores', 'bbox_preds'))
- def loss(self,
- cls_scores,
- bbox_preds,
- gt_bboxes,
- gt_labels,
- img_metas,
- gt_bboxes_ignore=None):
- """Compute loss of the head.
-
- Args:
- cls_scores (list[Tensor]): Box scores for each scale level,
- each is a 4D-tensor, the channel number is
- num_points * num_classes.
- bbox_preds (list[Tensor]): Box energies / deltas for each scale
- level, each is a 4D-tensor, the channel number is
- num_points * 4.
- gt_bboxes (list[Tensor]): Ground truth bboxes for each image with
- shape (num_gts, 4) in [tl_x, tl_y, br_x, br_y] format.
- gt_labels (list[Tensor]): class indices corresponding to each box
- img_metas (list[dict]): Meta information of each image, e.g.,
- image size, scaling factor, etc.
- gt_bboxes_ignore (None | list[Tensor]): specify which bounding
- boxes can be ignored when computing the loss.
- """
-
- raise NotImplementedError
-
- @abstractmethod
- @force_fp32(apply_to=('cls_scores', 'bbox_preds'))
- def get_bboxes(self,
- cls_scores,
- bbox_preds,
- img_metas,
- cfg=None,
- rescale=None):
- """Transform network output for a batch into bbox predictions.
-
- Args:
- cls_scores (list[Tensor]): Box scores for each scale level
- Has shape (N, num_points * num_classes, H, W)
- bbox_preds (list[Tensor]): Box energies / deltas for each scale
- level with shape (N, num_points * 4, H, W)
- img_metas (list[dict]): Meta information of each image, e.g.,
- image size, scaling factor, etc.
- cfg (mmcv.Config): Test / postprocessing configuration,
- if None, test_cfg would be used
- rescale (bool): If True, return boxes in original image space
- """
-
- raise NotImplementedError
-
- @abstractmethod
- def get_targets(self, points, gt_bboxes_list, gt_labels_list):
- """Compute regression, classification and centerness targets for points
- in multiple images.
-
- Args:
- points (list[Tensor]): Points of each fpn level, each has shape
- (num_points, 2).
- gt_bboxes_list (list[Tensor]): Ground truth bboxes of each image,
- each has shape (num_gt, 4).
- gt_labels_list (list[Tensor]): Ground truth labels of each box,
- each has shape (num_gt,).
- """
- raise NotImplementedError
-
- def _get_points_single(self,
- featmap_size,
- stride,
- dtype,
- device,
- flatten=False):
- """Get points of a single scale level."""
- h, w = featmap_size
- x_range = torch.arange(w, dtype=dtype, device=device)
- y_range = torch.arange(h, dtype=dtype, device=device)
- y, x = torch.meshgrid(y_range, x_range)
- if flatten:
- y = y.flatten()
- x = x.flatten()
- return y, x
-
- def get_points(self, featmap_sizes, dtype, device, flatten=False):
- """Get points according to feature map sizes.
-
- Args:
- featmap_sizes (list[tuple]): Multi-level feature map sizes.
- dtype (torch.dtype): Type of points.
- device (torch.device): Device of points.
-
- Returns:
- tuple: points of each image.
- """
- mlvl_points = []
- for i in range(len(featmap_sizes)):
- mlvl_points.append(
- self._get_points_single(featmap_sizes[i], self.strides[i],
- dtype, device, flatten))
- return mlvl_points
-
- def aug_test(self, feats, img_metas, rescale=False):
- """Test function with test time augmentation.
-
- Args:
- feats (list[Tensor]): the outer list indicates test-time
- augmentations and inner Tensor should have a shape NxCxHxW,
- which contains features for all images in the batch.
- img_metas (list[list[dict]]): the outer list indicates test-time
- augs (multiscale, flip, etc.) and the inner list indicates
- images in a batch. each dict has image information.
- rescale (bool, optional): Whether to rescale the results.
- Defaults to False.
-
- Returns:
- list[ndarray]: bbox results of each class
- """
- return self.aug_test_bboxes(feats, img_metas, rescale=rescale)
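The `_get_points_single` hook above only builds the raw `(y, x)` grids; subclasses such as FCOS typically scale them by the feature stride and shift to the cell centers to obtain image-plane coordinates. A self-contained sketch of that usual conversion (dummy sizes, not tied to a specific head):

```python
import torch

def points_for_level(featmap_size, stride, dtype=torch.float32, device="cpu"):
    h, w = featmap_size
    x_range = torch.arange(w, dtype=dtype, device=device)
    y_range = torch.arange(h, dtype=dtype, device=device)
    y, x = torch.meshgrid(y_range, x_range)        # grid indices per feature location
    # Convert feature-map indices to image coordinates at the cell centers.
    points = torch.stack((x.reshape(-1) * stride + stride // 2,
                          y.reshape(-1) * stride + stride // 2), dim=-1)
    return points                                   # shape (h * w, 2), (x, y) order

print(points_for_level((2, 3), stride=8))           # 6 points on the stride-8 level
```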
diff --git a/spaces/Andy1621/uniformer_image_segmentation/configs/fcn/fcn_r50-d8_480x480_40k_pascal_context_59.py b/spaces/Andy1621/uniformer_image_segmentation/configs/fcn/fcn_r50-d8_480x480_40k_pascal_context_59.py
deleted file mode 100644
index 4a8180038be33fba9c3229ee3c017f2f0628544f..0000000000000000000000000000000000000000
--- a/spaces/Andy1621/uniformer_image_segmentation/configs/fcn/fcn_r50-d8_480x480_40k_pascal_context_59.py
+++ /dev/null
@@ -1,10 +0,0 @@
-_base_ = [
- '../_base_/models/fcn_r50-d8.py',
- '../_base_/datasets/pascal_context_59.py', '../_base_/default_runtime.py',
- '../_base_/schedules/schedule_40k.py'
-]
-model = dict(
- decode_head=dict(num_classes=59),
- auxiliary_head=dict(num_classes=59),
- test_cfg=dict(mode='slide', crop_size=(480, 480), stride=(320, 320)))
-optimizer = dict(type='SGD', lr=0.004, momentum=0.9, weight_decay=0.0001)
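The `test_cfg` above switches evaluation to sliding-window inference: 480x480 crops taken every 320 pixels, so neighbouring windows overlap by 160 pixels and their logits are accumulated and normalized by a count map. A small sketch (pure Python, ignoring padding corner cases) of where those windows land on an image:

```python
def slide_window_origins(img_size, crop, stride):
    """Top-left corners of the crops used in slide-mode inference (sketch)."""
    h, w = img_size
    ys = list(range(0, max(h - crop[0], 0) + 1, stride[0]))
    xs = list(range(0, max(w - crop[1], 0) + 1, stride[1]))
    if ys[-1] + crop[0] < h:   # make the last row of windows touch the bottom edge
        ys.append(h - crop[0])
    if xs[-1] + crop[1] < w:   # ... and the right edge
        xs.append(w - crop[1])
    return [(y, x) for y in ys for x in xs]

# A 512x683 image is covered by 4 overlapping 480x480 windows.
print(slide_window_origins((512, 683), crop=(480, 480), stride=(320, 320)))
```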
diff --git a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/rich/_stack.py b/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/rich/_stack.py
deleted file mode 100644
index 194564e761ddae165b39ef6598877e2e3820af0a..0000000000000000000000000000000000000000
--- a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/rich/_stack.py
+++ /dev/null
@@ -1,16 +0,0 @@
-from typing import List, TypeVar
-
-T = TypeVar("T")
-
-
-class Stack(List[T]):
- """A small shim over builtin list."""
-
- @property
- def top(self) -> T:
- """Get top of stack."""
- return self[-1]
-
- def push(self, item: T) -> None:
- """Push an item on to the stack (append in stack nomenclature)."""
- self.append(item)
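A short usage sketch for the shim above; the class is re-declared here only so the snippet runs on its own:

```python
from typing import List, TypeVar

T = TypeVar("T")

class Stack(List[T]):
    """Copy of the shim above: a list with stack-flavoured helpers."""

    @property
    def top(self) -> T:
        return self[-1]

    def push(self, item: T) -> None:
        self.append(item)

styles: Stack[str] = Stack()
styles.push("default")
styles.push("bold red")
assert styles.top == "bold red"
styles.pop()                 # plain list methods still apply
assert styles.top == "default"
```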
diff --git a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/setuptools/extension.py b/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/setuptools/extension.py
deleted file mode 100644
index 58c023f6b4479c631f382e5062932793d2bee26b..0000000000000000000000000000000000000000
--- a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/setuptools/extension.py
+++ /dev/null
@@ -1,148 +0,0 @@
-import re
-import functools
-import distutils.core
-import distutils.errors
-import distutils.extension
-
-from .monkey import get_unpatched
-
-
-def _have_cython():
- """
- Return True if Cython can be imported.
- """
- cython_impl = 'Cython.Distutils.build_ext'
- try:
- # from (cython_impl) import build_ext
- __import__(cython_impl, fromlist=['build_ext']).build_ext
- return True
- except Exception:
- pass
- return False
-
-
-# for compatibility
-have_pyrex = _have_cython
-
-_Extension = get_unpatched(distutils.core.Extension)
-
-
-class Extension(_Extension):
- """
- Describes a single extension module.
-
- This means that all source files will be compiled into a single binary file
- ``.`` (with ```` derived from ``name`` and
- ```` defined by one of the values in
- ``importlib.machinery.EXTENSION_SUFFIXES``).
-
-    In the case ``.pyx`` files are passed as ``sources`` and ``Cython`` is **not**
- installed in the build environment, ``setuptools`` may also try to look for the
- equivalent ``.cpp`` or ``.c`` files.
-
- :arg str name:
- the full name of the extension, including any packages -- ie.
- *not* a filename or pathname, but Python dotted name
-
- :arg list[str] sources:
- list of source filenames, relative to the distribution root
- (where the setup script lives), in Unix form (slash-separated)
- for portability. Source files may be C, C++, SWIG (.i),
- platform-specific resource files, or whatever else is recognized
- by the "build_ext" command as source for a Python extension.
-
- :keyword list[str] include_dirs:
- list of directories to search for C/C++ header files (in Unix
- form for portability)
-
- :keyword list[tuple[str, str|None]] define_macros:
- list of macros to define; each macro is defined using a 2-tuple:
- the first item corresponding to the name of the macro and the second
- item either a string with its value or None to
- define it without a particular value (equivalent of "#define
- FOO" in source or -DFOO on Unix C compiler command line)
-
- :keyword list[str] undef_macros:
- list of macros to undefine explicitly
-
- :keyword list[str] library_dirs:
- list of directories to search for C/C++ libraries at link time
-
- :keyword list[str] libraries:
- list of library names (not filenames or paths) to link against
-
- :keyword list[str] runtime_library_dirs:
- list of directories to search for C/C++ libraries at run time
- (for shared extensions, this is when the extension is loaded).
- Setting this will cause an exception during build on Windows
- platforms.
-
- :keyword list[str] extra_objects:
- list of extra files to link with (eg. object files not implied
- by 'sources', static library that must be explicitly specified,
- binary resource files, etc.)
-
- :keyword list[str] extra_compile_args:
- any extra platform- and compiler-specific information to use
- when compiling the source files in 'sources'. For platforms and
- compilers where "command line" makes sense, this is typically a
- list of command-line arguments, but for other platforms it could
- be anything.
-
- :keyword list[str] extra_link_args:
- any extra platform- and compiler-specific information to use
- when linking object files together to create the extension (or
- to create a new static Python interpreter). Similar
- interpretation as for 'extra_compile_args'.
-
- :keyword list[str] export_symbols:
- list of symbols to be exported from a shared extension. Not
- used on all platforms, and not generally necessary for Python
- extensions, which typically export exactly one symbol: "init" +
- extension_name.
-
- :keyword list[str] swig_opts:
- any extra options to pass to SWIG if a source file has the .i
- extension.
-
- :keyword list[str] depends:
- list of files that the extension depends on
-
- :keyword str language:
- extension language (i.e. "c", "c++", "objc"). Will be detected
- from the source extensions if not provided.
-
- :keyword bool optional:
- specifies that a build failure in the extension should not abort the
- build process, but simply not install the failing extension.
-
- :keyword bool py_limited_api:
-        opt-in flag for the usage of :doc:`Python's limited API`.
-
- :raises setuptools.errors.PlatformError: if 'runtime_library_dirs' is
- specified on Windows. (since v63)
- """
-
- def __init__(self, name, sources, *args, **kw):
- # The *args is needed for compatibility as calls may use positional
- # arguments. py_limited_api may be set only via keyword.
- self.py_limited_api = kw.pop("py_limited_api", False)
- super().__init__(name, sources, *args, **kw)
-
- def _convert_pyx_sources_to_lang(self):
- """
- Replace sources with .pyx extensions to sources with the target
- language extension. This mechanism allows language authors to supply
- pre-converted sources but to prefer the .pyx sources.
- """
- if _have_cython():
- # the build has Cython, so allow it to compile the .pyx files
- return
- lang = self.language or ''
- target_ext = '.cpp' if lang.lower() == 'c++' else '.c'
- sub = functools.partial(re.sub, '.pyx$', target_ext)
- self.sources = list(map(sub, self.sources))
-
-
-class Library(Extension):
- """Just like a regular Extension, but built as a library instead"""
diff --git a/spaces/AtomdffAI/wechatgpt4atom/README.md b/spaces/AtomdffAI/wechatgpt4atom/README.md
deleted file mode 100644
index a060c61d40b2162b8e7cdf6100991a8a45cc5b9a..0000000000000000000000000000000000000000
--- a/spaces/AtomdffAI/wechatgpt4atom/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: wechat-bot
-emoji: 👀
-colorFrom: red
-colorTo: yellow
-sdk: gradio
-sdk_version: 3.19.1
-app_file: app.py
-pinned: false
-duplicated_from: lewisliuX123/wechatgpt3
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
\ No newline at end of file
diff --git a/spaces/Awesimo/jojogan/e4e/training/__init__.py b/spaces/Awesimo/jojogan/e4e/training/__init__.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/detectron2/modeling/meta_arch/fcos.py b/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/detectron2/modeling/meta_arch/fcos.py
deleted file mode 100644
index 55cdb76e836214a2b5a7a4a5a5c47e3382dee86d..0000000000000000000000000000000000000000
--- a/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/detectron2/modeling/meta_arch/fcos.py
+++ /dev/null
@@ -1,303 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-
-import logging
-from typing import List, Optional, Tuple
-import torch
-from fvcore.nn import sigmoid_focal_loss_jit
-from torch import Tensor, nn
-from torch.nn import functional as F
-
-from detectron2.layers import ShapeSpec, batched_nms
-from detectron2.structures import Boxes, ImageList, Instances, pairwise_point_box_distance
-from detectron2.utils.events import get_event_storage
-
-from ..anchor_generator import DefaultAnchorGenerator
-from ..backbone import Backbone
-from ..box_regression import Box2BoxTransformLinear, _dense_box_regression_loss
-from .dense_detector import DenseDetector
-from .retinanet import RetinaNetHead
-
-__all__ = ["FCOS"]
-
-
-logger = logging.getLogger(__name__)
-
-
-class FCOS(DenseDetector):
- """
- Implement FCOS in :paper:`fcos`.
- """
-
- def __init__(
- self,
- *,
- backbone: Backbone,
- head: nn.Module,
- head_in_features: Optional[List[str]] = None,
- box2box_transform=None,
- num_classes,
- center_sampling_radius: float = 1.5,
- focal_loss_alpha=0.25,
- focal_loss_gamma=2.0,
- test_score_thresh=0.2,
- test_topk_candidates=1000,
- test_nms_thresh=0.6,
- max_detections_per_image=100,
- pixel_mean,
- pixel_std,
- ):
- """
- Args:
- center_sampling_radius: radius of the "center" of a groundtruth box,
- within which all anchor points are labeled positive.
- Other arguments mean the same as in :class:`RetinaNet`.
- """
- super().__init__(
- backbone, head, head_in_features, pixel_mean=pixel_mean, pixel_std=pixel_std
- )
-
- self.num_classes = num_classes
-
- # FCOS uses one anchor point per location.
- # We represent the anchor point by a box whose size equals the anchor stride.
- feature_shapes = backbone.output_shape()
- fpn_strides = [feature_shapes[k].stride for k in self.head_in_features]
- self.anchor_generator = DefaultAnchorGenerator(
- sizes=[[k] for k in fpn_strides], aspect_ratios=[1.0], strides=fpn_strides
- )
-
- # FCOS parameterizes box regression by a linear transform,
- # where predictions are normalized by anchor stride (equal to anchor size).
- if box2box_transform is None:
- box2box_transform = Box2BoxTransformLinear(normalize_by_size=True)
- self.box2box_transform = box2box_transform
-
- self.center_sampling_radius = float(center_sampling_radius)
-
- # Loss parameters:
- self.focal_loss_alpha = focal_loss_alpha
- self.focal_loss_gamma = focal_loss_gamma
-
- # Inference parameters:
- self.test_score_thresh = test_score_thresh
- self.test_topk_candidates = test_topk_candidates
- self.test_nms_thresh = test_nms_thresh
- self.max_detections_per_image = max_detections_per_image
-
- def forward_training(self, images, features, predictions, gt_instances):
- # Transpose the Hi*Wi*A dimension to the middle:
- pred_logits, pred_anchor_deltas, pred_centerness = self._transpose_dense_predictions(
- predictions, [self.num_classes, 4, 1]
- )
- anchors = self.anchor_generator(features)
- gt_labels, gt_boxes = self.label_anchors(anchors, gt_instances)
- return self.losses(
- anchors, pred_logits, gt_labels, pred_anchor_deltas, gt_boxes, pred_centerness
- )
-
- @torch.no_grad()
- def match_anchors(self, anchors: List[Boxes], gt_instances: List[Instances]):
- """
- Match anchors with ground truth boxes.
-
- Args:
- anchors: #level boxes, from the highest resolution to lower resolution
- gt_instances: ground truth instances per image
-
- Returns:
- List[Tensor]:
- #image tensors, each is a vector of matched gt
- indices (or -1 for unmatched anchors) for all anchors.
- """
- num_anchors_per_level = [len(x) for x in anchors]
- anchors = Boxes.cat(anchors) # Rx4
- anchor_centers = anchors.get_centers() # Rx2
- anchor_sizes = anchors.tensor[:, 2] - anchors.tensor[:, 0] # R
-
- lower_bound = anchor_sizes * 4
- lower_bound[: num_anchors_per_level[0]] = 0
- upper_bound = anchor_sizes * 8
- upper_bound[-num_anchors_per_level[-1] :] = float("inf")
-
- matched_indices = []
- for gt_per_image in gt_instances:
- gt_centers = gt_per_image.gt_boxes.get_centers() # Nx2
- # FCOS with center sampling: anchor point must be close enough to gt center.
- pairwise_match = (anchor_centers[:, None, :] - gt_centers[None, :, :]).abs_().max(
- dim=2
- ).values < self.center_sampling_radius * anchor_sizes[:, None]
- pairwise_dist = pairwise_point_box_distance(anchor_centers, gt_per_image.gt_boxes)
-
- # The original FCOS anchor matching rule: anchor point must be inside gt
- pairwise_match &= pairwise_dist.min(dim=2).values > 0
-
- # Multilevel anchor matching in FCOS: each anchor is only responsible
- # for certain scale range.
- pairwise_dist = pairwise_dist.max(dim=2).values
- pairwise_match &= (pairwise_dist > lower_bound[:, None]) & (
- pairwise_dist < upper_bound[:, None]
- )
-
- # Match the GT box with minimum area, if there are multiple GT matches
- gt_areas = gt_per_image.gt_boxes.area() # N
- pairwise_match = pairwise_match.to(torch.float32) * (1e8 - gt_areas[None, :])
- min_values, matched_idx = pairwise_match.max(dim=1) # R, per-anchor match
- matched_idx[min_values < 1e-5] = -1 # Unmatched anchors are assigned -1
-
- matched_indices.append(matched_idx)
- return matched_indices
-
- @torch.no_grad()
- def label_anchors(self, anchors, gt_instances):
- """
- Same interface as :meth:`RetinaNet.label_anchors`, but implemented with FCOS
- anchor matching rule.
-
- Unlike RetinaNet, there are no ignored anchors.
- """
- matched_indices = self.match_anchors(anchors, gt_instances)
-
- matched_labels, matched_boxes = [], []
- for gt_index, gt_per_image in zip(matched_indices, gt_instances):
- label = gt_per_image.gt_classes[gt_index.clip(min=0)]
- label[gt_index < 0] = self.num_classes # background
-
- matched_gt_boxes = gt_per_image.gt_boxes[gt_index.clip(min=0)]
-
- matched_labels.append(label)
- matched_boxes.append(matched_gt_boxes)
- return matched_labels, matched_boxes
-
- def losses(
- self, anchors, pred_logits, gt_labels, pred_anchor_deltas, gt_boxes, pred_centerness
- ):
- """
- This method is almost identical to :meth:`RetinaNet.losses`, with an extra
- "loss_centerness" in the returned dict.
- """
- num_images = len(gt_labels)
- gt_labels = torch.stack(gt_labels) # (N, R)
-
- pos_mask = (gt_labels >= 0) & (gt_labels != self.num_classes)
- num_pos_anchors = pos_mask.sum().item()
- get_event_storage().put_scalar("num_pos_anchors", num_pos_anchors / num_images)
- normalizer = self._ema_update("loss_normalizer", max(num_pos_anchors, 1), 300)
-
- # classification and regression loss
- gt_labels_target = F.one_hot(gt_labels, num_classes=self.num_classes + 1)[
- :, :, :-1
- ] # no loss for the last (background) class
- loss_cls = sigmoid_focal_loss_jit(
- torch.cat(pred_logits, dim=1),
- gt_labels_target.to(pred_logits[0].dtype),
- alpha=self.focal_loss_alpha,
- gamma=self.focal_loss_gamma,
- reduction="sum",
- )
-
- loss_box_reg = _dense_box_regression_loss(
- anchors,
- self.box2box_transform,
- pred_anchor_deltas,
- [x.tensor for x in gt_boxes],
- pos_mask,
- box_reg_loss_type="giou",
- )
-
- ctrness_targets = self.compute_ctrness_targets(anchors, gt_boxes) # NxR
- pred_centerness = torch.cat(pred_centerness, dim=1).squeeze(dim=2) # NxR
- ctrness_loss = F.binary_cross_entropy_with_logits(
- pred_centerness[pos_mask], ctrness_targets[pos_mask], reduction="sum"
- )
- return {
- "loss_fcos_cls": loss_cls / normalizer,
- "loss_fcos_loc": loss_box_reg / normalizer,
- "loss_fcos_ctr": ctrness_loss / normalizer,
- }
-
- def compute_ctrness_targets(self, anchors, gt_boxes): # NxR
- anchors = Boxes.cat(anchors).tensor # Rx4
- reg_targets = [self.box2box_transform.get_deltas(anchors, m.tensor) for m in gt_boxes]
- reg_targets = torch.stack(reg_targets, dim=0) # NxRx4
- if len(reg_targets) == 0:
- return reg_targets.new_zeros(len(reg_targets))
- left_right = reg_targets[:, :, [0, 2]]
- top_bottom = reg_targets[:, :, [1, 3]]
- ctrness = (left_right.min(dim=-1)[0] / left_right.max(dim=-1)[0]) * (
- top_bottom.min(dim=-1)[0] / top_bottom.max(dim=-1)[0]
- )
- return torch.sqrt(ctrness)
-
- def forward_inference(
- self, images: ImageList, features: List[Tensor], predictions: List[List[Tensor]]
- ):
- pred_logits, pred_anchor_deltas, pred_centerness = self._transpose_dense_predictions(
- predictions, [self.num_classes, 4, 1]
- )
- anchors = self.anchor_generator(features)
-
- results: List[Instances] = []
- for img_idx, image_size in enumerate(images.image_sizes):
- scores_per_image = [
- # Multiply and sqrt centerness & classification scores
- # (See eqn. 4 in https://arxiv.org/abs/2006.09214)
- torch.sqrt(x[img_idx].sigmoid_() * y[img_idx].sigmoid_())
- for x, y in zip(pred_logits, pred_centerness)
- ]
- deltas_per_image = [x[img_idx] for x in pred_anchor_deltas]
- results_per_image = self.inference_single_image(
- anchors, scores_per_image, deltas_per_image, image_size
- )
- results.append(results_per_image)
- return results
-
- def inference_single_image(
- self,
- anchors: List[Boxes],
- box_cls: List[Tensor],
- box_delta: List[Tensor],
- image_size: Tuple[int, int],
- ):
- """
- Identical to :meth:`RetinaNet.inference_single_image.
- """
- pred = self._decode_multi_level_predictions(
- anchors,
- box_cls,
- box_delta,
- self.test_score_thresh,
- self.test_topk_candidates,
- image_size,
- )
- keep = batched_nms(
- pred.pred_boxes.tensor, pred.scores, pred.pred_classes, self.test_nms_thresh
- )
- return pred[keep[: self.max_detections_per_image]]
-
-
-class FCOSHead(RetinaNetHead):
- """
- The head used in :paper:`fcos`. It adds an additional centerness
- prediction branch on top of :class:`RetinaNetHead`.
- """
-
- def __init__(self, *, input_shape: List[ShapeSpec], conv_dims: List[int], **kwargs):
- super().__init__(input_shape=input_shape, conv_dims=conv_dims, num_anchors=1, **kwargs)
- # Unlike original FCOS, we do not add an additional learnable scale layer
- # because it's found to have no benefits after normalizing regression targets by stride.
- self._num_features = len(input_shape)
- self.ctrness = nn.Conv2d(conv_dims[-1], 1, kernel_size=3, stride=1, padding=1)
- torch.nn.init.normal_(self.ctrness.weight, std=0.01)
- torch.nn.init.constant_(self.ctrness.bias, 0)
-
- def forward(self, features):
- assert len(features) == self._num_features
- logits = []
- bbox_reg = []
- ctrness = []
- for feature in features:
- logits.append(self.cls_score(self.cls_subnet(feature)))
- bbox_feature = self.bbox_subnet(feature)
- bbox_reg.append(self.bbox_pred(bbox_feature))
- ctrness.append(self.ctrness(bbox_feature))
- return logits, bbox_reg, ctrness
diff --git a/spaces/Benson/text-generation/Examples/Blockman Ir Aventura Hack Apk 2022 Cubos Ilimitados.md b/spaces/Benson/text-generation/Examples/Blockman Ir Aventura Hack Apk 2022 Cubos Ilimitados.md
deleted file mode 100644
index 936b803a744f204b10a5cd5ac8a05433ad086bec..0000000000000000000000000000000000000000
--- a/spaces/Benson/text-generation/Examples/Blockman Ir Aventura Hack Apk 2022 Cubos Ilimitados.md
+++ /dev/null
@@ -1,83 +0,0 @@
-
-
Blockman Go aventura Hack APK 2022 cubos ilimitados
-
¿Te encanta jugar juegos basados en bloques con tus amigos? ¿Quieres explorar diferentes mundos y completar varios desafíos? Si es así, entonces deberías probar Blockman Go Adventure, un juego divertido y adictivo que te permite crear tu propio avatar, personalizar tus ajustes y unirte a millones de jugadores en línea. ¡Pero espera, hay más! También puede utilizar Blockman Go Aventura Hack APK, una versión modificada del juego que le da cubos ilimitados, monedas, gemas, y otros recursos. En este artículo, le diremos todo lo que necesita saber sobre Blockman Go Adventure y Blockman Go Adventure Hack APK, incluyendo sus características, cómo jugarlos, y algunos consejos y trucos para aprovechar al máximo su experiencia de juego. ¡Vamos a empezar!
-
blockman ir aventura hack apk 2022 cubos ilimitados
Blockman Go Adventure es un juego online gratuito que combina elementos de sandbox, aventura y juegos sociales. Está desarrollado por Blockman GO Studio, un equipo de desarrolladores de juegos creativos y apasionados que tienen como objetivo proporcionar juegos de alta calidad para jugadores de todas las edades. Blockman Go Adventure es uno de sus juegos más populares, con más de 10 millones de descargas en Google Play Store y una calificación de 4.4 estrellas.
-
Características de Blockman Go Adventure
-
Blockman Go Adventure tiene muchas características que lo hacen un juego agradable y atractivo para todos. Algunas de estas características son:
-
-
Múltiples minijuegos: Puedes elegir entre más de 100 minijuegos que se adapten a tus preferencias y habilidades. Si te gustan los juegos de carreras, disparos, parkour o rompecabezas, encontrarás algo que te interesa en Blockman Go Adventure.
-
Diversos mundos: Puedes explorar diferentes mundos que tienen sus propios temas, entornos y desafíos. Puedes visitar el castillo medieval, la ciudad futurista, la isla tropical, y más.
-
-
Interacción social: Puedes chatear con otros jugadores en tiempo real usando mensajes de voz o texto. También puedes hacer amigos, enviar regalos y unirte a clanes.
-
Sistema de recompensas: Puedes ganar monedas y gemas jugando minijuegos, completando tareas e iniciando sesión diariamente. Puedes usar estas monedas para comprar nuevos artículos para tu avatar o actualizar los existentes.
-
-
Cómo jugar Blockman Go Aventura
-
Jugar Blockman Go Adventure es fácil y divertido. Estos son los pasos a seguir:
-
-
Descargar e instalar el juego desde Google Play Store o App Store.
-
Crea una cuenta o inicia sesión con la existente.
-
Selecciona un mini-juego desde el lobby o crea tu propia habitación.
-
Invita a tus amigos o únete a otros jugadores en línea.
-
Disfruta del juego y chatea con otros jugadores.
-
-
¿Qué es Blockman Go Aventura Hack APK?
-
Blockman Go Aventura Hack APK es una versión modificada del juego original que le da acceso a recursos y características ilimitadas. No está disponible en las tiendas de aplicaciones oficiales, pero se puede descargar desde sitios web de terceros. Sin embargo, debe tener cuidado al descargar estos archivos, ya que pueden contener virus o malware que pueden dañar su dispositivo o robar su información personal.
-
Beneficios de Blockman Go Aventura Hack APK
Algunos de los beneficios de Blockman Go Aventura Hack APK son:
-
-
-
Cubos ilimitados: Puedes obtener cubos ilimitados, que son la moneda premium del juego. Puedes usar cubos para comprar artículos especiales, como membresía VIP, bolsas de la suerte y pieles exclusivas.
-
Monedas y gemas ilimitadas: También puedes obtener monedas y gemas ilimitadas, que son las monedas regulares del juego. Puedes usar monedas y gemas para comprar más atuendos, peinados, accesorios y pieles para tu avatar.
-
-
Libre y fácil de usar: Usted no necesita raíz o jailbreak su dispositivo para utilizar Blockman Go Aventura Hack APK. Solo tienes que descargar e instalar el archivo, y estás listo para ir. No necesitas pagar nada ni completar ninguna encuesta para usar el hack.
-
-
Cómo descargar e instalar Blockman Go Aventura Hack APK
-
Si desea probar Blockman Go Aventura Hack APK, es necesario seguir estos pasos:
-
-
Ir a un sitio web confiable que ofrece Blockman Go Aventura Hack APK, tales como [HackDL] o [APKPure].
-
Haga clic en el botón de descarga y espere a que se descargue el archivo.
-
Ir a la configuración de su dispositivo y permitir la instalación de aplicaciones de fuentes desconocidas.
-
Busque el archivo descargado y toque en él para iniciar el proceso de instalación.
-
Siga las instrucciones en la pantalla y espere a que se complete la instalación.
-
Iniciar el juego y disfrutar del hack.
-
-
Consejos y trucos para Blockman Go Aventura
-
Para hacer tu experiencia de juego más divertida y gratificante, aquí hay algunos consejos y trucos que puedes usar en Blockman Go Adventure:
-
Usa el menú mod para personalizar tu juego
-
Si usted está utilizando Blockman Go Aventura Hack APK, puede utilizar el menú mod para cambiar la configuración de juego de acuerdo a sus preferencias. Por ejemplo, puede aumentar su velocidad, saltar más alto, volar en el aire o volverse invisible. También puede deshabilitar algunas funciones que no le gustan, como anuncios, protección contra van o actualización automática. Sin embargo, debes tener cuidado al usar el menú mod, ya que algunos ajustes pueden causar fallas o errores en el juego. También debes evitar usarlo en salas públicas, ya que otros jugadores pueden reportarte por hacer trampa.
-
-
Únete a un clan y juega con amigos
Otra forma de disfrutar de Blockman Go Adventure es unirse a un clan y jugar con amigos. Un clan es un grupo de jugadores que comparten un interés o objetivo común en el juego. Puedes unirte a un clan existente o crear uno propio. Al unirte a un clan, puedes chatear con otros miembros, enviar regalos, participar en guerras de clanes y ganar puntos de clan. También puedes invitar a tus amigos a unirse a tu clan o jugar con ellos en habitaciones privadas. Jugar con amigos puede hacer que el juego sea más divertido y social.
-
Conclusión
-
Blockman Go Adventure es un gran juego para cualquiera que ame los juegos basados en bloques con mucha variedad y creatividad. Puedes jugar diferentes minijuegos, explorar diferentes mundos, personalizar tu avatar e interactuar con otros jugadores en línea. También puede utilizar Blockman Go Aventura Hack APK para obtener recursos ilimitados y características que pueden mejorar su experiencia de juego. Sin embargo, debe tener cuidado al descargar e instalar estos archivos, ya que pueden contener virus o malware que pueden dañar su dispositivo o robar su información personal. También debe utilizar el truco de forma responsable y no abusar de él en las salas públicas o contra otros jugadores.
-
Resumen de los puntos principales
En este artículo, hemos cubierto los siguientes puntos:
-
-
Blockman Go Adventure es un juego en línea gratuito que combina elementos de sandbox, aventura y juegos sociales.
Blockman Go Aventura Hack APK es una versión modificada del juego que le da cubos ilimitados, monedas, gemas y otros recursos.
-
Puede descargar e instalar Blockman Go Aventura Hack APK de sitios web de terceros, pero usted debe tener cuidado con los virus y el malware.
-
Puedes usar el menú mod para personalizar la configuración de tu juego, como velocidad, gravedad, invisibilidad y más.
-
Puedes recoger monedas y gemas para desbloquear nuevos objetos para tu avatar o actualizar los existentes.
-
-
-
Llamada a la acción
-
Si usted está interesado en jugar Blockman Go Aventura o Blockman Go Aventura Hack APK, puede descargarlos de los enlaces a continuación. También puede visitar el sitio web oficial o las páginas de redes sociales de Blockman GO Studio para obtener más información sobre sus juegos y actualizaciones. ¡Diviértete y disfruta de la aventura!
¿Por qué jugar Ultimate Car Driving Simulator en PC?
-
Si bien Ultimate Car Driving Simulator es un gran juego para jugar en su dispositivo móvil, es posible que se pregunte por qué debe jugar en su PC. Bueno, hay muchas razones para hacerlo, como:
-
-
Mejores gráficos y calidad de sonido: Jugando Ultimate Car Driving Simulator en PC le permitirá disfrutar de las impresionantes imágenes y efectos de sonido realistas del juego en alta resolución y pantalla completa. Usted será capaz de apreciar los detalles de los coches, los entornos, los efectos meteorológicos, etc. más claramente y sumergirse en el mundo del juego.
-
-
Pantalla más grande y más divertido: Jugar Ultimate Car Driving Simulator en PC también hará que su experiencia de juego sea más divertida y agradable. Puedes jugar el juego en una pantalla más grande y compartirlo con tus amigos y familiares. También puede grabar su juego, tomar capturas de pantalla, transmitir en línea, chatear con otros jugadores, etc. con facilidad.
-
-
Como puedes ver, jugar Ultimate Car Driving Simulator en PC tiene muchas ventajas que mejorarán tu experiencia de juego. Entonces, ¿cómo se puede descargar y jugar Ultimate Car Driving Simulator en PC? Hay dos métodos principales que explicaremos en las siguientes secciones.
-
Cómo jugar último coche conducción simulador en PC con Windows 11
-
Si tienes un PC con Windows 11, estás de suerte porque puedes usar la función nativa de emulación de Android que viene con el nuevo sistema operativo. Esta función le permite ejecutar aplicaciones y juegos de Android en su PC sin ningún software o hardware adicional. Estos son los pasos para jugar Ultimate Car Driving Simulator en PC con Windows 11:
-
-
Abra la aplicación de Microsoft Store en su PC con Windows 11 y busque Simulador de conducción de automóviles definitivo. Alternativamente, puedes usar este enlace para ir directamente a la página del juego.
-
Haga clic en el botón Instalar para descargar e instalar el juego en su PC. Es posible que necesite iniciar sesión con su cuenta de Microsoft si aún no lo ha hecho.
-
Inicie el juego desde el menú Inicio o el acceso directo del escritorio. Verá una ventana emergente que le pide que habilite las aplicaciones de Android en su PC. Haga clic en Activar.
-
Inicia sesión con tu cuenta de Google para acceder a los Servicios de Google Play y sincronizar tus datos de juego y logros. Puede usar una cuenta existente o crear una nueva.
-
-
-
¡Eso es todo! Has descargado y jugado con éxito Ultimate Car Driving Simulator en PC con la función de emulación nativa de Windows 11 para Android. Sin embargo, si no tiene un PC con Windows 11 o prefiere otro método, puede usar un emulador de Android para PC en su lugar.
-
Cómo jugar Ultimate Car Driving Simulator en PC con emuladores de Android
-
Un emulador de Android es un programa de software que simula un dispositivo Android en su PC. Le permite ejecutar aplicaciones y juegos de Android en su PC con características y funciones similares como un dispositivo Android real. Hay muchos emuladores de Android para PC disponibles en línea, pero no todos ellos son compatibles o optimizados para juegos. Por lo tanto, hemos seleccionado algunos de los mejores emuladores de Android para PC que puede utilizar para jugar Ultimate Car Driving Simulator en PC. Son:
-
-
-
Nombre
-
Descripción
-
Pros
-
Contras
-
-
-
Bluestacks
-
Un emulador de Android popular y potente para PC que ha sido diseñado para juegos. Tiene una interfaz fácil de usar y muchas características y opciones para mejorar su experiencia de juego
-
-
-
Soporta juegos de gama alta con gráficos y rendimiento altos
-
Ofrece una variedad de modos de juego, como Eco Mode, Multi-Instance, Macro Recorder, etc.
-
Tiene una tienda de aplicaciones incorporada y un centro de juegos con miles de juegos
-
Permite personalizar los controles, ajustes y preferencias del emulador y el juego
-
Tiene una gran y activa comunidad de usuarios y desarrolladores
-
-
-
-
-
Requiere un PC de gama alta con al menos 4GB de RAM y una GPU dedicada
-
Consume muchos recursos de CPU y memoria
-
Puede tener problemas de compatibilidad con algunos juegos o aplicaciones
-
Puede tener anuncios o ventanas emergentes que pueden ser molestos o intrusivos
-
Puede tener riesgos de seguridad o privacidad si no se descarga desde el sitio web oficial
-
-
-
-
-
-
Un emulador de Android rápido y suave para PC que también está diseñado para juegos. Tiene una interfaz simple e intuitiva y muchas características y opciones para mejorar tu experiencia de juego
-
-
-
Soporta la mayoría de los juegos con altos gráficos y rendimiento
-
Ofrece una variedad de modos de juego, como Control de teclado, Registro de guiones, Multi-Drive, etc.
-
Tiene una tienda de aplicaciones incorporada y un centro de juegos con miles de juegos
-
Permite personalizar los controles, ajustes y preferencias del emulador y el juego
-
Tiene una gran y activa comunidad de usuarios y desarrolladores
-
-
-
-
-
Requiere un PC de gama alta con al menos 2GB de RAM y una GPU dedicada
-
Consume muchos recursos de CPU y memoria
-
Puede tener problemas de compatibilidad con algunos juegos o aplicaciones
-
Puede tener anuncios o ventanas emergentes que pueden ser molestos o intrusivos
-
Puede tener riesgos de seguridad o privacidad si no se descarga desde el sitio web oficial
-
-
-
-
-
Gameloop
-
Un emulador de Android potente y optimizado para PC que está especialmente diseñado para juegos. Tiene una interfaz moderna y elegante y un montón de características y opciones para mejorar su experiencia de juego
-
-
-
Soporta la mayoría de los juegos con gráficos y rendimiento altos, especialmente juegos FPS y MOBA
-
Ofrece una variedad de modos de juego, como Modo Turbo, Modo Inteligente, Modo Esports, etc.
-
Tiene una tienda de aplicaciones incorporada y un centro de juegos con miles de juegos
-
Permite personalizar los controles, ajustes y preferencias del emulador y el juego
-
Tiene una gran y activa comunidad de usuarios y desarrolladores
-
-
-
Requiere un PC de gama alta con al menos 4GB de RAM y una GPU dedicada
-
Consume muchos recursos de CPU y memoria
-
Puede tener problemas de compatibilidad con algunos juegos o aplicaciones
-
Puede tener anuncios o ventanas emergentes que pueden ser molestos o intrusivos
-
-
-
-
-
-
Como puedes ver, cada emulador de Android para PC tiene sus propios pros y contras, y puedes elegir el que se adapte a tus necesidades y preferencias. Estos son los pasos para jugar Ultimate Car Driving Simulator en PC con cualquiera de estos emuladores de Android:
-
-
Descargar e instalar el emulador de Android de su elección desde su sitio web oficial. Asegúrate de tener suficiente espacio y recursos en tu PC para ejecutar el emulador sin problemas.
-
Inicie el emulador e inicie sesión con su cuenta de Google para acceder a la Google Play Store y sincronizar los datos y logros del juego. Puede usar una cuenta existente o crear una nueva.
-
Búsqueda de Ultimate Car Driving Simulator en la tienda de Google Play o la tienda de aplicaciones del emulador e instalarlo en su PC.
-
Iniciar el juego desde la pantalla de inicio del emulador o el acceso directo del escritorio. Puede utilizar el teclado y el ratón o un controlador para conducir su coche. También puedes ajustar la configuración del juego y el emulador según tu preferencia.
-
Disfruta jugando Ultimate Car Driving Simulator en PC con cualquiera de estos emuladores de Android para PC. También puede grabar su juego, tomar capturas de pantalla, transmitir en línea, chatear con otros jugadores, etc. con facilidad.
-
-
Conclusión
-
En este artículo, le hemos mostrado cómo descargar Ultimate Car Driving Simulator en PC y disfrutarlo en una pantalla más grande con mejores gráficos y controles. Hemos explicado dos métodos principales para jugar Ultimate Car Driving Simulator en PC: usando la función nativa de emulación de Android de Windows 11 o usando un emulador de Android para PC. Ambos métodos son fáciles y eficaces, y usted puede elegir el que funciona mejor para usted. Esperamos que haya encontrado este artículo útil e informativo, y le animamos a probar Ultimate Car Driving Simulator en PC hoy. ¡No te arrepentirás!
-
Preguntas frecuentes
-
-
A1: Sí, es gratis para descargar y jugar, pero contiene anuncios y compras en la aplicación que puede desactivar o comprar con dinero real.
-
-
Q2: ¿Cuáles son los requisitos mínimos para ejecutar Ultimate Car Driving Simulator en PC?
-
A2: depende del método que utilice, pero generalmente necesita un PC con Windows 10 o 11 con al menos 4 GB de RAM, un procesador Intel o AMD, una unidad de estado sólido con 10 GB de espacio libre y una GPU Intel UHD Graphics 630 o similar.
-
Q3: ¿Puedo jugar Ultimate Car Driving Simulator con un controlador o un teclado y ratón?
-
A3: Sí, puede usar cualquier dispositivo de entrada que sea compatible con su PC y el emulador que elija. También puede personalizar los controles según su preferencia.
-
Q4: ¿Puedo sincronizar mi progreso y mi biblioteca de juegos entre dispositivos?
-
A4: Sí, puede iniciar sesión con su cuenta de Google tanto en su dispositivo móvil como en su PC y acceder a sus datos y logros guardados. También puede cambiar entre dispositivos en cualquier momento sin perder su progreso.
-
Q5: ¿Cuáles son algunos consejos y trucos para mejorar mi juego en Ultimate Car Driving Simulator?
-
A5: Algunos consejos y trucos son:
-
Explora el mapa del mundo abierto y descubre diferentes terrenos, ciudades, desiertos, etc.
-
Personaliza tu coche con varias partes, vinilos, colores, etc. para que sea único y elegante.
-
Utilice la física de conducción realista para realizar acrobacias, derivas, saltos, etc. y ganar monedas y recompensas.
-
Actualizar el motor de su coche, suspensión, frenos, neumáticos, etc. para mejorar su rendimiento y velocidad.
-
Ponte a prueba con diferentes modos de juego, como carreras, offroad, tráfico, punto de control, etc.
-
64aa2da5cf
-
-
\ No newline at end of file
diff --git a/spaces/Benson/text-generation/Examples/Cocina Aire Freidora Recetas Apk.md b/spaces/Benson/text-generation/Examples/Cocina Aire Freidora Recetas Apk.md
deleted file mode 100644
index 6af2b0588c956097011375932d9aeeea6c741ea2..0000000000000000000000000000000000000000
--- a/spaces/Benson/text-generation/Examples/Cocina Aire Freidora Recetas Apk.md
+++ /dev/null
@@ -1,81 +0,0 @@
-
-
Cocina de aire freidora recetas Apk: Cómo cocinar deliciosas comidas con menos aceite
-
Si te gustan los alimentos fritos pero quieres reducir el aceite y las calorías, es posible que quieras probar una freidora. Una freidora de aire es un aparato de cocina que cocina alimentos circulando aire caliente a su alrededor, creando un exterior crujiente y dorado con un mínimo o ningún aceite. Es una gran manera de disfrutar de sus comidas favoritas sin sentirse culpable o comprometer el sabor.
Hay muchas razones por las que es posible que desee utilizar una freidora de aire en lugar de una freidora profunda o un horno. Estos son algunos de los beneficios de la fritura de aire:
-
-
Salud: La fritura de aire reduce la cantidad de grasa y calorías en los alimentos, así como los niveles de acrilamida, una sustancia química potencialmente dañina que se forma cuando los alimentos con almidón se cocinan a altas temperaturas. La fritura de aire también puede preservar algunos nutrientes que se pierden en otros métodos de cocción.
-
Conveniencia: La fritura de aire es rápida y fácil, ya que precalienta rápidamente y cocina los alimentos de manera uniforme. No necesitas usar mucho aceite o grasa, lo que significa menos desorden y una limpieza más fácil. Tampoco tiene que preocuparse por salpicaduras de aceite caliente o por prenderse fuego.
-
Versatilidad: Freír al aire libre puede cocinar una amplia variedad de alimentos, desde papas fritas congeladas y nuggets de pollo hasta verduras frescas y pescado. También puede hornear, asar, asar y deshidratar alimentos en una freidora. Incluso puede hacer postres como rosquillas, galletas y pasteles.
-
-
Cómo usar una freidora de aire
-
Para obtener los mejores resultados de tu freidora de aire, necesitas seguir algunos consejos y trucos. Estos son algunos de ellos:
-
-
Precalentar: La mayoría de las freidoras de aire necesitan precalentar durante unos minutos antes de agregar la comida. Esto asegura que su comida comience a cocinar de inmediato y se vuelva crujiente.
-
-
Agitar o voltear: Para ayudar a la comida crujiente, es necesario agitar o voltear a la mitad del tiempo de cocción. Esto evita que los alimentos se peguen a la cesta y garantiza un dorado uniforme.
-
Rocía ligeramente: Si quieres que tu comida tenga un color dorado y una textura crujiente, puedes rociarla ligeramente con aceite de cocina antes o durante la cocción. Esto también ayuda a evitar que la comida se seque. Sin embargo, no use demasiado aceite, ya que puede gotear en el cajón y causar humo.
-
-
Recetas de cocina de aire freidora Apk
-
Si usted está buscando un poco de inspiración para sus comidas de aire freidora, es posible que desee echa un vistazo a Cocina Aire Freidora Recetas Apk. Esta es una aplicación gratuita que ofrece cientos de recetas para freír al aire libre, desde aperitivos y aperitivos hasta platos principales y postres. Puedes navegar por categoría, cocina o ingrediente, o buscar recetas específicas. También puedes guardar tus recetas favoritas, calificarlas y compartirlas con tus amigos.
-
Para descargar Cocina Freidora Recetas Apk, es necesario seguir estos pasos:
-
-
Ir a [este enlace]( 1 ) en su dispositivo Android.
-
Toque en "Descargar APK" y esperar a que el archivo para descargar.
-
Abra el archivo y toque en "Instalar". Es posible que necesite permitir la instalación desde fuentes desconocidas en su configuración.
-
Una vez instalada la aplicación, ¡ábrela y disfruta!
-
-
Algunos ejemplos de recetas de la aplicación
-
Para darle una idea de lo que se puede cocinar con Cocina Freidora Recetas Apk, aquí hay algunos ejemplos de recetas de la aplicación:
-
-
-
Categoría
-
Receta
-
Tiempo de cocción
-
-
-
Aperitivos
-
Patatas fritas de freidora de aire
-
40 minutos
-
-
-
Aperitivos
-
Espárragos de freidora de aire
-
20 minutos
-
-
-
Platos principales
-
Chuletas de cerdo de freidora de aire
-
20 minutos
-
-
-
Platos principales
-
Pizza de freidora de aire
-
10 minutos
-
-
-
Postres
-
Aire freidora Mini pastel de chocolate oscuro
-
25 minutos
-
-
-
Postres
-
Cruasanes de queso crema de cereza con freidora de aire
-
15 minutos
-
-
-
Conclusión
-
Freír al aire es una forma maravillosa de cocinar comidas deliciosas con menos aceite y más sabor. Puedes hacer casi cualquier cosa en una freidora, desde bocadillos crujientes y carnes jugosas hasta verduras tiernas y postres decadentes. Con Kitchen Air Fryer Recipes Apk, se puede acceder a cientos de recetas de fritura de aire, todo de forma gratuita. Puede descargar la aplicación desde [este enlace]( 1 ) y comenzar a cocinar de inmediato. Si eres nuevo en el aire fritura o un profesional experimentado, usted encontrará algo para amar en esta aplicación. Pruébelo hoy y ver por ti mismo!
-
Preguntas frecuentes
-
Aquí hay algunas preguntas y respuestas comunes sobre fritura de aire y cocina Recetas de freidora Apk:
-
-
¿Qué tamaño de freidora de aire necesito? El tamaño de la freidora de aire depende de la cantidad de comida que desea cocinar a la vez y de cuánto espacio tiene en su cocina. Generalmente, una freidora de aire de 3 a 5 cuartos puede acomodar suficiente comida para dos a cuatro personas, mientras que una freidora de aire de 6 a 10 cuartos puede acomodar suficiente comida para cuatro a ocho personas.
-
¿Cuáles son algunas de las mejores marcas de freidoras de aire? Hay muchas marcas de freidoras de aire en el mercado, cada una con sus propias características y ventajas. Algunas de las marcas más populares y altamente calificadas son Philips, Ninja, Cosori, Instant Pot y Cuisinart.
-
¿Cómo limpio mi freidora de aire? Para limpiar tu freidora de aire, necesitas desenchufarla y dejar que se enfríe completamente. Luego, puede retirar la cesta y el cajón y lavarlos con agua tibia y jabón o en el lavavajillas. Puede limpiar el interior y el exterior de la freidora de aire con un paño húmedo o una esponja. También puede utilizar un cepillo suave o un palillo de dientes para eliminar cualquier residuo de comida del elemento calefactor.
-
-
¿Puedo enviar mis propias recetas a Cocina Freidora Recetas Apk? Sí, puede enviar sus propias recetas a Cocina Freidora Recetas Apk mediante el botón "Enviar receta" en la aplicación. También puedes calificar y revisar otras recetas, así como compartirlas con tus amigos en las redes sociales.
-
- 64aa2da5cf
-
-
\ No newline at end of file
diff --git a/spaces/Benson/text-generation/Examples/Descargar Amantes Y Mejores Amigos Azana.md b/spaces/Benson/text-generation/Examples/Descargar Amantes Y Mejores Amigos Azana.md
deleted file mode 100644
index 74af737506531e61d30d55b72a2b51f346f22f64..0000000000000000000000000000000000000000
--- a/spaces/Benson/text-generation/Examples/Descargar Amantes Y Mejores Amigos Azana.md
+++ /dev/null
@@ -1,68 +0,0 @@
-
-
Descargar Amantes y Mejores Amigos por Azana
-
Si usted está buscando una canción conmovedora y romántica para añadir a su lista de reproducción, es posible que desee echa un vistazo a "Amantes y mejores amigos" por Azana. Azana es una cantante y compositora sudafricana que ha cautivado a muchos oyentes con su mezcla de afro-pop, afro-house vocal y música soul. "Lovers and Best Friends" es una de sus canciones populares de su álbum debut Ingoma, que fue lanzado en 2020. La canción cuenta con Disciples of House, un dúo de talentosos productores que han trabajado con muchos artistas sudafricanos.
En este artículo, te contaremos más sobre Azana, su carrera musical, y el significado y mensaje de "Amantes y Mejores Amigos". También le mostraremos cómo descargar la canción legalmente y apoyar al artista. Si eres fan de Azana o simplemente tienes curiosidad por su música, sigue leyendo para saber más.
-
Biografía y carrera musical de Azana
-
El verdadero nombre de Azana es Makhosazana Masongo. Nació el 13 de septiembre de 2000, en Chesterville, Durban. Actualmente estudia derecho en la Universidad del Estado Libre. Descubrió su pasión por la música a una edad temprana y comenzó a cantar en los coros de la escuela y la iglesia. También admiraba a artistas como Beyoncé, Nina Simone, Camagwini, Simphiwe Dana y Letta Mbulu.
-
Su carrera musical despegó cuando firmó un contrato discográfico con Big City Dreams en 2019. Lanzó su primer single "Your Love" en mayo de 2020, que fue producido por Taffy Da Don. La canción fue un gran éxito y fue certificada doble platino por la Industria Discográfica de Sudáfrica (RiSA). Su álbum debut Ingoma siguió en julio de 2020. El álbum alcanzó el número uno en Apple Music Pop Chart y contó con artistas como Afriikan Papi, Disciples of House y Sun-El Musician.
-
-
Azana ha recibido reconocimiento y aclamación por su música. Fue nominada al Mejor Álbum de Pop Afro y Recién Llegado del Año en el 27º South African Music Awards (SAMAs) en 2021. También ganó el premio a la Mejor Artista Femenina en los Mzansi Kwaito & House Music Awards (MKHMA) en 2021.
-
-
El significado y mensaje de "Amantes y Mejores Amigos"
-
"Amantes y Mejores Amigos" es una canción hermosa y sincera que celebra el vínculo entre dos personas que no solo son amantes sino también mejores amigos. La canción expresa la alegría y la gratitud de encontrar a alguien que te entiende, te apoya y te aprecia. La canción también reconoce los desafíos y luchas que vienen con cualquier relación, pero afirma el compromiso y la lealtad de los socios.
-
La letra de la canción es simple pero potente. Azana canta tanto en inglés como en zulú, creando un contraste y armonía entre los idiomas. Canta en el estribillo: "Tú eres mi amante y mi mejor amigo/ Tú eres mi todo/ Te amo más de lo que las palabras pueden decir/ Tú eres mi amante y mi mejor amigo/ Tú eres mi todo/ Nunca te dejaré ir". She also sings in Zulu: "Ngifuna wena wedwa/ Ngifuna wena wedwa/ Ngifuna wena wedwa/ Ngifuna wena wedwa" which means "I want you only/ I want you only/ I want you only/ I want you only".
-
La producción y el género de la canción están influenciados por Afro-house, un subgénero de música house que se originó en Sudáfrica. La canción tiene un ritmo pegadizo y optimista, con una mezcla de ritmos electrónicos, acordes de piano y percusión. La canción también cuenta con las voces de Disciples of House, que añaden una capa de armonía y profundidad a la canción. La canción es adecuada para bailar, relajarse o simplemente disfrutar de la música.
-
-
Las mejores maneras de descargar y transmitir "Amantes y mejores amigos"
-
Si quieres descargar o transmitir "Lovers and Best Friends" de Azana, tienes muchas opciones para elegir. La canción está disponible en varias plataformas y servicios que ofrecen formas legales y éticas para acceder a la música. Estas son algunas de las mejores maneras de descargar o transmitir la canción:
-
-
-
Plataforma o servicio
-
Características y beneficios
-
-
-
Música de Apple
-
- Ofrece descargas ilimitadas y transmisiones de más de 75 millones de canciones, incluyendo "Amantes y Mejores Amigos" por Azana. - Soporta la escucha sin conexión en múltiples dispositivos. - Proporciona recomendaciones personalizadas, listas de reproducción, estaciones de radio y podcasts. - Cuesta $9.99 por mes para los individuos, $14.99 por mes para las familias, o $4.99 por mes para los estudiantes. - Ofrece una prueba gratuita durante tres meses.
-
-
-
Spotify
-
- Ofrece transmisiones ilimitadas de más de 70 millones de canciones, incluyendo "Amantes y mejores amigos" por Azana. - Permite descargas de hasta 10.000 canciones por dispositivo para usuarios premium. - Proporciona recomendaciones personalizadas, listas de reproducción, estaciones de radio, podcasts y videos. - Cuesta $9.99 por mes para los individuos, $14.99 por mes para las familias, o $4.99 por mes para los estudiantes. - Ofrece una versión gratuita con anuncios y características limitadas.
-
-
-
Música de YouTube
-
- Ofrece transmisiones ilimitadas de más de 60 millones de canciones, incluyendo "Amantes y mejores amigos" por Azana. - Permite descargas de hasta 100.000 canciones por dispositivo para usuarios premium. - Proporciona recomendaciones personalizadas, listas de reproducción, estaciones de radio, podcasts y videos. - Cuesta $9.99 por mes para individuos o $14.99 por mes para familias. - Ofrece una versión gratuita con anuncios y características limitadas.
-
-
-
Deezer
-
-
-
-
Como puedes ver, hay muchos beneficios de descargar o transmitir "Amantes y Mejores Amigos" por Azana legal y éticamente. Usted puede disfrutar de la canción en alta calidad, apoyar al artista y la industria de la música, y descubrir más música que le gustaría. También puede evitar los riesgos de descarga ilegal, como virus, malware, demandas o multas.
-
Sin embargo, si prefieres no descargar o transmitir la canción, también puedes comprar el CD o vinilo de Ingoma by Azana, que incluye "Lovers and Best Friends" y otras canciones. Puede encontrar el CD o vinilo en línea o en tiendas físicas. Comprar el CD o vinilo también puede darte una copia física de las ilustraciones, letras y créditos del álbum. También puedes apoyar al artista comprando su mercancía, como camisetas, sudaderas, gorras o carteles.
-
Conclusión
-
En conclusión, "Lovers and Best Friends" de Azana es una maravillosa canción que celebra el amor y la amistad entre dos personas. Azana es una talentosa y prometedora cantante y compositora que ha impresionado a muchos fans y críticos con su álbum debut Ingoma. También ha colaborado con muchos otros artistas, como Sun-El Musician y Disciples of House. Si quieres descargar o transmitir "Lovers and Best Friends" de Azana, tienes muchas opciones para elegir. Puedes usar plataformas o servicios como Apple Music, Spotify, YouTube Music o Deezer. También puede comprar el CD o vinilo de Ingoma por Azana o su mercancía. Al hacerlo, puedes apoyar al artista y a la industria de la música, y disfrutar de la canción en alta calidad.
-
Esperamos que hayas disfrutado este artículo y hayas aprendido algo nuevo sobre Azana y su música. Si te ha gustado "Lovers and Best Friends" de Azana, puede que también te gusten otras canciones de ella o de artistas similares. Algunas de nuestras recomendaciones son:
-
-
"Uhuru" de Sun-El Músico feat. Azana
-
"Mamela" de Mi Casa feat. Azana
-
"Uzobuya" de Sun-El Músico feat. Azana
-
"Tu amor" por Azana
-
-
"Okhokho Bethu" de Vico Da Sporo feat. Azana
-
"Jerusalema" por Master KG feat. Nomcebo Zikode
-
"Busca tu vida" por Prince Kaybee feat. Msaki
-
"Banomoya" de Prince Kaybee feat. Busiswa y TNS
-
"Drive" de Black Coffee feat. David Guetta y Delilah Montagu
-
-
Preguntas frecuentes
-
Aquí hay algunas preguntas y respuestas frecuentes relacionadas con el tema:
-
-
¿Cuál es el género de "Amantes y Mejores Amigos" de Azana? El género de "Amantes y Mejores Amigos" de Azana es Afro-house, un subgénero de música house que se originó en Sudáfrica.
-
¿Quiénes son los artistas destacados en "Lovers and Best Friends" de Azana? Los artistas destacados en "Lovers and Best Friends" de Azana son Disciples of House, un dúo de productores que han trabajado con muchos artistas sudafricanos.
-
¿Cuándo se lanzó "Lovers and Best Friends" de Azana? "Lovers and Best Friends" de Azana fue lanzado el 17 de julio de 2020, como parte de su álbum debut Ingoma.
-
¿Cómo puedo descargar o transmitir "Amantes y mejores amigos" por Azana legal y éticamente? Puedes descargar o transmitir "Amantes y Mejores Amigos" por Azana legal y éticamente usando plataformas o servicios como Apple Music, Spotify, YouTube Music o Deezer. También puedes comprar el CD o vinilo de Ingoma de Azana o su mercancía.
-
¿Cuáles son algunas otras canciones de Azana o artistas similares que me podrían gustar? Algunas otras canciones de Azana o artistas similares que te pueden gustar son: "Uhuru" de Sun-El Musician feat. Azana, "Mamela" de Mi Casa feat. Azana, "Uzobuya" de Sun-El Musician feat. Azana, "Your Love" de Azana, "Ngize Ngifike" de Sun-El Musician feat. Azana, "Okhokho Bethu" de Vico Da Sporo feat. Azana, "Jerusalema" de Master KG feat. Nomce bo Zikode, "Fetch Your Life" de Prince Kaybee feat. Msaki, "Banomoya" de Prince Kaybee feat. Busiswa y TNS, y "Drive" de Black Coffee feat. David Guetta y Delilah Montagu.
- 64aa2da5cf
-
-
\ No newline at end of file
diff --git a/spaces/Bishan/Speech_To_Text_Hindi/app.py b/spaces/Bishan/Speech_To_Text_Hindi/app.py
deleted file mode 100644
index 6945c6b95473e6078cc449e477d871d16c9c2244..0000000000000000000000000000000000000000
--- a/spaces/Bishan/Speech_To_Text_Hindi/app.py
+++ /dev/null
@@ -1,83 +0,0 @@
-import soundfile as sf
-import torch
-from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor,Wav2Vec2ProcessorWithLM
-import gradio as gr
-import sox
-import subprocess
-import time
-
-
-def read_file_and_process(wav_file):
- filename = wav_file.split('.')[0]
- filename_16k = filename + "16k.wav"
- resampler(wav_file, filename_16k)
- speech, _ = sf.read(filename_16k)
- print("---------------------------------------------------------")
- print(speech)
- inputs = processor(speech, sampling_rate=16_000, return_tensors="pt", padding=True)
- print("---------------------------------------------------------")
- print(inputs)
-
- return inputs
-
-
-def resampler(input_file_path, output_file_path):
- command = (
- f"ffmpeg -hide_banner -loglevel panic -i {input_file_path} -ar 16000 -ac 1 -bits_per_raw_sample 16 -vn "
- f"{output_file_path}"
- )
- subprocess.call(command, shell=True)
-
-
-def parse_transcription_with_lm(logits):
- result = processor_with_LM.batch_decode(logits.cpu().numpy())
- text = result.text
- transcription = text[0].replace('','')
- return transcription
-
-def parse_transcription(logits):
- predicted_ids = torch.argmax(logits, dim=-1)
- transcription = processor.decode(predicted_ids[0], skip_special_tokens=True)
- return transcription
-
-def parse(wav_file, applyLM):
-
- # record start time
- start = time.time()
- input_values = read_file_and_process(wav_file)
- with torch.no_grad():
- logits = model(**input_values).logits
-
- # if applyLM:
- # return parse_transcription_with_lm(logits)
- # else:
- # return parse_transcription(logits)
-
- output = parse_transcription(logits)
- # record end time
- end = time.time()
- print("------------------------------------------------------------------------------------------")
- print("The time of execution of above program is :",(end-start) * 10**3, "ms")
- # total time taken
- print("Execution time of the program is- ", end-start)
- print("------------------------------------------------------------------------------------------")
- return output
-
-
-model_id = "Harveenchadha/vakyansh-wav2vec2-hindi-him-4200"
-processor = Wav2Vec2Processor.from_pretrained(model_id)
-processor_with_LM = Wav2Vec2ProcessorWithLM.from_pretrained(model_id)
-model = Wav2Vec2ForCTC.from_pretrained(model_id)
-
-
-input_ = gr.Audio(source="upload", type="filepath")
-txtbox = gr.Textbox(
- label="Output from model will appear here:",
- lines=5
- )
-chkbox = gr.Checkbox(label="Apply LM", value=False)
-
-
-gr.Interface(parse, inputs = [input_, chkbox], outputs=txtbox,
- streaming=True, interactive=True,
- analytics_enabled=False, show_tips=False, enable_queue=True).launch(inline=False);
\ No newline at end of file
diff --git a/spaces/Buatong/Computing/app.py b/spaces/Buatong/Computing/app.py
deleted file mode 100644
index a699bc5b3c2e987102ca93e0ee28d601e0a93d02..0000000000000000000000000000000000000000
--- a/spaces/Buatong/Computing/app.py
+++ /dev/null
@@ -1,7 +0,0 @@
-import gradio as gr
-
-def greet(name):
- return "Hello " + name + "!!"
-
-iface = gr.Interface(fn=greet, inputs="text", outputs="text")
-iface.launch()
\ No newline at end of file
diff --git a/spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/projects/DensePose/tests/test_model_e2e.py b/spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/projects/DensePose/tests/test_model_e2e.py
deleted file mode 100644
index eed131080547d84185c1d33913014a2c977b119f..0000000000000000000000000000000000000000
--- a/spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/projects/DensePose/tests/test_model_e2e.py
+++ /dev/null
@@ -1,43 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved.
-
-import unittest
-import torch
-
-from detectron2.structures import BitMasks, Boxes, Instances
-
-from .common import get_model
-
-
-# TODO(plabatut): Modularize detectron2 tests and re-use
-def make_model_inputs(image, instances=None):
- if instances is None:
- return {"image": image}
-
- return {"image": image, "instances": instances}
-
-
-def make_empty_instances(h, w):
- instances = Instances((h, w))
- instances.gt_boxes = Boxes(torch.rand(0, 4))
- instances.gt_classes = torch.tensor([]).to(dtype=torch.int64)
- instances.gt_masks = BitMasks(torch.rand(0, h, w))
- return instances
-
-
-class ModelE2ETest(unittest.TestCase):
- CONFIG_PATH = ""
-
- def setUp(self):
- self.model = get_model(self.CONFIG_PATH)
-
- def _test_eval(self, sizes):
- inputs = [make_model_inputs(torch.rand(3, size[0], size[1])) for size in sizes]
- self.model.eval()
- self.model(inputs)
-
-
-class DensePoseRCNNE2ETest(ModelE2ETest):
- CONFIG_PATH = "densepose_rcnn_R_101_FPN_s1x.yaml"
-
- def test_empty_data(self):
- self._test_eval([(200, 250), (200, 249)])
diff --git a/spaces/CVPR/LIVE/pybind11/include/pybind11/options.h b/spaces/CVPR/LIVE/pybind11/include/pybind11/options.h
deleted file mode 100644
index d74db1c68dddb3436cc0fb2674a6ef32ac77d5fd..0000000000000000000000000000000000000000
--- a/spaces/CVPR/LIVE/pybind11/include/pybind11/options.h
+++ /dev/null
@@ -1,65 +0,0 @@
-/*
- pybind11/options.h: global settings that are configurable at runtime.
-
- Copyright (c) 2016 Wenzel Jakob
-
- All rights reserved. Use of this source code is governed by a
- BSD-style license that can be found in the LICENSE file.
-*/
-
-#pragma once
-
-#include "detail/common.h"
-
-PYBIND11_NAMESPACE_BEGIN(PYBIND11_NAMESPACE)
-
-class options {
-public:
-
- // Default RAII constructor, which leaves settings as they currently are.
- options() : previous_state(global_state()) {}
-
- // Class is non-copyable.
- options(const options&) = delete;
- options& operator=(const options&) = delete;
-
- // Destructor, which restores settings that were in effect before.
- ~options() {
- global_state() = previous_state;
- }
-
- // Setter methods (affect the global state):
-
- options& disable_user_defined_docstrings() & { global_state().show_user_defined_docstrings = false; return *this; }
-
- options& enable_user_defined_docstrings() & { global_state().show_user_defined_docstrings = true; return *this; }
-
- options& disable_function_signatures() & { global_state().show_function_signatures = false; return *this; }
-
- options& enable_function_signatures() & { global_state().show_function_signatures = true; return *this; }
-
- // Getter methods (return the global state):
-
- static bool show_user_defined_docstrings() { return global_state().show_user_defined_docstrings; }
-
- static bool show_function_signatures() { return global_state().show_function_signatures; }
-
- // This type is not meant to be allocated on the heap.
- void* operator new(size_t) = delete;
-
-private:
-
- struct state {
- bool show_user_defined_docstrings = true; //< Include user-supplied texts in docstrings.
- bool show_function_signatures = true; //< Include auto-generated function signatures in docstrings.
- };
-
- static state &global_state() {
- static state instance;
- return instance;
- }
-
- state previous_state;
-};
-
-PYBIND11_NAMESPACE_END(PYBIND11_NAMESPACE)
diff --git a/spaces/CVPR/LIVE/pybind11/tests/test_call_policies.cpp b/spaces/CVPR/LIVE/pybind11/tests/test_call_policies.cpp
deleted file mode 100644
index 26c83f81b0ed370365d48279a4b8f3d4d23b5487..0000000000000000000000000000000000000000
--- a/spaces/CVPR/LIVE/pybind11/tests/test_call_policies.cpp
+++ /dev/null
@@ -1,101 +0,0 @@
-/*
- tests/test_call_policies.cpp -- keep_alive and call_guard
-
- Copyright (c) 2016 Wenzel Jakob
-
- All rights reserved. Use of this source code is governed by a
- BSD-style license that can be found in the LICENSE file.
-*/
-
-#include "pybind11_tests.h"
-
-struct CustomGuard {
- static bool enabled;
-
- CustomGuard() { enabled = true; }
- ~CustomGuard() { enabled = false; }
-
- static const char *report_status() { return enabled ? "guarded" : "unguarded"; }
-};
-bool CustomGuard::enabled = false;
-
-struct DependentGuard {
- static bool enabled;
-
- DependentGuard() { enabled = CustomGuard::enabled; }
- ~DependentGuard() { enabled = false; }
-
- static const char *report_status() { return enabled ? "guarded" : "unguarded"; }
-};
-bool DependentGuard::enabled = false;
-
-TEST_SUBMODULE(call_policies, m) {
- // Parent/Child are used in:
- // test_keep_alive_argument, test_keep_alive_return_value, test_alive_gc_derived,
- // test_alive_gc_multi_derived, test_return_none, test_keep_alive_constructor
- class Child {
- public:
- Child() { py::print("Allocating child."); }
- Child(const Child &) = default;
- Child(Child &&) = default;
- ~Child() { py::print("Releasing child."); }
- };
- py::class_(m, "Child")
- .def(py::init<>());
-
- class Parent {
- public:
- Parent() { py::print("Allocating parent."); }
- Parent(const Parent& parent) = default;
- ~Parent() { py::print("Releasing parent."); }
- void addChild(Child *) { }
- Child *returnChild() { return new Child(); }
- Child *returnNullChild() { return nullptr; }
- };
- py::class_(m, "Parent")
- .def(py::init<>())
- .def(py::init([](Child *) { return new Parent(); }), py::keep_alive<1, 2>())
- .def("addChild", &Parent::addChild)
- .def("addChildKeepAlive", &Parent::addChild, py::keep_alive<1, 2>())
- .def("returnChild", &Parent::returnChild)
- .def("returnChildKeepAlive", &Parent::returnChild, py::keep_alive<1, 0>())
- .def("returnNullChildKeepAliveChild", &Parent::returnNullChild, py::keep_alive<1, 0>())
- .def("returnNullChildKeepAliveParent", &Parent::returnNullChild, py::keep_alive<0, 1>());
-
-#if !defined(PYPY_VERSION)
- // test_alive_gc
- class ParentGC : public Parent {
- public:
- using Parent::Parent;
- };
- py::class_(m, "ParentGC", py::dynamic_attr())
- .def(py::init<>());
-#endif
-
- // test_call_guard
- m.def("unguarded_call", &CustomGuard::report_status);
- m.def("guarded_call", &CustomGuard::report_status, py::call_guard());
-
- m.def("multiple_guards_correct_order", []() {
- return CustomGuard::report_status() + std::string(" & ") + DependentGuard::report_status();
- }, py::call_guard());
-
- m.def("multiple_guards_wrong_order", []() {
- return DependentGuard::report_status() + std::string(" & ") + CustomGuard::report_status();
- }, py::call_guard());
-
-#if defined(WITH_THREAD) && !defined(PYPY_VERSION)
- // `py::call_guard()` should work in PyPy as well,
- // but it's unclear how to test it without `PyGILState_GetThisThreadState`.
- auto report_gil_status = []() {
- auto is_gil_held = false;
- if (auto tstate = py::detail::get_thread_state_unchecked())
- is_gil_held = (tstate == PyGILState_GetThisThreadState());
-
- return is_gil_held ? "GIL held" : "GIL released";
- };
-
- m.def("with_gil", report_gil_status);
- m.def("without_gil", report_gil_status, py::call_guard());
-#endif
-}
diff --git a/spaces/CVPR/LIVE/thrust/thrust/advance.h b/spaces/CVPR/LIVE/thrust/thrust/advance.h
deleted file mode 100644
index d077e04345daea987044eab83a9e722ca956f19a..0000000000000000000000000000000000000000
--- a/spaces/CVPR/LIVE/thrust/thrust/advance.h
+++ /dev/null
@@ -1,141 +0,0 @@
-/*
- * Copyright 2008-2013 NVIDIA Corporation
- *
- * Licensed under the Apache License, Version 2.0 (the "License");
- * you may not use this file except in compliance with the License.
- * You may obtain a copy of the License at
- *
- * http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-
-/*! \file advance.h
- * \brief Advance an iterator by a given distance.
- */
-
-#pragma once
-
-#include
-
-namespace thrust
-{
-
-/*! \addtogroup iterators
- * \{
- */
-
-/*! \p advance(i, n) increments the iterator \p i by the distance \p n.
- * If n > 0 it is equivalent to executing ++i \p n
- * times, and if n < 0 it is equivalent to executing --i
- * \p n times. If n == 0, the call has no effect.
- *
- * \param i The iterator to be advanced.
- * \param n The distance by which to advance the iterator.
- *
- * \tparam InputIterator is a model of Input Iterator.
- * \tparam Distance is an integral type that is convertible to \p InputIterator's distance type.
- *
- * \pre \p n shall be negative only for bidirectional and random access iterators.
- *
- * The following code snippet demonstrates how to use \p advance to increment
- * an iterator a given number of times.
- *
- * \code
- * #include
- * #include
- * ...
- * thrust::device_vector vec(13);
- * thrust::device_vector::iterator iter = vec.begin();
- *
- * thrust::advance(iter, 7);
- *
- * // iter - vec.begin() == 7
- * \endcode
- *
- * \see http://www.sgi.com/tech/stl/advance.html
- */
-template
-__host__ __device__
-void advance(InputIterator& i, Distance n);
-
-/*! \p next(i, n) returns the \p n th successor of the iterator \p i.
- *
- * \param i An iterator.
- * \param n The number of elements to advance.
- *
- * \tparam InputIterator must meet the InputIterator.
- *
- * \pre \p n shall be negative only for bidirectional and random access iterators.
- *
- * The following code snippet demonstrates how to use \p next.
- *
- * \code
- * #include
- * #include
- * ...
- * thrust::device_vector vec(13);
- * thrust::device_vector::iterator i0 = vec.begin();
- *
- * auto i1 = thrust::next(i0);
- *
- * // i0 - vec.begin() == 0
- * // i1 - vec.begin() == 1
- * \endcode
- *
- * \see https://en.cppreference.com/w/cpp/iterator/next
- */
-#if 0 // Doxygen only
-template
-__host__ __device__
-InputIterator next(
- InputIterator i
-, typename iterator_traits::difference_type n = 1
-);
-#endif
-
-/*! \p prev(i, n) returns the \p n th predecessor of the iterator \p i.
- *
- * \param i An iterator.
- * \param n The number of elements to descend.
- *
- * \tparam BidirectionalIterator must meet the BidirectionalIterator.
- *
- * The following code snippet demonstrates how to use \p prev.
- *
- * \code
- * #include
- * #include
- * ...
- * thrust::device_vector vec(13);
- * thrust::device_vector::iterator i0 = vec.end();
- *
- * auto i1 = thrust::prev(i0);
- *
- * // vec.end() - i0 == 0
- * // vec.end() - i1 == 1
- * \endcode
- *
- * \see https://en.cppreference.com/w/cpp/iterator/prev
- */
-#if 0 // Doxygen only
-template
-__host__ __device__
-BidirectionalIterator prev(
- BidirectionalIterator i
-, typename iterator_traits::difference_type n = 1
-);
-#endif
-
-/*! \} // end iterators
- */
-
-} // end thrust
-
-#include
-
diff --git a/spaces/CVPR/LIVE/thrust/thrust/system/detail/generic/adjacent_difference.h b/spaces/CVPR/LIVE/thrust/thrust/system/detail/generic/adjacent_difference.h
deleted file mode 100644
index 6e4caaa88b904788d3a7e026bf487c01f74348e2..0000000000000000000000000000000000000000
--- a/spaces/CVPR/LIVE/thrust/thrust/system/detail/generic/adjacent_difference.h
+++ /dev/null
@@ -1,58 +0,0 @@
-/*
- * Copyright 2008-2013 NVIDIA Corporation
- *
- * Licensed under the Apache License, Version 2.0 (the "License");
- * you may not use this file except in compliance with the License.
- * You may obtain a copy of the License at
- *
- * http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-
-/*! \file adjacent_difference.h
- * \brief Generic implementation of adjacent_difference.
- */
-
-#pragma once
-
-#include
-#include
-
-namespace thrust
-{
-namespace system
-{
-namespace detail
-{
-namespace generic
-{
-
-
-template
-__host__ __device__
-OutputIterator adjacent_difference(thrust::execution_policy &exec,
- InputIterator first, InputIterator last,
- OutputIterator result);
-
-
-template
-__host__ __device__
-OutputIterator adjacent_difference(thrust::execution_policy &exec,
- InputIterator first, InputIterator last,
- OutputIterator result,
- BinaryFunction binary_op);
-
-
-} // end namespace generic
-} // end namespace detail
-} // end namespace system
-} // end namespace thrust
-
-#include
-
diff --git a/spaces/CVPR/regionclip-demo/detectron2/modeling/proposal_generator/rrpn.py b/spaces/CVPR/regionclip-demo/detectron2/modeling/proposal_generator/rrpn.py
deleted file mode 100644
index 6ee4d8fd70430c5242cc02a1df8400493ffd75b7..0000000000000000000000000000000000000000
--- a/spaces/CVPR/regionclip-demo/detectron2/modeling/proposal_generator/rrpn.py
+++ /dev/null
@@ -1,203 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-import itertools
-import logging
-from typing import Dict, List
-import torch
-
-from detectron2.config import configurable
-from detectron2.layers import ShapeSpec, batched_nms_rotated, cat
-from detectron2.structures import Instances, RotatedBoxes, pairwise_iou_rotated
-from detectron2.utils.memory import retry_if_cuda_oom
-
-from ..box_regression import Box2BoxTransformRotated
-from .build import PROPOSAL_GENERATOR_REGISTRY
-from .rpn import RPN
-
-logger = logging.getLogger(__name__)
-
-
-def find_top_rrpn_proposals(
- proposals,
- pred_objectness_logits,
- image_sizes,
- nms_thresh,
- pre_nms_topk,
- post_nms_topk,
- min_box_size,
- training,
-):
- """
- For each feature map, select the `pre_nms_topk` highest scoring proposals,
- apply NMS, clip proposals, and remove small boxes. Return the `post_nms_topk`
- highest scoring proposals among all the feature maps if `training` is True,
- otherwise, returns the highest `post_nms_topk` scoring proposals for each
- feature map.
-
- Args:
- proposals (list[Tensor]): A list of L tensors. Tensor i has shape (N, Hi*Wi*A, 5).
- All proposal predictions on the feature maps.
- pred_objectness_logits (list[Tensor]): A list of L tensors. Tensor i has shape (N, Hi*Wi*A).
- image_sizes (list[tuple]): sizes (h, w) for each image
- nms_thresh (float): IoU threshold to use for NMS
- pre_nms_topk (int): number of top k scoring proposals to keep before applying NMS.
- When RRPN is run on multiple feature maps (as in FPN) this number is per
- feature map.
- post_nms_topk (int): number of top k scoring proposals to keep after applying NMS.
- When RRPN is run on multiple feature maps (as in FPN) this number is total,
- over all feature maps.
- min_box_size(float): minimum proposal box side length in pixels (absolute units wrt
- input images).
- training (bool): True if proposals are to be used in training, otherwise False.
- This arg exists only to support a legacy bug; look for the "NB: Legacy bug ..."
- comment.
-
- Returns:
- proposals (list[Instances]): list of N Instances. The i-th Instances
- stores post_nms_topk object proposals for image i.
- """
- num_images = len(image_sizes)
- device = proposals[0].device
-
- # 1. Select top-k anchor for every level and every image
- topk_scores = [] # #lvl Tensor, each of shape N x topk
- topk_proposals = []
- level_ids = [] # #lvl Tensor, each of shape (topk,)
- batch_idx = torch.arange(num_images, device=device)
- for level_id, proposals_i, logits_i in zip(
- itertools.count(), proposals, pred_objectness_logits
- ):
- Hi_Wi_A = logits_i.shape[1]
- num_proposals_i = min(pre_nms_topk, Hi_Wi_A)
-
- # sort is faster than topk (https://github.com/pytorch/pytorch/issues/22812)
- # topk_scores_i, topk_idx = logits_i.topk(num_proposals_i, dim=1)
- logits_i, idx = logits_i.sort(descending=True, dim=1)
- topk_scores_i = logits_i[batch_idx, :num_proposals_i]
- topk_idx = idx[batch_idx, :num_proposals_i]
-
- # each is N x topk
- topk_proposals_i = proposals_i[batch_idx[:, None], topk_idx] # N x topk x 5
-
- topk_proposals.append(topk_proposals_i)
- topk_scores.append(topk_scores_i)
- level_ids.append(torch.full((num_proposals_i,), level_id, dtype=torch.int64, device=device))
-
- # 2. Concat all levels together
- topk_scores = cat(topk_scores, dim=1)
- topk_proposals = cat(topk_proposals, dim=1)
- level_ids = cat(level_ids, dim=0)
-
- # 3. For each image, run a per-level NMS, and choose topk results.
- results = []
- for n, image_size in enumerate(image_sizes):
- boxes = RotatedBoxes(topk_proposals[n])
- scores_per_img = topk_scores[n]
- valid_mask = torch.isfinite(boxes.tensor).all(dim=1) & torch.isfinite(scores_per_img)
- if not valid_mask.all():
- boxes = boxes[valid_mask]
- scores_per_img = scores_per_img[valid_mask]
- boxes.clip(image_size)
-
- # filter empty boxes
- keep = boxes.nonempty(threshold=min_box_size)
- lvl = level_ids
- if keep.sum().item() != len(boxes):
- boxes, scores_per_img, lvl = (boxes[keep], scores_per_img[keep], level_ids[keep])
-
- keep = batched_nms_rotated(boxes.tensor, scores_per_img, lvl, nms_thresh)
- # In Detectron1, there was different behavior during training vs. testing.
- # (https://github.com/facebookresearch/Detectron/issues/459)
- # During training, topk is over the proposals from *all* images in the training batch.
- # During testing, it is over the proposals for each image separately.
- # As a result, the training behavior becomes batch-dependent,
- # and the configuration "POST_NMS_TOPK_TRAIN" end up relying on the batch size.
- # This bug is addressed in Detectron2 to make the behavior independent of batch size.
- keep = keep[:post_nms_topk]
-
- res = Instances(image_size)
- res.proposal_boxes = boxes[keep]
- res.objectness_logits = scores_per_img[keep]
- results.append(res)
- return results
-
-
-@PROPOSAL_GENERATOR_REGISTRY.register()
-class RRPN(RPN):
- """
- Rotated Region Proposal Network described in :paper:`RRPN`.
- """
-
- @configurable
- def __init__(self, *args, **kwargs):
- super().__init__(*args, **kwargs)
- if self.anchor_boundary_thresh >= 0:
- raise NotImplementedError(
- "anchor_boundary_thresh is a legacy option not implemented for RRPN."
- )
-
- @classmethod
- def from_config(cls, cfg, input_shape: Dict[str, ShapeSpec]):
- ret = super().from_config(cfg, input_shape)
- ret["box2box_transform"] = Box2BoxTransformRotated(weights=cfg.MODEL.RPN.BBOX_REG_WEIGHTS)
- return ret
-
- @torch.no_grad()
- def label_and_sample_anchors(self, anchors: List[RotatedBoxes], gt_instances: List[Instances]):
- """
- Args:
- anchors (list[RotatedBoxes]): anchors for each feature map.
- gt_instances: the ground-truth instances for each image.
-
- Returns:
- list[Tensor]:
- List of #img tensors. i-th element is a vector of labels whose length is
- the total number of anchors across feature maps. Label values are in {-1, 0, 1},
- with meanings: -1 = ignore; 0 = negative class; 1 = positive class.
- list[Tensor]:
- i-th element is a Nx5 tensor, where N is the total number of anchors across
- feature maps. The values are the matched gt boxes for each anchor.
- Values are undefined for those anchors not labeled as 1.
- """
- anchors = RotatedBoxes.cat(anchors)
-
- gt_boxes = [x.gt_boxes for x in gt_instances]
- del gt_instances
-
- gt_labels = []
- matched_gt_boxes = []
- for gt_boxes_i in gt_boxes:
- """
- gt_boxes_i: ground-truth boxes for i-th image
- """
- match_quality_matrix = retry_if_cuda_oom(pairwise_iou_rotated)(gt_boxes_i, anchors)
- matched_idxs, gt_labels_i = retry_if_cuda_oom(self.anchor_matcher)(match_quality_matrix)
- # Matching is memory-expensive and may result in CPU tensors. But the result is small
- gt_labels_i = gt_labels_i.to(device=gt_boxes_i.device)
-
- # A vector of labels (-1, 0, 1) for each anchor
- gt_labels_i = self._subsample_labels(gt_labels_i)
-
- if len(gt_boxes_i) == 0:
- # These values won't be used anyway since the anchor is labeled as background
- matched_gt_boxes_i = torch.zeros_like(anchors.tensor)
- else:
- # TODO wasted indexing computation for ignored boxes
- matched_gt_boxes_i = gt_boxes_i[matched_idxs].tensor
-
- gt_labels.append(gt_labels_i) # N,AHW
- matched_gt_boxes.append(matched_gt_boxes_i)
- return gt_labels, matched_gt_boxes
-
- @torch.no_grad()
- def predict_proposals(self, anchors, pred_objectness_logits, pred_anchor_deltas, image_sizes):
- pred_proposals = self._decode_proposals(anchors, pred_anchor_deltas)
- return find_top_rrpn_proposals(
- pred_proposals,
- pred_objectness_logits,
- image_sizes,
- self.nms_thresh,
- self.pre_nms_topk[self.training],
- self.post_nms_topk[self.training],
- self.min_box_size,
- self.training,
- )
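The docstring of `find_top_rrpn_proposals` describes a pattern that is useful beyond RRPN: keep the highest-scoring candidates, suppress overlaps with NMS, then truncate to a final budget. Below is a rough axis-aligned sketch of that pattern with `torchvision.ops.nms`; the deleted code uses the rotated-box variant `batched_nms_rotated` plus per-level bookkeeping, and the box data and thresholds here are made up for illustration:

```python
import torch
from torchvision.ops import nms

def top_proposals(boxes, scores, pre_nms_topk=1000, post_nms_topk=100, iou_thresh=0.7):
    # 1. keep the pre_nms_topk highest-scoring candidates
    k = min(pre_nms_topk, scores.numel())
    scores, order = scores.sort(descending=True)
    boxes, scores = boxes[order[:k]], scores[:k]
    # 2. suppress heavily overlapping boxes
    keep = nms(boxes, scores, iou_thresh)
    # 3. truncate to the final proposal budget
    keep = keep[:post_nms_topk]
    return boxes[keep], scores[keep]

boxes = torch.rand(500, 4) * 100
boxes[:, 2:] += boxes[:, :2]      # make (x1, y1, x2, y2) well-formed
scores = torch.rand(500)
kept_boxes, kept_scores = top_proposals(boxes, scores)
```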
diff --git a/spaces/CanonOverseer/Canons-Den/Dockerfile b/spaces/CanonOverseer/Canons-Den/Dockerfile
deleted file mode 100644
index eef259fa372a804549fb0af0913718a13344da34..0000000000000000000000000000000000000000
--- a/spaces/CanonOverseer/Canons-Den/Dockerfile
+++ /dev/null
@@ -1,11 +0,0 @@
-FROM node:18-bullseye-slim
-RUN apt-get update && \
- apt-get install -y git
-RUN git clone https://gitgud.io/khanon/oai-reverse-proxy.git /app
-WORKDIR /app
-RUN npm install
-COPY Dockerfile greeting.md* .env* ./
-RUN npm run build
-EXPOSE 7860
-ENV NODE_ENV=production
-CMD [ "npm", "start" ]
diff --git a/spaces/CarlDennis/Lovelive-VITS-JPZH/text/korean.py b/spaces/CarlDennis/Lovelive-VITS-JPZH/text/korean.py
deleted file mode 100644
index 4b6c3fb27532ae6c033023de8a32fc7379bb5431..0000000000000000000000000000000000000000
--- a/spaces/CarlDennis/Lovelive-VITS-JPZH/text/korean.py
+++ /dev/null
@@ -1,205 +0,0 @@
-import re
-from jamo import h2j, j2hcj
-import ko_pron
-
-
-# This is a list of Korean classifiers preceded by pure Korean numerals.
-_korean_classifiers = '군데 권 개 그루 닢 대 두 마리 모 모금 뭇 발 발짝 방 번 벌 보루 살 수 술 시 쌈 움큼 정 짝 채 척 첩 축 켤레 톨 통'
-
-# List of (hangul, hangul divided) pairs:
-_hangul_divided = [(re.compile('%s' % x[0]), x[1]) for x in [
- ('ㄳ', 'ㄱㅅ'),
- ('ㄵ', 'ㄴㅈ'),
- ('ㄶ', 'ㄴㅎ'),
- ('ㄺ', 'ㄹㄱ'),
- ('ㄻ', 'ㄹㅁ'),
- ('ㄼ', 'ㄹㅂ'),
- ('ㄽ', 'ㄹㅅ'),
- ('ㄾ', 'ㄹㅌ'),
- ('ㄿ', 'ㄹㅍ'),
- ('ㅀ', 'ㄹㅎ'),
- ('ㅄ', 'ㅂㅅ'),
- ('ㅘ', 'ㅗㅏ'),
- ('ㅙ', 'ㅗㅐ'),
- ('ㅚ', 'ㅗㅣ'),
- ('ㅝ', 'ㅜㅓ'),
- ('ㅞ', 'ㅜㅔ'),
- ('ㅟ', 'ㅜㅣ'),
- ('ㅢ', 'ㅡㅣ'),
- ('ㅑ', 'ㅣㅏ'),
- ('ㅒ', 'ㅣㅐ'),
- ('ㅕ', 'ㅣㅓ'),
- ('ㅖ', 'ㅣㅔ'),
- ('ㅛ', 'ㅣㅗ'),
- ('ㅠ', 'ㅣㅜ')
-]]
-
-# List of (Latin alphabet, hangul) pairs:
-_latin_to_hangul = [(re.compile('%s' % x[0], re.IGNORECASE), x[1]) for x in [
- ('a', '에이'),
- ('b', '비'),
- ('c', '시'),
- ('d', '디'),
- ('e', '이'),
- ('f', '에프'),
- ('g', '지'),
- ('h', '에이치'),
- ('i', '아이'),
- ('j', '제이'),
- ('k', '케이'),
- ('l', '엘'),
- ('m', '엠'),
- ('n', '엔'),
- ('o', '오'),
- ('p', '피'),
- ('q', '큐'),
- ('r', '아르'),
- ('s', '에스'),
- ('t', '티'),
- ('u', '유'),
- ('v', '브이'),
- ('w', '더블유'),
- ('x', '엑스'),
- ('y', '와이'),
- ('z', '제트')
-]]
-
-# List of (ipa, lazy ipa) pairs:
-_ipa_to_lazy_ipa = [(re.compile('%s' % x[0], re.IGNORECASE), x[1]) for x in [
- ('t͡ɕ','ʧ'),
- ('d͡ʑ','ʥ'),
- ('ɲ','n^'),
- ('ɕ','ʃ'),
- ('ʷ','w'),
- ('ɭ','l`'),
- ('ʎ','ɾ'),
- ('ɣ','ŋ'),
- ('ɰ','ɯ'),
- ('ʝ','j'),
- ('ʌ','ə'),
- ('ɡ','g'),
- ('\u031a','#'),
- ('\u0348','='),
- ('\u031e',''),
- ('\u0320',''),
- ('\u0339','')
-]]
-
-
-def latin_to_hangul(text):
- for regex, replacement in _latin_to_hangul:
- text = re.sub(regex, replacement, text)
- return text
-
-
-def divide_hangul(text):
- text = j2hcj(h2j(text))
- for regex, replacement in _hangul_divided:
- text = re.sub(regex, replacement, text)
- return text
-
-
-def hangul_number(num, sino=True):
- '''Reference https://github.com/Kyubyong/g2pK'''
- num = re.sub(',', '', num)
-
- if num == '0':
- return '영'
- if not sino and num == '20':
- return '스무'
-
- digits = '123456789'
- names = '일이삼사오육칠팔구'
- digit2name = {d: n for d, n in zip(digits, names)}
-
- modifiers = '한 두 세 네 다섯 여섯 일곱 여덟 아홉'
- decimals = '열 스물 서른 마흔 쉰 예순 일흔 여든 아흔'
- digit2mod = {d: mod for d, mod in zip(digits, modifiers.split())}
- digit2dec = {d: dec for d, dec in zip(digits, decimals.split())}
-
- spelledout = []
- for i, digit in enumerate(num):
- i = len(num) - i - 1
- if sino:
- if i == 0:
- name = digit2name.get(digit, '')
- elif i == 1:
- name = digit2name.get(digit, '') + '십'
- name = name.replace('일십', '십')
- else:
- if i == 0:
- name = digit2mod.get(digit, '')
- elif i == 1:
- name = digit2dec.get(digit, '')
- if digit == '0':
- if i % 4 == 0:
- last_three = spelledout[-min(3, len(spelledout)):]
- if ''.join(last_three) == '':
- spelledout.append('')
- continue
- else:
- spelledout.append('')
- continue
- if i == 2:
- name = digit2name.get(digit, '') + '백'
- name = name.replace('일백', '백')
- elif i == 3:
- name = digit2name.get(digit, '') + '천'
- name = name.replace('일천', '천')
- elif i == 4:
- name = digit2name.get(digit, '') + '만'
- name = name.replace('일만', '만')
- elif i == 5:
- name = digit2name.get(digit, '') + '십'
- name = name.replace('일십', '십')
- elif i == 6:
- name = digit2name.get(digit, '') + '백'
- name = name.replace('일백', '백')
- elif i == 7:
- name = digit2name.get(digit, '') + '천'
- name = name.replace('일천', '천')
- elif i == 8:
- name = digit2name.get(digit, '') + '억'
- elif i == 9:
- name = digit2name.get(digit, '') + '십'
- elif i == 10:
- name = digit2name.get(digit, '') + '백'
- elif i == 11:
- name = digit2name.get(digit, '') + '천'
- elif i == 12:
- name = digit2name.get(digit, '') + '조'
- elif i == 13:
- name = digit2name.get(digit, '') + '십'
- elif i == 14:
- name = digit2name.get(digit, '') + '백'
- elif i == 15:
- name = digit2name.get(digit, '') + '천'
- spelledout.append(name)
- return ''.join(elem for elem in spelledout)
-
-
-def number_to_hangul(text):
- '''Reference https://github.com/Kyubyong/g2pK'''
- tokens = set(re.findall(r'(\d[\d,]*)([\uac00-\ud71f]+)', text))
- for token in tokens:
- num, classifier = token
- if classifier[:2] in _korean_classifiers or classifier[0] in _korean_classifiers:
- spelledout = hangul_number(num, sino=False)
- else:
- spelledout = hangul_number(num, sino=True)
- text = text.replace(f'{num}{classifier}', f'{spelledout}{classifier}')
- # digit by digit for remaining digits
- digits = '0123456789'
- names = '영일이삼사오육칠팔구'
- for d, n in zip(digits, names):
- text = text.replace(d, n)
- return text
-
-
-def korean_to_lazy_ipa(text):
- text = latin_to_hangul(text)
- text = number_to_hangul(text)
- text=re.sub('[\uac00-\ud7af]+',lambda x:ko_pron.romanise(x.group(0),'ipa'),text).split('] ~ [')[0]
- for regex, replacement in _ipa_to_lazy_ipa:
- text = re.sub(regex, replacement, text)
- return text
diff --git a/spaces/ChandraMohanNayal/AutoGPT/autogpt/__main__.py b/spaces/ChandraMohanNayal/AutoGPT/autogpt/__main__.py
deleted file mode 100644
index 128f9eea4900429e88276abdde3419b806001ac7..0000000000000000000000000000000000000000
--- a/spaces/ChandraMohanNayal/AutoGPT/autogpt/__main__.py
+++ /dev/null
@@ -1,5 +0,0 @@
-"""Auto-GPT: A GPT powered AI Assistant"""
-import autogpt.cli
-
-if __name__ == "__main__":
- autogpt.cli.main()
diff --git a/spaces/ChrisCaviar/ControlNet-v1-1/app_segmentation.py b/spaces/ChrisCaviar/ControlNet-v1-1/app_segmentation.py
deleted file mode 100644
index f120db46f7387c76829d987cb9640cc626b1231a..0000000000000000000000000000000000000000
--- a/spaces/ChrisCaviar/ControlNet-v1-1/app_segmentation.py
+++ /dev/null
@@ -1,104 +0,0 @@
-#!/usr/bin/env python
-
-import gradio as gr
-
-from utils import randomize_seed_fn
-
-
-def create_demo(process, max_images=12, default_num_images=3):
- with gr.Blocks() as demo:
- with gr.Row():
- with gr.Column():
- image = gr.Image()
- prompt = gr.Textbox(label='Prompt')
- run_button = gr.Button('Run')
- with gr.Accordion('Advanced options', open=False):
- preprocessor_name = gr.Radio(label='Preprocessor',
- choices=['UPerNet', 'None'],
- type='value',
- value='UPerNet')
- num_samples = gr.Slider(label='Number of images',
- minimum=1,
- maximum=max_images,
- value=default_num_images,
- step=1)
- image_resolution = gr.Slider(label='Image resolution',
- minimum=256,
- maximum=512,
- value=512,
- step=256)
- preprocess_resolution = gr.Slider(
- label='Preprocess resolution',
- minimum=128,
- maximum=512,
- value=512,
- step=1)
- num_steps = gr.Slider(label='Number of steps',
- minimum=1,
- maximum=100,
- value=20,
- step=1)
- guidance_scale = gr.Slider(label='Guidance scale',
- minimum=0.1,
- maximum=30.0,
- value=9.0,
- step=0.1)
- seed = gr.Slider(label='Seed',
- minimum=0,
- maximum=1000000,
- step=1,
- value=0,
- randomize=True)
- randomize_seed = gr.Checkbox(label='Randomize seed',
- value=True)
- a_prompt = gr.Textbox(
- label='Additional prompt',
- value='best quality, extremely detailed')
- n_prompt = gr.Textbox(
- label='Negative prompt',
- value=
- 'longbody, lowres, bad anatomy, bad hands, missing fingers, extra digit, fewer digits, cropped, worst quality, low quality'
- )
- with gr.Column():
- result = gr.Gallery(label='Output', show_label=False).style(
- columns=2, object_fit='scale-down')
- inputs = [
- image,
- prompt,
- a_prompt,
- n_prompt,
- num_samples,
- image_resolution,
- preprocess_resolution,
- num_steps,
- guidance_scale,
- seed,
- preprocessor_name,
- ]
- prompt.submit(
- fn=randomize_seed_fn,
- inputs=[seed, randomize_seed],
- outputs=seed,
- ).then(
- fn=process,
- inputs=inputs,
- outputs=result,
- )
- run_button.click(
- fn=randomize_seed_fn,
- inputs=[seed, randomize_seed],
- outputs=seed,
- ).then(
- fn=process,
- inputs=inputs,
- outputs=result,
- api_name='segmentation',
- )
- return demo
-
-
-if __name__ == '__main__':
- from model import Model
- model = Model(task_name='segmentation')
- demo = create_demo(model.process_segmentation)
- demo.queue().launch()
diff --git a/spaces/CikeyQI/Yunzai/Yunzai/plugins/ws-plugin/model/red/tool.js b/spaces/CikeyQI/Yunzai/Yunzai/plugins/ws-plugin/model/red/tool.js
deleted file mode 100644
index 70685f9a403ce195c0d8770fa0d88d19176d427c..0000000000000000000000000000000000000000
--- a/spaces/CikeyQI/Yunzai/Yunzai/plugins/ws-plugin/model/red/tool.js
+++ /dev/null
@@ -1,428 +0,0 @@
-import fs from 'fs'
-import { createHash, randomUUID } from 'crypto'
-import { resolve, join, dirname, basename } from 'path'
-import fetch, { FormData, Blob } from 'node-fetch'
-import { fileURLToPath } from 'url'
-import { exec, spawn } from 'child_process'
-import os from 'os'
-import _ from 'lodash'
-import { Stream } from "stream"
-import YAML from 'yaml'
-import { TMP_DIR } from '../tool.js'
-
-const user = os.userInfo().username
-let redPath = `C:/Users/${user}/.chronocat`
-if (!fs.existsSync(redPath)) {
- redPath = `C:/Users/${user}/AppData/Roaming/BetterUniverse/QQNT`
-}
-
-const roleMap = {
- 2: 'member',
- 3: 'admin',
- 4: 'owner'
-}
-
-async function uploadImg(bot, msg) {
- const file = await upload(bot, msg, 'image/png')
- if (!file.imageInfo) throw "获取图片信息失败,请检查图片状态"
- return {
- elementType: 2,
- picElement: {
- md5HexStr: file.md5,
- fileSize: file.fileSize,
- picHeight: file.imageInfo.height,
- picWidth: file.imageInfo.width,
- fileName: basename(file.ntFilePath),
- sourcePath: file.ntFilePath,
- picType: file.imageInfo.type === 'gif' ? 2000 : 1000
- }
- }
-}
-
-async function upload(bot, msg, contentType) {
- if (!msg) throw { noLog: true }
- let buffer
- if (msg instanceof Stream.Readable) {
- buffer = fs.readFileSync(msg.path)
- contentType = contentType.split('/')[0] + '/' + msg.path.substring(msg.path.lastIndexOf('.') + 1)
- } else if (Buffer.isBuffer(msg)) {
- buffer = msg
- } else if (msg.match(/^base64:\/\//)) {
- buffer = Buffer.from(msg.replace(/^base64:\/\//, ""), 'base64')
- } else if (msg.startsWith('http')) {
- const img = await fetch(msg)
- const type = img.headers.get('content-type');
- if (type) contentType = type
- const arrayBuffer = await img.arrayBuffer()
- buffer = Buffer.from(arrayBuffer)
- } else if (msg.startsWith('file://')) {
- buffer = fs.readFileSync(msg.replace(/file:\/{2,3}/, ''))
- contentType = contentType.split('/')[0] + '/' + msg.substring(msg.lastIndexOf('.') + 1)
- } else {
- buffer = fs.readFileSync(msg)
- contentType = contentType.split('/')[0] + '/' + msg.substring(msg.lastIndexOf('.') + 1)
- }
- const blob = new Blob([buffer], { type: contentType })
- const formData = new FormData()
- formData.append('file', blob, 'ws-plugin.' + contentType.split('/')[1])
- const file = await bot.sendApi('POST', 'upload', formData)
- if (file.error) {
- throw file.error
- }
- file.contentType = contentType
- return file
-}
-
-async function uploadAudio(file) {
- let buffer
- if (file.match(/^base64:\/\//)) {
- buffer = Buffer.from(file.replace(/^base64:\/\//, ""), 'base64')
- } else if (file.startsWith('http')) {
- const http = await fetch(file)
- const arrayBuffer = await http.arrayBuffer()
- buffer = Buffer.from(arrayBuffer)
- } else if (file.startsWith('file://')) {
- buffer = fs.readFileSync(file.replace(/file:\/{2,3}/, ''))
- }
- const head = buffer.subarray(0, 7).toString()
- let filePath
- let duration = 0
- if (!head.includes('SILK')) {
- const tmpPath = await saveTmp(buffer)
- duration = await getDuration(tmpPath)
- const res = await audioTrans(tmpPath)
- filePath = res.silkFile
- buffer = fs.readFileSync(filePath)
- } else {
- filePath = await saveTmp(buffer)
- }
-
- const hash = createHash('md5')
- hash.update(buffer.toString('binary'), 'binary')
- const md5 = hash.digest('hex')
- return {
- elementType: 4,
- pttElement: {
- md5HexStr: md5,
- fileSize: buffer.length,
- fileName: md5 + '.amr',
- filePath: filePath,
- // waveAmplitudes: [36, 28, 68, 28, 84, 28],
- waveAmplitudes: [
- 99, 99, 99, 99, 99, 99, 99, 99, 99, 99, 99, 99, 99, 99, 99, 99, 99
- ],
- duration: duration
- }
- }
-}
-
-function audioTrans(tmpPath, samplingRate = '24000') {
- return new Promise((resolve, reject) => {
- const pcmFile = join(TMP_DIR, randomUUID({ disableEntropyCache: true }))
- exec(`ffmpeg -y -i "${tmpPath}" -ar ${samplingRate} -ac 1 -f s16le "${pcmFile}"`, async () => {
- fs.unlink(tmpPath, () => { })
- fs.access(pcmFile, fs.constants.F_OK, (err) => {
- if (err) {
- reject('音频转码失败, 请确保你的 ffmpeg 已正确安装')
- }
- })
-
- const silkFile = join(TMP_DIR, randomUUID({ disableEntropyCache: true }))
- try {
- await pcmToSilk(pcmFile, silkFile, samplingRate)
- } catch (error) {
- reject('red发送语音暂不支持非win系统')
- }
- fs.unlink(pcmFile, () => { })
-
- resolve({
- silkFile
- })
- })
- })
-}
-
-function pcmToSilk(input, output, samplingRate) {
- return new Promise((resolve, reject) => {
- const args = ['-i', input, '-s', samplingRate, '-o', output]
- const __filename = fileURLToPath(import.meta.url);
- const __dirname = dirname(__filename);
- const child = spawn(join(__dirname, './cli.exe'), args)
- child.on('exit', () => {
- fs.access(output, fs.constants.F_OK, (err) => {
- if (err) {
- reject('音频转码失败')
- }
- })
- // fs.stat(output, (err, stats) => {
- // if (err) {
- // console.error(err);
- // return;
- // }
- // fs.truncate(output, stats.size - 1, err => {
- // if (err) {
- // console.error(err);
- // return;
- // }
- // });
- // });
- resolve()
- })
- })
-}
-
-function getDuration(file) {
- return new Promise((resolve, reject) => {
- exec(`ffmpeg -i ${file}`, function (err, stdout, stderr) {
- const outStr = stderr.toString()
- const regDuration = /Duration\: ([0-9\:\.]+),/
- const rs = regDuration.exec(outStr)
- if (rs === null) {
- reject("获取音频时长失败, 请确保你的 ffmpeg 已正确安装")
- } else if (rs[1]) {
- const time = rs[1]
- const parts = time.split(":")
- const seconds = (+parts[0]) * 3600 + (+parts[1]) * 60 + (+parts[2])
- const round = seconds.toString().split('.')[0]
- resolve(+ round)
- }
- })
- })
-}
-
-async function saveTmp(data, ext = null) {
- ext = ext ? '.' + ext : ''
- const filename = randomUUID({ disableEntropyCache: true }) + ext
- const tmpPath = resolve(TMP_DIR, filename)
- fs.writeFileSync(tmpPath, data)
- return tmpPath
-}
-
-async function getNtPath(bot) {
- let dataPath
- try {
- const buffer = fs.readFileSync('./plugins/ws-plugin/resources/common/cont/logo.png')
- const blob = new Blob([buffer], { type: 'image/png' })
- const formData = new FormData()
- formData.append('file', blob, '1.png')
- const file = await bot.sendApi('POST', 'upload', formData)
- fs.unlinkSync(file.ntFilePath)
- const index = file.ntFilePath.indexOf('nt_data');
- dataPath = file.ntFilePath.slice(0, index + 'nt_data'.length);
- } catch (error) {
- return null
- }
- return dataPath
-}
-
-async function uploadVideo(bot, file) {
- let type = 'mp4'
- if (file.match(/^base64:\/\//)) {
- const buffer = Buffer.from(file.replace(/^base64:\/\//, ""), 'base64')
- file = join(TMP_DIR, randomUUID({ disableEntropyCache: true }) + '.' + type)
- fs.writeFileSync(file, buffer)
- } else {
- file = file.replace(/file:\/{2,3}/, '')
- type = file.substring(file.lastIndexOf('.') + 1)
- const Temp = join(TMP_DIR, randomUUID({ disableEntropyCache: true }) + '.' + type)
- fs.copyFileSync(file, Temp)
- file = Temp
- }
- const ntPath = await getNtPath(bot)
- if (!ntPath) return
- const now = new Date();
- const year = now.getFullYear();
- const month = now.getMonth() + 1;
- const date = `${year}-${month.toString().padStart(2, '0')}`;
- const video = await getVideoInfo(file)
-
- let oriPath = `${ntPath}/Video`
- if (!fs.existsSync(oriPath)) fs.mkdirSync(oriPath)
- oriPath = `${oriPath}/${date}`
- if (!fs.existsSync(oriPath)) fs.mkdirSync(oriPath)
- oriPath = `${oriPath}/Ori`
- if (!fs.existsSync(oriPath)) fs.mkdirSync(oriPath)
- oriPath = `${oriPath}/${video.videoMd5}.${type}`
-
- let thumbPath = `${ntPath}/Video/${date}/Thumb`
- if (!fs.existsSync(thumbPath)) fs.mkdirSync(thumbPath)
- thumbPath = `${thumbPath}/${video.videoMd5}_0.png`
-
- fs.copyFileSync(file, oriPath)
- fs.unlinkSync(file)
- const thumb = await getThumbInfo(oriPath, thumbPath)
- return {
- elementType: 5,
- videoElement: {
- filePath: oriPath,
- fileName: video.videoMd5 + '.' + type,
- videoMd5: video.videoMd5,
- thumbMd5: thumb.thumbMd5,
- fileTime: video.fileTime,
- thumbSize: thumb.thumbSize,
- fileSize: video.fileSize,
- thumbWidth: thumb.thumbWidth,
- thumbHeight: thumb.thumbHeight
- }
- }
-}
-
-async function getVideoInfo(file) {
- const fileTime = await getVideoTime(file)
- const videoMd5 = await getVideoMd5(file)
- const fileSize = fs.readFileSync(file).length
- return {
- fileTime,
- videoMd5,
- fileSize
- }
-}
-
-function getVideoMd5(file) {
- return new Promise((resolve, reject) => {
- const stream = fs.createReadStream(file);
- const hash = createHash('md5');
- stream.on('data', chunk => {
- hash.update(chunk);
- });
- stream.on('end', () => {
- const md5 = hash.digest('hex');
- resolve(md5)
- });
- })
-}
-
-function getVideoTime(file) {
- return new Promise((resolve, reject) => {
- exec(`ffprobe -v error -show_entries format=duration -of default=noprint_wrappers=1:nokey=1 "${file}"`, (error, stdout, stderr) => {
- if (error) {
- reject('获取视频长度失败, 请确保你的 ffmpeg 已正确安装')
- }
- const durationInSeconds = parseInt(stdout);
- resolve(durationInSeconds)
- });
- })
-}
-
-async function getThumbInfo(file, thumbPath) {
-
- const tempPath = join(TMP_DIR, randomUUID({ disableEntropyCache: true }) + '.jpg')
-
- const { thumbMd5, thumbSize } = await extractThumbnail(file, tempPath);
-
- const { thumbWidth, thumbHeight } = getImageSize(tempPath);
-
- fs.copyFileSync(tempPath, thumbPath)
- fs.unlinkSync(tempPath)
-
- return { thumbMd5, thumbWidth, thumbHeight, thumbSize };
-}
-
-function extractThumbnail(inputFile, outputFile) {
- return new Promise((resolve, reject) => {
- exec(`ffmpeg -i "${inputFile}" -ss 00:00:00.000 -vframes 1 -vf "scale=iw/3:ih/3" "${outputFile}"
- `, async () => {
- fs.access(outputFile, fs.constants.F_OK, (err) => {
- if (err) {
- reject('获取视频封面失败, 请确保你的 ffmpeg 已正确安装')
- }
- })
-
- const buffer = fs.readFileSync(outputFile);
- const hash = createHash('md5');
- hash.update(buffer);
- resolve({
- thumbMd5: hash.digest('hex'),
- thumbSize: buffer.length
- })
- })
- })
-}
-
-function getImageSize(file) {
- const buffer = fs.readFileSync(file);
- const start = buffer.indexOf(Buffer.from([0xff, 0xc0]));
- const thumbHeight = buffer.readUInt16BE(start + 5);
- const thumbWidth = buffer.readUInt16BE(start + 7);
- return { thumbWidth, thumbHeight };
-}
-
-async function uploadFile(file) {
- let buffer, name, path = process.cwd() + '/plugins/ws-plugin/Temp/'
- if (file.startsWith('http')) {
- const http = await fetch(file)
- const arrayBuffer = await http.arrayBuffer()
- buffer = Buffer.from(arrayBuffer)
- name = file.substring(file.lastIndexOf('/') + 1)
- path = path + name
- fs.writeFileSync(path, buffer);
- } else if (file.startsWith('file://')) {
- buffer = fs.readFileSync(file.replace(/file:\/{2,3}/, ''))
- name = file.substring(file.lastIndexOf('/') + 1)
- path = path + name
- fs.copyFileSync(file, path)
- } else if (Buffer.isBuffer(file)) {
- buffer = file
- name = 'buffer'
- path = path + name
- fs.writeFileSync(path, buffer);
- } else {
- buffer = fs.readFileSync(file)
- name = file.substring(file.lastIndexOf('/') + 1)
- path = path + name
- fs.copyFileSync(file, path)
- }
- const size = buffer.length
- const hash = createHash('md5');
- hash.update(buffer);
- const md5 = hash.digest('hex')
- return {
- elementType: 3,
- fileElement: {
- fileMd5: md5,
- fileName: name,
- filePath: path,
- fileSize: size,
- }
- }
-}
-
-function getToken() {
- let tokenPath
- try {
- if (os.platform() === 'win32') {
- tokenPath = `${redPath}/config/chronocat.yml`
- if (fs.existsSync(tokenPath)) {
- const data = YAML.parse(fs.readFileSync(tokenPath, 'utf-8'))
- for (const i of data?.servers || []) {
- if (i.type === 'red') {
- return i.token
- }
- }
- logger.error('[ws-plugin] 请检查chronocat配置是否开启red服务')
- return false
- } else {
- tokenPath = `${redPath}/RED_PROTOCOL_TOKEN`
- return fs.readFileSync(tokenPath, 'utf-8')
- }
- } else {
- logger.error('[ws-plugin] 非Windows系统请自行获取Token')
- return false
- }
- } catch (error) {
- logger.error('[ws-plugin] QQNT自动获取Token失败,请检查是否已安装Chronocat并尝试手动获取')
- logger.error(error)
- return false
- }
-}
-
-export {
- uploadImg,
- uploadAudio,
- uploadVideo,
- uploadFile,
- getToken,
- getNtPath,
- roleMap,
- redPath
-}
\ No newline at end of file
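`getVideoTime` above reads the media duration by shelling out to ffprobe and parsing its stdout. A Python sketch of the same probe, assuming `ffprobe` is on the PATH; the command-line flags mirror the ones in the deleted helper and `clip.mp4` is a placeholder path:

```python
import subprocess

def media_duration_seconds(path: str) -> float:
    """Return the duration reported by ffprobe, in seconds."""
    result = subprocess.run(
        ["ffprobe", "-v", "error",
         "-show_entries", "format=duration",
         "-of", "default=noprint_wrappers=1:nokey=1",
         path],
        capture_output=True, text=True, check=True,
    )
    return float(result.stdout.strip())

# print(media_duration_seconds("clip.mp4"))
```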
diff --git a/spaces/CjangCjengh/Sanskrit-TTS/utils.py b/spaces/CjangCjengh/Sanskrit-TTS/utils.py
deleted file mode 100644
index 07839a71a8339f90fe7eeff4dc4a6bd284330049..0000000000000000000000000000000000000000
--- a/spaces/CjangCjengh/Sanskrit-TTS/utils.py
+++ /dev/null
@@ -1,75 +0,0 @@
-import logging
-from json import loads
-from torch import load, FloatTensor
-from numpy import float32
-import librosa
-
-
-class HParams():
- def __init__(self, **kwargs):
- for k, v in kwargs.items():
- if type(v) == dict:
- v = HParams(**v)
- self[k] = v
-
- def keys(self):
- return self.__dict__.keys()
-
- def items(self):
- return self.__dict__.items()
-
- def values(self):
- return self.__dict__.values()
-
- def __len__(self):
- return len(self.__dict__)
-
- def __getitem__(self, key):
- return getattr(self, key)
-
- def __setitem__(self, key, value):
- return setattr(self, key, value)
-
- def __contains__(self, key):
- return key in self.__dict__
-
- def __repr__(self):
- return self.__dict__.__repr__()
-
-
-def load_checkpoint(checkpoint_path, model):
- checkpoint_dict = load(checkpoint_path, map_location='cpu')
- iteration = checkpoint_dict['iteration']
- saved_state_dict = checkpoint_dict['model']
- if hasattr(model, 'module'):
- state_dict = model.module.state_dict()
- else:
- state_dict = model.state_dict()
- new_state_dict= {}
- for k, v in state_dict.items():
- try:
- new_state_dict[k] = saved_state_dict[k]
- except:
- logging.info("%s is not in the checkpoint" % k)
- new_state_dict[k] = v
- if hasattr(model, 'module'):
- model.module.load_state_dict(new_state_dict)
- else:
- model.load_state_dict(new_state_dict)
- logging.info("Loaded checkpoint '{}' (iteration {})" .format(
- checkpoint_path, iteration))
- return
-
-
-def get_hparams_from_file(config_path):
- with open(config_path, "r") as f:
- data = f.read()
- config = loads(data)
-
- hparams = HParams(**config)
- return hparams
-
-
-def load_audio_to_torch(full_path, target_sampling_rate):
- audio, sampling_rate = librosa.load(full_path, sr=target_sampling_rate, mono=True)
- return FloatTensor(audio.astype(float32))
diff --git a/spaces/CofAI/LengthConverter/style.css b/spaces/CofAI/LengthConverter/style.css
deleted file mode 100644
index 114adf441e9032febb46bc056b2a8bb651075f0d..0000000000000000000000000000000000000000
--- a/spaces/CofAI/LengthConverter/style.css
+++ /dev/null
@@ -1,28 +0,0 @@
-body {
- padding: 2rem;
- font-family: -apple-system, BlinkMacSystemFont, "Arial", sans-serif;
-}
-
-h1 {
- font-size: 16px;
- margin-top: 0;
-}
-
-p {
- color: rgb(107, 114, 128);
- font-size: 15px;
- margin-bottom: 10px;
- margin-top: 5px;
-}
-
-.card {
- max-width: 620px;
- margin: 0 auto;
- padding: 16px;
- border: 1px solid lightgray;
- border-radius: 16px;
-}
-
-.card p:last-child {
- margin-bottom: 0;
-}
diff --git a/spaces/CofAI/chat.v1/web.html b/spaces/CofAI/chat.v1/web.html
deleted file mode 100644
index 9e1fd00c7dd7aef4e03d88c14c8e8d0e67e808de..0000000000000000000000000000000000000000
--- a/spaces/CofAI/chat.v1/web.html
+++ /dev/null
@@ -1,60 +0,0 @@
-
-
-
- API Demo
-
-
-
API Demo
-
-
-
-
-
-
-
-
-
-
-
-
\ No newline at end of file
diff --git a/spaces/CofAI/chat/g4f/models.py b/spaces/CofAI/chat/g4f/models.py
deleted file mode 100644
index 37efcfb2a7e870f3ef3093d167efdab299083220..0000000000000000000000000000000000000000
--- a/spaces/CofAI/chat/g4f/models.py
+++ /dev/null
@@ -1,233 +0,0 @@
-from g4f import Provider
-
-
-class Model:
- class model:
- name: str
- base_provider: str
- best_provider: str
-
- class gpt_35_turbo:
- name: str = 'gpt-3.5-turbo'
- base_provider: str = 'openai'
- best_provider: Provider.Provider = Provider.Wewordle
-
- class gpt_35_turbo_0613:
- name: str = 'gpt-3.5-turbo-0613'
- base_provider: str = 'openai'
- best_provider: Provider.Provider = Provider.Zeabur
-
- class gpt_35_turbo_0301:
- name: str = 'gpt-3.5-turbo-0301'
- base_provider: str = 'openai'
- best_provider: Provider.Provider = Provider.Zeabur
-
- class gpt_35_turbo_16k_0613:
- name: str = 'gpt-3.5-turbo-16k-0613'
- base_provider: str = 'openai'
- best_provider: Provider.Provider = Provider.Zeabur
-
- class gpt_35_turbo_16k:
- name: str = 'gpt-3.5-turbo-16k'
- base_provider: str = 'openai'
- best_provider: Provider.Provider = Provider.ChatFree
-
- class gpt_4_dev:
- name: str = 'gpt-4-for-dev'
- base_provider: str = 'openai'
- best_provider: Provider.Provider = Provider.Phind
-
- class gpt_4:
- name: str = 'gpt-4'
- base_provider: str = 'openai'
- best_provider: Provider.Provider = Provider.ChatgptAi
-
- class gpt_4_0613:
- name: str = 'gpt-4-0613'
- base_provider: str = 'openai'
- best_provider: Provider.Provider = Provider.Lockchat
- best_providers: list = [Provider.Bing, Provider.Lockchat]
-
- class claude_instant_v1_100k:
- name: str = 'claude-instant-v1-100k'
- base_provider: str = 'anthropic'
- best_provider: Provider.Provider = Provider.Vercel
-
- class claude_instant_v1:
- name: str = 'claude-instant-v1'
- base_provider: str = 'anthropic'
- best_provider: Provider.Provider = Provider.Vercel
-
- class claude_v1_100k:
- name: str = 'claude-v1-100k'
- base_provider: str = 'anthropic'
- best_provider: Provider.Provider = Provider.Vercel
-
- class claude_v1:
- name: str = 'claude-v1'
- base_provider: str = 'anthropic'
- best_provider: Provider.Provider = Provider.Vercel
-
- class alpaca_7b:
- name: str = 'alpaca-7b'
- base_provider: str = 'replicate'
- best_provider: Provider.Provider = Provider.Vercel
-
- class stablelm_tuned_alpha_7b:
- name: str = 'stablelm-tuned-alpha-7b'
- base_provider: str = 'replicate'
- best_provider: Provider.Provider = Provider.Vercel
-
- class bloom:
- name: str = 'bloom'
- base_provider: str = 'huggingface'
- best_provider: Provider.Provider = Provider.Vercel
-
- class bloomz:
- name: str = 'bloomz'
- base_provider: str = 'huggingface'
- best_provider: Provider.Provider = Provider.Vercel
-
- class flan_t5_xxl:
- name: str = 'flan-t5-xxl'
- base_provider: str = 'huggingface'
- best_provider: Provider.Provider = Provider.Vercel
-
- class flan_ul2:
- name: str = 'flan-ul2'
- base_provider: str = 'huggingface'
- best_provider: Provider.Provider = Provider.Vercel
-
- class gpt_neox_20b:
- name: str = 'gpt-neox-20b'
- base_provider: str = 'huggingface'
- best_provider: Provider.Provider = Provider.Vercel
-
- class oasst_sft_4_pythia_12b_epoch_35:
- name: str = 'oasst-sft-4-pythia-12b-epoch-3.5'
- base_provider: str = 'huggingface'
- best_provider: Provider.Provider = Provider.Vercel
-
- class santacoder:
- name: str = 'santacoder'
- base_provider: str = 'huggingface'
- best_provider: Provider.Provider = Provider.Vercel
-
- class command_medium_nightly:
- name: str = 'command-medium-nightly'
- base_provider: str = 'cohere'
- best_provider: Provider.Provider = Provider.Vercel
-
- class command_xlarge_nightly:
- name: str = 'command-xlarge-nightly'
- base_provider: str = 'cohere'
- best_provider: Provider.Provider = Provider.Vercel
-
- class code_cushman_001:
- name: str = 'code-cushman-001'
- base_provider: str = 'openai'
- best_provider: Provider.Provider = Provider.Vercel
-
- class code_davinci_002:
- name: str = 'code-davinci-002'
- base_provider: str = 'openai'
- best_provider: Provider.Provider = Provider.Vercel
-
- class text_ada_001:
- name: str = 'text-ada-001'
- base_provider: str = 'openai'
- best_provider: Provider.Provider = Provider.Vercel
-
- class text_babbage_001:
- name: str = 'text-babbage-001'
- base_provider: str = 'openai'
- best_provider: Provider.Provider = Provider.Vercel
-
- class text_curie_001:
- name: str = 'text-curie-001'
- base_provider: str = 'openai'
- best_provider: Provider.Provider = Provider.Vercel
-
- class text_davinci_002:
- name: str = 'text-davinci-002'
- base_provider: str = 'openai'
- best_provider: Provider.Provider = Provider.Vercel
-
- class text_davinci_003:
- name: str = 'text-davinci-003'
- base_provider: str = 'openai'
- best_provider: Provider.Provider = Provider.Vercel
-
- class palm:
- name: str = 'palm2'
- base_provider: str = 'google'
- best_provider: Provider.Provider = Provider.Bard
-
- class falcon_40b:
- name: str = 'falcon-40b'
- base_provider: str = 'huggingface'
- best_provider: Provider.Provider = Provider.H2o
-
- class falcon_7b:
- name: str = 'falcon-7b'
- base_provider: str = 'huggingface'
- best_provider: Provider.Provider = Provider.H2o
-
- class llama_13b:
- name: str = 'llama-13b'
- base_provider: str = 'huggingface'
- best_provider: Provider.Provider = Provider.H2o
-
-
-class ModelUtils:
- convert: dict = {
- 'gpt-3.5-turbo': Model.gpt_35_turbo,
- 'gpt-3.5-turbo-0613': Model.gpt_35_turbo_0613,
- 'gpt-3.5-turbo-0301': Model.gpt_35_turbo_0301,
- 'gpt-4': Model.gpt_4,
- 'gpt-4-0613': Model.gpt_4_0613,
- 'gpt-4-for-dev': Model.gpt_4_dev,
- 'gpt-3.5-turbo-16k': Model.gpt_35_turbo_16k,
- 'gpt-3.5-turbo-16k-0613': Model.gpt_35_turbo_16k_0613,
-
- 'claude-instant-v1-100k': Model.claude_instant_v1_100k,
- 'claude-v1-100k': Model.claude_v1_100k,
- 'claude-instant-v1': Model.claude_instant_v1,
- 'claude-v1': Model.claude_v1,
-
- 'alpaca-7b': Model.alpaca_7b,
- 'stablelm-tuned-alpha-7b': Model.stablelm_tuned_alpha_7b,
-
- 'bloom': Model.bloom,
- 'bloomz': Model.bloomz,
-
- 'flan-t5-xxl': Model.flan_t5_xxl,
- 'flan-ul2': Model.flan_ul2,
-
- 'gpt-neox-20b': Model.gpt_neox_20b,
- 'oasst-sft-4-pythia-12b-epoch-3.5': Model.oasst_sft_4_pythia_12b_epoch_35,
- 'santacoder': Model.santacoder,
-
- 'command-medium-nightly': Model.command_medium_nightly,
- 'command-xlarge-nightly': Model.command_xlarge_nightly,
-
- 'code-cushman-001': Model.code_cushman_001,
- 'code-davinci-002': Model.code_davinci_002,
-
- 'text-ada-001': Model.text_ada_001,
- 'text-babbage-001': Model.text_babbage_001,
- 'text-curie-001': Model.text_curie_001,
- 'text-davinci-002': Model.text_davinci_002,
- 'text-davinci-003': Model.text_davinci_003,
-
- 'palm2': Model.palm,
- 'palm': Model.palm,
- 'google': Model.palm,
- 'google-bard': Model.palm,
- 'google-palm': Model.palm,
- 'bard': Model.palm,
-
- 'falcon-40b': Model.falcon_40b,
- 'falcon-7b': Model.falcon_7b,
- 'llama-13b': Model.llama_13b,
- }
diff --git a/spaces/CyberHarem/find_my_waifu/civitai.py b/spaces/CyberHarem/find_my_waifu/civitai.py
deleted file mode 100644
index 7f235e092ca6430818213fc5de8ffd141c26cc16..0000000000000000000000000000000000000000
--- a/spaces/CyberHarem/find_my_waifu/civitai.py
+++ /dev/null
@@ -1,26 +0,0 @@
-from gchar.games.dispatch.access import GAME_CHARS
-
-
-def try_find_title(char_name, game_name):
- try:
- game_cls = GAME_CHARS[game_name.lower()]
- ch = game_cls.get(char_name)
- if ch:
- names = []
- if ch.enname:
- names.append(str(ch.enname))
- if ch.jpname:
- names.append(str(ch.jpname))
- if ch.cnname:
- names.append(str(ch.cnname))
- if hasattr(ch, 'krname') and ch.krname:
- names.append(str(ch.krname))
-
- return f"{'/'.join(names)} ({game_cls.__official_name__})"
-
- else:
- cname = ' '.join(list(map(str.capitalize, char_name.split(' '))))
- return f'{cname} ({game_cls.__official_name__})'
-
- except KeyError:
- return None
diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/PIL/PSDraw.py b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/PIL/PSDraw.py
deleted file mode 100644
index 13b3048f67e18ac58170c3a1bd25cb18d66b30fe..0000000000000000000000000000000000000000
--- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/PIL/PSDraw.py
+++ /dev/null
@@ -1,229 +0,0 @@
-#
-# The Python Imaging Library
-# $Id$
-#
-# Simple PostScript graphics interface
-#
-# History:
-# 1996-04-20 fl Created
-# 1999-01-10 fl Added gsave/grestore to image method
-# 2005-05-04 fl Fixed floating point issue in image (from Eric Etheridge)
-#
-# Copyright (c) 1997-2005 by Secret Labs AB. All rights reserved.
-# Copyright (c) 1996 by Fredrik Lundh.
-#
-# See the README file for information on usage and redistribution.
-#
-
-import sys
-
-from . import EpsImagePlugin
-
-##
-# Simple PostScript graphics interface.
-
-
-class PSDraw:
- """
- Sets up printing to the given file. If ``fp`` is omitted,
- ``sys.stdout.buffer`` or ``sys.stdout`` is assumed.
- """
-
- def __init__(self, fp=None):
- if not fp:
- try:
- fp = sys.stdout.buffer
- except AttributeError:
- fp = sys.stdout
- self.fp = fp
-
- def begin_document(self, id=None):
- """Set up printing of a document. (Write PostScript DSC header.)"""
- # FIXME: incomplete
- self.fp.write(
- b"%!PS-Adobe-3.0\n"
- b"save\n"
- b"/showpage { } def\n"
- b"%%EndComments\n"
- b"%%BeginDocument\n"
- )
- # self.fp.write(ERROR_PS) # debugging!
- self.fp.write(EDROFF_PS)
- self.fp.write(VDI_PS)
- self.fp.write(b"%%EndProlog\n")
- self.isofont = {}
-
- def end_document(self):
- """Ends printing. (Write PostScript DSC footer.)"""
- self.fp.write(b"%%EndDocument\nrestore showpage\n%%End\n")
- if hasattr(self.fp, "flush"):
- self.fp.flush()
-
- def setfont(self, font, size):
- """
- Selects which font to use.
-
- :param font: A PostScript font name
- :param size: Size in points.
- """
- font = bytes(font, "UTF-8")
- if font not in self.isofont:
- # reencode font
- self.fp.write(b"/PSDraw-%s ISOLatin1Encoding /%s E\n" % (font, font))
- self.isofont[font] = 1
- # rough
- self.fp.write(b"/F0 %d /PSDraw-%s F\n" % (size, font))
-
- def line(self, xy0, xy1):
- """
- Draws a line between the two points. Coordinates are given in
- PostScript point coordinates (72 points per inch, (0, 0) is the lower
- left corner of the page).
- """
- self.fp.write(b"%d %d %d %d Vl\n" % (*xy0, *xy1))
-
- def rectangle(self, box):
- """
- Draws a rectangle.
-
- :param box: A tuple of four integers, specifying left, bottom, width and
- height.
- """
- self.fp.write(b"%d %d M 0 %d %d Vr\n" % box)
-
- def text(self, xy, text):
- """
- Draws text at the given position. You must use
- :py:meth:`~PIL.PSDraw.PSDraw.setfont` before calling this method.
- """
- text = bytes(text, "UTF-8")
- text = b"\\(".join(text.split(b"("))
- text = b"\\)".join(text.split(b")"))
- xy += (text,)
- self.fp.write(b"%d %d M (%s) S\n" % xy)
-
- def image(self, box, im, dpi=None):
- """Draw a PIL image, centered in the given box."""
- # default resolution depends on mode
- if not dpi:
- if im.mode == "1":
- dpi = 200 # fax
- else:
- dpi = 100 # greyscale
- # image size (on paper)
- x = im.size[0] * 72 / dpi
- y = im.size[1] * 72 / dpi
- # max allowed size
- xmax = float(box[2] - box[0])
- ymax = float(box[3] - box[1])
- if x > xmax:
- y = y * xmax / x
- x = xmax
- if y > ymax:
- x = x * ymax / y
- y = ymax
- dx = (xmax - x) / 2 + box[0]
- dy = (ymax - y) / 2 + box[1]
- self.fp.write(b"gsave\n%f %f translate\n" % (dx, dy))
- if (x, y) != im.size:
- # EpsImagePlugin._save prints the image at (0,0,xsize,ysize)
- sx = x / im.size[0]
- sy = y / im.size[1]
- self.fp.write(b"%f %f scale\n" % (sx, sy))
- EpsImagePlugin._save(im, self.fp, None, 0)
- self.fp.write(b"\ngrestore\n")
-
-
-# --------------------------------------------------------------------
-# PostScript driver
-
-#
-# EDROFF.PS -- PostScript driver for Edroff 2
-#
-# History:
-# 94-01-25 fl: created (edroff 2.04)
-#
-# Copyright (c) Fredrik Lundh 1994.
-#
-
-
-EDROFF_PS = b"""\
-/S { show } bind def
-/P { moveto show } bind def
-/M { moveto } bind def
-/X { 0 rmoveto } bind def
-/Y { 0 exch rmoveto } bind def
-/E { findfont
- dup maxlength dict begin
- {
- 1 index /FID ne { def } { pop pop } ifelse
- } forall
- /Encoding exch def
- dup /FontName exch def
- currentdict end definefont pop
-} bind def
-/F { findfont exch scalefont dup setfont
- [ exch /setfont cvx ] cvx bind def
-} bind def
-"""
-
-#
-# VDI.PS -- PostScript driver for VDI meta commands
-#
-# History:
-# 94-01-25 fl: created (edroff 2.04)
-#
-# Copyright (c) Fredrik Lundh 1994.
-#
-
-VDI_PS = b"""\
-/Vm { moveto } bind def
-/Va { newpath arcn stroke } bind def
-/Vl { moveto lineto stroke } bind def
-/Vc { newpath 0 360 arc closepath } bind def
-/Vr { exch dup 0 rlineto
- exch dup 0 exch rlineto
- exch neg 0 rlineto
- 0 exch neg rlineto
- setgray fill } bind def
-/Tm matrix def
-/Ve { Tm currentmatrix pop
- translate scale newpath 0 0 .5 0 360 arc closepath
- Tm setmatrix
-} bind def
-/Vf { currentgray exch setgray fill setgray } bind def
-"""
-
-#
-# ERROR.PS -- Error handler
-#
-# History:
-# 89-11-21 fl: created (pslist 1.10)
-#
-
-ERROR_PS = b"""\
-/landscape false def
-/errorBUF 200 string def
-/errorNL { currentpoint 10 sub exch pop 72 exch moveto } def
-errordict begin /handleerror {
- initmatrix /Courier findfont 10 scalefont setfont
- newpath 72 720 moveto $error begin /newerror false def
- (PostScript Error) show errorNL errorNL
- (Error: ) show
- /errorname load errorBUF cvs show errorNL errorNL
- (Command: ) show
- /command load dup type /stringtype ne { errorBUF cvs } if show
- errorNL errorNL
- (VMstatus: ) show
- vmstatus errorBUF cvs show ( bytes available, ) show
- errorBUF cvs show ( bytes used at level ) show
- errorBUF cvs show errorNL errorNL
- (Operand stargck: ) show errorNL /ostargck load {
- dup type /stringtype ne { errorBUF cvs } if 72 0 rmoveto show errorNL
- } forall errorNL
- (Execution stargck: ) show errorNL /estargck load {
- dup type /stringtype ne { errorBUF cvs } if 72 0 rmoveto show errorNL
- } forall
- end showpage
-} def end
-"""
diff --git a/spaces/DaleChen/AutoGPT/autogpt/workspace.py b/spaces/DaleChen/AutoGPT/autogpt/workspace.py
deleted file mode 100644
index 6fb0e3113eb2c1338edf7f86c6e162fc27c61e50..0000000000000000000000000000000000000000
--- a/spaces/DaleChen/AutoGPT/autogpt/workspace.py
+++ /dev/null
@@ -1,47 +0,0 @@
-from __future__ import annotations
-
-import os
-from pathlib import Path
-
-from autogpt.config import Config
-
-CFG = Config()
-
-# Set a dedicated folder for file I/O
-WORKSPACE_PATH = Path(os.getcwd()) / "auto_gpt_workspace"
-
-# Create the directory if it doesn't exist
-if not os.path.exists(WORKSPACE_PATH):
- os.makedirs(WORKSPACE_PATH)
-
-
-def path_in_workspace(relative_path: str | Path) -> Path:
- """Get full path for item in workspace
-
- Parameters:
- relative_path (str | Path): Path to translate into the workspace
-
- Returns:
- Path: Absolute path for the given path in the workspace
- """
- return safe_path_join(WORKSPACE_PATH, relative_path)
-
-
-def safe_path_join(base: Path, *paths: str | Path) -> Path:
- """Join one or more path components, asserting the resulting path is within the workspace.
-
- Args:
- base (Path): The base path
- *paths (str): The paths to join to the base path
-
- Returns:
- Path: The joined path
- """
- joined_path = base.joinpath(*paths).resolve()
-
- if CFG.restrict_to_workspace and not joined_path.is_relative_to(base):
- raise ValueError(
- f"Attempted to access path '{joined_path}' outside of workspace '{base}'."
- )
-
- return joined_path
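The workspace guard comes down to one check: resolve the joined path and refuse it if it escapes the base directory. A standalone sketch of that check with plain pathlib, mirroring `safe_path_join` above but without the Config flag (requires Python 3.9+ for `is_relative_to`):

```python
from pathlib import Path

def safe_join(base: Path, *parts: str) -> Path:
    """Join parts onto base, rejecting paths that escape the base directory."""
    base = base.resolve()
    joined = base.joinpath(*parts).resolve()
    if not joined.is_relative_to(base):
        raise ValueError(f"Attempted to access '{joined}' outside of '{base}'.")
    return joined

workspace = Path("auto_gpt_workspace")
workspace.mkdir(exist_ok=True)
print(safe_join(workspace, "notes", "todo.txt"))
# safe_join(workspace, "..", "etc", "passwd")  raises ValueError
```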
diff --git a/spaces/Danil/AnyNameHack/indexer.py b/spaces/Danil/AnyNameHack/indexer.py
deleted file mode 100644
index ed643b491109c741df1e914e801c88b9fbb02b32..0000000000000000000000000000000000000000
--- a/spaces/Danil/AnyNameHack/indexer.py
+++ /dev/null
@@ -1,161 +0,0 @@
-import pickle
-import faiss
-import numpy as np
-import pandas as pd
-from utils import *
-from sentence_transformers import SentenceTransformer
-
-from tqdm import tqdm
-from typing import List
-
-
-class FAISS:
- def __init__(self, dimensions: int) -> None:
- self.dimensions = dimensions
- self.index = faiss.IndexFlatL2(dimensions)
- self.vectors = {}
- self.counter = 0
- self.model_name = 'paraphrase-multilingual-MiniLM-L12-v2'
- self.sentence_encoder = SentenceTransformer(self.model_name)
-
- def init_vectors(self, path: str) -> None:
- """
- Fills the vector store with pre-trained values
-
- Args:
- path: path to a pickle file
- """
- with open(path, 'rb') as pkl_file:
- self.vectors = pickle.load(pkl_file)
-
- self.counter = len(self.vectors)
-
- def init_index(self, path) -> None:
- """
- Fills the FAISS index with pre-trained values
-
- Args:
- path: path to a file in FAISS format
- """
- self.index = faiss.read_index(path)
-
- def save_vectors(self, path: str) -> None:
- """
- Saves the vector store
-
- Args:
- path: desired file path
- """
- with open(path, "wb") as fp:
- pickle.dump(self.index.vectors, fp)
-
- def save_index(self, path: str) -> None:
- """
- Saves the FAISS index
-
- Args:
- path: desired file path
- """
- faiss.write_index(self.index, path)
-
- def add(self, text: str, idx: int, pop: float, emb=None) -> None:
- """
- Adds a new vector to the search index
-
- Args:
- text: query text
- idx: index of the new vector
- pop: query popularity
- emb (optional): embedding of the query text (if not given, it is computed with self.sentence_encoder)
- """
- if emb is None:
- text_vec = self.sentence_encoder.encode([text])
- else:
- text_vec = emb
-
- self.index.add(text_vec)
- self.vectors[self.counter] = (idx, text, pop, text_vec)
-
- self.counter += 1
-
- def search(self, v: List, k: int = 10) -> List[List]:
- """
- Finds the nearest neighbours of vector v in the search index
-
- Args:
- v: the vector whose nearest neighbours are searched
- k: number of vectors to return
- Returns:
- list of the vectors closest to v, in the format [idx, text, popularity, similarity]
- """
- result = []
- distance, item_index = self.index.search(v, k)
- for dist, i in zip(distance[0], item_index[0]):
- if i == -1:
- break
- else:
- result.append((self.vectors[i][0], self.vectors[i][1], self.vectors[i][2], dist))
-
- return result
-
- def suggest_tags(self, query: str, top_n: int = 10, k: int = 30) -> List[str]:
- """
- Builds a list of tags for the user from a text query
-
- Args:
- query: the user's query
- top_n (optional): number of tags to return
- k (optional): number of index vectors among which the tags for the output are searched
- Returns:
- list of tags to show to the user
- """
- emb = self.sentence_encoder.encode([query.lower()])
- r = self.search(emb, k)
-
- result = []
- for i in r:
- if check(query, i[1]):
- result.append(i)
- # need to add a weight relative to the length
- result = sorted(result, key=lambda x: x[0] * 0.3 - x[-1], reverse=True)
- total_result = []
- for i in range(len(result)):
- flag = True
- for j in result[i + 1:]:
- flag &= easy_check(result[i][1], j[1])
- if flag:
- total_result.append(result[i][1])
-
- return total_result[:top_n]
-
- def fill(self, queries: List[str], popularities: pd.DataFrame) -> None:
- """
- Fills the search index with the queries, taking each query's popularity from the popularities table
-
- Args:
- queries: list of queries
- popularities: table containing the columns query and query_popularity
- """
- idx = -1
- for query in tqdm(queries):
- idx += 1
- if type(query) == str:
- emb = self.index.sentence_encoder.encode([query.lower()])
- bool_add = True
- search_sim = self.index.search(emb, 1)
-
- try:
- popularity = popularities[popularities["query"] == query]["query_popularity"].item()
- except ValueError:
- # If the popularity of the current query is unknown, use the value 5
- popularity = 5
-
- if len(search_sim) > 0:
- search_sim = search_sim[0]
- if search_sim[-1] < 0.15:
- # Не добавляем вектор, если он находится достаточно близко к уже присутствующему в индексе
- bool_add = False
- if bool_add:
- self.index.add(query, popularity, idx, emb)
- else:
- self.index.add(query, popularity, idx, emb)
\ No newline at end of file
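The wrapper above pairs a SentenceTransformer encoder with a flat L2 FAISS index. A minimal sketch of the underlying FAISS calls with random vectors instead of text embeddings, so it runs without downloading a model; the dimension and data are made up, with 384 being the MiniLM embedding size used by the class:

```python
import faiss
import numpy as np

dim = 384                                   # e.g. the MiniLM embedding size
index = faiss.IndexFlatL2(dim)

vectors = np.random.rand(1000, dim).astype("float32")
index.add(vectors)                          # rows get ids 0..999 in insertion order

query = np.random.rand(1, dim).astype("float32")
distances, ids = index.search(query, 5)     # 5 nearest neighbours per query row
print(ids[0], distances[0])
```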
diff --git a/spaces/Datasculptor/StyleGAN-NADA/e4e/utils/common.py b/spaces/Datasculptor/StyleGAN-NADA/e4e/utils/common.py
deleted file mode 100644
index b19e18ddcb78b06678fa18e4a76da44fc511b789..0000000000000000000000000000000000000000
--- a/spaces/Datasculptor/StyleGAN-NADA/e4e/utils/common.py
+++ /dev/null
@@ -1,55 +0,0 @@
-from PIL import Image
-import matplotlib.pyplot as plt
-
-
-# Log images
-def log_input_image(x, opts):
- return tensor2im(x)
-
-
-def tensor2im(var):
- # var shape: (3, H, W)
- var = var.cpu().detach().transpose(0, 2).transpose(0, 1).numpy()
- var = ((var + 1) / 2)
- var[var < 0] = 0
- var[var > 1] = 1
- var = var * 255
- return Image.fromarray(var.astype('uint8'))
-
-
-def vis_faces(log_hooks):
- display_count = len(log_hooks)
- fig = plt.figure(figsize=(8, 4 * display_count))
- gs = fig.add_gridspec(display_count, 3)
- for i in range(display_count):
- hooks_dict = log_hooks[i]
- fig.add_subplot(gs[i, 0])
- if 'diff_input' in hooks_dict:
- vis_faces_with_id(hooks_dict, fig, gs, i)
- else:
- vis_faces_no_id(hooks_dict, fig, gs, i)
- plt.tight_layout()
- return fig
-
-
-def vis_faces_with_id(hooks_dict, fig, gs, i):
- plt.imshow(hooks_dict['input_face'])
- plt.title('Input\nOut Sim={:.2f}'.format(float(hooks_dict['diff_input'])))
- fig.add_subplot(gs[i, 1])
- plt.imshow(hooks_dict['target_face'])
- plt.title('Target\nIn={:.2f}, Out={:.2f}'.format(float(hooks_dict['diff_views']),
- float(hooks_dict['diff_target'])))
- fig.add_subplot(gs[i, 2])
- plt.imshow(hooks_dict['output_face'])
- plt.title('Output\n Target Sim={:.2f}'.format(float(hooks_dict['diff_target'])))
-
-
-def vis_faces_no_id(hooks_dict, fig, gs, i):
- plt.imshow(hooks_dict['input_face'], cmap="gray")
- plt.title('Input')
- fig.add_subplot(gs[i, 1])
- plt.imshow(hooks_dict['target_face'])
- plt.title('Target')
- fig.add_subplot(gs[i, 2])
- plt.imshow(hooks_dict['output_face'])
- plt.title('Output')
diff --git a/spaces/Deci/DeciDiffusion-v1-0/header.html b/spaces/Deci/DeciDiffusion-v1-0/header.html
deleted file mode 100644
index fafbcb3146686659a84a80ead9d1c4b7998dd94b..0000000000000000000000000000000000000000
--- a/spaces/Deci/DeciDiffusion-v1-0/header.html
+++ /dev/null
@@ -1,17 +0,0 @@
-
\ No newline at end of file
diff --git a/spaces/Detomo/ai-comic-generation/src/components/ui/input.tsx b/spaces/Detomo/ai-comic-generation/src/components/ui/input.tsx
deleted file mode 100644
index 0757ddebdca3800bbd4a46fe1c2c17dff86c5e2f..0000000000000000000000000000000000000000
--- a/spaces/Detomo/ai-comic-generation/src/components/ui/input.tsx
+++ /dev/null
@@ -1,25 +0,0 @@
-import * as React from "react"
-
-import { cn } from "@/lib/utils"
-
-export interface InputProps
- extends React.InputHTMLAttributes<HTMLInputElement> {}
-
-const Input = React.forwardRef<HTMLInputElement, InputProps>(
- ({ className, type, ...props }, ref) => {
- return (
-
- )
- }
-)
-Input.displayName = "Input"
-
-export { Input }
diff --git a/spaces/ECCV2022/PSG/OpenPSG/configs/gpsnet/panoptic_fpn_r101_fpn_1x_predcls_psg.py b/spaces/ECCV2022/PSG/OpenPSG/configs/gpsnet/panoptic_fpn_r101_fpn_1x_predcls_psg.py
deleted file mode 100644
index 1be5fdcf74eeb3e941ef2829546cfb14338face8..0000000000000000000000000000000000000000
--- a/spaces/ECCV2022/PSG/OpenPSG/configs/gpsnet/panoptic_fpn_r101_fpn_1x_predcls_psg.py
+++ /dev/null
@@ -1,26 +0,0 @@
-_base_ = './panoptic_fpn_r50_fpn_1x_predcls_psg.py'
-
-model = dict(backbone=dict(
- depth=101,
- init_cfg=dict(type='Pretrained', checkpoint='torchvision://resnet101')))
-
-# Log config
-project_name = 'openpsg'
-expt_name = 'gpsnet_panoptic_fpn_r101_fpn_1x_predcls_psg'
-work_dir = f'./work_dirs/{expt_name}'
-
-log_config = dict(
- interval=50,
- hooks=[
- dict(type='TextLoggerHook'),
- dict(
- type='WandbLoggerHook',
- init_kwargs=dict(
- project=project_name,
- name=expt_name,
- ),
- ),
- ],
-)
-
-load_from = 'work_dirs/checkpoints/panoptic_fpn_r101_fpn_1x_coco_20210820_193950-ab9157a2.pth'
diff --git a/spaces/EPFL-VILAB/MultiMAE/mask2former/utils/misc.py b/spaces/EPFL-VILAB/MultiMAE/mask2former/utils/misc.py
deleted file mode 100644
index 874d9805b482f52bbffc1be620e36e0cffc07c46..0000000000000000000000000000000000000000
--- a/spaces/EPFL-VILAB/MultiMAE/mask2former/utils/misc.py
+++ /dev/null
@@ -1,111 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-# Modified by Bowen Cheng from https://github.com/facebookresearch/detr/blob/master/util/misc.py
-"""
-Misc functions, including distributed helpers.
-
-Mostly copy-paste from torchvision references.
-"""
-from typing import List, Optional
-
-import torch
-import torch.distributed as dist
-import torchvision
-from torch import Tensor
-
-
-def _max_by_axis(the_list):
- # type: (List[List[int]]) -> List[int]
- maxes = the_list[0]
- for sublist in the_list[1:]:
- for index, item in enumerate(sublist):
- maxes[index] = max(maxes[index], item)
- return maxes
-
-
-class NestedTensor(object):
- def __init__(self, tensors, mask: Optional[Tensor]):
- self.tensors = tensors
- self.mask = mask
-
- def to(self, device):
- # type: (Device) -> NestedTensor # noqa
- cast_tensor = self.tensors.to(device)
- mask = self.mask
- if mask is not None:
- assert mask is not None
- cast_mask = mask.to(device)
- else:
- cast_mask = None
- return NestedTensor(cast_tensor, cast_mask)
-
- def decompose(self):
- return self.tensors, self.mask
-
- def __repr__(self):
- return str(self.tensors)
-
-
-def nested_tensor_from_tensor_list(tensor_list: List[Tensor]):
- # TODO make this more general
- if tensor_list[0].ndim == 3:
- if torchvision._is_tracing():
- # nested_tensor_from_tensor_list() does not export well to ONNX
- # call _onnx_nested_tensor_from_tensor_list() instead
- return _onnx_nested_tensor_from_tensor_list(tensor_list)
-
- # TODO make it support different-sized images
- max_size = _max_by_axis([list(img.shape) for img in tensor_list])
- # min_size = tuple(min(s) for s in zip(*[img.shape for img in tensor_list]))
- batch_shape = [len(tensor_list)] + max_size
- b, c, h, w = batch_shape
- dtype = tensor_list[0].dtype
- device = tensor_list[0].device
- tensor = torch.zeros(batch_shape, dtype=dtype, device=device)
- mask = torch.ones((b, h, w), dtype=torch.bool, device=device)
- for img, pad_img, m in zip(tensor_list, tensor, mask):
- pad_img[: img.shape[0], : img.shape[1], : img.shape[2]].copy_(img)
- m[: img.shape[1], : img.shape[2]] = False
- else:
- raise ValueError("not supported")
- return NestedTensor(tensor, mask)
-
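-# Example (illustrative shapes): batching two images of different sizes.
-#   imgs = [torch.rand(3, 480, 640), torch.rand(3, 400, 500)]
-#   nt = nested_tensor_from_tensor_list(imgs)
-#   nt.tensors has shape [2, 3, 480, 640]; nt.mask has shape [2, 480, 640], with True marking padding.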
-
-# _onnx_nested_tensor_from_tensor_list() is an implementation of
-# nested_tensor_from_tensor_list() that is supported by ONNX tracing.
-@torch.jit.unused
-def _onnx_nested_tensor_from_tensor_list(tensor_list: List[Tensor]) -> NestedTensor:
- max_size = []
- for i in range(tensor_list[0].dim()):
- max_size_i = torch.max(
- torch.stack([img.shape[i] for img in tensor_list]).to(torch.float32)
- ).to(torch.int64)
- max_size.append(max_size_i)
- max_size = tuple(max_size)
-
- # work around for
- # pad_img[: img.shape[0], : img.shape[1], : img.shape[2]].copy_(img)
- # m[: img.shape[1], :img.shape[2]] = False
- # which is not yet supported in onnx
- padded_imgs = []
- padded_masks = []
- for img in tensor_list:
- padding = [(s1 - s2) for s1, s2 in zip(max_size, tuple(img.shape))]
- padded_img = torch.nn.functional.pad(img, (0, padding[2], 0, padding[1], 0, padding[0]))
- padded_imgs.append(padded_img)
-
- m = torch.zeros_like(img[0], dtype=torch.int, device=img.device)
- padded_mask = torch.nn.functional.pad(m, (0, padding[2], 0, padding[1]), "constant", 1)
- padded_masks.append(padded_mask.to(torch.bool))
-
- tensor = torch.stack(padded_imgs)
- mask = torch.stack(padded_masks)
-
- return NestedTensor(tensor, mask=mask)
-
-
-def is_dist_avail_and_initialized():
- if not dist.is_available():
- return False
- if not dist.is_initialized():
- return False
- return True
diff --git a/spaces/EXPOSUREEE/Ai-Image-Enhancer/scripts/pytorch2onnx.py b/spaces/EXPOSUREEE/Ai-Image-Enhancer/scripts/pytorch2onnx.py
deleted file mode 100644
index 09d99b2e0171265e70e7507ed8e882b616b449a1..0000000000000000000000000000000000000000
--- a/spaces/EXPOSUREEE/Ai-Image-Enhancer/scripts/pytorch2onnx.py
+++ /dev/null
@@ -1,36 +0,0 @@
-import argparse
-import torch
-import torch.onnx
-from basicsr.archs.rrdbnet_arch import RRDBNet
-
-
-def main(args):
- # An instance of the model
- model = RRDBNet(num_in_ch=3, num_out_ch=3, num_feat=64, num_block=23, num_grow_ch=32, scale=4)
- if args.params:
- keyname = 'params'
- else:
- keyname = 'params_ema'
- model.load_state_dict(torch.load(args.input)[keyname])
- # set the train mode to false since we will only run the forward pass.
- model.train(False)
- model.cpu().eval()
-
- # An example input
- x = torch.rand(1, 3, 64, 64)
- # Export the model
- with torch.no_grad():
- torch_out = torch.onnx._export(model, x, args.output, opset_version=11, export_params=True)
- print(torch_out.shape)
-
-
-if __name__ == '__main__':
- """Convert pytorch model to onnx models"""
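- # Example invocation with the defaults below (paths are illustrative):
- #   python scripts/pytorch2onnx.py --input experiments/pretrained_models/RealESRGAN_x4plus.pth \
- #       --output realesrgan-x4.onnx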
- parser = argparse.ArgumentParser()
- parser.add_argument(
- '--input', type=str, default='experiments/pretrained_models/RealESRGAN_x4plus.pth', help='Input model path')
- parser.add_argument('--output', type=str, default='realesrgan-x4.onnx', help='Output onnx path')
- parser.add_argument('--params', action='store_true', help='Use the params key instead of params_ema (default: params_ema)')
- args = parser.parse_args()
-
- main(args)
diff --git a/spaces/EinsteinCoder/sf-voicebot/README.md b/spaces/EinsteinCoder/sf-voicebot/README.md
deleted file mode 100644
index 92d2c1835bad28014b06dd84025016837ace0b91..0000000000000000000000000000000000000000
--- a/spaces/EinsteinCoder/sf-voicebot/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: SF VoiceBot
-emoji: 💻
-colorFrom: pink
-colorTo: green
-sdk: docker
-pinned: false
-license: other
-app_port: 5050
-duplicated_from: EinsteinCoder/fastapi-demo
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
\ No newline at end of file
diff --git a/spaces/ElainaFanBoy/IRONY-Real-ESRGAN/realesrgan/data/__init__.py b/spaces/ElainaFanBoy/IRONY-Real-ESRGAN/realesrgan/data/__init__.py
deleted file mode 100644
index a3f8fdd1aa47c12de9687c578094303eb7369246..0000000000000000000000000000000000000000
--- a/spaces/ElainaFanBoy/IRONY-Real-ESRGAN/realesrgan/data/__init__.py
+++ /dev/null
@@ -1,10 +0,0 @@
-import importlib
-from basicsr.utils import scandir
-from os import path as osp
-
-# automatically scan and import dataset modules for registry
-# scan all the files that end with '_dataset.py' under the data folder
-data_folder = osp.dirname(osp.abspath(__file__))
-dataset_filenames = [osp.splitext(osp.basename(v))[0] for v in scandir(data_folder) if v.endswith('_dataset.py')]
-# import all the dataset modules
-_dataset_modules = [importlib.import_module(f'realesrgan.data.{file_name}') for file_name in dataset_filenames]
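-# For example, a (hypothetical) file 'my_dataset.py' dropped into this folder would be imported
-# as 'realesrgan.data.my_dataset', so any registry decorators it contains run at import time.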
diff --git a/spaces/EronSamez/RVC_HFmeu/lib/uvr5_pack/lib_v5/layers_123812KB .py b/spaces/EronSamez/RVC_HFmeu/lib/uvr5_pack/lib_v5/layers_123812KB .py
deleted file mode 100644
index b82f06bb4993cd63f076e68d7e24185269b1bc42..0000000000000000000000000000000000000000
--- a/spaces/EronSamez/RVC_HFmeu/lib/uvr5_pack/lib_v5/layers_123812KB .py
+++ /dev/null
@@ -1,118 +0,0 @@
-import torch
-from torch import nn
-import torch.nn.functional as F
-
-from . import spec_utils
-
-
-class Conv2DBNActiv(nn.Module):
- def __init__(self, nin, nout, ksize=3, stride=1, pad=1, dilation=1, activ=nn.ReLU):
- super(Conv2DBNActiv, self).__init__()
- self.conv = nn.Sequential(
- nn.Conv2d(
- nin,
- nout,
- kernel_size=ksize,
- stride=stride,
- padding=pad,
- dilation=dilation,
- bias=False,
- ),
- nn.BatchNorm2d(nout),
- activ(),
- )
-
- def __call__(self, x):
- return self.conv(x)
-
-
-class SeperableConv2DBNActiv(nn.Module):
- def __init__(self, nin, nout, ksize=3, stride=1, pad=1, dilation=1, activ=nn.ReLU):
- super(SeperableConv2DBNActiv, self).__init__()
- self.conv = nn.Sequential(
- nn.Conv2d(
- nin,
- nin,
- kernel_size=ksize,
- stride=stride,
- padding=pad,
- dilation=dilation,
- groups=nin,
- bias=False,
- ),
- nn.Conv2d(nin, nout, kernel_size=1, bias=False),
- nn.BatchNorm2d(nout),
- activ(),
- )
-
- def __call__(self, x):
- return self.conv(x)
-
-
-class Encoder(nn.Module):
- def __init__(self, nin, nout, ksize=3, stride=1, pad=1, activ=nn.LeakyReLU):
- super(Encoder, self).__init__()
- self.conv1 = Conv2DBNActiv(nin, nout, ksize, 1, pad, activ=activ)
- self.conv2 = Conv2DBNActiv(nout, nout, ksize, stride, pad, activ=activ)
-
- def __call__(self, x):
- skip = self.conv1(x)
- h = self.conv2(skip)
-
- return h, skip
-
-
-class Decoder(nn.Module):
- def __init__(
- self, nin, nout, ksize=3, stride=1, pad=1, activ=nn.ReLU, dropout=False
- ):
- super(Decoder, self).__init__()
- self.conv = Conv2DBNActiv(nin, nout, ksize, 1, pad, activ=activ)
- self.dropout = nn.Dropout2d(0.1) if dropout else None
-
- def __call__(self, x, skip=None):
- x = F.interpolate(x, scale_factor=2, mode="bilinear", align_corners=True)
- if skip is not None:
- skip = spec_utils.crop_center(skip, x)
- x = torch.cat([x, skip], dim=1)
- h = self.conv(x)
-
- if self.dropout is not None:
- h = self.dropout(h)
-
- return h
-
-
-class ASPPModule(nn.Module):
- def __init__(self, nin, nout, dilations=(4, 8, 16), activ=nn.ReLU):
- super(ASPPModule, self).__init__()
- self.conv1 = nn.Sequential(
- nn.AdaptiveAvgPool2d((1, None)),
- Conv2DBNActiv(nin, nin, 1, 1, 0, activ=activ),
- )
- self.conv2 = Conv2DBNActiv(nin, nin, 1, 1, 0, activ=activ)
- self.conv3 = SeperableConv2DBNActiv(
- nin, nin, 3, 1, dilations[0], dilations[0], activ=activ
- )
- self.conv4 = SeperableConv2DBNActiv(
- nin, nin, 3, 1, dilations[1], dilations[1], activ=activ
- )
- self.conv5 = SeperableConv2DBNActiv(
- nin, nin, 3, 1, dilations[2], dilations[2], activ=activ
- )
- self.bottleneck = nn.Sequential(
- Conv2DBNActiv(nin * 5, nout, 1, 1, 0, activ=activ), nn.Dropout2d(0.1)
- )
-
- def forward(self, x):
- _, _, h, w = x.size()
- feat1 = F.interpolate(
- self.conv1(x), size=(h, w), mode="bilinear", align_corners=True
- )
- feat2 = self.conv2(x)
- feat3 = self.conv3(x)
- feat4 = self.conv4(x)
- feat5 = self.conv5(x)
- out = torch.cat((feat1, feat2, feat3, feat4, feat5), dim=1)
- bottle = self.bottleneck(out)
- return bottle
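-
-# Shape sketch (illustrative): for an input of shape [B, nin, H, W], each ASPP branch keeps
-# [B, nin, H, W]; concatenation gives [B, 5 * nin, H, W] and the bottleneck projects it back
-# to [B, nout, H, W].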
diff --git a/spaces/FridaZuley/RVC_HFKawaii/julius/fftconv.py b/spaces/FridaZuley/RVC_HFKawaii/julius/fftconv.py
deleted file mode 100644
index 1920e5369bb49b76eeea1832b7be2a0ddbc8db6b..0000000000000000000000000000000000000000
--- a/spaces/FridaZuley/RVC_HFKawaii/julius/fftconv.py
+++ /dev/null
@@ -1,183 +0,0 @@
-# File under the MIT license, see https://github.com/adefossez/julius/LICENSE for details.
-# Author: adefossez, 2020
-
-"""
-Implementation of a FFT based 1D convolution in PyTorch.
-While cuDNN uses FFT for small kernel sizes, it does not for long ones, e.g. 512.
-This module implements efficient FFT-based convolutions for such kernels. A typical
-application is evaluating FIR filters with a long receptive field, typically
-with a stride of 1.
-"""
-from typing import Optional
-
-import torch
-try:
- import torch.fft as new_fft
-except ImportError:
- new_fft = None # type: ignore
-from torch.nn import functional as F
-
-from .core import pad_to, unfold
-from .utils import simple_repr
-
-
-# This is quite verbose, but sadly needed to make TorchScript happy.
-def _new_rfft(x: torch.Tensor):
- z = new_fft.rfft(x, dim=-1)
- return torch.view_as_real(z)
-
-
-def _old_rfft(x: torch.Tensor):
- return torch.rfft(x, 1) # type: ignore
-
-
-def _old_irfft(x: torch.Tensor, length: int):
- result = torch.irfft(x, 1, signal_sizes=(length,)) # type: ignore
- return result
-
-
-def _new_irfft(x: torch.Tensor, length: int):
- x = torch.view_as_complex(x)
- return new_fft.irfft(x, length, dim=-1)
-
-
-if new_fft is None:
- _rfft = _old_rfft
- _irfft = _old_irfft
-else:
- _rfft = _new_rfft
- _irfft = _new_irfft
-
-
-def _compl_mul_conjugate(a: torch.Tensor, b: torch.Tensor):
- """
- Given a and b two tensors of dimension 4
- with the last dimension being the real and imaginary part,
- returns a multiplied by the conjugate of b, the multiplication
- being with respect to the second dimension.
-
- """
- # PyTorch 1.7 supports complex number, but not for all operations.
- # Once the support is widespread, this can likely go away.
-
- op = "bcft,dct->bdft"
- return torch.stack([
- torch.einsum(op, a[..., 0], b[..., 0]) + torch.einsum(op, a[..., 1], b[..., 1]),
- torch.einsum(op, a[..., 1], b[..., 0]) - torch.einsum(op, a[..., 0], b[..., 1])
- ],
- dim=-1)
-
-
-def fft_conv1d(
- input: torch.Tensor, weight: torch.Tensor,
- bias: Optional[torch.Tensor] = None, stride: int = 1, padding: int = 0,
- block_ratio: float = 5):
- """
- Same as `torch.nn.functional.conv1d` but using FFT for the convolution.
- Please check PyTorch documentation for more information.
-
- Args:
- input (Tensor): input signal of shape `[B, C, T]`.
- weight (Tensor): weight of the convolution `[D, C, K]` with `D` the number
- of output channels.
- bias (Tensor or None): if not None, bias term for the convolution.
- stride (int): stride of convolution.
- padding (int): padding to apply to the input.
- block_ratio (float): can be tuned for speed. The input is split into chunks
- with a size of `int(block_ratio * kernel_size)`.
-
- Shape:
-
- - Inputs: `input` is `[B, C, T]`, `weight` is `[D, C, K]` and bias is `[D]`.
- - Output: `(*, T)`
-
-
- ..note::
- This function is faster than `torch.nn.functional.conv1d` only in specific cases.
- Typically, the kernel size should be of the order of 256 to see any real gain,
- for a stride of 1.
-
- ..Warning::
- Dilation and groups are not supported at the moment. This function might use
- more memory than the default Conv1d implementation.
- """
- input = F.pad(input, (padding, padding))
- batch, channels, length = input.shape
- out_channels, _, kernel_size = weight.shape
-
- if length < kernel_size:
- raise RuntimeError(f"Input should be at least as large as the kernel size {kernel_size}, "
- f"but it is only {length} samples long.")
- if block_ratio < 1:
- raise RuntimeError("Block ratio must be greater than 1.")
-
- # We are going to process the input block by block, as for some reason it is faster
- # and less memory intensive (the likely culprit is `torch.einsum`).
- block_size: int = min(int(kernel_size * block_ratio), length)
- fold_stride = block_size - kernel_size + 1
- weight = pad_to(weight, block_size)
- weight_z = _rfft(weight)
-
- # We pad the input and cut it into overlapping frames, on which the frequency-domain product is applied.
- frames = unfold(input, block_size, fold_stride)
-
- frames_z = _rfft(frames)
- out_z = _compl_mul_conjugate(frames_z, weight_z)
- out = _irfft(out_z, block_size)
- # The last bit is invalid, because FFT will do a circular convolution.
- out = out[..., :-kernel_size + 1]
- out = out.reshape(batch, out_channels, -1)
- out = out[..., ::stride]
- target_length = (length - kernel_size) // stride + 1
- out = out[..., :target_length]
- if bias is not None:
- out += bias[:, None]
- return out
-
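-# Example (shapes only, mirroring the FFTConv1d doctest below):
-#   x = torch.randn(4, 12, 1024); w = torch.randn(24, 12, 128)
-#   fft_conv1d(x, w, stride=4).shape -> [4, 24, 225]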
-
-class FFTConv1d(torch.nn.Module):
- """
- Same as `torch.nn.Conv1d` but based on `fft_conv1d`.
- Please check PyTorch documentation for more information.
-
- Args:
- in_channels (int): number of input channels.
- out_channels (int): number of output channels.
- kernel_size (int): kernel size of convolution.
- stride (int): stride of convolution.
- padding (int): padding to apply to the input.
- bias (bool): if True, use a bias term.
-
- ..note::
- This module is faster than `torch.nn.Conv1d` only in specific cases.
- Typically, `kernel_size` should be of the order of 256 to see any real gain,
- for a stride of 1.
-
- ..warning::
- Dilation and groups are not supported at the moment. This module might use
- more memory than the default Conv1d implementation.
-
- >>> fftconv = FFTConv1d(12, 24, 128, 4)
- >>> x = torch.randn(4, 12, 1024)
- >>> print(list(fftconv(x).shape))
- [4, 24, 225]
- """
- def __init__(self, in_channels: int, out_channels: int, kernel_size: int,
- stride: int = 1, padding: int = 0, bias: bool = True):
- super().__init__()
- self.in_channels = in_channels
- self.out_channels = out_channels
- self.kernel_size = kernel_size
- self.stride = stride
- self.padding = padding
-
- conv = torch.nn.Conv1d(in_channels, out_channels, kernel_size, bias=bias)
- self.weight = conv.weight
- self.bias = conv.bias
-
- def forward(self, input: torch.Tensor):
- return fft_conv1d(
- input, self.weight, self.bias, self.stride, self.padding)
-
- def __repr__(self):
- return simple_repr(self, overrides={"bias": self.bias is not None})
diff --git a/spaces/GAIR/Factool/factool/math/pipeline.py b/spaces/GAIR/Factool/factool/math/pipeline.py
deleted file mode 100644
index afa860f172b87d5145dfa2aa1b388a320291b71f..0000000000000000000000000000000000000000
--- a/spaces/GAIR/Factool/factool/math/pipeline.py
+++ /dev/null
@@ -1,192 +0,0 @@
-import json
-import math
-import os
-from typing import List, Dict
-import yaml
-import pdb
-
-from factool.math.tool import python_executor
-from factool.utils.base.pipeline import pipeline
-
-class math_pipeline(pipeline):
- def __init__(self, foundation_model):
- super().__init__('math', foundation_model)
-
- self.tool = python_executor()
-
- with open(os.path.join(self.prompts_path, "claim_extraction.yaml"), 'r') as file:
- data = yaml.load(file, Loader=yaml.FullLoader)
- self.claim_prompt = data['math']
-
- with open(os.path.join(self.prompts_path, 'query_generation.yaml'), 'r') as file:
- data = yaml.load(file, Loader=yaml.FullLoader)
- self.query_prompt = data['math']
-
- def _verification(self, exec_results):
- classification_results = [True for _ in range(len(exec_results))]
- for i in range(len(exec_results)):
- if exec_results[i] is not None and 'False' in exec_results[i]:
- classification_results[i] = False
-
- return classification_results
-
- async def _claim_extraction(self, samples):
- messages_list = [
- [
- {"role": "system", "content": self.claim_prompt['system']},
- {"role": "user", "content": self.claim_prompt['user'].format(input_question=sample['prompt'], input_solution=sample['response'])},
- ]
- for sample in samples
- ]
- return await self.chat.async_run(messages_list, List)
-
- async def _query_generation(self, claims):
- messages_list = [
- [
- {"role": "system", "content": self.query_prompt['system']},
- {"role": "user", "content": self.query_prompt['user'].format(math_calculation=claim['math_calculation'], calculated_answer=claim['calculated_answer'])},
- ]
- for claim in claims
- ]
- return await self.chat.async_run(messages_list, Dict)
-
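- # Each claim is expected to carry "math_calculation" and "calculated_answer" fields (see the
- # prompt templates), and each generated query a "python_snippet" that is executed below.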
- async def run_with_tool_live(self, samples):
- claims_in_responses = await self._claim_extraction(samples)
- queries_in_responses = []
- exec_results_in_responses = []
- verifications_in_responses = []
- for claims_in_response in claims_in_responses:
- queries = await self._query_generation(claims_in_response)
- queries_in_responses.append(queries)
- exec_results = []
- for query in queries:
- try:
- exec_results.append(self.tool.run(query['python_snippet']))
- except:
- exec_results.append('None')
- exec_results_in_responses.append(exec_results)
- verifications = self._verification(exec_results)
- verifications_in_responses.append(verifications)
-
- return claims_in_responses, queries_in_responses, exec_results_in_responses, verifications_in_responses
-
- async def run_with_tool_live_without_claim_extraction(self, claims):
- queries = await self._query_generation(claims)
-
- exec_results = []
- for query in queries:
- try:
- exec_results.append(self.tool.run(query['python_snippet']))
- except:
- exec_results.append(None)
- classification_results = self._verification(exec_results)
- return queries, exec_results, classification_results
-
- async def run_with_tool_api_call(self, prompts, responses):
- batch_size = 5
- num_batches = math.ceil(len(prompts) / batch_size)
-
- self.sample_list = [{"prompt": prompt, "response": response, "category": 'math'} for prompt, response in zip(prompts, responses)]
-
- for i in range(num_batches):
- print(i)
- batch_start = i * batch_size
- batch_end = min((i + 1) * batch_size, len(responses))
-
- claims_in_responses, queries_in_responses, exec_results_in_response, verifications_in_responses = await self.run_with_tool_live(self.sample_list[batch_start: batch_end])
-
- for j, (claims_in_response, queries_in_response, exec_results_in_response, verifications_in_response) in enumerate(zip(claims_in_responses, queries_in_responses, exec_results_in_response, verifications_in_responses)):
- index = batch_start + j
-
- self.sample_list[index].update({
- 'claims': claims_in_response,
- 'queries': queries_in_response,
- 'execution_results': exec_results_in_response,
- 'claim_level_factuality': verifications_in_response,
- 'response_level_factuality': all([verification if verification != None else True for verification in verifications_in_response])
- })
-
- return self.sample_list
-
- async def run_with_tool_dataset(self, annotated_dataset_path: str, with_tool_classified_dataset_path: str, rerun: bool = False, rerun_indices: list = []):
- data_path = annotated_dataset_path if not rerun else with_tool_classified_dataset_path
- with open(data_path, 'r') as f:
- data = [json.loads(line) for line in f]
- self.sample_list = data if rerun else [claim for sample in data for claim in sample['claims']]
- rerun_elements = self.sample_list if not rerun else [self.sample_list[i] for i in rerun_indices]
-
- batch_size = 10
- num_batches = math.ceil(len(rerun_elements) / batch_size)
-
- for i in range(num_batches):
- print(f'batch {i + 1}/{num_batches}')
- batch_start = i * batch_size
- batch_end = min((i + 1) * batch_size, len(rerun_elements))
- batch = rerun_elements[batch_start:batch_end]
-
- queries, exec_results, classification_results = await self.run_with_tool_live_without_claim_extraction(batch)
-
- for j, (query, exec_result, classification_result) in enumerate(zip(queries, exec_results, classification_results)):
- index = batch_start + j if not rerun else rerun_indices[batch_start + j]
- self.sample_list[index].update({
- 'query': query,
- 'exec_result': exec_result,
- 'with_tool_classification': classification_result,
- })
-
- # save everything after each batch to prevent data loss
- with open(with_tool_classified_dataset_path, 'w') as f:
- for item in self.sample_list:
- try:
- json_str = json.dumps(item)
- except:
- continue
- f.write(json_str + '\n')
-
- async def run_self_check_live(self, fewshot, batch):
- user_prompt_key = 'user_3_shot_CoT' if fewshot else 'user_zero_shot_CoT'
- messages_list = [
- [
- {"role": "system", "content": self.self_check_prompt['system']},
- {"role": "user", "content": self.self_check_prompt[user_prompt_key].format(input_calculation=response['math_calculation'], input_calculated_answer=response['calculated_answer'])},
- ]
- for response in batch
- ]
- return await self.chat.async_run(messages_list, Dict)
-
- async def run_self_check_dataset(self, annotated_dataset_path: str, self_check_classified_dataset_path: str, fewshot: bool = False, rerun: bool = False, rerun_indices: list = []):
- data_path = annotated_dataset_path if not rerun else self_check_classified_dataset_path
- with open(data_path, 'r') as f:
- data = [json.loads(line) for line in f]
- self.sample_list = data if rerun else [claim for sample in data for claim in sample['claims']]
- rerun_elements = self.sample_list if not rerun else [self.sample_list[i] for i in rerun_indices]
-
- batch_size = 10
- num_batches = math.ceil(len(rerun_elements) / batch_size)
-
- for i in range(num_batches):
- print(i)
- batch_start = i * batch_size
- batch_end = min((i + 1) * batch_size, len(rerun_elements))
- batch = rerun_elements[batch_start:batch_end]
-
- responses = await self.run_self_check_live(fewshot, batch)
- for j, response in enumerate(responses):
- index = batch_start + j if not rerun else rerun_indices[batch_start + j]
- if response is None:
- self.sample_list[index].update({
- 'self_check_classification': 'None',
- 'self_check_reasoning': 'None'
- })
- else:
- self.sample_list[index].update({
- 'self_check_classification': response.get('factuality', 'None'),
- 'self_check_reasoning': response.get('reasoning', 'None')
- })
-
- # save everything after each batch to prevent data loss
- with open(self_check_classified_dataset_path, 'w') as f:
- for item in self.sample_list:
- json_str = json.dumps(item)
- f.write(json_str + '\n')
\ No newline at end of file
diff --git a/spaces/Gradio-Blocks/EmojiGAN/torch_utils/ops/conv2d_resample.py b/spaces/Gradio-Blocks/EmojiGAN/torch_utils/ops/conv2d_resample.py
deleted file mode 100644
index cd4750744c83354bab78704d4ef51ad1070fcc4a..0000000000000000000000000000000000000000
--- a/spaces/Gradio-Blocks/EmojiGAN/torch_utils/ops/conv2d_resample.py
+++ /dev/null
@@ -1,156 +0,0 @@
-# Copyright (c) 2021, NVIDIA CORPORATION. All rights reserved.
-#
-# NVIDIA CORPORATION and its licensors retain all intellectual property
-# and proprietary rights in and to this software, related documentation
-# and any modifications thereto. Any use, reproduction, disclosure or
-# distribution of this software and related documentation without an express
-# license agreement from NVIDIA CORPORATION is strictly prohibited.
-
-"""2D convolution with optional up/downsampling."""
-
-import torch
-
-from .. import misc
-from . import conv2d_gradfix
-from . import upfirdn2d
-from .upfirdn2d import _parse_padding
-from .upfirdn2d import _get_filter_size
-
-#----------------------------------------------------------------------------
-
-def _get_weight_shape(w):
- with misc.suppress_tracer_warnings(): # this value will be treated as a constant
- shape = [int(sz) for sz in w.shape]
- misc.assert_shape(w, shape)
- return shape
-
-#----------------------------------------------------------------------------
-
-def _conv2d_wrapper(x, w, stride=1, padding=0, groups=1, transpose=False, flip_weight=True):
- """Wrapper for the underlying `conv2d()` and `conv_transpose2d()` implementations.
- """
- out_channels, in_channels_per_group, kh, kw = _get_weight_shape(w)
-
- # Flip weight if requested.
- if not flip_weight: # conv2d() actually performs correlation (flip_weight=True) not convolution (flip_weight=False).
- w = w.flip([2, 3])
-
- # Workaround performance pitfall in cuDNN 8.0.5, triggered when using
- # 1x1 kernel + memory_format=channels_last + less than 64 channels.
- if kw == 1 and kh == 1 and stride == 1 and padding in [0, [0, 0], (0, 0)] and not transpose:
- if x.stride()[1] == 1 and min(out_channels, in_channels_per_group) < 64:
- if out_channels <= 4 and groups == 1:
- in_shape = x.shape
- x = w.squeeze(3).squeeze(2) @ x.reshape([in_shape[0], in_channels_per_group, -1])
- x = x.reshape([in_shape[0], out_channels, in_shape[2], in_shape[3]])
- else:
- x = x.to(memory_format=torch.contiguous_format)
- w = w.to(memory_format=torch.contiguous_format)
- x = conv2d_gradfix.conv2d(x, w, groups=groups)
- return x.to(memory_format=torch.channels_last)
-
- # Otherwise => execute using conv2d_gradfix.
- op = conv2d_gradfix.conv_transpose2d if transpose else conv2d_gradfix.conv2d
- return op(x, w, stride=stride, padding=padding, groups=groups)
-
-#----------------------------------------------------------------------------
-
-@misc.profiled_function
-def conv2d_resample(x, w, f=None, up=1, down=1, padding=0, groups=1, flip_weight=True, flip_filter=False):
- r"""2D convolution with optional up/downsampling.
-
- Padding is performed only once at the beginning, not between the operations.
-
- Args:
- x: Input tensor of shape
- `[batch_size, in_channels, in_height, in_width]`.
- w: Weight tensor of shape
- `[out_channels, in_channels//groups, kernel_height, kernel_width]`.
- f: Low-pass filter for up/downsampling. Must be prepared beforehand by
- calling upfirdn2d.setup_filter(). None = identity (default).
- up: Integer upsampling factor (default: 1).
- down: Integer downsampling factor (default: 1).
- padding: Padding with respect to the upsampled image. Can be a single number
- or a list/tuple `[x, y]` or `[x_before, x_after, y_before, y_after]`
- (default: 0).
- groups: Split input channels into N groups (default: 1).
- flip_weight: False = convolution, True = correlation (default: True).
- flip_filter: False = convolution, True = correlation (default: False).
-
- Returns:
- Tensor of the shape `[batch_size, num_channels, out_height, out_width]`.
- """
- # Validate arguments.
- assert isinstance(x, torch.Tensor) and (x.ndim == 4)
- assert isinstance(w, torch.Tensor) and (w.ndim == 4) and (w.dtype == x.dtype)
- assert f is None or (isinstance(f, torch.Tensor) and f.ndim in [1, 2] and f.dtype == torch.float32)
- assert isinstance(up, int) and (up >= 1)
- assert isinstance(down, int) and (down >= 1)
- assert isinstance(groups, int) and (groups >= 1)
- out_channels, in_channels_per_group, kh, kw = _get_weight_shape(w)
- fw, fh = _get_filter_size(f)
- px0, px1, py0, py1 = _parse_padding(padding)
-
- # Adjust padding to account for up/downsampling.
- if up > 1:
- px0 += (fw + up - 1) // 2
- px1 += (fw - up) // 2
- py0 += (fh + up - 1) // 2
- py1 += (fh - up) // 2
- if down > 1:
- px0 += (fw - down + 1) // 2
- px1 += (fw - down) // 2
- py0 += (fh - down + 1) // 2
- py1 += (fh - down) // 2
-
- # Fast path: 1x1 convolution with downsampling only => downsample first, then convolve.
- if kw == 1 and kh == 1 and (down > 1 and up == 1):
- x = upfirdn2d.upfirdn2d(x=x, f=f, down=down, padding=[px0,px1,py0,py1], flip_filter=flip_filter)
- x = _conv2d_wrapper(x=x, w=w, groups=groups, flip_weight=flip_weight)
- return x
-
- # Fast path: 1x1 convolution with upsampling only => convolve first, then upsample.
- if kw == 1 and kh == 1 and (up > 1 and down == 1):
- x = _conv2d_wrapper(x=x, w=w, groups=groups, flip_weight=flip_weight)
- x = upfirdn2d.upfirdn2d(x=x, f=f, up=up, padding=[px0,px1,py0,py1], gain=up**2, flip_filter=flip_filter)
- return x
-
- # Fast path: downsampling only => use strided convolution.
- if down > 1 and up == 1:
- x = upfirdn2d.upfirdn2d(x=x, f=f, padding=[px0,px1,py0,py1], flip_filter=flip_filter)
- x = _conv2d_wrapper(x=x, w=w, stride=down, groups=groups, flip_weight=flip_weight)
- return x
-
- # Fast path: upsampling with optional downsampling => use transpose strided convolution.
- if up > 1:
- if groups == 1:
- w = w.transpose(0, 1)
- else:
- w = w.reshape(groups, out_channels // groups, in_channels_per_group, kh, kw)
- w = w.transpose(1, 2)
- w = w.reshape(groups * in_channels_per_group, out_channels // groups, kh, kw)
- px0 -= kw - 1
- px1 -= kw - up
- py0 -= kh - 1
- py1 -= kh - up
- pxt = max(min(-px0, -px1), 0)
- pyt = max(min(-py0, -py1), 0)
- x = _conv2d_wrapper(x=x, w=w, stride=up, padding=[pyt,pxt], groups=groups, transpose=True, flip_weight=(not flip_weight))
- x = upfirdn2d.upfirdn2d(x=x, f=f, padding=[px0+pxt,px1+pxt,py0+pyt,py1+pyt], gain=up**2, flip_filter=flip_filter)
- if down > 1:
- x = upfirdn2d.upfirdn2d(x=x, f=f, down=down, flip_filter=flip_filter)
- return x
-
- # Fast path: no up/downsampling, padding supported by the underlying implementation => use plain conv2d.
- if up == 1 and down == 1:
- if px0 == px1 and py0 == py1 and px0 >= 0 and py0 >= 0:
- return _conv2d_wrapper(x=x, w=w, padding=[py0,px0], groups=groups, flip_weight=flip_weight)
-
- # Fallback: Generic reference implementation.
- x = upfirdn2d.upfirdn2d(x=x, f=(f if up > 1 else None), up=up, padding=[px0,px1,py0,py1], gain=up**2, flip_filter=flip_filter)
- x = _conv2d_wrapper(x=x, w=w, groups=groups, flip_weight=flip_weight)
- if down > 1:
- x = upfirdn2d.upfirdn2d(x=x, f=f, down=down, flip_filter=flip_filter)
- return x
-
-#----------------------------------------------------------------------------
diff --git a/spaces/Gradio-Blocks/StyleGAN-NADA/e4e/models/encoders/helpers.py b/spaces/Gradio-Blocks/StyleGAN-NADA/e4e/models/encoders/helpers.py
deleted file mode 100644
index c4a58b34ea5ca6912fe53c63dede0a8696f5c024..0000000000000000000000000000000000000000
--- a/spaces/Gradio-Blocks/StyleGAN-NADA/e4e/models/encoders/helpers.py
+++ /dev/null
@@ -1,140 +0,0 @@
-from collections import namedtuple
-import torch
-import torch.nn.functional as F
-from torch.nn import Conv2d, BatchNorm2d, PReLU, ReLU, Sigmoid, MaxPool2d, AdaptiveAvgPool2d, Sequential, Module
-
-"""
-ArcFace implementation from [TreB1eN](https://github.com/TreB1eN/InsightFace_Pytorch)
-"""
-
-
-class Flatten(Module):
- def forward(self, input):
- return input.view(input.size(0), -1)
-
-
-def l2_norm(input, axis=1):
- norm = torch.norm(input, 2, axis, True)
- output = torch.div(input, norm)
- return output
-
-
-class Bottleneck(namedtuple('Block', ['in_channel', 'depth', 'stride'])):
- """ A named tuple describing a ResNet block. """
-
-
-def get_block(in_channel, depth, num_units, stride=2):
- return [Bottleneck(in_channel, depth, stride)] + [Bottleneck(depth, depth, 1) for i in range(num_units - 1)]
-
-
-def get_blocks(num_layers):
- if num_layers == 50:
- blocks = [
- get_block(in_channel=64, depth=64, num_units=3),
- get_block(in_channel=64, depth=128, num_units=4),
- get_block(in_channel=128, depth=256, num_units=14),
- get_block(in_channel=256, depth=512, num_units=3)
- ]
- elif num_layers == 100:
- blocks = [
- get_block(in_channel=64, depth=64, num_units=3),
- get_block(in_channel=64, depth=128, num_units=13),
- get_block(in_channel=128, depth=256, num_units=30),
- get_block(in_channel=256, depth=512, num_units=3)
- ]
- elif num_layers == 152:
- blocks = [
- get_block(in_channel=64, depth=64, num_units=3),
- get_block(in_channel=64, depth=128, num_units=8),
- get_block(in_channel=128, depth=256, num_units=36),
- get_block(in_channel=256, depth=512, num_units=3)
- ]
- else:
- raise ValueError("Invalid number of layers: {}. Must be one of [50, 100, 152]".format(num_layers))
- return blocks
-
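-# Example (illustrative): get_blocks(50) returns the 50-layer IR layout used by the encoders,
-# i.e. 3 + 4 + 14 + 3 bottleneck units across the four stages.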
-
-class SEModule(Module):
- def __init__(self, channels, reduction):
- super(SEModule, self).__init__()
- self.avg_pool = AdaptiveAvgPool2d(1)
- self.fc1 = Conv2d(channels, channels // reduction, kernel_size=1, padding=0, bias=False)
- self.relu = ReLU(inplace=True)
- self.fc2 = Conv2d(channels // reduction, channels, kernel_size=1, padding=0, bias=False)
- self.sigmoid = Sigmoid()
-
- def forward(self, x):
- module_input = x
- x = self.avg_pool(x)
- x = self.fc1(x)
- x = self.relu(x)
- x = self.fc2(x)
- x = self.sigmoid(x)
- return module_input * x
-
-
-class bottleneck_IR(Module):
- def __init__(self, in_channel, depth, stride):
- super(bottleneck_IR, self).__init__()
- if in_channel == depth:
- self.shortcut_layer = MaxPool2d(1, stride)
- else:
- self.shortcut_layer = Sequential(
- Conv2d(in_channel, depth, (1, 1), stride, bias=False),
- BatchNorm2d(depth)
- )
- self.res_layer = Sequential(
- BatchNorm2d(in_channel),
- Conv2d(in_channel, depth, (3, 3), (1, 1), 1, bias=False), PReLU(depth),
- Conv2d(depth, depth, (3, 3), stride, 1, bias=False), BatchNorm2d(depth)
- )
-
- def forward(self, x):
- shortcut = self.shortcut_layer(x)
- res = self.res_layer(x)
- return res + shortcut
-
-
-class bottleneck_IR_SE(Module):
- def __init__(self, in_channel, depth, stride):
- super(bottleneck_IR_SE, self).__init__()
- if in_channel == depth:
- self.shortcut_layer = MaxPool2d(1, stride)
- else:
- self.shortcut_layer = Sequential(
- Conv2d(in_channel, depth, (1, 1), stride, bias=False),
- BatchNorm2d(depth)
- )
- self.res_layer = Sequential(
- BatchNorm2d(in_channel),
- Conv2d(in_channel, depth, (3, 3), (1, 1), 1, bias=False),
- PReLU(depth),
- Conv2d(depth, depth, (3, 3), stride, 1, bias=False),
- BatchNorm2d(depth),
- SEModule(depth, 16)
- )
-
- def forward(self, x):
- shortcut = self.shortcut_layer(x)
- res = self.res_layer(x)
- return res + shortcut
-
-
-def _upsample_add(x, y):
- """Upsample and add two feature maps.
- Args:
- x: (Variable) top feature map to be upsampled.
- y: (Variable) lateral feature map.
- Returns:
- (Variable) added feature map.
- Note that in PyTorch, when the input size is odd, the feature map upsampled
- with `F.upsample(..., scale_factor=2, mode='nearest')`
- may not match the size of the lateral feature map.
- e.g.
- original input size: [N,_,15,15] ->
- conv2d feature map size: [N,_,8,8] ->
- upsampled feature map size: [N,_,16,16]
- So we choose bilinear upsample which supports arbitrary output sizes.
- """
- _, _, H, W = y.size()
- return F.interpolate(x, size=(H, W), mode='bilinear', align_corners=True) + y
diff --git a/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/_base_/models/pspnet_unet_s5-d16.py b/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/_base_/models/pspnet_unet_s5-d16.py
deleted file mode 100644
index fcff9ec4f41fad158344ecd77313dc14564f3682..0000000000000000000000000000000000000000
--- a/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/_base_/models/pspnet_unet_s5-d16.py
+++ /dev/null
@@ -1,50 +0,0 @@
-# model settings
-norm_cfg = dict(type='SyncBN', requires_grad=True)
-model = dict(
- type='EncoderDecoder',
- pretrained=None,
- backbone=dict(
- type='UNet',
- in_channels=3,
- base_channels=64,
- num_stages=5,
- strides=(1, 1, 1, 1, 1),
- enc_num_convs=(2, 2, 2, 2, 2),
- dec_num_convs=(2, 2, 2, 2),
- downsamples=(True, True, True, True),
- enc_dilations=(1, 1, 1, 1, 1),
- dec_dilations=(1, 1, 1, 1),
- with_cp=False,
- conv_cfg=None,
- norm_cfg=norm_cfg,
- act_cfg=dict(type='ReLU'),
- upsample_cfg=dict(type='InterpConv'),
- norm_eval=False),
- decode_head=dict(
- type='PSPHead',
- in_channels=64,
- in_index=4,
- channels=16,
- pool_scales=(1, 2, 3, 6),
- dropout_ratio=0.1,
- num_classes=2,
- norm_cfg=norm_cfg,
- align_corners=False,
- loss_decode=dict(
- type='CrossEntropyLoss', use_sigmoid=False, loss_weight=1.0)),
- auxiliary_head=dict(
- type='FCNHead',
- in_channels=128,
- in_index=3,
- channels=64,
- num_convs=1,
- concat_input=False,
- dropout_ratio=0.1,
- num_classes=2,
- norm_cfg=norm_cfg,
- align_corners=False,
- loss_decode=dict(
- type='CrossEntropyLoss', use_sigmoid=False, loss_weight=0.4)),
- # model training and testing settings
- train_cfg=dict(),
- test_cfg=dict(mode='slide', crop_size=256, stride=170))
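-
-# Note: inference uses sliding-window evaluation with 256x256 crops and a stride of 170,
-# i.e. adjacent windows overlap by 86 pixels.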
diff --git a/spaces/Grezz/generate_human_motion/VQ-Trans/models/modules.py b/spaces/Grezz/generate_human_motion/VQ-Trans/models/modules.py
deleted file mode 100644
index 4f06cd98d4f6029bd3df073095cf50498483d54a..0000000000000000000000000000000000000000
--- a/spaces/Grezz/generate_human_motion/VQ-Trans/models/modules.py
+++ /dev/null
@@ -1,109 +0,0 @@
-import torch
-import torch.nn as nn
-from torch.nn.utils.rnn import pack_padded_sequence
-
-def init_weight(m):
- if isinstance(m, nn.Conv1d) or isinstance(m, nn.Linear) or isinstance(m, nn.ConvTranspose1d):
- nn.init.xavier_normal_(m.weight)
- # m.bias.data.fill_(0.01)
- if m.bias is not None:
- nn.init.constant_(m.bias, 0)
-
-
-class MovementConvEncoder(nn.Module):
- def __init__(self, input_size, hidden_size, output_size):
- super(MovementConvEncoder, self).__init__()
- self.main = nn.Sequential(
- nn.Conv1d(input_size, hidden_size, 4, 2, 1),
- nn.Dropout(0.2, inplace=True),
- nn.LeakyReLU(0.2, inplace=True),
- nn.Conv1d(hidden_size, output_size, 4, 2, 1),
- nn.Dropout(0.2, inplace=True),
- nn.LeakyReLU(0.2, inplace=True),
- )
- self.out_net = nn.Linear(output_size, output_size)
- self.main.apply(init_weight)
- self.out_net.apply(init_weight)
-
- def forward(self, inputs):
- inputs = inputs.permute(0, 2, 1)
- outputs = self.main(inputs).permute(0, 2, 1)
- # print(outputs.shape)
- return self.out_net(outputs)
-
-
-
-class TextEncoderBiGRUCo(nn.Module):
- def __init__(self, word_size, pos_size, hidden_size, output_size, device):
- super(TextEncoderBiGRUCo, self).__init__()
- self.device = device
-
- self.pos_emb = nn.Linear(pos_size, word_size)
- self.input_emb = nn.Linear(word_size, hidden_size)
- self.gru = nn.GRU(hidden_size, hidden_size, batch_first=True, bidirectional=True)
- self.output_net = nn.Sequential(
- nn.Linear(hidden_size * 2, hidden_size),
- nn.LayerNorm(hidden_size),
- nn.LeakyReLU(0.2, inplace=True),
- nn.Linear(hidden_size, output_size)
- )
-
- self.input_emb.apply(init_weight)
- self.pos_emb.apply(init_weight)
- self.output_net.apply(init_weight)
- self.hidden_size = hidden_size
- self.hidden = nn.Parameter(torch.randn((2, 1, self.hidden_size), requires_grad=True))
-
- # input(batch_size, seq_len, dim)
- def forward(self, word_embs, pos_onehot, cap_lens):
- num_samples = word_embs.shape[0]
-
- pos_embs = self.pos_emb(pos_onehot)
- inputs = word_embs + pos_embs
- input_embs = self.input_emb(inputs)
- hidden = self.hidden.repeat(1, num_samples, 1)
-
- cap_lens = cap_lens.data.tolist()
- emb = pack_padded_sequence(input_embs, cap_lens, batch_first=True)
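- # Note: unlike MotionEncoderBiGRUCo below, enforce_sorted is left at its default (True),
- # so captions are assumed to arrive sorted by decreasing length.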
-
- gru_seq, gru_last = self.gru(emb, hidden)
-
- gru_last = torch.cat([gru_last[0], gru_last[1]], dim=-1)
-
- return self.output_net(gru_last)
-
-
-class MotionEncoderBiGRUCo(nn.Module):
- def __init__(self, input_size, hidden_size, output_size, device):
- super(MotionEncoderBiGRUCo, self).__init__()
- self.device = device
-
- self.input_emb = nn.Linear(input_size, hidden_size)
- self.gru = nn.GRU(hidden_size, hidden_size, batch_first=True, bidirectional=True)
- self.output_net = nn.Sequential(
- nn.Linear(hidden_size*2, hidden_size),
- nn.LayerNorm(hidden_size),
- nn.LeakyReLU(0.2, inplace=True),
- nn.Linear(hidden_size, output_size)
- )
-
- self.input_emb.apply(init_weight)
- self.output_net.apply(init_weight)
- self.hidden_size = hidden_size
- self.hidden = nn.Parameter(torch.randn((2, 1, self.hidden_size), requires_grad=True))
-
- # input(batch_size, seq_len, dim)
- def forward(self, inputs, m_lens):
- num_samples = inputs.shape[0]
-
- input_embs = self.input_emb(inputs)
- hidden = self.hidden.repeat(1, num_samples, 1)
-
- cap_lens = m_lens.data.tolist()
- emb = pack_padded_sequence(input_embs, cap_lens, batch_first=True, enforce_sorted=False)
-
- gru_seq, gru_last = self.gru(emb, hidden)
-
- gru_last = torch.cat([gru_last[0], gru_last[1]], dim=-1)
-
- return self.output_net(gru_last)
diff --git a/spaces/GroveStreet/GTA_SOVITS/diffusion/logger/saver.py b/spaces/GroveStreet/GTA_SOVITS/diffusion/logger/saver.py
deleted file mode 100644
index ef78b52b6bcd32106f962b731d3784d72d5f0cce..0000000000000000000000000000000000000000
--- a/spaces/GroveStreet/GTA_SOVITS/diffusion/logger/saver.py
+++ /dev/null
@@ -1,150 +0,0 @@
-'''
-author: wayn391@mastertones
-'''
-
-import os
-import json
-import time
-import yaml
-import datetime
-import torch
-import matplotlib.pyplot as plt
-from . import utils
-from torch.utils.tensorboard import SummaryWriter
-
-class Saver(object):
- def __init__(
- self,
- args,
- initial_global_step=-1):
-
- self.expdir = args.env.expdir
- self.sample_rate = args.data.sampling_rate
-
- # cold start
- self.global_step = initial_global_step
- self.init_time = time.time()
- self.last_time = time.time()
-
- # makedirs
- os.makedirs(self.expdir, exist_ok=True)
-
- # path
- self.path_log_info = os.path.join(self.expdir, 'log_info.txt')
-
- # ckpt
- os.makedirs(self.expdir, exist_ok=True)
-
- # writer
- self.writer = SummaryWriter(os.path.join(self.expdir, 'logs'))
-
- # save config
- path_config = os.path.join(self.expdir, 'config.yaml')
- with open(path_config, "w") as out_config:
- yaml.dump(dict(args), out_config)
-
-
- def log_info(self, msg):
- '''log method'''
- if isinstance(msg, dict):
- msg_list = []
- for k, v in msg.items():
- tmp_str = ''
- if isinstance(v, int):
- tmp_str = '{}: {:,}'.format(k, v)
- else:
- tmp_str = '{}: {}'.format(k, v)
-
- msg_list.append(tmp_str)
- msg_str = '\n'.join(msg_list)
- else:
- msg_str = msg
-
- # display
- print(msg_str)
-
- # save
- with open(self.path_log_info, 'a') as fp:
- fp.write(msg_str+'\n')
-
- def log_value(self, dict):
- for k, v in dict.items():
- self.writer.add_scalar(k, v, self.global_step)
-
- def log_spec(self, name, spec, spec_out, vmin=-14, vmax=3.5):
- spec_cat = torch.cat([(spec_out - spec).abs() + vmin, spec, spec_out], -1)
- spec = spec_cat[0]
- if isinstance(spec, torch.Tensor):
- spec = spec.cpu().numpy()
- fig = plt.figure(figsize=(12, 9))
- plt.pcolor(spec.T, vmin=vmin, vmax=vmax)
- plt.tight_layout()
- self.writer.add_figure(name, fig, self.global_step)
-
- def log_audio(self, dict):
- for k, v in dict.items():
- self.writer.add_audio(k, v, global_step=self.global_step, sample_rate=self.sample_rate)
-
- def get_interval_time(self, update=True):
- cur_time = time.time()
- time_interval = cur_time - self.last_time
- if update:
- self.last_time = cur_time
- return time_interval
-
- def get_total_time(self, to_str=True):
- total_time = time.time() - self.init_time
- if to_str:
- total_time = str(datetime.timedelta(
- seconds=total_time))[:-5]
- return total_time
-
- def save_model(
- self,
- model,
- optimizer,
- name='model',
- postfix='',
- to_json=False):
- # path
- if postfix:
- postfix = '_' + postfix
- path_pt = os.path.join(
- self.expdir , name+postfix+'.pt')
-
- # check
- print(' [*] model checkpoint saved: {}'.format(path_pt))
-
- # save
- if optimizer is not None:
- torch.save({
- 'global_step': self.global_step,
- 'model': model.state_dict(),
- 'optimizer': optimizer.state_dict()}, path_pt)
- else:
- torch.save({
- 'global_step': self.global_step,
- 'model': model.state_dict()}, path_pt)
-
- # to json
- if to_json:
- path_json = os.path.join(
- self.expdir , name+'.json')
- utils.to_json(path_params, path_json)
- utils.to_json(path_pt, path_json)
- def delete_model(self, name='model', postfix=''):
- # path
- if postfix:
- postfix = '_' + postfix
- path_pt = os.path.join(
- self.expdir , name+postfix+'.pt')
-
- # delete
- if os.path.exists(path_pt):
- os.remove(path_pt)
- print(' [*] model checkpoint deleted: {}'.format(path_pt))
-
- def global_step_increment(self):
- self.global_step += 1
-
-
diff --git a/spaces/HCMUT-GraduateThesis-HNTThinh/rgbdsod-multimae-demo/utils/masking_generator.py b/spaces/HCMUT-GraduateThesis-HNTThinh/rgbdsod-multimae-demo/utils/masking_generator.py
deleted file mode 100644
index 5603eb30b40e6fea64f23d1f406f47041cc000fc..0000000000000000000000000000000000000000
--- a/spaces/HCMUT-GraduateThesis-HNTThinh/rgbdsod-multimae-demo/utils/masking_generator.py
+++ /dev/null
@@ -1,33 +0,0 @@
-# --------------------------------------------------------
-# Based on BEiT, timm, DINO and DeiT code bases
-# https://github.com/microsoft/unilm/tree/master/beit
-# https://github.com/rwightman/pytorch-image-models/tree/master/timm
-# https://github.com/facebookresearch/deit
-# https://github.com/facebookresearch/dino
-# --------------------------------------------------------
-import numpy as np
-
-
-class RandomMaskingGenerator:
- def __init__(self, input_size, mask_ratio):
- if not isinstance(input_size, tuple):
- input_size = (input_size,) * 2
-
- self.height, self.width = input_size
-
- self.num_patches = self.height * self.width
- self.num_mask = int(mask_ratio * self.num_patches)
-
- def __repr__(self):
- repr_str = "Mask: total patches {}, mask patches {}".format(
- self.num_patches, self.num_mask
- )
- return repr_str
-
- def __call__(self):
- mask = np.hstack([
- np.zeros(self.num_patches - self.num_mask),
- np.ones(self.num_mask),
- ])
- np.random.shuffle(mask)
- return mask # [196]
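-
-# Example (illustrative): RandomMaskingGenerator(14, 0.75) masks 147 of the 196 tokens of a
-# 14x14 patch grid; each call returns a freshly shuffled 0/1 vector of length 196.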
diff --git a/spaces/HaloMaster/chinesesummary/fengshen/examples/FastDemo/README.md b/spaces/HaloMaster/chinesesummary/fengshen/examples/FastDemo/README.md
deleted file mode 100644
index 132519b95da3fd35f4c4fb6aae5d8c44faad3a42..0000000000000000000000000000000000000000
--- a/spaces/HaloMaster/chinesesummary/fengshen/examples/FastDemo/README.md
+++ /dev/null
@@ -1,105 +0,0 @@
-# Quickly build a demo for your model with "streamlit"
-Before building the demo, make sure the following preparations are in place:
-- the model is fully trained
-- the model's input arguments are fixed
-- the streamlit library is installed; `pip install streamlit` is all it takes.
-
-A streamlit script is started with `streamlit run demo.py`, which simply launches a demo page that refreshes in real time as the script changes. So if you have no prior experience, create a demo.py file and add code step by step following the tutorial below, watching how the page renders. Now for the practical part; the details are explained in the code comments!
-
-### Step 1: imports
-```python
-import streamlit as st
-# import other packages according to your needs
-```
-[streamlit](https://streamlit.io) is a Python framework for building machine learning, deep learning and data visualization demos. It requires no web development experience; being able to write Python is enough to build a demo efficiently.
-
-### Step 2: page metadata and layout configuration
-
-```python
-st.set_page_config(
- page_title="余元医疗问答", # browser tab title
- page_icon=":shark:", # browser tab icon
- layout="wide", # page layout
- initial_sidebar_state="expanded", # layout of the left sidebar
- # configuration of the menu button
- menu_items={
- 'Get Help': 'https://www.extremelycoolapp.com/help',
- 'Report a bug': "https://www.extremelycoolapp.com/bug",
- 'About': "# This is a header. This is an *extremely* cool app!"
- }
- )
-```
-This step can be skipped; add these settings if you want a more personalized app.
-
-### Step 3: set the demo title
-```python
-st.title('Demo for MedicalQA')
-```
-Every streamlit widget comes with a sensible default style on the page.
-
-### Step 4: configure the demo parameters
-
-```python
-# the sidebar is used here as the parameter configuration panel
-st.sidebar.header("参数配置")
-# a form is created inside the sidebar; every form needs a title and a submit button
-sbform = st.sidebar.form("固定参数设置")
-# slider is a slider widget for numeric parameters
-n_sample = sbform.slider("设置返回条数",min_value=1,max_value=10,value=3)
-text_length = sbform.slider('生成长度:',min_value=32,max_value=512,value=64,step=32)
-text_level = sbform.slider('文本多样性:',min_value=0.1,max_value=1.0,value=0.9,step=0.1)
-# number_input also works for numeric parameters
-model_id = sbform.number_input('选择模型号:',min_value=0,max_value=13,value=13,step=1)
-# selectbox is a selection widget restricted to the configured options
-trans = sbform.selectbox('选择翻译内核',['百度通用','医疗生物'])
-# submit the form; the parameter values only take effect after submission
-sbform.form_submit_button("提交配置")
-
-# parameter configuration in the main page, also one of the main parts of the demo
-form = st.form("参数设置")
-# this is a QA demo, so the user's question is collected with the text_input widget
-input_text = form.text_input('请输入你的问题:',value='',placeholder='例如:糖尿病的症状有哪些?')
-form.form_submit_button("提交")
-```
-With that, the demo's parameters are basically configured.
-
-### Step 5: model prediction
-```python
-# define a forward prediction function
-# @st.cache(suppress_st_warning=True)
-def generate_qa(input_text,n_sample,model_id='7',length=64,translator='baidu',level=0.7):
- # here the model is served behind an API built with fastapi
- URL = 'http://192.168.190.63:6605/qa'
- data = {
- "text":input_text,"n_sample":n_sample,
- "model_id":model_id,"length":length,
- 'translator':translator,'level':level
- }
- r = requests.get(URL,params=data)
- return r.text
-# model prediction results
-results = generate_qa(input_text,n_sample,model_id=str(model_id),
- translator=translator,length=text_length,level=text_level)
-```
-A note here: the demo machine has no GPU, so the model is deployed in the background with FastAPI. If the demo machine can host the model directly, the prediction code can simply be written here instead of deploying the model separately and calling it through an API. One thing to watch out for: a streamlit script re-runs from top to bottom on every interaction, so the model could get reloaded repeatedly. That is what the st.cache component is for: when the inputs have not changed, the result of this step is served from the cache instead of being recomputed, so efficiency does not suffer.
-
-### Step 6: display the results
-```python
-with st.spinner('老夫正在思考中🤔...'):
- if input_text:
- results = generate_qa(input_text,n_sample,model_id=str(model_id),
- translator=translator,length=text_length,level=text_level)
- for idx,item in enumerate(eval(results),start=1):
- st.markdown(f"""
- **候选回答「{idx}」:**\n
- """)
- st.info('中文:%s'%item['fy_next_sentence'])
- st.info('英文:%s'%item['next_sentence'])
-```
-streamlit offers a rich set of widgets for displaying content in different formats; text can be shown with the `st.markdown`, `st.text` and `st.write` widgets. More widgets and features are documented at https://docs.streamlit.io
-
-With that, a complete demo is done. It looks like this:
-
-
-
-The full code is available at `Fengshenbang-LM/fengshen/examples/FastDemo/YuyuanQA.py`
diff --git a/spaces/HaloMaster/chinesesummary/fengshen/examples/wenzhong_qa/finetune_wenzhong.py b/spaces/HaloMaster/chinesesummary/fengshen/examples/wenzhong_qa/finetune_wenzhong.py
deleted file mode 100644
index bcdeda71fd2d2d70dd56148451ddf2d4946bf31c..0000000000000000000000000000000000000000
--- a/spaces/HaloMaster/chinesesummary/fengshen/examples/wenzhong_qa/finetune_wenzhong.py
+++ /dev/null
@@ -1,153 +0,0 @@
-# sys.path.append('./')
-import os
-import torch
-import argparse
-import pytorch_lightning as pl
-from pytorch_lightning.callbacks import ModelCheckpoint
-from pytorch_lightning import Trainer, loggers
-from transformers.optimization import get_linear_schedule_with_warmup
-from transformers import GPT2LMHeadModel
-from fengshen.data.task_dataloader.medicalQADataset import GPT2QADataModel
-
-
-class GPT2FinetuneMedicalQAModelCheckpoint:
- @staticmethod
- def add_argparse_args(parent_args):
- parser = parent_args.add_argument_group('BaseModel')
-
- parser.add_argument('--monitor', default='train_loss', type=str)
- parser.add_argument('--mode', default='min', type=str)
- parser.add_argument('--dirpath', default='./ckpt/', type=str)
- parser.add_argument(
- '--filename', default='model-{epoch:02d}-{train_loss:.4f}', type=str)
- parser.add_argument('--save_last', action='store_true', default=True)
- parser.add_argument('--save_top_k', default=3, type=float)
- parser.add_argument('--every_n_train_steps', default=100, type=float)
- parser.add_argument('--save_weights_only', default=True, type=bool)
-
- return parent_args
-
- def __init__(self, args):
- self.callbacks = ModelCheckpoint(monitor=args.monitor,
- save_top_k=args.save_top_k,
- mode=args.mode,
- every_n_train_steps=args.every_n_train_steps,
- save_weights_only=args.save_weights_only,
- dirpath=args.dirpath,
- filename=args.filename,
- save_last=args.save_last)
-
-
-class GPT2FinetuneMedicalQA(pl.LightningModule):
-
- @staticmethod
- def add_model_specific_args(parent_args):
- parser = parent_args.add_argument_group('BaseModel')
- parser.add_argument('--learning_rate', default=1e-4, type=float)
- parser.add_argument('--weight_decay', default=0.1, type=float)
- parser.add_argument('--warmup', default=0.01, type=float)
- return parent_args
-
- def __init__(self, args, num_data):
- super().__init__()
- self.args = args
- self.num_data = num_data
- print('num_data:', num_data)
- self.model = GPT2LMHeadModel.from_pretrained(args.pretrained_model_path)
-
- def setup(self, stage) -> None:
- if stage == 'fit':
- num_gpus = self.trainer.gpus if self.trainer.gpus is not None else 0
- self.total_step = int(self.trainer.max_epochs * self.num_data
- / (max(1, num_gpus) * self.trainer.accumulate_grad_batches))
- print('Total training step:', self.total_step)
-
- def training_step(self, batch, batch_idx):
- output = self.model(
- input_ids=batch['input_ids'], attention_mask=batch['attention_mask'], labels=batch['labels'])
- # output = self.model(input_ids=batch['input_ids'], labels=batch['labels'])
- # acc = self.comput_metrix(output.logits, batch['labels'])
- self.log('train_loss', output.loss)
- return output.loss
-
- def comput_metrix(self, logits, labels):
- y_pred = torch.argmax(logits, dim=-1)
- y_pred = y_pred.view(size=(-1,))
- y_true = labels.view(size=(-1,)).float()
- corr = torch.eq(y_pred, y_true)
- acc = torch.sum(corr.float()) / labels.size()[0]
- return acc
-
- def validation_step(self, batch, batch_idx):
- output = self.model(
- input_ids=batch['input_ids'], attention_mask=batch['attention_mask'], labels=batch['labels'])
- # output = self.model(input_ids=batch['input_ids'], labels=batch['labels'])
- # acc = self.comput_metrix(output.logits, batch['labels'])
- self.log('val_loss', output.loss)
- # self.log('val_acc', acc)
-
- def configure_optimizers(self):
- no_decay = ['bias', 'LayerNorm.bias', 'LayerNorm.weight']
- paras = list(
- filter(lambda p: p[1].requires_grad, self.named_parameters()))
- paras = [{
- 'params':
- [p for n, p in paras if not any(nd in n for nd in no_decay)],
- 'weight_decay': self.args.weight_decay
- }, {
- 'params': [p for n, p in paras if any(nd in n for nd in no_decay)],
- 'weight_decay': 0.0
- }]
- optimizer = torch.optim.AdamW(paras, lr=self.args.learning_rate)
- scheduler = get_linear_schedule_with_warmup(
- optimizer, int(self.total_step * self.args.warmup),
- self.total_step)
-
- return [{
- 'optimizer': optimizer,
- 'lr_scheduler': {
- 'scheduler': scheduler,
- 'interval': 'step',
- 'frequency': 1
- }
- }]
-
-
-def main():
- total_parser = argparse.ArgumentParser("QA Task")
- total_parser.add_argument('--do_eval_only', action='store_true', default=False)
- total_parser.add_argument('--pretrained_model_path', default='google/mt5-small', type=str)
- total_parser.add_argument('--output_save_path', default='./predict.json', type=str)
- # * Args for data preprocessing
- total_parser = GPT2QADataModel.add_data_specific_args(total_parser)
- # * Args for training
- total_parser = Trainer.add_argparse_args(total_parser)
- total_parser = GPT2FinetuneMedicalQAModelCheckpoint.add_argparse_args(total_parser)
- total_parser = GPT2FinetuneMedicalQA.add_model_specific_args(total_parser)
- # * Args for base model
- args = total_parser.parse_args()
-
- data_model = GPT2QADataModel(args)
- if not args.do_eval_only:
- model = GPT2FinetuneMedicalQA(args, len(data_model.train_dataloader()))
- checkpoint_callback = GPT2FinetuneMedicalQAModelCheckpoint(args).callbacks
- logger = loggers.TensorBoardLogger(save_dir=os.path.join(
- args.default_root_dir, 'log/'), name='WenZhong')
- trainer = Trainer.from_argparse_args(args,
- logger=logger,
- callbacks=[checkpoint_callback]
- )
- trainer.fit(model, data_model)
-
-
-if __name__ == '__main__':
- main()
- # test()
-
-'''
-# python examples/mt5_summary.py --gpus=1 --test_data=test_public.jsonl
-# --default_root_dir=/cognitive_comp/ganruyi/fengshen/mt5_summary/eval
-# --do_eval_only
-# --resume_from_checkpoint=/cognitive_comp/ganruyi/fengshen/mt5_summary/ckpt/model-epoch=01-train_loss=1.9166.ckpt
-# --strategy=ddp
-'''
diff --git a/spaces/HaloMaster/chinesesummary/fengshen/models/longformer/__init__.py b/spaces/HaloMaster/chinesesummary/fengshen/models/longformer/__init__.py
deleted file mode 100644
index 8c068ccdcd2a786128a6a90032fea2ff74d3ea0f..0000000000000000000000000000000000000000
--- a/spaces/HaloMaster/chinesesummary/fengshen/models/longformer/__init__.py
+++ /dev/null
@@ -1,55 +0,0 @@
-# coding=utf-8
-# Copyright 2021 The IDEA Authors. All rights reserved.
-
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-
-# http://www.apache.org/licenses/LICENSE-2.0
-
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-from typing import TYPE_CHECKING
-
-from transformers.file_utils import _LazyModule, is_torch_available
-
-
-_import_structure = {
- "configuration_longformer": ["LongformerConfig"],
- "tokenization_longformer": ["LongformerTokenizer"],
-}
-
-if is_torch_available():
- _import_structure["modeling_longformer"] = [
- "LongformerModel",
- "LongformerForMaskedLM",
- "LongformerForMultipleChoice",
- "LongformerPreTrainedModel",
- "LongformerForQuestionAnswering",
- "LongformerForSequenceClassification",
- "LongformerForTokenClassification",
- ]
-
-
-if TYPE_CHECKING:
- from .configuration_longformer import LongformerConfig
- from .tokenization_longformer import LongformerTokenizer
-
- if is_torch_available():
- from .modeling_longformer import (
- LongformerModel,
- LongformerForMaskedLM,
- LongformerForMultipleChoice,
- LongformerPreTrainedModel,
- LongformerForQuestionAnswering,
- LongformerForSequenceClassification,
- LongformerForTokenClassification,
- )
-else:
- import sys
-
- sys.modules[__name__] = _LazyModule(__name__, globals()["__file__"], _import_structure)
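The package above uses transformers' _LazyModule so that importing fengshen.models.longformer stays cheap and the torch-dependent modeling code is only loaded on first attribute access. A minimal sketch of that behaviour, assuming the fengshen package (and torch) are installed:

import importlib

# Importing the package does not yet import the heavy modeling submodule.
longformer = importlib.import_module("fengshen.models.longformer")
print(type(longformer).__name__)        # _LazyModule

# First attribute access triggers the real import of configuration_longformer.
LongformerConfig = longformer.LongformerConfig
print(LongformerConfig.__name__)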
diff --git a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/backtranslation/prepare-de-monolingual.sh b/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/backtranslation/prepare-de-monolingual.sh
deleted file mode 100644
index 5e67b2b3bcf27d3436031453e796e58a0ae79ec4..0000000000000000000000000000000000000000
--- a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/backtranslation/prepare-de-monolingual.sh
+++ /dev/null
@@ -1,98 +0,0 @@
-#!/bin/bash
-
-SCRIPTS=mosesdecoder/scripts
-TOKENIZER=$SCRIPTS/tokenizer/tokenizer.perl
-NORM_PUNC=$SCRIPTS/tokenizer/normalize-punctuation.perl
-REM_NON_PRINT_CHAR=$SCRIPTS/tokenizer/remove-non-printing-char.perl
-BPEROOT=subword-nmt/subword_nmt
-
-
-BPE_CODE=wmt18_en_de/code
-SUBSAMPLE_SIZE=25000000
-LANG=de
-
-
-OUTDIR=wmt18_${LANG}_mono
-orig=orig
-tmp=$OUTDIR/tmp
-mkdir -p $OUTDIR $tmp
-
-
-URLS=(
- "http://www.statmt.org/wmt14/training-monolingual-news-crawl/news.2007.de.shuffled.gz"
- "http://www.statmt.org/wmt14/training-monolingual-news-crawl/news.2008.de.shuffled.gz"
- "http://www.statmt.org/wmt14/training-monolingual-news-crawl/news.2009.de.shuffled.gz"
- "http://www.statmt.org/wmt14/training-monolingual-news-crawl/news.2010.de.shuffled.gz"
- "http://www.statmt.org/wmt14/training-monolingual-news-crawl/news.2011.de.shuffled.gz"
- "http://www.statmt.org/wmt14/training-monolingual-news-crawl/news.2012.de.shuffled.gz"
- "http://www.statmt.org/wmt14/training-monolingual-news-crawl/news.2013.de.shuffled.gz"
- "http://www.statmt.org/wmt15/training-monolingual-news-crawl-v2/news.2014.de.shuffled.v2.gz"
- "http://data.statmt.org/wmt16/translation-task/news.2015.de.shuffled.gz"
- "http://data.statmt.org/wmt17/translation-task/news.2016.de.shuffled.gz"
- "http://data.statmt.org/wmt18/translation-task/news.2017.de.shuffled.deduped.gz"
-)
-FILES=(
- "news.2007.de.shuffled.gz"
- "news.2008.de.shuffled.gz"
- "news.2009.de.shuffled.gz"
- "news.2010.de.shuffled.gz"
- "news.2011.de.shuffled.gz"
- "news.2012.de.shuffled.gz"
- "news.2013.de.shuffled.gz"
- "news.2014.de.shuffled.v2.gz"
- "news.2015.de.shuffled.gz"
- "news.2016.de.shuffled.gz"
- "news.2017.de.shuffled.deduped.gz"
-)
-
-
-cd $orig
-for ((i=0;i<${#URLS[@]};++i)); do
- file=${FILES[i]}
- if [ -f $file ]; then
- echo "$file already exists, skipping download"
- else
- url=${URLS[i]}
- wget "$url"
- fi
-done
-cd ..
-
-
-if [ -f $tmp/monolingual.${SUBSAMPLE_SIZE}.${LANG} ]; then
- echo "found monolingual sample, skipping shuffle/sample/tokenize"
-else
- gzip -c -d -k $(for FILE in "${FILES[@]}"; do echo $orig/$FILE; done) \
- | shuf -n $SUBSAMPLE_SIZE \
- | perl $NORM_PUNC $LANG \
- | perl $REM_NON_PRINT_CHAR \
- | perl $TOKENIZER -threads 8 -a -l $LANG \
- > $tmp/monolingual.${SUBSAMPLE_SIZE}.${LANG}
-fi
-
-
-if [ -f $tmp/bpe.monolingual.${SUBSAMPLE_SIZE}.${LANG} ]; then
- echo "found BPE monolingual sample, skipping BPE step"
-else
- python $BPEROOT/apply_bpe.py -c $BPE_CODE \
- < $tmp/monolingual.${SUBSAMPLE_SIZE}.${LANG} \
- > $tmp/bpe.monolingual.${SUBSAMPLE_SIZE}.${LANG}
-fi
-
-
-if [ -f $tmp/bpe.monolingual.dedup.${SUBSAMPLE_SIZE}.${LANG} ]; then
- echo "found deduplicated monolingual sample, skipping deduplication step"
-else
- python deduplicate_lines.py $tmp/bpe.monolingual.${SUBSAMPLE_SIZE}.${LANG} \
- > $tmp/bpe.monolingual.dedup.${SUBSAMPLE_SIZE}.${LANG}
-fi
-
-
-if [ -f $OUTDIR/bpe.monolingual.dedup.00.de ]; then
- echo "found sharded data, skipping sharding step"
-else
- split --lines 1000000 --numeric-suffixes \
- --additional-suffix .${LANG} \
- $tmp/bpe.monolingual.dedup.${SUBSAMPLE_SIZE}.${LANG} \
- $OUTDIR/bpe.monolingual.dedup.
-fi
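The script above runs download -> subsample/normalize/tokenize -> BPE -> deduplicate -> shard. The deduplication step calls a deduplicate_lines.py helper that is not shown in this diff; a rough Python stand-in, assuming its job is simply to keep the first occurrence of every line, could look like this:

import hashlib
import sys

def dedup(path):
    # Keep the first occurrence of every line, preserving input order.
    # Hash lines instead of storing them to bound memory on large corpora.
    seen = set()
    with open(path, "r", encoding="utf-8") as f:
        for line in f:
            digest = hashlib.md5(line.encode("utf-8")).digest()
            if digest not in seen:
                seen.add(digest)
                sys.stdout.write(line)

if __name__ == "__main__":
    dedup(sys.argv[1])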
diff --git a/spaces/Hexamind/swarms/dronemodel.py b/spaces/Hexamind/swarms/dronemodel.py
deleted file mode 100644
index caf99b95b5794071da29af9fbf8736875f94c27c..0000000000000000000000000000000000000000
--- a/spaces/Hexamind/swarms/dronemodel.py
+++ /dev/null
@@ -1,103 +0,0 @@
-from dataclasses import dataclass
-from scipy.integrate import odeint
-import numpy as np
-
-import param_
-
-
-@dataclass
-class DroneModel:
- """
- Creates a drone_model of a drone
- """
-
- def __init__(self, is_blue):
- self.drone_model = param_.DRONE_MODELS[param_.DRONE_MODEL[is_blue]]
-
- self.angle_to_neutralisation = self.drone_model['angle_to_neutralisation']
- self.distance_to_neutralisation = self.drone_model['distance_to_neutralisation']
- self.duration_to_neutralisation = self.drone_model['duration_to_neutralisation']
-
- self.Cxy = self.drone_model['Cxy']
- self.Cz = self.drone_model['Cz']
- self.mass = self.drone_model['mass']
-
- self.Fxy_ratio = self.drone_model['Fxy_ratio']
- self.Fz_min_ratio = self.drone_model['Fz_min_ratio']
- self.Fz_max_ratio = self.drone_model['Fz_max_ratio']
-
- self.weight_eq = self.mass * param_.g * (1 - self.Fz_min_ratio)
- self.Fz_plus = (self.Fz_max_ratio - 1) * self.mass * param_.g
- self.Fz_minus = (1 - self.Fz_min_ratio) * self.mass * param_.g
- self.Fxy = self.mass * param_.g * self.Fxy_ratio
-
- self.max_speed = np.sqrt(self.Fxy / self.Cxy)
- self.max_up_speed = np.sqrt(self.Fz_plus / self.Cz)
- self.max_down_speed = np.sqrt(self.Fz_minus / self.Cz)
- self.max_rot_speed = 2 * np.pi
-
-    def get_trajectory(self, pos_xyz, speed_xyz, action: np.ndarray, time_: np.ndarray) -> np.ndarray:
-        '''
-        returns the position and velocity trajectories obtained by integrating the drone
-        dynamics from the current state under the commanded force
-        :param pos_xyz: current position (x, y, z)
-        :param speed_xyz: current velocity (dx, dy, dz)
-        :param action: normalised command (rho, theta, psy), each component in [0, 1]
-        :param time_: time points at which the trajectory is evaluated
-        :return: arrays of positions and velocities at the requested time points
- '''
-
- rho = action[0] # in 0, 1
- theta = 2*np.pi * action[1] # in 0, 2pi
- psy = np.pi * (action[2] - 0.5) # in -pi/2, pi/2
-
- fx = rho * np.cos(theta) * np.cos(psy) * self.Fxy
- fy = rho * np.sin(theta) * np.cos(psy) * self.Fxy
- fz = rho * np.sin(psy) * (self.Fz_plus if 0 < psy else self.Fz_minus)
-
- pos_speed = np.hstack((pos_xyz, speed_xyz))
-
- result_ = odeint(
- lambda u, v: self.drone_dynamics(u, v, fx, fy, fz, self.Cxy, self.Cz, self.mass),
- pos_speed,
- time_,
- Dfun=lambda u, v: self.fulljac(u, v, self.Cxy, self.Cz, self.mass)
- )
- x, y, z, dx, dy, dz = result_.T
-
- return np.array([x, y, z], dtype='float32'), np.array([dx, dy, dz], dtype='float32')
-
- def drone_dynamics(self, pos_speed, time_, f_x, f_y, f_z, Cxy, Cz, m):
- x, y, z, dx, dy, dz = pos_speed
- return [dx,
- dy,
- dz,
- 1/m * (f_x - Cxy * dx * np.sqrt(dx**2 + dy**2 + dz**2)),
- 1/m * (f_y - Cxy * dy * np.sqrt(dx**2 + dy**2 + dz**2)),
- 1/m * (f_z - Cz * dz * np.sqrt(dx**2 + dy**2 + dz**2))]
-
-    def fulljac(self, pos_speed, time_, Cxy, Cz, m) -> np.ndarray:
-        '''
-        returns the Jacobian of the state derivative used by odeint along the trajectory
-        :param pos_speed: state vector (x, y, z, dx, dy, dz)
-        :param time_: integration time (unused, kept for the odeint signature)
-        :param Cxy: horizontal drag coefficient
-        :param Cz: vertical drag coefficient
-        :param m: drone mass
-        :return: the 6x6 Jacobian matrix
- '''
-
- x, y, z, dx, dy, dz = pos_speed
- J = np.zeros((6, 6))
- J[0, 3] = 1
- J[1, 4] = 1
- J[2, 5] = 1
- J[3, 3] = -Cxy/m * ((np.sqrt(dx**2 + dy**2 + dz**2)) + dx**2 / np.sqrt(dx**2 + dy**2 + dz**2))
- J[3, 4] = -Cxy/m * (dx * dy / np.sqrt(dx**2 + dy**2 + dz**2))
- J[3, 5] = -Cxy/m * (dx * dz / np.sqrt(dx**2 + dy**2 + dz**2))
- J[4, 4] = -Cxy/m * ((np.sqrt(dx**2 + dy**2 + dz**2)) + dy**2 / np.sqrt(dx**2 + dy**2 + dz**2))
- J[4, 3] = -Cxy/m * (dy * dx / np.sqrt(dx**2 + dy**2 + dz**2))
- J[4, 5] = -Cxy/m * (dy * dz / np.sqrt(dx**2 + dy**2 + dz**2))
- J[5, 5] = -Cz/m * ((np.sqrt(dx**2 + dy**2 + dz**2)) + dz**2 / np.sqrt(dx**2 + dy**2 + dz**2))
- J[5, 3] = -Cz/m * (dz * dx / np.sqrt(dx**2 + dy**2 + dz**2))
- J[5, 4] = -Cz/m * (dz * dy / np.sqrt(dx**2 + dy**2 + dz**2))
- return J
diff --git a/spaces/HighCWu/GPEN/retinaface/facemodels/retinaface.py b/spaces/HighCWu/GPEN/retinaface/facemodels/retinaface.py
deleted file mode 100644
index b7092a2bc2f35d06ce99d25473bce913ef3fd8e7..0000000000000000000000000000000000000000
--- a/spaces/HighCWu/GPEN/retinaface/facemodels/retinaface.py
+++ /dev/null
@@ -1,127 +0,0 @@
-import torch
-import torch.nn as nn
-import torchvision.models.detection.backbone_utils as backbone_utils
-import torchvision.models._utils as _utils
-import torch.nn.functional as F
-from collections import OrderedDict
-
-from facemodels.net import MobileNetV1 as MobileNetV1
-from facemodels.net import FPN as FPN
-from facemodels.net import SSH as SSH
-
-
-
-class ClassHead(nn.Module):
- def __init__(self,inchannels=512,num_anchors=3):
- super(ClassHead,self).__init__()
- self.num_anchors = num_anchors
- self.conv1x1 = nn.Conv2d(inchannels,self.num_anchors*2,kernel_size=(1,1),stride=1,padding=0)
-
- def forward(self,x):
- out = self.conv1x1(x)
- out = out.permute(0,2,3,1).contiguous()
-
- return out.view(out.shape[0], -1, 2)
-
-class BboxHead(nn.Module):
- def __init__(self,inchannels=512,num_anchors=3):
- super(BboxHead,self).__init__()
- self.conv1x1 = nn.Conv2d(inchannels,num_anchors*4,kernel_size=(1,1),stride=1,padding=0)
-
- def forward(self,x):
- out = self.conv1x1(x)
- out = out.permute(0,2,3,1).contiguous()
-
- return out.view(out.shape[0], -1, 4)
-
-class LandmarkHead(nn.Module):
- def __init__(self,inchannels=512,num_anchors=3):
- super(LandmarkHead,self).__init__()
- self.conv1x1 = nn.Conv2d(inchannels,num_anchors*10,kernel_size=(1,1),stride=1,padding=0)
-
- def forward(self,x):
- out = self.conv1x1(x)
- out = out.permute(0,2,3,1).contiguous()
-
- return out.view(out.shape[0], -1, 10)
-
-class RetinaFace(nn.Module):
- def __init__(self, cfg = None, phase = 'train'):
- """
- :param cfg: Network related settings.
- :param phase: train or test.
- """
- super(RetinaFace,self).__init__()
- self.phase = phase
- backbone = None
- if cfg['name'] == 'mobilenet0.25':
- backbone = MobileNetV1()
- if cfg['pretrain']:
- checkpoint = torch.load("./weights/mobilenetV1X0.25_pretrain.tar", map_location=torch.device('cpu'))
- from collections import OrderedDict
- new_state_dict = OrderedDict()
- for k, v in checkpoint['state_dict'].items():
- name = k[7:] # remove module.
- new_state_dict[name] = v
- # load params
- backbone.load_state_dict(new_state_dict)
- elif cfg['name'] == 'Resnet50':
- import torchvision.models as models
- backbone = models.resnet50(pretrained=cfg['pretrain'])
-
- self.body = _utils.IntermediateLayerGetter(backbone, cfg['return_layers'])
- in_channels_stage2 = cfg['in_channel']
- in_channels_list = [
- in_channels_stage2 * 2,
- in_channels_stage2 * 4,
- in_channels_stage2 * 8,
- ]
- out_channels = cfg['out_channel']
- self.fpn = FPN(in_channels_list,out_channels)
- self.ssh1 = SSH(out_channels, out_channels)
- self.ssh2 = SSH(out_channels, out_channels)
- self.ssh3 = SSH(out_channels, out_channels)
-
- self.ClassHead = self._make_class_head(fpn_num=3, inchannels=cfg['out_channel'])
- self.BboxHead = self._make_bbox_head(fpn_num=3, inchannels=cfg['out_channel'])
- self.LandmarkHead = self._make_landmark_head(fpn_num=3, inchannels=cfg['out_channel'])
-
- def _make_class_head(self,fpn_num=3,inchannels=64,anchor_num=2):
- classhead = nn.ModuleList()
- for i in range(fpn_num):
- classhead.append(ClassHead(inchannels,anchor_num))
- return classhead
-
- def _make_bbox_head(self,fpn_num=3,inchannels=64,anchor_num=2):
- bboxhead = nn.ModuleList()
- for i in range(fpn_num):
- bboxhead.append(BboxHead(inchannels,anchor_num))
- return bboxhead
-
- def _make_landmark_head(self,fpn_num=3,inchannels=64,anchor_num=2):
- landmarkhead = nn.ModuleList()
- for i in range(fpn_num):
- landmarkhead.append(LandmarkHead(inchannels,anchor_num))
- return landmarkhead
-
- def forward(self,inputs):
- out = self.body(inputs)
-
- # FPN
- fpn = self.fpn(out)
-
- # SSH
- feature1 = self.ssh1(fpn[0])
- feature2 = self.ssh2(fpn[1])
- feature3 = self.ssh3(fpn[2])
- features = [feature1, feature2, feature3]
-
- bbox_regressions = torch.cat([self.BboxHead[i](feature) for i, feature in enumerate(features)], dim=1)
- classifications = torch.cat([self.ClassHead[i](feature) for i, feature in enumerate(features)],dim=1)
- ldm_regressions = torch.cat([self.LandmarkHead[i](feature) for i, feature in enumerate(features)], dim=1)
-
- if self.phase == 'train':
- output = (bbox_regressions, classifications, ldm_regressions)
- else:
- output = (bbox_regressions, F.softmax(classifications, dim=-1), ldm_regressions)
- return output
\ No newline at end of file
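Each detection head above is a 1x1 convolution whose output is permuted and reshaped to one row per anchor, so a feature map of spatial size HxW with A anchors per location yields (batch, H*W*A, K) with K = 2, 4 or 10 for class scores, boxes and landmarks. A quick shape check using the ClassHead defined in this file (only torch is needed):

import torch

head = ClassHead(inchannels=64, num_anchors=2)
feat = torch.randn(1, 64, 80, 80)    # one FPN level with an 80x80 grid
print(head(feat).shape)              # torch.Size([1, 12800, 2]) -> 80*80*2 anchors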
diff --git a/spaces/HighCWu/Style2Paints-4-Gradio/decompositioner.py b/spaces/HighCWu/Style2Paints-4-Gradio/decompositioner.py
deleted file mode 100644
index f54236f341d97d241ef1050bd28c8d00012d1a3e..0000000000000000000000000000000000000000
--- a/spaces/HighCWu/Style2Paints-4-Gradio/decompositioner.py
+++ /dev/null
@@ -1,185 +0,0 @@
-import os
-import numpy as np
-from scipy.spatial import ConvexHull
-from sklearn.cluster import MiniBatchKMeans
-from tricks import *
-import cv2
-
-
-ksd = 8
-mbc = MiniBatchKMeans(ksd)
-
-
-def get_theme(img):
- images = np.reshape(cv2.resize(img, (256, 256)), (256 * 256, 3))
- hull = ConvexHull(images)
- return hull.points[hull.vertices]
-
-
-def simplify_points(points, img):
- labels = mbc.fit(points)
- new_points = []
- all_center = np.mean(labels.cluster_centers_, axis=0)
- distances = np.sum((points - all_center) ** 2, axis=1) ** 0.5
-
- for idx in range(ksd):
- candidates = points[labels.labels_ == idx]
- scores = distances[labels.labels_ == idx]
- best_id = np.argmax(scores)
- new_points.append(candidates[best_id])
-
- new_points.sort(key=np.sum, reverse=True)
-
- new_points = np.stack(new_points, axis=0)
- return new_points.clip(0, 255).astype(np.uint8)
-
-
-def get_ini_layers(miku, points):
- results = []
- final_target = miku.astype(np.float32)
- bg = np.zeros_like(final_target, dtype=np.float32) + points[0]
- results.append(np.concatenate([bg, np.zeros_like(bg, dtype=np.float32) + 255], axis=2)[:, :, 0:4])
- current_result = bg.copy()
- for layer_index in range(1, ksd):
- current_base = current_result.astype(np.float32)
- current_color = np.zeros_like(final_target, dtype=np.float32) + points[layer_index]
- overall_direction = final_target - current_base
-        available_direction = current_color - current_base
-        current_alpha = np.sum(overall_direction * available_direction, axis=2, keepdims=True) / np.sum(
-            available_direction * available_direction, axis=2, keepdims=True)
- current_alpha = current_alpha.clip(0, 1)
- current_result = (current_color * current_alpha + current_base * (1 - current_alpha)).clip(0, 255)
- results.append(np.concatenate([current_color, current_alpha * 255.0], axis=2))
- return results
-
-
-def make_reconstruction(layers):
- bg = np.zeros_like(layers[0], dtype=np.float32)[:, :, 0:3] + 255
- for item in layers:
- current_alpha = item[:, :, 3:4] / 255.0
- bg = item[:, :, 0:3] * current_alpha + bg * (1 - current_alpha)
- return bg
-
-
-def improve_layers(layers, miku):
- reconstruction = make_reconstruction(layers)
- b = miku - reconstruction
- new_layers = []
- for item in layers:
- new_item = item.copy()
- new_item[:, :, 0:3] = (new_item[:, :, 0:3] + b).clip(0, 255)
- new_layers.append(new_item)
- return new_layers
-
-
-def cluster_all(labeled_array, num_features):
- xs = [[] for _ in range(num_features)]
- ys = [[] for _ in range(num_features)]
- M = labeled_array.shape[0]
- N = labeled_array.shape[1]
- for x in range(M):
- for y in range(N):
- i = labeled_array[x, y]
- xs[i].append(x)
- ys[i].append(y)
- result = []
- for _ in range(num_features):
- result.append((np.array(xs[_]), np.array(ys[_])))
- return result
-
-
-def meder(x):
- y = x.copy()
- y = cv2.medianBlur(y, 5)
- y = cv2.medianBlur(y, 5)
- y = cv2.medianBlur(y, 3)
- y = cv2.medianBlur(y, 3)
- return y
-
-
-def re_med(s_2048):
-
- sample_2048 = s_2048.astype(np.float32)
- sample_1024 = cv2.pyrDown(sample_2048)
- sample_512 = cv2.pyrDown(sample_1024)
- sample_256 = cv2.pyrDown(sample_512)
-
- gradient_2048 = sample_2048 - cv2.pyrUp(sample_1024)
- gradient_1024 = sample_1024 - cv2.pyrUp(sample_512)
- gradient_512 = sample_512 - cv2.pyrUp(sample_256)
-
- rec_256 = meder(sample_256)
- rec_512 = cv2.pyrUp(rec_256) + meder(gradient_512)
- rec_1024 = cv2.pyrUp(rec_512) + meder(gradient_1024)
- rec_2048 = cv2.pyrUp(rec_1024) + meder(gradient_2048)
- return rec_2048
-
-
-def process_ctx(sketch, solid, render):
- solid = solid.astype(np.float32)
- sketch = d_resize(cv2.cvtColor(sketch, cv2.COLOR_GRAY2RGB), solid.shape).astype(np.float32)
- render = d_resize(render, solid.shape).astype(np.float32)
- alpha = sketch / 255.0
- all_diff = render - solid
- all_lines = render.copy()
- all_lines = cv2.erode(all_lines, np.ones((3,3), np.uint8)) * 0.618
- all_diff = re_med(all_diff)
- all_lines = re_med(all_lines)
- recon = solid + all_diff
- recon = recon * alpha + all_lines * (1 - alpha)
- recon2 = (solid + all_diff) * alpha + re_med(solid) * (1 - alpha)
- recon3 = reason_blending(recon2, sketch)
- return recon.clip(0, 255).astype(np.uint8), recon2.clip(0, 255).astype(np.uint8), recon3.clip(0, 255).astype(np.uint8)
-
-
-def process_psd(sketch, solid, render, path='./'):
- recon = process_ctx(sketch, solid, render)
- points = get_theme(solid)
- points = simplify_points(points, solid)
- compositions = get_ini_layers(solid, points)
- compositions = improve_layers(compositions, solid)
- for _ in range(ksd):
- cv2.imwrite(path + str(_ + 1) + '.color.png', compositions[_].clip(0, 255).astype(np.uint8))
- solid = make_reconstruction(compositions).clip(0, 255).astype(np.uint8)
- os.makedirs(path, exist_ok=True)
- alpha = 1 - sketch.astype(np.float32) / 255.0
- now = solid
- now = (now.astype(np.float32) + sketch.astype(np.float32) - 255.0).clip(0, 255)
- sketch = 255 + now - solid
- cv2.imwrite(path + '9.sketch.png', sketch.clip(0, 255).astype(np.uint8))
- all_diff = recon.astype(np.float32) - now
- all_light = all_diff.copy()
- all_shadow = - all_diff.copy()
- all_light[all_light < 0] = 0
- all_shadow[all_shadow < 0] = 0
- sketch_color = all_light * alpha
- light = all_light * (1 - alpha)
- all_shadow = 255 - all_shadow
- cv2.imwrite(path + '10.sketch_color.png', sketch_color.clip(0, 255).astype(np.uint8))
- cv2.imwrite(path + '11.light.png', light.clip(0, 255).astype(np.uint8))
- cv2.imwrite(path + '12.shadow.png', all_shadow.clip(0, 255).astype(np.uint8))
- return recon
-
-
-def process_albedo(albedo, composition, sketch):
- DEL = albedo.astype(np.float32)
- HSV = cv2.cvtColor(albedo, cv2.COLOR_RGB2HSV).astype(np.float32)
- YUV = cv2.cvtColor(albedo, cv2.COLOR_RGB2YUV).astype(np.float32)
- solid = composition.astype(np.float32)
- light = sketch[:, :, None].astype(np.float32)
-
- DEL = DEL * light / 255.0 + solid * (1 - light / 255.0)
- HSV[:, :, 2:3] = np.minimum(HSV[:, :, 2:3], light)
- YUV[:, :, 0:1] = np.minimum(YUV[:, :, 0:1], light)
-
- DEL = DEL.clip(0, 255).astype(np.uint8)
- HSV = HSV.clip(0, 255).astype(np.uint8)
- YUV = YUV.clip(0, 255).astype(np.uint8)
-
- return cv2.cvtColor(HSV, cv2.COLOR_HSV2RGB), cv2.cvtColor(YUV, cv2.COLOR_YUV2RGB), DEL
-
-
-def process_overlay(composition, sketch):
- RGB = composition.astype(np.float32)
- alpha = sketch[:, :, None].astype(np.float32) / 255.0
- return (RGB * alpha).clip(0, 255).astype(np.uint8)
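get_theme and simplify_points above build a palette by taking the convex hull of the image's RGB point cloud (its most extreme colours) and clustering the hull vertices into ksd groups, keeping the vertex farthest from the overall mean in each group. A self-contained sketch of the same idea on a random toy image (for brevity it keeps the cluster centres rather than the farthest member of each cluster):

import numpy as np
from scipy.spatial import ConvexHull
from sklearn.cluster import MiniBatchKMeans

img = np.random.randint(0, 256, (64, 64, 3)).astype(np.float32)
pixels = img.reshape(-1, 3)

hull = ConvexHull(pixels)                 # extreme colours of the image
extremes = hull.points[hull.vertices]

kmeans = MiniBatchKMeans(n_clusters=8).fit(extremes)
palette = kmeans.cluster_centers_.clip(0, 255).astype(np.uint8)
print(palette)                            # 8 representative theme colours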
diff --git a/spaces/HighCWu/anime-colorization-with-hint/gradio-modified/gradio/templates/frontend/assets/module.d8037460.js b/spaces/HighCWu/anime-colorization-with-hint/gradio-modified/gradio/templates/frontend/assets/module.d8037460.js
deleted file mode 100644
index 8d5e84b3696a9ef1b576f84f8a09e2600aaa9d02..0000000000000000000000000000000000000000
--- a/spaces/HighCWu/anime-colorization-with-hint/gradio-modified/gradio/templates/frontend/assets/module.d8037460.js
+++ /dev/null
@@ -1,2 +0,0 @@
-import{c as i}from"./module.e2741a44.js";const c=i({characterize:({call:e})=>()=>e("characterize"),encode:({call:e})=>(r,n)=>e("encode",{recordingId:r,timeslice:n}),record:({call:e})=>async(r,n,o)=>{await e("record",{recordingId:r,sampleRate:n,typedArrays:o},o.map(({buffer:a})=>a))}}),u=e=>{const r=new Worker(e);return c(r)},l=`(()=>{var e={775:function(e,t,r){!function(e,t,r,n){"use strict";function o(e){return e&&"object"==typeof e&&"default"in e?e:{default:e}}var s=o(t),a=o(r),i=o(n),u=function(e,t){return void 0===t?e:t.reduce((function(e,t){if("capitalize"===t){var r=e.charAt(0).toUpperCase(),n=e.slice(1);return"".concat(r).concat(n)}return"dashify"===t?a.default(e):"prependIndefiniteArticle"===t?"".concat(i.default(e)," ").concat(e):e}),e)},c=function(e){var t=e.name+e.modifiers.map((function(e){return"\\\\.".concat(e,"\\\\(\\\\)")})).join("");return new RegExp("\\\\$\\\\{".concat(t,"}"),"g")},l=function(e,t){for(var r=/\\\${([^.}]+)((\\.[^(]+\\(\\))*)}/g,n=[],o=r.exec(e);null!==o;){var a={modifiers:[],name:o[1]};if(void 0!==o[3])for(var i=/\\.[^(]+\\(\\)/g,l=i.exec(o[2]);null!==l;)a.modifiers.push(l[0].slice(1,-2)),l=i.exec(o[2]);n.push(a),o=r.exec(e)}var d=n.reduce((function(e,r){return e.map((function(e){return"string"==typeof e?e.split(c(r)).reduce((function(e,n,o){return 0===o?[n]:r.name in t?[].concat(s.default(e),[u(t[r.name],r.modifiers),n]):[].concat(s.default(e),[function(e){return u(e[r.name],r.modifiers)},n])}),[]):[e]})).reduce((function(e,t){return[].concat(s.default(e),s.default(t))}),[])}),[e]);return function(e){return d.reduce((function(t,r){return[].concat(s.default(t),"string"==typeof r?[r]:[r(e)])}),[]).join("")}},d=function(e){var t=arguments.length>1&&void 0!==arguments[1]?arguments[1]:{},r=void 0===e.code?void 0:l(e.code,t),n=void 0===e.message?void 0:l(e.message,t);function o(){var t=arguments.length>0&&void 0!==arguments[0]?arguments[0]:{},o=arguments.length>1?arguments[1]:void 0,s=void 0===o&&(t instanceof Error||void 0!==t.code&&"Exception"===t.code.slice(-9))?{cause:t,missingParameters:{}}:{cause:o,missingParameters:t},a=s.cause,i=s.missingParameters,u=void 0===n?new Error:new Error(n(i));return null!==a&&(u.cause=a),void 0!==r&&(u.code=r(i)),void 0!==e.status&&(u.status=e.status),u}return o};e.compile=d,Object.defineProperty(e,"__esModule",{value:!0})}(t,r(106),r(881),r(507))},881:e=>{"use strict";e.exports=(e,t)=>{if("string"!=typeof e)throw new TypeError("expected a string");return e.trim().replace(/([a-z])([A-Z])/g,"$1-$2").replace(/\\W/g,(e=>/[\xC0-\u017E]/.test(e)?e:"-")).replace(/^-+|-+$/g,"").replace(/-{2,}/g,(e=>t&&t.condense?"-":e)).toLowerCase()}},107:function(e,t){!function(e){"use strict";var t=function(e){return function(t){var r=e(t);return t.add(r),r}},r=function(e){return function(t,r){return e.set(t,r),r}},n=void 0===Number.MAX_SAFE_INTEGER?9007199254740991:Number.MAX_SAFE_INTEGER,o=536870912,s=2*o,a=function(e,t){return function(r){var a=t.get(r),i=void 0===a?r.size:an)throw new Error("Congratulations, you created a collection of unique numbers which uses all available integers!");for(;r.has(i);)i=Math.floor(Math.random()*n);return e(r,i)}},i=new WeakMap,u=r(i),c=a(u,i),l=t(c);e.addUniqueNumber=l,e.generateUniqueNumber=c,Object.defineProperty(e,"__esModule",{value:!0})}(t)},507:e=>{var t=function(e){var t,r,n=/\\w+/.exec(e);if(!n)return"an";var o=(r=n[0]).toLowerCase(),s=["honest","hour","hono"];for(t in 
s)if(0==o.indexOf(s[t]))return"an";if(1==o.length)return"aedhilmnorsx".indexOf(o)>=0?"an":"a";if(r.match(/(?!FJO|[HLMNS]Y.|RY[EO]|SQU|(F[LR]?|[HL]|MN?|N|RH?|S[CHKLMNPTVW]?|X(YL)?)[AEIOU])[FHLMNRSX][A-Z]/))return"an";var a=[/^e[uw]/,/^onc?e\\b/,/^uni([^nmd]|mo)/,/^u[bcfhjkqrst][aeiou]/];for(t=0;t=0?"an":"a":"aeiou".indexOf(o[0])>=0||o.match(/^y(b[lor]|cl[ea]|fere|gg|p[ios]|rou|tt)/)?"an":"a"};void 0!==e.exports?e.exports=t:window.indefiniteArticle=t},768:e=>{e.exports=function(e,t){(null==t||t>e.length)&&(t=e.length);for(var r=0,n=new Array(t);r{var n=r(768);e.exports=function(e){if(Array.isArray(e))return n(e)},e.exports.__esModule=!0,e.exports.default=e.exports},642:e=>{e.exports=function(e){if("undefined"!=typeof Symbol&&null!=e[Symbol.iterator]||null!=e["@@iterator"])return Array.from(e)},e.exports.__esModule=!0,e.exports.default=e.exports},344:e=>{e.exports=function(){throw new TypeError("Invalid attempt to spread non-iterable instance.\\nIn order to be iterable, non-array objects must have a [Symbol.iterator]() method.")},e.exports.__esModule=!0,e.exports.default=e.exports},106:(e,t,r)=>{var n=r(907),o=r(642),s=r(906),a=r(344);e.exports=function(e){return n(e)||o(e)||s(e)||a()},e.exports.__esModule=!0,e.exports.default=e.exports},906:(e,t,r)=>{var n=r(768);e.exports=function(e,t){if(e){if("string"==typeof e)return n(e,t);var r=Object.prototype.toString.call(e).slice(8,-1);return"Object"===r&&e.constructor&&(r=e.constructor.name),"Map"===r||"Set"===r?Array.from(e):"Arguments"===r||/^(?:Ui|I)nt(?:8|16|32)(?:Clamped)?Array$/.test(r)?n(e,t):void 0}},e.exports.__esModule=!0,e.exports.default=e.exports}},t={};function r(n){var o=t[n];if(void 0!==o)return o.exports;var s=t[n]={exports:{}};return e[n].call(s.exports,s,s.exports,r),s.exports}(()=>{"use strict";var e=r(775);const t=-32603,n=-32602,o=-32601,s=(0,e.compile)({message:'The requested method called "\${method}" is not supported.',status:o}),a=(0,e.compile)({message:'The handler of the method called "\${method}" returned no required result.',status:t}),i=(0,e.compile)({message:'The handler of the method called "\${method}" returned an unexpected result.',status:t}),u=(0,e.compile)({message:'The specified parameter called "portId" with the given value "\${portId}" does not identify a port connected to this worker.',status:n}),c=(e,t)=>async r=>{let{data:{id:n,method:o,params:u}}=r;const c=t[o];try{if(void 0===c)throw s({method:o});const t=void 0===u?c():c(u);if(void 0===t)throw a({method:o});const r=t instanceof Promise?await t:t;if(null===n){if(void 0!==r.result)throw i({method:o})}else{if(void 0===r.result)throw i({method:o});const{result:t,transferables:s=[]}=r;e.postMessage({id:n,result:t},s)}}catch(t){const{message:r,status:o=-32603}=t;e.postMessage({error:{code:o,message:r},id:n})}};var l=r(107);const d=new Map,f=(e,t,r)=>({...t,connect:r=>{let{port:n}=r;n.start();const o=e(n,t),s=(0,l.generateUniqueNumber)(d);return d.set(s,(()=>{o(),n.close(),d.delete(s)})),{result:s}},disconnect:e=>{let{portId:t}=e;const r=d.get(t);if(void 0===r)throw u({portId:t.toString()});return r(),{result:null}},isSupported:async()=>{if(await new Promise((e=>{const t=new ArrayBuffer(0),{port1:r,port2:n}=new MessageChannel;r.onmessage=t=>{let{data:r}=t;return e(null!==r)},n.postMessage(t,[t])}))){const e=r();return{result:e instanceof Promise?await e:e}}return{result:!1}}}),p=function(e,t){let r=arguments.length>2&&void 0!==arguments[2]?arguments[2]:()=>!0;const n=f(p,t,r),o=c(e,n);return 
e.addEventListener("message",o),()=>e.removeEventListener("message",o)},m=e=>e.reduce(((e,t)=>e+t.length),0),h=(e,t)=>{const r=[];let n=0;e:for(;nt){const o=n-t;r.forEach(((t,r)=>{const n=t.pop(),s=n.length-o;t.push(n.subarray(0,s)),e[r].unshift(n.subarray(s))}))}return r},v=new Map,g=(e=>(t,r,n)=>{const o=e.get(t);if(void 0===o){const o={channelDataArrays:n.map((e=>[e])),isComplete:!0,sampleRate:r};return e.set(t,o),o}return o.channelDataArrays.forEach(((e,t)=>e.push(n[t]))),o})(v),x=((e,t)=>(r,n,o,s)=>{const a=o>>3,i="subsequent"===n?0:44,u=r.length,c=e(r[0]),l=new ArrayBuffer(c*u*a+i),d=new DataView(l);return"subsequent"!==n&&t(d,o,u,"complete"===n?c:Number.POSITIVE_INFINITY,s),r.forEach(((e,t)=>{let r=i+t*a;e.forEach((e=>{const t=e.length;for(let n=0;n{const s=t>>3,a=Math.min(n*r*s,4294967251);e.setUint32(0,1380533830),e.setUint32(4,a+36,!0),e.setUint32(8,1463899717),e.setUint32(12,1718449184),e.setUint32(16,16,!0),e.setUint16(20,1,!0),e.setUint16(22,r,!0),e.setUint32(24,o,!0),e.setUint32(28,o*r*s,!0),e.setUint16(32,r*s,!0),e.setUint16(34,t,!0),e.setUint32(36,1684108385),e.setUint32(40,a,!0)})),w=new Map;p(self,{characterize:()=>({result:/^audio\\/wav$/}),encode:e=>{let{recordingId:t,timeslice:r}=e;const n=w.get(t);void 0!==n&&(w.delete(t),n.reject(new Error("Another request was made to initiate an encoding.")));const o=v.get(t);if(null!==r){if(void 0===o||m(o.channelDataArrays[0])*(1e3/o.sampleRate){w.set(t,{reject:n,resolve:e,timeslice:r})}));const e=h(o.channelDataArrays,Math.ceil(r*(o.sampleRate/1e3))),n=x(e,o.isComplete?"initial":"subsequent",16,o.sampleRate);return o.isComplete=!1,{result:n,transferables:n}}if(void 0!==o){const e=x(o.channelDataArrays,o.isComplete?"complete":"subsequent",16,o.sampleRate);return v.delete(t),{result:e,transferables:e}}return{result:[],transferables:[]}},record:e=>{let{recordingId:t,sampleRate:r,typedArrays:n}=e;const o=g(t,r,n),s=w.get(t);if(void 0!==s&&m(o.channelDataArrays[0])*(1e3/r)>=s.timeslice){const e=h(o.channelDataArrays,Math.ceil(s.timeslice*(r/1e3))),n=x(e,o.isComplete?"initial":"subsequent",16,r);o.isComplete=!1,w.delete(t),s.resolve({result:n,transferables:n})}return{result:null}}})})()})();`,d=new Blob([l],{type:"application/javascript; charset=utf-8"}),s=URL.createObjectURL(d),t=u(s),p=t.characterize,m=t.connect,h=t.disconnect,v=t.encode,g=t.isSupported,x=t.record;URL.revokeObjectURL(s);export{p as characterize,m as connect,h as disconnect,v as encode,g as isSupported,x as record};
-//# sourceMappingURL=module.d8037460.js.map
diff --git a/spaces/Hila/RobustViT/robustness_dataset.py b/spaces/Hila/RobustViT/robustness_dataset.py
deleted file mode 100644
index e067332e680e9707587dfc4ac509e2b9af5c17bd..0000000000000000000000000000000000000000
--- a/spaces/Hila/RobustViT/robustness_dataset.py
+++ /dev/null
@@ -1,66 +0,0 @@
-import json
-from torch.utils import data
-from torchvision.datasets import ImageFolder
-import torch
-import os
-from PIL import Image
-import numpy as np
-import argparse
-from tqdm import tqdm
-from munkres import Munkres
-import multiprocessing
-from multiprocessing import Process, Manager
-import collections
-import torchvision.transforms as transforms
-import torchvision.transforms.functional as TF
-import random
-import torchvision
-import cv2
-from label_str_to_imagenet_classes import label_str_to_imagenet_classes
-
-torch.manual_seed(0)
-
-ImageItem = collections.namedtuple('ImageItem', ('image_name', 'tag'))
-normalize = transforms.Normalize(mean=[0.5, 0.5, 0.5],
- std=[0.5, 0.5, 0.5])
-
-transform = transforms.Compose([
- transforms.Resize(256),
- transforms.CenterCrop(224),
- transforms.ToTensor(),
- normalize,
-])
-
-class RobustnessDataset(ImageFolder):
- def __init__(self, imagenet_path, imagenet_classes_path='imagenet_classes.json', isV2=False, isSI=False):
- self._isV2 = isV2
- self._isSI = isSI
- self._imagenet_path = imagenet_path
- with open(imagenet_classes_path, 'r') as f:
- self._imagenet_classes = json.load(f)
- self._tag_list = [tag for tag in os.listdir(self._imagenet_path)]
- self._all_images = []
- for tag in self._tag_list:
- base_dir = os.path.join(self._imagenet_path, tag)
- for i, file in enumerate(os.listdir(base_dir)):
- self._all_images.append(ImageItem(file, tag))
-
-
- def __getitem__(self, item):
- image_item = self._all_images[item]
- image_path = os.path.join(self._imagenet_path, image_item.tag, image_item.image_name)
- image = Image.open(image_path)
- image = image.convert('RGB')
- image = transform(image)
-
- if self._isV2:
- class_name = int(image_item.tag)
- elif self._isSI:
- class_name = int(label_str_to_imagenet_classes[image_item.tag])
- else:
- class_name = int(self._imagenet_classes[image_item.tag])
-
- return image, class_name
-
- def __len__(self):
- return len(self._all_images)
\ No newline at end of file
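RobustnessDataset above expects an ImageNet-style <root>/<tag>/<image> layout and maps each tag to an ImageNet class index through imagenet_classes.json (or directly for the V2 and SI variants). A usage sketch with hypothetical paths:

from torch.utils.data import DataLoader

# Hypothetical dataset root; any directory laid out as <root>/<tag>/<image> works.
dataset = RobustnessDataset("/data/imagenet-a", imagenet_classes_path="imagenet_classes.json")
loader = DataLoader(dataset, batch_size=32, shuffle=False, num_workers=4)

images, targets = next(iter(loader))
print(images.shape, targets.shape)   # torch.Size([32, 3, 224, 224]) torch.Size([32])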
diff --git a/spaces/ICML2022/OFA/fairseq/examples/speech_to_text/data_utils.py b/spaces/ICML2022/OFA/fairseq/examples/speech_to_text/data_utils.py
deleted file mode 100644
index 41afac0bf8f6d70e06bee1a34e220ab396ec247d..0000000000000000000000000000000000000000
--- a/spaces/ICML2022/OFA/fairseq/examples/speech_to_text/data_utils.py
+++ /dev/null
@@ -1,382 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-import csv
-from pathlib import Path
-import zipfile
-from functools import reduce
-from multiprocessing import cpu_count
-from typing import Any, Dict, List, Optional, Union
-import io
-
-import numpy as np
-import pandas as pd
-import sentencepiece as sp
-from fairseq.data.audio.audio_utils import (
- convert_waveform, _get_kaldi_fbank, _get_torchaudio_fbank, is_npy_data,
- is_sf_audio_data
-)
-import torch
-import soundfile as sf
-from tqdm import tqdm
-
-
-UNK_TOKEN, UNK_TOKEN_ID = "<unk>", 3
-BOS_TOKEN, BOS_TOKEN_ID = "<s>", 0
-EOS_TOKEN, EOS_TOKEN_ID = "</s>", 2
-PAD_TOKEN, PAD_TOKEN_ID = "<pad>", 1
-
-
-def gen_vocab(
- input_path: Path, output_path_prefix: Path, model_type="bpe",
- vocab_size=1000, special_symbols: Optional[List[str]] = None
-):
- # Train SentencePiece Model
- arguments = [
- f"--input={input_path.as_posix()}",
- f"--model_prefix={output_path_prefix.as_posix()}",
- f"--model_type={model_type}",
- f"--vocab_size={vocab_size}",
- "--character_coverage=1.0",
- f"--num_threads={cpu_count()}",
- f"--unk_id={UNK_TOKEN_ID}",
- f"--bos_id={BOS_TOKEN_ID}",
- f"--eos_id={EOS_TOKEN_ID}",
- f"--pad_id={PAD_TOKEN_ID}",
- ]
- if special_symbols is not None:
- _special_symbols = ",".join(special_symbols)
- arguments.append(f"--user_defined_symbols={_special_symbols}")
- sp.SentencePieceTrainer.Train(" ".join(arguments))
- # Export fairseq dictionary
- spm = sp.SentencePieceProcessor()
- spm.Load(output_path_prefix.as_posix() + ".model")
- vocab = {i: spm.IdToPiece(i) for i in range(spm.GetPieceSize())}
- assert (
- vocab.get(UNK_TOKEN_ID) == UNK_TOKEN
- and vocab.get(PAD_TOKEN_ID) == PAD_TOKEN
- and vocab.get(BOS_TOKEN_ID) == BOS_TOKEN
- and vocab.get(EOS_TOKEN_ID) == EOS_TOKEN
- )
- vocab = {
- i: s
- for i, s in vocab.items()
- if s not in {UNK_TOKEN, BOS_TOKEN, EOS_TOKEN, PAD_TOKEN}
- }
- with open(output_path_prefix.as_posix() + ".txt", "w") as f_out:
- for _, s in sorted(vocab.items(), key=lambda x: x[0]):
- f_out.write(f"{s} 1\n")
-
-
-def extract_fbank_features(
- waveform: torch.FloatTensor,
- sample_rate: int,
- output_path: Optional[Path] = None,
- n_mel_bins: int = 80,
- overwrite: bool = False,
-):
- if output_path is not None and output_path.is_file() and not overwrite:
- return
-
-    _waveform, _ = convert_waveform(waveform, sample_rate, to_mono=True)
- # Kaldi compliance: 16-bit signed integers
- _waveform = _waveform * (2 ** 15)
- _waveform = _waveform.numpy()
-
- features = _get_kaldi_fbank(_waveform, sample_rate, n_mel_bins)
- if features is None:
- features = _get_torchaudio_fbank(_waveform, sample_rate, n_mel_bins)
- if features is None:
- raise ImportError(
- "Please install pyKaldi or torchaudio to enable fbank feature extraction"
- )
-
- if output_path is not None:
- np.save(output_path.as_posix(), features)
- return features
-
-
-def create_zip(data_root: Path, zip_path: Path):
- paths = list(data_root.glob("*.npy"))
- with zipfile.ZipFile(zip_path, "w", zipfile.ZIP_STORED) as f:
- for path in tqdm(paths):
- f.write(path, arcname=path.name)
-
-
-def get_zip_manifest(
- zip_path: Path, zip_root: Optional[Path] = None, is_audio=False
-):
- _zip_path = Path.joinpath(zip_root or Path(""), zip_path)
- with zipfile.ZipFile(_zip_path, mode="r") as f:
- info = f.infolist()
- paths, lengths = {}, {}
- for i in tqdm(info):
- utt_id = Path(i.filename).stem
- offset, file_size = i.header_offset + 30 + len(i.filename), i.file_size
- paths[utt_id] = f"{zip_path.as_posix()}:{offset}:{file_size}"
- with open(_zip_path, "rb") as f:
- f.seek(offset)
- byte_data = f.read(file_size)
- assert len(byte_data) > 1
- if is_audio:
- assert is_sf_audio_data(byte_data), i
- else:
- assert is_npy_data(byte_data), i
- byte_data_fp = io.BytesIO(byte_data)
- if is_audio:
- lengths[utt_id] = sf.info(byte_data_fp).frames
- else:
- lengths[utt_id] = np.load(byte_data_fp).shape[0]
- return paths, lengths
-
-
-def gen_config_yaml(
- manifest_root: Path,
- spm_filename: Optional[str] = None,
- vocab_name: Optional[str] = None,
- yaml_filename: str = "config.yaml",
- specaugment_policy: Optional[str] = "lb",
- prepend_tgt_lang_tag: bool = False,
- sampling_alpha: Optional[float] = None,
- input_channels: Optional[int] = 1,
- input_feat_per_channel: Optional[int] = 80,
- audio_root: str = "",
- cmvn_type: str = "utterance",
- gcmvn_path: Optional[Path] = None,
- extra=None
-):
- manifest_root = manifest_root.absolute()
- writer = S2TDataConfigWriter(manifest_root / yaml_filename)
- assert spm_filename is not None or vocab_name is not None
- vocab_name = spm_filename.replace(".model", ".txt") if vocab_name is None \
- else vocab_name
- writer.set_vocab_filename(vocab_name)
- if input_channels is not None:
- writer.set_input_channels(input_channels)
- if input_feat_per_channel is not None:
- writer.set_input_feat_per_channel(input_feat_per_channel)
- specaugment_setters = {
- "lb": writer.set_specaugment_lb_policy,
- "ld": writer.set_specaugment_ld_policy,
- "sm": writer.set_specaugment_sm_policy,
- "ss": writer.set_specaugment_ss_policy,
- }
- specaugment_setter = specaugment_setters.get(specaugment_policy, None)
- if specaugment_setter is not None:
- specaugment_setter()
- if spm_filename is not None:
- writer.set_bpe_tokenizer(
- {
- "bpe": "sentencepiece",
- "sentencepiece_model": (manifest_root / spm_filename).as_posix(),
- }
- )
- if prepend_tgt_lang_tag:
- writer.set_prepend_tgt_lang_tag(True)
- if sampling_alpha is not None:
- writer.set_sampling_alpha(sampling_alpha)
-
- if cmvn_type not in ["global", "utterance"]:
- raise NotImplementedError
-
- if specaugment_policy is not None:
- writer.set_feature_transforms(
- "_train", [f"{cmvn_type}_cmvn", "specaugment"]
- )
- writer.set_feature_transforms("*", [f"{cmvn_type}_cmvn"])
-
- if cmvn_type == "global":
- if gcmvn_path is None:
- raise ValueError("Please provide path of global cmvn file.")
- else:
- writer.set_global_cmvn(gcmvn_path.as_posix())
-
- if len(audio_root) > 0:
- writer.set_audio_root(audio_root)
-
- if extra is not None:
- writer.set_extra(extra)
- writer.flush()
-
-
-def load_df_from_tsv(path: Union[str, Path]) -> pd.DataFrame:
- _path = path if isinstance(path, str) else path.as_posix()
- return pd.read_csv(
- _path,
- sep="\t",
- header=0,
- encoding="utf-8",
- escapechar="\\",
- quoting=csv.QUOTE_NONE,
- na_filter=False,
- )
-
-
-def save_df_to_tsv(dataframe, path: Union[str, Path]):
- _path = path if isinstance(path, str) else path.as_posix()
- dataframe.to_csv(
- _path,
- sep="\t",
- header=True,
- index=False,
- encoding="utf-8",
- escapechar="\\",
- quoting=csv.QUOTE_NONE,
- )
-
-
-def load_tsv_to_dicts(path: Union[str, Path]) -> List[dict]:
- with open(path, "r") as f:
- reader = csv.DictReader(
- f,
- delimiter="\t",
- quotechar=None,
- doublequote=False,
- lineterminator="\n",
- quoting=csv.QUOTE_NONE,
- )
- rows = [dict(e) for e in reader]
- return rows
-
-
-def filter_manifest_df(
- df, is_train_split=False, extra_filters=None, min_n_frames=5, max_n_frames=3000
-):
- filters = {
- "no speech": df["audio"] == "",
- f"short speech (<{min_n_frames} frames)": df["n_frames"] < min_n_frames,
- "empty sentence": df["tgt_text"] == "",
- }
- if is_train_split:
- filters[f"long speech (>{max_n_frames} frames)"] = df["n_frames"] > max_n_frames
- if extra_filters is not None:
- filters.update(extra_filters)
- invalid = reduce(lambda x, y: x | y, filters.values())
- valid = ~invalid
- print(
- "| "
- + ", ".join(f"{n}: {f.sum()}" for n, f in filters.items())
- + f", total {invalid.sum()} filtered, {valid.sum()} remained."
- )
- return df[valid]
-
-
-def cal_gcmvn_stats(features_list):
- features = np.concatenate(features_list)
- square_sums = (features ** 2).sum(axis=0)
- mean = features.mean(axis=0)
- features = np.subtract(features, mean)
- var = square_sums / features.shape[0] - mean ** 2
- std = np.sqrt(np.maximum(var, 1e-8))
- return {"mean": mean.astype("float32"), "std": std.astype("float32")}
-
-
-class S2TDataConfigWriter(object):
- DEFAULT_VOCAB_FILENAME = "dict.txt"
- DEFAULT_INPUT_FEAT_PER_CHANNEL = 80
- DEFAULT_INPUT_CHANNELS = 1
-
- def __init__(self, yaml_path: Path):
- try:
- import yaml
- except ImportError:
-        except ImportError:
-            raise ImportError("Please install PyYAML for S2T data config YAML files")
- self.yaml_path = yaml_path
- self.config = {}
-
- def flush(self):
- with open(self.yaml_path, "w") as f:
- self.yaml.dump(self.config, f)
-
- def set_audio_root(self, audio_root=""):
- self.config["audio_root"] = audio_root
-
- def set_vocab_filename(self, vocab_filename: str = "dict.txt"):
- self.config["vocab_filename"] = vocab_filename
-
- def set_specaugment(
- self,
- time_wrap_w: int,
- freq_mask_n: int,
- freq_mask_f: int,
- time_mask_n: int,
- time_mask_t: int,
- time_mask_p: float,
- ):
- self.config["specaugment"] = {
- "time_wrap_W": time_wrap_w,
- "freq_mask_N": freq_mask_n,
- "freq_mask_F": freq_mask_f,
- "time_mask_N": time_mask_n,
- "time_mask_T": time_mask_t,
- "time_mask_p": time_mask_p,
- }
-
- def set_specaugment_lb_policy(self):
- self.set_specaugment(
- time_wrap_w=0,
- freq_mask_n=1,
- freq_mask_f=27,
- time_mask_n=1,
- time_mask_t=100,
- time_mask_p=1.0,
- )
-
- def set_specaugment_ld_policy(self):
- self.set_specaugment(
- time_wrap_w=0,
- freq_mask_n=2,
- freq_mask_f=27,
- time_mask_n=2,
- time_mask_t=100,
- time_mask_p=1.0,
- )
-
- def set_specaugment_sm_policy(self):
- self.set_specaugment(
- time_wrap_w=0,
- freq_mask_n=2,
- freq_mask_f=15,
- time_mask_n=2,
- time_mask_t=70,
- time_mask_p=0.2,
- )
-
- def set_specaugment_ss_policy(self):
- self.set_specaugment(
- time_wrap_w=0,
- freq_mask_n=2,
- freq_mask_f=27,
- time_mask_n=2,
- time_mask_t=70,
- time_mask_p=0.2,
- )
-
- def set_input_channels(self, input_channels: int = 1):
- self.config["input_channels"] = input_channels
-
- def set_input_feat_per_channel(self, input_feat_per_channel: int = 80):
- self.config["input_feat_per_channel"] = input_feat_per_channel
-
- def set_bpe_tokenizer(self, bpe_tokenizer: Dict[str, Any]):
- self.config["bpe_tokenizer"] = bpe_tokenizer
-
- def set_global_cmvn(self, stats_npz_path: str):
- self.config["global_cmvn"] = {"stats_npz_path": stats_npz_path}
-
- def set_feature_transforms(self, split: str, transforms: List[str]):
- if "transforms" not in self.config:
- self.config["transforms"] = {}
- self.config["transforms"][split] = transforms
-
- def set_prepend_tgt_lang_tag(self, flag: bool = True):
- self.config["prepend_tgt_lang_tag"] = flag
-
- def set_sampling_alpha(self, sampling_alpha: float = 1.0):
- self.config["sampling_alpha"] = sampling_alpha
-
- def set_extra(self, data):
- self.config.update(data)
diff --git a/spaces/Illumotion/Koboldcpp/include/CL/cl_ext_intel.h b/spaces/Illumotion/Koboldcpp/include/CL/cl_ext_intel.h
deleted file mode 100644
index a7ae87a3400ffb0d3f3411dc0f4a3a330fcccf70..0000000000000000000000000000000000000000
--- a/spaces/Illumotion/Koboldcpp/include/CL/cl_ext_intel.h
+++ /dev/null
@@ -1,19 +0,0 @@
-/*******************************************************************************
- * Copyright (c) 2008-2020 The Khronos Group Inc.
- *
- * Licensed under the Apache License, Version 2.0 (the "License");
- * you may not use this file except in compliance with the License.
- * You may obtain a copy of the License at
- *
- * http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- *
- ******************************************************************************/
-
-#include <CL/cl_ext.h>
-#pragma message("The Intel extensions have been moved into cl_ext.h. Please include cl_ext.h directly.")
diff --git a/spaces/JohnSmith9982/ChuanhuChatGPT/assets/custom.css b/spaces/JohnSmith9982/ChuanhuChatGPT/assets/custom.css
deleted file mode 100644
index 22108488886cfc8d7772214dd9b83727b3fca6a3..0000000000000000000000000000000000000000
--- a/spaces/JohnSmith9982/ChuanhuChatGPT/assets/custom.css
+++ /dev/null
@@ -1,468 +0,0 @@
-:root {
- --chatbot-color-light: #000000;
- --chatbot-color-dark: #FFFFFF;
- --chatbot-background-color-light: #F3F3F3;
- --chatbot-background-color-dark: #121111;
- --message-user-background-color-light: #95EC69;
- --message-user-background-color-dark: #26B561;
- --message-bot-background-color-light: #FFFFFF;
- --message-bot-background-color-dark: #2C2C2C;
-}
-
-#app_title {
- font-weight: var(--prose-header-text-weight);
- font-size: var(--text-xxl);
- line-height: 1.3;
- text-align: left;
- margin-top: 6px;
- white-space: nowrap;
-}
-#description {
- text-align: center;
- margin: 32px 0 4px 0;
-}
-
-/* gradio footer info */
-footer {
- /* display: none !important; */
- margin-top: .2em !important;
- font-size: 85%;
-}
-#footer {
- text-align: center;
-}
-#footer div {
- display: inline-block;
-}
-#footer .versions{
- font-size: 85%;
- opacity: 0.60;
-}
-
-#float_display {
- position: absolute;
- max-height: 30px;
-}
-/* user_info */
-#user_info {
- white-space: nowrap;
- position: absolute; left: 8em; top: .2em;
- z-index: var(--layer-2);
- box-shadow: var(--block-shadow);
- border: none; border-radius: var(--block-label-radius);
- background: var(--color-accent);
- padding: var(--block-label-padding);
- font-size: var(--block-label-text-size); line-height: var(--line-sm);
- width: auto; min-height: 30px!important;
- opacity: 1;
- transition: opacity 0.3s ease-in-out;
-}
-#user_info .wrap {
- opacity: 0;
-}
-#user_info p {
- color: white;
- font-weight: var(--block-label-text-weight);
-}
-#user_info.hideK {
- opacity: 0;
- transition: opacity 1s ease-in-out;
-}
-
-/* status_display */
-#status_display {
- display: flex;
- min-height: 2em;
- align-items: flex-end;
- justify-content: flex-end;
-}
-#status_display p {
- font-size: .85em;
- font-family: ui-monospace, "SF Mono", "SFMono-Regular", "Menlo", "Consolas", "Liberation Mono", "Microsoft Yahei UI", "Microsoft Yahei", monospace;
-    /* On Windows the Chinese monospace fallback is NSimSun, which looks ugly, so Microsoft YaHei is used as a compromise */
- color: var(--body-text-color-subdued);
-}
-
-#status_display {
- transition: all 0.6s;
-}
-#chuanhu_chatbot {
- transition: height 0.3s ease;
-}
-
-/* usage_display */
-.insert_block {
- position: relative;
- margin: 0;
- padding: .5em 1em;
- box-shadow: var(--block-shadow);
- border-width: var(--block-border-width);
- border-color: var(--block-border-color);
- border-radius: var(--block-radius);
- background: var(--block-background-fill);
- width: 100%;
- line-height: var(--line-sm);
- min-height: 2em;
-}
-#usage_display p, #usage_display span {
- margin: 0;
- font-size: .85em;
- color: var(--body-text-color-subdued);
-}
-.progress-bar {
-    background-color: var(--input-background-fill);
- margin: .5em 0 !important;
- height: 20px;
- border-radius: 10px;
- overflow: hidden;
-}
-.progress {
- background-color: var(--block-title-background-fill);
- height: 100%;
- border-radius: 10px;
- text-align: right;
- transition: width 0.5s ease-in-out;
-}
-.progress-text {
- /* color: white; */
- color: var(--color-accent) !important;
- font-size: 1em !important;
- font-weight: bold;
- padding-right: 10px;
- line-height: 20px;
-}
-
-.apSwitch {
- top: 2px;
- display: inline-block;
- height: 24px;
- position: relative;
- width: 48px;
- border-radius: 12px;
-}
-.apSwitch input {
- display: none !important;
-}
-.apSlider {
- background-color: var(--neutral-200);
- bottom: 0;
- cursor: pointer;
- left: 0;
- position: absolute;
- right: 0;
- top: 0;
- transition: .4s;
- font-size: 18px;
- border-radius: 12px;
-}
-.apSlider::before {
- bottom: -1.5px;
- left: 1px;
- position: absolute;
- transition: .4s;
- content: "🌞";
-}
-input:checked + .apSlider {
- background-color: var(--primary-600);
-}
-input:checked + .apSlider::before {
- transform: translateX(23px);
- content:"🌚";
-}
-
-/* Override Slider Styles (for webkit browsers like Safari and Chrome)
- * Hopefully this proposal gets implemented soon: https://github.com/w3c/csswg-drafts/issues/4410
- * Range sliders are still far too inconsistent across platforms
- */
-input[type="range"] {
- -webkit-appearance: none;
- height: 4px;
- background: var(--input-background-fill);
- border-radius: 5px;
- background-image: linear-gradient(var(--primary-500),var(--primary-500));
- background-size: 0% 100%;
- background-repeat: no-repeat;
-}
-input[type="range"]::-webkit-slider-thumb {
- -webkit-appearance: none;
- height: 20px;
- width: 20px;
- border-radius: 50%;
- border: solid 0.5px #ddd;
- background-color: white;
- cursor: ew-resize;
- box-shadow: var(--input-shadow);
- transition: background-color .1s ease;
-}
-input[type="range"]::-webkit-slider-thumb:hover {
- background: var(--neutral-50);
-}
-input[type=range]::-webkit-slider-runnable-track {
- -webkit-appearance: none;
- box-shadow: none;
- border: none;
- background: transparent;
-}
-
-#submit_btn, #cancel_btn {
- height: 42px !important;
-}
-#submit_btn::before {
- content: url("data:image/svg+xml, %3Csvg width='21px' height='20px' viewBox='0 0 21 20' version='1.1' xmlns='http://www.w3.org/2000/svg' xmlns:xlink='http://www.w3.org/1999/xlink'%3E %3Cg id='page' stroke='none' stroke-width='1' fill='none' fill-rule='evenodd'%3E %3Cg id='send' transform='translate(0.435849, 0.088463)' fill='%23FFFFFF' fill-rule='nonzero'%3E %3Cpath d='M0.579148261,0.0428666046 C0.301105539,-0.0961547561 -0.036517765,0.122307382 0.0032026237,0.420210298 L1.4927172,18.1553639 C1.5125774,18.4334066 1.79062012,18.5922882 2.04880264,18.4929872 L8.24518329,15.8913017 L11.6412765,19.7441794 C11.8597387,19.9825018 12.2370824,19.8832008 12.3165231,19.5852979 L13.9450591,13.4882182 L19.7839562,11.0255541 C20.0619989,10.8865327 20.0818591,10.4694687 19.7839562,10.3105871 L0.579148261,0.0428666046 Z M11.6138902,17.0883151 L9.85385903,14.7195502 L0.718169621,0.618812241 L12.69945,12.9346347 L11.6138902,17.0883151 Z' id='shape'%3E%3C/path%3E %3C/g%3E %3C/g%3E %3C/svg%3E");
- height: 21px;
-}
-#cancel_btn::before {
- content: url("data:image/svg+xml,%3Csvg width='21px' height='21px' viewBox='0 0 21 21' version='1.1' xmlns='http://www.w3.org/2000/svg' xmlns:xlink='http://www.w3.org/1999/xlink'%3E %3Cg id='pg' stroke='none' stroke-width='1' fill='none' fill-rule='evenodd'%3E %3Cpath d='M10.2072007,20.088463 C11.5727865,20.088463 12.8594566,19.8259823 14.067211,19.3010209 C15.2749653,18.7760595 16.3386126,18.0538087 17.2581528,17.1342685 C18.177693,16.2147282 18.8982283,15.1527965 19.4197586,13.9484733 C19.9412889,12.7441501 20.202054,11.4557644 20.202054,10.0833163 C20.202054,8.71773046 19.9395733,7.43106036 19.4146119,6.22330603 C18.8896505,5.01555169 18.1673997,3.95018885 17.2478595,3.0272175 C16.3283192,2.10424615 15.2646719,1.3837109 14.0569176,0.865611739 C12.8491633,0.34751258 11.5624932,0.088463 10.1969073,0.088463 C8.83132146,0.088463 7.54636692,0.34751258 6.34204371,0.865611739 C5.1377205,1.3837109 4.07407321,2.10424615 3.15110186,3.0272175 C2.22813051,3.95018885 1.5058797,5.01555169 0.984349419,6.22330603 C0.46281914,7.43106036 0.202054,8.71773046 0.202054,10.0833163 C0.202054,11.4557644 0.4645347,12.7441501 0.9894961,13.9484733 C1.5144575,15.1527965 2.23670831,16.2147282 3.15624854,17.1342685 C4.07578877,18.0538087 5.1377205,18.7760595 6.34204371,19.3010209 C7.54636692,19.8259823 8.83475258,20.088463 10.2072007,20.088463 Z M10.2072007,18.2562448 C9.07493099,18.2562448 8.01471483,18.0452309 7.0265522,17.6232031 C6.03838956,17.2011753 5.17031614,16.6161693 4.42233192,15.8681851 C3.6743477,15.1202009 3.09105726,14.2521274 2.67246059,13.2639648 C2.25386392,12.2758022 2.04456558,11.215586 2.04456558,10.0833163 C2.04456558,8.95104663 2.25386392,7.89083047 2.67246059,6.90266784 C3.09105726,5.9145052 3.6743477,5.04643178 4.42233192,4.29844756 C5.17031614,3.55046334 6.036674,2.9671729 7.02140552,2.54857623 C8.00613703,2.12997956 9.06463763,1.92068122 10.1969073,1.92068122 C11.329177,1.92068122 12.3911087,2.12997956 13.3827025,2.54857623 C14.3742962,2.9671729 15.2440852,3.55046334 15.9920694,4.29844756 C16.7400537,5.04643178 17.3233441,5.9145052 17.7419408,6.90266784 C18.1605374,7.89083047 18.3698358,8.95104663 18.3698358,10.0833163 C18.3698358,11.215586 18.1605374,12.2758022 17.7419408,13.2639648 C17.3233441,14.2521274 16.7400537,15.1202009 15.9920694,15.8681851 C15.2440852,16.6161693 14.3760118,17.2011753 13.3878492,17.6232031 C12.3996865,18.0452309 11.3394704,18.2562448 10.2072007,18.2562448 Z M7.65444721,13.6242324 L12.7496608,13.6242324 C13.0584616,13.6242324 13.3003556,13.5384544 13.4753427,13.3668984 C13.6503299,13.1953424 13.7378234,12.9585951 13.7378234,12.6566565 L13.7378234,7.49968276 C13.7378234,7.19774418 13.6503299,6.96099688 13.4753427,6.78944087 C13.3003556,6.61788486 13.0584616,6.53210685 12.7496608,6.53210685 L7.65444721,6.53210685 C7.33878414,6.53210685 7.09345904,6.61788486 6.91847191,6.78944087 C6.74348478,6.96099688 6.65599121,7.19774418 6.65599121,7.49968276 L6.65599121,12.6566565 C6.65599121,12.9585951 6.74348478,13.1953424 6.91847191,13.3668984 C7.09345904,13.5384544 7.33878414,13.6242324 7.65444721,13.6242324 Z' id='shape' fill='%23FF3B30' fill-rule='nonzero'%3E%3C/path%3E %3C/g%3E %3C/svg%3E");
- height: 21px;
-}
-/* list */
-ol:not(.options), ul:not(.options) {
- padding-inline-start: 2em !important;
-}
-
-/* light theme (default) */
-#chuanhu_chatbot {
- background-color: var(--chatbot-background-color-light) !important;
- color: var(--chatbot-color-light) !important;
-}
-[data-testid = "bot"] {
- background-color: var(--message-bot-background-color-light) !important;
-}
-[data-testid = "user"] {
- background-color: var(--message-user-background-color-light) !important;
-}
-/* dark theme */
-.dark #chuanhu_chatbot {
- background-color: var(--chatbot-background-color-dark) !important;
- color: var(--chatbot-color-dark) !important;
-}
-.dark [data-testid = "bot"] {
- background-color: var(--message-bot-background-color-dark) !important;
-}
-.dark [data-testid = "user"] {
- background-color: var(--message-user-background-color-dark) !important;
-}
-
-/* devices with screen width >= 500px */
-/* update on 2023.4.8: fine-grained height adjustment is now handled in JavaScript */
-@media screen and (min-width: 500px) {
- #chuanhu_chatbot {
- height: calc(100vh - 200px);
- }
- #chuanhu_chatbot .wrap {
- max-height: calc(100vh - 200px - var(--line-sm)*1rem - 2*var(--block-label-margin) );
- }
-}
-/* devices with screen width < 500px */
-@media screen and (max-width: 499px) {
- #chuanhu_chatbot {
- height: calc(100vh - 140px);
- }
- #chuanhu_chatbot .wrap {
- max-height: calc(100vh - 140px - var(--line-sm)*1rem - 2*var(--block-label-margin) );
- }
- [data-testid = "bot"] {
- max-width: 95% !important;
- }
- #app_title h1{
- letter-spacing: -1px; font-size: 22px;
- }
-}
-#chuanhu_chatbot .wrap {
- overflow-x: hidden;
-}
-/* chat bubbles */
-.message {
- border-radius: var(--radius-xl) !important;
- border: none;
- padding: var(--spacing-xl) !important;
- font-size: var(--text-md) !important;
- line-height: var(--line-md) !important;
- min-height: calc(var(--text-md)*var(--line-md) + 2*var(--spacing-xl));
- min-width: calc(var(--text-md)*var(--line-md) + 2*var(--spacing-xl));
-}
-[data-testid = "bot"] {
- max-width: 85%;
- border-bottom-left-radius: 0 !important;
-}
-[data-testid = "user"] {
- max-width: 85%;
- width: auto !important;
- border-bottom-right-radius: 0 !important;
-}
-
-.message.user p {
- white-space: pre-wrap;
-}
-.message .user-message {
- display: block;
- padding: 0 !important;
- white-space: pre-wrap;
-}
-
-.message .md-message p {
- margin-top: 0.6em !important;
- margin-bottom: 0.6em !important;
-}
-.message .md-message p:first-child { margin-top: 0 !important; }
-.message .md-message p:last-of-type { margin-bottom: 0 !important; }
-
-.message .md-message {
- display: block;
- padding: 0 !important;
-}
-.message .raw-message p {
- margin:0 !important;
-}
-.message .raw-message {
- display: block;
- padding: 0 !important;
- white-space: pre-wrap;
-}
-.raw-message.hideM, .md-message.hideM {
- display: none;
-}
-
-/* custom buttons */
-.chuanhu-btn {
- border-radius: 5px;
- /* background-color: #E6E6E6 !important; */
- color: rgba(120, 120, 120, 0.64) !important;
- padding: 4px !important;
- position: absolute;
- right: -22px;
- cursor: pointer !important;
- transition: color .2s ease, background-color .2s ease;
-}
-.chuanhu-btn:hover {
- background-color: rgba(167, 167, 167, 0.25) !important;
- color: unset !important;
-}
-.chuanhu-btn:active {
- background-color: rgba(167, 167, 167, 0.5) !important;
-}
-.chuanhu-btn:focus {
- outline: none;
-}
-.copy-bot-btn {
- /* top: 18px; */
- bottom: 0;
-}
-.toggle-md-btn {
- /* top: 0; */
- bottom: 20px;
-}
-.copy-code-btn {
- position: relative;
- float: right;
- font-size: 1em;
- cursor: pointer;
-}
-
-.message-wrap>div img{
- border-radius: 10px !important;
-}
-
-/* history message */
-.wrap>.history-message {
- padding: 10px !important;
-}
-.history-message {
- /* padding: 0 !important; */
- opacity: 80%;
- display: flex;
- flex-direction: column;
-}
-.history-message>.history-message {
- padding: 0 !important;
-}
-.history-message>.message-wrap {
- padding: 0 !important;
- margin-bottom: 16px;
-}
-.history-message>.message {
- margin-bottom: 16px;
-}
-.wrap>.history-message::after {
- content: "";
- display: block;
- height: 2px;
- background-color: var(--body-text-color-subdued);
- margin-bottom: 10px;
- margin-top: -10px;
- clear: both;
-}
-.wrap>.history-message>:last-child::after {
- content: "仅供查看";
- display: block;
- text-align: center;
- color: var(--body-text-color-subdued);
- font-size: 0.8em;
-}
-
-/* Tables */
-table {
- margin: 1em 0;
- border-collapse: collapse;
- empty-cells: show;
-}
-td,th {
- border: 1.2px solid var(--border-color-primary) !important;
- padding: 0.2em;
-}
-thead {
- background-color: rgba(175,184,193,0.2);
-}
-thead th {
- padding: .5em .2em;
-}
-/* Inline code */
-.message :not(pre) code {
- display: inline;
- white-space: break-spaces;
- font-family: var(--font-mono);
- border-radius: 6px;
- margin: 0 2px 0 2px;
- padding: .2em .4em .1em .4em;
- background-color: rgba(175,184,193,0.2);
-}
-/* Code blocks */
-.message pre,
-.message pre[class*=language-] {
- color: #fff;
- overflow-x: auto;
- overflow-y: hidden;
- margin: .8em 1em 1em 0em !important;
- padding: var(--spacing-xl) 1.2em !important;
- border-radius: var(--radius-lg) !important;
-}
-.message pre code,
-.message pre code[class*=language-] {
- color: #fff;
- padding: 0;
- margin: 0;
- background-color: unset;
- text-shadow: none;
- font-family: var(--font-mono);
-}
-/* Override gradio's unsightly copy-button style */
-pre button[title="copy"] {
- border-radius: 5px;
- transition: background-color .2s ease;
-}
-pre button[title="copy"]:hover {
- background-color: #333232;
-}
-pre button .check {
- color: #fff !important;
- background: var(--neutral-950) !important;
-}
-
-/* Override prism.css */
-.language-css .token.string,
-.style .token.string,
-.token.entity,
-.token.operator,
-.token.url {
- background: none !important;
-}
diff --git a/spaces/JohnSmith9982/ChuanhuChatGPT/modules/models/Google_PaLM.py b/spaces/JohnSmith9982/ChuanhuChatGPT/modules/models/Google_PaLM.py
deleted file mode 100644
index 79ca042e228b25546600e4258a0b75790e25bb52..0000000000000000000000000000000000000000
--- a/spaces/JohnSmith9982/ChuanhuChatGPT/modules/models/Google_PaLM.py
+++ /dev/null
@@ -1,26 +0,0 @@
-from .base_model import BaseLLMModel
-import google.generativeai as palm
-
-class Google_PaLM_Client(BaseLLMModel):
- def __init__(self, model_name, api_key, user_name="") -> None:
- super().__init__(model_name=model_name, user=user_name)
- self.api_key = api_key
-
- def _get_palm_style_input(self):
- new_history = []
- for item in self.history:
- if item["role"] == "user":
- new_history.append({'author': '1', 'content': item["content"]})
- else:
- new_history.append({'author': '0', 'content': item["content"]})
- return new_history
-
- def get_answer_at_once(self):
- palm.configure(api_key=self.api_key)
- messages = self._get_palm_style_input()
- response = palm.chat(context=self.system_prompt, messages=messages, temperature=self.temperature, top_p=self.top_p)
- if response.last is not None:
- return response.last, len(response.last)
- else:
- reasons = '\n\n'.join(reason['reason'].name for reason in response.filters)
- return "由于下面的原因,Google 拒绝返回 PaLM 的回答:\n\n" + reasons, 0
\ No newline at end of file
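A minimal usage sketch of the Google_PaLM_Client deleted above, assuming the surrounding ChuanhuChatGPT package (BaseLLMModel and its default system_prompt/temperature/top_p attributes) is importable; the model name and API key below are placeholders, not values from the original file.

    # Hypothetical usage of the deleted client; model name and API key are placeholders.
    from modules.models.Google_PaLM import Google_PaLM_Client

    client = Google_PaLM_Client("models/chat-bison-001", api_key="YOUR_PALM_API_KEY")
    # history uses the same {"role": ..., "content": ...} dicts that _get_palm_style_input() expects
    client.history = [{"role": "user", "content": "Say hello in one sentence."}]
    answer, token_count = client.get_answer_at_once()
    print(answer)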
diff --git a/spaces/KonradSzafer/HF-QA-Demo/tests/discord_bot/__init__.py b/spaces/KonradSzafer/HF-QA-Demo/tests/discord_bot/__init__.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/KyanChen/RSPrompter/mmdet/engine/runner/__init__.py b/spaces/KyanChen/RSPrompter/mmdet/engine/runner/__init__.py
deleted file mode 100644
index e8bcce4448e48e2d64354ba6770f9f426fb3d869..0000000000000000000000000000000000000000
--- a/spaces/KyanChen/RSPrompter/mmdet/engine/runner/__init__.py
+++ /dev/null
@@ -1,4 +0,0 @@
-# Copyright (c) OpenMMLab. All rights reserved.
-from .loops import TeacherStudentValLoop
-
-__all__ = ['TeacherStudentValLoop']
diff --git a/spaces/KyanChen/RSPrompter/mmdet/models/detectors/trident_faster_rcnn.py b/spaces/KyanChen/RSPrompter/mmdet/models/detectors/trident_faster_rcnn.py
deleted file mode 100644
index 4244925beaebea820f836b41ab5463f5f499f4d0..0000000000000000000000000000000000000000
--- a/spaces/KyanChen/RSPrompter/mmdet/models/detectors/trident_faster_rcnn.py
+++ /dev/null
@@ -1,81 +0,0 @@
-# Copyright (c) OpenMMLab. All rights reserved.
-from torch import Tensor
-
-from mmdet.registry import MODELS
-from mmdet.structures import SampleList
-from mmdet.utils import ConfigType, OptConfigType, OptMultiConfig
-from .faster_rcnn import FasterRCNN
-
-
-@MODELS.register_module()
-class TridentFasterRCNN(FasterRCNN):
- """Implementation of `TridentNet `_"""
-
- def __init__(self,
- backbone: ConfigType,
- rpn_head: ConfigType,
- roi_head: ConfigType,
- train_cfg: ConfigType,
- test_cfg: ConfigType,
- neck: OptConfigType = None,
- data_preprocessor: OptConfigType = None,
- init_cfg: OptMultiConfig = None) -> None:
-
- super().__init__(
- backbone=backbone,
- neck=neck,
- rpn_head=rpn_head,
- roi_head=roi_head,
- train_cfg=train_cfg,
- test_cfg=test_cfg,
- data_preprocessor=data_preprocessor,
- init_cfg=init_cfg)
- assert self.backbone.num_branch == self.roi_head.num_branch
- assert self.backbone.test_branch_idx == self.roi_head.test_branch_idx
- self.num_branch = self.backbone.num_branch
- self.test_branch_idx = self.backbone.test_branch_idx
-
- def _forward(self, batch_inputs: Tensor,
- batch_data_samples: SampleList) -> tuple:
- """copy the ``batch_data_samples`` to fit multi-branch."""
- num_branch = self.num_branch \
- if self.training or self.test_branch_idx == -1 else 1
- trident_data_samples = batch_data_samples * num_branch
- return super()._forward(
- batch_inputs=batch_inputs, batch_data_samples=trident_data_samples)
-
- def loss(self, batch_inputs: Tensor,
- batch_data_samples: SampleList) -> dict:
- """copy the ``batch_data_samples`` to fit multi-branch."""
- num_branch = self.num_branch \
- if self.training or self.test_branch_idx == -1 else 1
- trident_data_samples = batch_data_samples * num_branch
- return super().loss(
- batch_inputs=batch_inputs, batch_data_samples=trident_data_samples)
-
- def predict(self,
- batch_inputs: Tensor,
- batch_data_samples: SampleList,
- rescale: bool = True) -> SampleList:
- """copy the ``batch_data_samples`` to fit multi-branch."""
- num_branch = self.num_branch \
- if self.training or self.test_branch_idx == -1 else 1
- trident_data_samples = batch_data_samples * num_branch
- return super().predict(
- batch_inputs=batch_inputs,
- batch_data_samples=trident_data_samples,
- rescale=rescale)
-
- # TODO need to refactor
- def aug_test(self, imgs, img_metas, rescale=False):
- """Test with augmentations.
-
- If rescale is False, then returned bboxes and masks will fit the scale
- of imgs[0].
- """
- x = self.extract_feats(imgs)
- num_branch = (self.num_branch if self.test_branch_idx == -1 else 1)
- trident_img_metas = [img_metas * num_branch for img_metas in img_metas]
- proposal_list = self.rpn_head.aug_test_rpn(x, trident_img_metas)
- return self.roi_head.aug_test(
- x, proposal_list, img_metas, rescale=rescale)
diff --git a/spaces/KyanChen/RSPrompter/mmpl/evaluation/metrics/builder.py b/spaces/KyanChen/RSPrompter/mmpl/evaluation/metrics/builder.py
deleted file mode 100644
index bd55df759561b73656a71941e67f9c033d900dd7..0000000000000000000000000000000000000000
--- a/spaces/KyanChen/RSPrompter/mmpl/evaluation/metrics/builder.py
+++ /dev/null
@@ -1,34 +0,0 @@
-import copy
-import inspect
-from typing import List, Union
-
-import torch
-import torch.nn as nn
-import lightning
-import torchmetrics
-import torchmetrics.detection
-
-from mmengine.config import Config, ConfigDict
-from mmpl.registry import METRICS
-
-
-def register_pl_metrics() -> List[str]:
- """Register loggers in ``lightning.pytorch.loggers`` to the ``LOGGERS`` registry.
-
- Returns:
- List[str]: A list of registered optimizers' name.
- """
- pl_metrics = []
- for modules in [torchmetrics, torchmetrics.detection]:
- for module_name in dir(modules):
- if module_name.startswith('__'):
- continue
- _metric = getattr(modules, module_name)
- if inspect.isclass(_metric):
- METRICS.register_module(module=_metric)
- pl_metrics.append(module_name)
- return pl_metrics
-
-
-PL_METRICS = register_pl_metrics()
-
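A short sketch of what the registration above enables, assuming an mmengine-style METRICS registry and a torchmetrics version whose Accuracy class takes a task argument; the config values are illustrative only.

    # Hypothetical: build a registered torchmetrics class from a config dict.
    from mmpl.registry import METRICS

    accuracy = METRICS.build(dict(type='Accuracy', task='multiclass', num_classes=10))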
diff --git a/spaces/KyanChen/RSPrompter/mmpl/utils/typing_utils.py b/spaces/KyanChen/RSPrompter/mmpl/utils/typing_utils.py
deleted file mode 100644
index 6caf6de53274594e139dbe7c1973c747229bf010..0000000000000000000000000000000000000000
--- a/spaces/KyanChen/RSPrompter/mmpl/utils/typing_utils.py
+++ /dev/null
@@ -1,22 +0,0 @@
-# Copyright (c) OpenMMLab. All rights reserved.
-"""Collecting some commonly used type hint in mmdetection."""
-from typing import List, Optional, Sequence, Tuple, Union
-
-from mmengine.config import ConfigDict
-from mmengine.structures import InstanceData, PixelData
-
-# TODO: Need to avoid circular import with assigner and sampler
-# Type hint of config data
-ConfigType = Union[ConfigDict, dict]
-OptConfigType = Optional[ConfigType]
-# Type hint of one or more config data
-MultiConfig = Union[ConfigType, List[ConfigType]]
-OptMultiConfig = Optional[MultiConfig]
-
-InstanceList = List[InstanceData]
-OptInstanceList = Optional[InstanceList]
-
-PixelList = List[PixelData]
-OptPixelList = Optional[PixelList]
-
-RangeType = Sequence[Tuple[int, int]]
diff --git a/spaces/LabAlproITS/CyberDAS-FE/main.py b/spaces/LabAlproITS/CyberDAS-FE/main.py
deleted file mode 100644
index a4005077331080ef19a0ac5118f31d8b322bff5d..0000000000000000000000000000000000000000
--- a/spaces/LabAlproITS/CyberDAS-FE/main.py
+++ /dev/null
@@ -1,40 +0,0 @@
-#!/usr/bin/env python
-# encoding: utf-8
-
-from fastapi import FastAPI, Form, Depends, Request
-from fastapi.templating import Jinja2Templates
-from pydantic import BaseModel
-import pickle
-import json
-
-app = FastAPI()
-
-# Set the templates directory
-templates = Jinja2Templates(directory="templates")
-
-class Msg(BaseModel):
- msg: str
-
-
-class Req(BaseModel):
- age: int
- sex: int
- smoker: int
- bmi: float
- children: int
- region: int
-
-
-@app.get("/welcomeMessage")
-async def welcome():
- return {"message": "Hello World. Welcome to FastAPI!"}
-
-@app.get("/")
-async def root(request: Request):
- return templates.TemplateResponse(
- "index.html",
- {
- "request": request,
- "insurance_cost": 0,
- }
- )
diff --git a/spaces/Laihiujin/OneFormer/README.md b/spaces/Laihiujin/OneFormer/README.md
deleted file mode 100644
index 0adcc679d28eb3ec75ab7b60ed753f6e17795106..0000000000000000000000000000000000000000
--- a/spaces/Laihiujin/OneFormer/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: OneFormer
-emoji: 🎗️
-colorFrom: red
-colorTo: blue
-sdk: docker
-app_port: 7860
-pinned: false
-license: mit
-duplicated_from: shi-labs/OneFormer
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/Makiing/coolb-in-gtest/cloudflare/worker.js b/spaces/Makiing/coolb-in-gtest/cloudflare/worker.js
deleted file mode 100644
index e0debd750615f1329b2c72fbce73e1b9291f7137..0000000000000000000000000000000000000000
--- a/spaces/Makiing/coolb-in-gtest/cloudflare/worker.js
+++ /dev/null
@@ -1,18 +0,0 @@
-const TRAGET_HOST='hf4all-bingo.hf.space' // Replace this domain with your own; you can find it under Settings > Site domain.
-
-export default {
- async fetch(request) {
- const uri = new URL(request.url);
- if (uri.protocol === 'http:') {
- uri.protocol = 'https:';
- return new Response('', {
- status: 301,
- headers: {
- location: uri.toString(),
- },
- })
- }
- uri.host = TRAGET_HOST
- return fetch(new Request(uri.toString(), request));
- },
-};
diff --git a/spaces/Makiing/coolb-in-gtest/src/app/layout.tsx b/spaces/Makiing/coolb-in-gtest/src/app/layout.tsx
deleted file mode 100644
index 8b5122759987177b8dc4e4356d1d06cea25c15ea..0000000000000000000000000000000000000000
--- a/spaces/Makiing/coolb-in-gtest/src/app/layout.tsx
+++ /dev/null
@@ -1,47 +0,0 @@
-import { Metadata } from 'next'
-import { Toaster } from 'react-hot-toast'
-import { TailwindIndicator } from '@/components/tailwind-indicator'
-import { Providers } from '@/components/providers'
-import { Header } from '@/components/header'
-
-import '@/app/globals.scss'
-
-
-export const metadata: Metadata = {
- title: {
- default: 'Bing AI Chatbot',
- template: `%s - Bing AI Chatbot`
- },
- description: 'Bing AI Chatbot Web App.',
- themeColor: [
- { media: '(prefers-color-scheme: light)', color: 'white' },
- { media: '(prefers-color-scheme: dark)', color: 'dark' }
- ],
- icons: {
- icon: '/favicon.ico',
- shortcut: '../assets/images/logo.svg',
- apple: '../assets/images/logo.svg'
- }
-}
-
-interface RootLayoutProps {
- children: React.ReactNode
-}
-
-export default function RootLayout({ children }: RootLayoutProps) {
-  return (
-    <html lang="zh-CN">
-      <body>
-        <Toaster />
-        <Providers>
-          {/* @ts-ignore */}
-          <Header />
-          {children}
-          <TailwindIndicator />
-        </Providers>
-      </body>
-    </html>
-  )
-}
diff --git a/spaces/MarcusSu1216/XingTong/app.py b/spaces/MarcusSu1216/XingTong/app.py
deleted file mode 100644
index 8310b81340923a9aaea9ee5aba1d6e7811859097..0000000000000000000000000000000000000000
--- a/spaces/MarcusSu1216/XingTong/app.py
+++ /dev/null
@@ -1,75 +0,0 @@
-import io
-import os
-
-os.system("wget -P hubert/ https://huggingface.co/spaces/MarcusSu1216/XingTong/blob/main/hubert/checkpoint_best_legacy_500.pt")
-import gradio as gr
-import librosa
-import numpy as np
-import soundfile
-from inference.infer_tool import Svc
-import logging
-
-logging.getLogger('numba').setLevel(logging.WARNING)
-logging.getLogger('markdown_it').setLevel(logging.WARNING)
-logging.getLogger('urllib3').setLevel(logging.WARNING)
-logging.getLogger('matplotlib').setLevel(logging.WARNING)
-
-model = Svc("logs/44k/G_99200.pth", "configs/config.json", cluster_model_path="logs/44k/kmeans_10000.pt")
-
-def vc_fn(sid, input_audio, vc_transform, auto_f0,cluster_ratio, noise_scale):
- if input_audio is None:
- return "You need to upload an audio", None
- sampling_rate, audio = input_audio
- # print(audio.shape,sampling_rate)
- duration = audio.shape[0] / sampling_rate
- if duration > 100:
- return "请上传小于100s的音频,需要转换长音频请本地进行转换", None
- audio = (audio / np.iinfo(audio.dtype).max).astype(np.float32)
- if len(audio.shape) > 1:
- audio = librosa.to_mono(audio.transpose(1, 0))
- if sampling_rate != 16000:
- audio = librosa.resample(audio, orig_sr=sampling_rate, target_sr=16000)
- print(audio.shape)
- out_wav_path = "temp.wav"
- soundfile.write(out_wav_path, audio, 16000, format="wav")
- print( cluster_ratio, auto_f0, noise_scale)
- out_audio, out_sr = model.infer(sid, vc_transform, out_wav_path,
- cluster_infer_ratio=cluster_ratio,
- auto_predict_f0=auto_f0,
- noice_scale=noise_scale
- )
- return "转换成功", (44100, out_audio.numpy())
-
-
-app = gr.Blocks()
-with app:
- with gr.Tabs():
- with gr.TabItem("介绍"):
- gr.Markdown(value="""
- Online voice synthesis for 星瞳_Official (XingTong), generated with so-vits-svc-4.0.\n
-
- Usage notes:\n
- 1. Please use vocal material with accompaniment and harmonies cleanly removed, shorter than 100 seconds, in mp3 or wav format.\n
- 2. UVR5 is recommended for removing the accompaniment; detailed tutorials are available on Bilibili.\n
- 3. If that is not an option, the following vocal-removal websites are recommended:\n
- https://vocalremover.org/zh/\n
- https://tuanziai.com/vocal-remover/upload\n
- https://www.lalal.ai/zh-hans/\n
- 4. The online version runs on a free 2-core / 16 GB server, so conversion is slow; please be patient.\n
- 5. When using this model, please credit the author 一闪一闪小星瞳 and this project's address.\n
- 6. If you run into problems, you can message me on Bilibili: https://space.bilibili.com/38523418\n
- 7. Do not use audio produced by this voice model for commercial purposes.
- """)
- spks = list(model.spk2id.keys())
- sid = gr.Dropdown(label="音色", choices=["XT4.0"], value="XT4.0")
- vc_input3 = gr.Audio(label="上传音频(长度建议小于100秒)")
- vc_transform = gr.Number(label="变调(整数,可以正负,半音数量,升高八度就是12)", value=0)
- cluster_ratio = gr.Number(label="聚类模型混合比例,0-1之间,默认为0不启用聚类,能提升音色相似度,但会导致咬字下降(如果使用建议0.5左右)", value=0)
- auto_f0 = gr.Checkbox(label="自动f0预测,配合聚类模型f0预测效果更好,会导致变调功能失效(仅限转换语音,歌声不要勾选此项会究极跑调)", value=False)
- noise_scale = gr.Number(label="noise_scale 建议不要动,会影响音质,玄学参数", value=0.4)
- vc_submit = gr.Button("转换", variant="primary")
- vc_output1 = gr.Textbox(label="Output Message")
- vc_output2 = gr.Audio(label="Output Audio")
- vc_submit.click(vc_fn, [sid, vc_input3, vc_transform,auto_f0,cluster_ratio, noise_scale], [vc_output1, vc_output2])
-
- app.launch()
diff --git a/spaces/Mellow-ai/PhotoAI_Mellow/rembg/sessions/u2net.py b/spaces/Mellow-ai/PhotoAI_Mellow/rembg/sessions/u2net.py
deleted file mode 100644
index 4144a10e8b4bfa7a19e480dd955923d800931540..0000000000000000000000000000000000000000
--- a/spaces/Mellow-ai/PhotoAI_Mellow/rembg/sessions/u2net.py
+++ /dev/null
@@ -1,49 +0,0 @@
-import os
-from typing import List
-
-import numpy as np
-import pooch
-from PIL import Image
-from PIL.Image import Image as PILImage
-
-from .base import BaseSession
-
-
-class U2netSession(BaseSession):
- def predict(self, img: PILImage, *args, **kwargs) -> List[PILImage]:
- ort_outs = self.inner_session.run(
- None,
- self.normalize(
- img, (0.485, 0.456, 0.406), (0.229, 0.224, 0.225), (320, 320)
- ),
- )
-
- pred = ort_outs[0][:, 0, :, :]
-
- ma = np.max(pred)
- mi = np.min(pred)
-
- pred = (pred - mi) / (ma - mi)
- pred = np.squeeze(pred)
-
- mask = Image.fromarray((pred * 255).astype("uint8"), mode="L")
- mask = mask.resize(img.size, Image.LANCZOS)
-
- return [mask]
-
- @classmethod
- def download_models(cls, *args, **kwargs):
- fname = f"{cls.name()}.onnx"
- pooch.retrieve(
- "https://github.com/danielgatis/rembg/releases/download/v0.0.0/u2net.onnx",
- "md5:60024c5c889badc19c04ad937298a77b",
- fname=fname,
- path=cls.u2net_home(),
- progressbar=True,
- )
-
- return os.path.join(cls.u2net_home(), fname)
-
- @classmethod
- def name(cls, *args, **kwargs):
- return "u2net"
diff --git a/spaces/MonkeyDBoa/AvengersDetector/README.md b/spaces/MonkeyDBoa/AvengersDetector/README.md
deleted file mode 100644
index 867385765abfad0d8dfc95ab0b4be4d30f429578..0000000000000000000000000000000000000000
--- a/spaces/MonkeyDBoa/AvengersDetector/README.md
+++ /dev/null
@@ -1,37 +0,0 @@
----
-title: AvengersDetector
-emoji: ⚡
-colorFrom: green
-colorTo: pink
-sdk: gradio
-app_file: app.py
-pinned: false
----
-
-# Configuration
-
-`title`: _string_
-Display title for the Space
-
-`emoji`: _string_
-Space emoji (emoji-only character allowed)
-
-`colorFrom`: _string_
-Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray)
-
-`colorTo`: _string_
-Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray)
-
-`sdk`: _string_
-Can be either `gradio`, `streamlit`, or `static`
-
-`sdk_version` : _string_
-Only applicable for `streamlit` SDK.
-See [doc](https://hf.co/docs/hub/spaces) for more info on supported versions.
-
-`app_file`: _string_
-Path to your main application file (which contains either `gradio` or `streamlit` Python code, or `static` html code).
-Path is relative to the root of the repository.
-
-`pinned`: _boolean_
-Whether the Space stays on top of your list.
diff --git a/spaces/NCTCMumbai/NCTC/models/official/benchmark/models/resnet_cifar_main.py b/spaces/NCTCMumbai/NCTC/models/official/benchmark/models/resnet_cifar_main.py
deleted file mode 100644
index 4a02fec8b96e25228e6e0467d646c26995f944fc..0000000000000000000000000000000000000000
--- a/spaces/NCTCMumbai/NCTC/models/official/benchmark/models/resnet_cifar_main.py
+++ /dev/null
@@ -1,284 +0,0 @@
-# Copyright 2018 The TensorFlow Authors. All Rights Reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-# ==============================================================================
-"""Runs a ResNet model on the Cifar-10 dataset."""
-
-from __future__ import absolute_import
-from __future__ import division
-from __future__ import print_function
-
-from absl import app
-from absl import flags
-from absl import logging
-import numpy as np
-import tensorflow as tf
-from official.benchmark.models import cifar_preprocessing
-from official.benchmark.models import resnet_cifar_model
-from official.benchmark.models import synthetic_util
-from official.utils.flags import core as flags_core
-from official.utils.misc import distribution_utils
-from official.utils.misc import keras_utils
-from official.vision.image_classification.resnet import common
-
-
-LR_SCHEDULE = [ # (multiplier, epoch to start) tuples
- (0.1, 91), (0.01, 136), (0.001, 182)
-]
-
-
-def learning_rate_schedule(current_epoch,
- current_batch,
- batches_per_epoch,
- batch_size):
- """Handles linear scaling rule and LR decay.
-
- Scale learning rate at epoch boundaries provided in LR_SCHEDULE by the
- provided scaling factor.
-
- Args:
- current_epoch: integer, current epoch indexed from 0.
- current_batch: integer, current batch in the current epoch, indexed from 0.
- batches_per_epoch: integer, number of steps in an epoch.
- batch_size: integer, total batch size.
-
- Returns:
- Adjusted learning rate.
- """
- del current_batch, batches_per_epoch # not used
- initial_learning_rate = common.BASE_LEARNING_RATE * batch_size / 128
- learning_rate = initial_learning_rate
- for mult, start_epoch in LR_SCHEDULE:
- if current_epoch >= start_epoch:
- learning_rate = initial_learning_rate * mult
- else:
- break
- return learning_rate
-
-
-class LearningRateBatchScheduler(tf.keras.callbacks.Callback):
- """Callback to update learning rate on every batch (not epoch boundaries).
-
- N.B. Only supports Keras optimizers, not TF optimizers.
-
- Attributes:
- schedule: a function that takes an epoch index and a batch index as input
- (both integer, indexed from 0) and returns a new learning rate as
- output (float).
- """
-
- def __init__(self, schedule, batch_size, steps_per_epoch):
- super(LearningRateBatchScheduler, self).__init__()
- self.schedule = schedule
- self.steps_per_epoch = steps_per_epoch
- self.batch_size = batch_size
- self.epochs = -1
- self.prev_lr = -1
-
- def on_epoch_begin(self, epoch, logs=None):
- if not hasattr(self.model.optimizer, 'learning_rate'):
- raise ValueError('Optimizer must have a "learning_rate" attribute.')
- self.epochs += 1
-
- def on_batch_begin(self, batch, logs=None):
- """Executes before step begins."""
- lr = self.schedule(self.epochs,
- batch,
- self.steps_per_epoch,
- self.batch_size)
- if not isinstance(lr, (float, np.float32, np.float64)):
- raise ValueError('The output of the "schedule" function should be float.')
- if lr != self.prev_lr:
- self.model.optimizer.learning_rate = lr # lr should be a float here
- self.prev_lr = lr
- logging.debug(
- 'Epoch %05d Batch %05d: LearningRateBatchScheduler '
- 'change learning rate to %s.', self.epochs, batch, lr)
-
-
-def run(flags_obj):
- """Run ResNet Cifar-10 training and eval loop using native Keras APIs.
-
- Args:
- flags_obj: An object containing parsed flag values.
-
- Raises:
- ValueError: If fp16 is passed as it is not currently supported.
-
- Returns:
- Dictionary of training and eval stats.
- """
- keras_utils.set_session_config(
- enable_xla=flags_obj.enable_xla)
-
- # Execute flag override logic for better model performance
- if flags_obj.tf_gpu_thread_mode:
- keras_utils.set_gpu_thread_mode_and_count(
- per_gpu_thread_count=flags_obj.per_gpu_thread_count,
- gpu_thread_mode=flags_obj.tf_gpu_thread_mode,
- num_gpus=flags_obj.num_gpus,
- datasets_num_private_threads=flags_obj.datasets_num_private_threads)
- common.set_cudnn_batchnorm_mode()
-
- dtype = flags_core.get_tf_dtype(flags_obj)
- if dtype == 'fp16':
- raise ValueError('dtype fp16 is not supported in Keras. Use the default '
- 'value(fp32).')
-
- data_format = flags_obj.data_format
- if data_format is None:
- data_format = ('channels_first' if tf.config.list_physical_devices('GPU')
- else 'channels_last')
- tf.keras.backend.set_image_data_format(data_format)
-
- strategy = distribution_utils.get_distribution_strategy(
- distribution_strategy=flags_obj.distribution_strategy,
- num_gpus=flags_obj.num_gpus,
- all_reduce_alg=flags_obj.all_reduce_alg,
- num_packs=flags_obj.num_packs)
-
- if strategy:
- # flags_obj.enable_get_next_as_optional controls whether enabling
- # get_next_as_optional behavior in DistributedIterator. If true, last
- # partial batch can be supported.
- strategy.extended.experimental_enable_get_next_as_optional = (
- flags_obj.enable_get_next_as_optional
- )
-
- strategy_scope = distribution_utils.get_strategy_scope(strategy)
-
- if flags_obj.use_synthetic_data:
- synthetic_util.set_up_synthetic_data()
- input_fn = common.get_synth_input_fn(
- height=cifar_preprocessing.HEIGHT,
- width=cifar_preprocessing.WIDTH,
- num_channels=cifar_preprocessing.NUM_CHANNELS,
- num_classes=cifar_preprocessing.NUM_CLASSES,
- dtype=flags_core.get_tf_dtype(flags_obj),
- drop_remainder=True)
- else:
- synthetic_util.undo_set_up_synthetic_data()
- input_fn = cifar_preprocessing.input_fn
-
- train_input_dataset = input_fn(
- is_training=True,
- data_dir=flags_obj.data_dir,
- batch_size=flags_obj.batch_size,
- parse_record_fn=cifar_preprocessing.parse_record,
- datasets_num_private_threads=flags_obj.datasets_num_private_threads,
- dtype=dtype,
- # Setting drop_remainder to avoid the partial batch logic in normalization
- # layer, which triggers tf.where and leads to extra memory copy of input
- # sizes between host and GPU.
- drop_remainder=(not flags_obj.enable_get_next_as_optional))
-
- eval_input_dataset = None
- if not flags_obj.skip_eval:
- eval_input_dataset = input_fn(
- is_training=False,
- data_dir=flags_obj.data_dir,
- batch_size=flags_obj.batch_size,
- parse_record_fn=cifar_preprocessing.parse_record)
-
- steps_per_epoch = (
- cifar_preprocessing.NUM_IMAGES['train'] // flags_obj.batch_size)
- lr_schedule = 0.1
- if flags_obj.use_tensor_lr:
- initial_learning_rate = common.BASE_LEARNING_RATE * flags_obj.batch_size / 128
- lr_schedule = tf.keras.optimizers.schedules.PiecewiseConstantDecay(
- boundaries=list(p[1] * steps_per_epoch for p in LR_SCHEDULE),
- values=[initial_learning_rate] +
- list(p[0] * initial_learning_rate for p in LR_SCHEDULE))
-
- with strategy_scope:
- optimizer = common.get_optimizer(lr_schedule)
- model = resnet_cifar_model.resnet56(classes=cifar_preprocessing.NUM_CLASSES)
- model.compile(
- loss='sparse_categorical_crossentropy',
- optimizer=optimizer,
- metrics=(['sparse_categorical_accuracy']
- if flags_obj.report_accuracy_metrics else None),
- run_eagerly=flags_obj.run_eagerly)
-
- train_epochs = flags_obj.train_epochs
-
- callbacks = common.get_callbacks()
-
- if not flags_obj.use_tensor_lr:
- lr_callback = LearningRateBatchScheduler(
- schedule=learning_rate_schedule,
- batch_size=flags_obj.batch_size,
- steps_per_epoch=steps_per_epoch)
- callbacks.append(lr_callback)
-
- # if multiple epochs, ignore the train_steps flag.
- if train_epochs <= 1 and flags_obj.train_steps:
- steps_per_epoch = min(flags_obj.train_steps, steps_per_epoch)
- train_epochs = 1
-
- num_eval_steps = (cifar_preprocessing.NUM_IMAGES['validation'] //
- flags_obj.batch_size)
-
- validation_data = eval_input_dataset
- if flags_obj.skip_eval:
- if flags_obj.set_learning_phase_to_train:
- # TODO(haoyuzhang): Understand slowdown of setting learning phase when
- # not using distribution strategy.
- tf.keras.backend.set_learning_phase(1)
- num_eval_steps = None
- validation_data = None
-
- if not strategy and flags_obj.explicit_gpu_placement:
- # TODO(b/135607227): Add device scope automatically in Keras training loop
- # when not using distribution strategy.
- no_dist_strat_device = tf.device('/device:GPU:0')
- no_dist_strat_device.__enter__()
-
- history = model.fit(train_input_dataset,
- epochs=train_epochs,
- steps_per_epoch=steps_per_epoch,
- callbacks=callbacks,
- validation_steps=num_eval_steps,
- validation_data=validation_data,
- validation_freq=flags_obj.epochs_between_evals,
- verbose=2)
- eval_output = None
- if not flags_obj.skip_eval:
- eval_output = model.evaluate(eval_input_dataset,
- steps=num_eval_steps,
- verbose=2)
-
- if not strategy and flags_obj.explicit_gpu_placement:
- no_dist_strat_device.__exit__()
-
- stats = common.build_stats(history, eval_output, callbacks)
- return stats
-
-
-def define_cifar_flags():
- common.define_keras_flags(dynamic_loss_scale=False)
-
- flags_core.set_defaults(data_dir='/tmp/cifar10_data/cifar-10-batches-bin',
- model_dir='/tmp/cifar10_model',
- epochs_between_evals=10,
- batch_size=128)
-
-
-def main(_):
- return run(flags.FLAGS)
-
-
-if __name__ == '__main__':
- logging.set_verbosity(logging.INFO)
- define_cifar_flags()
- app.run(main)
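To make the schedule above concrete, here is a small standalone re-computation of learning_rate_schedule; it assumes BASE_LEARNING_RATE = 0.1 for the value this script imports from the reference common module, so treat that constant as an assumption.

    # Standalone sketch of the piecewise schedule for batch_size=128.
    BASE_LEARNING_RATE = 0.1          # assumed value of common.BASE_LEARNING_RATE
    LR_SCHEDULE = [(0.1, 91), (0.01, 136), (0.001, 182)]

    def lr_at_epoch(epoch, batch_size=128):
        initial = BASE_LEARNING_RATE * batch_size / 128   # linear scaling rule
        lr = initial
        for mult, start_epoch in LR_SCHEDULE:
            if epoch >= start_epoch:
                lr = initial * mult
            else:
                break
        return lr

    for epoch in (0, 90, 91, 136, 182):
        print(epoch, lr_at_epoch(epoch))   # 0.1, 0.1, 0.01, 0.001, 0.0001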
diff --git a/spaces/Nesip/meta-llama-Llama-2-70b-chat-hf/app.py b/spaces/Nesip/meta-llama-Llama-2-70b-chat-hf/app.py
deleted file mode 100644
index a461703287a9bda9c93cfdfbb94d4c3cf90aaba9..0000000000000000000000000000000000000000
--- a/spaces/Nesip/meta-llama-Llama-2-70b-chat-hf/app.py
+++ /dev/null
@@ -1,3 +0,0 @@
-import gradio as gr
-
-gr.Interface.load("models/meta-llama/Llama-2-70b-chat-hf").launch()
\ No newline at end of file
diff --git a/spaces/Nick1/rvc-models/lib/infer_pack/modules/F0Predictor/HarvestF0Predictor.py b/spaces/Nick1/rvc-models/lib/infer_pack/modules/F0Predictor/HarvestF0Predictor.py
deleted file mode 100644
index b412ba2814e114ca7bb00b6fd6ef217f63d788a3..0000000000000000000000000000000000000000
--- a/spaces/Nick1/rvc-models/lib/infer_pack/modules/F0Predictor/HarvestF0Predictor.py
+++ /dev/null
@@ -1,86 +0,0 @@
-from lib.infer_pack.modules.F0Predictor.F0Predictor import F0Predictor
-import pyworld
-import numpy as np
-
-
-class HarvestF0Predictor(F0Predictor):
- def __init__(self, hop_length=512, f0_min=50, f0_max=1100, sampling_rate=44100):
- self.hop_length = hop_length
- self.f0_min = f0_min
- self.f0_max = f0_max
- self.sampling_rate = sampling_rate
-
- def interpolate_f0(self, f0):
- """
- Interpolate the F0 contour over unvoiced (zero) frames.
- """
-
- data = np.reshape(f0, (f0.size, 1))
-
- vuv_vector = np.zeros((data.size, 1), dtype=np.float32)
- vuv_vector[data > 0.0] = 1.0
- vuv_vector[data <= 0.0] = 0.0
-
- ip_data = data
-
- frame_number = data.size
- last_value = 0.0
- for i in range(frame_number):
- if data[i] <= 0.0:
- j = i + 1
- for j in range(i + 1, frame_number):
- if data[j] > 0.0:
- break
- if j < frame_number - 1:
- if last_value > 0.0:
- step = (data[j] - data[i - 1]) / float(j - i)
- for k in range(i, j):
- ip_data[k] = data[i - 1] + step * (k - i + 1)
- else:
- for k in range(i, j):
- ip_data[k] = data[j]
- else:
- for k in range(i, frame_number):
- ip_data[k] = last_value
- else:
- ip_data[i] = data[i] # this copy may be unnecessary
- last_value = data[i]
-
- return ip_data[:, 0], vuv_vector[:, 0]
-
- def resize_f0(self, x, target_len):
- source = np.array(x)
- source[source < 0.001] = np.nan
- target = np.interp(
- np.arange(0, len(source) * target_len, len(source)) / target_len,
- np.arange(0, len(source)),
- source,
- )
- res = np.nan_to_num(target)
- return res
-
- def compute_f0(self, wav, p_len=None):
- if p_len is None:
- p_len = wav.shape[0] // self.hop_length
- f0, t = pyworld.harvest(
- wav.astype(np.double),
- fs=self.sampling_rate,
- f0_ceil=self.f0_max,
- f0_floor=self.f0_min,
- frame_period=1000 * self.hop_length / self.sampling_rate,
- )
- f0 = pyworld.stonemask(wav.astype(np.double), f0, t, self.sampling_rate)
- return self.interpolate_f0(self.resize_f0(f0, p_len))[0]
-
- def compute_f0_uv(self, wav, p_len=None):
- if p_len is None:
- p_len = wav.shape[0] // self.hop_length
- f0, t = pyworld.harvest(
- wav.astype(np.double),
- fs=self.sampling_rate,
- f0_floor=self.f0_min,
- f0_ceil=self.f0_max,
- frame_period=1000 * self.hop_length / self.sampling_rate,
- )
- f0 = pyworld.stonemask(wav.astype(np.double), f0, t, self.sampling_rate)
- return self.interpolate_f0(self.resize_f0(f0, p_len))
diff --git a/spaces/Nultx/VITS-TTS/text/sanskrit.py b/spaces/Nultx/VITS-TTS/text/sanskrit.py
deleted file mode 100644
index 0223aaac384a2f850f5bc20651fc18eb964607d0..0000000000000000000000000000000000000000
--- a/spaces/Nultx/VITS-TTS/text/sanskrit.py
+++ /dev/null
@@ -1,62 +0,0 @@
-import re
-from indic_transliteration import sanscript
-
-
-# List of (iast, ipa) pairs:
-_iast_to_ipa = [(re.compile('%s' % x[0]), x[1]) for x in [
- ('a', 'ə'),
- ('ā', 'aː'),
- ('ī', 'iː'),
- ('ū', 'uː'),
- ('ṛ', 'ɹ`'),
- ('ṝ', 'ɹ`ː'),
- ('ḷ', 'l`'),
- ('ḹ', 'l`ː'),
- ('e', 'eː'),
- ('o', 'oː'),
- ('k', 'k⁼'),
- ('k⁼h', 'kʰ'),
- ('g', 'g⁼'),
- ('g⁼h', 'gʰ'),
- ('ṅ', 'ŋ'),
- ('c', 'ʧ⁼'),
- ('ʧ⁼h', 'ʧʰ'),
- ('j', 'ʥ⁼'),
- ('ʥ⁼h', 'ʥʰ'),
- ('ñ', 'n^'),
- ('ṭ', 't`⁼'),
- ('t`⁼h', 't`ʰ'),
- ('ḍ', 'd`⁼'),
- ('d`⁼h', 'd`ʰ'),
- ('ṇ', 'n`'),
- ('t', 't⁼'),
- ('t⁼h', 'tʰ'),
- ('d', 'd⁼'),
- ('d⁼h', 'dʰ'),
- ('p', 'p⁼'),
- ('p⁼h', 'pʰ'),
- ('b', 'b⁼'),
- ('b⁼h', 'bʰ'),
- ('y', 'j'),
- ('ś', 'ʃ'),
- ('ṣ', 's`'),
- ('r', 'ɾ'),
- ('l̤', 'l`'),
- ('h', 'ɦ'),
- ("'", ''),
- ('~', '^'),
- ('ṃ', '^')
-]]
-
-
-def devanagari_to_ipa(text):
- text = text.replace('ॐ', 'ओम्')
- text = re.sub(r'\s*।\s*$', '.', text)
- text = re.sub(r'\s*।\s*', ', ', text)
- text = re.sub(r'\s*॥', '.', text)
- text = sanscript.transliterate(text, sanscript.DEVANAGARI, sanscript.IAST)
- for regex, replacement in _iast_to_ipa:
- text = re.sub(regex, replacement, text)
- text = re.sub('(.)[`ː]*ḥ', lambda x: x.group(0)
- [:-1]+'h'+x.group(1)+'*', text)
- return text
diff --git a/spaces/OAOA/DifFace/facelib/detection/yolov5face/utils/extract_ckpt.py b/spaces/OAOA/DifFace/facelib/detection/yolov5face/utils/extract_ckpt.py
deleted file mode 100644
index 4b8b631348f2d0cdea4e5a3594bb59f3e8f34a0f..0000000000000000000000000000000000000000
--- a/spaces/OAOA/DifFace/facelib/detection/yolov5face/utils/extract_ckpt.py
+++ /dev/null
@@ -1,5 +0,0 @@
-import torch
-import sys
-sys.path.insert(0,'./facelib/detection/yolov5face')
-model = torch.load('facelib/detection/yolov5face/yolov5n-face.pt', map_location='cpu')['model']
-torch.save(model.state_dict(),'weights/facelib/yolov5n-face.pth')
\ No newline at end of file
diff --git a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/examples/truncated_bptt/__init__.py b/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/examples/truncated_bptt/__init__.py
deleted file mode 100644
index eee484d427a68828462469d133144a8d7c052c40..0000000000000000000000000000000000000000
--- a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/examples/truncated_bptt/__init__.py
+++ /dev/null
@@ -1,6 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-from . import transformer_xl_model, truncated_bptt_lm_task # noqa
diff --git a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/fairseq/models/lightconv.py b/spaces/OFA-Sys/OFA-Image_Caption/fairseq/fairseq/models/lightconv.py
deleted file mode 100644
index 4edfe359379bc2445c1ae1ada04bd34ca4a32798..0000000000000000000000000000000000000000
--- a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/fairseq/models/lightconv.py
+++ /dev/null
@@ -1,1019 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-import math
-
-import torch
-import torch.nn as nn
-import torch.nn.functional as F
-from fairseq import utils
-from fairseq.models import (
- FairseqEncoder,
- FairseqEncoderDecoderModel,
- FairseqIncrementalDecoder,
- register_model,
- register_model_architecture,
-)
-from fairseq.modules import (
- AdaptiveSoftmax,
- DynamicConv,
- FairseqDropout,
- LayerNorm,
- LightweightConv,
- MultiheadAttention,
- PositionalEmbedding,
-)
-from fairseq.utils import safe_hasattr
-
-
-@register_model("lightconv")
-class LightConvModel(FairseqEncoderDecoderModel):
- """
- LightConv and DynamicConv model from "Pay Less Attention with Lightweight and Dynamic Convolutions" (Wu et al., 2019).
- To use LightConv please set ``--encoder-conv-type lightweight --decoder-conv-type lightweight``
- To use DynamicConv please set ``--encoder-conv-type dynamic --decoder-conv-type dynamic``
-
- Args:
- encoder (LightConvEncoder): the encoder
- decoder (LightConvDecoder): the decoder
-
- The LightConv model provides the following named architectures and
- command-line arguments:
-
- .. argparse::
- :ref: fairseq.models.lightconv_parser
- :prog:
- """
-
- @classmethod
- def hub_models(cls):
- # fmt: off
-
- def moses_subword(path):
- return {
- 'path': path,
- 'tokenizer': 'moses',
- 'bpe': 'subword_nmt',
- }
-
- return {
- 'lightconv.no_glu.iwslt14.de-en': moses_subword('https://dl.fbaipublicfiles.com/fairseq/models/dynamicconv/iwslt14.de-en.lightconv.tar.gz'),
- 'dynamicconv.no_glu.iwslt14.de-en': moses_subword('https://dl.fbaipublicfiles.com/fairseq/models/dynamicconv/iwslt14.de-en.dynamicconv.tar.gz'),
- 'lightconv.no_glu.wmt16.en-de': moses_subword('https://dl.fbaipublicfiles.com/fairseq/models/dynamicconv/wmt16.en-de.joined-dict.lightconv.tar.gz'),
- 'dynamicconv.no_glu.wmt16.en-de': moses_subword('https://dl.fbaipublicfiles.com/fairseq/models/dynamicconv/wmt16.en-de.joined-dict.dynamicconv.tar.gz'),
- 'lightconv.glu.wmt16.en-de': moses_subword('https://dl.fbaipublicfiles.com/fairseq/models/dynamicconv/wmt16.en-de.joined-dict.lightconv-glu.tar.gz'),
- 'dynamicconv.glu.wmt16.en-de': moses_subword('https://dl.fbaipublicfiles.com/fairseq/models/dynamicconv/wmt16.en-de.joined-dict.dynamicconv-glu.tar.gz'),
- 'lightconv.glu.wmt17.en-de': moses_subword('https://dl.fbaipublicfiles.com/fairseq/models/dynamicconv/wmt16.en-de.joined-dict.lightconv-glu.tar.gz'),
- 'dynamicconv.glu.wmt17.en-de': moses_subword('https://dl.fbaipublicfiles.com/fairseq/models/dynamicconv/wmt16.en-de.joined-dict.dynamicconv-glu.tar.gz'),
- 'lightconv.glu.wmt14.en-fr': moses_subword('https://dl.fbaipublicfiles.com/fairseq/models/dynamicconv/wmt14.en-fr.joined-dict.lightconv-glu.tar.gz'),
- 'dynamicconv.glu.wmt14.en-fr': moses_subword('https://dl.fbaipublicfiles.com/fairseq/models/dynamicconv/wmt14.en-fr.joined-dict.dynamicconv-glu.tar.gz'),
- 'lightconv.glu.wmt17.zh-en': moses_subword('https://dl.fbaipublicfiles.com/fairseq/models/dynamicconv/wmt17.zh-en.lightconv-glu.tar.gz'),
- 'dynamicconv.glu.wmt17.zh-en': moses_subword('https://dl.fbaipublicfiles.com/fairseq/models/dynamicconv/wmt17.zh-en.dynamicconv-glu.tar.gz'),
- }
- # fmt: on
-
- def __init__(self, encoder, decoder):
- super().__init__(encoder, decoder)
-
- @staticmethod
- def add_args(parser):
- """Add model-specific arguments to the parser."""
- parser.add_argument(
- "--dropout", type=float, metavar="D", help="dropout probability"
- )
- parser.add_argument(
- "--attention-dropout",
- type=float,
- metavar="D",
- help="dropout probability for attention weights",
- )
- parser.add_argument(
- "--relu-dropout",
- type=float,
- metavar="D",
- help="dropout probability after ReLU in FFN",
- )
- parser.add_argument(
- "--input-dropout",
- type=float,
- metavar="D",
- help="dropout probability of the inputs",
- )
- parser.add_argument(
- "--encoder-embed-path",
- type=str,
- metavar="STR",
- help="path to pre-trained encoder embedding",
- )
- parser.add_argument(
- "--encoder-embed-dim",
- type=int,
- metavar="N",
- help="encoder embedding dimension",
- )
- parser.add_argument(
- "--encoder-conv-dim",
- type=int,
- metavar="N",
- help="encoder embedding dimension",
- )
- parser.add_argument(
- "--encoder-ffn-embed-dim",
- type=int,
- metavar="N",
- help="encoder embedding dimension for FFN",
- )
- parser.add_argument(
- "--encoder-layers", type=int, metavar="N", help="num encoder layers"
- )
- parser.add_argument(
- "--encoder-attention-heads",
- type=int,
- metavar="N",
- help="num encoder attention heads or LightConv/DynamicConv heads",
- )
- parser.add_argument(
- "--encoder-normalize-before",
- action="store_true",
- help="apply layernorm before each encoder block",
- )
- parser.add_argument(
- "--encoder-learned-pos",
- action="store_true",
- help="use learned positional embeddings in the encoder",
- )
- parser.add_argument(
- "--decoder-embed-path",
- type=str,
- metavar="STR",
- help="path to pre-trained decoder embedding",
- )
- parser.add_argument(
- "--decoder-embed-dim",
- type=int,
- metavar="N",
- help="decoder embedding dimension",
- )
- parser.add_argument(
- "--decoder-conv-dim",
- type=int,
- metavar="N",
- help="decoder embedding dimension",
- )
- parser.add_argument(
- "--decoder-ffn-embed-dim",
- type=int,
- metavar="N",
- help="decoder embedding dimension for FFN",
- )
- parser.add_argument(
- "--decoder-layers", type=int, metavar="N", help="num decoder layers"
- )
- parser.add_argument(
- "--decoder-attention-heads",
- type=int,
- metavar="N",
- help="num decoder attention heads or LightConv/DynamicConv heads",
- )
- parser.add_argument(
- "--decoder-learned-pos",
- action="store_true",
- help="use learned positional embeddings in the decoder",
- )
- parser.add_argument(
- "--decoder-normalize-before",
- action="store_true",
- help="apply layernorm before each decoder block",
- )
- parser.add_argument(
- "--share-decoder-input-output-embed",
- action="store_true",
- help="share decoder input and output embeddings",
- )
- parser.add_argument(
- "--share-all-embeddings",
- action="store_true",
- help="share encoder, decoder and output embeddings"
- " (requires shared dictionary and embed dim)",
- )
- parser.add_argument(
- "--adaptive-softmax-cutoff",
- metavar="EXPR",
- help="comma separated list of adaptive softmax cutoff points. "
- "Must be used with adaptive_loss criterion",
- ),
- parser.add_argument(
- "--adaptive-softmax-dropout",
- type=float,
- metavar="D",
- help="sets adaptive softmax dropout for the tail projections",
- )
-
- """LightConv and DynamicConv arguments"""
- parser.add_argument(
- "--encoder-kernel-size-list",
- type=lambda x: utils.eval_str_list(x, int),
- help='list of kernel size (default: "[3,7,15,31,31,31,31]")',
- )
- parser.add_argument(
- "--decoder-kernel-size-list",
- type=lambda x: utils.eval_str_list(x, int),
- help='list of kernel size (default: "[3,7,15,31,31,31]")',
- )
- parser.add_argument(
- "--encoder-glu", type=utils.eval_bool, help="glu after in proj"
- )
- parser.add_argument(
- "--decoder-glu", type=utils.eval_bool, help="glu after in proj"
- )
- parser.add_argument(
- "--encoder-conv-type",
- default="dynamic",
- type=str,
- choices=["dynamic", "lightweight"],
- help="type of convolution",
- )
- parser.add_argument(
- "--decoder-conv-type",
- default="dynamic",
- type=str,
- choices=["dynamic", "lightweight"],
- help="type of convolution",
- )
- parser.add_argument("--weight-softmax", default=True, type=utils.eval_bool)
- parser.add_argument(
- "--weight-dropout",
- type=float,
- metavar="D",
- help="dropout probability for conv weights",
- )
-
- @classmethod
- def build_model(cls, args, task):
- """Build a new model instance."""
-
- # make sure all arguments are present in older models
- base_architecture(args)
-
- if not safe_hasattr(args, "max_source_positions"):
- args.max_source_positions = 1024
- if not safe_hasattr(args, "max_target_positions"):
- args.max_target_positions = 1024
-
- src_dict, tgt_dict = task.source_dictionary, task.target_dictionary
-
- def build_embedding(dictionary, embed_dim, path=None):
- num_embeddings = len(dictionary)
- padding_idx = dictionary.pad()
- emb = Embedding(num_embeddings, embed_dim, padding_idx)
- # if provided, load from preloaded dictionaries
- if path:
- embed_dict = utils.parse_embedding(path)
- utils.load_embedding(embed_dict, dictionary, emb)
- return emb
-
- if args.share_all_embeddings:
- if src_dict != tgt_dict:
- raise RuntimeError(
- "--share-all-embeddings requires a joined dictionary"
- )
- if args.encoder_embed_dim != args.decoder_embed_dim:
- raise RuntimeError(
- "--share-all-embeddings requires --encoder-embed-dim to match --decoder-embed-dim"
- )
- if args.decoder_embed_path and (
- args.decoder_embed_path != args.encoder_embed_path
- ):
- raise RuntimeError(
- "--share-all-embeddings not compatible with --decoder-embed-path"
- )
- encoder_embed_tokens = build_embedding(
- src_dict, args.encoder_embed_dim, args.encoder_embed_path
- )
- decoder_embed_tokens = encoder_embed_tokens
- args.share_decoder_input_output_embed = True
- else:
- encoder_embed_tokens = build_embedding(
- src_dict, args.encoder_embed_dim, args.encoder_embed_path
- )
- decoder_embed_tokens = build_embedding(
- tgt_dict, args.decoder_embed_dim, args.decoder_embed_path
- )
-
- encoder = LightConvEncoder(args, src_dict, encoder_embed_tokens)
- decoder = LightConvDecoder(args, tgt_dict, decoder_embed_tokens)
- return LightConvModel(encoder, decoder)
-
-
-class LightConvEncoder(FairseqEncoder):
- """
- LightConv encoder consisting of *args.encoder_layers* layers. Each layer
- is a :class:`LightConvEncoderLayer`.
-
- Args:
- args (argparse.Namespace): parsed command-line arguments
- dictionary (~fairseq.data.Dictionary): encoding dictionary
- embed_tokens (torch.nn.Embedding): input embedding
- """
-
- def __init__(self, args, dictionary, embed_tokens):
- super().__init__(dictionary)
- self.dropout_module = FairseqDropout(
- args.dropout, module_name=self.__class__.__name__
- )
-
- embed_dim = embed_tokens.embedding_dim
- self.padding_idx = embed_tokens.padding_idx
- self.max_source_positions = args.max_source_positions
-
- self.embed_tokens = embed_tokens
- self.embed_scale = math.sqrt(embed_dim)
- self.embed_positions = (
- PositionalEmbedding(
- args.max_source_positions,
- embed_dim,
- self.padding_idx,
- learned=args.encoder_learned_pos,
- )
- if not args.no_token_positional_embeddings
- else None
- )
-
- self.layers = nn.ModuleList([])
- self.layers.extend(
- [
- LightConvEncoderLayer(
- args, kernel_size=args.encoder_kernel_size_list[i]
- )
- for i in range(args.encoder_layers)
- ]
- )
- self.register_buffer("version", torch.Tensor([2]))
- self.normalize = args.encoder_normalize_before
- if self.normalize:
- self.layer_norm = LayerNorm(embed_dim)
-
- def forward(self, src_tokens, **unused):
- """
- Args:
- src_tokens (LongTensor): tokens in the source language of shape
- `(batch, src_len)`
-
- Returns:
- dict:
- - **encoder_out** (Tensor): the last encoder layer's output of
- shape `(src_len, batch, embed_dim)`
- - **encoder_padding_mask** (ByteTensor): the positions of
- padding elements of shape `(batch, src_len)`
- """
- # embed tokens and positions
- x = self.embed_scale * self.embed_tokens(src_tokens)
- if self.embed_positions is not None:
- x += self.embed_positions(src_tokens)
- x = self.dropout_module(x)
-
- # B x T x C -> T x B x C
- x = x.transpose(0, 1)
-
- # compute padding mask
- encoder_padding_mask = src_tokens.eq(self.padding_idx)
- if not encoder_padding_mask.any():
- encoder_padding_mask = None
-
- # encoder layers
- for layer in self.layers:
- x = layer(x, encoder_padding_mask)
-
- if self.normalize:
- x = self.layer_norm(x)
-
- return {
- "encoder_out": x, # T x B x C
- "encoder_padding_mask": encoder_padding_mask, # B x T
- }
-
- def reorder_encoder_out(self, encoder_out, new_order):
- """
- Reorder encoder output according to *new_order*.
-
- Args:
- encoder_out: output from the ``forward()`` method
- new_order (LongTensor): desired order
-
- Returns:
- *encoder_out* rearranged according to *new_order*
- """
- if encoder_out["encoder_out"] is not None:
- encoder_out["encoder_out"] = encoder_out["encoder_out"].index_select(
- 1, new_order
- )
- if encoder_out["encoder_padding_mask"] is not None:
- encoder_out["encoder_padding_mask"] = encoder_out[
- "encoder_padding_mask"
- ].index_select(0, new_order)
- return encoder_out
-
- def max_positions(self):
- """Maximum input length supported by the encoder."""
- if self.embed_positions is None:
- return self.max_source_positions
- return min(self.max_source_positions, self.embed_positions.max_positions)
-
-
-class LightConvDecoder(FairseqIncrementalDecoder):
- """
- LightConv decoder consisting of *args.decoder_layers* layers. Each layer
- is a :class:`LightConvDecoderLayer`.
-
- Args:
- args (argparse.Namespace): parsed command-line arguments
- dictionary (~fairseq.data.Dictionary): decoding dictionary
- embed_tokens (torch.nn.Embedding): output embedding
- no_encoder_attn (bool, optional): whether to attend to encoder outputs.
- Default: ``False``
- """
-
- def __init__(
- self, args, dictionary, embed_tokens, no_encoder_attn=False, final_norm=True
- ):
- super().__init__(dictionary)
- self.dropout_module = FairseqDropout(
- args.dropout, module_name=self.__class__.__name__
- )
- self.share_input_output_embed = args.share_decoder_input_output_embed
-
- input_embed_dim = embed_tokens.embedding_dim
- embed_dim = args.decoder_embed_dim
- output_embed_dim = args.decoder_output_dim
-
- padding_idx = embed_tokens.padding_idx
- self.max_target_positions = args.max_target_positions
-
- self.embed_tokens = embed_tokens
- self.embed_scale = math.sqrt(embed_dim) # todo: try with input_embed_dim
-
- self.project_in_dim = (
- Linear(input_embed_dim, embed_dim, bias=False)
- if embed_dim != input_embed_dim
- else None
- )
-
- self.embed_positions = (
- PositionalEmbedding(
- args.max_target_positions,
- embed_dim,
- padding_idx,
- learned=args.decoder_learned_pos,
- )
- if not args.no_token_positional_embeddings
- else None
- )
-
- self.layers = nn.ModuleList([])
- self.layers.extend(
- [
- LightConvDecoderLayer(
- args, no_encoder_attn, kernel_size=args.decoder_kernel_size_list[i]
- )
- for i in range(args.decoder_layers)
- ]
- )
-
- self.adaptive_softmax = None
-
- self.project_out_dim = (
- Linear(embed_dim, output_embed_dim, bias=False)
- if embed_dim != output_embed_dim and not args.tie_adaptive_weights
- else None
- )
-
- if args.adaptive_softmax_cutoff is not None:
- self.adaptive_softmax = AdaptiveSoftmax(
- len(dictionary),
- output_embed_dim,
- utils.eval_str_list(args.adaptive_softmax_cutoff, type=int),
- dropout=args.adaptive_softmax_dropout,
- adaptive_inputs=embed_tokens if args.tie_adaptive_weights else None,
- factor=args.adaptive_softmax_factor,
- tie_proj=args.tie_adaptive_proj,
- )
- elif not self.share_input_output_embed:
- self.embed_out = nn.Parameter(
- torch.Tensor(len(dictionary), output_embed_dim)
- )
- nn.init.normal_(self.embed_out, mean=0, std=output_embed_dim ** -0.5)
- self.register_buffer("version", torch.Tensor([2]))
- self.normalize = args.decoder_normalize_before and final_norm
- if self.normalize:
- self.layer_norm = LayerNorm(embed_dim)
-
- def forward(
- self, prev_output_tokens, encoder_out=None, incremental_state=None, **kwargs
- ):
- """
- Args:
- prev_output_tokens (LongTensor): previous decoder outputs of shape
- `(batch, tgt_len)`, for teacher forcing
- encoder_out (Tensor, optional): output from the encoder, used for
- encoder-side attention
- incremental_state (dict): dictionary used for storing state during
- :ref:`Incremental decoding`
-
- Returns:
- tuple:
- - the last decoder layer's output of shape `(batch, tgt_len,
- vocab)`
- - the last decoder layer's attention weights of shape `(batch,
- tgt_len, src_len)`
- """
- # embed positions
- positions = (
- self.embed_positions(
- prev_output_tokens,
- incremental_state=incremental_state,
- )
- if self.embed_positions is not None
- else None
- )
-
- if incremental_state is not None:
- prev_output_tokens = prev_output_tokens[:, -1:]
- if positions is not None:
- positions = positions[:, -1:]
-
- # embed tokens and positions
- x = self.embed_scale * self.embed_tokens(prev_output_tokens)
-
- if self.project_in_dim is not None:
- x = self.project_in_dim(x)
-
- if positions is not None:
- x += positions
- x = self.dropout_module(x)
-
- # B x T x C -> T x B x C
- x = x.transpose(0, 1)
- attn = None
-
- inner_states = [x]
-
- # decoder layers
- for layer in self.layers:
- x, attn = layer(
- x,
- encoder_out["encoder_out"] if encoder_out is not None else None,
- encoder_out["encoder_padding_mask"]
- if encoder_out is not None
- else None,
- incremental_state,
- )
- inner_states.append(x)
-
- if self.normalize:
- x = self.layer_norm(x)
-
- # T x B x C -> B x T x C
- x = x.transpose(0, 1)
-
- if self.project_out_dim is not None:
- x = self.project_out_dim(x)
-
- if self.adaptive_softmax is None:
- # project back to size of vocabulary
- if self.share_input_output_embed:
- x = F.linear(x, self.embed_tokens.weight)
- else:
- x = F.linear(x, self.embed_out)
-
- return x, {"attn": attn, "inner_states": inner_states}
-
- def max_positions(self):
- """Maximum output length supported by the decoder."""
- if self.embed_positions is None:
- return self.max_target_positions
- return min(self.max_target_positions, self.embed_positions.max_positions)
-
- def buffered_future_mask(self, tensor):
- dim = tensor.size(0)
- if (
- not hasattr(self, "_future_mask")
- or self._future_mask is None
- or self._future_mask.device != tensor.device
- ):
- self._future_mask = torch.triu(
- utils.fill_with_neg_inf(tensor.new(dim, dim)), 1
- )
- if self._future_mask.size(0) < dim:
- self._future_mask = torch.triu(
- utils.fill_with_neg_inf(self._future_mask.resize_(dim, dim)), 1
- )
- return self._future_mask[:dim, :dim]
-
-
-class LightConvEncoderLayer(nn.Module):
- """Encoder layer block.
-
- Args:
- args (argparse.Namespace): parsed command-line arguments
- kernel_size: kernel size of the convolution
- """
-
- def __init__(self, args, kernel_size=0):
- super().__init__()
- self.embed_dim = args.encoder_embed_dim
- self.conv_dim = args.encoder_conv_dim
- padding_l = (
- kernel_size // 2
- if kernel_size % 2 == 1
- else ((kernel_size - 1) // 2, kernel_size // 2)
- )
-
- if args.encoder_glu:
- self.linear1 = Linear(self.embed_dim, 2 * self.conv_dim)
- self.act = nn.GLU()
- else:
- self.linear1 = Linear(self.embed_dim, self.conv_dim)
- self.act = None
- if args.encoder_conv_type == "lightweight":
- self.conv = LightweightConv(
- self.conv_dim,
- kernel_size,
- padding_l=padding_l,
- weight_softmax=args.weight_softmax,
- num_heads=args.encoder_attention_heads,
- weight_dropout=args.weight_dropout,
- )
- elif args.encoder_conv_type == "dynamic":
- self.conv = DynamicConv(
- self.conv_dim,
- kernel_size,
- padding_l=padding_l,
- weight_softmax=args.weight_softmax,
- num_heads=args.encoder_attention_heads,
- weight_dropout=args.weight_dropout,
- )
- else:
- raise NotImplementedError
- self.linear2 = Linear(self.conv_dim, self.embed_dim)
-
- self.dropout_module = FairseqDropout(
- args.dropout, module_name=self.__class__.__name__
- )
- self.relu_dropout_module = FairseqDropout(
- args.relu_dropout, module_name=self.__class__.__name__
- )
- self.input_dropout_module = FairseqDropout(
- args.input_dropout, module_name=self.__class__.__name__
- )
- self.normalize_before = args.encoder_normalize_before
- self.fc1 = Linear(self.embed_dim, args.encoder_ffn_embed_dim)
- self.fc2 = Linear(args.encoder_ffn_embed_dim, self.embed_dim)
- self.layer_norms = nn.ModuleList([LayerNorm(self.embed_dim) for _ in range(2)])
-
- def forward(self, x, encoder_padding_mask):
- """
- Args:
- x (Tensor): input to the layer of shape `(seq_len, batch, embed_dim)`
- encoder_padding_mask (ByteTensor): binary ByteTensor of shape
- `(batch, src_len)` where padding elements are indicated by ``1``.
-
- Returns:
- encoded output of shape `(batch, src_len, embed_dim)`
- """
- residual = x
- x = self.maybe_layer_norm(0, x, before=True)
- x = self.input_dropout_module(x)
- x = self.linear1(x)
- if self.act is not None:
- x = self.act(x)
- if encoder_padding_mask is not None:
- x = x.masked_fill(encoder_padding_mask.transpose(0, 1).unsqueeze(2), 0)
- x = self.conv(x)
- x = self.linear2(x)
- x = self.dropout_module(x)
- x = residual + x
- x = self.maybe_layer_norm(0, x, after=True)
-
- residual = x
- x = self.maybe_layer_norm(1, x, before=True)
- x = F.relu(self.fc1(x))
- x = self.relu_dropout_module(x)
- x = self.fc2(x)
- x = self.dropout_module(x)
- x = residual + x
- x = self.maybe_layer_norm(1, x, after=True)
- return x
-
- def maybe_layer_norm(self, i, x, before=False, after=False):
- assert before ^ after
- if after ^ self.normalize_before:
- return self.layer_norms[i](x)
- else:
- return x
-
- def extra_repr(self):
- return (
- "dropout={}, relu_dropout={}, input_dropout={}, normalize_before={}".format(
- self.dropout_module.p,
- self.relu_dropout_module.p,
- self.input_dropout_module.p,
- self.normalize_before,
- )
- )
-
-
-class LightConvDecoderLayer(nn.Module):
- """Decoder layer block.
-
- Args:
- args (argparse.Namespace): parsed command-line arguments
- no_encoder_attn (bool, optional): whether to attend to encoder outputs.
- Default: ``False``
- kernel_size: kernel size of the convolution
- """
-
- def __init__(self, args, no_encoder_attn=False, kernel_size=0):
- super().__init__()
- self.embed_dim = args.decoder_embed_dim
- self.conv_dim = args.decoder_conv_dim
- if args.decoder_glu:
- self.linear1 = Linear(self.embed_dim, 2 * self.conv_dim)
- self.act = nn.GLU()
- else:
- self.linear1 = Linear(self.embed_dim, self.conv_dim)
- self.act = None
- if args.decoder_conv_type == "lightweight":
- self.conv = LightweightConv(
- self.conv_dim,
- kernel_size,
- padding_l=kernel_size - 1,
- weight_softmax=args.weight_softmax,
- num_heads=args.decoder_attention_heads,
- weight_dropout=args.weight_dropout,
- )
- elif args.decoder_conv_type == "dynamic":
- self.conv = DynamicConv(
- self.conv_dim,
- kernel_size,
- padding_l=kernel_size - 1,
- weight_softmax=args.weight_softmax,
- num_heads=args.decoder_attention_heads,
- weight_dropout=args.weight_dropout,
- )
- else:
- raise NotImplementedError
- self.linear2 = Linear(self.conv_dim, self.embed_dim)
-
- self.dropout_module = FairseqDropout(
- args.dropout, module_name=self.__class__.__name__
- )
- self.relu_dropout_module = FairseqDropout(
- args.relu_dropout, module_name=self.__class__.__name__
- )
- self.input_dropout_module = FairseqDropout(
- args.input_dropout, module_name=self.__class__.__name__
- )
- self.normalize_before = args.decoder_normalize_before
-
- self.conv_layer_norm = LayerNorm(self.embed_dim)
-
- if no_encoder_attn:
- self.encoder_attn = None
- self.encoder_attn_layer_norm = None
- else:
- self.encoder_attn = MultiheadAttention(
- self.embed_dim,
- args.decoder_attention_heads,
- dropout=args.attention_dropout,
- encoder_decoder_attention=True,
- )
- self.encoder_attn_layer_norm = LayerNorm(self.embed_dim)
-
- self.fc1 = Linear(self.embed_dim, args.decoder_ffn_embed_dim)
- self.fc2 = Linear(args.decoder_ffn_embed_dim, self.embed_dim)
-
- self.final_layer_norm = LayerNorm(self.embed_dim)
- self.need_attn = True
-
- def forward(
- self,
- x,
- encoder_out,
- encoder_padding_mask,
- incremental_state,
- prev_conv_state=None,
- prev_attn_state=None,
- conv_mask=None,
- conv_padding_mask=None,
- ):
- """
- Args:
- x (Tensor): input to the layer of shape `(seq_len, batch, embed_dim)`
- encoder_padding_mask (ByteTensor): binary ByteTensor of shape
- `(batch, src_len)` where padding elements are indicated by ``1``.
-
- Returns:
- encoded output of shape `(batch, src_len, embed_dim)`
- """
- residual = x
- x = self.maybe_layer_norm(self.conv_layer_norm, x, before=True)
- if prev_conv_state is not None:
- if incremental_state is None:
- incremental_state = {}
- self.conv._set_input_buffer(incremental_state, prev_conv_state)
- x = self.input_dropout_module(x)
- x = self.linear1(x)
- if self.act is not None:
- x = self.act(x)
- x = self.conv(x, incremental_state=incremental_state)
- x = self.linear2(x)
- x = self.dropout_module(x)
- x = residual + x
- x = self.maybe_layer_norm(self.conv_layer_norm, x, after=True)
-
- attn = None
- if self.encoder_attn is not None:
- residual = x
- x = self.maybe_layer_norm(self.encoder_attn_layer_norm, x, before=True)
- if prev_attn_state is not None:
- if incremental_state is None:
- incremental_state = {}
- prev_key, prev_value = prev_attn_state
- saved_state = {"prev_key": prev_key, "prev_value": prev_value}
- self.encoder_attn._set_input_buffer(incremental_state, saved_state)
- x, attn = self.encoder_attn(
- query=x,
- key=encoder_out,
- value=encoder_out,
- key_padding_mask=encoder_padding_mask,
- incremental_state=incremental_state,
- static_kv=True,
- need_weights=(not self.training and self.need_attn),
- )
- x = self.dropout_module(x)
- x = residual + x
- x = self.maybe_layer_norm(self.encoder_attn_layer_norm, x, after=True)
-
- residual = x
- x = self.maybe_layer_norm(self.final_layer_norm, x, before=True)
- x = F.relu(self.fc1(x))
- x = self.relu_dropout_module(x)
- x = self.fc2(x)
- x = self.dropout_module(x)
- x = residual + x
- x = self.maybe_layer_norm(self.final_layer_norm, x, after=True)
- return x, attn
-
- def maybe_layer_norm(self, layer_norm, x, before=False, after=False):
- assert before ^ after
- if after ^ self.normalize_before:
- return layer_norm(x)
- else:
- return x
-
- def make_generation_fast_(self, need_attn=False, **kwargs):
- self.need_attn = need_attn
-
- def extra_repr(self):
- return (
- "dropout={}, relu_dropout={}, input_dropout={}, normalize_before={}".format(
- self.dropout_module.p,
- self.relu_dropout_module.p,
- self.input_dropout_module.p,
- self.normalize_before,
- )
- )
-
-
-def Embedding(num_embeddings, embedding_dim, padding_idx):
- m = nn.Embedding(num_embeddings, embedding_dim, padding_idx=padding_idx)
- nn.init.normal_(m.weight, mean=0, std=embedding_dim ** -0.5)
- nn.init.constant_(m.weight[padding_idx], 0)
- return m
-
-
-def Linear(in_features, out_features, bias=True):
- m = nn.Linear(in_features, out_features, bias)
- nn.init.xavier_uniform_(m.weight)
- if bias:
- nn.init.constant_(m.bias, 0.0)
- return m
-
-
-@register_model_architecture("lightconv", "lightconv")
-def base_architecture(args):
- args.encoder_embed_path = getattr(args, "encoder_embed_path", None)
- args.encoder_embed_dim = getattr(args, "encoder_embed_dim", 512)
- args.encoder_ffn_embed_dim = getattr(args, "encoder_ffn_embed_dim", 2048)
- args.encoder_layers = getattr(args, "encoder_layers", 7)
- args.encoder_attention_heads = getattr(args, "encoder_attention_heads", 8)
- args.encoder_normalize_before = getattr(args, "encoder_normalize_before", False)
- args.encoder_learned_pos = getattr(args, "encoder_learned_pos", False)
- args.decoder_embed_path = getattr(args, "decoder_embed_path", None)
- args.decoder_embed_dim = getattr(args, "decoder_embed_dim", args.encoder_embed_dim)
- args.decoder_ffn_embed_dim = getattr(
- args, "decoder_ffn_embed_dim", args.encoder_ffn_embed_dim
- )
- args.decoder_layers = getattr(args, "decoder_layers", 6)
- args.decoder_attention_heads = getattr(args, "decoder_attention_heads", 8)
- args.decoder_normalize_before = getattr(args, "decoder_normalize_before", False)
- args.decoder_learned_pos = getattr(args, "decoder_learned_pos", False)
- args.attention_dropout = getattr(args, "attention_dropout", 0.0)
- args.relu_dropout = getattr(args, "relu_dropout", 0.0)
- args.dropout = getattr(args, "dropout", 0.1)
- args.adaptive_softmax_cutoff = getattr(args, "adaptive_softmax_cutoff", None)
- args.adaptive_softmax_dropout = getattr(args, "adaptive_softmax_dropout", 0)
- args.share_decoder_input_output_embed = getattr(
- args, "share_decoder_input_output_embed", False
- )
- args.share_all_embeddings = getattr(args, "share_all_embeddings", False)
- args.no_token_positional_embeddings = getattr(
- args, "no_token_positional_embeddings", False
- )
-
- args.decoder_output_dim = getattr(
- args, "decoder_output_dim", args.decoder_embed_dim
- )
- args.decoder_input_dim = getattr(args, "decoder_input_dim", args.decoder_embed_dim)
-
- args.encoder_conv_dim = getattr(args, "encoder_conv_dim", args.encoder_embed_dim)
- args.decoder_conv_dim = getattr(args, "decoder_conv_dim", args.decoder_embed_dim)
-
- args.encoder_kernel_size_list = getattr(
- args, "encoder_kernel_size_list", [3, 7, 15, 31, 31, 31, 31]
- )
- args.decoder_kernel_size_list = getattr(
- args, "decoder_kernel_size_list", [3, 7, 15, 31, 31, 31]
- )
- if len(args.encoder_kernel_size_list) == 1:
- args.encoder_kernel_size_list = (
- args.encoder_kernel_size_list * args.encoder_layers
- )
- if len(args.decoder_kernel_size_list) == 1:
- args.decoder_kernel_size_list = (
- args.decoder_kernel_size_list * args.decoder_layers
- )
- assert (
- len(args.encoder_kernel_size_list) == args.encoder_layers
- ), "encoder_kernel_size_list doesn't match encoder_layers"
- assert (
- len(args.decoder_kernel_size_list) == args.decoder_layers
- ), "decoder_kernel_size_list doesn't match decoder_layers"
- args.encoder_glu = getattr(args, "encoder_glu", True)
- args.decoder_glu = getattr(args, "decoder_glu", True)
- args.input_dropout = getattr(args, "input_dropout", 0.1)
- args.weight_dropout = getattr(args, "weight_dropout", args.attention_dropout)
-
-
-@register_model_architecture("lightconv", "lightconv_iwslt_de_en")
-def lightconv_iwslt_de_en(args):
- args.encoder_embed_dim = getattr(args, "encoder_embed_dim", 512)
- args.encoder_ffn_embed_dim = getattr(args, "encoder_ffn_embed_dim", 1024)
- args.encoder_attention_heads = getattr(args, "encoder_attention_heads", 4)
- args.encoder_layers = getattr(args, "encoder_layers", 7)
- args.decoder_embed_dim = getattr(args, "decoder_embed_dim", 512)
- args.decoder_ffn_embed_dim = getattr(args, "decoder_ffn_embed_dim", 1024)
- args.decoder_attention_heads = getattr(args, "decoder_attention_heads", 4)
- args.decoder_layers = getattr(args, "decoder_layers", 6)
- args.attention_dropout = getattr(args, "attention_dropout", 0.1)
- args.weight_dropout = getattr(args, "weight_dropout", 0.1)
- args.encoder_glu = getattr(args, "encoder_glu", False)
- args.decoder_glu = getattr(args, "decoder_glu", False)
- args.input_dropout = getattr(args, "input_dropout", 0.0)
- base_architecture(args)
-
-
-@register_model_architecture("lightconv", "lightconv_wmt_en_de")
-def lightconv_wmt_en_de(args):
- base_architecture(args)
-
-
-@register_model_architecture("lightconv", "lightconv_wmt_en_de_big")
-def lightconv_wmt_en_de_big(args):
- args.attention_dropout = getattr(args, "attention_dropout", 0.1)
- args.encoder_embed_dim = getattr(args, "encoder_embed_dim", 1024)
- args.encoder_ffn_embed_dim = getattr(args, "encoder_ffn_embed_dim", 4096)
- args.encoder_attention_heads = getattr(args, "encoder_attention_heads", 16)
- args.encoder_normalize_before = getattr(args, "encoder_normalize_before", False)
- args.decoder_embed_dim = getattr(args, "decoder_embed_dim", 1024)
- args.decoder_ffn_embed_dim = getattr(args, "decoder_ffn_embed_dim", 4096)
- args.decoder_attention_heads = getattr(args, "decoder_attention_heads", 16)
- args.dropout = getattr(args, "dropout", 0.3)
- base_architecture(args)
-
-
-@register_model_architecture("lightconv", "lightconv_wmt_en_fr_big")
-def lightconv_wmt_en_fr_big(args):
- args.dropout = getattr(args, "dropout", 0.1)
- lightconv_wmt_en_de_big(args)
-
-
-@register_model_architecture("lightconv", "lightconv_wmt_zh_en_big")
-def lightconv_wmt_zh_en_big(args):
- args.dropout = getattr(args, "dropout", 0.2)
- args.attention_dropout = getattr(args, "attention_dropout", 0.2)
- args.weight_dropout = getattr(args, "weight_dropout", 0.2)
- lightconv_wmt_en_de_big(args)
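The kernel-size handling in `base_architecture` above is easy to miss: a one-element kernel-size list is broadcast to every layer, while a longer list must match the layer count exactly. A minimal, fairseq-free sketch of just that expansion rule (the function name is illustrative, not part of the original code):

```python
def expand_kernel_sizes(kernel_size_list, num_layers):
    """Broadcast a single kernel size to all layers, or validate a per-layer list."""
    if len(kernel_size_list) == 1:
        kernel_size_list = kernel_size_list * num_layers  # e.g. [31] -> [31, 31, ...]
    assert len(kernel_size_list) == num_layers, "kernel_size_list doesn't match num_layers"
    return kernel_size_list

print(expand_kernel_sizes([31], 7))                        # [31, 31, 31, 31, 31, 31, 31]
print(expand_kernel_sizes([3, 7, 15, 31, 31, 31, 31], 7))  # returned unchanged
```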
diff --git a/spaces/OFA-Sys/OFA-Image_Caption/utils/cider/pyciderevalcap/ciderD/ciderD.py b/spaces/OFA-Sys/OFA-Image_Caption/utils/cider/pyciderevalcap/ciderD/ciderD.py
deleted file mode 100644
index 280f9890312a76b54695b2a8c456c5d52a87e186..0000000000000000000000000000000000000000
--- a/spaces/OFA-Sys/OFA-Image_Caption/utils/cider/pyciderevalcap/ciderD/ciderD.py
+++ /dev/null
@@ -1,58 +0,0 @@
-# Filename: ciderD.py
-#
-# Description: Describes the class to compute the CIDEr-D (Consensus-Based Image Description Evaluation) Metric
-# by Vedantam, Zitnick, and Parikh (http://arxiv.org/abs/1411.5726)
-#
-# Creation Date: Sun Feb 8 14:16:54 2015
-#
-# Authors: Ramakrishna Vedantam and Tsung-Yi Lin
-from __future__ import absolute_import
-from __future__ import division
-from __future__ import print_function
-
-from .ciderD_scorer import CiderScorer
-import pdb
-
-class CiderD:
- """
- Main Class to compute the CIDEr metric
-
- """
- def __init__(self, n=4, sigma=6.0, df="corpus"):
- # set cider to sum over 1 to 4-grams
- self._n = n
- # set the standard deviation parameter for gaussian penalty
- self._sigma = sigma
- # set where to compute document frequencies from
- self._df = df
- self.cider_scorer = CiderScorer(n=self._n, df_mode=self._df)
-
- def compute_score(self, gts, res):
- """
- Main function to compute CIDEr score
- :param hypo_for_image (dict) : dictionary with key <image> and value <tokenized hypothesis / candidate sentence>
- ref_for_image (dict) : dictionary with key <image> and value <tokenized reference sentence>
- :return: cider (float) : computed CIDEr score for the corpus
- """
-
- # clear all the previous hypos and refs
- tmp_cider_scorer = self.cider_scorer.copy_empty()
- tmp_cider_scorer.clear()
- for res_id in res:
-
- hypo = res_id['caption']
- ref = gts[res_id['image_id']]
-
- # Sanity check.
- assert(type(hypo) is list)
- assert(len(hypo) == 1)
- assert(type(ref) is list)
- assert(len(ref) > 0)
- tmp_cider_scorer += (hypo[0], ref)
-
- (score, scores) = tmp_cider_scorer.compute_score()
-
- return score, scores
-
- def method(self):
- return "CIDEr-D"
diff --git a/spaces/OFA-Sys/OFA-vqa/fairseq/fairseq/optim/lr_scheduler/pass_through.py b/spaces/OFA-Sys/OFA-vqa/fairseq/fairseq/optim/lr_scheduler/pass_through.py
deleted file mode 100644
index 2f93db328c1de9b268e8ee1c0c1cad558fd089aa..0000000000000000000000000000000000000000
--- a/spaces/OFA-Sys/OFA-vqa/fairseq/fairseq/optim/lr_scheduler/pass_through.py
+++ /dev/null
@@ -1,39 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-from dataclasses import dataclass
-
-from fairseq.dataclass import FairseqDataclass
-from fairseq.optim.lr_scheduler import FairseqLRScheduler, register_lr_scheduler
-
-
-@dataclass
-class PassThroughScheduleConfig(FairseqDataclass):
- pass
-
-
-@register_lr_scheduler("pass_through", dataclass=PassThroughScheduleConfig)
-class PassThroughScheduleSchedule(FairseqLRScheduler):
- """Delegate lr scheduling to the optimizer."""
-
- def __init__(self, cfg: PassThroughScheduleConfig, optimizer):
- super().__init__(cfg, optimizer)
- assert (
- hasattr(optimizer, "lr_scheduler") and optimizer.lr_scheduler is not None
- ), "Pass-through schedule can only be used with optimizers with their own schedulers"
-
- def state_dict(self):
- return self.optimizer.lr_scheduler.state_dict()
-
- def load_state_dict(self, state_dict):
- self.optimizer.lr_scheduler.load_state_dict(state_dict)
-
- def step_begin_epoch(self, epoch):
- """Update the learning rate at the beginning of the given epoch."""
- return self.optimizer.lr_scheduler.step_begin_epoch(epoch)
-
- def step_update(self, num_updates):
- """Update the learning rate after each update."""
- return self.optimizer.lr_scheduler.step_update(num_updates)
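The pass-through scheduler above contains no scheduling logic of its own; every call is forwarded to the optimizer's `lr_scheduler` attribute, which is exactly what the assertion in `__init__` guards. A framework-free sketch of the same delegation pattern (all class names and the decay rule are illustrative, not fairseq API):

```python
class InnerScheduler:
    """Stand-in for a scheduler owned by the optimizer itself."""
    def __init__(self):
        self.lr = 1.0

    def step_update(self, num_updates):
        self.lr = 1.0 / (1 + num_updates)  # toy inverse decay
        return self.lr

    def state_dict(self):
        return {"lr": self.lr}


class ToyOptimizer:
    def __init__(self):
        self.lr_scheduler = InnerScheduler()  # the attribute the assertion checks for


class PassThrough:
    """Forward every scheduling call to the optimizer's own scheduler."""
    def __init__(self, optimizer):
        assert getattr(optimizer, "lr_scheduler", None) is not None, \
            "pass-through needs an optimizer with its own scheduler"
        self.optimizer = optimizer

    def step_update(self, num_updates):
        return self.optimizer.lr_scheduler.step_update(num_updates)


sched = PassThrough(ToyOptimizer())
print(sched.step_update(9))  # 0.1, produced entirely by the inner scheduler
```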
diff --git a/spaces/OpenDILabCommunity/DI-sheep/DI-sheep/ui/src/themes/mhl/index.tsx b/spaces/OpenDILabCommunity/DI-sheep/DI-sheep/ui/src/themes/mhl/index.tsx
deleted file mode 100644
index c69add7504c51f88d9b865e106b2b775bc642fa4..0000000000000000000000000000000000000000
--- a/spaces/OpenDILabCommunity/DI-sheep/DI-sheep/ui/src/themes/mhl/index.tsx
+++ /dev/null
@@ -1,26 +0,0 @@
-import React from 'react';
-import { Theme } from '../interface';
-import { DefaultSoundNames, defaultSounds } from '../default';
-
-const imagesUrls = import.meta.glob('./images/*.png', {
- import: 'default',
- eager: true,
-});
-
-const mhls = Object.entries(imagesUrls).map(([key, value]) => ({
- name: key.slice(9, -4),
- // eslint-disable-next-line @typescript-eslint/ban-ts-comment
- // @ts-ignore
- content: <img src={value} alt="" />,
-}));
-
-export const mhlTheme: Theme = {
- name: 'kitten',
- icons: mhls.map(({ name, content }) => ({
- name,
- content,
- clickSound: 'button-click',
- tripleSound: 'triple',
- })),
- sounds: defaultSounds,
-};
diff --git a/spaces/OpenGVLab/InternGPT/iGPT/models/grit_src/third_party/CenterNet2/detectron2/modeling/roi_heads/__init__.py b/spaces/OpenGVLab/InternGPT/iGPT/models/grit_src/third_party/CenterNet2/detectron2/modeling/roi_heads/__init__.py
deleted file mode 100644
index d13e9c57235b982f3e0645bc316de2b75755dfda..0000000000000000000000000000000000000000
--- a/spaces/OpenGVLab/InternGPT/iGPT/models/grit_src/third_party/CenterNet2/detectron2/modeling/roi_heads/__init__.py
+++ /dev/null
@@ -1,29 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-from .box_head import ROI_BOX_HEAD_REGISTRY, build_box_head, FastRCNNConvFCHead
-from .keypoint_head import (
- ROI_KEYPOINT_HEAD_REGISTRY,
- build_keypoint_head,
- BaseKeypointRCNNHead,
- KRCNNConvDeconvUpsampleHead,
-)
-from .mask_head import (
- ROI_MASK_HEAD_REGISTRY,
- build_mask_head,
- BaseMaskRCNNHead,
- MaskRCNNConvUpsampleHead,
-)
-from .roi_heads import (
- ROI_HEADS_REGISTRY,
- ROIHeads,
- Res5ROIHeads,
- StandardROIHeads,
- build_roi_heads,
- select_foreground_proposals,
-)
-from .cascade_rcnn import CascadeROIHeads
-from .rotated_fast_rcnn import RROIHeads
-from .fast_rcnn import FastRCNNOutputLayers
-
-from . import cascade_rcnn # isort:skip
-
-__all__ = list(globals().keys())
diff --git a/spaces/OptimalScale/Robin-33b/lmflow/utils/__init__.py b/spaces/OptimalScale/Robin-33b/lmflow/utils/__init__.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/OthmaneJ/transcribe-distil-wav2vec2/README.md b/spaces/OthmaneJ/transcribe-distil-wav2vec2/README.md
deleted file mode 100644
index cf1bbff05ef4f6abacc515a9059d09f1f9243509..0000000000000000000000000000000000000000
--- a/spaces/OthmaneJ/transcribe-distil-wav2vec2/README.md
+++ /dev/null
@@ -1,37 +0,0 @@
----
-title: Transcribe Distil Wav2vec2
-emoji: 🐠
-colorFrom: red
-colorTo: blue
-sdk: gradio
-app_file: app.py
-pinned: false
----
-
-# Configuration
-
-`title`: _string_
-Display title for the Space
-
-`emoji`: _string_
-Space emoji (emoji-only character allowed)
-
-`colorFrom`: _string_
-Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray)
-
-`colorTo`: _string_
-Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray)
-
-`sdk`: _string_
-Can be either `gradio` or `streamlit`
-
-`sdk_version` : _string_
-Only applicable for `streamlit` SDK.
-See [doc](https://hf.co/docs/hub/spaces) for more info on supported versions.
-
-`app_file`: _string_
-Path to your main application file (which contains either `gradio` or `streamlit` Python code).
-Path is relative to the root of the repository.
-
-`pinned`: _boolean_
-Whether the Space stays on top of your list.
diff --git a/spaces/PAIR/Text2Video-Zero/annotator/uniformer/mmcv/video/processing.py b/spaces/PAIR/Text2Video-Zero/annotator/uniformer/mmcv/video/processing.py
deleted file mode 100644
index 3d90b96e0823d5f116755e7f498d25d17017224a..0000000000000000000000000000000000000000
--- a/spaces/PAIR/Text2Video-Zero/annotator/uniformer/mmcv/video/processing.py
+++ /dev/null
@@ -1,160 +0,0 @@
-# Copyright (c) OpenMMLab. All rights reserved.
-import os
-import os.path as osp
-import subprocess
-import tempfile
-
-from annotator.uniformer.mmcv.utils import requires_executable
-
-
-@requires_executable('ffmpeg')
-def convert_video(in_file,
- out_file,
- print_cmd=False,
- pre_options='',
- **kwargs):
- """Convert a video with ffmpeg.
-
- This provides a general api to ffmpeg, the executed command is::
-
- `ffmpeg -y -i `
-
- Options(kwargs) are mapped to ffmpeg commands with the following rules:
-
- - key=val: "-key val"
- - key=True: "-key"
- - key=False: ""
-
- Args:
- in_file (str): Input video filename.
- out_file (str): Output video filename.
- pre_options (str): Options appears before "-i ".
- print_cmd (bool): Whether to print the final ffmpeg command.
- """
- options = []
- for k, v in kwargs.items():
- if isinstance(v, bool):
- if v:
- options.append(f'-{k}')
- elif k == 'log_level':
- assert v in [
- 'quiet', 'panic', 'fatal', 'error', 'warning', 'info',
- 'verbose', 'debug', 'trace'
- ]
- options.append(f'-loglevel {v}')
- else:
- options.append(f'-{k} {v}')
- cmd = f'ffmpeg -y {pre_options} -i {in_file} {" ".join(options)} ' \
- f'{out_file}'
- if print_cmd:
- print(cmd)
- subprocess.call(cmd, shell=True)
-
-
-@requires_executable('ffmpeg')
-def resize_video(in_file,
- out_file,
- size=None,
- ratio=None,
- keep_ar=False,
- log_level='info',
- print_cmd=False):
- """Resize a video.
-
- Args:
- in_file (str): Input video filename.
- out_file (str): Output video filename.
- size (tuple): Expected size (w, h), eg, (320, 240) or (320, -1).
- ratio (tuple or float): Expected resize ratio, (2, 0.5) means
- (w*2, h*0.5).
- keep_ar (bool): Whether to keep original aspect ratio.
- log_level (str): Logging level of ffmpeg.
- print_cmd (bool): Whether to print the final ffmpeg command.
- """
- if size is None and ratio is None:
- raise ValueError('expected size or ratio must be specified')
- if size is not None and ratio is not None:
- raise ValueError('size and ratio cannot be specified at the same time')
- options = {'log_level': log_level}
- if size:
- if not keep_ar:
- options['vf'] = f'scale={size[0]}:{size[1]}'
- else:
- options['vf'] = f'scale=w={size[0]}:h={size[1]}:' \
- 'force_original_aspect_ratio=decrease'
- else:
- if not isinstance(ratio, tuple):
- ratio = (ratio, ratio)
- options['vf'] = f'scale="trunc(iw*{ratio[0]}):trunc(ih*{ratio[1]})"'
- convert_video(in_file, out_file, print_cmd, **options)
-
-
-@requires_executable('ffmpeg')
-def cut_video(in_file,
- out_file,
- start=None,
- end=None,
- vcodec=None,
- acodec=None,
- log_level='info',
- print_cmd=False):
- """Cut a clip from a video.
-
- Args:
- in_file (str): Input video filename.
- out_file (str): Output video filename.
- start (None or float): Start time (in seconds).
- end (None or float): End time (in seconds).
- vcodec (None or str): Output video codec, None for unchanged.
- acodec (None or str): Output audio codec, None for unchanged.
- log_level (str): Logging level of ffmpeg.
- print_cmd (bool): Whether to print the final ffmpeg command.
- """
- options = {'log_level': log_level}
- if vcodec is None:
- options['vcodec'] = 'copy'
- if acodec is None:
- options['acodec'] = 'copy'
- if start:
- options['ss'] = start
- else:
- start = 0
- if end:
- options['t'] = end - start
- convert_video(in_file, out_file, print_cmd, **options)
-
-
-@requires_executable('ffmpeg')
-def concat_video(video_list,
- out_file,
- vcodec=None,
- acodec=None,
- log_level='info',
- print_cmd=False):
- """Concatenate multiple videos into a single one.
-
- Args:
- video_list (list): A list of video filenames
- out_file (str): Output video filename
- vcodec (None or str): Output video codec, None for unchanged
- acodec (None or str): Output audio codec, None for unchanged
- log_level (str): Logging level of ffmpeg.
- print_cmd (bool): Whether to print the final ffmpeg command.
- """
- tmp_filehandler, tmp_filename = tempfile.mkstemp(suffix='.txt', text=True)
- with open(tmp_filename, 'w') as f:
- for filename in video_list:
- f.write(f'file {osp.abspath(filename)}\n')
- options = {'log_level': log_level}
- if vcodec is None:
- options['vcodec'] = 'copy'
- if acodec is None:
- options['acodec'] = 'copy'
- convert_video(
- tmp_filename,
- out_file,
- print_cmd,
- pre_options='-f concat -safe 0',
- **options)
- os.close(tmp_filehandler)
- os.remove(tmp_filename)
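The kwargs-to-flag mapping described in the `convert_video` docstring (key=val becomes "-key val", key=True becomes "-key", key=False is dropped, and `log_level` turns into `-loglevel`) is the core of all four helpers above. A standalone sketch that rebuilds only the option string, so the mapping can be inspected without calling ffmpeg:

```python
def build_ffmpeg_options(**kwargs):
    """Reproduce the docstring's kwargs -> ffmpeg flag mapping (no ffmpeg invoked)."""
    options = []
    for k, v in kwargs.items():
        if isinstance(v, bool):
            if v:                            # key=True  -> "-key"
                options.append(f'-{k}')      # key=False -> dropped entirely
        elif k == 'log_level':
            options.append(f'-loglevel {v}')
        else:                                # key=val   -> "-key val"
            options.append(f'-{k} {v}')
    return ' '.join(options)

# Roughly what cut_video assembles: copy both codecs, seek to 5 s, keep 10 s.
print(build_ffmpeg_options(log_level='info', vcodec='copy', acodec='copy', ss=5, t=10))
# -loglevel info -vcodec copy -acodec copy -ss 5 -t 10
```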
diff --git a/spaces/PKUWilliamYang/VToonify/vtoonify/model/raft/core/extractor.py b/spaces/PKUWilliamYang/VToonify/vtoonify/model/raft/core/extractor.py
deleted file mode 100644
index 9a9c759d1243d4694e8656c2f6f8a37e53edd009..0000000000000000000000000000000000000000
--- a/spaces/PKUWilliamYang/VToonify/vtoonify/model/raft/core/extractor.py
+++ /dev/null
@@ -1,267 +0,0 @@
-import torch
-import torch.nn as nn
-import torch.nn.functional as F
-
-
-class ResidualBlock(nn.Module):
- def __init__(self, in_planes, planes, norm_fn='group', stride=1):
- super(ResidualBlock, self).__init__()
-
- self.conv1 = nn.Conv2d(in_planes, planes, kernel_size=3, padding=1, stride=stride)
- self.conv2 = nn.Conv2d(planes, planes, kernel_size=3, padding=1)
- self.relu = nn.ReLU(inplace=True)
-
- num_groups = planes // 8
-
- if norm_fn == 'group':
- self.norm1 = nn.GroupNorm(num_groups=num_groups, num_channels=planes)
- self.norm2 = nn.GroupNorm(num_groups=num_groups, num_channels=planes)
- if not stride == 1:
- self.norm3 = nn.GroupNorm(num_groups=num_groups, num_channels=planes)
-
- elif norm_fn == 'batch':
- self.norm1 = nn.BatchNorm2d(planes)
- self.norm2 = nn.BatchNorm2d(planes)
- if not stride == 1:
- self.norm3 = nn.BatchNorm2d(planes)
-
- elif norm_fn == 'instance':
- self.norm1 = nn.InstanceNorm2d(planes)
- self.norm2 = nn.InstanceNorm2d(planes)
- if not stride == 1:
- self.norm3 = nn.InstanceNorm2d(planes)
-
- elif norm_fn == 'none':
- self.norm1 = nn.Sequential()
- self.norm2 = nn.Sequential()
- if not stride == 1:
- self.norm3 = nn.Sequential()
-
- if stride == 1:
- self.downsample = None
-
- else:
- self.downsample = nn.Sequential(
- nn.Conv2d(in_planes, planes, kernel_size=1, stride=stride), self.norm3)
-
-
- def forward(self, x):
- y = x
- y = self.relu(self.norm1(self.conv1(y)))
- y = self.relu(self.norm2(self.conv2(y)))
-
- if self.downsample is not None:
- x = self.downsample(x)
-
- return self.relu(x+y)
-
-
-
-class BottleneckBlock(nn.Module):
- def __init__(self, in_planes, planes, norm_fn='group', stride=1):
- super(BottleneckBlock, self).__init__()
-
- self.conv1 = nn.Conv2d(in_planes, planes//4, kernel_size=1, padding=0)
- self.conv2 = nn.Conv2d(planes//4, planes//4, kernel_size=3, padding=1, stride=stride)
- self.conv3 = nn.Conv2d(planes//4, planes, kernel_size=1, padding=0)
- self.relu = nn.ReLU(inplace=True)
-
- num_groups = planes // 8
-
- if norm_fn == 'group':
- self.norm1 = nn.GroupNorm(num_groups=num_groups, num_channels=planes//4)
- self.norm2 = nn.GroupNorm(num_groups=num_groups, num_channels=planes//4)
- self.norm3 = nn.GroupNorm(num_groups=num_groups, num_channels=planes)
- if not stride == 1:
- self.norm4 = nn.GroupNorm(num_groups=num_groups, num_channels=planes)
-
- elif norm_fn == 'batch':
- self.norm1 = nn.BatchNorm2d(planes//4)
- self.norm2 = nn.BatchNorm2d(planes//4)
- self.norm3 = nn.BatchNorm2d(planes)
- if not stride == 1:
- self.norm4 = nn.BatchNorm2d(planes)
-
- elif norm_fn == 'instance':
- self.norm1 = nn.InstanceNorm2d(planes//4)
- self.norm2 = nn.InstanceNorm2d(planes//4)
- self.norm3 = nn.InstanceNorm2d(planes)
- if not stride == 1:
- self.norm4 = nn.InstanceNorm2d(planes)
-
- elif norm_fn == 'none':
- self.norm1 = nn.Sequential()
- self.norm2 = nn.Sequential()
- self.norm3 = nn.Sequential()
- if not stride == 1:
- self.norm4 = nn.Sequential()
-
- if stride == 1:
- self.downsample = None
-
- else:
- self.downsample = nn.Sequential(
- nn.Conv2d(in_planes, planes, kernel_size=1, stride=stride), self.norm4)
-
-
- def forward(self, x):
- y = x
- y = self.relu(self.norm1(self.conv1(y)))
- y = self.relu(self.norm2(self.conv2(y)))
- y = self.relu(self.norm3(self.conv3(y)))
-
- if self.downsample is not None:
- x = self.downsample(x)
-
- return self.relu(x+y)
-
-class BasicEncoder(nn.Module):
- def __init__(self, output_dim=128, norm_fn='batch', dropout=0.0):
- super(BasicEncoder, self).__init__()
- self.norm_fn = norm_fn
-
- if self.norm_fn == 'group':
- self.norm1 = nn.GroupNorm(num_groups=8, num_channels=64)
-
- elif self.norm_fn == 'batch':
- self.norm1 = nn.BatchNorm2d(64)
-
- elif self.norm_fn == 'instance':
- self.norm1 = nn.InstanceNorm2d(64)
-
- elif self.norm_fn == 'none':
- self.norm1 = nn.Sequential()
-
- self.conv1 = nn.Conv2d(3, 64, kernel_size=7, stride=2, padding=3)
- self.relu1 = nn.ReLU(inplace=True)
-
- self.in_planes = 64
- self.layer1 = self._make_layer(64, stride=1)
- self.layer2 = self._make_layer(96, stride=2)
- self.layer3 = self._make_layer(128, stride=2)
-
- # output convolution
- self.conv2 = nn.Conv2d(128, output_dim, kernel_size=1)
-
- self.dropout = None
- if dropout > 0:
- self.dropout = nn.Dropout2d(p=dropout)
-
- for m in self.modules():
- if isinstance(m, nn.Conv2d):
- nn.init.kaiming_normal_(m.weight, mode='fan_out', nonlinearity='relu')
- elif isinstance(m, (nn.BatchNorm2d, nn.InstanceNorm2d, nn.GroupNorm)):
- if m.weight is not None:
- nn.init.constant_(m.weight, 1)
- if m.bias is not None:
- nn.init.constant_(m.bias, 0)
-
- def _make_layer(self, dim, stride=1):
- layer1 = ResidualBlock(self.in_planes, dim, self.norm_fn, stride=stride)
- layer2 = ResidualBlock(dim, dim, self.norm_fn, stride=1)
- layers = (layer1, layer2)
-
- self.in_planes = dim
- return nn.Sequential(*layers)
-
-
- def forward(self, x):
-
- # if input is list, combine batch dimension
- is_list = isinstance(x, tuple) or isinstance(x, list)
- if is_list:
- batch_dim = x[0].shape[0]
- x = torch.cat(x, dim=0)
-
- x = self.conv1(x)
- x = self.norm1(x)
- x = self.relu1(x)
-
- x = self.layer1(x)
- x = self.layer2(x)
- x = self.layer3(x)
-
- x = self.conv2(x)
-
- if self.training and self.dropout is not None:
- x = self.dropout(x)
-
- if is_list:
- x = torch.split(x, [batch_dim, batch_dim], dim=0)
-
- return x
-
-
-class SmallEncoder(nn.Module):
- def __init__(self, output_dim=128, norm_fn='batch', dropout=0.0):
- super(SmallEncoder, self).__init__()
- self.norm_fn = norm_fn
-
- if self.norm_fn == 'group':
- self.norm1 = nn.GroupNorm(num_groups=8, num_channels=32)
-
- elif self.norm_fn == 'batch':
- self.norm1 = nn.BatchNorm2d(32)
-
- elif self.norm_fn == 'instance':
- self.norm1 = nn.InstanceNorm2d(32)
-
- elif self.norm_fn == 'none':
- self.norm1 = nn.Sequential()
-
- self.conv1 = nn.Conv2d(3, 32, kernel_size=7, stride=2, padding=3)
- self.relu1 = nn.ReLU(inplace=True)
-
- self.in_planes = 32
- self.layer1 = self._make_layer(32, stride=1)
- self.layer2 = self._make_layer(64, stride=2)
- self.layer3 = self._make_layer(96, stride=2)
-
- self.dropout = None
- if dropout > 0:
- self.dropout = nn.Dropout2d(p=dropout)
-
- self.conv2 = nn.Conv2d(96, output_dim, kernel_size=1)
-
- for m in self.modules():
- if isinstance(m, nn.Conv2d):
- nn.init.kaiming_normal_(m.weight, mode='fan_out', nonlinearity='relu')
- elif isinstance(m, (nn.BatchNorm2d, nn.InstanceNorm2d, nn.GroupNorm)):
- if m.weight is not None:
- nn.init.constant_(m.weight, 1)
- if m.bias is not None:
- nn.init.constant_(m.bias, 0)
-
- def _make_layer(self, dim, stride=1):
- layer1 = BottleneckBlock(self.in_planes, dim, self.norm_fn, stride=stride)
- layer2 = BottleneckBlock(dim, dim, self.norm_fn, stride=1)
- layers = (layer1, layer2)
-
- self.in_planes = dim
- return nn.Sequential(*layers)
-
-
- def forward(self, x):
-
- # if input is list, combine batch dimension
- is_list = isinstance(x, tuple) or isinstance(x, list)
- if is_list:
- batch_dim = x[0].shape[0]
- x = torch.cat(x, dim=0)
-
- x = self.conv1(x)
- x = self.norm1(x)
- x = self.relu1(x)
-
- x = self.layer1(x)
- x = self.layer2(x)
- x = self.layer3(x)
- x = self.conv2(x)
-
- if self.training and self.dropout is not None:
- x = self.dropout(x)
-
- if is_list:
- x = torch.split(x, [batch_dim, batch_dim], dim=0)
-
- return x
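Both encoders above reduce spatial resolution by a factor of 8: one stride-2 stem convolution followed by two stride-2 residual stages, with `conv2` setting the output channel count. A quick shape check, assuming PyTorch is installed and that the import path matches where this `extractor.py` sits in your checkout (adjust it as needed):

```python
import torch
# Hypothetical import path; point it at your local copy of extractor.py.
from model.raft.core.extractor import BasicEncoder

enc = BasicEncoder(output_dim=256, norm_fn='instance', dropout=0.0)
img = torch.randn(2, 3, 128, 160)  # (batch, rgb, height, width)

feat = enc(img)
print(feat.shape)  # expected: torch.Size([2, 256, 16, 20]), H and W divided by 8

# A pair of images is batched through once and split back into two feature maps.
f1, f2 = enc([img, img])
print(f1.shape, f2.shape)
```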
diff --git a/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/lilypond/2.24.2/ccache/lily/font-encodings.go b/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/lilypond/2.24.2/ccache/lily/font-encodings.go
deleted file mode 100644
index 825c989540a5a15236795b85e29fdce7b8f4af7e..0000000000000000000000000000000000000000
Binary files a/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/lilypond/2.24.2/ccache/lily/font-encodings.go and /dev/null differ
diff --git a/spaces/Proxdigestpills1/README/README.md b/spaces/Proxdigestpills1/README/README.md
deleted file mode 100644
index 123b6fe172b61efe12ce67aee1e4830d3c1dbd91..0000000000000000000000000000000000000000
--- a/spaces/Proxdigestpills1/README/README.md
+++ /dev/null
@@ -1,163 +0,0 @@
-[Pro X Digest](https://www.glitco.com/get-pro-x-digest)
-
-What is Pro X Digest?
-=====================
-
-Pro X Digest is a digestive health supplement featuring a blend of enzymes, probiotics, and other ingredients to support healthy digestion.
-
-Millions of people are diagnosed with a digestive disorder each year. As you get older, your risk of developing a digestive disorder increases.
-
-Pro X Digest claims to help by using a blend of natural ingredients to target the root cause of digestive discomfort. The blend of enzymes and probiotics can make it easier to break down food, helping your body digest everything you eat.
-
-Pro X Digest is made in the United States in an FDA-registered, GMP-certified facility. The manufacturer is based in West Jordan, Utah.
-
-**Pro X Digest Benefits**
--------------------------
-
-Pro X Digest contains a blend of digestive enzymes and probiotics to support healthy digestion, immunity, and overall health and wellness.
-
-### **[Here are some of the benefits of Pro X Digest, according to the official website:](https://www.glitco.com/get-pro-x-digest)**
-
- All natural way to help with your digestive system
-
- Keep your digestive system healthy and regular
-
- Natural digestive enzymes to break down proteins, fats, oils, and carbs
-
- Natural probiotics to support good bacteria, immune function, and overall gut health
-
- Backed by 60 day moneyback guarantee
-
- Made in the United States in FDA-registered, GMP-certified facility
-
-Order your supply of Pro X Digest now and start enjoying the benefits!
-
-**How Does Pro X Digest Work?**
--------------------------------
-
-Pro X Digest works using a blend of two main categories of ingredients: digestive enzymes and probiotics. The two ingredients work in different ways to support good digestion.
-
-Digestive enzymes, for example, help to break down the food you eat and extract its nutritional value. If you don’t have sufficient levels of digestive enzymes, then your body struggles to break down certain foods.
-
-Many people feel bloated after a protein-rich meal or protein shake, for example. This could be due to a lack of protease, a digestive enzyme to help break down protein. Others feel bloated or uncomfortable after dairy products, which could be caused by a lack of the lactase enzyme, which helps to break down the lactose protein in dairy.
-
-In addition to digestive enzymes, Pro X Digest contains probiotics, or good gut bacteria to help your gut flourish. A healthy gut is filled with billions of living bacteria that contribute to immunity, food breakdown, and overall gut wellness. People with poor gut health tend to have a less diverse gut microbiome than others. People with strong gut health tend to have thriving levels of billions of probiotic bacteria.
-
-Overall, Pro X Digest contains a blend of proven ingredients to target digestion in multiple ways. There are 3 probiotic strains, 7 digestive enzymes, and 1 fungi to help support digestive health and overall digestive balance.
-
-### **[Also Read: What Do You Mean by Gut Health Or Probiotic Supplements?](https://www.glitco.com/get-pro-x-digest)**
-
-**Pro X Digest Ingredients**
-----------------------------
-
-Pro X Digest contains a blend of two categories of ingredients: digestive enzymes and probiotic supplements.
-
-Digestive enzymes help to break down the foods you eat, while probiotics help your gut maintain a healthy balance overall. Enzymes can help extract nutrients, while probiotics can support immunity, weight loss, energy, metabolism, and other features linked to digestion.
-
-All three probiotics in Pro X Digest are part of the Lactobacillus family, including L. acidophilus, L. casei, and L. plantarum.
-
-Here are all of the ingredients in Pro X Digest and how they work, according to the manufacturer:
-
-### [**Click here to order while supplies last!**](https://www.glitco.com/get-pro-x-digest)
-
-**Lactobacillus Acidophilus:** Lactobacillus acidophilus promotes the growth of good bacteria and helps treat digestive disorders, according to the manufacturer. Common digestive disorders include irritable bowel syndrome (IBS) or indigestion. Some also have poor probiotic balance because of Crohn’s disease, celiac disease, lactose intolerance, or other conditions. Although L. acidophilus can’t help with all of these, it’s found in many probiotic supplements and prized for its effects on overall gut balance.
-
-**Lactobacillus Casei:** Lactobacillus casei is a common probiotic found in your digestive tract. Like other probiotics, L. casei is considered friendly because it plays a valuable role in digestion and immunity. One study found L. casei increased the activity of natural killer (NK) cells, for example, while other studies have linked L. casei to general digestive health and discomfort.
-
-**Lactobacillus Plantarum:** The third probiotic strain in Pro X Digest and the third member of the Lactobacillus family, L. plantarum can improve cognitive function and help with gut immunity, according to the manufacturer. Over 70% of your immune system is found in your gut. If your gut bacteria are imbalanced, then your body’s immune system may struggle to defend itself. You need a balanced gut and thriving microflora to maintain good immunity, and Lactobacillus plantarum could help with that.
-
-**Bromelain**: Bromelain is a digestive enzyme found in pineapple. Many nutritional supplements contain bromelain from pineapple for its effects on digestion and the overall breakdown of food. Studies have linked bromelain to a range of effects – from weight loss to immune function. Today, many people take bromelain supplements daily for overall health and wellness.
-
-**Papain**: Papain is a digestive enzyme similar to bromelain. However, instead of coming from pineapple, papain comes from papaya. Papain can break down food for better digestion while helping to relieve bloating, constipation, and gas, according to the makers of Pro X Digest.
-
-**Aspergillus Oryzae:** Aspergillus oryzae is a fungus or mold used in food manufacturing in East Asia. It’s particularly common in fermented foods in Japan and China, for example. The makers of Pro X Digest added this unique ingredient to the formula to improve cognitive function and aid gut immunity. According to the manufacturer, the mold can support brain health and gut immunity, working in a similar way to probiotics.
-
-**Protease**: Pro X Digest contains protease, an enzyme designed to break down proteins. If you feel bloated or uncomfortable after eating protein, then you may need more protease. Pro X Digest can help your body break down protein, process its nutritional value, and absorb the maximum amount of protein from your foods.
-
-**Lipase**: Pro X Digest contains lipase, an enzyme to break down fats and oils. Many people feel bloated after eating a meal high in fats and oils. Pro X Digest can help by giving you a strong dose of lipase. Your body normally makes lipase in your pancreas. However, your salivary (spit) glands and stomach also produce lipase. As food enters your mouth, travels through your stomach, and enters your digestive tract, lipase helps to break down food along the way. As Mount Sinai explains, studies show lipase supplements can help reduce bloating, gas, and fullness after large meals.
-
-**Amylase**: Pro X Digest contains amylase, an enzyme to break down carbs. Like lipase and protease, amylase is designed to help your body process a specific type of ingredient: carbs. Your body produces amylase from its pancreas and salivary glands. Like lipase, amylase helps to break down food as it travels from your mouth throughout your digestive tract. Some people undergo amylase testing if unsure about the cause of their digestive problems.
-
-**Lactase:** Pro X Digest contains lactase, an enzyme that breaks down dairy. Some people naturally have less lactase than others, making it difficult to digest the lactose, or milk sugars, in dairy foods and beverages. Pro X Digest can help by breaking down these milk sugars to help you digest milk products more efficiently. Even if you don’t consume dairy, lactase can contribute to overall digestive comfort.
-
-**Alpha Galactosidase:** Pro X Digest contains alpha galactosidase, an enzyme involved in the metabolism of glycolipids, a specific type of fat that may contribute to digestive discomfort. A 2007 study showed alpha galactosidase supplementation led to a significant reduction in gas after a large meal.
-
-The makers of Pro X Digest claim all ingredients are tested by third-party labs to verify purity and potency. The company also assembles all ingredients together in the United States at an FDA-registered, GMP-certified facility.
-
-**[(Limited Supply) Order Pro X Digest Before Supplies Run Out!!](https://www.glitco.com/get-pro-x-digest)**
-
-Scientific Evidence for Pro X Digest
-------------------------------------
-
-As proof Pro X Digest works, the company cites several studies linking each of the ingredients to various health effects. We’ll review some of that scientific evidence below to validate the claims made on the **[Pro X Digest website.](https://www.npmjs.com/package/pro-x-digest-buy-official-site)**
-
-Pro X Digest contains alpha galactosidase, a digestive enzyme linked to health and wellness. In a 2000 study, researchers found the enzyme could play a valuable role in enzyme therapy. By taking alpha galactosidase enzymes from healthy adults and giving them to patients with enzyme deficiency, researchers found they could restore normal levels of enzymes. Alpha galactosidase appears to be particularly important for breaking down carbs.
-
-Lactobacillus casei has a long history of use as a probiotic supplement and overall digestive aid. In a 2019 study published in Nutrients, researchers found Lactobacillus casei could be beneficial for modulating gut microbiota. Researchers found people who took a L. casei supplement – like Pro X Digest – tended to have higher levels of L. casei in their system after taking the supplement, and those higher levels were linked to lower rates of diarrhea and other digestive issues.
-
-Lactobacillus acidophilus is backed by similar studies. A 2020 study found Lactobacillus acidophilus could help manage gastrointestinal disorders. Researchers found ample evidence L. acidophilus could help with acute diarrhea, chronic diarrhea, antibiotic-associated digestive problems, and even immune problems linked to the gut, among other benefits.
-
-As the National Center for Complementary and Integrative Health explains, bromelain is a group of enzymes found in the fruit and stem of the pineapple plant. Today, some take bromelain to reduce pain and swelling. Others take it for digestive problems. Some early studies have linked bromelain to promising digestive effects, although we need more research to conclusively make this connection.
-
-Aspergillus oryzae is one of the more unique ingredients in Pro X Digest. You can find plenty of digestive enzyme supplements and probiotic formulas online. However, aspergillus oryzae fills a more unique role. Also known as koji mold, A. oryzae is commonly used in food manufacturing. A 1999 study found the ingredient was commonly used in sake, miso, and soy sauce production in Japan, for example, describing its role as “pivotal” in food manufacturing. According to the makers of Pro X Digest, this same koji mold has powerful effects on cognition and digestion.
-
-Overall, Pro X Digest contains a blend of science-backed digestive enzymes and probiotics designed to support gut health in multiple ways. Although we don’t know specific dose or concentration information, Pro X Digest could work to support gut health by breaking down food, boosting immunity, and helping your digestive system function like normal.
-
-**[Place your order today by clicking here before stock runs out! >>>](https://www.glitco.com/get-pro-x-digest)**
-
-**How to Take Pro X Digest**
-----------------------------
-
-The makers of Pro X Digest recommend taking one capsule of Pro X Digest twice a day. Or, for best results, take it 20 to 30 minutes before a meal:
-
- Take 1 capsule (1 serving) of Pro X Digest 2 times per day
-
- For best results, take Pro X Digest 20 to 30 minutes before a meal
-
-Pro X Digest Pricing
---------------------
-
-Pro X Digest is normally priced at $199 per bottle. As part of a 2023 promotion, however, the manufacturer has reduced the price to just $59 per bottle. You can save even more money by ordering multiple bottles, which drops the price to $39 per bottle and comes bundled with free bonuses.
-
-[Pro X Digest pricing](https://www.glitco.com/get-pro-x-digest)
-
-### **Here’s how pricing works when ordering online today:**
-
- 1 Bottle: $59 + Shipping
-
- 3 Bottles: $147 ($49 Per Bottle) + 1 Free Bonus + Shipping
-
- 6 Bottles: $234 ($39 Per Bottle) + 1 Free Bonus + Free Shipping
-
-#### **Order Pro X Digest Right Here At The Best Prices!!**
-
-Each bottle contains a 30 day supply of Pro X Digest, or 30 servings. You take one serving daily to help with digestion.
-
-Pro X Digest Refund Policy
---------------------------
-
-Pro X Digest comes with a 60 day moneyback guarantee. You can request a refund on your purchase within 60 days with no questions asked if you’re unhappy with the supplement for any reason.
-
-**Returns Address: Health Heroes 8152 S. Welby Park Dr Ste B, West Jordan, UT 84088**
-
-**About Health Heroes**
------------------------
-
-Pro X Digest is made in the United States in an FDA-registered, GMP-certified facility by a Utah-based company named Health Heroes. The company manufactures the supplement using natural ingredients.
-
-**You can contact the makers of Pro X Digest and the company’s customer service team via the following:**
-
- Email: [support@proxdigest.com](https://www.glitco.com/get-pro-x-digest)
-
- Phone: 702-859-3292
-
- Registered Address: Health Heroes 8152 S. Welby Park Dr Ste B, West Jordan, UT 84088
-
-**Final Word**
---------------
-
-Pro X Digest is a digestive health supplement available exclusively online. Made by a West Jordan, Utah-based company, Pro X Digest features a blend of digestive enzymes and probiotics to support gut health.
-
-Millions of Americans deal with bloating and digestive discomfort after meals. In many cases, these problems are linked to low digestive enzyme levels or poor probiotic balance. Pro X Digest aims to solve both of these issues.
-
-To learn more about Pro X Digest and how it works or to buy the digestive health supplement today, **[visit the official website.](https://www.glitco.com/get-pro-x-digest)**
\ No newline at end of file
diff --git a/spaces/Raghav001/Experiment/README.md b/spaces/Raghav001/Experiment/README.md
deleted file mode 100644
index e055b5bb6296ba8cee13d0d5f89ae23c87b9a390..0000000000000000000000000000000000000000
--- a/spaces/Raghav001/Experiment/README.md
+++ /dev/null
@@ -1,14 +0,0 @@
----
-title: ChatPDF
-emoji: 💻
-colorFrom: gray
-colorTo: yellow
-sdk: gradio
-sdk_version: 3.20.0
-app_file: app.py
-pinned: false
-license: apache-2.0
-duplicated_from: Raghav001/DocTalk
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/Rami/validate_chat_utd/README.md b/spaces/Rami/validate_chat_utd/README.md
deleted file mode 100644
index c80ed2fa95b5cf379345a87e4a8f9da0c9a99857..0000000000000000000000000000000000000000
--- a/spaces/Rami/validate_chat_utd/README.md
+++ /dev/null
@@ -1,11 +0,0 @@
----
-title: Validate Chat Utd
-emoji: 🌍
-colorFrom: green
-colorTo: yellow
-sdk: docker
-pinned: false
-license: openrail
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_vendor/pygments/formatters/svg.py b/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_vendor/pygments/formatters/svg.py
deleted file mode 100644
index 075150a4b586d668c1666513fbf90463cdbb11ab..0000000000000000000000000000000000000000
--- a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_vendor/pygments/formatters/svg.py
+++ /dev/null
@@ -1,188 +0,0 @@
-"""
- pygments.formatters.svg
- ~~~~~~~~~~~~~~~~~~~~~~~
-
- Formatter for SVG output.
-
- :copyright: Copyright 2006-2022 by the Pygments team, see AUTHORS.
- :license: BSD, see LICENSE for details.
-"""
-
-from pip._vendor.pygments.formatter import Formatter
-from pip._vendor.pygments.token import Comment
-from pip._vendor.pygments.util import get_bool_opt, get_int_opt
-
-__all__ = ['SvgFormatter']
-
-
-def escape_html(text):
- """Escape &, <, > as well as single and double quotes for HTML."""
- return text.replace('&', '&amp;'). \
- replace('<', '&lt;'). \
- replace('>', '&gt;'). \
- replace('"', '&quot;'). \
- replace("'", '&#39;')
-
-
-class2style = {}
-
-class SvgFormatter(Formatter):
- """
- Format tokens as an SVG graphics file. This formatter is still experimental.
- Each line of code is a ``<text>`` element with explicit ``x`` and ``y``
- coordinates containing ``<tspan>`` elements with the individual token styles.
-
- By default, this formatter outputs a full SVG document including doctype
- declaration and the ``<svg>`` root element.