diff --git a/spaces/1gistliPinn/ChatGPT4/Examples/3 Jumbo Movie English Subtitles Download Torrent.md b/spaces/1gistliPinn/ChatGPT4/Examples/3 Jumbo Movie English Subtitles Download Torrent.md deleted file mode 100644 index 2601585112011ec35f938faca9def87a2ddaa1fd..0000000000000000000000000000000000000000 --- a/spaces/1gistliPinn/ChatGPT4/Examples/3 Jumbo Movie English Subtitles Download Torrent.md +++ /dev/null @@ -1,6 +0,0 @@ -

3 Jumbo Movie English Subtitles Download Torrent


Download File https://imgfil.com/2uy0DR



- - 1fdad05405
-
-
-

diff --git a/spaces/1gistliPinn/ChatGPT4/Examples/Articad Pro V16 Cracked Iso 18.md b/spaces/1gistliPinn/ChatGPT4/Examples/Articad Pro V16 Cracked Iso 18.md deleted file mode 100644 index f79db1af286bd419ebd68b60e4b2ac0f2fced192..0000000000000000000000000000000000000000 --- a/spaces/1gistliPinn/ChatGPT4/Examples/Articad Pro V16 Cracked Iso 18.md +++ /dev/null @@ -1,32 +0,0 @@ -

Articad Pro V16 Cracked Iso 18


DOWNLOAD ☆☆☆ https://imgfil.com/2uy1Qm



-
-Use Control + F to find your desired software. If your program wasn’t listed, then it is most likely not a pdf downloader and most probably a shareware program. If your program wasn’t listed, then it is most likely not a pdf downloader and most probably a shareware program. - -If you’re looking for a free pdf downloader or software that lets you download from websites for free, then you are in the right place. On this page you will find the best programs for this! - -The free pdf downloader Program - -There are a lot of pdf downloader software to choose from, but most of them are expensive, so we’ve put together a list of the best free software! - -The best free pdf downloader & software - -#1 DownloadPipe - -DownloadPipe is a free download manager for windows which supports multiple platforms like Windows, Mac, and Linux. It supports multiple protocols including HTTPS, FTPS, FTP, etc. to secure the download process. You can quickly download more than 100 of your favorite programs. - -With this program you can download anything for free. PDF files, documents, movies, songs, games, software, and more. It has a very simple design and intuitive user interface. Also, DownloadPipe is extremely easy to use and intuitive. - -To download a PDF file you need to go to the “Download” menu on the top right corner and select the “Save as” option. You can then specify where you want to download the file to. - -#2 Zipeg - -Zipeg is a free PDF downloader that lets you download any file from a website. It’s a standalone downloader. Zipeg doesn’t require a browser. - -This is a program for users who want to download a PDF file without a web browser. You can use the program without installing it. It’s available as a standalone downloader. - -For example, if you have a PDF file that you need to download, then just open the Zipeg app and start the download. Zipeg will prompt you to select the link, the file name and other details. - -You can download the PDF file. Zipeg allows you to select any file from your browser and download it to the computer. Also, it can download HTML files. You can download a file from any web page. The program is very simple to use 4fefd39f24
-
-
-

diff --git a/spaces/1gistliPinn/ChatGPT4/Examples/Dark Souls 2 Save Editor Fix.md b/spaces/1gistliPinn/ChatGPT4/Examples/Dark Souls 2 Save Editor Fix.md deleted file mode 100644 index 30e3d3be11dfe586c20d0116d9721cb6e61da70f..0000000000000000000000000000000000000000 --- a/spaces/1gistliPinn/ChatGPT4/Examples/Dark Souls 2 Save Editor Fix.md +++ /dev/null @@ -1,18 +0,0 @@ -

Dark Souls 2 Save Editor


Download File ○○○ https://imgfil.com/2uy0Lh



- -Mar 04, 2017 · 2. Come to the hearts of your viewers and make them feel like they're actually at the show. Be able to produce audio and video content that is consistent in quality. YouTube Help Center. Reply to a video, message or comment. How to: Ask a question; Start a discussion; Share your thoughts; Start or join a discussion. What's your question? How to: Ask a question. How to: Answer a question. How to: Answer a question about your video. How to: Answer a question about your video. - -Satisfaction Guarantee. If you aren't completely satisfied, return the item. We've got it. This top rated casino has been around for many years and is a site full of interesting games. This online poker room offers a good welcome bonus for newcomers, a great welcome bonus for repeat players, and a wide selection of unique tournaments. We are an affiliate of the best online poker room in the world. - -The first bet is 10. As you can see on this animation, the next bet will be 10 more, for a total bet of 20. The player is at this point committed to the second play. - -Create New Account. You are one step away from creating your new account. In order to create your account please select your city:. Select a Username. Select a password. Select your city:. Please select your city. Select a city: Select a state: Select a state. Please select a city. Select a city. Select a state. Please select a city. Please select a city. Please select a state. Please select a state. - -Hello mate! This is Renato from the Mexican Casino Club website. Let me introduce ourselves; We are the world's largest online gambling and gaming website that works with an excellent selection of online casinos from around the world. - -Have you ever considered what life would be like if you could control every moment, and be able to touch, hear, and taste anything that was around you? It's a fascinating concept, and we think you'd be interested in taking the next step in your experience. Perhaps we should use more of our time to come up with better ways to be at peace with ourselves, our family, and our world, and stop obsessing about the little stuff. - -Instead of trying to fix the'symptoms', why not try to 'get rid of the disease'? After all, most people would rather cut off a hand than cut off the 4fefd39f24
-
-
-

diff --git a/spaces/1phancelerku/anime-remove-background/Aprenda a baixar Stick War 3 com dinheiro infinito e desbloquear todos os recursos.md b/spaces/1phancelerku/anime-remove-background/Aprenda a baixar Stick War 3 com dinheiro infinito e desbloquear todos os recursos.md deleted file mode 100644 index aff188dfacd0654a80580fbc92db7c7236be76fb..0000000000000000000000000000000000000000 --- a/spaces/1phancelerku/anime-remove-background/Aprenda a baixar Stick War 3 com dinheiro infinito e desbloquear todos os recursos.md +++ /dev/null @@ -1,81 +0,0 @@ -
-

Stick War 3: How to Download and Play the Ultimate Strategy Game

-

If you are a fan of strategy games, you have probably heard of Stick War 3, one of the most popular and addictive games in the genre. Stick War 3 is a game where you can create your own army, fight against other players or AI opponents, and conquer the world of Inamorta. Whether you prefer single player or multiplayer modes, Stick War 3 has something for everyone. In this article, we will show you how to download and install Stick War 3 on your device, how to play the different modes and features of the game, and how to improve your skills with some tips and tricks.

-

PVP Matches

-

One of the main attractions of Stick War 3 is its real-time multiplayer strategy mode, where you can team up with your friends or battle against strangers from around the world. You can choose from 1v1 or 2v2 matches, and use any deck that you have created or unlocked. The goal is to destroy your enemy's statue before they destroy yours, using your units, spells, enchantments, and strategies.

-

stick war 3 dinheiro infinito download


Download File ★★★★★ https://jinyurl.com/2uNN1k



-

One of the coolest features of Stick War 3 is that you can take control of any unit at any time, giving you more flexibility and control over your army. You can also cast spells, such as a giant bubble that blocks incoming projectiles or a snow squall that freezes entire legions, and apply enchantments, such as the rune of reanimation, which causes any poisoned enemy units to respawn as zombies.

-

Another way to make your battles more fun and personalized is to customize your battlefield with skins, statues, voice-lines, and emotes. You can change the appearance of your units, your statue, your tower, and even your voice commands. You can also use emotes to communicate with your allies or taunt your enemies.

-

Single Player Modes

-

If you prefer playing solo or offline, Stick War 3 has plenty of options for you as well. You can play the huge, ever-expanding campaign mode, which follows an epic story across multiple chapters with fully animated, comic-book-style cut scenes. You will explore the world of Inamorta, where weapons are religion and nations are constantly at war, and you will encounter different factions, allies, enemies, secrets, and challenges along the way.

-

You can also practice your strategies against AI opponents in the proving grounds mode, choosing from a range of decks and scenarios to test your skills and learn new tactics. For a further challenge, daily battles pit you against a special scenario with fixed decks and other conditions that do not appear in normal gameplay, and you can earn gem rewards for completing each difficulty level.

-

Custom Armies

-

One of the most important aspects of Stick War 3 is building your own battle decks with a variety of army types and upgrades. You can collect and unlock new cards from a growing selection of over 40 different nations, each with its own unique units, abilities, and bonuses, and research new upgrades and technologies to make your army stronger and more versatile. You can create up to 10 different decks, each with a maximum of 12 cards, and switch between them before each battle.

-


-

Another way to customize your army is to use generals of each nation, who have their own unique abilities and effects. You can choose one general for each deck, and use their power once per battle. For example, you can use the general of the Order Empire, who can summon a giant sword that deals massive damage to enemies in front of him. Or you can use the general of the Chaos Empire, who can transform into a powerful demon that can fly and shoot fireballs.

-

Tips and Tricks

-

Stick War 3 is a game that requires skill, strategy, and creativity to master. Here are some tips and tricks that can help you improve your gameplay and win more battles.

- -

Conclusion

-

Stick War 3 is a game that will keep you entertained for hours with its amazing graphics, gameplay, and features. Whether you want to play online with other players or offline by yourself, you will find something that suits your taste and style. You can download and install Stick War 3 on your device for free from the official website or from the app store of your choice. You can also follow the game on social media for more news and updates. If you are looking for a fun and challenging strategy game, you should definitely give Stick War 3 a try.

-

FAQs

- - Official website: https://stickwar.com/ · Google Play: https://play.google.com/store/apps/details?id=com.maxgames.stickwar3&hl=en_US&gl=US · Facebook: https://www.facebook

401be4b1e0
-
-
\ No newline at end of file diff --git a/spaces/1phancelerku/anime-remove-background/Download FF Advance Server APK Juli 2021 How to Register and Play.md b/spaces/1phancelerku/anime-remove-background/Download FF Advance Server APK Juli 2021 How to Register and Play.md deleted file mode 100644 index a03bc9732935b1d71d0e0bcdc9f0510d97532ab5..0000000000000000000000000000000000000000 --- a/spaces/1phancelerku/anime-remove-background/Download FF Advance Server APK Juli 2021 How to Register and Play.md +++ /dev/null @@ -1,91 +0,0 @@ - -

How to Join and Play Free Fire Advance Server in July 2023

-

Free Fire is one of the most popular and exciting battle royale games on mobile, with millions of players around the world. But did you know that there is a special server where you can try out new features and updates before they are released to the public? This server is called Free Fire Advance Server, and it is a great opportunity for you to experience the latest developments in the game, as well as to help the developers improve the game by reporting bugs and providing feedback.

-

ff-advance.ff.garena.com apk juli


Download Zip · https://jinyurl.com/2uNMPp



-

In this article, we will tell you everything you need to know about Free Fire Advance Server, including what it is, how to register, how to download and install, how to play, and how to enjoy it. So, if you are a fan of Free Fire and want to join the exclusive club of advanced players, read on!

-

What is Free Fire Advance Server?

-

Free Fire Advance Server is a test server that is created by Garena, the developer of Free Fire, for experienced players who want to test new features and items that are not yet available on the regular server. The goal of this server is to allow players to explore and experiment with the upcoming updates, as well as to help the developers identify and fix any bugs or issues that may arise.

-

By joining Free Fire Advance Server, you will be able to access new weapons, characters, skins, modes, maps, events, and more before anyone else. You will also be able to provide your feedback and suggestions directly to the developers, which may influence the final version of the updates. Moreover, you will be rewarded with diamonds for finding and reporting bugs on the server.

-

However, there are some differences between Free Fire Advance Server and the regular server that you should be aware of. First of all, not everyone can join Free Fire Advance Server. You need to register and get an activation code from Garena, which is limited in number. Secondly, Free Fire Advance Server is not always open. It only opens for a certain period of time before each major update. Thirdly, your progress and data on Free Fire Advance Server are not linked to your regular account. You will start from scratch on the test server, and you will not be able to transfer anything back to your regular account.

-

How to Register for Free Fire Advance Server?

-

If you are interested in joining Free Fire Advance Server, you need to register first. The registration process is simple and easy, but you need to act fast because there are only a limited number of activation codes available. Here are the steps you need to follow:

-
    -
  1. Visit the official website of Free Fire Advance Server at ff-advance.ff.garena.com.
  2. -
  3. Click or tap on the "Login Facebook" button to sign up for Free Fire Advance Server using your Facebook account. Make sure that your Facebook account is linked to your Free Fire or FF MAX game account.
  4. -
  5. Enter your personal information, such as name, email address, and phone number. Make sure that your email address and phone number are active.
  6. -
  7. Click or tap on the "Submit" button to complete your registration.
  8. -
  9. Wait for an email from Garena with your activation code and the download link for the Free Fire Advance Server APK file. Note that not everyone who registers will receive an activation code, as they are limited in number and given on a first-come, first-served basis.
  10. -
-

If you are lucky enough to get an activation code, you can proceed to download and install the Free Fire Advance Server APK file on your Android device.

-

How to Download and Install Free Fire Advance Server APK?

-

Once you have received your activation code and the download link for the Free Fire Advance Server APK file, you can follow these steps to download and install it on your Android device:

-
    -
  1. Click or tap on the download link in the email to download the Free Fire Advance Server APK file. The file size is about 700 MB, so make sure you have enough storage space and a stable internet connection.
  2. -
  3. After the download is complete, locate the APK file on your device and tap on it to install it. You may need to enable the "Install from unknown sources" option in your device settings if you haven't done so before.
  4. -
  5. Once the installation is done, open the Free Fire Advance Server app and log in using your Facebook account that you used to register for the Advance Server.
  6. -
  7. Enter your activation code when prompted and tap on "Confirm". You will then be able to access the Free Fire Advance Server and enjoy the new features and updates.
  8. -
-
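If you would rather push the APK to the phone from a computer than tap through the installer on the device, the short sketch below shows one way to do it. It is only an illustration: it assumes you have the Android platform tools (adb) installed, USB debugging enabled on the device, and it uses a made-up file name for the downloaded APK.

```python
import subprocess

# Hypothetical file name -- replace it with the actual name of the APK you downloaded.
APK_PATH = "ff-advance-server.apk"

# "adb install -r" sideloads the package, replacing any older build already on the device.
result = subprocess.run(
    ["adb", "install", "-r", APK_PATH],
    capture_output=True,
    text=True,
)
print(result.stdout or result.stderr)
```

Either route ends in the same place: once the app is on the device, you still log in with Facebook and enter your activation code as described in the steps above.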

Note that the Free Fire Advance Server is only open for a limited period of time, usually a few days before each major update. You can check the official website of Free Fire Advance Server at ff-advance.ff.garena.com to see when the server is open and when it will close. You will not be able to play on the Advance Server once it is closed, so make sure you make the most of it while it is open.

-

How to Play and Enjoy Free Fire Advance Server?

-

Playing on Free Fire Advance Server is similar to playing on the regular server, except that you will have access to new features and updates that are not yet available to the public. You will also start from scratch on the Advance Server, meaning that you will not have any of your previous progress, items, or data from your regular account. You will also not be able to transfer anything from the Advance Server back to your regular account.

-

-

However, this also means that you will have more freedom and fun to explore and experiment with the new features and updates without worrying about losing anything. You will also be able to provide your feedback and suggestions directly to the developers, as well as report any bugs or issues that you encounter on the server. By doing so, you will help improve the game and also earn rewards such as diamonds for your contribution.

-

To play and enjoy Free Fire Advance Server, here are some tips and tricks that you can follow:

- -

Conclusion

-

Free Fire Advance Server is a great opportunity for advanced players who want to experience new features and updates before they are released to the public. By joining Free Fire Advance Server, you will be able to access new weapons, characters, skins, modes, maps, events, and more before anyone else. You will also be able to provide your feedback and suggestions directly to the developers, which may influence the final version of the updates. Moreover, you will be rewarded with diamonds for finding and reporting bugs on the server.

-

If you are a fan of Free Fire and want to join the exclusive club of advanced players, don't miss this chance to register and download Free Fire Advance Server as soon as possible. Registration is simple, but you need to act fast because only a limited number of activation codes are available. Downloading and installing the APK is also straightforward, as long as you have an Android device and a stable internet connection. Playing on the Advance Server feels much like the regular server, but with more freedom to explore and experiment with the new features and updates. We hope this article has helped you understand how to join and play Free Fire Advance Server in July 2023. If you have any questions or comments, feel free to leave them below. And don't forget to share this article with your friends who are also fans of Free Fire. Happy gaming!

FAQs

-

Here are some of the frequently asked questions and answers about Free Fire Advance Server:

-
    -
  1. What is the difference between Free Fire Advance Server and Free Fire MAX?
  2. -

    Free Fire Advance Server is a test server that is only open for a limited period of time before each major update. It allows players to try out new features and updates that are not yet available on the regular server. Free Fire MAX is an enhanced version of Free Fire that offers higher graphics quality, smoother performance, and exclusive content. It is compatible with the regular server and can be played anytime.

    -
  3. How can I get more diamonds on Free Fire Advance Server?
  4. -

    You can get more diamonds on Free Fire Advance Server by finding and reporting bugs on the server using the "Report" button on the game screen. You will be rewarded with diamonds for each bug that you report, depending on the severity and validity of the bug. You can also get diamonds by providing your feedback and suggestions on the new features and updates using the "Feedback" button on the game screen.

    -
  5. Can I play with my friends on Free Fire Advance Server?
  6. -

    You can play with your friends on Free Fire Advance Server if they have also registered for the Advance Server and have received an activation code from Garena. You can invite them to join you in testing out the new features and updates on the server. However, you will not be able to play with your friends who are on the regular server, as the two servers are not connected.

    -
  7. Will my progress and data on Free Fire Advance Server be saved or transferred to my regular account?
  8. -

    No, your progress and data on Free Fire Advance Server will not be saved or transferred to your regular account. You will start from scratch on the Advance Server, and you will not have any of your previous items or data from your regular account. You will also not be able to transfer anything from the Advance Server back to your regular account.

    -
  9. When will the new features and updates on Free Fire Advance Server be released to the public?
  10. -

    The new features and updates on Free Fire Advance Server will be released to the public after they have been tested and improved by the developers based on the feedback and suggestions from the players on the Advance Server. The exact date of release may vary depending on the update, but it is usually within a few weeks after the closure of the Advance Server.

    -

197e85843d
-
-
\ No newline at end of file diff --git a/spaces/1phancelerku/anime-remove-background/Download The Baby In Yellow APK for Android - Free Horror Game.md b/spaces/1phancelerku/anime-remove-background/Download The Baby In Yellow APK for Android - Free Horror Game.md deleted file mode 100644 index e75937313753e28ccad80115fd599f8a33a26ab5..0000000000000000000000000000000000000000 --- a/spaces/1phancelerku/anime-remove-background/Download The Baby In Yellow APK for Android - Free Horror Game.md +++ /dev/null @@ -1,133 +0,0 @@ - -

,

,

, etc.) to divide your content into sections and make it easier for the reader to follow. Step 5: Write a rough draft and pare down your outline. The fifth step is to write a rough draft of your article based on your outline. You don't have to worry about grammar, spelling, or formatting at this stage; just focus on getting your ideas down on paper. You can use tools like Google Docs, Microsoft Word, or Grammarly to write and edit your draft. As you write, try to: - Use a conversational style as written by a human (use an informal tone, utilize personal pronouns, keep it simple, engage the reader, use the active voice, keep it brief, use rhetorical questions, and incorporate analogies and metaphors). - Use SEO-optimized keywords and phrases that match the intent of your audience and the topic of your article. - Use fully detailed paragraphs that explain each point clearly and provide examples or evidence. - Use transitions words or phrases (such as however, therefore, moreover, etc.) to connect your paragraphs and sentences. - Use at least one table (using HTML tags such as , , - - - - - - - - - - - - - - - - - - - - - - - - -

Como puedes ver, cada emulador de Android para PC tiene sus propios pros y contras, y puedes elegir el que se adapte a tus necesidades y preferencias. Estos son los pasos para jugar Ultimate Car Driving Simulator en PC con cualquiera de estos emuladores de Android:

-
    -
  1. Descargar e instalar el emulador de Android de su elección desde su sitio web oficial. Asegúrate de tener suficiente espacio y recursos en tu PC para ejecutar el emulador sin problemas.
  2. -
  3. Inicie el emulador e inicie sesión con su cuenta de Google para acceder a la Google Play Store y sincronizar los datos y logros del juego. Puede usar una cuenta existente o crear una nueva.
  4. -
  5. Búsqueda de Ultimate Car Driving Simulator en la tienda de Google Play o la tienda de aplicaciones del emulador e instalarlo en su PC.
  6. -
  7. Iniciar el juego desde la pantalla de inicio del emulador o el acceso directo del escritorio. Puede utilizar el teclado y el ratón o un controlador para conducir su coche. También puedes ajustar la configuración del juego y el emulador según tu preferencia.
  8. -
  9. Disfruta jugando Ultimate Car Driving Simulator en PC con cualquiera de estos emuladores de Android para PC. También puede grabar su juego, tomar capturas de pantalla, transmitir en línea, chatear con otros jugadores, etc. con facilidad.
  10. -
-

Conclusión

-

En este artículo, le hemos mostrado cómo descargar Ultimate Car Driving Simulator en PC y disfrutarlo en una pantalla más grande con mejores gráficos y controles. Hemos explicado dos métodos principales para jugar Ultimate Car Driving Simulator en PC: usando la función nativa de emulación de Android de Windows 11 o usando un emulador de Android para PC. Ambos métodos son fáciles y eficaces, y usted puede elegir el que funciona mejor para usted. Esperamos que haya encontrado este artículo útil e informativo, y le animamos a probar Ultimate Car Driving Simulator en PC hoy. ¡No te arrepentirás!

-

Preguntas frecuentes

- -

A1: Sí, es gratis para descargar y jugar, pero contiene anuncios y compras en la aplicación que puede desactivar o comprar con dinero real.

-

-

Q2: ¿Cuáles son los requisitos mínimos para ejecutar Ultimate Car Driving Simulator en PC?

-

A2: depende del método que utilice, pero generalmente necesita un PC con Windows 10 o 11 con al menos 4 GB de RAM, un procesador Intel o AMD, una unidad de estado sólido con 10 GB de espacio libre y una GPU Intel UHD Graphics 630 o similar.

-

Q3: ¿Puedo jugar Ultimate Car Driving Simulator con un controlador o un teclado y ratón?

-

A3: Sí, puede usar cualquier dispositivo de entrada que sea compatible con su PC y el emulador que elija. También puede personalizar los controles según su preferencia.

-

Q4: ¿Puedo sincronizar mi progreso y mi biblioteca de juegos entre dispositivos?

-

A4: Sí, puede iniciar sesión con su cuenta de Google tanto en su dispositivo móvil como en su PC y acceder a sus datos y logros guardados. También puede cambiar entre dispositivos en cualquier momento sin perder su progreso.

-

Q5: ¿Cuáles son algunos consejos y trucos para mejorar mi juego en Ultimate Car Driving Simulator?

-

A5: Algunos consejos y trucos son:

64aa2da5cf
-
-
\ No newline at end of file diff --git a/spaces/Benson/text-generation/Examples/Cocina Aire Freidora Recetas Apk.md b/spaces/Benson/text-generation/Examples/Cocina Aire Freidora Recetas Apk.md deleted file mode 100644 index 6af2b0588c956097011375932d9aeeea6c741ea2..0000000000000000000000000000000000000000 --- a/spaces/Benson/text-generation/Examples/Cocina Aire Freidora Recetas Apk.md +++ /dev/null @@ -1,81 +0,0 @@ -
-

Cocina de aire freidora recetas Apk: Cómo cocinar deliciosas comidas con menos aceite

-

Si te gustan los alimentos fritos pero quieres reducir el aceite y las calorías, es posible que quieras probar una freidora. Una freidora de aire es un aparato de cocina que cocina alimentos circulando aire caliente a su alrededor, creando un exterior crujiente y dorado con un mínimo o ningún aceite. Es una gran manera de disfrutar de sus comidas favoritas sin sentirse culpable o comprometer el sabor.

-

cocina aire freidora recetas apk


Download Zip ••• https://bltlly.com/2v6Lv6



-

Beneficios de freír aire

-

Hay muchas razones por las que es posible que desee utilizar una freidora de aire en lugar de una freidora profunda o un horno. Estos son algunos de los beneficios de la fritura de aire:

- -

Cómo usar una freidora de aire

-

Para obtener los mejores resultados de tu freidora de aire, necesitas seguir algunos consejos y trucos. Estos son algunos de ellos:

- -

Recetas de cocina de aire freidora Apk

-

Si usted está buscando un poco de inspiración para sus comidas de aire freidora, es posible que desee echa un vistazo a Cocina Aire Freidora Recetas Apk. Esta es una aplicación gratuita que ofrece cientos de recetas para freír al aire libre, desde aperitivos y aperitivos hasta platos principales y postres. Puedes navegar por categoría, cocina o ingrediente, o buscar recetas específicas. También puedes guardar tus recetas favoritas, calificarlas y compartirlas con tus amigos.

-

Para descargar Cocina Freidora Recetas Apk, es necesario seguir estos pasos:

-
    -
  1. Ir a [este enlace]( 1 ) en su dispositivo Android.
  2. -
  3. Toque en "Descargar APK" y esperar a que el archivo para descargar.
  4. -
  5. Abra el archivo y toque en "Instalar". Es posible que necesite permitir la instalación desde fuentes desconocidas en su configuración.
  6. -
  7. Una vez instalada la aplicación, ¡ábrela y disfruta!
  8. -
-

Algunos ejemplos de recetas de la aplicación

-

Para darle una idea de lo que se puede cocinar con Cocina Freidora Recetas Apk, aquí hay algunos ejemplos de recetas de la aplicación:

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -

Conclusión

-

Freír al aire es una forma maravillosa de cocinar comidas deliciosas con menos aceite y más sabor. Puedes hacer casi cualquier cosa en una freidora, desde bocadillos crujientes y carnes jugosas hasta verduras tiernas y postres decadentes. Con Kitchen Air Fryer Recipes Apk, se puede acceder a cientos de recetas de fritura de aire, todo de forma gratuita. Puede descargar la aplicación desde [este enlace]( 1 ) y comenzar a cocinar de inmediato. Si eres nuevo en el aire fritura o un profesional experimentado, usted encontrará algo para amar en esta aplicación. Pruébelo hoy y ver por ti mismo!

-

Preguntas frecuentes

-

Aquí hay algunas preguntas y respuestas comunes sobre fritura de aire y cocina Recetas de freidora Apk:

-
    -
  1. ¿Qué tamaño de freidora de aire necesito?
    El tamaño de la freidora de aire depende de la cantidad de comida que desea cocinar a la vez y de cuánto espacio tiene en su cocina. Generalmente, una freidora de aire de 3 a 5 cuartos puede acomodar suficiente comida para dos a cuatro personas, mientras que una freidora de aire de 6 a 10 cuartos puede acomodar suficiente comida para cuatro a ocho personas.
  2. -
  3. ¿Cuáles son algunas de las mejores marcas de freidoras de aire?
    Hay muchas marcas de freidoras de aire en el mercado, cada una con sus propias características y ventajas. Algunas de las marcas más populares y altamente calificadas son Philips, Ninja, Cosori, Instant Pot y Cuisinart.
  4. -
  5. ¿Cómo limpio mi freidora de aire?
    Para limpiar tu freidora de aire, necesitas desenchufarla y dejar que se enfríe completamente. Luego, puede retirar la cesta y el cajón y lavarlos con agua tibia y jabón o en el lavavajillas. Puede limpiar el interior y el exterior de la freidora de aire con un paño húmedo o una esponja. También puede utilizar un cepillo suave o un palillo de dientes para eliminar cualquier residuo de comida del elemento calefactor.
  6. - -
  7. ¿Puedo enviar mis propias recetas a Cocina Freidora Recetas Apk?
    Sí, puede enviar sus propias recetas a Cocina Freidora Recetas Apk mediante el botón "Enviar receta" en la aplicación. También puedes calificar y revisar otras recetas, así como compartirlas con tus amigos en las redes sociales.
  8. -

-

64aa2da5cf
-
-
\ No newline at end of file diff --git a/spaces/Benson/text-generation/Examples/Descargar Amantes Y Mejores Amigos Azana.md b/spaces/Benson/text-generation/Examples/Descargar Amantes Y Mejores Amigos Azana.md deleted file mode 100644 index 74af737506531e61d30d55b72a2b51f346f22f64..0000000000000000000000000000000000000000 --- a/spaces/Benson/text-generation/Examples/Descargar Amantes Y Mejores Amigos Azana.md +++ /dev/null @@ -1,68 +0,0 @@ - -

Descargar Amantes y Mejores Amigos por Azana

-

Si usted está buscando una canción conmovedora y romántica para añadir a su lista de reproducción, es posible que desee echa un vistazo a "Amantes y mejores amigos" por Azana. Azana es una cantante y compositora sudafricana que ha cautivado a muchos oyentes con su mezcla de afro-pop, afro-house vocal y música soul. "Lovers and Best Friends" es una de sus canciones populares de su álbum debut Ingoma, que fue lanzado en 2020. La canción cuenta con Disciples of House, un dúo de talentosos productores que han trabajado con muchos artistas sudafricanos.

-

descargar amantes y mejores amigos azana


Download File ::: https://bltlly.com/2v6IJX



-

En este artículo, te contaremos más sobre Azana, su carrera musical, y el significado y mensaje de "Amantes y Mejores Amigos". También le mostraremos cómo descargar la canción legalmente y apoyar al artista. Si eres fan de Azana o simplemente tienes curiosidad por su música, sigue leyendo para saber más.

-

Biografía y carrera musical de Azana

-

El verdadero nombre de Azana es Makhosazana Masongo. Nació el 13 de septiembre de 2000, en Chesterville, Durban. Actualmente estudia derecho en la Universidad del Estado Libre. Descubrió su pasión por la música a una edad temprana y comenzó a cantar en los coros de la escuela y la iglesia. También admiraba a artistas como Beyoncé, Nina Simone, Camagwini, Simphiwe Dana y Letta Mbulu.

-

Su carrera musical despegó cuando firmó un contrato discográfico con Big City Dreams en 2019. Lanzó su primer single "Your Love" en mayo de 2020, que fue producido por Taffy Da Don. La canción fue un gran éxito y fue certificada doble platino por la Industria Discográfica de Sudáfrica (RiSA). Su álbum debut Ingoma siguió en julio de 2020. El álbum alcanzó el número uno en Apple Music Pop Chart y contó con artistas como Afriikan Papi, Disciples of House y Sun-El Musician.

- -

Azana ha recibido reconocimiento y aclamación por su música. Fue nominada al Mejor Álbum de Pop Afro y Recién Llegado del Año en el 27º South African Music Awards (SAMAs) en 2021. También ganó el premio a la Mejor Artista Femenina en los Mzansi Kwaito & House Music Awards (MKHMA) en 2021.

-

-

El significado y mensaje de "Amantes y Mejores Amigos"

-

"Amantes y Mejores Amigos" es una canción hermosa y sincera que celebra el vínculo entre dos personas que no solo son amantes sino también mejores amigos. La canción expresa la alegría y la gratitud de encontrar a alguien que te entiende, te apoya y te aprecia. La canción también reconoce los desafíos y luchas que vienen con cualquier relación, pero afirma el compromiso y la lealtad de los socios.

-

La letra de la canción es simple pero potente. Azana canta tanto en inglés como en zulú, creando un contraste y armonía entre los idiomas. Canta en el estribillo: "Tú eres mi amante y mi mejor amigo/ Tú eres mi todo/ Te amo más de lo que las palabras pueden decir/ Tú eres mi amante y mi mejor amigo/ Tú eres mi todo/ Nunca te dejaré ir". She also sings in Zulu: "Ngifuna wena wedwa/ Ngifuna wena wedwa/ Ngifuna wena wedwa/ Ngifuna wena wedwa" which means "I want you only/ I want you only/ I want you only/ I want you only".

-

La producción y el género de la canción están influenciados por Afro-house, un subgénero de música house que se originó en Sudáfrica. La canción tiene un ritmo pegadizo y optimista, con una mezcla de ritmos electrónicos, acordes de piano y percusión. La canción también cuenta con las voces de Disciples of House, que añaden una capa de armonía y profundidad a la canción. La canción es adecuada para bailar, relajarse o simplemente disfrutar de la música.

- -

Las mejores maneras de descargar y transmitir "Amantes y mejores amigos"

-

Si quieres descargar o transmitir "Lovers and Best Friends" de Azana, tienes muchas opciones para elegir. La canción está disponible en varias plataformas y servicios que ofrecen formas legales y éticas para acceder a la música. Estas son algunas de las mejores maneras de descargar o transmitir la canción:

- - - - - - - - - - - - - - - - - - - - - - -

Como puedes ver, hay muchos beneficios de descargar o transmitir "Amantes y Mejores Amigos" por Azana legal y éticamente. Usted puede disfrutar de la canción en alta calidad, apoyar al artista y la industria de la música, y descubrir más música que le gustaría. También puede evitar los riesgos de descarga ilegal, como virus, malware, demandas o multas.

-

Sin embargo, si prefieres no descargar o transmitir la canción, también puedes comprar el CD o vinilo de Ingoma by Azana, que incluye "Lovers and Best Friends" y otras canciones. Puede encontrar el CD o vinilo en línea o en tiendas físicas. Comprar el CD o vinilo también puede darte una copia física de las ilustraciones, letras y créditos del álbum. También puedes apoyar al artista comprando su mercancía, como camisetas, sudaderas, gorras o carteles.

-

Conclusión

-

En conclusión, "Lovers and Best Friends" de Azana es una maravillosa canción que celebra el amor y la amistad entre dos personas. Azana es una talentosa y prometedora cantante y compositora que ha impresionado a muchos fans y críticos con su álbum debut Ingoma. También ha colaborado con muchos otros artistas, como Sun-El Musician y Disciples of House. Si quieres descargar o transmitir "Lovers and Best Friends" de Azana, tienes muchas opciones para elegir. Puedes usar plataformas o servicios como Apple Music, Spotify, YouTube Music o Deezer. También puede comprar el CD o vinilo de Ingoma por Azana o su mercancía. Al hacerlo, puedes apoyar al artista y a la industria de la música, y disfrutar de la canción en alta calidad.

-

Esperamos que hayas disfrutado este artículo y hayas aprendido algo nuevo sobre Azana y su música. Si te ha gustado "Lovers and Best Friends" de Azana, puede que también te gusten otras canciones de ella o de artistas similares. Algunas de nuestras recomendaciones son:

- -

Preguntas frecuentes

-

Aquí hay algunas preguntas y respuestas frecuentes relacionadas con el tema:

-
    -
  1. ¿Cuál es el género de "Amantes y Mejores Amigos" de Azana?
    El género de "Amantes y Mejores Amigos" de Azana es Afro-house, un subgénero de música house que se originó en Sudáfrica.
  2. -
  3. ¿Quiénes son los artistas destacados en "Lovers and Best Friends" de Azana?
    Los artistas destacados en "Lovers and Best Friends" de Azana son Disciples of House, un dúo de productores que han trabajado con muchos artistas sudafricanos.
  4. -
  5. ¿Cuándo se lanzó "Lovers and Best Friends" de Azana?
    "Lovers and Best Friends" de Azana fue lanzado el 17 de julio de 2020, como parte de su álbum debut Ingoma.
  6. -
  7. ¿Cómo puedo descargar o transmitir "Amantes y mejores amigos" por Azana legal y éticamente?
    Puedes descargar o transmitir "Amantes y Mejores Amigos" por Azana legal y éticamente usando plataformas o servicios como Apple Music, Spotify, YouTube Music o Deezer. También puedes comprar el CD o vinilo de Ingoma de Azana o su mercancía.
  8. -
  9. ¿Cuáles son algunas otras canciones de Azana o artistas similares que me podrían gustar?
    Algunas otras canciones de Azana o artistas similares que te pueden gustar son: "Uhuru" de Sun-El Musician feat. Azana, "Mamela" de Mi Casa feat. Azana, "Uzobuya" de Sun-El Musician feat. Azana, "Your Love" de Azana, "Ngize Ngifike" de Sun-El Musician feat. Azana, "Okhokho Bethu" de Vico Da Sporo feat. Azana, "Jerusalema" de Master KG feat. Nomce bo Zikode, "Fetch Your Life" de Prince Kaybee feat. Msaki, "Banomoya" de Prince Kaybee feat. Busiswa y TNS, y "Drive" de Black Coffee feat. David Guetta y Delilah Montagu.
  10. -

64aa2da5cf
-
-
\ No newline at end of file diff --git a/spaces/Bishan/Speech_To_Text_Hindi/app.py b/spaces/Bishan/Speech_To_Text_Hindi/app.py deleted file mode 100644 index 6945c6b95473e6078cc449e477d871d16c9c2244..0000000000000000000000000000000000000000 --- a/spaces/Bishan/Speech_To_Text_Hindi/app.py +++ /dev/null @@ -1,83 +0,0 @@ -import soundfile as sf -import torch -from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor,Wav2Vec2ProcessorWithLM -import gradio as gr -import sox -import subprocess -import time - - -def read_file_and_process(wav_file): - filename = wav_file.split('.')[0] - filename_16k = filename + "16k.wav" - resampler(wav_file, filename_16k) - speech, _ = sf.read(filename_16k) - print("---------------------------------------------------------") - print(speech) - inputs = processor(speech, sampling_rate=16_000, return_tensors="pt", padding=True) - print("---------------------------------------------------------") - print(inputs) - - return inputs - - -def resampler(input_file_path, output_file_path): - command = ( - f"ffmpeg -hide_banner -loglevel panic -i {input_file_path} -ar 16000 -ac 1 -bits_per_raw_sample 16 -vn " - f"{output_file_path}" - ) - subprocess.call(command, shell=True) - - -def parse_transcription_with_lm(logits): - result = processor_with_LM.batch_decode(logits.cpu().numpy()) - text = result.text - transcription = text[0].replace('','') - return transcription - -def parse_transcription(logits): - predicted_ids = torch.argmax(logits, dim=-1) - transcription = processor.decode(predicted_ids[0], skip_special_tokens=True) - return transcription - -def parse(wav_file, applyLM): - - # record start time - start = time.time() - input_values = read_file_and_process(wav_file) - with torch.no_grad(): - logits = model(**input_values).logits - - # if applyLM: - # return parse_transcription_with_lm(logits) - # else: - # return parse_transcription(logits) - - output = parse_transcription(logits) - # record end time - end = time.time() - print("------------------------------------------------------------------------------------------") - print("The time of execution of above program is :",(end-start) * 10**3, "ms") - # total time taken - print("Execution time of the program is- ", end-start) - print("------------------------------------------------------------------------------------------") - return output - - -model_id = "Harveenchadha/vakyansh-wav2vec2-hindi-him-4200" -processor = Wav2Vec2Processor.from_pretrained(model_id) -processor_with_LM = Wav2Vec2ProcessorWithLM.from_pretrained(model_id) -model = Wav2Vec2ForCTC.from_pretrained(model_id) - - -input_ = gr.Audio(source="upload", type="filepath") -txtbox = gr.Textbox( - label="Output from model will appear here:", - lines=5 - ) -chkbox = gr.Checkbox(label="Apply LM", value=False) - - -gr.Interface(parse, inputs = [input_, chkbox], outputs=txtbox, - streaming=True, interactive=True, - analytics_enabled=False, show_tips=False, enable_queue=True).launch(inline=False); \ No newline at end of file diff --git a/spaces/Buatong/Computing/app.py b/spaces/Buatong/Computing/app.py deleted file mode 100644 index a699bc5b3c2e987102ca93e0ee28d601e0a93d02..0000000000000000000000000000000000000000 --- a/spaces/Buatong/Computing/app.py +++ /dev/null @@ -1,7 +0,0 @@ -import gradio as gr - -def greet(name): - return "Hello " + name + "!!" 
- -iface = gr.Interface(fn=greet, inputs="text", outputs="text") -iface.launch() \ No newline at end of file diff --git a/spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/projects/DensePose/tests/test_model_e2e.py b/spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/projects/DensePose/tests/test_model_e2e.py deleted file mode 100644 index eed131080547d84185c1d33913014a2c977b119f..0000000000000000000000000000000000000000 --- a/spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/projects/DensePose/tests/test_model_e2e.py +++ /dev/null @@ -1,43 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved. - -import unittest -import torch - -from detectron2.structures import BitMasks, Boxes, Instances - -from .common import get_model - - -# TODO(plabatut): Modularize detectron2 tests and re-use -def make_model_inputs(image, instances=None): - if instances is None: - return {"image": image} - - return {"image": image, "instances": instances} - - -def make_empty_instances(h, w): - instances = Instances((h, w)) - instances.gt_boxes = Boxes(torch.rand(0, 4)) - instances.gt_classes = torch.tensor([]).to(dtype=torch.int64) - instances.gt_masks = BitMasks(torch.rand(0, h, w)) - return instances - - -class ModelE2ETest(unittest.TestCase): - CONFIG_PATH = "" - - def setUp(self): - self.model = get_model(self.CONFIG_PATH) - - def _test_eval(self, sizes): - inputs = [make_model_inputs(torch.rand(3, size[0], size[1])) for size in sizes] - self.model.eval() - self.model(inputs) - - -class DensePoseRCNNE2ETest(ModelE2ETest): - CONFIG_PATH = "densepose_rcnn_R_101_FPN_s1x.yaml" - - def test_empty_data(self): - self._test_eval([(200, 250), (200, 249)]) diff --git a/spaces/CVPR/LIVE/pybind11/include/pybind11/options.h b/spaces/CVPR/LIVE/pybind11/include/pybind11/options.h deleted file mode 100644 index d74db1c68dddb3436cc0fb2674a6ef32ac77d5fd..0000000000000000000000000000000000000000 --- a/spaces/CVPR/LIVE/pybind11/include/pybind11/options.h +++ /dev/null @@ -1,65 +0,0 @@ -/* - pybind11/options.h: global settings that are configurable at runtime. - - Copyright (c) 2016 Wenzel Jakob - - All rights reserved. Use of this source code is governed by a - BSD-style license that can be found in the LICENSE file. -*/ - -#pragma once - -#include "detail/common.h" - -PYBIND11_NAMESPACE_BEGIN(PYBIND11_NAMESPACE) - -class options { -public: - - // Default RAII constructor, which leaves settings as they currently are. - options() : previous_state(global_state()) {} - - // Class is non-copyable. - options(const options&) = delete; - options& operator=(const options&) = delete; - - // Destructor, which restores settings that were in effect before. 
- ~options() { - global_state() = previous_state; - } - - // Setter methods (affect the global state): - - options& disable_user_defined_docstrings() & { global_state().show_user_defined_docstrings = false; return *this; } - - options& enable_user_defined_docstrings() & { global_state().show_user_defined_docstrings = true; return *this; } - - options& disable_function_signatures() & { global_state().show_function_signatures = false; return *this; } - - options& enable_function_signatures() & { global_state().show_function_signatures = true; return *this; } - - // Getter methods (return the global state): - - static bool show_user_defined_docstrings() { return global_state().show_user_defined_docstrings; } - - static bool show_function_signatures() { return global_state().show_function_signatures; } - - // This type is not meant to be allocated on the heap. - void* operator new(size_t) = delete; - -private: - - struct state { - bool show_user_defined_docstrings = true; //< Include user-supplied texts in docstrings. - bool show_function_signatures = true; //< Include auto-generated function signatures in docstrings. - }; - - static state &global_state() { - static state instance; - return instance; - } - - state previous_state; -}; - -PYBIND11_NAMESPACE_END(PYBIND11_NAMESPACE) diff --git a/spaces/CVPR/LIVE/pybind11/tests/test_call_policies.cpp b/spaces/CVPR/LIVE/pybind11/tests/test_call_policies.cpp deleted file mode 100644 index 26c83f81b0ed370365d48279a4b8f3d4d23b5487..0000000000000000000000000000000000000000 --- a/spaces/CVPR/LIVE/pybind11/tests/test_call_policies.cpp +++ /dev/null @@ -1,101 +0,0 @@ -/* - tests/test_call_policies.cpp -- keep_alive and call_guard - - Copyright (c) 2016 Wenzel Jakob - - All rights reserved. Use of this source code is governed by a - BSD-style license that can be found in the LICENSE file. -*/ - -#include "pybind11_tests.h" - -struct CustomGuard { - static bool enabled; - - CustomGuard() { enabled = true; } - ~CustomGuard() { enabled = false; } - - static const char *report_status() { return enabled ? "guarded" : "unguarded"; } -}; -bool CustomGuard::enabled = false; - -struct DependentGuard { - static bool enabled; - - DependentGuard() { enabled = CustomGuard::enabled; } - ~DependentGuard() { enabled = false; } - - static const char *report_status() { return enabled ? 
"guarded" : "unguarded"; } -}; -bool DependentGuard::enabled = false; - -TEST_SUBMODULE(call_policies, m) { - // Parent/Child are used in: - // test_keep_alive_argument, test_keep_alive_return_value, test_alive_gc_derived, - // test_alive_gc_multi_derived, test_return_none, test_keep_alive_constructor - class Child { - public: - Child() { py::print("Allocating child."); } - Child(const Child &) = default; - Child(Child &&) = default; - ~Child() { py::print("Releasing child."); } - }; - py::class_(m, "Child") - .def(py::init<>()); - - class Parent { - public: - Parent() { py::print("Allocating parent."); } - Parent(const Parent& parent) = default; - ~Parent() { py::print("Releasing parent."); } - void addChild(Child *) { } - Child *returnChild() { return new Child(); } - Child *returnNullChild() { return nullptr; } - }; - py::class_(m, "Parent") - .def(py::init<>()) - .def(py::init([](Child *) { return new Parent(); }), py::keep_alive<1, 2>()) - .def("addChild", &Parent::addChild) - .def("addChildKeepAlive", &Parent::addChild, py::keep_alive<1, 2>()) - .def("returnChild", &Parent::returnChild) - .def("returnChildKeepAlive", &Parent::returnChild, py::keep_alive<1, 0>()) - .def("returnNullChildKeepAliveChild", &Parent::returnNullChild, py::keep_alive<1, 0>()) - .def("returnNullChildKeepAliveParent", &Parent::returnNullChild, py::keep_alive<0, 1>()); - -#if !defined(PYPY_VERSION) - // test_alive_gc - class ParentGC : public Parent { - public: - using Parent::Parent; - }; - py::class_(m, "ParentGC", py::dynamic_attr()) - .def(py::init<>()); -#endif - - // test_call_guard - m.def("unguarded_call", &CustomGuard::report_status); - m.def("guarded_call", &CustomGuard::report_status, py::call_guard()); - - m.def("multiple_guards_correct_order", []() { - return CustomGuard::report_status() + std::string(" & ") + DependentGuard::report_status(); - }, py::call_guard()); - - m.def("multiple_guards_wrong_order", []() { - return DependentGuard::report_status() + std::string(" & ") + CustomGuard::report_status(); - }, py::call_guard()); - -#if defined(WITH_THREAD) && !defined(PYPY_VERSION) - // `py::call_guard()` should work in PyPy as well, - // but it's unclear how to test it without `PyGILState_GetThisThreadState`. - auto report_gil_status = []() { - auto is_gil_held = false; - if (auto tstate = py::detail::get_thread_state_unchecked()) - is_gil_held = (tstate == PyGILState_GetThisThreadState()); - - return is_gil_held ? "GIL held" : "GIL released"; - }; - - m.def("with_gil", report_gil_status); - m.def("without_gil", report_gil_status, py::call_guard()); -#endif -} diff --git a/spaces/CVPR/LIVE/thrust/thrust/advance.h b/spaces/CVPR/LIVE/thrust/thrust/advance.h deleted file mode 100644 index d077e04345daea987044eab83a9e722ca956f19a..0000000000000000000000000000000000000000 --- a/spaces/CVPR/LIVE/thrust/thrust/advance.h +++ /dev/null @@ -1,141 +0,0 @@ -/* - * Copyright 2008-2013 NVIDIA Corporation - * - * Licensed under the Apache License, Version 2.0 (the "License"); - * you may not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ - - -/*! 
\file advance.h - * \brief Advance an iterator by a given distance. - */ - -#pragma once - -#include - -namespace thrust -{ - -/*! \addtogroup iterators - * \{ - */ - -/*! \p advance(i, n) increments the iterator \p i by the distance \p n. - * If n > 0 it is equivalent to executing ++i \p n - * times, and if n < 0 it is equivalent to executing --i - * \p n times. If n == 0, the call has no effect. - * - * \param i The iterator to be advanced. - * \param n The distance by which to advance the iterator. - * - * \tparam InputIterator is a model of Input Iterator. - * \tparam Distance is an integral type that is convertible to \p InputIterator's distance type. - * - * \pre \p n shall be negative only for bidirectional and random access iterators. - * - * The following code snippet demonstrates how to use \p advance to increment - * an iterator a given number of times. - * - * \code - * #include - * #include - * ... - * thrust::device_vector vec(13); - * thrust::device_vector::iterator iter = vec.begin(); - * - * thrust::advance(iter, 7); - * - * // iter - vec.begin() == 7 - * \endcode - * - * \see http://www.sgi.com/tech/stl/advance.html - */ -template -__host__ __device__ -void advance(InputIterator& i, Distance n); - -/*! \p next(i, n) returns the \p n th successor of the iterator \p i. - * - * \param i An iterator. - * \param n The number of elements to advance. - * - * \tparam InputIterator must meet the InputIterator. - * - * \pre \p n shall be negative only for bidirectional and random access iterators. - * - * The following code snippet demonstrates how to use \p next. - * - * \code - * #include - * #include - * ... - * thrust::device_vector vec(13); - * thrust::device_vector::iterator i0 = vec.begin(); - * - * auto i1 = thrust::next(i0); - * - * // i0 - vec.begin() == 0 - * // i1 - vec.begin() == 1 - * \endcode - * - * \see https://en.cppreference.com/w/cpp/iterator/next - */ -#if 0 // Doxygen only -template -__host__ __device__ -InputIterator next( - InputIterator i -, typename iterator_traits::difference_type n = 1 -); -#endif - -/*! \p prev(i, n) returns the \p n th predecessor of the iterator \p i. - * - * \param i An iterator. - * \param n The number of elements to descend. - * - * \tparam BidirectionalIterator must meet the BidirectionalIterator. - * - * The following code snippet demonstrates how to use \p prev. - * - * \code - * #include - * #include - * ... - * thrust::device_vector vec(13); - * thrust::device_vector::iterator i0 = vec.end(); - * - * auto i1 = thrust::prev(i0); - * - * // vec.end() - i0 == 0 - * // vec.end() - i1 == 1 - * \endcode - * - * \see https://en.cppreference.com/w/cpp/iterator/prev - */ -#if 0 // Doxygen only -template -__host__ __device__ -BidirectionalIterator prev( - BidirectionalIterator i -, typename iterator_traits::difference_type n = 1 -); -#endif - -/*! \} // end iterators - */ - -} // end thrust - -#include - diff --git a/spaces/CVPR/LIVE/thrust/thrust/system/detail/generic/adjacent_difference.h b/spaces/CVPR/LIVE/thrust/thrust/system/detail/generic/adjacent_difference.h deleted file mode 100644 index 6e4caaa88b904788d3a7e026bf487c01f74348e2..0000000000000000000000000000000000000000 --- a/spaces/CVPR/LIVE/thrust/thrust/system/detail/generic/adjacent_difference.h +++ /dev/null @@ -1,58 +0,0 @@ -/* - * Copyright 2008-2013 NVIDIA Corporation - * - * Licensed under the Apache License, Version 2.0 (the "License"); - * you may not use this file except in compliance with the License. 
- * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ - - -/*! \file adjacent_difference.h - * \brief Generic implementation of adjacent_difference. - */ - -#pragma once - -#include -#include - -namespace thrust -{ -namespace system -{ -namespace detail -{ -namespace generic -{ - - -template -__host__ __device__ -OutputIterator adjacent_difference(thrust::execution_policy &exec, - InputIterator first, InputIterator last, - OutputIterator result); - - -template -__host__ __device__ -OutputIterator adjacent_difference(thrust::execution_policy &exec, - InputIterator first, InputIterator last, - OutputIterator result, - BinaryFunction binary_op); - - -} // end namespace generic -} // end namespace detail -} // end namespace system -} // end namespace thrust - -#include - diff --git a/spaces/CVPR/regionclip-demo/detectron2/modeling/proposal_generator/rrpn.py b/spaces/CVPR/regionclip-demo/detectron2/modeling/proposal_generator/rrpn.py deleted file mode 100644 index 6ee4d8fd70430c5242cc02a1df8400493ffd75b7..0000000000000000000000000000000000000000 --- a/spaces/CVPR/regionclip-demo/detectron2/modeling/proposal_generator/rrpn.py +++ /dev/null @@ -1,203 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -import itertools -import logging -from typing import Dict, List -import torch - -from detectron2.config import configurable -from detectron2.layers import ShapeSpec, batched_nms_rotated, cat -from detectron2.structures import Instances, RotatedBoxes, pairwise_iou_rotated -from detectron2.utils.memory import retry_if_cuda_oom - -from ..box_regression import Box2BoxTransformRotated -from .build import PROPOSAL_GENERATOR_REGISTRY -from .rpn import RPN - -logger = logging.getLogger(__name__) - - -def find_top_rrpn_proposals( - proposals, - pred_objectness_logits, - image_sizes, - nms_thresh, - pre_nms_topk, - post_nms_topk, - min_box_size, - training, -): - """ - For each feature map, select the `pre_nms_topk` highest scoring proposals, - apply NMS, clip proposals, and remove small boxes. Return the `post_nms_topk` - highest scoring proposals among all the feature maps if `training` is True, - otherwise, returns the highest `post_nms_topk` scoring proposals for each - feature map. - - Args: - proposals (list[Tensor]): A list of L tensors. Tensor i has shape (N, Hi*Wi*A, 5). - All proposal predictions on the feature maps. - pred_objectness_logits (list[Tensor]): A list of L tensors. Tensor i has shape (N, Hi*Wi*A). - image_sizes (list[tuple]): sizes (h, w) for each image - nms_thresh (float): IoU threshold to use for NMS - pre_nms_topk (int): number of top k scoring proposals to keep before applying NMS. - When RRPN is run on multiple feature maps (as in FPN) this number is per - feature map. - post_nms_topk (int): number of top k scoring proposals to keep after applying NMS. - When RRPN is run on multiple feature maps (as in FPN) this number is total, - over all feature maps. - min_box_size(float): minimum proposal box side length in pixels (absolute units wrt - input images). - training (bool): True if proposals are to be used in training, otherwise False. 
- This arg exists only to support a legacy bug; look for the "NB: Legacy bug ..." - comment. - - Returns: - proposals (list[Instances]): list of N Instances. The i-th Instances - stores post_nms_topk object proposals for image i. - """ - num_images = len(image_sizes) - device = proposals[0].device - - # 1. Select top-k anchor for every level and every image - topk_scores = [] # #lvl Tensor, each of shape N x topk - topk_proposals = [] - level_ids = [] # #lvl Tensor, each of shape (topk,) - batch_idx = torch.arange(num_images, device=device) - for level_id, proposals_i, logits_i in zip( - itertools.count(), proposals, pred_objectness_logits - ): - Hi_Wi_A = logits_i.shape[1] - num_proposals_i = min(pre_nms_topk, Hi_Wi_A) - - # sort is faster than topk (https://github.com/pytorch/pytorch/issues/22812) - # topk_scores_i, topk_idx = logits_i.topk(num_proposals_i, dim=1) - logits_i, idx = logits_i.sort(descending=True, dim=1) - topk_scores_i = logits_i[batch_idx, :num_proposals_i] - topk_idx = idx[batch_idx, :num_proposals_i] - - # each is N x topk - topk_proposals_i = proposals_i[batch_idx[:, None], topk_idx] # N x topk x 5 - - topk_proposals.append(topk_proposals_i) - topk_scores.append(topk_scores_i) - level_ids.append(torch.full((num_proposals_i,), level_id, dtype=torch.int64, device=device)) - - # 2. Concat all levels together - topk_scores = cat(topk_scores, dim=1) - topk_proposals = cat(topk_proposals, dim=1) - level_ids = cat(level_ids, dim=0) - - # 3. For each image, run a per-level NMS, and choose topk results. - results = [] - for n, image_size in enumerate(image_sizes): - boxes = RotatedBoxes(topk_proposals[n]) - scores_per_img = topk_scores[n] - valid_mask = torch.isfinite(boxes.tensor).all(dim=1) & torch.isfinite(scores_per_img) - if not valid_mask.all(): - boxes = boxes[valid_mask] - scores_per_img = scores_per_img[valid_mask] - boxes.clip(image_size) - - # filter empty boxes - keep = boxes.nonempty(threshold=min_box_size) - lvl = level_ids - if keep.sum().item() != len(boxes): - boxes, scores_per_img, lvl = (boxes[keep], scores_per_img[keep], level_ids[keep]) - - keep = batched_nms_rotated(boxes.tensor, scores_per_img, lvl, nms_thresh) - # In Detectron1, there was different behavior during training vs. testing. - # (https://github.com/facebookresearch/Detectron/issues/459) - # During training, topk is over the proposals from *all* images in the training batch. - # During testing, it is over the proposals for each image separately. - # As a result, the training behavior becomes batch-dependent, - # and the configuration "POST_NMS_TOPK_TRAIN" end up relying on the batch size. - # This bug is addressed in Detectron2 to make the behavior independent of batch size. - keep = keep[:post_nms_topk] - - res = Instances(image_size) - res.proposal_boxes = boxes[keep] - res.objectness_logits = scores_per_img[keep] - results.append(res) - return results - - -@PROPOSAL_GENERATOR_REGISTRY.register() -class RRPN(RPN): - """ - Rotated Region Proposal Network described in :paper:`RRPN`. - """ - - @configurable - def __init__(self, *args, **kwargs): - super().__init__(*args, **kwargs) - if self.anchor_boundary_thresh >= 0: - raise NotImplementedError( - "anchor_boundary_thresh is a legacy option not implemented for RRPN." 
- ) - - @classmethod - def from_config(cls, cfg, input_shape: Dict[str, ShapeSpec]): - ret = super().from_config(cfg, input_shape) - ret["box2box_transform"] = Box2BoxTransformRotated(weights=cfg.MODEL.RPN.BBOX_REG_WEIGHTS) - return ret - - @torch.no_grad() - def label_and_sample_anchors(self, anchors: List[RotatedBoxes], gt_instances: List[Instances]): - """ - Args: - anchors (list[RotatedBoxes]): anchors for each feature map. - gt_instances: the ground-truth instances for each image. - - Returns: - list[Tensor]: - List of #img tensors. i-th element is a vector of labels whose length is - the total number of anchors across feature maps. Label values are in {-1, 0, 1}, - with meanings: -1 = ignore; 0 = negative class; 1 = positive class. - list[Tensor]: - i-th element is a Nx5 tensor, where N is the total number of anchors across - feature maps. The values are the matched gt boxes for each anchor. - Values are undefined for those anchors not labeled as 1. - """ - anchors = RotatedBoxes.cat(anchors) - - gt_boxes = [x.gt_boxes for x in gt_instances] - del gt_instances - - gt_labels = [] - matched_gt_boxes = [] - for gt_boxes_i in gt_boxes: - """ - gt_boxes_i: ground-truth boxes for i-th image - """ - match_quality_matrix = retry_if_cuda_oom(pairwise_iou_rotated)(gt_boxes_i, anchors) - matched_idxs, gt_labels_i = retry_if_cuda_oom(self.anchor_matcher)(match_quality_matrix) - # Matching is memory-expensive and may result in CPU tensors. But the result is small - gt_labels_i = gt_labels_i.to(device=gt_boxes_i.device) - - # A vector of labels (-1, 0, 1) for each anchor - gt_labels_i = self._subsample_labels(gt_labels_i) - - if len(gt_boxes_i) == 0: - # These values won't be used anyway since the anchor is labeled as background - matched_gt_boxes_i = torch.zeros_like(anchors.tensor) - else: - # TODO wasted indexing computation for ignored boxes - matched_gt_boxes_i = gt_boxes_i[matched_idxs].tensor - - gt_labels.append(gt_labels_i) # N,AHW - matched_gt_boxes.append(matched_gt_boxes_i) - return gt_labels, matched_gt_boxes - - @torch.no_grad() - def predict_proposals(self, anchors, pred_objectness_logits, pred_anchor_deltas, image_sizes): - pred_proposals = self._decode_proposals(anchors, pred_anchor_deltas) - return find_top_rrpn_proposals( - pred_proposals, - pred_objectness_logits, - image_sizes, - self.nms_thresh, - self.pre_nms_topk[self.training], - self.post_nms_topk[self.training], - self.min_box_size, - self.training, - ) diff --git a/spaces/CanonOverseer/Canons-Den/Dockerfile b/spaces/CanonOverseer/Canons-Den/Dockerfile deleted file mode 100644 index eef259fa372a804549fb0af0913718a13344da34..0000000000000000000000000000000000000000 --- a/spaces/CanonOverseer/Canons-Den/Dockerfile +++ /dev/null @@ -1,11 +0,0 @@ -FROM node:18-bullseye-slim -RUN apt-get update && \ - apt-get install -y git -RUN git clone https://gitgud.io/khanon/oai-reverse-proxy.git /app -WORKDIR /app -RUN npm install -COPY Dockerfile greeting.md* .env* ./ -RUN npm run build -EXPOSE 7860 -ENV NODE_ENV=production -CMD [ "npm", "start" ] diff --git a/spaces/CarlDennis/Lovelive-VITS-JPZH/text/korean.py b/spaces/CarlDennis/Lovelive-VITS-JPZH/text/korean.py deleted file mode 100644 index 4b6c3fb27532ae6c033023de8a32fc7379bb5431..0000000000000000000000000000000000000000 --- a/spaces/CarlDennis/Lovelive-VITS-JPZH/text/korean.py +++ /dev/null @@ -1,205 +0,0 @@ -import re -from jamo import h2j, j2hcj -import ko_pron - - -# This is a list of Korean classifiers preceded by pure Korean numerals. 
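-# number_to_hangul() below spells a number with native-Korean numerals (sino=False) when it is
-# followed by one of these classifiers, and with Sino-Korean numerals (sino=True) otherwise.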
-_korean_classifiers = '군데 권 개 그루 닢 대 두 마리 모 모금 뭇 발 발짝 방 번 벌 보루 살 수 술 시 쌈 움큼 정 짝 채 척 첩 축 켤레 톨 통' - -# List of (hangul, hangul divided) pairs: -_hangul_divided = [(re.compile('%s' % x[0]), x[1]) for x in [ - ('ㄳ', 'ㄱㅅ'), - ('ㄵ', 'ㄴㅈ'), - ('ㄶ', 'ㄴㅎ'), - ('ㄺ', 'ㄹㄱ'), - ('ㄻ', 'ㄹㅁ'), - ('ㄼ', 'ㄹㅂ'), - ('ㄽ', 'ㄹㅅ'), - ('ㄾ', 'ㄹㅌ'), - ('ㄿ', 'ㄹㅍ'), - ('ㅀ', 'ㄹㅎ'), - ('ㅄ', 'ㅂㅅ'), - ('ㅘ', 'ㅗㅏ'), - ('ㅙ', 'ㅗㅐ'), - ('ㅚ', 'ㅗㅣ'), - ('ㅝ', 'ㅜㅓ'), - ('ㅞ', 'ㅜㅔ'), - ('ㅟ', 'ㅜㅣ'), - ('ㅢ', 'ㅡㅣ'), - ('ㅑ', 'ㅣㅏ'), - ('ㅒ', 'ㅣㅐ'), - ('ㅕ', 'ㅣㅓ'), - ('ㅖ', 'ㅣㅔ'), - ('ㅛ', 'ㅣㅗ'), - ('ㅠ', 'ㅣㅜ') -]] - -# List of (Latin alphabet, hangul) pairs: -_latin_to_hangul = [(re.compile('%s' % x[0], re.IGNORECASE), x[1]) for x in [ - ('a', '에이'), - ('b', '비'), - ('c', '시'), - ('d', '디'), - ('e', '이'), - ('f', '에프'), - ('g', '지'), - ('h', '에이치'), - ('i', '아이'), - ('j', '제이'), - ('k', '케이'), - ('l', '엘'), - ('m', '엠'), - ('n', '엔'), - ('o', '오'), - ('p', '피'), - ('q', '큐'), - ('r', '아르'), - ('s', '에스'), - ('t', '티'), - ('u', '유'), - ('v', '브이'), - ('w', '더블유'), - ('x', '엑스'), - ('y', '와이'), - ('z', '제트') -]] - -# List of (ipa, lazy ipa) pairs: -_ipa_to_lazy_ipa = [(re.compile('%s' % x[0], re.IGNORECASE), x[1]) for x in [ - ('t͡ɕ','ʧ'), - ('d͡ʑ','ʥ'), - ('ɲ','n^'), - ('ɕ','ʃ'), - ('ʷ','w'), - ('ɭ','l`'), - ('ʎ','ɾ'), - ('ɣ','ŋ'), - ('ɰ','ɯ'), - ('ʝ','j'), - ('ʌ','ə'), - ('ɡ','g'), - ('\u031a','#'), - ('\u0348','='), - ('\u031e',''), - ('\u0320',''), - ('\u0339','') -]] - - -def latin_to_hangul(text): - for regex, replacement in _latin_to_hangul: - text = re.sub(regex, replacement, text) - return text - - -def divide_hangul(text): - text = j2hcj(h2j(text)) - for regex, replacement in _hangul_divided: - text = re.sub(regex, replacement, text) - return text - - -def hangul_number(num, sino=True): - '''Reference https://github.com/Kyubyong/g2pK''' - num = re.sub(',', '', num) - - if num == '0': - return '영' - if not sino and num == '20': - return '스무' - - digits = '123456789' - names = '일이삼사오육칠팔구' - digit2name = {d: n for d, n in zip(digits, names)} - - modifiers = '한 두 세 네 다섯 여섯 일곱 여덟 아홉' - decimals = '열 스물 서른 마흔 쉰 예순 일흔 여든 아흔' - digit2mod = {d: mod for d, mod in zip(digits, modifiers.split())} - digit2dec = {d: dec for d, dec in zip(digits, decimals.split())} - - spelledout = [] - for i, digit in enumerate(num): - i = len(num) - i - 1 - if sino: - if i == 0: - name = digit2name.get(digit, '') - elif i == 1: - name = digit2name.get(digit, '') + '십' - name = name.replace('일십', '십') - else: - if i == 0: - name = digit2mod.get(digit, '') - elif i == 1: - name = digit2dec.get(digit, '') - if digit == '0': - if i % 4 == 0: - last_three = spelledout[-min(3, len(spelledout)):] - if ''.join(last_three) == '': - spelledout.append('') - continue - else: - spelledout.append('') - continue - if i == 2: - name = digit2name.get(digit, '') + '백' - name = name.replace('일백', '백') - elif i == 3: - name = digit2name.get(digit, '') + '천' - name = name.replace('일천', '천') - elif i == 4: - name = digit2name.get(digit, '') + '만' - name = name.replace('일만', '만') - elif i == 5: - name = digit2name.get(digit, '') + '십' - name = name.replace('일십', '십') - elif i == 6: - name = digit2name.get(digit, '') + '백' - name = name.replace('일백', '백') - elif i == 7: - name = digit2name.get(digit, '') + '천' - name = name.replace('일천', '천') - elif i == 8: - name = digit2name.get(digit, '') + '억' - elif i == 9: - name = digit2name.get(digit, '') + '십' - elif i == 10: - name = digit2name.get(digit, '') + '백' - elif i == 11: - name = digit2name.get(digit, '') + '천' - elif i == 12: - name 
= digit2name.get(digit, '') + '조' - elif i == 13: - name = digit2name.get(digit, '') + '십' - elif i == 14: - name = digit2name.get(digit, '') + '백' - elif i == 15: - name = digit2name.get(digit, '') + '천' - spelledout.append(name) - return ''.join(elem for elem in spelledout) - - -def number_to_hangul(text): - '''Reference https://github.com/Kyubyong/g2pK''' - tokens = set(re.findall(r'(\d[\d,]*)([\uac00-\ud71f]+)', text)) - for token in tokens: - num, classifier = token - if classifier[:2] in _korean_classifiers or classifier[0] in _korean_classifiers: - spelledout = hangul_number(num, sino=False) - else: - spelledout = hangul_number(num, sino=True) - text = text.replace(f'{num}{classifier}', f'{spelledout}{classifier}') - # digit by digit for remaining digits - digits = '0123456789' - names = '영일이삼사오육칠팔구' - for d, n in zip(digits, names): - text = text.replace(d, n) - return text - - -def korean_to_lazy_ipa(text): - text = latin_to_hangul(text) - text = number_to_hangul(text) - text=re.sub('[\uac00-\ud7af]+',lambda x:ko_pron.romanise(x.group(0),'ipa'),text).split('] ~ [')[0] - for regex, replacement in _ipa_to_lazy_ipa: - text = re.sub(regex, replacement, text) - return text diff --git a/spaces/ChandraMohanNayal/AutoGPT/autogpt/__main__.py b/spaces/ChandraMohanNayal/AutoGPT/autogpt/__main__.py deleted file mode 100644 index 128f9eea4900429e88276abdde3419b806001ac7..0000000000000000000000000000000000000000 --- a/spaces/ChandraMohanNayal/AutoGPT/autogpt/__main__.py +++ /dev/null @@ -1,5 +0,0 @@ -"""Auto-GPT: A GPT powered AI Assistant""" -import autogpt.cli - -if __name__ == "__main__": - autogpt.cli.main() diff --git a/spaces/ChrisCaviar/ControlNet-v1-1/app_segmentation.py b/spaces/ChrisCaviar/ControlNet-v1-1/app_segmentation.py deleted file mode 100644 index f120db46f7387c76829d987cb9640cc626b1231a..0000000000000000000000000000000000000000 --- a/spaces/ChrisCaviar/ControlNet-v1-1/app_segmentation.py +++ /dev/null @@ -1,104 +0,0 @@ -#!/usr/bin/env python - -import gradio as gr - -from utils import randomize_seed_fn - - -def create_demo(process, max_images=12, default_num_images=3): - with gr.Blocks() as demo: - with gr.Row(): - with gr.Column(): - image = gr.Image() - prompt = gr.Textbox(label='Prompt') - run_button = gr.Button('Run') - with gr.Accordion('Advanced options', open=False): - preprocessor_name = gr.Radio(label='Preprocessor', - choices=['UPerNet', 'None'], - type='value', - value='UPerNet') - num_samples = gr.Slider(label='Number of images', - minimum=1, - maximum=max_images, - value=default_num_images, - step=1) - image_resolution = gr.Slider(label='Image resolution', - minimum=256, - maximum=512, - value=512, - step=256) - preprocess_resolution = gr.Slider( - label='Preprocess resolution', - minimum=128, - maximum=512, - value=512, - step=1) - num_steps = gr.Slider(label='Number of steps', - minimum=1, - maximum=100, - value=20, - step=1) - guidance_scale = gr.Slider(label='Guidance scale', - minimum=0.1, - maximum=30.0, - value=9.0, - step=0.1) - seed = gr.Slider(label='Seed', - minimum=0, - maximum=1000000, - step=1, - value=0, - randomize=True) - randomize_seed = gr.Checkbox(label='Randomize seed', - value=True) - a_prompt = gr.Textbox( - label='Additional prompt', - value='best quality, extremely detailed') - n_prompt = gr.Textbox( - label='Negative prompt', - value= - 'longbody, lowres, bad anatomy, bad hands, missing fingers, extra digit, fewer digits, cropped, worst quality, low quality' - ) - with gr.Column(): - result = gr.Gallery(label='Output', 
show_label=False).style( - columns=2, object_fit='scale-down') - inputs = [ - image, - prompt, - a_prompt, - n_prompt, - num_samples, - image_resolution, - preprocess_resolution, - num_steps, - guidance_scale, - seed, - preprocessor_name, - ] - prompt.submit( - fn=randomize_seed_fn, - inputs=[seed, randomize_seed], - outputs=seed, - ).then( - fn=process, - inputs=inputs, - outputs=result, - ) - run_button.click( - fn=randomize_seed_fn, - inputs=[seed, randomize_seed], - outputs=seed, - ).then( - fn=process, - inputs=inputs, - outputs=result, - api_name='segmentation', - ) - return demo - - -if __name__ == '__main__': - from model import Model - model = Model(task_name='segmentation') - demo = create_demo(model.process_segmentation) - demo.queue().launch() diff --git a/spaces/CikeyQI/Yunzai/Yunzai/plugins/ws-plugin/model/red/tool.js b/spaces/CikeyQI/Yunzai/Yunzai/plugins/ws-plugin/model/red/tool.js deleted file mode 100644 index 70685f9a403ce195c0d8770fa0d88d19176d427c..0000000000000000000000000000000000000000 --- a/spaces/CikeyQI/Yunzai/Yunzai/plugins/ws-plugin/model/red/tool.js +++ /dev/null @@ -1,428 +0,0 @@ -import fs from 'fs' -import { createHash, randomUUID } from 'crypto' -import { resolve, join, dirname, basename } from 'path' -import fetch, { FormData, Blob } from 'node-fetch' -import { fileURLToPath } from 'url' -import { exec, spawn } from 'child_process' -import os from 'os' -import _ from 'lodash' -import { Stream } from "stream" -import YAML from 'yaml' -import { TMP_DIR } from '../tool.js' - -const user = os.userInfo().username -let redPath = `C:/Users/${user}/.chronocat` -if (!fs.existsSync(redPath)) { - redPath = `C:/Users/${user}/AppData/Roaming/BetterUniverse/QQNT` -} - -const roleMap = { - 2: 'member', - 3: 'admin', - 4: 'owner' -} - -async function uploadImg(bot, msg) { - const file = await upload(bot, msg, 'image/png') - if (!file.imageInfo) throw "获取图片信息失败,请检查图片状态" - return { - elementType: 2, - picElement: { - md5HexStr: file.md5, - fileSize: file.fileSize, - picHeight: file.imageInfo.height, - picWidth: file.imageInfo.width, - fileName: basename(file.ntFilePath), - sourcePath: file.ntFilePath, - picType: file.imageInfo.type === 'gif' ? 2000 : 1000 - } - } -} - -async function upload(bot, msg, contentType) { - if (!msg) throw { noLog: true } - let buffer - if (msg instanceof Stream.Readable) { - buffer = fs.readFileSync(msg.path) - contentType = contentType.split('/')[0] + '/' + msg.path.substring(msg.path.lastIndexOf('.') + 1) - } if (Buffer.isBuffer(msg)) { - buffer = msg - } else if (msg.match(/^base64:\/\//)) { - buffer = Buffer.from(msg.replace(/^base64:\/\//, ""), 'base64') - } else if (msg.startsWith('http')) { - const img = await fetch(msg) - const type = img.headers.get('content-type'); - if (type) contentType = type - const arrayBuffer = await img.arrayBuffer() - buffer = Buffer.from(arrayBuffer) - } else if (msg.startsWith('file://')) { - buffer = fs.readFileSync(msg.replace(/file:\/{2,3}/, '')) - contentType = contentType.split('/')[0] + '/' + msg.substring(msg.lastIndexOf('.') + 1) - } else { - buffer = fs.readFileSync(msg) - contentType = contentType.split('/')[0] + '/' + msg.substring(msg.lastIndexOf('.') + 1) - } - const blob = new Blob([buffer], { type: contentType }) - const formData = new FormData() - formData.append('file', blob, 'ws-plugin.' 
+ contentType.split('/')[1]) - const file = await bot.sendApi('POST', 'upload', formData) - if (file.error) { - throw file.error - } - file.contentType = contentType - return file -} - -async function uploadAudio(file) { - let buffer - if (file.match(/^base64:\/\//)) { - buffer = Buffer.from(file.replace(/^base64:\/\//, ""), 'base64') - } else if (file.startsWith('http')) { - const http = await fetch(file) - const arrayBuffer = await http.arrayBuffer() - buffer = Buffer.from(arrayBuffer) - } else if (file.startsWith('file://')) { - buffer = fs.readFileSync(file.replace(/file:\/{2,3}/, '')) - } - const head = buffer.subarray(0, 7).toString() - let filePath - let duration = 0 - if (!head.includes('SILK')) { - const tmpPath = await saveTmp(buffer) - duration = await getDuration(tmpPath) - const res = await audioTrans(tmpPath) - filePath = res.silkFile - buffer = fs.readFileSync(filePath) - } else { - filePath = await saveTmp(buffer) - } - - const hash = createHash('md5') - hash.update(buffer.toString('binary'), 'binary') - const md5 = hash.digest('hex') - return { - elementType: 4, - pttElement: { - md5HexStr: md5, - fileSize: buffer.length, - fileName: md5 + '.amr', - filePath: filePath, - // waveAmplitudes: [36, 28, 68, 28, 84, 28], - waveAmplitudes: [ - 99, 99, 99, 99, 99, 99, 99, 99, 99, 99, 99, 99, 99, 99, 99, 99, 99 - ], - duration: duration - } - } -} - -function audioTrans(tmpPath, samplingRate = '24000') { - return new Promise((resolve, reject) => { - const pcmFile = join(TMP_DIR, randomUUID({ disableEntropyCache: true })) - exec(`ffmpeg -y -i "${tmpPath}" -ar ${samplingRate} -ac 1 -f s16le "${pcmFile}"`, async () => { - fs.unlink(tmpPath, () => { }) - fs.access(pcmFile, fs.constants.F_OK, (err) => { - if (err) { - reject('音频转码失败, 请确保你的 ffmpeg 已正确安装') - } - }) - - const silkFile = join(TMP_DIR, randomUUID({ disableEntropyCache: true })) - try { - await pcmToSilk(pcmFile, silkFile, samplingRate) - } catch (error) { - reject('red发送语音暂不支持非win系统') - } - fs.unlink(pcmFile, () => { }) - - resolve({ - silkFile - }) - }) - }) -} - -function pcmToSilk(input, output, samplingRate) { - return new Promise((resolve, reject) => { - const args = ['-i', input, '-s', samplingRate, '-o', output] - const __filename = fileURLToPath(import.meta.url); - const __dirname = dirname(__filename); - const child = spawn(join(__dirname, './cli.exe'), args) - child.on('exit', () => { - fs.access(output, fs.constants.F_OK, (err) => { - if (err) { - reject('音频转码失败') - } - }) - // fs.stat(output, (err, stats) => { - // if (err) { - // console.error(err); - // return; - // } - // fs.truncate(output, stats.size - 1, err => { - // if (err) { - // console.error(err); - // return; - // } - // }); - // }); - resolve() - }) - }) -} - -function getDuration(file) { - return new Promise((resolve, reject) => { - exec(`ffmpeg -i ${file}`, function (err, stdout, stderr) { - const outStr = stderr.toString() - const regDuration = /Duration\: ([0-9\:\.]+),/ - const rs = regDuration.exec(outStr) - if (rs === null) { - reject("获取音频时长失败, 请确保你的 ffmpeg 已正确安装") - } else if (rs[1]) { - const time = rs[1] - const parts = time.split(":") - const seconds = (+parts[0]) * 3600 + (+parts[1]) * 60 + (+parts[2]) - const round = seconds.toString().split('.')[0] - resolve(+ round) - } - }) - }) -} - -async function saveTmp(data, ext = null) { - ext = ext ? '.' 
+ ext : '' - const filename = randomUUID({ disableEntropyCache: true }) + ext - const tmpPath = resolve(TMP_DIR, filename) - fs.writeFileSync(tmpPath, data) - return tmpPath -} - -async function getNtPath(bot) { - let dataPath - try { - const buffer = fs.readFileSync('./plugins/ws-plugin/resources/common/cont/logo.png') - const blob = new Blob([buffer], { type: 'image/png' }) - const formData = new FormData() - formData.append('file', blob, '1.png') - const file = await bot.sendApi('POST', 'upload', formData) - fs.unlinkSync(file.ntFilePath) - const index = file.ntFilePath.indexOf('nt_data'); - dataPath = file.ntFilePath.slice(0, index + 'nt_data'.length); - } catch (error) { - return null - } - return dataPath -} - -async function uploadVideo(bot, file) { - let type = 'mp4' - if (file.match(/^base64:\/\//)) { - const buffer = Buffer.from(file.replace(/^base64:\/\//, ""), 'base64') - file = join(TMP_DIR, randomUUID({ disableEntropyCache: true }) + '.' + type) - fs.writeFileSync(file, buffer) - } else { - file = file.replace(/file:\/{2,3}/, '') - type = file.substring(file.lastIndexOf('.') + 1) - const Temp = join(TMP_DIR, randomUUID({ disableEntropyCache: true }) + '.' + type) - fs.copyFileSync(file, Temp) - file = Temp - } - const ntPath = await getNtPath(bot) - if (!ntPath) return - const now = new Date(); - const year = now.getFullYear(); - const month = now.getMonth() + 1; - const date = `${year}-${month.toString().padStart(2, '0')}`; - const video = await getVideoInfo(file) - - let oriPath = `${ntPath}/Video` - if (!fs.existsSync(oriPath)) fs.mkdirSync(oriPath) - oriPath = `${oriPath}/${date}` - if (!fs.existsSync(oriPath)) fs.mkdirSync(oriPath) - oriPath = `${oriPath}/Ori` - if (!fs.existsSync(oriPath)) fs.mkdirSync(oriPath) - oriPath = `${oriPath}/${video.videoMd5}.${type}` - - let thumbPath = `${ntPath}/Video/${date}/Thumb` - if (!fs.existsSync(thumbPath)) fs.mkdirSync(thumbPath) - thumbPath = `${thumbPath}/${video.videoMd5}_0.png` - - fs.copyFileSync(file, oriPath) - fs.unlinkSync(file) - const thumb = await getThumbInfo(oriPath, thumbPath) - return { - elementType: 5, - videoElement: { - filePath: oriPath, - fileName: video.videoMd5 + '.' 
+ type, - videoMd5: video.videoMd5, - thumbMd5: thumb.thumbMd5, - fileTime: video.fileTime, - thumbSize: thumb.thumbSize, - fileSize: video.fileSize, - thumbWidth: thumb.thumbWidth, - thumbHeight: thumb.thumbHeight - } - } -} - -async function getVideoInfo(file) { - const fileTime = await getVideoTime(file) - const videoMd5 = await getVideoMd5(file) - const fileSize = fs.readFileSync(file).length - return { - fileTime, - videoMd5, - fileSize - } -} - -function getVideoMd5(file) { - return new Promise((resolve, reject) => { - const stream = fs.createReadStream(file); - const hash = createHash('md5'); - stream.on('data', chunk => { - hash.update(chunk); - }); - stream.on('end', () => { - const md5 = hash.digest('hex'); - resolve(md5) - }); - }) -} - -function getVideoTime(file) { - return new Promise((resolve, reject) => { - exec(`ffprobe -v error -show_entries format=duration -of default=noprint_wrappers=1:nokey=1 "${file}"`, (error, stdout, stderr) => { - if (error) { - reject('获取视频长度失败, 请确保你的 ffmpeg 已正确安装') - } - const durationInSeconds = parseInt(stdout); - resolve(durationInSeconds) - }); - }) -} - -async function getThumbInfo(file, thumbPath) { - - const tempPath = join(TMP_DIR, randomUUID({ disableEntropyCache: true }) + '.jpg') - - const { thumbMd5, thumbSize } = await extractThumbnail(file, tempPath); - - const { thumbWidth, thumbHeight } = getImageSize(tempPath); - - fs.copyFileSync(tempPath, thumbPath) - fs.unlinkSync(tempPath) - - return { thumbMd5, thumbWidth, thumbHeight, thumbSize }; -} - -function extractThumbnail(inputFile, outputFile) { - return new Promise((resolve, reject) => { - exec(`ffmpeg -i "${inputFile}" -ss 00:00:00.000 -vframes 1 -vf "scale=iw/3:ih/3" "${outputFile}" - `, async () => { - fs.access(outputFile, fs.constants.F_OK, (err) => { - if (err) { - reject('获取视频封面失败, 请确保你的 ffmpeg 已正确安装') - } - }) - - const buffer = fs.readFileSync(outputFile); - const hash = createHash('md5'); - hash.update(buffer); - resolve({ - thumbMd5: hash.digest('hex'), - thumbSize: buffer.length - }) - }) - }) -} - -function getImageSize(file) { - const buffer = fs.readFileSync(file); - const start = buffer.indexOf(Buffer.from([0xff, 0xc0])); - const thumbHeight = buffer.readUInt16BE(start + 5); - const thumbWidth = buffer.readUInt16BE(start + 7); - return { thumbWidth, thumbHeight }; -} - -async function uploadFile(file) { - let buffer, name, path = process.cwd() + '/plugins/ws-plugin/Temp/' - if (file.startsWith('http')) { - const http = await fetch(file) - const arrayBuffer = await http.arrayBuffer() - buffer = Buffer.from(arrayBuffer) - name = file.substring(file.lastIndexOf('/') + 1) - path = path + name - fs.writeFileSync(path, buffer); - } else if (file.startsWith('file://')) { - buffer = fs.readFileSync(file.replace(/file:\/{2,3}/, '')) - name = file.substring(file.lastIndexOf('/') + 1) - path = path + name - fs.copyFileSync(file, path) - } else if (Buffer.isBuffer(file)) { - buffer = file - name = 'buffer' - path = path + name - fs.writeFileSync(path, buffer); - } else { - buffer = fs.readFileSync(file) - name = file.substring(file.lastIndexOf('/') + 1) - path = path + name - fs.copyFileSync(file, path) - } - const size = buffer.length - const hash = createHash('md5'); - hash.update(buffer); - const md5 = hash.digest('hex') - return { - elementType: 3, - fileElement: { - fileMd5: md5, - fileName: name, - filePath: path, - fileSize: size, - } - } -} - -function getToken() { - let tokenPath - try { - if (os.platform() === 'win32') { - tokenPath = `${redPath}/config/chronocat.yml` 
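- // If chronocat.yml exists, the token is read from its server entry of type 'red';
- // otherwise the legacy RED_PROTOCOL_TOKEN file is read in the else branch below.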
- if (fs.existsSync(tokenPath)) { - const data = YAML.parse(fs.readFileSync(tokenPath, 'utf-8')) - for (const i of data?.servers || []) { - if (i.type === 'red') { - return i.token - } - } - logger.error('[ws-plugin] 请检查chronocat配置是否开启red服务') - return false - } else { - tokenPath = `${redPath}/RED_PROTOCOL_TOKEN` - return fs.readFileSync(tokenPath, 'utf-8') - } - } else { - logger.error('[ws-plugin] 非Windows系统请自行获取Token') - return false - } - } catch (error) { - logger.error('[ws-plugin] QQNT自动获取Token失败,请检查是否已安装Chronocat并尝试手动获取') - logger.error(error) - return false - } -} - -export { - uploadImg, - uploadAudio, - uploadVideo, - uploadFile, - getToken, - getNtPath, - roleMap, - redPath -} \ No newline at end of file diff --git a/spaces/CjangCjengh/Sanskrit-TTS/utils.py b/spaces/CjangCjengh/Sanskrit-TTS/utils.py deleted file mode 100644 index 07839a71a8339f90fe7eeff4dc4a6bd284330049..0000000000000000000000000000000000000000 --- a/spaces/CjangCjengh/Sanskrit-TTS/utils.py +++ /dev/null @@ -1,75 +0,0 @@ -import logging -from json import loads -from torch import load, FloatTensor -from numpy import float32 -import librosa - - -class HParams(): - def __init__(self, **kwargs): - for k, v in kwargs.items(): - if type(v) == dict: - v = HParams(**v) - self[k] = v - - def keys(self): - return self.__dict__.keys() - - def items(self): - return self.__dict__.items() - - def values(self): - return self.__dict__.values() - - def __len__(self): - return len(self.__dict__) - - def __getitem__(self, key): - return getattr(self, key) - - def __setitem__(self, key, value): - return setattr(self, key, value) - - def __contains__(self, key): - return key in self.__dict__ - - def __repr__(self): - return self.__dict__.__repr__() - - -def load_checkpoint(checkpoint_path, model): - checkpoint_dict = load(checkpoint_path, map_location='cpu') - iteration = checkpoint_dict['iteration'] - saved_state_dict = checkpoint_dict['model'] - if hasattr(model, 'module'): - state_dict = model.module.state_dict() - else: - state_dict = model.state_dict() - new_state_dict= {} - for k, v in state_dict.items(): - try: - new_state_dict[k] = saved_state_dict[k] - except: - logging.info("%s is not in the checkpoint" % k) - new_state_dict[k] = v - if hasattr(model, 'module'): - model.module.load_state_dict(new_state_dict) - else: - model.load_state_dict(new_state_dict) - logging.info("Loaded checkpoint '{}' (iteration {})" .format( - checkpoint_path, iteration)) - return - - -def get_hparams_from_file(config_path): - with open(config_path, "r") as f: - data = f.read() - config = loads(data) - - hparams = HParams(**config) - return hparams - - -def load_audio_to_torch(full_path, target_sampling_rate): - audio, sampling_rate = librosa.load(full_path, sr=target_sampling_rate, mono=True) - return FloatTensor(audio.astype(float32)) diff --git a/spaces/CofAI/LengthConverter/style.css b/spaces/CofAI/LengthConverter/style.css deleted file mode 100644 index 114adf441e9032febb46bc056b2a8bb651075f0d..0000000000000000000000000000000000000000 --- a/spaces/CofAI/LengthConverter/style.css +++ /dev/null @@ -1,28 +0,0 @@ -body { - padding: 2rem; - font-family: -apple-system, BlinkMacSystemFont, "Arial", sans-serif; -} - -h1 { - font-size: 16px; - margin-top: 0; -} - -p { - color: rgb(107, 114, 128); - font-size: 15px; - margin-bottom: 10px; - margin-top: 5px; -} - -.card { - max-width: 620px; - margin: 0 auto; - padding: 16px; - border: 1px solid lightgray; - border-radius: 16px; -} - -.card p:last-child { - margin-bottom: 0; -} diff --git 
a/spaces/CofAI/chat.v1/web.html b/spaces/CofAI/chat.v1/web.html deleted file mode 100644 index 9e1fd00c7dd7aef4e03d88c14c8e8d0e67e808de..0000000000000000000000000000000000000000 --- a/spaces/CofAI/chat.v1/web.html +++ /dev/null @@ -1,60 +0,0 @@ - - - - API Demo - - -

- API Demo
- [markup of the deleted 60-line web.html was stripped during extraction; only the visible text "API Demo" survives]
- - - \ No newline at end of file diff --git a/spaces/CofAI/chat/g4f/models.py b/spaces/CofAI/chat/g4f/models.py deleted file mode 100644 index 37efcfb2a7e870f3ef3093d167efdab299083220..0000000000000000000000000000000000000000 --- a/spaces/CofAI/chat/g4f/models.py +++ /dev/null @@ -1,233 +0,0 @@ -from g4f import Provider - - -class Model: - class model: - name: str - base_provider: str - best_provider: str - - class gpt_35_turbo: - name: str = 'gpt-3.5-turbo' - base_provider: str = 'openai' - best_provider: Provider.Provider = Provider.Wewordle - - class gpt_35_turbo_0613: - name: str = 'gpt-3.5-turbo-0613' - base_provider: str = 'openai' - best_provider: Provider.Provider = Provider.Zeabur - - class gpt_35_turbo_0301: - name: str = 'gpt-3.5-turbo-0301' - base_provider: str = 'openai' - best_provider: Provider.Provider = Provider.Zeabur - - class gpt_35_turbo_16k_0613: - name: str = 'gpt-3.5-turbo-16k-0613' - base_provider: str = 'openai' - best_provider: Provider.Provider = Provider.Zeabur - - class gpt_35_turbo_16k: - name: str = 'gpt-3.5-turbo-16k' - base_provider: str = 'openai' - best_provider: Provider.Provider = Provider.ChatFree - - class gpt_4_dev: - name: str = 'gpt-4-for-dev' - base_provider: str = 'openai' - best_provider: Provider.Provider = Provider.Phind - - class gpt_4: - name: str = 'gpt-4' - base_provider: str = 'openai' - best_provider: Provider.Provider = Provider.ChatgptAi - - class gpt_4_0613: - name: str = 'gpt-4-0613' - base_provider: str = 'openai' - best_provider: Provider.Provider = Provider.Lockchat - best_providers: list = [Provider.Bing, Provider.Lockchat] - - class claude_instant_v1_100k: - name: str = 'claude-instant-v1-100k' - base_provider: str = 'anthropic' - best_provider: Provider.Provider = Provider.Vercel - - class claude_instant_v1: - name: str = 'claude-instant-v1' - base_provider: str = 'anthropic' - best_provider: Provider.Provider = Provider.Vercel - - class claude_v1_100k: - name: str = 'claude-v1-100k' - base_provider: str = 'anthropic' - best_provider: Provider.Provider = Provider.Vercel - - class claude_v1: - name: str = 'claude-v1' - base_provider: str = 'anthropic' - best_provider: Provider.Provider = Provider.Vercel - - class alpaca_7b: - name: str = 'alpaca-7b' - base_provider: str = 'replicate' - best_provider: Provider.Provider = Provider.Vercel - - class stablelm_tuned_alpha_7b: - name: str = 'stablelm-tuned-alpha-7b' - base_provider: str = 'replicate' - best_provider: Provider.Provider = Provider.Vercel - - class bloom: - name: str = 'bloom' - base_provider: str = 'huggingface' - best_provider: Provider.Provider = Provider.Vercel - - class bloomz: - name: str = 'bloomz' - base_provider: str = 'huggingface' - best_provider: Provider.Provider = Provider.Vercel - - class flan_t5_xxl: - name: str = 'flan-t5-xxl' - base_provider: str = 'huggingface' - best_provider: Provider.Provider = Provider.Vercel - - class flan_ul2: - name: str = 'flan-ul2' - base_provider: str = 'huggingface' - best_provider: Provider.Provider = Provider.Vercel - - class gpt_neox_20b: - name: str = 'gpt-neox-20b' - base_provider: str = 'huggingface' - best_provider: Provider.Provider = Provider.Vercel - - class oasst_sft_4_pythia_12b_epoch_35: - name: str = 'oasst-sft-4-pythia-12b-epoch-3.5' - base_provider: str = 'huggingface' - best_provider: Provider.Provider = Provider.Vercel - - class santacoder: - name: str = 'santacoder' - base_provider: str = 'huggingface' - best_provider: Provider.Provider = Provider.Vercel - - class command_medium_nightly: - name: str = 
'command-medium-nightly' - base_provider: str = 'cohere' - best_provider: Provider.Provider = Provider.Vercel - - class command_xlarge_nightly: - name: str = 'command-xlarge-nightly' - base_provider: str = 'cohere' - best_provider: Provider.Provider = Provider.Vercel - - class code_cushman_001: - name: str = 'code-cushman-001' - base_provider: str = 'openai' - best_provider: Provider.Provider = Provider.Vercel - - class code_davinci_002: - name: str = 'code-davinci-002' - base_provider: str = 'openai' - best_provider: Provider.Provider = Provider.Vercel - - class text_ada_001: - name: str = 'text-ada-001' - base_provider: str = 'openai' - best_provider: Provider.Provider = Provider.Vercel - - class text_babbage_001: - name: str = 'text-babbage-001' - base_provider: str = 'openai' - best_provider: Provider.Provider = Provider.Vercel - - class text_curie_001: - name: str = 'text-curie-001' - base_provider: str = 'openai' - best_provider: Provider.Provider = Provider.Vercel - - class text_davinci_002: - name: str = 'text-davinci-002' - base_provider: str = 'openai' - best_provider: Provider.Provider = Provider.Vercel - - class text_davinci_003: - name: str = 'text-davinci-003' - base_provider: str = 'openai' - best_provider: Provider.Provider = Provider.Vercel - - class palm: - name: str = 'palm2' - base_provider: str = 'google' - best_provider: Provider.Provider = Provider.Bard - - class falcon_40b: - name: str = 'falcon-40b' - base_provider: str = 'huggingface' - best_provider: Provider.Provider = Provider.H2o - - class falcon_7b: - name: str = 'falcon-7b' - base_provider: str = 'huggingface' - best_provider: Provider.Provider = Provider.H2o - - class llama_13b: - name: str = 'llama-13b' - base_provider: str = 'huggingface' - best_provider: Provider.Provider = Provider.H2o - - -class ModelUtils: - convert: dict = { - 'gpt-3.5-turbo': Model.gpt_35_turbo, - 'gpt-3.5-turbo-0613': Model.gpt_35_turbo_0613, - 'gpt-3.5-turbo-0301': Model.gpt_35_turbo_0301, - 'gpt-4': Model.gpt_4, - 'gpt-4-0613': Model.gpt_4_0613, - 'gpt-4-for-dev': Model.gpt_4_dev, - 'gpt-3.5-turbo-16k': Model.gpt_35_turbo_16k, - 'gpt-3.5-turbo-16k-0613': Model.gpt_35_turbo_16k_0613, - - 'claude-instant-v1-100k': Model.claude_instant_v1_100k, - 'claude-v1-100k': Model.claude_v1_100k, - 'claude-instant-v1': Model.claude_instant_v1, - 'claude-v1': Model.claude_v1, - - 'alpaca-7b': Model.alpaca_7b, - 'stablelm-tuned-alpha-7b': Model.stablelm_tuned_alpha_7b, - - 'bloom': Model.bloom, - 'bloomz': Model.bloomz, - - 'flan-t5-xxl': Model.flan_t5_xxl, - 'flan-ul2': Model.flan_ul2, - - 'gpt-neox-20b': Model.gpt_neox_20b, - 'oasst-sft-4-pythia-12b-epoch-3.5': Model.oasst_sft_4_pythia_12b_epoch_35, - 'santacoder': Model.santacoder, - - 'command-medium-nightly': Model.command_medium_nightly, - 'command-xlarge-nightly': Model.command_xlarge_nightly, - - 'code-cushman-001': Model.code_cushman_001, - 'code-davinci-002': Model.code_davinci_002, - - 'text-ada-001': Model.text_ada_001, - 'text-babbage-001': Model.text_babbage_001, - 'text-curie-001': Model.text_curie_001, - 'text-davinci-002': Model.text_davinci_002, - 'text-davinci-003': Model.text_davinci_003, - - 'palm2': Model.palm, - 'palm': Model.palm, - 'google': Model.palm, - 'google-bard': Model.palm, - 'google-palm': Model.palm, - 'bard': Model.palm, - - 'falcon-40b': Model.falcon_40b, - 'falcon-7b': Model.falcon_7b, - 'llama-13b': Model.llama_13b, - } diff --git a/spaces/CyberHarem/find_my_waifu/civitai.py b/spaces/CyberHarem/find_my_waifu/civitai.py deleted file mode 100644 index 
7f235e092ca6430818213fc5de8ffd141c26cc16..0000000000000000000000000000000000000000 --- a/spaces/CyberHarem/find_my_waifu/civitai.py +++ /dev/null @@ -1,26 +0,0 @@ -from gchar.games.dispatch.access import GAME_CHARS - - -def try_find_title(char_name, game_name): - try: - game_cls = GAME_CHARS[game_name.lower()] - ch = game_cls.get(char_name) - if ch: - names = [] - if ch.enname: - names.append(str(ch.enname)) - if ch.jpname: - names.append(str(ch.jpname)) - if ch.cnname: - names.append(str(ch.cnname)) - if hasattr(ch, 'krname') and ch.krname: - names.append(str(ch.krname)) - - return f"{'/'.join(names)} ({game_cls.__official_name__})" - - else: - cname = ' '.join(list(map(str.capitalize, char_name.split(' ')))) - return f'{cname} ({game_cls.__official_name__})' - - except KeyError: - return None diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/PIL/PSDraw.py b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/PIL/PSDraw.py deleted file mode 100644 index 13b3048f67e18ac58170c3a1bd25cb18d66b30fe..0000000000000000000000000000000000000000 --- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/PIL/PSDraw.py +++ /dev/null @@ -1,229 +0,0 @@ -# -# The Python Imaging Library -# $Id$ -# -# Simple PostScript graphics interface -# -# History: -# 1996-04-20 fl Created -# 1999-01-10 fl Added gsave/grestore to image method -# 2005-05-04 fl Fixed floating point issue in image (from Eric Etheridge) -# -# Copyright (c) 1997-2005 by Secret Labs AB. All rights reserved. -# Copyright (c) 1996 by Fredrik Lundh. -# -# See the README file for information on usage and redistribution. -# - -import sys - -from . import EpsImagePlugin - -## -# Simple PostScript graphics interface. - - -class PSDraw: - """ - Sets up printing to the given file. If ``fp`` is omitted, - ``sys.stdout.buffer`` or ``sys.stdout`` is assumed. - """ - - def __init__(self, fp=None): - if not fp: - try: - fp = sys.stdout.buffer - except AttributeError: - fp = sys.stdout - self.fp = fp - - def begin_document(self, id=None): - """Set up printing of a document. (Write PostScript DSC header.)""" - # FIXME: incomplete - self.fp.write( - b"%!PS-Adobe-3.0\n" - b"save\n" - b"/showpage { } def\n" - b"%%EndComments\n" - b"%%BeginDocument\n" - ) - # self.fp.write(ERROR_PS) # debugging! - self.fp.write(EDROFF_PS) - self.fp.write(VDI_PS) - self.fp.write(b"%%EndProlog\n") - self.isofont = {} - - def end_document(self): - """Ends printing. (Write PostScript DSC footer.)""" - self.fp.write(b"%%EndDocument\nrestore showpage\n%%End\n") - if hasattr(self.fp, "flush"): - self.fp.flush() - - def setfont(self, font, size): - """ - Selects which font to use. - - :param font: A PostScript font name - :param size: Size in points. - """ - font = bytes(font, "UTF-8") - if font not in self.isofont: - # reencode font - self.fp.write(b"/PSDraw-%s ISOLatin1Encoding /%s E\n" % (font, font)) - self.isofont[font] = 1 - # rough - self.fp.write(b"/F0 %d /PSDraw-%s F\n" % (size, font)) - - def line(self, xy0, xy1): - """ - Draws a line between the two points. Coordinates are given in - PostScript point coordinates (72 points per inch, (0, 0) is the lower - left corner of the page). - """ - self.fp.write(b"%d %d %d %d Vl\n" % (*xy0, *xy1)) - - def rectangle(self, box): - """ - Draws a rectangle. - - :param box: A tuple of four integers, specifying left, bottom, width and - height. - """ - self.fp.write(b"%d %d M 0 %d %d Vr\n" % box) - - def text(self, xy, text): - """ - Draws text at the given position. 
You must use - :py:meth:`~PIL.PSDraw.PSDraw.setfont` before calling this method. - """ - text = bytes(text, "UTF-8") - text = b"\\(".join(text.split(b"(")) - text = b"\\)".join(text.split(b")")) - xy += (text,) - self.fp.write(b"%d %d M (%s) S\n" % xy) - - def image(self, box, im, dpi=None): - """Draw a PIL image, centered in the given box.""" - # default resolution depends on mode - if not dpi: - if im.mode == "1": - dpi = 200 # fax - else: - dpi = 100 # greyscale - # image size (on paper) - x = im.size[0] * 72 / dpi - y = im.size[1] * 72 / dpi - # max allowed size - xmax = float(box[2] - box[0]) - ymax = float(box[3] - box[1]) - if x > xmax: - y = y * xmax / x - x = xmax - if y > ymax: - x = x * ymax / y - y = ymax - dx = (xmax - x) / 2 + box[0] - dy = (ymax - y) / 2 + box[1] - self.fp.write(b"gsave\n%f %f translate\n" % (dx, dy)) - if (x, y) != im.size: - # EpsImagePlugin._save prints the image at (0,0,xsize,ysize) - sx = x / im.size[0] - sy = y / im.size[1] - self.fp.write(b"%f %f scale\n" % (sx, sy)) - EpsImagePlugin._save(im, self.fp, None, 0) - self.fp.write(b"\ngrestore\n") - - -# -------------------------------------------------------------------- -# PostScript driver - -# -# EDROFF.PS -- PostScript driver for Edroff 2 -# -# History: -# 94-01-25 fl: created (edroff 2.04) -# -# Copyright (c) Fredrik Lundh 1994. -# - - -EDROFF_PS = b"""\ -/S { show } bind def -/P { moveto show } bind def -/M { moveto } bind def -/X { 0 rmoveto } bind def -/Y { 0 exch rmoveto } bind def -/E { findfont - dup maxlength dict begin - { - 1 index /FID ne { def } { pop pop } ifelse - } forall - /Encoding exch def - dup /FontName exch def - currentdict end definefont pop -} bind def -/F { findfont exch scalefont dup setfont - [ exch /setfont cvx ] cvx bind def -} bind def -""" - -# -# VDI.PS -- PostScript driver for VDI meta commands -# -# History: -# 94-01-25 fl: created (edroff 2.04) -# -# Copyright (c) Fredrik Lundh 1994. 
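-# The Vm/Va/Vl/Vc/Vr/Ve/Vf procedures defined below are the primitives emitted by the drawing
-# helpers above (for example, line() writes Vl and rectangle() writes M and Vr).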
-# - -VDI_PS = b"""\ -/Vm { moveto } bind def -/Va { newpath arcn stroke } bind def -/Vl { moveto lineto stroke } bind def -/Vc { newpath 0 360 arc closepath } bind def -/Vr { exch dup 0 rlineto - exch dup 0 exch rlineto - exch neg 0 rlineto - 0 exch neg rlineto - setgray fill } bind def -/Tm matrix def -/Ve { Tm currentmatrix pop - translate scale newpath 0 0 .5 0 360 arc closepath - Tm setmatrix -} bind def -/Vf { currentgray exch setgray fill setgray } bind def -""" - -# -# ERROR.PS -- Error handler -# -# History: -# 89-11-21 fl: created (pslist 1.10) -# - -ERROR_PS = b"""\ -/landscape false def -/errorBUF 200 string def -/errorNL { currentpoint 10 sub exch pop 72 exch moveto } def -errordict begin /handleerror { - initmatrix /Courier findfont 10 scalefont setfont - newpath 72 720 moveto $error begin /newerror false def - (PostScript Error) show errorNL errorNL - (Error: ) show - /errorname load errorBUF cvs show errorNL errorNL - (Command: ) show - /command load dup type /stringtype ne { errorBUF cvs } if show - errorNL errorNL - (VMstatus: ) show - vmstatus errorBUF cvs show ( bytes available, ) show - errorBUF cvs show ( bytes used at level ) show - errorBUF cvs show errorNL errorNL - (Operand stargck: ) show errorNL /ostargck load { - dup type /stringtype ne { errorBUF cvs } if 72 0 rmoveto show errorNL - } forall errorNL - (Execution stargck: ) show errorNL /estargck load { - dup type /stringtype ne { errorBUF cvs } if 72 0 rmoveto show errorNL - } forall - end showpage -} def end -""" diff --git a/spaces/DaleChen/AutoGPT/autogpt/workspace.py b/spaces/DaleChen/AutoGPT/autogpt/workspace.py deleted file mode 100644 index 6fb0e3113eb2c1338edf7f86c6e162fc27c61e50..0000000000000000000000000000000000000000 --- a/spaces/DaleChen/AutoGPT/autogpt/workspace.py +++ /dev/null @@ -1,47 +0,0 @@ -from __future__ import annotations - -import os -from pathlib import Path - -from autogpt.config import Config - -CFG = Config() - -# Set a dedicated folder for file I/O -WORKSPACE_PATH = Path(os.getcwd()) / "auto_gpt_workspace" - -# Create the directory if it doesn't exist -if not os.path.exists(WORKSPACE_PATH): - os.makedirs(WORKSPACE_PATH) - - -def path_in_workspace(relative_path: str | Path) -> Path: - """Get full path for item in workspace - - Parameters: - relative_path (str | Path): Path to translate into the workspace - - Returns: - Path: Absolute path for the given path in the workspace - """ - return safe_path_join(WORKSPACE_PATH, relative_path) - - -def safe_path_join(base: Path, *paths: str | Path) -> Path: - """Join one or more path components, asserting the resulting path is within the workspace. - - Args: - base (Path): The base path - *paths (str): The paths to join to the base path - - Returns: - Path: The joined path - """ - joined_path = base.joinpath(*paths).resolve() - - if CFG.restrict_to_workspace and not joined_path.is_relative_to(base): - raise ValueError( - f"Attempted to access path '{joined_path}' outside of workspace '{base}'." 
- ) - - return joined_path diff --git a/spaces/Danil/AnyNameHack/indexer.py b/spaces/Danil/AnyNameHack/indexer.py deleted file mode 100644 index ed643b491109c741df1e914e801c88b9fbb02b32..0000000000000000000000000000000000000000 --- a/spaces/Danil/AnyNameHack/indexer.py +++ /dev/null @@ -1,161 +0,0 @@ -import pickle -import faiss -import numpy as np -import pandas as pd -from utils import * -from sentence_transformers import SentenceTransformer - -from tqdm import tqdm -from typing import List - - -class FAISS: - def __init__(self, dimensions: int) -> None: - self.dimensions = dimensions - self.index = faiss.IndexFlatL2(dimensions) - self.vectors = {} - self.counter = 0 - self.model_name = 'paraphrase-multilingual-MiniLM-L12-v2' - self.sentence_encoder = SentenceTransformer(self.model_name) - - def init_vectors(self, path: str) -> None: - """ - Заполняет набор векторов предобученными значениями - - Args: - path: путь к файлу в формате pickle - """ - with open(path, 'rb') as pkl_file: - self.vectors = pickle.load(pkl_file) - - self.counter = len(self.vectors) - - def init_index(self, path) -> None: - """ - Заполняет индекс FAISS предобученными значениями - - Args: - path: путь к файлу в формате FAISS - """ - self.index = faiss.read_index(path) - - def save_vectors(self, path: str) -> None: - """ - Сохраняет набор векторов - - Args: - path: желаемый путь к файлу - """ - with open(path, "wb") as fp: - pickle.dump(self.index.vectors, fp) - - def save_index(self, path: str) -> None: - """ - Сохраняет индекс FAISS - - Args: - path: желаемый путь к файлу - """ - faiss.write_index(self.index, path) - - def add(self, text: str, idx: int, pop: float, emb=None) -> None: - """ - Добавляет в поисковый индекс новый вектор - - Args: - text: текст запроса - idx: индекс нового вектора - pop: популярность запроса - emb (optional): эмбеддинг текста запроса (если не указан, то будет подготовлен с помощью self.sentence_encoder) - """ - if emb is None: - text_vec = self.sentence_encoder.encode([text]) - else: - text_vec = emb - - self.index.add(text_vec) - self.vectors[self.counter] = (idx, text, pop, text_vec) - - self.counter += 1 - - def search(self, v: List, k: int = 10) -> List[List]: - """ - Ищет в поисковом индексе ближайших соседей к вектору v - - Args: - v: вектор для поиска ближайших соседей - k: число векторов в выдаче - Returns: - список векторов, ближайших к вектору v, в формате [idx, text, popularity, similarity] - """ - result = [] - distance, item_index = self.index.search(v, k) - for dist, i in zip(distance[0], item_index[0]): - if i == -1: - break - else: - result.append((self.vectors[i][0], self.vectors[i][1], self.vectors[i][2], dist)) - - return result - - def suggest_tags(self, query: str, top_n: int = 10, k: int = 30) -> List[str]: - """ - Получает список тегов для пользователя по текстовому запросу - - Args: - query: запрос пользователя - top_n (optional): число тегов в выдаче - k (optional): число векторов из индекса, среди которых будут искаться теги для выдачи - Returns: - список тегов для выдачи пользователю - """ - emb = self.sentence_encoder.encode([query.lower()]) - r = self.search(emb, k) - - result = [] - for i in r: - if check(query, i[1]): - result.append(i) - # надо добавить вес относительно длины - result = sorted(result, key=lambda x: x[0] * 0.3 - x[-1], reverse=True) - total_result = [] - for i in range(len(result)): - flag = True - for j in result[i + 1:]: - flag &= easy_check(result[i][1], j[1]) - if flag: - total_result.append(result[i][1]) - - return 
total_result[:top_n] - - def fill(self, queries: List[str], popularities: pd.DataFrame) -> None: - """ - Заполняет поисковый индекс запросами queries, популярности которых берутся из таблицы popularities - - Args: - queries: список запросов - popularities: таблица, в которой содержатся колонки query и query_popularity - """ - idx = -1 - for query in tqdm(queries): - idx += 1 - if type(query) == str: - emb = self.index.sentence_encoder.encode([query.lower()]) - bool_add = True - search_sim = self.index.search(emb, 1) - - try: - popularity = popularities[popularities["query"] == query]["query_popularity"].item() - except ValueError: - # Если для текущего запроса неизвестна популярность, возьмем значение 5 - popularity = 5 - - if len(search_sim) > 0: - search_sim = search_sim[0] - if search_sim[-1] < 0.15: - # Не добавляем вектор, если он находится достаточно близко к уже присутствующему в индексе - bool_add = False - if bool_add: - self.index.add(query, popularity, idx, emb) - else: - self.index.add(query, popularity, idx, emb) \ No newline at end of file diff --git a/spaces/Datasculptor/StyleGAN-NADA/e4e/utils/common.py b/spaces/Datasculptor/StyleGAN-NADA/e4e/utils/common.py deleted file mode 100644 index b19e18ddcb78b06678fa18e4a76da44fc511b789..0000000000000000000000000000000000000000 --- a/spaces/Datasculptor/StyleGAN-NADA/e4e/utils/common.py +++ /dev/null @@ -1,55 +0,0 @@ -from PIL import Image -import matplotlib.pyplot as plt - - -# Log images -def log_input_image(x, opts): - return tensor2im(x) - - -def tensor2im(var): - # var shape: (3, H, W) - var = var.cpu().detach().transpose(0, 2).transpose(0, 1).numpy() - var = ((var + 1) / 2) - var[var < 0] = 0 - var[var > 1] = 1 - var = var * 255 - return Image.fromarray(var.astype('uint8')) - - -def vis_faces(log_hooks): - display_count = len(log_hooks) - fig = plt.figure(figsize=(8, 4 * display_count)) - gs = fig.add_gridspec(display_count, 3) - for i in range(display_count): - hooks_dict = log_hooks[i] - fig.add_subplot(gs[i, 0]) - if 'diff_input' in hooks_dict: - vis_faces_with_id(hooks_dict, fig, gs, i) - else: - vis_faces_no_id(hooks_dict, fig, gs, i) - plt.tight_layout() - return fig - - -def vis_faces_with_id(hooks_dict, fig, gs, i): - plt.imshow(hooks_dict['input_face']) - plt.title('Input\nOut Sim={:.2f}'.format(float(hooks_dict['diff_input']))) - fig.add_subplot(gs[i, 1]) - plt.imshow(hooks_dict['target_face']) - plt.title('Target\nIn={:.2f}, Out={:.2f}'.format(float(hooks_dict['diff_views']), - float(hooks_dict['diff_target']))) - fig.add_subplot(gs[i, 2]) - plt.imshow(hooks_dict['output_face']) - plt.title('Output\n Target Sim={:.2f}'.format(float(hooks_dict['diff_target']))) - - -def vis_faces_no_id(hooks_dict, fig, gs, i): - plt.imshow(hooks_dict['input_face'], cmap="gray") - plt.title('Input') - fig.add_subplot(gs[i, 1]) - plt.imshow(hooks_dict['target_face']) - plt.title('Target') - fig.add_subplot(gs[i, 2]) - plt.imshow(hooks_dict['output_face']) - plt.title('Output') diff --git a/spaces/Deci/DeciDiffusion-v1-0/header.html b/spaces/Deci/DeciDiffusion-v1-0/header.html deleted file mode 100644 index fafbcb3146686659a84a80ead9d1c4b7998dd94b..0000000000000000000000000000000000000000 --- a/spaces/Deci/DeciDiffusion-v1-0/header.html +++ /dev/null @@ -1,17 +0,0 @@ -
- Deci Diffusion 1.0
- Demo for the DeciDiffusion 1.0 model
- [markup of the deleted 17-line header.html was stripped during extraction; only these two visible strings survive]
\ No newline at end of file diff --git a/spaces/Detomo/ai-comic-generation/src/components/ui/input.tsx b/spaces/Detomo/ai-comic-generation/src/components/ui/input.tsx deleted file mode 100644 index 0757ddebdca3800bbd4a46fe1c2c17dff86c5e2f..0000000000000000000000000000000000000000 --- a/spaces/Detomo/ai-comic-generation/src/components/ui/input.tsx +++ /dev/null @@ -1,25 +0,0 @@ -import * as React from "react" - -import { cn } from "@/lib/utils" - -export interface InputProps - extends React.InputHTMLAttributes {} - -const Input = React.forwardRef( - ({ className, type, ...props }, ref) => { - return ( - - ) - } -) -Input.displayName = "Input" - -export { Input } diff --git a/spaces/ECCV2022/PSG/OpenPSG/configs/gpsnet/panoptic_fpn_r101_fpn_1x_predcls_psg.py b/spaces/ECCV2022/PSG/OpenPSG/configs/gpsnet/panoptic_fpn_r101_fpn_1x_predcls_psg.py deleted file mode 100644 index 1be5fdcf74eeb3e941ef2829546cfb14338face8..0000000000000000000000000000000000000000 --- a/spaces/ECCV2022/PSG/OpenPSG/configs/gpsnet/panoptic_fpn_r101_fpn_1x_predcls_psg.py +++ /dev/null @@ -1,26 +0,0 @@ -_base_ = './panoptic_fpn_r50_fpn_1x_predcls_psg.py' - -model = dict(backbone=dict( - depth=101, - init_cfg=dict(type='Pretrained', checkpoint='torchvision://resnet101'))) - -# Log config -project_name = 'openpsg' -expt_name = 'gpsnet_panoptic_fpn_r101_fpn_1x_predcls_psg' -work_dir = f'./work_dirs/{expt_name}' - -log_config = dict( - interval=50, - hooks=[ - dict(type='TextLoggerHook'), - dict( - type='WandbLoggerHook', - init_kwargs=dict( - project=project_name, - name=expt_name, - ), - ), - ], -) - -load_from = 'work_dirs/checkpoints/panoptic_fpn_r101_fpn_1x_coco_20210820_193950-ab9157a2.pth' diff --git a/spaces/EPFL-VILAB/MultiMAE/mask2former/utils/misc.py b/spaces/EPFL-VILAB/MultiMAE/mask2former/utils/misc.py deleted file mode 100644 index 874d9805b482f52bbffc1be620e36e0cffc07c46..0000000000000000000000000000000000000000 --- a/spaces/EPFL-VILAB/MultiMAE/mask2former/utils/misc.py +++ /dev/null @@ -1,111 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# Modified by Bowen Cheng from https://github.com/facebookresearch/detr/blob/master/util/misc.py -""" -Misc functions, including distributed helpers. - -Mostly copy-paste from torchvision references. 
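-NestedTensor bundles a batch tensor of images padded to a common size with a boolean mask that
-marks the padded regions.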
-""" -from typing import List, Optional - -import torch -import torch.distributed as dist -import torchvision -from torch import Tensor - - -def _max_by_axis(the_list): - # type: (List[List[int]]) -> List[int] - maxes = the_list[0] - for sublist in the_list[1:]: - for index, item in enumerate(sublist): - maxes[index] = max(maxes[index], item) - return maxes - - -class NestedTensor(object): - def __init__(self, tensors, mask: Optional[Tensor]): - self.tensors = tensors - self.mask = mask - - def to(self, device): - # type: (Device) -> NestedTensor # noqa - cast_tensor = self.tensors.to(device) - mask = self.mask - if mask is not None: - assert mask is not None - cast_mask = mask.to(device) - else: - cast_mask = None - return NestedTensor(cast_tensor, cast_mask) - - def decompose(self): - return self.tensors, self.mask - - def __repr__(self): - return str(self.tensors) - - -def nested_tensor_from_tensor_list(tensor_list: List[Tensor]): - # TODO make this more general - if tensor_list[0].ndim == 3: - if torchvision._is_tracing(): - # nested_tensor_from_tensor_list() does not export well to ONNX - # call _onnx_nested_tensor_from_tensor_list() instead - return _onnx_nested_tensor_from_tensor_list(tensor_list) - - # TODO make it support different-sized images - max_size = _max_by_axis([list(img.shape) for img in tensor_list]) - # min_size = tuple(min(s) for s in zip(*[img.shape for img in tensor_list])) - batch_shape = [len(tensor_list)] + max_size - b, c, h, w = batch_shape - dtype = tensor_list[0].dtype - device = tensor_list[0].device - tensor = torch.zeros(batch_shape, dtype=dtype, device=device) - mask = torch.ones((b, h, w), dtype=torch.bool, device=device) - for img, pad_img, m in zip(tensor_list, tensor, mask): - pad_img[: img.shape[0], : img.shape[1], : img.shape[2]].copy_(img) - m[: img.shape[1], : img.shape[2]] = False - else: - raise ValueError("not supported") - return NestedTensor(tensor, mask) - - -# _onnx_nested_tensor_from_tensor_list() is an implementation of -# nested_tensor_from_tensor_list() that is supported by ONNX tracing. 
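-# (A usage sketch of the eager-mode builder above, before the ONNX-traceable
-# variant that follows. Illustrative only; the helper name and the shapes are
-# hypothetical and nothing in the library calls it.)
-def _example_nested_tensor_usage():
-    imgs = [torch.randn(3, 480, 640), torch.randn(3, 512, 512)]
-    nt = nested_tensor_from_tensor_list(imgs)
-    padded, mask = nt.decompose()
-    # padded: [2, 3, 512, 640]; mask is True wherever padding was added
-    return padded.shape, mask.shape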
-@torch.jit.unused -def _onnx_nested_tensor_from_tensor_list(tensor_list: List[Tensor]) -> NestedTensor: - max_size = [] - for i in range(tensor_list[0].dim()): - max_size_i = torch.max( - torch.stack([img.shape[i] for img in tensor_list]).to(torch.float32) - ).to(torch.int64) - max_size.append(max_size_i) - max_size = tuple(max_size) - - # work around for - # pad_img[: img.shape[0], : img.shape[1], : img.shape[2]].copy_(img) - # m[: img.shape[1], :img.shape[2]] = False - # which is not yet supported in onnx - padded_imgs = [] - padded_masks = [] - for img in tensor_list: - padding = [(s1 - s2) for s1, s2 in zip(max_size, tuple(img.shape))] - padded_img = torch.nn.functional.pad(img, (0, padding[2], 0, padding[1], 0, padding[0])) - padded_imgs.append(padded_img) - - m = torch.zeros_like(img[0], dtype=torch.int, device=img.device) - padded_mask = torch.nn.functional.pad(m, (0, padding[2], 0, padding[1]), "constant", 1) - padded_masks.append(padded_mask.to(torch.bool)) - - tensor = torch.stack(padded_imgs) - mask = torch.stack(padded_masks) - - return NestedTensor(tensor, mask=mask) - - -def is_dist_avail_and_initialized(): - if not dist.is_available(): - return False - if not dist.is_initialized(): - return False - return True diff --git a/spaces/EXPOSUREEE/Ai-Image-Enhancer/scripts/pytorch2onnx.py b/spaces/EXPOSUREEE/Ai-Image-Enhancer/scripts/pytorch2onnx.py deleted file mode 100644 index 09d99b2e0171265e70e7507ed8e882b616b449a1..0000000000000000000000000000000000000000 --- a/spaces/EXPOSUREEE/Ai-Image-Enhancer/scripts/pytorch2onnx.py +++ /dev/null @@ -1,36 +0,0 @@ -import argparse -import torch -import torch.onnx -from basicsr.archs.rrdbnet_arch import RRDBNet - - -def main(args): - # An instance of the model - model = RRDBNet(num_in_ch=3, num_out_ch=3, num_feat=64, num_block=23, num_grow_ch=32, scale=4) - if args.params: - keyname = 'params' - else: - keyname = 'params_ema' - model.load_state_dict(torch.load(args.input)[keyname]) - # set the train mode to false since we will only run the forward pass. 
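-    # (In eval mode dropout is disabled and batch-norm uses its stored running
-    # statistics, so the graph traced for ONNX export is deterministic.)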
- model.train(False) - model.cpu().eval() - - # An example input - x = torch.rand(1, 3, 64, 64) - # Export the model - with torch.no_grad(): - torch_out = torch.onnx._export(model, x, args.output, opset_version=11, export_params=True) - print(torch_out.shape) - - -if __name__ == '__main__': - """Convert pytorch model to onnx models""" - parser = argparse.ArgumentParser() - parser.add_argument( - '--input', type=str, default='experiments/pretrained_models/RealESRGAN_x4plus.pth', help='Input model path') - parser.add_argument('--output', type=str, default='realesrgan-x4.onnx', help='Output onnx path') - parser.add_argument('--params', action='store_false', help='Use params instead of params_ema') - args = parser.parse_args() - - main(args) diff --git a/spaces/EinsteinCoder/sf-voicebot/README.md b/spaces/EinsteinCoder/sf-voicebot/README.md deleted file mode 100644 index 92d2c1835bad28014b06dd84025016837ace0b91..0000000000000000000000000000000000000000 --- a/spaces/EinsteinCoder/sf-voicebot/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: SF VoiceBot -emoji: 💻 -colorFrom: pink -colorTo: green -sdk: docker -pinned: false -license: other -app_port: 5050 -duplicated_from: EinsteinCoder/fastapi-demo ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference \ No newline at end of file diff --git a/spaces/ElainaFanBoy/IRONY-Real-ESRGAN/realesrgan/data/__init__.py b/spaces/ElainaFanBoy/IRONY-Real-ESRGAN/realesrgan/data/__init__.py deleted file mode 100644 index a3f8fdd1aa47c12de9687c578094303eb7369246..0000000000000000000000000000000000000000 --- a/spaces/ElainaFanBoy/IRONY-Real-ESRGAN/realesrgan/data/__init__.py +++ /dev/null @@ -1,10 +0,0 @@ -import importlib -from basicsr.utils import scandir -from os import path as osp - -# automatically scan and import dataset modules for registry -# scan all the files that end with '_dataset.py' under the data folder -data_folder = osp.dirname(osp.abspath(__file__)) -dataset_filenames = [osp.splitext(osp.basename(v))[0] for v in scandir(data_folder) if v.endswith('_dataset.py')] -# import all the dataset modules -_dataset_modules = [importlib.import_module(f'realesrgan.data.{file_name}') for file_name in dataset_filenames] diff --git a/spaces/EronSamez/RVC_HFmeu/lib/uvr5_pack/lib_v5/layers_123812KB .py b/spaces/EronSamez/RVC_HFmeu/lib/uvr5_pack/lib_v5/layers_123812KB .py deleted file mode 100644 index b82f06bb4993cd63f076e68d7e24185269b1bc42..0000000000000000000000000000000000000000 --- a/spaces/EronSamez/RVC_HFmeu/lib/uvr5_pack/lib_v5/layers_123812KB .py +++ /dev/null @@ -1,118 +0,0 @@ -import torch -from torch import nn -import torch.nn.functional as F - -from . 
import spec_utils - - -class Conv2DBNActiv(nn.Module): - def __init__(self, nin, nout, ksize=3, stride=1, pad=1, dilation=1, activ=nn.ReLU): - super(Conv2DBNActiv, self).__init__() - self.conv = nn.Sequential( - nn.Conv2d( - nin, - nout, - kernel_size=ksize, - stride=stride, - padding=pad, - dilation=dilation, - bias=False, - ), - nn.BatchNorm2d(nout), - activ(), - ) - - def __call__(self, x): - return self.conv(x) - - -class SeperableConv2DBNActiv(nn.Module): - def __init__(self, nin, nout, ksize=3, stride=1, pad=1, dilation=1, activ=nn.ReLU): - super(SeperableConv2DBNActiv, self).__init__() - self.conv = nn.Sequential( - nn.Conv2d( - nin, - nin, - kernel_size=ksize, - stride=stride, - padding=pad, - dilation=dilation, - groups=nin, - bias=False, - ), - nn.Conv2d(nin, nout, kernel_size=1, bias=False), - nn.BatchNorm2d(nout), - activ(), - ) - - def __call__(self, x): - return self.conv(x) - - -class Encoder(nn.Module): - def __init__(self, nin, nout, ksize=3, stride=1, pad=1, activ=nn.LeakyReLU): - super(Encoder, self).__init__() - self.conv1 = Conv2DBNActiv(nin, nout, ksize, 1, pad, activ=activ) - self.conv2 = Conv2DBNActiv(nout, nout, ksize, stride, pad, activ=activ) - - def __call__(self, x): - skip = self.conv1(x) - h = self.conv2(skip) - - return h, skip - - -class Decoder(nn.Module): - def __init__( - self, nin, nout, ksize=3, stride=1, pad=1, activ=nn.ReLU, dropout=False - ): - super(Decoder, self).__init__() - self.conv = Conv2DBNActiv(nin, nout, ksize, 1, pad, activ=activ) - self.dropout = nn.Dropout2d(0.1) if dropout else None - - def __call__(self, x, skip=None): - x = F.interpolate(x, scale_factor=2, mode="bilinear", align_corners=True) - if skip is not None: - skip = spec_utils.crop_center(skip, x) - x = torch.cat([x, skip], dim=1) - h = self.conv(x) - - if self.dropout is not None: - h = self.dropout(h) - - return h - - -class ASPPModule(nn.Module): - def __init__(self, nin, nout, dilations=(4, 8, 16), activ=nn.ReLU): - super(ASPPModule, self).__init__() - self.conv1 = nn.Sequential( - nn.AdaptiveAvgPool2d((1, None)), - Conv2DBNActiv(nin, nin, 1, 1, 0, activ=activ), - ) - self.conv2 = Conv2DBNActiv(nin, nin, 1, 1, 0, activ=activ) - self.conv3 = SeperableConv2DBNActiv( - nin, nin, 3, 1, dilations[0], dilations[0], activ=activ - ) - self.conv4 = SeperableConv2DBNActiv( - nin, nin, 3, 1, dilations[1], dilations[1], activ=activ - ) - self.conv5 = SeperableConv2DBNActiv( - nin, nin, 3, 1, dilations[2], dilations[2], activ=activ - ) - self.bottleneck = nn.Sequential( - Conv2DBNActiv(nin * 5, nout, 1, 1, 0, activ=activ), nn.Dropout2d(0.1) - ) - - def forward(self, x): - _, _, h, w = x.size() - feat1 = F.interpolate( - self.conv1(x), size=(h, w), mode="bilinear", align_corners=True - ) - feat2 = self.conv2(x) - feat3 = self.conv3(x) - feat4 = self.conv4(x) - feat5 = self.conv5(x) - out = torch.cat((feat1, feat2, feat3, feat4, feat5), dim=1) - bottle = self.bottleneck(out) - return bottle diff --git a/spaces/FridaZuley/RVC_HFKawaii/julius/fftconv.py b/spaces/FridaZuley/RVC_HFKawaii/julius/fftconv.py deleted file mode 100644 index 1920e5369bb49b76eeea1832b7be2a0ddbc8db6b..0000000000000000000000000000000000000000 --- a/spaces/FridaZuley/RVC_HFKawaii/julius/fftconv.py +++ /dev/null @@ -1,183 +0,0 @@ -# File under the MIT license, see https://github.com/adefossez/julius/LICENSE for details. -# Author: adefossez, 2020 - -""" -Implementation of a FFT based 1D convolution in PyTorch. -While FFT is used in CUDNN for small kernel sizes, it is not the case for long ones, e.g. 512. 
-This module implements efficient FFT based convolutions for such convolutions. A typical -application is for evaluationg FIR filters with a long receptive field, typically -evaluated with a stride of 1. -""" -from typing import Optional - -import torch -try: - import torch.fft as new_fft -except ImportError: - new_fft = None # type: ignore -from torch.nn import functional as F - -from .core import pad_to, unfold -from .utils import simple_repr - - -# This is quite verbose, but sadly needed to make TorchScript happy. -def _new_rfft(x: torch.Tensor): - z = new_fft.rfft(x, dim=-1) - return torch.view_as_real(z) - - -def _old_rfft(x: torch.Tensor): - return torch.rfft(x, 1) # type: ignore - - -def _old_irfft(x: torch.Tensor, length: int): - result = torch.irfft(x, 1, signal_sizes=(length,)) # type: ignore - return result - - -def _new_irfft(x: torch.Tensor, length: int): - x = torch.view_as_complex(x) - return new_fft.irfft(x, length, dim=-1) - - -if new_fft is None: - _rfft = _old_rfft - _irfft = _old_irfft -else: - _rfft = _new_rfft - _irfft = _new_irfft - - -def _compl_mul_conjugate(a: torch.Tensor, b: torch.Tensor): - """ - Given a and b two tensors of dimension 4 - with the last dimension being the real and imaginary part, - returns a multiplied by the conjugate of b, the multiplication - being with respect to the second dimension. - - """ - # PyTorch 1.7 supports complex number, but not for all operations. - # Once the support is widespread, this can likely go away. - - op = "bcft,dct->bdft" - return torch.stack([ - torch.einsum(op, a[..., 0], b[..., 0]) + torch.einsum(op, a[..., 1], b[..., 1]), - torch.einsum(op, a[..., 1], b[..., 0]) - torch.einsum(op, a[..., 0], b[..., 1]) - ], - dim=-1) - - -def fft_conv1d( - input: torch.Tensor, weight: torch.Tensor, - bias: Optional[torch.Tensor] = None, stride: int = 1, padding: int = 0, - block_ratio: float = 5): - """ - Same as `torch.nn.functional.conv1d` but using FFT for the convolution. - Please check PyTorch documentation for more information. - - Args: - input (Tensor): input signal of shape `[B, C, T]`. - weight (Tensor): weight of the convolution `[D, C, K]` with `D` the number - of output channels. - bias (Tensor or None): if not None, bias term for the convolution. - stride (int): stride of convolution. - padding (int): padding to apply to the input. - block_ratio (float): can be tuned for speed. The input is splitted in chunks - with a size of `int(block_ratio * kernel_size)`. - - Shape: - - - Inputs: `input` is `[B, C, T]`, `weight` is `[D, C, K]` and bias is `[D]`. - - Output: `(*, T)` - - - ..note:: - This function is faster than `torch.nn.functional.conv1d` only in specific cases. - Typically, the kernel size should be of the order of 256 to see any real gain, - for a stride of 1. - - ..Warning:: - Dilation and groups are not supported at the moment. This function might use - more memory than the default Conv1d implementation. - """ - input = F.pad(input, (padding, padding)) - batch, channels, length = input.shape - out_channels, _, kernel_size = weight.shape - - if length < kernel_size: - raise RuntimeError(f"Input should be at least as large as the kernel size {kernel_size}, " - f"but it is only {length} samples long.") - if block_ratio < 1: - raise RuntimeError("Block ratio must be greater than 1.") - - # We are going to process the input blocks by blocks, as for some reason it is faster - # and less memory intensive (I think the culprit is `torch.einsum`. 
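-    # (For example, with kernel_size=256 and block_ratio=5 on a long input,
-    # block_size = 1280 and fold_stride = 1280 - 256 + 1 = 1025, so each FFT
-    # block yields 1025 valid output samples once the invalid circular tail
-    # is dropped.)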
- block_size: int = min(int(kernel_size * block_ratio), length) - fold_stride = block_size - kernel_size + 1 - weight = pad_to(weight, block_size) - weight_z = _rfft(weight) - - # We pad the input and get the different frames, on which - frames = unfold(input, block_size, fold_stride) - - frames_z = _rfft(frames) - out_z = _compl_mul_conjugate(frames_z, weight_z) - out = _irfft(out_z, block_size) - # The last bit is invalid, because FFT will do a circular convolution. - out = out[..., :-kernel_size + 1] - out = out.reshape(batch, out_channels, -1) - out = out[..., ::stride] - target_length = (length - kernel_size) // stride + 1 - out = out[..., :target_length] - if bias is not None: - out += bias[:, None] - return out - - -class FFTConv1d(torch.nn.Module): - """ - Same as `torch.nn.Conv1d` but based on `fft_conv1d`. - Please check PyTorch documentation for more information. - - Args: - in_channels (int): number of input channels. - out_channels (int): number of output channels. - kernel_size (int): kernel size of convolution. - stride (int): stride of convolution. - padding (int): padding to apply to the input. - bias (bool): if True, use a bias term. - - ..note:: - This module is faster than `torch.nn.Conv1d` only in specific cases. - Typically, `kernel_size` should be of the order of 256 to see any real gain, - for a stride of 1. - - ..warning:: - Dilation and groups are not supported at the moment. This module might use - more memory than the default Conv1d implementation. - - >>> fftconv = FFTConv1d(12, 24, 128, 4) - >>> x = torch.randn(4, 12, 1024) - >>> print(list(fftconv(x).shape)) - [4, 24, 225] - """ - def __init__(self, in_channels: int, out_channels: int, kernel_size: int, - stride: int = 1, padding: int = 0, bias: bool = True): - super().__init__() - self.in_channels = in_channels - self.out_channels = out_channels - self.kernel_size = kernel_size - self.stride = stride - self.padding = padding - - conv = torch.nn.Conv1d(in_channels, out_channels, kernel_size, bias=bias) - self.weight = conv.weight - self.bias = conv.bias - - def forward(self, input: torch.Tensor): - return fft_conv1d( - input, self.weight, self.bias, self.stride, self.padding) - - def __repr__(self): - return simple_repr(self, overrides={"bias": self.bias is not None}) diff --git a/spaces/GAIR/Factool/factool/math/pipeline.py b/spaces/GAIR/Factool/factool/math/pipeline.py deleted file mode 100644 index afa860f172b87d5145dfa2aa1b388a320291b71f..0000000000000000000000000000000000000000 --- a/spaces/GAIR/Factool/factool/math/pipeline.py +++ /dev/null @@ -1,192 +0,0 @@ -import json -import math -import os -from typing import List, Dict -import yaml -import pdb - -from factool.math.tool import python_executor -from factool.utils.base.pipeline import pipeline - -class math_pipeline(pipeline): - def __init__(self, foundation_model): - super().__init__('math', foundation_model) - - self.tool = python_executor() - - with open(os.path.join(self.prompts_path, "claim_extraction.yaml"), 'r') as file: - data = yaml.load(file, Loader=yaml.FullLoader) - self.claim_prompt = data['math'] - - with open(os.path.join(self.prompts_path, 'query_generation.yaml'), 'r') as file: - data = yaml.load(file, Loader=yaml.FullLoader) - self.query_prompt = data['math'] - - def _verification(self, exec_results): - classification_results = [True for _ in range(len(exec_results))] - for i in range(len(exec_results)): - if exec_results[i] is not None and 'False' in exec_results[i]: - classification_results[i] = False - - return 
classification_results - - async def _claim_extraction(self, samples): - messages_list = [ - [ - {"role": "system", "content": self.claim_prompt['system']}, - {"role": "user", "content": self.claim_prompt['user'].format(input_question=sample['prompt'], input_solution=sample['response'])}, - ] - for sample in samples - ] - return await self.chat.async_run(messages_list, List) - - async def _query_generation(self, claims): - messages_list = [ - [ - {"role": "system", "content": self.query_prompt['system']}, - {"role": "user", "content": self.query_prompt['user'].format(math_calculation=claim['math_calculation'], calculated_answer=claim['calculated_answer'])}, - ] - for claim in claims - ] - return await self.chat.async_run(messages_list, Dict) - - async def run_with_tool_live(self, samples): - claims_in_responses = await self._claim_extraction(samples) - queries_in_responses = [] - exec_results_in_responses = [] - verifications_in_responses = [] - for claims_in_response in claims_in_responses: - queries = await self._query_generation(claims_in_response) - queries_in_responses.append(queries) - exec_results = [] - for query in queries: - try: - exec_results.append(self.tool.run(query['python_snippet'])) - except: - exec_results.append('None') - exec_results_in_responses.append(exec_results) - verifications = self._verification(exec_results) - verifications_in_responses.append(verifications) - - return claims_in_responses, queries_in_responses, exec_results_in_responses, verifications_in_responses - - async def run_with_tool_live_without_claim_extraction(self, claims): - queries = await self._query_generation(claims) - - exec_results = [] - for query in queries: - try: - exec_results.append(self.tool.run(query['python_snippet'])) - except: - exec_results.append(None) - classification_results = self._verification(exec_results) - return queries, exec_results, classification_results - - async def run_with_tool_api_call(self, prompts, responses): - batch_size = 5 - num_batches = math.ceil(len(prompts) / batch_size) - - self.sample_list = [{"prompt": prompt, "response": response, "category": 'math'} for prompt, response in zip(prompts, responses)] - - for i in range(num_batches): - print(i) - batch_start = i * batch_size - batch_end = min((i + 1) * batch_size, len(responses)) - - claims_in_responses, queries_in_responses, exec_results_in_response, verifications_in_responses = await self.run_with_tool_live(self.sample_list[batch_start: batch_end]) - - for j, (claims_in_response, queries_in_response, exec_results_in_response, verifications_in_response) in enumerate(zip(claims_in_responses, queries_in_responses, exec_results_in_response, verifications_in_responses)): - index = batch_start + j - - self.sample_list[index].update({ - 'claims': claims_in_response, - 'queries': queries_in_response, - 'execution_results': exec_results_in_response, - 'claim_level_factuality': verifications_in_response, - 'response_level_factuality': all([verification if verification != None else True for verification in verifications_in_response]) - }) - - return self.sample_list - - async def run_with_tool_dataset(self, annotated_dataset_path: str, with_tool_classified_dataset_path: str, rerun: bool = False, rerun_indices: list = []): - data_path = annotated_dataset_path if not rerun else with_tool_classified_dataset_path - with open(data_path, 'r') as f: - data = [json.loads(line) for line in f] - self.sample_list = data if rerun else [claim for sample in data for claim in sample['claims']] - rerun_elements = 
self.sample_list if not rerun else [self.sample_list[i] for i in rerun_indices] - - batch_size = 10 - num_batches = math.ceil(len(rerun_elements) / batch_size) # 5 - - for i in range(num_batches): - print("test1") - print(i) - batch_start = i * batch_size - batch_end = min((i + 1) * batch_size, len(rerun_elements)) - batch = rerun_elements[batch_start:batch_end] - - queries, exec_results, classification_results = await self.run_with_tool_live_without_claim_extraction(batch) - - for j, (query, exec_result, classification_result) in enumerate(zip(queries, exec_results, classification_results)): - index = batch_start + j if not rerun else rerun_indices[batch_start + j] - self.sample_list[index].update({ - 'query': query, - 'exec_result': exec_result, - 'with_tool_classification': classification_result, - }) - - # save everything after each batch to prevent data loss - with open(with_tool_classified_dataset_path, 'w') as f: - for item in self.sample_list: - try: - json_str = json.dumps(item) - except: - continue - f.write(json_str + '\n') - - async def run_self_check_live(self, fewshot, batch): - user_prompt_key = 'user_3_shot_CoT' if fewshot else 'user_zero_shot_CoT' - messages_list = [ - [ - {"role": "system", "content": self.self_check_prompt['system']}, - {"role": "user", "content": self.self_check_prompt[user_prompt_key].format(input_calculation=response['math_calculation'], input_calculated_answer=response['calculated_answer'])}, - ] - for response in batch - ] - return await self.chat.async_run(messages_list, Dict) - - async def run_self_check_dataset(self, annotated_dataset_path: str, self_check_classified_dataset_path: str, fewshot: bool = False, rerun: bool = False, rerun_indices: list = []): - data_path = annotated_dataset_path if not rerun else self_check_classified_dataset_path - with open(data_path, 'r') as f: - data = [json.loads(line) for line in f] - self.sample_list = data if rerun else [claim for sample in data for claim in sample['claims']] - rerun_elements = self.sample_list if not rerun else [self.sample_list[i] for i in rerun_indices] - - batch_size = 10 - num_batches = math.ceil(len(rerun_elements) / batch_size) - - for i in range(num_batches): - print(i) - batch_start = i * batch_size - batch_end = min((i + 1) * batch_size, len(rerun_elements)) - batch = rerun_elements[batch_start:batch_end] - - responses = await self.run_self_check_live(fewshot, batch) - for j, response in enumerate(responses): - index = batch_start + j if not rerun else rerun_indices[batch_start + j] - if response is None: - self.sample_list[index].update({ - 'self_check_classification': 'None', - 'self_check_reasoning': 'None' - }) - else: - self.sample_list[index].update({ - 'self_check_classification': response.get('factuality', 'None'), - 'self_check_reasoning': response.get('reasoning', 'None') - }) - - # save everything after each batch to prevent data loss - with open(self_check_classified_dataset_path, 'w') as f: - for item in self.sample_list: - json_str = json.dumps(item) - f.write(json_str + '\n') \ No newline at end of file diff --git a/spaces/Gradio-Blocks/EmojiGAN/torch_utils/ops/conv2d_resample.py b/spaces/Gradio-Blocks/EmojiGAN/torch_utils/ops/conv2d_resample.py deleted file mode 100644 index cd4750744c83354bab78704d4ef51ad1070fcc4a..0000000000000000000000000000000000000000 --- a/spaces/Gradio-Blocks/EmojiGAN/torch_utils/ops/conv2d_resample.py +++ /dev/null @@ -1,156 +0,0 @@ -# Copyright (c) 2021, NVIDIA CORPORATION. All rights reserved. 
-# -# NVIDIA CORPORATION and its licensors retain all intellectual property -# and proprietary rights in and to this software, related documentation -# and any modifications thereto. Any use, reproduction, disclosure or -# distribution of this software and related documentation without an express -# license agreement from NVIDIA CORPORATION is strictly prohibited. - -"""2D convolution with optional up/downsampling.""" - -import torch - -from .. import misc -from . import conv2d_gradfix -from . import upfirdn2d -from .upfirdn2d import _parse_padding -from .upfirdn2d import _get_filter_size - -#---------------------------------------------------------------------------- - -def _get_weight_shape(w): - with misc.suppress_tracer_warnings(): # this value will be treated as a constant - shape = [int(sz) for sz in w.shape] - misc.assert_shape(w, shape) - return shape - -#---------------------------------------------------------------------------- - -def _conv2d_wrapper(x, w, stride=1, padding=0, groups=1, transpose=False, flip_weight=True): - """Wrapper for the underlying `conv2d()` and `conv_transpose2d()` implementations. - """ - out_channels, in_channels_per_group, kh, kw = _get_weight_shape(w) - - # Flip weight if requested. - if not flip_weight: # conv2d() actually performs correlation (flip_weight=True) not convolution (flip_weight=False). - w = w.flip([2, 3]) - - # Workaround performance pitfall in cuDNN 8.0.5, triggered when using - # 1x1 kernel + memory_format=channels_last + less than 64 channels. - if kw == 1 and kh == 1 and stride == 1 and padding in [0, [0, 0], (0, 0)] and not transpose: - if x.stride()[1] == 1 and min(out_channels, in_channels_per_group) < 64: - if out_channels <= 4 and groups == 1: - in_shape = x.shape - x = w.squeeze(3).squeeze(2) @ x.reshape([in_shape[0], in_channels_per_group, -1]) - x = x.reshape([in_shape[0], out_channels, in_shape[2], in_shape[3]]) - else: - x = x.to(memory_format=torch.contiguous_format) - w = w.to(memory_format=torch.contiguous_format) - x = conv2d_gradfix.conv2d(x, w, groups=groups) - return x.to(memory_format=torch.channels_last) - - # Otherwise => execute using conv2d_gradfix. - op = conv2d_gradfix.conv_transpose2d if transpose else conv2d_gradfix.conv2d - return op(x, w, stride=stride, padding=padding, groups=groups) - -#---------------------------------------------------------------------------- - -@misc.profiled_function -def conv2d_resample(x, w, f=None, up=1, down=1, padding=0, groups=1, flip_weight=True, flip_filter=False): - r"""2D convolution with optional up/downsampling. - - Padding is performed only once at the beginning, not between the operations. - - Args: - x: Input tensor of shape - `[batch_size, in_channels, in_height, in_width]`. - w: Weight tensor of shape - `[out_channels, in_channels//groups, kernel_height, kernel_width]`. - f: Low-pass filter for up/downsampling. Must be prepared beforehand by - calling upfirdn2d.setup_filter(). None = identity (default). - up: Integer upsampling factor (default: 1). - down: Integer downsampling factor (default: 1). - padding: Padding with respect to the upsampled image. Can be a single number - or a list/tuple `[x, y]` or `[x_before, x_after, y_before, y_after]` - (default: 0). - groups: Split input channels into N groups (default: 1). - flip_weight: False = convolution, True = correlation (default: True). - flip_filter: False = convolution, True = correlation (default: False). - - Returns: - Tensor of the shape `[batch_size, num_channels, out_height, out_width]`. 
- """ - # Validate arguments. - assert isinstance(x, torch.Tensor) and (x.ndim == 4) - assert isinstance(w, torch.Tensor) and (w.ndim == 4) and (w.dtype == x.dtype) - assert f is None or (isinstance(f, torch.Tensor) and f.ndim in [1, 2] and f.dtype == torch.float32) - assert isinstance(up, int) and (up >= 1) - assert isinstance(down, int) and (down >= 1) - assert isinstance(groups, int) and (groups >= 1) - out_channels, in_channels_per_group, kh, kw = _get_weight_shape(w) - fw, fh = _get_filter_size(f) - px0, px1, py0, py1 = _parse_padding(padding) - - # Adjust padding to account for up/downsampling. - if up > 1: - px0 += (fw + up - 1) // 2 - px1 += (fw - up) // 2 - py0 += (fh + up - 1) // 2 - py1 += (fh - up) // 2 - if down > 1: - px0 += (fw - down + 1) // 2 - px1 += (fw - down) // 2 - py0 += (fh - down + 1) // 2 - py1 += (fh - down) // 2 - - # Fast path: 1x1 convolution with downsampling only => downsample first, then convolve. - if kw == 1 and kh == 1 and (down > 1 and up == 1): - x = upfirdn2d.upfirdn2d(x=x, f=f, down=down, padding=[px0,px1,py0,py1], flip_filter=flip_filter) - x = _conv2d_wrapper(x=x, w=w, groups=groups, flip_weight=flip_weight) - return x - - # Fast path: 1x1 convolution with upsampling only => convolve first, then upsample. - if kw == 1 and kh == 1 and (up > 1 and down == 1): - x = _conv2d_wrapper(x=x, w=w, groups=groups, flip_weight=flip_weight) - x = upfirdn2d.upfirdn2d(x=x, f=f, up=up, padding=[px0,px1,py0,py1], gain=up**2, flip_filter=flip_filter) - return x - - # Fast path: downsampling only => use strided convolution. - if down > 1 and up == 1: - x = upfirdn2d.upfirdn2d(x=x, f=f, padding=[px0,px1,py0,py1], flip_filter=flip_filter) - x = _conv2d_wrapper(x=x, w=w, stride=down, groups=groups, flip_weight=flip_weight) - return x - - # Fast path: upsampling with optional downsampling => use transpose strided convolution. - if up > 1: - if groups == 1: - w = w.transpose(0, 1) - else: - w = w.reshape(groups, out_channels // groups, in_channels_per_group, kh, kw) - w = w.transpose(1, 2) - w = w.reshape(groups * in_channels_per_group, out_channels // groups, kh, kw) - px0 -= kw - 1 - px1 -= kw - up - py0 -= kh - 1 - py1 -= kh - up - pxt = max(min(-px0, -px1), 0) - pyt = max(min(-py0, -py1), 0) - x = _conv2d_wrapper(x=x, w=w, stride=up, padding=[pyt,pxt], groups=groups, transpose=True, flip_weight=(not flip_weight)) - x = upfirdn2d.upfirdn2d(x=x, f=f, padding=[px0+pxt,px1+pxt,py0+pyt,py1+pyt], gain=up**2, flip_filter=flip_filter) - if down > 1: - x = upfirdn2d.upfirdn2d(x=x, f=f, down=down, flip_filter=flip_filter) - return x - - # Fast path: no up/downsampling, padding supported by the underlying implementation => use plain conv2d. - if up == 1 and down == 1: - if px0 == px1 and py0 == py1 and px0 >= 0 and py0 >= 0: - return _conv2d_wrapper(x=x, w=w, padding=[py0,px0], groups=groups, flip_weight=flip_weight) - - # Fallback: Generic reference implementation. 
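-    # (No fast path applies: pad and optionally upsample with the FIR filter
-    # via upfirdn2d, run a plain grouped conv2d, then low-pass filter and
-    # downsample if requested.)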
- x = upfirdn2d.upfirdn2d(x=x, f=(f if up > 1 else None), up=up, padding=[px0,px1,py0,py1], gain=up**2, flip_filter=flip_filter) - x = _conv2d_wrapper(x=x, w=w, groups=groups, flip_weight=flip_weight) - if down > 1: - x = upfirdn2d.upfirdn2d(x=x, f=f, down=down, flip_filter=flip_filter) - return x - -#---------------------------------------------------------------------------- diff --git a/spaces/Gradio-Blocks/StyleGAN-NADA/e4e/models/encoders/helpers.py b/spaces/Gradio-Blocks/StyleGAN-NADA/e4e/models/encoders/helpers.py deleted file mode 100644 index c4a58b34ea5ca6912fe53c63dede0a8696f5c024..0000000000000000000000000000000000000000 --- a/spaces/Gradio-Blocks/StyleGAN-NADA/e4e/models/encoders/helpers.py +++ /dev/null @@ -1,140 +0,0 @@ -from collections import namedtuple -import torch -import torch.nn.functional as F -from torch.nn import Conv2d, BatchNorm2d, PReLU, ReLU, Sigmoid, MaxPool2d, AdaptiveAvgPool2d, Sequential, Module - -""" -ArcFace implementation from [TreB1eN](https://github.com/TreB1eN/InsightFace_Pytorch) -""" - - -class Flatten(Module): - def forward(self, input): - return input.view(input.size(0), -1) - - -def l2_norm(input, axis=1): - norm = torch.norm(input, 2, axis, True) - output = torch.div(input, norm) - return output - - -class Bottleneck(namedtuple('Block', ['in_channel', 'depth', 'stride'])): - """ A named tuple describing a ResNet block. """ - - -def get_block(in_channel, depth, num_units, stride=2): - return [Bottleneck(in_channel, depth, stride)] + [Bottleneck(depth, depth, 1) for i in range(num_units - 1)] - - -def get_blocks(num_layers): - if num_layers == 50: - blocks = [ - get_block(in_channel=64, depth=64, num_units=3), - get_block(in_channel=64, depth=128, num_units=4), - get_block(in_channel=128, depth=256, num_units=14), - get_block(in_channel=256, depth=512, num_units=3) - ] - elif num_layers == 100: - blocks = [ - get_block(in_channel=64, depth=64, num_units=3), - get_block(in_channel=64, depth=128, num_units=13), - get_block(in_channel=128, depth=256, num_units=30), - get_block(in_channel=256, depth=512, num_units=3) - ] - elif num_layers == 152: - blocks = [ - get_block(in_channel=64, depth=64, num_units=3), - get_block(in_channel=64, depth=128, num_units=8), - get_block(in_channel=128, depth=256, num_units=36), - get_block(in_channel=256, depth=512, num_units=3) - ] - else: - raise ValueError("Invalid number of layers: {}. 
Must be one of [50, 100, 152]".format(num_layers)) - return blocks - - -class SEModule(Module): - def __init__(self, channels, reduction): - super(SEModule, self).__init__() - self.avg_pool = AdaptiveAvgPool2d(1) - self.fc1 = Conv2d(channels, channels // reduction, kernel_size=1, padding=0, bias=False) - self.relu = ReLU(inplace=True) - self.fc2 = Conv2d(channels // reduction, channels, kernel_size=1, padding=0, bias=False) - self.sigmoid = Sigmoid() - - def forward(self, x): - module_input = x - x = self.avg_pool(x) - x = self.fc1(x) - x = self.relu(x) - x = self.fc2(x) - x = self.sigmoid(x) - return module_input * x - - -class bottleneck_IR(Module): - def __init__(self, in_channel, depth, stride): - super(bottleneck_IR, self).__init__() - if in_channel == depth: - self.shortcut_layer = MaxPool2d(1, stride) - else: - self.shortcut_layer = Sequential( - Conv2d(in_channel, depth, (1, 1), stride, bias=False), - BatchNorm2d(depth) - ) - self.res_layer = Sequential( - BatchNorm2d(in_channel), - Conv2d(in_channel, depth, (3, 3), (1, 1), 1, bias=False), PReLU(depth), - Conv2d(depth, depth, (3, 3), stride, 1, bias=False), BatchNorm2d(depth) - ) - - def forward(self, x): - shortcut = self.shortcut_layer(x) - res = self.res_layer(x) - return res + shortcut - - -class bottleneck_IR_SE(Module): - def __init__(self, in_channel, depth, stride): - super(bottleneck_IR_SE, self).__init__() - if in_channel == depth: - self.shortcut_layer = MaxPool2d(1, stride) - else: - self.shortcut_layer = Sequential( - Conv2d(in_channel, depth, (1, 1), stride, bias=False), - BatchNorm2d(depth) - ) - self.res_layer = Sequential( - BatchNorm2d(in_channel), - Conv2d(in_channel, depth, (3, 3), (1, 1), 1, bias=False), - PReLU(depth), - Conv2d(depth, depth, (3, 3), stride, 1, bias=False), - BatchNorm2d(depth), - SEModule(depth, 16) - ) - - def forward(self, x): - shortcut = self.shortcut_layer(x) - res = self.res_layer(x) - return res + shortcut - - -def _upsample_add(x, y): - """Upsample and add two feature maps. - Args: - x: (Variable) top feature map to be upsampled. - y: (Variable) lateral feature map. - Returns: - (Variable) added feature map. - Note in PyTorch, when input size is odd, the upsampled feature map - with `F.upsample(..., scale_factor=2, mode='nearest')` - maybe not equal to the lateral feature map size. - e.g. - original input size: [N,_,15,15] -> - conv2d feature map size: [N,_,8,8] -> - upsampled feature map size: [N,_,16,16] - So we choose bilinear upsample which supports arbitrary output sizes. 
- """ - _, _, H, W = y.size() - return F.interpolate(x, size=(H, W), mode='bilinear', align_corners=True) + y diff --git a/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/_base_/models/pspnet_unet_s5-d16.py b/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/_base_/models/pspnet_unet_s5-d16.py deleted file mode 100644 index fcff9ec4f41fad158344ecd77313dc14564f3682..0000000000000000000000000000000000000000 --- a/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/_base_/models/pspnet_unet_s5-d16.py +++ /dev/null @@ -1,50 +0,0 @@ -# model settings -norm_cfg = dict(type='SyncBN', requires_grad=True) -model = dict( - type='EncoderDecoder', - pretrained=None, - backbone=dict( - type='UNet', - in_channels=3, - base_channels=64, - num_stages=5, - strides=(1, 1, 1, 1, 1), - enc_num_convs=(2, 2, 2, 2, 2), - dec_num_convs=(2, 2, 2, 2), - downsamples=(True, True, True, True), - enc_dilations=(1, 1, 1, 1, 1), - dec_dilations=(1, 1, 1, 1), - with_cp=False, - conv_cfg=None, - norm_cfg=norm_cfg, - act_cfg=dict(type='ReLU'), - upsample_cfg=dict(type='InterpConv'), - norm_eval=False), - decode_head=dict( - type='PSPHead', - in_channels=64, - in_index=4, - channels=16, - pool_scales=(1, 2, 3, 6), - dropout_ratio=0.1, - num_classes=2, - norm_cfg=norm_cfg, - align_corners=False, - loss_decode=dict( - type='CrossEntropyLoss', use_sigmoid=False, loss_weight=1.0)), - auxiliary_head=dict( - type='FCNHead', - in_channels=128, - in_index=3, - channels=64, - num_convs=1, - concat_input=False, - dropout_ratio=0.1, - num_classes=2, - norm_cfg=norm_cfg, - align_corners=False, - loss_decode=dict( - type='CrossEntropyLoss', use_sigmoid=False, loss_weight=0.4)), - # model training and testing settings - train_cfg=dict(), - test_cfg=dict(mode='slide', crop_size=256, stride=170)) diff --git a/spaces/Grezz/generate_human_motion/VQ-Trans/models/modules.py b/spaces/Grezz/generate_human_motion/VQ-Trans/models/modules.py deleted file mode 100644 index 4f06cd98d4f6029bd3df073095cf50498483d54a..0000000000000000000000000000000000000000 --- a/spaces/Grezz/generate_human_motion/VQ-Trans/models/modules.py +++ /dev/null @@ -1,109 +0,0 @@ -import torch -import torch.nn as nn -from torch.nn.utils.rnn import pack_padded_sequence - -def init_weight(m): - if isinstance(m, nn.Conv1d) or isinstance(m, nn.Linear) or isinstance(m, nn.ConvTranspose1d): - nn.init.xavier_normal_(m.weight) - # m.bias.data.fill_(0.01) - if m.bias is not None: - nn.init.constant_(m.bias, 0) - - -class MovementConvEncoder(nn.Module): - def __init__(self, input_size, hidden_size, output_size): - super(MovementConvEncoder, self).__init__() - self.main = nn.Sequential( - nn.Conv1d(input_size, hidden_size, 4, 2, 1), - nn.Dropout(0.2, inplace=True), - nn.LeakyReLU(0.2, inplace=True), - nn.Conv1d(hidden_size, output_size, 4, 2, 1), - nn.Dropout(0.2, inplace=True), - nn.LeakyReLU(0.2, inplace=True), - ) - self.out_net = nn.Linear(output_size, output_size) - self.main.apply(init_weight) - self.out_net.apply(init_weight) - - def forward(self, inputs): - inputs = inputs.permute(0, 2, 1) - outputs = self.main(inputs).permute(0, 2, 1) - # print(outputs.shape) - return self.out_net(outputs) - - - -class TextEncoderBiGRUCo(nn.Module): - def __init__(self, word_size, pos_size, hidden_size, output_size, device): - super(TextEncoderBiGRUCo, self).__init__() - self.device = device - - self.pos_emb = nn.Linear(pos_size, word_size) - self.input_emb = nn.Linear(word_size, hidden_size) - self.gru = nn.GRU(hidden_size, hidden_size, batch_first=True, 
bidirectional=True) - self.output_net = nn.Sequential( - nn.Linear(hidden_size * 2, hidden_size), - nn.LayerNorm(hidden_size), - nn.LeakyReLU(0.2, inplace=True), - nn.Linear(hidden_size, output_size) - ) - - self.input_emb.apply(init_weight) - self.pos_emb.apply(init_weight) - self.output_net.apply(init_weight) - self.hidden_size = hidden_size - self.hidden = nn.Parameter(torch.randn((2, 1, self.hidden_size), requires_grad=True)) - - # input(batch_size, seq_len, dim) - def forward(self, word_embs, pos_onehot, cap_lens): - num_samples = word_embs.shape[0] - - pos_embs = self.pos_emb(pos_onehot) - inputs = word_embs + pos_embs - input_embs = self.input_emb(inputs) - hidden = self.hidden.repeat(1, num_samples, 1) - - cap_lens = cap_lens.data.tolist() - emb = pack_padded_sequence(input_embs, cap_lens, batch_first=True) - - gru_seq, gru_last = self.gru(emb, hidden) - - gru_last = torch.cat([gru_last[0], gru_last[1]], dim=-1) - - return self.output_net(gru_last) - - -class MotionEncoderBiGRUCo(nn.Module): - def __init__(self, input_size, hidden_size, output_size, device): - super(MotionEncoderBiGRUCo, self).__init__() - self.device = device - - self.input_emb = nn.Linear(input_size, hidden_size) - self.gru = nn.GRU(hidden_size, hidden_size, batch_first=True, bidirectional=True) - self.output_net = nn.Sequential( - nn.Linear(hidden_size*2, hidden_size), - nn.LayerNorm(hidden_size), - nn.LeakyReLU(0.2, inplace=True), - nn.Linear(hidden_size, output_size) - ) - - self.input_emb.apply(init_weight) - self.output_net.apply(init_weight) - self.hidden_size = hidden_size - self.hidden = nn.Parameter(torch.randn((2, 1, self.hidden_size), requires_grad=True)) - - # input(batch_size, seq_len, dim) - def forward(self, inputs, m_lens): - num_samples = inputs.shape[0] - - input_embs = self.input_emb(inputs) - hidden = self.hidden.repeat(1, num_samples, 1) - - cap_lens = m_lens.data.tolist() - emb = pack_padded_sequence(input_embs, cap_lens, batch_first=True, enforce_sorted=False) - - gru_seq, gru_last = self.gru(emb, hidden) - - gru_last = torch.cat([gru_last[0], gru_last[1]], dim=-1) - - return self.output_net(gru_last) diff --git a/spaces/GroveStreet/GTA_SOVITS/diffusion/logger/saver.py b/spaces/GroveStreet/GTA_SOVITS/diffusion/logger/saver.py deleted file mode 100644 index ef78b52b6bcd32106f962b731d3784d72d5f0cce..0000000000000000000000000000000000000000 --- a/spaces/GroveStreet/GTA_SOVITS/diffusion/logger/saver.py +++ /dev/null @@ -1,150 +0,0 @@ -''' -author: wayn391@mastertones -''' - -import os -import json -import time -import yaml -import datetime -import torch -import matplotlib.pyplot as plt -from . 
import utils -from torch.utils.tensorboard import SummaryWriter - -class Saver(object): - def __init__( - self, - args, - initial_global_step=-1): - - self.expdir = args.env.expdir - self.sample_rate = args.data.sampling_rate - - # cold start - self.global_step = initial_global_step - self.init_time = time.time() - self.last_time = time.time() - - # makedirs - os.makedirs(self.expdir, exist_ok=True) - - # path - self.path_log_info = os.path.join(self.expdir, 'log_info.txt') - - # ckpt - os.makedirs(self.expdir, exist_ok=True) - - # writer - self.writer = SummaryWriter(os.path.join(self.expdir, 'logs')) - - # save config - path_config = os.path.join(self.expdir, 'config.yaml') - with open(path_config, "w") as out_config: - yaml.dump(dict(args), out_config) - - - def log_info(self, msg): - '''log method''' - if isinstance(msg, dict): - msg_list = [] - for k, v in msg.items(): - tmp_str = '' - if isinstance(v, int): - tmp_str = '{}: {:,}'.format(k, v) - else: - tmp_str = '{}: {}'.format(k, v) - - msg_list.append(tmp_str) - msg_str = '\n'.join(msg_list) - else: - msg_str = msg - - # dsplay - print(msg_str) - - # save - with open(self.path_log_info, 'a') as fp: - fp.write(msg_str+'\n') - - def log_value(self, dict): - for k, v in dict.items(): - self.writer.add_scalar(k, v, self.global_step) - - def log_spec(self, name, spec, spec_out, vmin=-14, vmax=3.5): - spec_cat = torch.cat([(spec_out - spec).abs() + vmin, spec, spec_out], -1) - spec = spec_cat[0] - if isinstance(spec, torch.Tensor): - spec = spec.cpu().numpy() - fig = plt.figure(figsize=(12, 9)) - plt.pcolor(spec.T, vmin=vmin, vmax=vmax) - plt.tight_layout() - self.writer.add_figure(name, fig, self.global_step) - - def log_audio(self, dict): - for k, v in dict.items(): - self.writer.add_audio(k, v, global_step=self.global_step, sample_rate=self.sample_rate) - - def get_interval_time(self, update=True): - cur_time = time.time() - time_interval = cur_time - self.last_time - if update: - self.last_time = cur_time - return time_interval - - def get_total_time(self, to_str=True): - total_time = time.time() - self.init_time - if to_str: - total_time = str(datetime.timedelta( - seconds=total_time))[:-5] - return total_time - - def save_model( - self, - model, - optimizer, - name='model', - postfix='', - to_json=False): - # path - if postfix: - postfix = '_' + postfix - path_pt = os.path.join( - self.expdir , name+postfix+'.pt') - - # check - print(' [*] model checkpoint saved: {}'.format(path_pt)) - - # save - if optimizer is not None: - torch.save({ - 'global_step': self.global_step, - 'model': model.state_dict(), - 'optimizer': optimizer.state_dict()}, path_pt) - else: - torch.save({ - 'global_step': self.global_step, - 'model': model.state_dict()}, path_pt) - - # to json - if to_json: - path_json = os.path.join( - self.expdir , name+'.json') - utils.to_json(path_params, path_json) - - def delete_model(self, name='model', postfix=''): - # path - if postfix: - postfix = '_' + postfix - path_pt = os.path.join( - self.expdir , name+postfix+'.pt') - - # delete - if os.path.exists(path_pt): - os.remove(path_pt) - print(' [*] model checkpoint deleted: {}'.format(path_pt)) - - def global_step_increment(self): - self.global_step += 1 - - diff --git a/spaces/HCMUT-GraduateThesis-HNTThinh/rgbdsod-multimae-demo/utils/masking_generator.py b/spaces/HCMUT-GraduateThesis-HNTThinh/rgbdsod-multimae-demo/utils/masking_generator.py deleted file mode 100644 index 5603eb30b40e6fea64f23d1f406f47041cc000fc..0000000000000000000000000000000000000000 --- 
a/spaces/HCMUT-GraduateThesis-HNTThinh/rgbdsod-multimae-demo/utils/masking_generator.py +++ /dev/null @@ -1,33 +0,0 @@ -# -------------------------------------------------------- -# Based on BEiT, timm, DINO and DeiT code bases -# https://github.com/microsoft/unilm/tree/master/beit -# https://github.com/rwightman/pytorch-image-models/tree/master/timm -# https://github.com/facebookresearch/deit -# https://github.com/facebookresearch/dino -# -------------------------------------------------------- -import numpy as np - - -class RandomMaskingGenerator: - def __init__(self, input_size, mask_ratio): - if not isinstance(input_size, tuple): - input_size = (input_size,) * 2 - - self.height, self.width = input_size - - self.num_patches = self.height * self.width - self.num_mask = int(mask_ratio * self.num_patches) - - def __repr__(self): - repr_str = "Maks: total patches {}, mask patches {}".format( - self.num_patches, self.num_mask - ) - return repr_str - - def __call__(self): - mask = np.hstack([ - np.zeros(self.num_patches - self.num_mask), - np.ones(self.num_mask), - ]) - np.random.shuffle(mask) - return mask # [196] diff --git a/spaces/HaloMaster/chinesesummary/fengshen/examples/FastDemo/README.md b/spaces/HaloMaster/chinesesummary/fengshen/examples/FastDemo/README.md deleted file mode 100644 index 132519b95da3fd35f4c4fb6aae5d8c44faad3a42..0000000000000000000000000000000000000000 --- a/spaces/HaloMaster/chinesesummary/fengshen/examples/FastDemo/README.md +++ /dev/null @@ -1,105 +0,0 @@ -# 「streamlit」快速搭建你的算法demo -在搭建demo之前,首先得做好这些准备工作: -- 模型训练完毕 -- 模型的入参确定 -- 安装streamlit库,`pip install streamlit` 就可以安装。 - -streamlit脚本的启动方式是 `streamlit run demo.py`,很简单就启动了一个demo页面,页面会随着脚本代码的改变实时刷新的。所以在没有经验的时候,可以创建一个demo.py的文件,照着下面的教程一步一步添加代码,看页面的展示情况。下面开始上干货,具体细节在代码注释中有说明! - -### 第一步 导包 -```python -import streamlit as st -# 其他包更具你的需要导入 -``` -[streamlit](https://streamlit.io)是一个用于构建机器学习、深度学习、数据可视化demo的python框架。它不需要你有web开发的经验,会写python就可以高效的开发你的demo。 - -### 第二步 页面导航信息以及布局配置 - -```python -st.set_page_config( - page_title="余元医疗问答", # 页面标签标题 - page_icon=":shark:", # 页面标签图标 - layout="wide", # 页面的布局 - initial_sidebar_state="expanded", # 左侧的sidebar的布局方式 - # 配置菜单按钮的信息 - menu_items={ - 'Get Help': 'https://www.extremelycoolapp.com/help', - 'Report a bug': "https://www.extremelycoolapp.com/bug", - 'About': "# This is a header. This is an *extremely* cool app!" 
- } - ) -``` -这一步可以省略,如果想让app更加个性化,可以添加这些设置。 - -### 第三步 设置demo标题 -```python -st.title('Demo for MedicalQA') -``` -streamlit的每一个小组件对应于页面都有一个默认的样式展示。 - -### 第四步 配置demo的参数 - -```python -# 此处是用的sidebar,侧边栏作为参数配置模块 -st.sidebar.header("参数配置") -# 这里是在sidebar里面创建了表单,每个表单一定有一个标题和提交按钮 -sbform = st.sidebar.form("固定参数设置") -# slider是滑动条组建,可以配置数值型参数 -n_sample = sbform.slider("设置返回条数",min_value=1,max_value=10,value=3) -text_length = sbform.slider('生成长度:',min_value=32,max_value=512,value=64,step=32) -text_level = sbform.slider('文本多样性:',min_value=0.1,max_value=1.0,value=0.9,step=0.1) -# number_input也可以配置数值型参数 -model_id = sbform.number_input('选择模型号:',min_value=0,max_value=13,value=13,step=1) -# selectbox选择组建,只能选择配置的选项 -trans = sbform.selectbox('选择翻译内核',['百度通用','医疗生物']) -# 提交表单的配置,这些参数的赋值才生效 -sbform.form_submit_button("提交配置") - -# 这里是页面中的参数配置,也是demo的主体之一 -form = st.form("参数设置") -# 本demo是qa demo,所以要录入用户的文本输入,text_input组建可以实现 -input_text = form.text_input('请输入你的问题:',value='',placeholder='例如:糖尿病的症状有哪些?') -form.form_submit_button("提交") -``` -以上就把demo的参数基本配置完成了。 - -### 第五步 模型预测 -```python -# 定义一个前向预测的方法 -# @st.cache(suppress_st_warning=True) -def generate_qa(input_text,n_sample,model_id='7',length=64,translator='baidu',level=0.7): - # 这里我们是把模型用fastapi搭建了一个api服务 - URL = 'http://192.168.190.63:6605/qa' - data = { - "text":input_text,"n_sample":n_sample, - "model_id":model_id,"length":length, - 'translator':translator,'level':level - } - r = requests.get(URL,params=data) - return r.text -# 模型预测结果 -results = generate_qa(input_text,n_sample,model_id=str(model_id), - translator=translator,length=text_length,level=text_level) -``` -这里说明一下,由于demo展示机器没有GPU,所以模型部署采用的是Fastapi部署在后台的。如果demo展示的机器可以直接部署模型,这里可以直接把模型预测的方法写在这里,不需要另外部署模型,再用api的方式调用。这样做有一个值得注意的地方,因为streamlit的代码每一次运行,都是从头到尾执行一遍,就导致模型可能会重复加载,所以这里需要用到st.cache组建,当内容没有更新的时候,会把这一步的结果缓存,而不会重新执行。保证了效率不会因此而下降。 - -### 第六步 结果展示 -```python -with st.spinner('老夫正在思考中🤔...'): - if input_text: - results = generate_qa(input_text,n_sample,model_id=str(model_id), - translator=translator,length=text_length,level=text_level) - for idx,item in enumerate(eval(results),start=1): - st.markdown(f""" - **候选回答「{idx}」:**\n - """) - st.info('中文:%s'%item['fy_next_sentence']) - st.info('英文:%s'%item['next_sentence']) -``` -streamlit对不同格式的内容展示,有丰富的组建,对于文本可以用`st.markdown`组建以及`st.text`和`st.write`展示。更多组建和功能可以参考官方文档:https://docs.streamlit.io - -至此,一个完整的demo展示就完成了。效果图如下: - -![](./image/demo.png) - -完整的代码可以参考:`Fengshenbang-LM/fengshen/examples/FastDemo/YuyuanQA.py` diff --git a/spaces/HaloMaster/chinesesummary/fengshen/examples/wenzhong_qa/finetune_wenzhong.py b/spaces/HaloMaster/chinesesummary/fengshen/examples/wenzhong_qa/finetune_wenzhong.py deleted file mode 100644 index bcdeda71fd2d2d70dd56148451ddf2d4946bf31c..0000000000000000000000000000000000000000 --- a/spaces/HaloMaster/chinesesummary/fengshen/examples/wenzhong_qa/finetune_wenzhong.py +++ /dev/null @@ -1,153 +0,0 @@ -# sys.path.append('./') -import os -import torch -import argparse -import pytorch_lightning as pl -from pytorch_lightning.callbacks import ModelCheckpoint -from pytorch_lightning import Trainer, loggers -from transformers.optimization import get_linear_schedule_with_warmup -from transformers import GPT2LMHeadModel -from fengshen.data.task_dataloader.medicalQADataset import GPT2QADataModel - - -class GPT2FinetuneMedicalQAModelCheckpoint: - @staticmethod - def add_argparse_args(parent_args): - parser = parent_args.add_argument_group('BaseModel') - - parser.add_argument('--monitor', default='train_loss', type=str) - 
parser.add_argument('--mode', default='min', type=str) - parser.add_argument('--dirpath', default='./ckpt/', type=str) - parser.add_argument( - '--filename', default='model-{epoch:02d}-{train_loss:.4f}', type=str) - parser.add_argument('--save_last', action='store_true', default=True) - parser.add_argument('--save_top_k', default=3, type=float) - parser.add_argument('--every_n_train_steps', default=100, type=float) - parser.add_argument('--save_weights_only', default=True, type=bool) - - return parent_args - - def __init__(self, args): - self.callbacks = ModelCheckpoint(monitor=args.monitor, - save_top_k=args.save_top_k, - mode=args.mode, - every_n_train_steps=args.every_n_train_steps, - save_weights_only=args.save_weights_only, - dirpath=args.dirpath, - filename=args.filename, - save_last=args.save_last) - - -class GPT2FinetuneMedicalQA(pl.LightningModule): - - @staticmethod - def add_model_specific_args(parent_args): - parser = parent_args.add_argument_group('BaseModel') - parser.add_argument('--learning_rate', default=1e-4, type=float) - parser.add_argument('--weight_decay', default=0.1, type=float) - parser.add_argument('--warmup', default=0.01, type=float) - return parent_args - - def __init__(self, args, num_data): - super().__init__() - self.args = args - self.num_data = num_data - print('num_data:', num_data) - self.model = GPT2LMHeadModel.from_pretrained(args.pretrained_model_path) - - def setup(self, stage) -> None: - if stage == 'fit': - num_gpus = self.trainer.gpus if self.trainer.gpus is not None else 0 - self.total_step = int(self.trainer.max_epochs * self.num_data - / (max(1, num_gpus) * self.trainer.accumulate_grad_batches)) - print('Total training step:', self.total_step) - - def training_step(self, batch, batch_idx): - output = self.model( - input_ids=batch['input_ids'], attention_mask=batch['attention_mask'], labels=batch['labels']) - # output = self.model(input_ids=batch['input_ids'], labels=batch['labels']) - # acc = self.comput_metrix(output.logits, batch['labels']) - self.log('train_loss', output.loss) - return output.loss - - def comput_metrix(self, logits, labels): - y_pred = torch.argmax(logits, dim=-1) - y_pred = y_pred.view(size=(-1,)) - y_true = labels.view(size=(-1,)).float() - corr = torch.eq(y_pred, y_true) - acc = torch.sum(corr.float()) / labels.size()[0] - return acc - - def validation_step(self, batch, batch_idx): - output = self.model( - input_ids=batch['input_ids'], attention_mask=batch['attention_mask'], labels=batch['labels']) - # output = self.model(input_ids=batch['input_ids'], labels=batch['labels']) - # acc = self.comput_metrix(output.logits, batch['labels']) - self.log('val_loss', output.loss) - # self.log('val_acc', acc) - - def configure_optimizers(self): - no_decay = ['bias', 'LayerNorm.bias', 'LayerNorm.weight'] - paras = list( - filter(lambda p: p[1].requires_grad, self.named_parameters())) - paras = [{ - 'params': - [p for n, p in paras if not any(nd in n for nd in no_decay)], - 'weight_decay': self.args.weight_decay - }, { - 'params': [p for n, p in paras if any(nd in n for nd in no_decay)], - 'weight_decay': 0.0 - }] - optimizer = torch.optim.AdamW(paras, lr=self.args.learning_rate) - scheduler = get_linear_schedule_with_warmup( - optimizer, int(self.total_step * self.args.warmup), - self.total_step) - - return [{ - 'optimizer': optimizer, - 'lr_scheduler': { - 'scheduler': scheduler, - 'interval': 'step', - 'frequency': 1 - } - }] - - -def main(): - total_parser = argparse.ArgumentParser("QA Task") - 
total_parser.add_argument('--do_eval_only', action='store_true', default=False) - total_parser.add_argument('--pretrained_model_path', default='google/mt5-small', type=str) - total_parser.add_argument('--output_save_path', default='./predict.json', type=str) - # * Args for data preprocessing - total_parser = GPT2QADataModel.add_data_specific_args(total_parser) - # * Args for training - total_parser = Trainer.add_argparse_args(total_parser) - total_parser = GPT2FinetuneMedicalQAModelCheckpoint.add_argparse_args(total_parser) - total_parser = GPT2FinetuneMedicalQA.add_model_specific_args(total_parser) - # * Args for base model - args = total_parser.parse_args() - - data_model = GPT2QADataModel(args) - if not args.do_eval_only: - model = GPT2FinetuneMedicalQA(args, len(data_model.train_dataloader())) - checkpoint_callback = GPT2FinetuneMedicalQAModelCheckpoint(args).callbacks - logger = loggers.TensorBoardLogger(save_dir=os.path.join( - args.default_root_dir, 'log/'), name='WenZhong') - trainer = Trainer.from_argparse_args(args, - logger=logger, - callbacks=[checkpoint_callback] - ) - trainer.fit(model, data_model) - - -if __name__ == '__main__': - main() - # test() - -''' -# python examples/mt5_summary.py --gpus=1 --test_data=test_public.jsonl -# --default_root_dir=/cognitive_comp/ganruyi/fengshen/mt5_summary/eval -# --do_eval_only -# --resume_from_checkpoint=/cognitive_comp/ganruyi/fengshen/mt5_summary/ckpt/model-epoch=01-train_loss=1.9166.ckpt -# --strategy=ddp -''' diff --git a/spaces/HaloMaster/chinesesummary/fengshen/models/longformer/__init__.py b/spaces/HaloMaster/chinesesummary/fengshen/models/longformer/__init__.py deleted file mode 100644 index 8c068ccdcd2a786128a6a90032fea2ff74d3ea0f..0000000000000000000000000000000000000000 --- a/spaces/HaloMaster/chinesesummary/fengshen/models/longformer/__init__.py +++ /dev/null @@ -1,55 +0,0 @@ -# coding=utf-8 -# Copyright 2021 The IDEA Authors. All rights reserved. - -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at - -# http://www.apache.org/licenses/LICENSE-2.0 - -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. 
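-# (_import_structure below maps each submodule to the public names it exports;
-# wrapping the package in _LazyModule defers importing the torch-heavy
-# modeling code until one of those names is first accessed.)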
- -from typing import TYPE_CHECKING - -from transformers.file_utils import _LazyModule, is_torch_available - - -_import_structure = { - "configuration_longformer": ["LongformerConfig"], - "tokenization_longformer": ["LongformerTokenizer"], -} - -if is_torch_available(): - _import_structure["modeling_longformer"] = [ - "LongformerModel", - "LongformerForMaskedLM", - "LongformerForMultipleChoice", - "LongformerPreTrainedModel", - "LongformerForQuestionAnswering", - "LongformerForSequenceClassification", - "LongformerForTokenClassification", - ] - - -if TYPE_CHECKING: - from .configuration_longformer import LongformerConfig - from .tokenization_longformer import LongformerTokenizer - - if is_torch_available(): - from .modeling_longformer import ( - LongformerModel, - LongformerForMaskedLM, - LongformerForMultipleChoice, - LongformerPreTrainedModel, - LongformerForQuestionAnswering, - LongformerForSequenceClassification, - LongformerForTokenClassification, - ) -else: - import sys - - sys.modules[__name__] = _LazyModule(__name__, globals()["__file__"], _import_structure) diff --git a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/backtranslation/prepare-de-monolingual.sh b/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/backtranslation/prepare-de-monolingual.sh deleted file mode 100644 index 5e67b2b3bcf27d3436031453e796e58a0ae79ec4..0000000000000000000000000000000000000000 --- a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/backtranslation/prepare-de-monolingual.sh +++ /dev/null @@ -1,98 +0,0 @@ -#!/bin/bash - -SCRIPTS=mosesdecoder/scripts -TOKENIZER=$SCRIPTS/tokenizer/tokenizer.perl -NORM_PUNC=$SCRIPTS/tokenizer/normalize-punctuation.perl -REM_NON_PRINT_CHAR=$SCRIPTS/tokenizer/remove-non-printing-char.perl -BPEROOT=subword-nmt/subword_nmt - - -BPE_CODE=wmt18_en_de/code -SUBSAMPLE_SIZE=25000000 -LANG=de - - -OUTDIR=wmt18_${LANG}_mono -orig=orig -tmp=$OUTDIR/tmp -mkdir -p $OUTDIR $tmp - - -URLS=( - "http://www.statmt.org/wmt14/training-monolingual-news-crawl/news.2007.de.shuffled.gz" - "http://www.statmt.org/wmt14/training-monolingual-news-crawl/news.2008.de.shuffled.gz" - "http://www.statmt.org/wmt14/training-monolingual-news-crawl/news.2009.de.shuffled.gz" - "http://www.statmt.org/wmt14/training-monolingual-news-crawl/news.2010.de.shuffled.gz" - "http://www.statmt.org/wmt14/training-monolingual-news-crawl/news.2011.de.shuffled.gz" - "http://www.statmt.org/wmt14/training-monolingual-news-crawl/news.2012.de.shuffled.gz" - "http://www.statmt.org/wmt14/training-monolingual-news-crawl/news.2013.de.shuffled.gz" - "http://www.statmt.org/wmt15/training-monolingual-news-crawl-v2/news.2014.de.shuffled.v2.gz" - "http://data.statmt.org/wmt16/translation-task/news.2015.de.shuffled.gz" - "http://data.statmt.org/wmt17/translation-task/news.2016.de.shuffled.gz" - "http://data.statmt.org/wmt18/translation-task/news.2017.de.shuffled.deduped.gz" -) -FILES=( - "news.2007.de.shuffled.gz" - "news.2008.de.shuffled.gz" - "news.2009.de.shuffled.gz" - "news.2010.de.shuffled.gz" - "news.2011.de.shuffled.gz" - "news.2012.de.shuffled.gz" - "news.2013.de.shuffled.gz" - "news.2014.de.shuffled.v2.gz" - "news.2015.de.shuffled.gz" - "news.2016.de.shuffled.gz" - "news.2017.de.shuffled.deduped.gz" -) - - -cd $orig -for ((i=0;i<${#URLS[@]};++i)); do - file=${FILES[i]} - if [ -f $file ]; then - echo "$file already exists, skipping download" - else - url=${URLS[i]} - wget "$url" - fi -done -cd .. 
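The longformer `__init__.py` above defers importing the heavy modeling code until a symbol is actually requested, delegating the bookkeeping to transformers' `_LazyModule`. As a rough illustration of the same idea (not the transformers implementation), a package `__init__.py` can achieve deferred imports with a PEP 562 module-level `__getattr__`; the module and symbol names below mirror the structure above, but the sketch is generic.

```python
# Illustrative package __init__.py using PEP 562 lazy loading; the deleted file
# relies on transformers' _LazyModule for the same behaviour.
import importlib

_import_structure = {
    "configuration_longformer": ["LongformerConfig"],
    "tokenization_longformer": ["LongformerTokenizer"],
}
# Map each public symbol to the submodule that defines it.
_symbol_to_module = {sym: mod for mod, syms in _import_structure.items() for sym in syms}


def __getattr__(name):
    module_name = _symbol_to_module.get(name)
    if module_name is None:
        raise AttributeError(f"module {__name__!r} has no attribute {name!r}")
    # The submodule is imported only on first access to one of its symbols.
    module = importlib.import_module(f".{module_name}", __name__)
    return getattr(module, name)


def __dir__():
    return sorted(_symbol_to_module)
```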
- - -if [ -f $tmp/monolingual.${SUBSAMPLE_SIZE}.${LANG} ]; then - echo "found monolingual sample, skipping shuffle/sample/tokenize" -else - gzip -c -d -k $(for FILE in "${FILES[@]}"; do echo $orig/$FILE; done) \ - | shuf -n $SUBSAMPLE_SIZE \ - | perl $NORM_PUNC $LANG \ - | perl $REM_NON_PRINT_CHAR \ - | perl $TOKENIZER -threads 8 -a -l $LANG \ - > $tmp/monolingual.${SUBSAMPLE_SIZE}.${LANG} -fi - - -if [ -f $tmp/bpe.monolingual.${SUBSAMPLE_SIZE}.${LANG} ]; then - echo "found BPE monolingual sample, skipping BPE step" -else - python $BPEROOT/apply_bpe.py -c $BPE_CODE \ - < $tmp/monolingual.${SUBSAMPLE_SIZE}.${LANG} \ - > $tmp/bpe.monolingual.${SUBSAMPLE_SIZE}.${LANG} -fi - - -if [ -f $tmp/bpe.monolingual.dedup.${SUBSAMPLE_SIZE}.${LANG} ]; then - echo "found deduplicated monolingual sample, skipping deduplication step" -else - python deduplicate_lines.py $tmp/bpe.monolingual.${SUBSAMPLE_SIZE}.${LANG} \ - > $tmp/bpe.monolingual.dedup.${SUBSAMPLE_SIZE}.${LANG} -fi - - -if [ -f $OUTDIR/bpe.monolingual.dedup.00.de ]; then - echo "found sharded data, skipping sharding step" -else - split --lines 1000000 --numeric-suffixes \ - --additional-suffix .${LANG} \ - $tmp/bpe.monolingual.dedup.${SUBSAMPLE_SIZE}.${LANG} \ - $OUTDIR/bpe.monolingual.dedup. -fi diff --git a/spaces/Hexamind/swarms/dronemodel.py b/spaces/Hexamind/swarms/dronemodel.py deleted file mode 100644 index caf99b95b5794071da29af9fbf8736875f94c27c..0000000000000000000000000000000000000000 --- a/spaces/Hexamind/swarms/dronemodel.py +++ /dev/null @@ -1,103 +0,0 @@ -from dataclasses import dataclass -from scipy.integrate import odeint -import numpy as np - -import param_ - - -@dataclass -class DroneModel: - """ - Creates a drone_model of a drone - """ - - def __init__(self, is_blue): - self.drone_model = param_.DRONE_MODELS[param_.DRONE_MODEL[is_blue]] - - self.angle_to_neutralisation = self.drone_model['angle_to_neutralisation'] - self.distance_to_neutralisation = self.drone_model['distance_to_neutralisation'] - self.duration_to_neutralisation = self.drone_model['duration_to_neutralisation'] - - self.Cxy = self.drone_model['Cxy'] - self.Cz = self.drone_model['Cz'] - self.mass = self.drone_model['mass'] - - self.Fxy_ratio = self.drone_model['Fxy_ratio'] - self.Fz_min_ratio = self.drone_model['Fz_min_ratio'] - self.Fz_max_ratio = self.drone_model['Fz_max_ratio'] - - self.weight_eq = self.mass * param_.g * (1 - self.Fz_min_ratio) - self.Fz_plus = (self.Fz_max_ratio - 1) * self.mass * param_.g - self.Fz_minus = (1 - self.Fz_min_ratio) * self.mass * param_.g - self.Fxy = self.mass * param_.g * self.Fxy_ratio - - self.max_speed = np.sqrt(self.Fxy / self.Cxy) - self.max_up_speed = np.sqrt(self.Fz_plus / self.Cz) - self.max_down_speed = np.sqrt(self.Fz_minus / self.Cz) - self.max_rot_speed = 2 * np.pi - - def get_trajectory(self, pos_xyz, speed_xyz, action: np.ndarray(3,), time_: np.ndarray(1,)) -> np.ndarray(3,): - ''' - returns next position given the current position, speed and applied forces - :param pos_xyz: - :param speed_xyz: - :param action: - :param time_: - :return: - ''' - - rho = action[0] # in 0, 1 - theta = 2*np.pi * action[1] # in 0, 2pi - psy = np.pi * (action[2] - 0.5) # in -pi/2, pi/2 - - fx = rho * np.cos(theta) * np.cos(psy) * self.Fxy - fy = rho * np.sin(theta) * np.cos(psy) * self.Fxy - fz = rho * np.sin(psy) * (self.Fz_plus if 0 < psy else self.Fz_minus) - - pos_speed = np.hstack((pos_xyz, speed_xyz)) - - result_ = odeint( - lambda u, v: self.drone_dynamics(u, v, fx, fy, fz, self.Cxy, self.Cz, self.mass), - pos_speed, - 
time_, - Dfun=lambda u, v: self.fulljac(u, v, self.Cxy, self.Cz, self.mass) - ) - x, y, z, dx, dy, dz = result_.T - - return np.array([x, y, z], dtype='float32'), np.array([dx, dy, dz], dtype='float32') - - def drone_dynamics(self, pos_speed, time_, f_x, f_y, f_z, Cxy, Cz, m): - x, y, z, dx, dy, dz = pos_speed - return [dx, - dy, - dz, - 1/m * (f_x - Cxy * dx * np.sqrt(dx**2 + dy**2 + dz**2)), - 1/m * (f_y - Cxy * dy * np.sqrt(dx**2 + dy**2 + dz**2)), - 1/m * (f_z - Cz * dz * np.sqrt(dx**2 + dy**2 + dz**2))] - - def fulljac(self, pos_speed, time_, Cxy, Cz, m) -> np.ndarray((6, 6), ): - ''' - returns the Jacobian of the differential equation of the trajectory - :param pos_speed: - :param time_: - :param Cxy: - :param Cz: - :param m: - :return: - ''' - - x, y, z, dx, dy, dz = pos_speed - J = np.zeros((6, 6)) - J[0, 3] = 1 - J[1, 4] = 1 - J[2, 5] = 1 - J[3, 3] = -Cxy/m * ((np.sqrt(dx**2 + dy**2 + dz**2)) + dx**2 / np.sqrt(dx**2 + dy**2 + dz**2)) - J[3, 4] = -Cxy/m * (dx * dy / np.sqrt(dx**2 + dy**2 + dz**2)) - J[3, 5] = -Cxy/m * (dx * dz / np.sqrt(dx**2 + dy**2 + dz**2)) - J[4, 4] = -Cxy/m * ((np.sqrt(dx**2 + dy**2 + dz**2)) + dy**2 / np.sqrt(dx**2 + dy**2 + dz**2)) - J[4, 3] = -Cxy/m * (dy * dx / np.sqrt(dx**2 + dy**2 + dz**2)) - J[4, 5] = -Cxy/m * (dy * dz / np.sqrt(dx**2 + dy**2 + dz**2)) - J[5, 5] = -Cz/m * ((np.sqrt(dx**2 + dy**2 + dz**2)) + dz**2 / np.sqrt(dx**2 + dy**2 + dz**2)) - J[5, 3] = -Cz/m * (dz * dx / np.sqrt(dx**2 + dy**2 + dz**2)) - J[5, 4] = -Cz/m * (dz * dy / np.sqrt(dx**2 + dy**2 + dz**2)) - return J diff --git a/spaces/HighCWu/GPEN/retinaface/facemodels/retinaface.py b/spaces/HighCWu/GPEN/retinaface/facemodels/retinaface.py deleted file mode 100644 index b7092a2bc2f35d06ce99d25473bce913ef3fd8e7..0000000000000000000000000000000000000000 --- a/spaces/HighCWu/GPEN/retinaface/facemodels/retinaface.py +++ /dev/null @@ -1,127 +0,0 @@ -import torch -import torch.nn as nn -import torchvision.models.detection.backbone_utils as backbone_utils -import torchvision.models._utils as _utils -import torch.nn.functional as F -from collections import OrderedDict - -from facemodels.net import MobileNetV1 as MobileNetV1 -from facemodels.net import FPN as FPN -from facemodels.net import SSH as SSH - - - -class ClassHead(nn.Module): - def __init__(self,inchannels=512,num_anchors=3): - super(ClassHead,self).__init__() - self.num_anchors = num_anchors - self.conv1x1 = nn.Conv2d(inchannels,self.num_anchors*2,kernel_size=(1,1),stride=1,padding=0) - - def forward(self,x): - out = self.conv1x1(x) - out = out.permute(0,2,3,1).contiguous() - - return out.view(out.shape[0], -1, 2) - -class BboxHead(nn.Module): - def __init__(self,inchannels=512,num_anchors=3): - super(BboxHead,self).__init__() - self.conv1x1 = nn.Conv2d(inchannels,num_anchors*4,kernel_size=(1,1),stride=1,padding=0) - - def forward(self,x): - out = self.conv1x1(x) - out = out.permute(0,2,3,1).contiguous() - - return out.view(out.shape[0], -1, 4) - -class LandmarkHead(nn.Module): - def __init__(self,inchannels=512,num_anchors=3): - super(LandmarkHead,self).__init__() - self.conv1x1 = nn.Conv2d(inchannels,num_anchors*10,kernel_size=(1,1),stride=1,padding=0) - - def forward(self,x): - out = self.conv1x1(x) - out = out.permute(0,2,3,1).contiguous() - - return out.view(out.shape[0], -1, 10) - -class RetinaFace(nn.Module): - def __init__(self, cfg = None, phase = 'train'): - """ - :param cfg: Network related settings. - :param phase: train or test. 
- """ - super(RetinaFace,self).__init__() - self.phase = phase - backbone = None - if cfg['name'] == 'mobilenet0.25': - backbone = MobileNetV1() - if cfg['pretrain']: - checkpoint = torch.load("./weights/mobilenetV1X0.25_pretrain.tar", map_location=torch.device('cpu')) - from collections import OrderedDict - new_state_dict = OrderedDict() - for k, v in checkpoint['state_dict'].items(): - name = k[7:] # remove module. - new_state_dict[name] = v - # load params - backbone.load_state_dict(new_state_dict) - elif cfg['name'] == 'Resnet50': - import torchvision.models as models - backbone = models.resnet50(pretrained=cfg['pretrain']) - - self.body = _utils.IntermediateLayerGetter(backbone, cfg['return_layers']) - in_channels_stage2 = cfg['in_channel'] - in_channels_list = [ - in_channels_stage2 * 2, - in_channels_stage2 * 4, - in_channels_stage2 * 8, - ] - out_channels = cfg['out_channel'] - self.fpn = FPN(in_channels_list,out_channels) - self.ssh1 = SSH(out_channels, out_channels) - self.ssh2 = SSH(out_channels, out_channels) - self.ssh3 = SSH(out_channels, out_channels) - - self.ClassHead = self._make_class_head(fpn_num=3, inchannels=cfg['out_channel']) - self.BboxHead = self._make_bbox_head(fpn_num=3, inchannels=cfg['out_channel']) - self.LandmarkHead = self._make_landmark_head(fpn_num=3, inchannels=cfg['out_channel']) - - def _make_class_head(self,fpn_num=3,inchannels=64,anchor_num=2): - classhead = nn.ModuleList() - for i in range(fpn_num): - classhead.append(ClassHead(inchannels,anchor_num)) - return classhead - - def _make_bbox_head(self,fpn_num=3,inchannels=64,anchor_num=2): - bboxhead = nn.ModuleList() - for i in range(fpn_num): - bboxhead.append(BboxHead(inchannels,anchor_num)) - return bboxhead - - def _make_landmark_head(self,fpn_num=3,inchannels=64,anchor_num=2): - landmarkhead = nn.ModuleList() - for i in range(fpn_num): - landmarkhead.append(LandmarkHead(inchannels,anchor_num)) - return landmarkhead - - def forward(self,inputs): - out = self.body(inputs) - - # FPN - fpn = self.fpn(out) - - # SSH - feature1 = self.ssh1(fpn[0]) - feature2 = self.ssh2(fpn[1]) - feature3 = self.ssh3(fpn[2]) - features = [feature1, feature2, feature3] - - bbox_regressions = torch.cat([self.BboxHead[i](feature) for i, feature in enumerate(features)], dim=1) - classifications = torch.cat([self.ClassHead[i](feature) for i, feature in enumerate(features)],dim=1) - ldm_regressions = torch.cat([self.LandmarkHead[i](feature) for i, feature in enumerate(features)], dim=1) - - if self.phase == 'train': - output = (bbox_regressions, classifications, ldm_regressions) - else: - output = (bbox_regressions, F.softmax(classifications, dim=-1), ldm_regressions) - return output \ No newline at end of file diff --git a/spaces/HighCWu/Style2Paints-4-Gradio/decompositioner.py b/spaces/HighCWu/Style2Paints-4-Gradio/decompositioner.py deleted file mode 100644 index f54236f341d97d241ef1050bd28c8d00012d1a3e..0000000000000000000000000000000000000000 --- a/spaces/HighCWu/Style2Paints-4-Gradio/decompositioner.py +++ /dev/null @@ -1,185 +0,0 @@ -import os -import numpy as np -from scipy.spatial import ConvexHull -from sklearn.cluster import MiniBatchKMeans -from tricks import * -import cv2 - - -ksd = 8 -mbc = MiniBatchKMeans(ksd) - - -def get_theme(img): - images = np.reshape(cv2.resize(img, (256, 256)), (256 * 256, 3)) - hull = ConvexHull(images) - return hull.points[hull.vertices] - - -def simplify_points(points, img): - labels = mbc.fit(points) - new_points = [] - all_center = np.mean(labels.cluster_centers_, axis=0) - 
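In the RetinaFace heads above, each 1×1 convolution emits `num_anchors * k` channels per spatial location, and the `permute(0, 2, 3, 1)` + `view` pair flattens that into one prediction row per anchor. A small self-contained check of that shape flow (toy channel count and feature-map size, not the real network) might look like:

```python
import torch
from torch import nn

# Toy shape check: (N, num_anchors*2, H, W) -> (N, H*W*num_anchors, 2),
# mirroring ClassHead.forward above (BboxHead/LandmarkHead use 4 and 10 instead of 2).
num_anchors, in_channels = 3, 64
conv1x1 = nn.Conv2d(in_channels, num_anchors * 2, kernel_size=1)

x = torch.randn(2, in_channels, 20, 20)          # batch of 2, 20x20 feature map
out = conv1x1(x).permute(0, 2, 3, 1).contiguous().view(2, -1, 2)
print(out.shape)                                  # torch.Size([2, 1200, 2]); 1200 = 20*20*3
```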
distances = np.sum((points - all_center) ** 2, axis=1) ** 0.5 - - for idx in range(ksd): - candidates = points[labels.labels_ == idx] - scores = distances[labels.labels_ == idx] - best_id = np.argmax(scores) - new_points.append(candidates[best_id]) - - new_points.sort(key=np.sum, reverse=True) - - new_points = np.stack(new_points, axis=0) - return new_points.clip(0, 255).astype(np.uint8) - - -def get_ini_layers(miku, points): - results = [] - final_target = miku.astype(np.float32) - bg = np.zeros_like(final_target, dtype=np.float32) + points[0] - results.append(np.concatenate([bg, np.zeros_like(bg, dtype=np.float32) + 255], axis=2)[:, :, 0:4]) - current_result = bg.copy() - for layer_index in range(1, ksd): - current_base = current_result.astype(np.float32) - current_color = np.zeros_like(final_target, dtype=np.float32) + points[layer_index] - overall_direction = final_target - current_base - avaliable_direction = current_color - current_base - current_alpha = np.sum(overall_direction * avaliable_direction, axis=2, keepdims=True) / np.sum( - avaliable_direction * avaliable_direction, axis=2, keepdims=True) - current_alpha = current_alpha.clip(0, 1) - current_result = (current_color * current_alpha + current_base * (1 - current_alpha)).clip(0, 255) - results.append(np.concatenate([current_color, current_alpha * 255.0], axis=2)) - return results - - -def make_reconstruction(layers): - bg = np.zeros_like(layers[0], dtype=np.float32)[:, :, 0:3] + 255 - for item in layers: - current_alpha = item[:, :, 3:4] / 255.0 - bg = item[:, :, 0:3] * current_alpha + bg * (1 - current_alpha) - return bg - - -def improve_layers(layers, miku): - reconstruction = make_reconstruction(layers) - b = miku - reconstruction - new_layers = [] - for item in layers: - new_item = item.copy() - new_item[:, :, 0:3] = (new_item[:, :, 0:3] + b).clip(0, 255) - new_layers.append(new_item) - return new_layers - - -def cluster_all(labeled_array, num_features): - xs = [[] for _ in range(num_features)] - ys = [[] for _ in range(num_features)] - M = labeled_array.shape[0] - N = labeled_array.shape[1] - for x in range(M): - for y in range(N): - i = labeled_array[x, y] - xs[i].append(x) - ys[i].append(y) - result = [] - for _ in range(num_features): - result.append((np.array(xs[_]), np.array(ys[_]))) - return result - - -def meder(x): - y = x.copy() - y = cv2.medianBlur(y, 5) - y = cv2.medianBlur(y, 5) - y = cv2.medianBlur(y, 3) - y = cv2.medianBlur(y, 3) - return y - - -def re_med(s_2048): - - sample_2048 = s_2048.astype(np.float32) - sample_1024 = cv2.pyrDown(sample_2048) - sample_512 = cv2.pyrDown(sample_1024) - sample_256 = cv2.pyrDown(sample_512) - - gradient_2048 = sample_2048 - cv2.pyrUp(sample_1024) - gradient_1024 = sample_1024 - cv2.pyrUp(sample_512) - gradient_512 = sample_512 - cv2.pyrUp(sample_256) - - rec_256 = meder(sample_256) - rec_512 = cv2.pyrUp(rec_256) + meder(gradient_512) - rec_1024 = cv2.pyrUp(rec_512) + meder(gradient_1024) - rec_2048 = cv2.pyrUp(rec_1024) + meder(gradient_2048) - return rec_2048 - - -def process_ctx(sketch, solid, render): - solid = solid.astype(np.float32) - sketch = d_resize(cv2.cvtColor(sketch, cv2.COLOR_GRAY2RGB), solid.shape).astype(np.float32) - render = d_resize(render, solid.shape).astype(np.float32) - alpha = sketch / 255.0 - all_diff = render - solid - all_lines = render.copy() - all_lines = cv2.erode(all_lines, np.ones((3,3), np.uint8)) * 0.618 - all_diff = re_med(all_diff) - all_lines = re_med(all_lines) - recon = solid + all_diff - recon = recon * alpha + all_lines * (1 - 
alpha) - recon2 = (solid + all_diff) * alpha + re_med(solid) * (1 - alpha) - recon3 = reason_blending(recon2, sketch) - return recon.clip(0, 255).astype(np.uint8), recon2.clip(0, 255).astype(np.uint8), recon3.clip(0, 255).astype(np.uint8) - - -def process_psd(sketch, solid, render, path='./'): - recon = process_ctx(sketch, solid, render) - points = get_theme(solid) - points = simplify_points(points, solid) - compositions = get_ini_layers(solid, points) - compositions = improve_layers(compositions, solid) - for _ in range(ksd): - cv2.imwrite(path + str(_ + 1) + '.color.png', compositions[_].clip(0, 255).astype(np.uint8)) - solid = make_reconstruction(compositions).clip(0, 255).astype(np.uint8) - os.makedirs(path, exist_ok=True) - alpha = 1 - sketch.astype(np.float32) / 255.0 - now = solid - now = (now.astype(np.float32) + sketch.astype(np.float32) - 255.0).clip(0, 255) - sketch = 255 + now - solid - cv2.imwrite(path + '9.sketch.png', sketch.clip(0, 255).astype(np.uint8)) - all_diff = recon.astype(np.float32) - now - all_light = all_diff.copy() - all_shadow = - all_diff.copy() - all_light[all_light < 0] = 0 - all_shadow[all_shadow < 0] = 0 - sketch_color = all_light * alpha - light = all_light * (1 - alpha) - all_shadow = 255 - all_shadow - cv2.imwrite(path + '10.sketch_color.png', sketch_color.clip(0, 255).astype(np.uint8)) - cv2.imwrite(path + '11.light.png', light.clip(0, 255).astype(np.uint8)) - cv2.imwrite(path + '12.shadow.png', all_shadow.clip(0, 255).astype(np.uint8)) - return recon - - -def process_albedo(albedo, composition, sketch): - DEL = albedo.astype(np.float32) - HSV = cv2.cvtColor(albedo, cv2.COLOR_RGB2HSV).astype(np.float32) - YUV = cv2.cvtColor(albedo, cv2.COLOR_RGB2YUV).astype(np.float32) - solid = composition.astype(np.float32) - light = sketch[:, :, None].astype(np.float32) - - DEL = DEL * light / 255.0 + solid * (1 - light / 255.0) - HSV[:, :, 2:3] = np.minimum(HSV[:, :, 2:3], light) - YUV[:, :, 0:1] = np.minimum(YUV[:, :, 0:1], light) - - DEL = DEL.clip(0, 255).astype(np.uint8) - HSV = HSV.clip(0, 255).astype(np.uint8) - YUV = YUV.clip(0, 255).astype(np.uint8) - - return cv2.cvtColor(HSV, cv2.COLOR_HSV2RGB), cv2.cvtColor(YUV, cv2.COLOR_YUV2RGB), DEL - - -def process_overlay(composition, sketch): - RGB = composition.astype(np.float32) - alpha = sketch[:, :, None].astype(np.float32) / 255.0 - return (RGB * alpha).clip(0, 255).astype(np.uint8) diff --git a/spaces/HighCWu/anime-colorization-with-hint/gradio-modified/gradio/templates/frontend/assets/module.d8037460.js b/spaces/HighCWu/anime-colorization-with-hint/gradio-modified/gradio/templates/frontend/assets/module.d8037460.js deleted file mode 100644 index 8d5e84b3696a9ef1b576f84f8a09e2600aaa9d02..0000000000000000000000000000000000000000 --- a/spaces/HighCWu/anime-colorization-with-hint/gradio-modified/gradio/templates/frontend/assets/module.d8037460.js +++ /dev/null @@ -1,2 +0,0 @@ -import{c as i}from"./module.e2741a44.js";const c=i({characterize:({call:e})=>()=>e("characterize"),encode:({call:e})=>(r,n)=>e("encode",{recordingId:r,timeslice:n}),record:({call:e})=>async(r,n,o)=>{await e("record",{recordingId:r,sampleRate:n,typedArrays:o},o.map(({buffer:a})=>a))}}),u=e=>{const r=new Worker(e);return c(r)},l=`(()=>{var e={775:function(e,t,r){!function(e,t,r,n){"use strict";function o(e){return e&&"object"==typeof e&&"default"in e?e:{default:e}}var s=o(t),a=o(r),i=o(n),u=function(e,t){return void 0===t?e:t.reduce((function(e,t){if("capitalize"===t){var 
r=e.charAt(0).toUpperCase(),n=e.slice(1);return"".concat(r).concat(n)}return"dashify"===t?a.default(e):"prependIndefiniteArticle"===t?"".concat(i.default(e)," ").concat(e):e}),e)},c=function(e){var t=e.name+e.modifiers.map((function(e){return"\\\\.".concat(e,"\\\\(\\\\)")})).join("");return new RegExp("\\\\$\\\\{".concat(t,"}"),"g")},l=function(e,t){for(var r=/\\\${([^.}]+)((\\.[^(]+\\(\\))*)}/g,n=[],o=r.exec(e);null!==o;){var a={modifiers:[],name:o[1]};if(void 0!==o[3])for(var i=/\\.[^(]+\\(\\)/g,l=i.exec(o[2]);null!==l;)a.modifiers.push(l[0].slice(1,-2)),l=i.exec(o[2]);n.push(a),o=r.exec(e)}var d=n.reduce((function(e,r){return e.map((function(e){return"string"==typeof e?e.split(c(r)).reduce((function(e,n,o){return 0===o?[n]:r.name in t?[].concat(s.default(e),[u(t[r.name],r.modifiers),n]):[].concat(s.default(e),[function(e){return u(e[r.name],r.modifiers)},n])}),[]):[e]})).reduce((function(e,t){return[].concat(s.default(e),s.default(t))}),[])}),[e]);return function(e){return d.reduce((function(t,r){return[].concat(s.default(t),"string"==typeof r?[r]:[r(e)])}),[]).join("")}},d=function(e){var t=arguments.length>1&&void 0!==arguments[1]?arguments[1]:{},r=void 0===e.code?void 0:l(e.code,t),n=void 0===e.message?void 0:l(e.message,t);function o(){var t=arguments.length>0&&void 0!==arguments[0]?arguments[0]:{},o=arguments.length>1?arguments[1]:void 0,s=void 0===o&&(t instanceof Error||void 0!==t.code&&"Exception"===t.code.slice(-9))?{cause:t,missingParameters:{}}:{cause:o,missingParameters:t},a=s.cause,i=s.missingParameters,u=void 0===n?new Error:new Error(n(i));return null!==a&&(u.cause=a),void 0!==r&&(u.code=r(i)),void 0!==e.status&&(u.status=e.status),u}return o};e.compile=d,Object.defineProperty(e,"__esModule",{value:!0})}(t,r(106),r(881),r(507))},881:e=>{"use strict";e.exports=(e,t)=>{if("string"!=typeof e)throw new TypeError("expected a string");return e.trim().replace(/([a-z])([A-Z])/g,"$1-$2").replace(/\\W/g,(e=>/[\xC0-\u017E]/.test(e)?e:"-")).replace(/^-+|-+$/g,"").replace(/-{2,}/g,(e=>t&&t.condense?"-":e)).toLowerCase()}},107:function(e,t){!function(e){"use strict";var t=function(e){return function(t){var r=e(t);return t.add(r),r}},r=function(e){return function(t,r){return e.set(t,r),r}},n=void 0===Number.MAX_SAFE_INTEGER?9007199254740991:Number.MAX_SAFE_INTEGER,o=536870912,s=2*o,a=function(e,t){return function(r){var a=t.get(r),i=void 0===a?r.size:an)throw new Error("Congratulations, you created a collection of unique numbers which uses all available integers!");for(;r.has(i);)i=Math.floor(Math.random()*n);return e(r,i)}},i=new WeakMap,u=r(i),c=a(u,i),l=t(c);e.addUniqueNumber=l,e.generateUniqueNumber=c,Object.defineProperty(e,"__esModule",{value:!0})}(t)},507:e=>{var t=function(e){var t,r,n=/\\w+/.exec(e);if(!n)return"an";var o=(r=n[0]).toLowerCase(),s=["honest","hour","hono"];for(t in s)if(0==o.indexOf(s[t]))return"an";if(1==o.length)return"aedhilmnorsx".indexOf(o)>=0?"an":"a";if(r.match(/(?!FJO|[HLMNS]Y.|RY[EO]|SQU|(F[LR]?|[HL]|MN?|N|RH?|S[CHKLMNPTVW]?|X(YL)?)[AEIOU])[FHLMNRSX][A-Z]/))return"an";var a=[/^e[uw]/,/^onc?e\\b/,/^uni([^nmd]|mo)/,/^u[bcfhjkqrst][aeiou]/];for(t=0;t=0?"an":"a":"aeiou".indexOf(o[0])>=0||o.match(/^y(b[lor]|cl[ea]|fere|gg|p[ios]|rou|tt)/)?"an":"a"};void 0!==e.exports?e.exports=t:window.indefiniteArticle=t},768:e=>{e.exports=function(e,t){(null==t||t>e.length)&&(t=e.length);for(var r=0,n=new Array(t);r{var n=r(768);e.exports=function(e){if(Array.isArray(e))return 
n(e)},e.exports.__esModule=!0,e.exports.default=e.exports},642:e=>{e.exports=function(e){if("undefined"!=typeof Symbol&&null!=e[Symbol.iterator]||null!=e["@@iterator"])return Array.from(e)},e.exports.__esModule=!0,e.exports.default=e.exports},344:e=>{e.exports=function(){throw new TypeError("Invalid attempt to spread non-iterable instance.\\nIn order to be iterable, non-array objects must have a [Symbol.iterator]() method.")},e.exports.__esModule=!0,e.exports.default=e.exports},106:(e,t,r)=>{var n=r(907),o=r(642),s=r(906),a=r(344);e.exports=function(e){return n(e)||o(e)||s(e)||a()},e.exports.__esModule=!0,e.exports.default=e.exports},906:(e,t,r)=>{var n=r(768);e.exports=function(e,t){if(e){if("string"==typeof e)return n(e,t);var r=Object.prototype.toString.call(e).slice(8,-1);return"Object"===r&&e.constructor&&(r=e.constructor.name),"Map"===r||"Set"===r?Array.from(e):"Arguments"===r||/^(?:Ui|I)nt(?:8|16|32)(?:Clamped)?Array$/.test(r)?n(e,t):void 0}},e.exports.__esModule=!0,e.exports.default=e.exports}},t={};function r(n){var o=t[n];if(void 0!==o)return o.exports;var s=t[n]={exports:{}};return e[n].call(s.exports,s,s.exports,r),s.exports}(()=>{"use strict";var e=r(775);const t=-32603,n=-32602,o=-32601,s=(0,e.compile)({message:'The requested method called "\${method}" is not supported.',status:o}),a=(0,e.compile)({message:'The handler of the method called "\${method}" returned no required result.',status:t}),i=(0,e.compile)({message:'The handler of the method called "\${method}" returned an unexpected result.',status:t}),u=(0,e.compile)({message:'The specified parameter called "portId" with the given value "\${portId}" does not identify a port connected to this worker.',status:n}),c=(e,t)=>async r=>{let{data:{id:n,method:o,params:u}}=r;const c=t[o];try{if(void 0===c)throw s({method:o});const t=void 0===u?c():c(u);if(void 0===t)throw a({method:o});const r=t instanceof Promise?await t:t;if(null===n){if(void 0!==r.result)throw i({method:o})}else{if(void 0===r.result)throw i({method:o});const{result:t,transferables:s=[]}=r;e.postMessage({id:n,result:t},s)}}catch(t){const{message:r,status:o=-32603}=t;e.postMessage({error:{code:o,message:r},id:n})}};var l=r(107);const d=new Map,f=(e,t,r)=>({...t,connect:r=>{let{port:n}=r;n.start();const o=e(n,t),s=(0,l.generateUniqueNumber)(d);return d.set(s,(()=>{o(),n.close(),d.delete(s)})),{result:s}},disconnect:e=>{let{portId:t}=e;const r=d.get(t);if(void 0===r)throw u({portId:t.toString()});return r(),{result:null}},isSupported:async()=>{if(await new Promise((e=>{const t=new ArrayBuffer(0),{port1:r,port2:n}=new MessageChannel;r.onmessage=t=>{let{data:r}=t;return e(null!==r)},n.postMessage(t,[t])}))){const e=r();return{result:e instanceof Promise?await e:e}}return{result:!1}}}),p=function(e,t){let r=arguments.length>2&&void 0!==arguments[2]?arguments[2]:()=>!0;const n=f(p,t,r),o=c(e,n);return e.addEventListener("message",o),()=>e.removeEventListener("message",o)},m=e=>e.reduce(((e,t)=>e+t.length),0),h=(e,t)=>{const r=[];let n=0;e:for(;nt){const o=n-t;r.forEach(((t,r)=>{const n=t.pop(),s=n.length-o;t.push(n.subarray(0,s)),e[r].unshift(n.subarray(s))}))}return r},v=new Map,g=(e=>(t,r,n)=>{const o=e.get(t);if(void 0===o){const o={channelDataArrays:n.map((e=>[e])),isComplete:!0,sampleRate:r};return e.set(t,o),o}return o.channelDataArrays.forEach(((e,t)=>e.push(n[t]))),o})(v),x=((e,t)=>(r,n,o,s)=>{const a=o>>3,i="subsequent"===n?0:44,u=r.length,c=e(r[0]),l=new ArrayBuffer(c*u*a+i),d=new 
DataView(l);return"subsequent"!==n&&t(d,o,u,"complete"===n?c:Number.POSITIVE_INFINITY,s),r.forEach(((e,t)=>{let r=i+t*a;e.forEach((e=>{const t=e.length;for(let n=0;n{const s=t>>3,a=Math.min(n*r*s,4294967251);e.setUint32(0,1380533830),e.setUint32(4,a+36,!0),e.setUint32(8,1463899717),e.setUint32(12,1718449184),e.setUint32(16,16,!0),e.setUint16(20,1,!0),e.setUint16(22,r,!0),e.setUint32(24,o,!0),e.setUint32(28,o*r*s,!0),e.setUint16(32,r*s,!0),e.setUint16(34,t,!0),e.setUint32(36,1684108385),e.setUint32(40,a,!0)})),w=new Map;p(self,{characterize:()=>({result:/^audio\\/wav$/}),encode:e=>{let{recordingId:t,timeslice:r}=e;const n=w.get(t);void 0!==n&&(w.delete(t),n.reject(new Error("Another request was made to initiate an encoding.")));const o=v.get(t);if(null!==r){if(void 0===o||m(o.channelDataArrays[0])*(1e3/o.sampleRate){w.set(t,{reject:n,resolve:e,timeslice:r})}));const e=h(o.channelDataArrays,Math.ceil(r*(o.sampleRate/1e3))),n=x(e,o.isComplete?"initial":"subsequent",16,o.sampleRate);return o.isComplete=!1,{result:n,transferables:n}}if(void 0!==o){const e=x(o.channelDataArrays,o.isComplete?"complete":"subsequent",16,o.sampleRate);return v.delete(t),{result:e,transferables:e}}return{result:[],transferables:[]}},record:e=>{let{recordingId:t,sampleRate:r,typedArrays:n}=e;const o=g(t,r,n),s=w.get(t);if(void 0!==s&&m(o.channelDataArrays[0])*(1e3/r)>=s.timeslice){const e=h(o.channelDataArrays,Math.ceil(s.timeslice*(r/1e3))),n=x(e,o.isComplete?"initial":"subsequent",16,r);o.isComplete=!1,w.delete(t),s.resolve({result:n,transferables:n})}return{result:null}}})})()})();`,d=new Blob([l],{type:"application/javascript; charset=utf-8"}),s=URL.createObjectURL(d),t=u(s),p=t.characterize,m=t.connect,h=t.disconnect,v=t.encode,g=t.isSupported,x=t.record;URL.revokeObjectURL(s);export{p as characterize,m as connect,h as disconnect,v as encode,g as isSupported,x as record}; -//# sourceMappingURL=module.d8037460.js.map diff --git a/spaces/Hila/RobustViT/robustness_dataset.py b/spaces/Hila/RobustViT/robustness_dataset.py deleted file mode 100644 index e067332e680e9707587dfc4ac509e2b9af5c17bd..0000000000000000000000000000000000000000 --- a/spaces/Hila/RobustViT/robustness_dataset.py +++ /dev/null @@ -1,66 +0,0 @@ -import json -from torch.utils import data -from torchvision.datasets import ImageFolder -import torch -import os -from PIL import Image -import numpy as np -import argparse -from tqdm import tqdm -from munkres import Munkres -import multiprocessing -from multiprocessing import Process, Manager -import collections -import torchvision.transforms as transforms -import torchvision.transforms.functional as TF -import random -import torchvision -import cv2 -from label_str_to_imagenet_classes import label_str_to_imagenet_classes - -torch.manual_seed(0) - -ImageItem = collections.namedtuple('ImageItem', ('image_name', 'tag')) -normalize = transforms.Normalize(mean=[0.5, 0.5, 0.5], - std=[0.5, 0.5, 0.5]) - -transform = transforms.Compose([ - transforms.Resize(256), - transforms.CenterCrop(224), - transforms.ToTensor(), - normalize, -]) - -class RobustnessDataset(ImageFolder): - def __init__(self, imagenet_path, imagenet_classes_path='imagenet_classes.json', isV2=False, isSI=False): - self._isV2 = isV2 - self._isSI = isSI - self._imagenet_path = imagenet_path - with open(imagenet_classes_path, 'r') as f: - self._imagenet_classes = json.load(f) - self._tag_list = [tag for tag in os.listdir(self._imagenet_path)] - self._all_images = [] - for tag in self._tag_list: - base_dir = os.path.join(self._imagenet_path, tag) - 
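As a quick sanity check of the evaluation transform defined in `robustness_dataset.py` above, the resize / center-crop / normalize chain turns any RGB image into a `(3, 224, 224)` tensor whose values land in `[-1, 1]`, since both mean and std are 0.5. A minimal standalone check, using a dummy image rather than an ImageNet sample:

```python
import torchvision.transforms as transforms
from PIL import Image

# Same eval transform as above: 256 resize, 224 center crop, 0.5/0.5 normalization.
transform = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.5, 0.5, 0.5], std=[0.5, 0.5, 0.5]),
])

img = Image.new("RGB", (640, 480), color=(255, 0, 0))  # dummy solid-red image
x = transform(img)
print(x.shape, float(x.min()), float(x.max()))  # torch.Size([3, 224, 224]) -1.0 1.0
```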
for i, file in enumerate(os.listdir(base_dir)): - self._all_images.append(ImageItem(file, tag)) - - - def __getitem__(self, item): - image_item = self._all_images[item] - image_path = os.path.join(self._imagenet_path, image_item.tag, image_item.image_name) - image = Image.open(image_path) - image = image.convert('RGB') - image = transform(image) - - if self._isV2: - class_name = int(image_item.tag) - elif self._isSI: - class_name = int(label_str_to_imagenet_classes[image_item.tag]) - else: - class_name = int(self._imagenet_classes[image_item.tag]) - - return image, class_name - - def __len__(self): - return len(self._all_images) \ No newline at end of file diff --git a/spaces/ICML2022/OFA/fairseq/examples/speech_to_text/data_utils.py b/spaces/ICML2022/OFA/fairseq/examples/speech_to_text/data_utils.py deleted file mode 100644 index 41afac0bf8f6d70e06bee1a34e220ab396ec247d..0000000000000000000000000000000000000000 --- a/spaces/ICML2022/OFA/fairseq/examples/speech_to_text/data_utils.py +++ /dev/null @@ -1,382 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import csv -from pathlib import Path -import zipfile -from functools import reduce -from multiprocessing import cpu_count -from typing import Any, Dict, List, Optional, Union -import io - -import numpy as np -import pandas as pd -import sentencepiece as sp -from fairseq.data.audio.audio_utils import ( - convert_waveform, _get_kaldi_fbank, _get_torchaudio_fbank, is_npy_data, - is_sf_audio_data -) -import torch -import soundfile as sf -from tqdm import tqdm - - -UNK_TOKEN, UNK_TOKEN_ID = "", 3 -BOS_TOKEN, BOS_TOKEN_ID = "", 0 -EOS_TOKEN, EOS_TOKEN_ID = "", 2 -PAD_TOKEN, PAD_TOKEN_ID = "", 1 - - -def gen_vocab( - input_path: Path, output_path_prefix: Path, model_type="bpe", - vocab_size=1000, special_symbols: Optional[List[str]] = None -): - # Train SentencePiece Model - arguments = [ - f"--input={input_path.as_posix()}", - f"--model_prefix={output_path_prefix.as_posix()}", - f"--model_type={model_type}", - f"--vocab_size={vocab_size}", - "--character_coverage=1.0", - f"--num_threads={cpu_count()}", - f"--unk_id={UNK_TOKEN_ID}", - f"--bos_id={BOS_TOKEN_ID}", - f"--eos_id={EOS_TOKEN_ID}", - f"--pad_id={PAD_TOKEN_ID}", - ] - if special_symbols is not None: - _special_symbols = ",".join(special_symbols) - arguments.append(f"--user_defined_symbols={_special_symbols}") - sp.SentencePieceTrainer.Train(" ".join(arguments)) - # Export fairseq dictionary - spm = sp.SentencePieceProcessor() - spm.Load(output_path_prefix.as_posix() + ".model") - vocab = {i: spm.IdToPiece(i) for i in range(spm.GetPieceSize())} - assert ( - vocab.get(UNK_TOKEN_ID) == UNK_TOKEN - and vocab.get(PAD_TOKEN_ID) == PAD_TOKEN - and vocab.get(BOS_TOKEN_ID) == BOS_TOKEN - and vocab.get(EOS_TOKEN_ID) == EOS_TOKEN - ) - vocab = { - i: s - for i, s in vocab.items() - if s not in {UNK_TOKEN, BOS_TOKEN, EOS_TOKEN, PAD_TOKEN} - } - with open(output_path_prefix.as_posix() + ".txt", "w") as f_out: - for _, s in sorted(vocab.items(), key=lambda x: x[0]): - f_out.write(f"{s} 1\n") - - -def extract_fbank_features( - waveform: torch.FloatTensor, - sample_rate: int, - output_path: Optional[Path] = None, - n_mel_bins: int = 80, - overwrite: bool = False, -): - if output_path is not None and output_path.is_file() and not overwrite: - return - - _waveform = convert_waveform(waveform, sample_rate, to_mono=True) - # Kaldi compliance: 16-bit signed integers - 
_waveform = _waveform * (2 ** 15) - _waveform = _waveform.numpy() - - features = _get_kaldi_fbank(_waveform, sample_rate, n_mel_bins) - if features is None: - features = _get_torchaudio_fbank(_waveform, sample_rate, n_mel_bins) - if features is None: - raise ImportError( - "Please install pyKaldi or torchaudio to enable fbank feature extraction" - ) - - if output_path is not None: - np.save(output_path.as_posix(), features) - return features - - -def create_zip(data_root: Path, zip_path: Path): - paths = list(data_root.glob("*.npy")) - with zipfile.ZipFile(zip_path, "w", zipfile.ZIP_STORED) as f: - for path in tqdm(paths): - f.write(path, arcname=path.name) - - -def get_zip_manifest( - zip_path: Path, zip_root: Optional[Path] = None, is_audio=False -): - _zip_path = Path.joinpath(zip_root or Path(""), zip_path) - with zipfile.ZipFile(_zip_path, mode="r") as f: - info = f.infolist() - paths, lengths = {}, {} - for i in tqdm(info): - utt_id = Path(i.filename).stem - offset, file_size = i.header_offset + 30 + len(i.filename), i.file_size - paths[utt_id] = f"{zip_path.as_posix()}:{offset}:{file_size}" - with open(_zip_path, "rb") as f: - f.seek(offset) - byte_data = f.read(file_size) - assert len(byte_data) > 1 - if is_audio: - assert is_sf_audio_data(byte_data), i - else: - assert is_npy_data(byte_data), i - byte_data_fp = io.BytesIO(byte_data) - if is_audio: - lengths[utt_id] = sf.info(byte_data_fp).frames - else: - lengths[utt_id] = np.load(byte_data_fp).shape[0] - return paths, lengths - - -def gen_config_yaml( - manifest_root: Path, - spm_filename: Optional[str] = None, - vocab_name: Optional[str] = None, - yaml_filename: str = "config.yaml", - specaugment_policy: Optional[str] = "lb", - prepend_tgt_lang_tag: bool = False, - sampling_alpha: Optional[float] = None, - input_channels: Optional[int] = 1, - input_feat_per_channel: Optional[int] = 80, - audio_root: str = "", - cmvn_type: str = "utterance", - gcmvn_path: Optional[Path] = None, - extra=None -): - manifest_root = manifest_root.absolute() - writer = S2TDataConfigWriter(manifest_root / yaml_filename) - assert spm_filename is not None or vocab_name is not None - vocab_name = spm_filename.replace(".model", ".txt") if vocab_name is None \ - else vocab_name - writer.set_vocab_filename(vocab_name) - if input_channels is not None: - writer.set_input_channels(input_channels) - if input_feat_per_channel is not None: - writer.set_input_feat_per_channel(input_feat_per_channel) - specaugment_setters = { - "lb": writer.set_specaugment_lb_policy, - "ld": writer.set_specaugment_ld_policy, - "sm": writer.set_specaugment_sm_policy, - "ss": writer.set_specaugment_ss_policy, - } - specaugment_setter = specaugment_setters.get(specaugment_policy, None) - if specaugment_setter is not None: - specaugment_setter() - if spm_filename is not None: - writer.set_bpe_tokenizer( - { - "bpe": "sentencepiece", - "sentencepiece_model": (manifest_root / spm_filename).as_posix(), - } - ) - if prepend_tgt_lang_tag: - writer.set_prepend_tgt_lang_tag(True) - if sampling_alpha is not None: - writer.set_sampling_alpha(sampling_alpha) - - if cmvn_type not in ["global", "utterance"]: - raise NotImplementedError - - if specaugment_policy is not None: - writer.set_feature_transforms( - "_train", [f"{cmvn_type}_cmvn", "specaugment"] - ) - writer.set_feature_transforms("*", [f"{cmvn_type}_cmvn"]) - - if cmvn_type == "global": - if gcmvn_path is None: - raise ValueError("Please provide path of global cmvn file.") - else: - writer.set_global_cmvn(gcmvn_path.as_posix()) - - if 
len(audio_root) > 0: - writer.set_audio_root(audio_root) - - if extra is not None: - writer.set_extra(extra) - writer.flush() - - -def load_df_from_tsv(path: Union[str, Path]) -> pd.DataFrame: - _path = path if isinstance(path, str) else path.as_posix() - return pd.read_csv( - _path, - sep="\t", - header=0, - encoding="utf-8", - escapechar="\\", - quoting=csv.QUOTE_NONE, - na_filter=False, - ) - - -def save_df_to_tsv(dataframe, path: Union[str, Path]): - _path = path if isinstance(path, str) else path.as_posix() - dataframe.to_csv( - _path, - sep="\t", - header=True, - index=False, - encoding="utf-8", - escapechar="\\", - quoting=csv.QUOTE_NONE, - ) - - -def load_tsv_to_dicts(path: Union[str, Path]) -> List[dict]: - with open(path, "r") as f: - reader = csv.DictReader( - f, - delimiter="\t", - quotechar=None, - doublequote=False, - lineterminator="\n", - quoting=csv.QUOTE_NONE, - ) - rows = [dict(e) for e in reader] - return rows - - -def filter_manifest_df( - df, is_train_split=False, extra_filters=None, min_n_frames=5, max_n_frames=3000 -): - filters = { - "no speech": df["audio"] == "", - f"short speech (<{min_n_frames} frames)": df["n_frames"] < min_n_frames, - "empty sentence": df["tgt_text"] == "", - } - if is_train_split: - filters[f"long speech (>{max_n_frames} frames)"] = df["n_frames"] > max_n_frames - if extra_filters is not None: - filters.update(extra_filters) - invalid = reduce(lambda x, y: x | y, filters.values()) - valid = ~invalid - print( - "| " - + ", ".join(f"{n}: {f.sum()}" for n, f in filters.items()) - + f", total {invalid.sum()} filtered, {valid.sum()} remained." - ) - return df[valid] - - -def cal_gcmvn_stats(features_list): - features = np.concatenate(features_list) - square_sums = (features ** 2).sum(axis=0) - mean = features.mean(axis=0) - features = np.subtract(features, mean) - var = square_sums / features.shape[0] - mean ** 2 - std = np.sqrt(np.maximum(var, 1e-8)) - return {"mean": mean.astype("float32"), "std": std.astype("float32")} - - -class S2TDataConfigWriter(object): - DEFAULT_VOCAB_FILENAME = "dict.txt" - DEFAULT_INPUT_FEAT_PER_CHANNEL = 80 - DEFAULT_INPUT_CHANNELS = 1 - - def __init__(self, yaml_path: Path): - try: - import yaml - except ImportError: - print("Please install PyYAML for S2T data config YAML files") - self.yaml = yaml - self.yaml_path = yaml_path - self.config = {} - - def flush(self): - with open(self.yaml_path, "w") as f: - self.yaml.dump(self.config, f) - - def set_audio_root(self, audio_root=""): - self.config["audio_root"] = audio_root - - def set_vocab_filename(self, vocab_filename: str = "dict.txt"): - self.config["vocab_filename"] = vocab_filename - - def set_specaugment( - self, - time_wrap_w: int, - freq_mask_n: int, - freq_mask_f: int, - time_mask_n: int, - time_mask_t: int, - time_mask_p: float, - ): - self.config["specaugment"] = { - "time_wrap_W": time_wrap_w, - "freq_mask_N": freq_mask_n, - "freq_mask_F": freq_mask_f, - "time_mask_N": time_mask_n, - "time_mask_T": time_mask_t, - "time_mask_p": time_mask_p, - } - - def set_specaugment_lb_policy(self): - self.set_specaugment( - time_wrap_w=0, - freq_mask_n=1, - freq_mask_f=27, - time_mask_n=1, - time_mask_t=100, - time_mask_p=1.0, - ) - - def set_specaugment_ld_policy(self): - self.set_specaugment( - time_wrap_w=0, - freq_mask_n=2, - freq_mask_f=27, - time_mask_n=2, - time_mask_t=100, - time_mask_p=1.0, - ) - - def set_specaugment_sm_policy(self): - self.set_specaugment( - time_wrap_w=0, - freq_mask_n=2, - freq_mask_f=15, - time_mask_n=2, - time_mask_t=70, - 
time_mask_p=0.2, - ) - - def set_specaugment_ss_policy(self): - self.set_specaugment( - time_wrap_w=0, - freq_mask_n=2, - freq_mask_f=27, - time_mask_n=2, - time_mask_t=70, - time_mask_p=0.2, - ) - - def set_input_channels(self, input_channels: int = 1): - self.config["input_channels"] = input_channels - - def set_input_feat_per_channel(self, input_feat_per_channel: int = 80): - self.config["input_feat_per_channel"] = input_feat_per_channel - - def set_bpe_tokenizer(self, bpe_tokenizer: Dict[str, Any]): - self.config["bpe_tokenizer"] = bpe_tokenizer - - def set_global_cmvn(self, stats_npz_path: str): - self.config["global_cmvn"] = {"stats_npz_path": stats_npz_path} - - def set_feature_transforms(self, split: str, transforms: List[str]): - if "transforms" not in self.config: - self.config["transforms"] = {} - self.config["transforms"][split] = transforms - - def set_prepend_tgt_lang_tag(self, flag: bool = True): - self.config["prepend_tgt_lang_tag"] = flag - - def set_sampling_alpha(self, sampling_alpha: float = 1.0): - self.config["sampling_alpha"] = sampling_alpha - - def set_extra(self, data): - self.config.update(data) diff --git a/spaces/Illumotion/Koboldcpp/include/CL/cl_ext_intel.h b/spaces/Illumotion/Koboldcpp/include/CL/cl_ext_intel.h deleted file mode 100644 index a7ae87a3400ffb0d3f3411dc0f4a3a330fcccf70..0000000000000000000000000000000000000000 --- a/spaces/Illumotion/Koboldcpp/include/CL/cl_ext_intel.h +++ /dev/null @@ -1,19 +0,0 @@ -/******************************************************************************* - * Copyright (c) 2008-2020 The Khronos Group Inc. - * - * Licensed under the Apache License, Version 2.0 (the "License"); - * you may not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - * - ******************************************************************************/ - -#include -#pragma message("The Intel extensions have been moved into cl_ext.h. 
Please include cl_ext.h directly.") diff --git a/spaces/JohnSmith9982/ChuanhuChatGPT/assets/custom.css b/spaces/JohnSmith9982/ChuanhuChatGPT/assets/custom.css deleted file mode 100644 index 22108488886cfc8d7772214dd9b83727b3fca6a3..0000000000000000000000000000000000000000 --- a/spaces/JohnSmith9982/ChuanhuChatGPT/assets/custom.css +++ /dev/null @@ -1,468 +0,0 @@ -:root { - --chatbot-color-light: #000000; - --chatbot-color-dark: #FFFFFF; - --chatbot-background-color-light: #F3F3F3; - --chatbot-background-color-dark: #121111; - --message-user-background-color-light: #95EC69; - --message-user-background-color-dark: #26B561; - --message-bot-background-color-light: #FFFFFF; - --message-bot-background-color-dark: #2C2C2C; -} - -#app_title { - font-weight: var(--prose-header-text-weight); - font-size: var(--text-xxl); - line-height: 1.3; - text-align: left; - margin-top: 6px; - white-space: nowrap; -} -#description { - text-align: center; - margin: 32px 0 4px 0; -} - -/* gradio的页脚信息 */ -footer { - /* display: none !important; */ - margin-top: .2em !important; - font-size: 85%; -} -#footer { - text-align: center; -} -#footer div { - display: inline-block; -} -#footer .versions{ - font-size: 85%; - opacity: 0.60; -} - -#float_display { - position: absolute; - max-height: 30px; -} -/* user_info */ -#user_info { - white-space: nowrap; - position: absolute; left: 8em; top: .2em; - z-index: var(--layer-2); - box-shadow: var(--block-shadow); - border: none; border-radius: var(--block-label-radius); - background: var(--color-accent); - padding: var(--block-label-padding); - font-size: var(--block-label-text-size); line-height: var(--line-sm); - width: auto; min-height: 30px!important; - opacity: 1; - transition: opacity 0.3s ease-in-out; -} -#user_info .wrap { - opacity: 0; -} -#user_info p { - color: white; - font-weight: var(--block-label-text-weight); -} -#user_info.hideK { - opacity: 0; - transition: opacity 1s ease-in-out; -} - -/* status_display */ -#status_display { - display: flex; - min-height: 2em; - align-items: flex-end; - justify-content: flex-end; -} -#status_display p { - font-size: .85em; - font-family: ui-monospace, "SF Mono", "SFMono-Regular", "Menlo", "Consolas", "Liberation Mono", "Microsoft Yahei UI", "Microsoft Yahei", monospace; - /* Windows下中文的monospace会fallback为新宋体,实在太丑,这里折中使用微软雅黑 */ - color: var(--body-text-color-subdued); -} - -#status_display { - transition: all 0.6s; -} -#chuanhu_chatbot { - transition: height 0.3s ease; -} - -/* usage_display */ -.insert_block { - position: relative; - margin: 0; - padding: .5em 1em; - box-shadow: var(--block-shadow); - border-width: var(--block-border-width); - border-color: var(--block-border-color); - border-radius: var(--block-radius); - background: var(--block-background-fill); - width: 100%; - line-height: var(--line-sm); - min-height: 2em; -} -#usage_display p, #usage_display span { - margin: 0; - font-size: .85em; - color: var(--body-text-color-subdued); -} -.progress-bar { - background-color: var(--input-background-fill);; - margin: .5em 0 !important; - height: 20px; - border-radius: 10px; - overflow: hidden; -} -.progress { - background-color: var(--block-title-background-fill); - height: 100%; - border-radius: 10px; - text-align: right; - transition: width 0.5s ease-in-out; -} -.progress-text { - /* color: white; */ - color: var(--color-accent) !important; - font-size: 1em !important; - font-weight: bold; - padding-right: 10px; - line-height: 20px; -} - -.apSwitch { - top: 2px; - display: inline-block; - height: 24px; - position: 
relative; - width: 48px; - border-radius: 12px; -} -.apSwitch input { - display: none !important; -} -.apSlider { - background-color: var(--neutral-200); - bottom: 0; - cursor: pointer; - left: 0; - position: absolute; - right: 0; - top: 0; - transition: .4s; - font-size: 18px; - border-radius: 12px; -} -.apSlider::before { - bottom: -1.5px; - left: 1px; - position: absolute; - transition: .4s; - content: "🌞"; -} -input:checked + .apSlider { - background-color: var(--primary-600); -} -input:checked + .apSlider::before { - transform: translateX(23px); - content:"🌚"; -} - -/* Override Slider Styles (for webkit browsers like Safari and Chrome) - * 好希望这份提案能早日实现 https://github.com/w3c/csswg-drafts/issues/4410 - * 进度滑块在各个平台还是太不统一了 - */ -input[type="range"] { - -webkit-appearance: none; - height: 4px; - background: var(--input-background-fill); - border-radius: 5px; - background-image: linear-gradient(var(--primary-500),var(--primary-500)); - background-size: 0% 100%; - background-repeat: no-repeat; -} -input[type="range"]::-webkit-slider-thumb { - -webkit-appearance: none; - height: 20px; - width: 20px; - border-radius: 50%; - border: solid 0.5px #ddd; - background-color: white; - cursor: ew-resize; - box-shadow: var(--input-shadow); - transition: background-color .1s ease; -} -input[type="range"]::-webkit-slider-thumb:hover { - background: var(--neutral-50); -} -input[type=range]::-webkit-slider-runnable-track { - -webkit-appearance: none; - box-shadow: none; - border: none; - background: transparent; -} - -#submit_btn, #cancel_btn { - height: 42px !important; -} -#submit_btn::before { - content: url("data:image/svg+xml, %3Csvg width='21px' height='20px' viewBox='0 0 21 20' version='1.1' xmlns='http://www.w3.org/2000/svg' xmlns:xlink='http://www.w3.org/1999/xlink'%3E %3Cg id='page' stroke='none' stroke-width='1' fill='none' fill-rule='evenodd'%3E %3Cg id='send' transform='translate(0.435849, 0.088463)' fill='%23FFFFFF' fill-rule='nonzero'%3E %3Cpath d='M0.579148261,0.0428666046 C0.301105539,-0.0961547561 -0.036517765,0.122307382 0.0032026237,0.420210298 L1.4927172,18.1553639 C1.5125774,18.4334066 1.79062012,18.5922882 2.04880264,18.4929872 L8.24518329,15.8913017 L11.6412765,19.7441794 C11.8597387,19.9825018 12.2370824,19.8832008 12.3165231,19.5852979 L13.9450591,13.4882182 L19.7839562,11.0255541 C20.0619989,10.8865327 20.0818591,10.4694687 19.7839562,10.3105871 L0.579148261,0.0428666046 Z M11.6138902,17.0883151 L9.85385903,14.7195502 L0.718169621,0.618812241 L12.69945,12.9346347 L11.6138902,17.0883151 Z' id='shape'%3E%3C/path%3E %3C/g%3E %3C/g%3E %3C/svg%3E"); - height: 21px; -} -#cancel_btn::before { - content: url("data:image/svg+xml,%3Csvg width='21px' height='21px' viewBox='0 0 21 21' version='1.1' xmlns='http://www.w3.org/2000/svg' xmlns:xlink='http://www.w3.org/1999/xlink'%3E %3Cg id='pg' stroke='none' stroke-width='1' fill='none' fill-rule='evenodd'%3E %3Cpath d='M10.2072007,20.088463 C11.5727865,20.088463 12.8594566,19.8259823 14.067211,19.3010209 C15.2749653,18.7760595 16.3386126,18.0538087 17.2581528,17.1342685 C18.177693,16.2147282 18.8982283,15.1527965 19.4197586,13.9484733 C19.9412889,12.7441501 20.202054,11.4557644 20.202054,10.0833163 C20.202054,8.71773046 19.9395733,7.43106036 19.4146119,6.22330603 C18.8896505,5.01555169 18.1673997,3.95018885 17.2478595,3.0272175 C16.3283192,2.10424615 15.2646719,1.3837109 14.0569176,0.865611739 C12.8491633,0.34751258 11.5624932,0.088463 10.1969073,0.088463 C8.83132146,0.088463 7.54636692,0.34751258 6.34204371,0.865611739 C5.1377205,1.3837109 
4.07407321,2.10424615 3.15110186,3.0272175 C2.22813051,3.95018885 1.5058797,5.01555169 0.984349419,6.22330603 C0.46281914,7.43106036 0.202054,8.71773046 0.202054,10.0833163 C0.202054,11.4557644 0.4645347,12.7441501 0.9894961,13.9484733 C1.5144575,15.1527965 2.23670831,16.2147282 3.15624854,17.1342685 C4.07578877,18.0538087 5.1377205,18.7760595 6.34204371,19.3010209 C7.54636692,19.8259823 8.83475258,20.088463 10.2072007,20.088463 Z M10.2072007,18.2562448 C9.07493099,18.2562448 8.01471483,18.0452309 7.0265522,17.6232031 C6.03838956,17.2011753 5.17031614,16.6161693 4.42233192,15.8681851 C3.6743477,15.1202009 3.09105726,14.2521274 2.67246059,13.2639648 C2.25386392,12.2758022 2.04456558,11.215586 2.04456558,10.0833163 C2.04456558,8.95104663 2.25386392,7.89083047 2.67246059,6.90266784 C3.09105726,5.9145052 3.6743477,5.04643178 4.42233192,4.29844756 C5.17031614,3.55046334 6.036674,2.9671729 7.02140552,2.54857623 C8.00613703,2.12997956 9.06463763,1.92068122 10.1969073,1.92068122 C11.329177,1.92068122 12.3911087,2.12997956 13.3827025,2.54857623 C14.3742962,2.9671729 15.2440852,3.55046334 15.9920694,4.29844756 C16.7400537,5.04643178 17.3233441,5.9145052 17.7419408,6.90266784 C18.1605374,7.89083047 18.3698358,8.95104663 18.3698358,10.0833163 C18.3698358,11.215586 18.1605374,12.2758022 17.7419408,13.2639648 C17.3233441,14.2521274 16.7400537,15.1202009 15.9920694,15.8681851 C15.2440852,16.6161693 14.3760118,17.2011753 13.3878492,17.6232031 C12.3996865,18.0452309 11.3394704,18.2562448 10.2072007,18.2562448 Z M7.65444721,13.6242324 L12.7496608,13.6242324 C13.0584616,13.6242324 13.3003556,13.5384544 13.4753427,13.3668984 C13.6503299,13.1953424 13.7378234,12.9585951 13.7378234,12.6566565 L13.7378234,7.49968276 C13.7378234,7.19774418 13.6503299,6.96099688 13.4753427,6.78944087 C13.3003556,6.61788486 13.0584616,6.53210685 12.7496608,6.53210685 L7.65444721,6.53210685 C7.33878414,6.53210685 7.09345904,6.61788486 6.91847191,6.78944087 C6.74348478,6.96099688 6.65599121,7.19774418 6.65599121,7.49968276 L6.65599121,12.6566565 C6.65599121,12.9585951 6.74348478,13.1953424 6.91847191,13.3668984 C7.09345904,13.5384544 7.33878414,13.6242324 7.65444721,13.6242324 Z' id='shape' fill='%23FF3B30' fill-rule='nonzero'%3E%3C/path%3E %3C/g%3E %3C/svg%3E"); - height: 21px; -} -/* list */ -ol:not(.options), ul:not(.options) { - padding-inline-start: 2em !important; -} - -/* 亮色(默认) */ -#chuanhu_chatbot { - background-color: var(--chatbot-background-color-light) !important; - color: var(--chatbot-color-light) !important; -} -[data-testid = "bot"] { - background-color: var(--message-bot-background-color-light) !important; -} -[data-testid = "user"] { - background-color: var(--message-user-background-color-light) !important; -} -/* 暗色 */ -.dark #chuanhu_chatbot { - background-color: var(--chatbot-background-color-dark) !important; - color: var(--chatbot-color-dark) !important; -} -.dark [data-testid = "bot"] { - background-color: var(--message-bot-background-color-dark) !important; -} -.dark [data-testid = "user"] { - background-color: var(--message-user-background-color-dark) !important; -} - -/* 屏幕宽度大于等于500px的设备 */ -/* update on 2023.4.8: 高度的细致调整已写入JavaScript */ -@media screen and (min-width: 500px) { - #chuanhu_chatbot { - height: calc(100vh - 200px); - } - #chuanhu_chatbot .wrap { - max-height: calc(100vh - 200px - var(--line-sm)*1rem - 2*var(--block-label-margin) ); - } -} -/* 屏幕宽度小于500px的设备 */ -@media screen and (max-width: 499px) { - #chuanhu_chatbot { - height: calc(100vh - 140px); - } - #chuanhu_chatbot .wrap { - 
max-height: calc(100vh - 140px - var(--line-sm)*1rem - 2*var(--block-label-margin) ); - } - [data-testid = "bot"] { - max-width: 95% !important; - } - #app_title h1{ - letter-spacing: -1px; font-size: 22px; - } -} -#chuanhu_chatbot .wrap { - overflow-x: hidden; -} -/* 对话气泡 */ -.message { - border-radius: var(--radius-xl) !important; - border: none; - padding: var(--spacing-xl) !important; - font-size: var(--text-md) !important; - line-height: var(--line-md) !important; - min-height: calc(var(--text-md)*var(--line-md) + 2*var(--spacing-xl)); - min-width: calc(var(--text-md)*var(--line-md) + 2*var(--spacing-xl)); -} -[data-testid = "bot"] { - max-width: 85%; - border-bottom-left-radius: 0 !important; -} -[data-testid = "user"] { - max-width: 85%; - width: auto !important; - border-bottom-right-radius: 0 !important; -} - -.message.user p { - white-space: pre-wrap; -} -.message .user-message { - display: block; - padding: 0 !important; - white-space: pre-wrap; -} - -.message .md-message p { - margin-top: 0.6em !important; - margin-bottom: 0.6em !important; -} -.message .md-message p:first-child { margin-top: 0 !important; } -.message .md-message p:last-of-type { margin-bottom: 0 !important; } - -.message .md-message { - display: block; - padding: 0 !important; -} -.message .raw-message p { - margin:0 !important; -} -.message .raw-message { - display: block; - padding: 0 !important; - white-space: pre-wrap; -} -.raw-message.hideM, .md-message.hideM { - display: none; -} - -/* custom buttons */ -.chuanhu-btn { - border-radius: 5px; - /* background-color: #E6E6E6 !important; */ - color: rgba(120, 120, 120, 0.64) !important; - padding: 4px !important; - position: absolute; - right: -22px; - cursor: pointer !important; - transition: color .2s ease, background-color .2s ease; -} -.chuanhu-btn:hover { - background-color: rgba(167, 167, 167, 0.25) !important; - color: unset !important; -} -.chuanhu-btn:active { - background-color: rgba(167, 167, 167, 0.5) !important; -} -.chuanhu-btn:focus { - outline: none; -} -.copy-bot-btn { - /* top: 18px; */ - bottom: 0; -} -.toggle-md-btn { - /* top: 0; */ - bottom: 20px; -} -.copy-code-btn { - position: relative; - float: right; - font-size: 1em; - cursor: pointer; -} - -.message-wrap>div img{ - border-radius: 10px !important; -} - -/* history message */ -.wrap>.history-message { - padding: 10px !important; -} -.history-message { - /* padding: 0 !important; */ - opacity: 80%; - display: flex; - flex-direction: column; -} -.history-message>.history-message { - padding: 0 !important; -} -.history-message>.message-wrap { - padding: 0 !important; - margin-bottom: 16px; -} -.history-message>.message { - margin-bottom: 16px; -} -.wrap>.history-message::after { - content: ""; - display: block; - height: 2px; - background-color: var(--body-text-color-subdued); - margin-bottom: 10px; - margin-top: -10px; - clear: both; -} -.wrap>.history-message>:last-child::after { - content: "仅供查看"; - display: block; - text-align: center; - color: var(--body-text-color-subdued); - font-size: 0.8em; -} - -/* 表格 */ -table { - margin: 1em 0; - border-collapse: collapse; - empty-cells: show; -} -td,th { - border: 1.2px solid var(--border-color-primary) !important; - padding: 0.2em; -} -thead { - background-color: rgba(175,184,193,0.2); -} -thead th { - padding: .5em .2em; -} -/* 行内代码 */ -.message :not(pre) code { - display: inline; - white-space: break-spaces; - font-family: var(--font-mono); - border-radius: 6px; - margin: 0 2px 0 2px; - padding: .2em .4em .1em .4em; - background-color: 
rgba(175,184,193,0.2); -} -/* 代码块 */ -.message pre, -.message pre[class*=language-] { - color: #fff; - overflow-x: auto; - overflow-y: hidden; - margin: .8em 1em 1em 0em !important; - padding: var(--spacing-xl) 1.2em !important; - border-radius: var(--radius-lg) !important; -} -.message pre code, -.message pre code[class*=language-] { - color: #fff; - padding: 0; - margin: 0; - background-color: unset; - text-shadow: none; - font-family: var(--font-mono); -} -/* 覆盖 gradio 丑陋的复制按钮样式 */ -pre button[title="copy"] { - border-radius: 5px; - transition: background-color .2s ease; -} -pre button[title="copy"]:hover { - background-color: #333232; -} -pre button .check { - color: #fff !important; - background: var(--neutral-950) !important; -} - -/* 覆盖prism.css */ -.language-css .token.string, -.style .token.string, -.token.entity, -.token.operator, -.token.url { - background: none !important; -} diff --git a/spaces/JohnSmith9982/ChuanhuChatGPT/modules/models/Google_PaLM.py b/spaces/JohnSmith9982/ChuanhuChatGPT/modules/models/Google_PaLM.py deleted file mode 100644 index 79ca042e228b25546600e4258a0b75790e25bb52..0000000000000000000000000000000000000000 --- a/spaces/JohnSmith9982/ChuanhuChatGPT/modules/models/Google_PaLM.py +++ /dev/null @@ -1,26 +0,0 @@ -from .base_model import BaseLLMModel -import google.generativeai as palm - -class Google_PaLM_Client(BaseLLMModel): - def __init__(self, model_name, api_key, user_name="") -> None: - super().__init__(model_name=model_name, user=user_name) - self.api_key = api_key - - def _get_palm_style_input(self): - new_history = [] - for item in self.history: - if item["role"] == "user": - new_history.append({'author': '1', 'content': item["content"]}) - else: - new_history.append({'author': '0', 'content': item["content"]}) - return new_history - - def get_answer_at_once(self): - palm.configure(api_key=self.api_key) - messages = self._get_palm_style_input() - response = palm.chat(context=self.system_prompt, messages=messages, temperature=self.temperature, top_p=self.top_p) - if response.last is not None: - return response.last, len(response.last) - else: - reasons = '\n\n'.join(reason['reason'].name for reason in response.filters) - return "由于下面的原因,Google 拒绝返回 PaLM 的回答:\n\n" + reasons, 0 \ No newline at end of file diff --git a/spaces/KonradSzafer/HF-QA-Demo/tests/discord_bot/__init__.py b/spaces/KonradSzafer/HF-QA-Demo/tests/discord_bot/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/KyanChen/RSPrompter/mmdet/engine/runner/__init__.py b/spaces/KyanChen/RSPrompter/mmdet/engine/runner/__init__.py deleted file mode 100644 index e8bcce4448e48e2d64354ba6770f9f426fb3d869..0000000000000000000000000000000000000000 --- a/spaces/KyanChen/RSPrompter/mmdet/engine/runner/__init__.py +++ /dev/null @@ -1,4 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from .loops import TeacherStudentValLoop - -__all__ = ['TeacherStudentValLoop'] diff --git a/spaces/KyanChen/RSPrompter/mmdet/models/detectors/trident_faster_rcnn.py b/spaces/KyanChen/RSPrompter/mmdet/models/detectors/trident_faster_rcnn.py deleted file mode 100644 index 4244925beaebea820f836b41ab5463f5f499f4d0..0000000000000000000000000000000000000000 --- a/spaces/KyanChen/RSPrompter/mmdet/models/detectors/trident_faster_rcnn.py +++ /dev/null @@ -1,81 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-from torch import Tensor - -from mmdet.registry import MODELS -from mmdet.structures import SampleList -from mmdet.utils import ConfigType, OptConfigType, OptMultiConfig -from .faster_rcnn import FasterRCNN - - -@MODELS.register_module() -class TridentFasterRCNN(FasterRCNN): - """Implementation of `TridentNet `_""" - - def __init__(self, - backbone: ConfigType, - rpn_head: ConfigType, - roi_head: ConfigType, - train_cfg: ConfigType, - test_cfg: ConfigType, - neck: OptConfigType = None, - data_preprocessor: OptConfigType = None, - init_cfg: OptMultiConfig = None) -> None: - - super().__init__( - backbone=backbone, - neck=neck, - rpn_head=rpn_head, - roi_head=roi_head, - train_cfg=train_cfg, - test_cfg=test_cfg, - data_preprocessor=data_preprocessor, - init_cfg=init_cfg) - assert self.backbone.num_branch == self.roi_head.num_branch - assert self.backbone.test_branch_idx == self.roi_head.test_branch_idx - self.num_branch = self.backbone.num_branch - self.test_branch_idx = self.backbone.test_branch_idx - - def _forward(self, batch_inputs: Tensor, - batch_data_samples: SampleList) -> tuple: - """copy the ``batch_data_samples`` to fit multi-branch.""" - num_branch = self.num_branch \ - if self.training or self.test_branch_idx == -1 else 1 - trident_data_samples = batch_data_samples * num_branch - return super()._forward( - batch_inputs=batch_inputs, batch_data_samples=trident_data_samples) - - def loss(self, batch_inputs: Tensor, - batch_data_samples: SampleList) -> dict: - """copy the ``batch_data_samples`` to fit multi-branch.""" - num_branch = self.num_branch \ - if self.training or self.test_branch_idx == -1 else 1 - trident_data_samples = batch_data_samples * num_branch - return super().loss( - batch_inputs=batch_inputs, batch_data_samples=trident_data_samples) - - def predict(self, - batch_inputs: Tensor, - batch_data_samples: SampleList, - rescale: bool = True) -> SampleList: - """copy the ``batch_data_samples`` to fit multi-branch.""" - num_branch = self.num_branch \ - if self.training or self.test_branch_idx == -1 else 1 - trident_data_samples = batch_data_samples * num_branch - return super().predict( - batch_inputs=batch_inputs, - batch_data_samples=trident_data_samples, - rescale=rescale) - - # TODO need to refactor - def aug_test(self, imgs, img_metas, rescale=False): - """Test with augmentations. - - If rescale is False, then returned bboxes and masks will fit the scale - of imgs[0]. - """ - x = self.extract_feats(imgs) - num_branch = (self.num_branch if self.test_branch_idx == -1 else 1) - trident_img_metas = [img_metas * num_branch for img_metas in img_metas] - proposal_list = self.rpn_head.aug_test_rpn(x, trident_img_metas) - return self.roi_head.aug_test( - x, proposal_list, img_metas, rescale=rescale) diff --git a/spaces/KyanChen/RSPrompter/mmpl/evaluation/metrics/builder.py b/spaces/KyanChen/RSPrompter/mmpl/evaluation/metrics/builder.py deleted file mode 100644 index bd55df759561b73656a71941e67f9c033d900dd7..0000000000000000000000000000000000000000 --- a/spaces/KyanChen/RSPrompter/mmpl/evaluation/metrics/builder.py +++ /dev/null @@ -1,34 +0,0 @@ -import copy -import inspect -from typing import List, Union - -import torch -import torch.nn as nn -import lightning -import torchmetrics -import torchmetrics.detection - -from mmengine.config import Config, ConfigDict -from mmpl.registry import METRICS - - -def register_pl_metrics() -> List[str]: - """Register loggers in ``lightning.pytorch.loggers`` to the ``LOGGERS`` registry. 
- - Returns: - List[str]: A list of registered optimizers' name. - """ - pl_metrics = [] - for modules in [torchmetrics, torchmetrics.detection]: - for module_name in dir(modules): - if module_name.startswith('__'): - continue - _metric = getattr(modules, module_name) - if inspect.isclass(_metric): - METRICS.register_module(module=_metric) - pl_metrics.append(module_name) - return pl_metrics - - -PL_METRICS = register_pl_metrics() - diff --git a/spaces/KyanChen/RSPrompter/mmpl/utils/typing_utils.py b/spaces/KyanChen/RSPrompter/mmpl/utils/typing_utils.py deleted file mode 100644 index 6caf6de53274594e139dbe7c1973c747229bf010..0000000000000000000000000000000000000000 --- a/spaces/KyanChen/RSPrompter/mmpl/utils/typing_utils.py +++ /dev/null @@ -1,22 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -"""Collecting some commonly used type hint in mmdetection.""" -from typing import List, Optional, Sequence, Tuple, Union - -from mmengine.config import ConfigDict -from mmengine.structures import InstanceData, PixelData - -# TODO: Need to avoid circular import with assigner and sampler -# Type hint of config data -ConfigType = Union[ConfigDict, dict] -OptConfigType = Optional[ConfigType] -# Type hint of one or more config data -MultiConfig = Union[ConfigType, List[ConfigType]] -OptMultiConfig = Optional[MultiConfig] - -InstanceList = List[InstanceData] -OptInstanceList = Optional[InstanceList] - -PixelList = List[PixelData] -OptPixelList = Optional[PixelList] - -RangeType = Sequence[Tuple[int, int]] diff --git a/spaces/LabAlproITS/CyberDAS-FE/main.py b/spaces/LabAlproITS/CyberDAS-FE/main.py deleted file mode 100644 index a4005077331080ef19a0ac5118f31d8b322bff5d..0000000000000000000000000000000000000000 --- a/spaces/LabAlproITS/CyberDAS-FE/main.py +++ /dev/null @@ -1,40 +0,0 @@ -#!/usr/bin/env python -# encoding: utf-8 - -from fastapi import FastAPI, Form, Depends, Request -from fastapi.templating import Jinja2Templates -from pydantic import BaseModel -import pickle -import json - -app = FastAPI() - -# Menentukan direktori templates -templates = Jinja2Templates(directory="templates") - -class Msg(BaseModel): - msg: str - - -class Req(BaseModel): - age: int - sex: int - smoker: int - bmi: float - children: int - region: int - - -@app.get("/welcomeMessage") -async def welcome(): - return {"message": "Hello World. 
Welcome to FastAPI!"} - -@app.get("/") -async def root(request: Request): - return templates.TemplateResponse( - "index.html", - { - "request": request, - "insurance_cost": 0, - } - ) diff --git a/spaces/Laihiujin/OneFormer/README.md b/spaces/Laihiujin/OneFormer/README.md deleted file mode 100644 index 0adcc679d28eb3ec75ab7b60ed753f6e17795106..0000000000000000000000000000000000000000 --- a/spaces/Laihiujin/OneFormer/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: OneFormer -emoji: 🎗️ -colorFrom: red -colorTo: blue -sdk: docker -app_port: 7860 -pinned: false -license: mit -duplicated_from: shi-labs/OneFormer ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/Makiing/coolb-in-gtest/cloudflare/worker.js b/spaces/Makiing/coolb-in-gtest/cloudflare/worker.js deleted file mode 100644 index e0debd750615f1329b2c72fbce73e1b9291f7137..0000000000000000000000000000000000000000 --- a/spaces/Makiing/coolb-in-gtest/cloudflare/worker.js +++ /dev/null @@ -1,18 +0,0 @@ -const TRAGET_HOST='hf4all-bingo.hf.space' // 请将此域名改成你自己的,域名信息在设置》站点域名查看。 - -export default { - async fetch(request) { - const uri = new URL(request.url); - if (uri.protocol === 'http:') { - uri.protocol = 'https:'; - return new Response('', { - status: 301, - headers: { - location: uri.toString(), - }, - }) - } - uri.host = TRAGET_HOST - return fetch(new Request(uri.toString(), request)); - }, -}; diff --git a/spaces/Makiing/coolb-in-gtest/src/app/layout.tsx b/spaces/Makiing/coolb-in-gtest/src/app/layout.tsx deleted file mode 100644 index 8b5122759987177b8dc4e4356d1d06cea25c15ea..0000000000000000000000000000000000000000 --- a/spaces/Makiing/coolb-in-gtest/src/app/layout.tsx +++ /dev/null @@ -1,47 +0,0 @@ -import { Metadata } from 'next' -import { Toaster } from 'react-hot-toast' -import { TailwindIndicator } from '@/components/tailwind-indicator' -import { Providers } from '@/components/providers' -import { Header } from '@/components/header' - -import '@/app/globals.scss' - - -export const metadata: Metadata = { - title: { - default: 'Bing AI Chatbot', - template: `%s - Bing AI Chatbot` - }, - description: 'Bing AI Chatbot Web App.', - themeColor: [ - { media: '(prefers-color-scheme: light)', color: 'white' }, - { media: '(prefers-color-scheme: dark)', color: 'dark' } - ], - icons: { - icon: '/favicon.ico', - shortcut: '../assets/images/logo.svg', - apple: '../assets/images/logo.svg' - } -} - -interface RootLayoutProps { - children: React.ReactNode -} - -export default function RootLayout({ children }: RootLayoutProps) { - return ( - - - - -
- {/* @ts-ignore */} -
-
{children}
-
- -
- - - ) -} diff --git a/spaces/MarcusSu1216/XingTong/app.py b/spaces/MarcusSu1216/XingTong/app.py deleted file mode 100644 index 8310b81340923a9aaea9ee5aba1d6e7811859097..0000000000000000000000000000000000000000 --- a/spaces/MarcusSu1216/XingTong/app.py +++ /dev/null @@ -1,75 +0,0 @@ -import io -import os - -os.system("wget -P hubert/ https://huggingface.co/spaces/MarcusSu1216/XingTong/blob/main/hubert/checkpoint_best_legacy_500.pt") -import gradio as gr -import librosa -import numpy as np -import soundfile -from inference.infer_tool import Svc -import logging - -logging.getLogger('numba').setLevel(logging.WARNING) -logging.getLogger('markdown_it').setLevel(logging.WARNING) -logging.getLogger('urllib3').setLevel(logging.WARNING) -logging.getLogger('matplotlib').setLevel(logging.WARNING) - -model = Svc("logs/44k/G_99200.pth", "configs/config.json", cluster_model_path="logs/44k/kmeans_10000.pt") - -def vc_fn(sid, input_audio, vc_transform, auto_f0,cluster_ratio, noise_scale): - if input_audio is None: - return "You need to upload an audio", None - sampling_rate, audio = input_audio - # print(audio.shape,sampling_rate) - duration = audio.shape[0] / sampling_rate - if duration > 100: - return "请上传小于100s的音频,需要转换长音频请本地进行转换", None - audio = (audio / np.iinfo(audio.dtype).max).astype(np.float32) - if len(audio.shape) > 1: - audio = librosa.to_mono(audio.transpose(1, 0)) - if sampling_rate != 16000: - audio = librosa.resample(audio, orig_sr=sampling_rate, target_sr=16000) - print(audio.shape) - out_wav_path = "temp.wav" - soundfile.write(out_wav_path, audio, 16000, format="wav") - print( cluster_ratio, auto_f0, noise_scale) - out_audio, out_sr = model.infer(sid, vc_transform, out_wav_path, - cluster_infer_ratio=cluster_ratio, - auto_predict_f0=auto_f0, - noice_scale=noise_scale - ) - return "转换成功", (44100, out_audio.numpy()) - - -app = gr.Blocks() -with app: - with gr.Tabs(): - with gr.TabItem("介绍"): - gr.Markdown(value=""" - 星瞳_Official的语音在线合成,基于so-vits-svc-4.0生成。\n - - 使用须知:\n - 1、请使用伴奏和声去除干净的人声素材,时长小于100秒,格式为mp3或wav。\n - 2、去除伴奏推荐使用UVR5软件,B站上有详细教程。\n - 3、条件不支持推荐使用以下几个去伴奏的网站:\n - https://vocalremover.org/zh/\n - https://tuanziai.com/vocal-remover/upload\n - https://www.lalal.ai/zh-hans/\n - 4、在线版服务器为2核16G免费版,转换效率较慢请耐心等待。\n - 5、使用此模型请标注作者:一闪一闪小星瞳,以及该项目地址。\n - 6、有问题可以在B站私聊我反馈:https://space.bilibili.com/38523418\n - 7、语音模型转换出的音频请勿用于商业化。 - """) - spks = list(model.spk2id.keys()) - sid = gr.Dropdown(label="音色", choices=["XT4.0"], value="XT4.0") - vc_input3 = gr.Audio(label="上传音频(长度建议小于100秒)") - vc_transform = gr.Number(label="变调(整数,可以正负,半音数量,升高八度就是12)", value=0) - cluster_ratio = gr.Number(label="聚类模型混合比例,0-1之间,默认为0不启用聚类,能提升音色相似度,但会导致咬字下降(如果使用建议0.5左右)", value=0) - auto_f0 = gr.Checkbox(label="自动f0预测,配合聚类模型f0预测效果更好,会导致变调功能失效(仅限转换语音,歌声不要勾选此项会究极跑调)", value=False) - noise_scale = gr.Number(label="noise_scale 建议不要动,会影响音质,玄学参数", value=0.4) - vc_submit = gr.Button("转换", variant="primary") - vc_output1 = gr.Textbox(label="Output Message") - vc_output2 = gr.Audio(label="Output Audio") - vc_submit.click(vc_fn, [sid, vc_input3, vc_transform,auto_f0,cluster_ratio, noise_scale], [vc_output1, vc_output2]) - - app.launch() diff --git a/spaces/Mellow-ai/PhotoAI_Mellow/rembg/sessions/u2net.py b/spaces/Mellow-ai/PhotoAI_Mellow/rembg/sessions/u2net.py deleted file mode 100644 index 4144a10e8b4bfa7a19e480dd955923d800931540..0000000000000000000000000000000000000000 --- a/spaces/Mellow-ai/PhotoAI_Mellow/rembg/sessions/u2net.py +++ /dev/null @@ -1,49 +0,0 @@ -import os -from typing import List - -import numpy as np -import 
pooch -from PIL import Image -from PIL.Image import Image as PILImage - -from .base import BaseSession - - -class U2netSession(BaseSession): - def predict(self, img: PILImage, *args, **kwargs) -> List[PILImage]: - ort_outs = self.inner_session.run( - None, - self.normalize( - img, (0.485, 0.456, 0.406), (0.229, 0.224, 0.225), (320, 320) - ), - ) - - pred = ort_outs[0][:, 0, :, :] - - ma = np.max(pred) - mi = np.min(pred) - - pred = (pred - mi) / (ma - mi) - pred = np.squeeze(pred) - - mask = Image.fromarray((pred * 255).astype("uint8"), mode="L") - mask = mask.resize(img.size, Image.LANCZOS) - - return [mask] - - @classmethod - def download_models(cls, *args, **kwargs): - fname = f"{cls.name()}.onnx" - pooch.retrieve( - "https://github.com/danielgatis/rembg/releases/download/v0.0.0/u2net.onnx", - "md5:60024c5c889badc19c04ad937298a77b", - fname=fname, - path=cls.u2net_home(), - progressbar=True, - ) - - return os.path.join(cls.u2net_home(), fname) - - @classmethod - def name(cls, *args, **kwargs): - return "u2net" diff --git a/spaces/MonkeyDBoa/AvengersDetector/README.md b/spaces/MonkeyDBoa/AvengersDetector/README.md deleted file mode 100644 index 867385765abfad0d8dfc95ab0b4be4d30f429578..0000000000000000000000000000000000000000 --- a/spaces/MonkeyDBoa/AvengersDetector/README.md +++ /dev/null @@ -1,37 +0,0 @@ ---- -title: AvengersDetector -emoji: ⚡ -colorFrom: green -colorTo: pink -sdk: gradio -app_file: app.py -pinned: false ---- - -# Configuration - -`title`: _string_ -Display title for the Space - -`emoji`: _string_ -Space emoji (emoji-only character allowed) - -`colorFrom`: _string_ -Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray) - -`colorTo`: _string_ -Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray) - -`sdk`: _string_ -Can be either `gradio`, `streamlit`, or `static` - -`sdk_version` : _string_ -Only applicable for `streamlit` SDK. -See [doc](https://hf.co/docs/hub/spaces) for more info on supported versions. - -`app_file`: _string_ -Path to your main application file (which contains either `gradio` or `streamlit` Python code, or `static` html code). -Path is relative to the root of the repository. - -`pinned`: _boolean_ -Whether the Space stays on top of your list. diff --git a/spaces/NCTCMumbai/NCTC/models/official/benchmark/models/resnet_cifar_main.py b/spaces/NCTCMumbai/NCTC/models/official/benchmark/models/resnet_cifar_main.py deleted file mode 100644 index 4a02fec8b96e25228e6e0467d646c26995f944fc..0000000000000000000000000000000000000000 --- a/spaces/NCTCMumbai/NCTC/models/official/benchmark/models/resnet_cifar_main.py +++ /dev/null @@ -1,284 +0,0 @@ -# Copyright 2018 The TensorFlow Authors. All Rights Reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. 
-# ============================================================================== -"""Runs a ResNet model on the Cifar-10 dataset.""" - -from __future__ import absolute_import -from __future__ import division -from __future__ import print_function - -from absl import app -from absl import flags -from absl import logging -import numpy as np -import tensorflow as tf -from official.benchmark.models import cifar_preprocessing -from official.benchmark.models import resnet_cifar_model -from official.benchmark.models import synthetic_util -from official.utils.flags import core as flags_core -from official.utils.misc import distribution_utils -from official.utils.misc import keras_utils -from official.vision.image_classification.resnet import common - - -LR_SCHEDULE = [ # (multiplier, epoch to start) tuples - (0.1, 91), (0.01, 136), (0.001, 182) -] - - -def learning_rate_schedule(current_epoch, - current_batch, - batches_per_epoch, - batch_size): - """Handles linear scaling rule and LR decay. - - Scale learning rate at epoch boundaries provided in LR_SCHEDULE by the - provided scaling factor. - - Args: - current_epoch: integer, current epoch indexed from 0. - current_batch: integer, current batch in the current epoch, indexed from 0. - batches_per_epoch: integer, number of steps in an epoch. - batch_size: integer, total batch sized. - - Returns: - Adjusted learning rate. - """ - del current_batch, batches_per_epoch # not used - initial_learning_rate = common.BASE_LEARNING_RATE * batch_size / 128 - learning_rate = initial_learning_rate - for mult, start_epoch in LR_SCHEDULE: - if current_epoch >= start_epoch: - learning_rate = initial_learning_rate * mult - else: - break - return learning_rate - - -class LearningRateBatchScheduler(tf.keras.callbacks.Callback): - """Callback to update learning rate on every batch (not epoch boundaries). - - N.B. Only support Keras optimizers, not TF optimizers. - - Attributes: - schedule: a function that takes an epoch index and a batch index as input - (both integer, indexed from 0) and returns a new learning rate as - output (float). - """ - - def __init__(self, schedule, batch_size, steps_per_epoch): - super(LearningRateBatchScheduler, self).__init__() - self.schedule = schedule - self.steps_per_epoch = steps_per_epoch - self.batch_size = batch_size - self.epochs = -1 - self.prev_lr = -1 - - def on_epoch_begin(self, epoch, logs=None): - if not hasattr(self.model.optimizer, 'learning_rate'): - raise ValueError('Optimizer must have a "learning_rate" attribute.') - self.epochs += 1 - - def on_batch_begin(self, batch, logs=None): - """Executes before step begins.""" - lr = self.schedule(self.epochs, - batch, - self.steps_per_epoch, - self.batch_size) - if not isinstance(lr, (float, np.float32, np.float64)): - raise ValueError('The output of the "schedule" function should be float.') - if lr != self.prev_lr: - self.model.optimizer.learning_rate = lr # lr should be a float here - self.prev_lr = lr - logging.debug( - 'Epoch %05d Batch %05d: LearningRateBatchScheduler ' - 'change learning rate to %s.', self.epochs, batch, lr) - - -def run(flags_obj): - """Run ResNet Cifar-10 training and eval loop using native Keras APIs. - - Args: - flags_obj: An object containing parsed flag values. - - Raises: - ValueError: If fp16 is passed as it is not currently supported. - - Returns: - Dictionary of training and eval stats. 
- """ - keras_utils.set_session_config( - enable_xla=flags_obj.enable_xla) - - # Execute flag override logic for better model performance - if flags_obj.tf_gpu_thread_mode: - keras_utils.set_gpu_thread_mode_and_count( - per_gpu_thread_count=flags_obj.per_gpu_thread_count, - gpu_thread_mode=flags_obj.tf_gpu_thread_mode, - num_gpus=flags_obj.num_gpus, - datasets_num_private_threads=flags_obj.datasets_num_private_threads) - common.set_cudnn_batchnorm_mode() - - dtype = flags_core.get_tf_dtype(flags_obj) - if dtype == 'fp16': - raise ValueError('dtype fp16 is not supported in Keras. Use the default ' - 'value(fp32).') - - data_format = flags_obj.data_format - if data_format is None: - data_format = ('channels_first' if tf.config.list_physical_devices('GPU') - else 'channels_last') - tf.keras.backend.set_image_data_format(data_format) - - strategy = distribution_utils.get_distribution_strategy( - distribution_strategy=flags_obj.distribution_strategy, - num_gpus=flags_obj.num_gpus, - all_reduce_alg=flags_obj.all_reduce_alg, - num_packs=flags_obj.num_packs) - - if strategy: - # flags_obj.enable_get_next_as_optional controls whether enabling - # get_next_as_optional behavior in DistributedIterator. If true, last - # partial batch can be supported. - strategy.extended.experimental_enable_get_next_as_optional = ( - flags_obj.enable_get_next_as_optional - ) - - strategy_scope = distribution_utils.get_strategy_scope(strategy) - - if flags_obj.use_synthetic_data: - synthetic_util.set_up_synthetic_data() - input_fn = common.get_synth_input_fn( - height=cifar_preprocessing.HEIGHT, - width=cifar_preprocessing.WIDTH, - num_channels=cifar_preprocessing.NUM_CHANNELS, - num_classes=cifar_preprocessing.NUM_CLASSES, - dtype=flags_core.get_tf_dtype(flags_obj), - drop_remainder=True) - else: - synthetic_util.undo_set_up_synthetic_data() - input_fn = cifar_preprocessing.input_fn - - train_input_dataset = input_fn( - is_training=True, - data_dir=flags_obj.data_dir, - batch_size=flags_obj.batch_size, - parse_record_fn=cifar_preprocessing.parse_record, - datasets_num_private_threads=flags_obj.datasets_num_private_threads, - dtype=dtype, - # Setting drop_remainder to avoid the partial batch logic in normalization - # layer, which triggers tf.where and leads to extra memory copy of input - # sizes between host and GPU. 
- drop_remainder=(not flags_obj.enable_get_next_as_optional)) - - eval_input_dataset = None - if not flags_obj.skip_eval: - eval_input_dataset = input_fn( - is_training=False, - data_dir=flags_obj.data_dir, - batch_size=flags_obj.batch_size, - parse_record_fn=cifar_preprocessing.parse_record) - - steps_per_epoch = ( - cifar_preprocessing.NUM_IMAGES['train'] // flags_obj.batch_size) - lr_schedule = 0.1 - if flags_obj.use_tensor_lr: - initial_learning_rate = common.BASE_LEARNING_RATE * flags_obj.batch_size / 128 - lr_schedule = tf.keras.optimizers.schedules.PiecewiseConstantDecay( - boundaries=list(p[1] * steps_per_epoch for p in LR_SCHEDULE), - values=[initial_learning_rate] + - list(p[0] * initial_learning_rate for p in LR_SCHEDULE)) - - with strategy_scope: - optimizer = common.get_optimizer(lr_schedule) - model = resnet_cifar_model.resnet56(classes=cifar_preprocessing.NUM_CLASSES) - model.compile( - loss='sparse_categorical_crossentropy', - optimizer=optimizer, - metrics=(['sparse_categorical_accuracy'] - if flags_obj.report_accuracy_metrics else None), - run_eagerly=flags_obj.run_eagerly) - - train_epochs = flags_obj.train_epochs - - callbacks = common.get_callbacks() - - if not flags_obj.use_tensor_lr: - lr_callback = LearningRateBatchScheduler( - schedule=learning_rate_schedule, - batch_size=flags_obj.batch_size, - steps_per_epoch=steps_per_epoch) - callbacks.append(lr_callback) - - # if mutliple epochs, ignore the train_steps flag. - if train_epochs <= 1 and flags_obj.train_steps: - steps_per_epoch = min(flags_obj.train_steps, steps_per_epoch) - train_epochs = 1 - - num_eval_steps = (cifar_preprocessing.NUM_IMAGES['validation'] // - flags_obj.batch_size) - - validation_data = eval_input_dataset - if flags_obj.skip_eval: - if flags_obj.set_learning_phase_to_train: - # TODO(haoyuzhang): Understand slowdown of setting learning phase when - # not using distribution strategy. - tf.keras.backend.set_learning_phase(1) - num_eval_steps = None - validation_data = None - - if not strategy and flags_obj.explicit_gpu_placement: - # TODO(b/135607227): Add device scope automatically in Keras training loop - # when not using distribition strategy. 
- no_dist_strat_device = tf.device('/device:GPU:0') - no_dist_strat_device.__enter__() - - history = model.fit(train_input_dataset, - epochs=train_epochs, - steps_per_epoch=steps_per_epoch, - callbacks=callbacks, - validation_steps=num_eval_steps, - validation_data=validation_data, - validation_freq=flags_obj.epochs_between_evals, - verbose=2) - eval_output = None - if not flags_obj.skip_eval: - eval_output = model.evaluate(eval_input_dataset, - steps=num_eval_steps, - verbose=2) - - if not strategy and flags_obj.explicit_gpu_placement: - no_dist_strat_device.__exit__() - - stats = common.build_stats(history, eval_output, callbacks) - return stats - - -def define_cifar_flags(): - common.define_keras_flags(dynamic_loss_scale=False) - - flags_core.set_defaults(data_dir='/tmp/cifar10_data/cifar-10-batches-bin', - model_dir='/tmp/cifar10_model', - epochs_between_evals=10, - batch_size=128) - - -def main(_): - return run(flags.FLAGS) - - -if __name__ == '__main__': - logging.set_verbosity(logging.INFO) - define_cifar_flags() - app.run(main) diff --git a/spaces/Nesip/meta-llama-Llama-2-70b-chat-hf/app.py b/spaces/Nesip/meta-llama-Llama-2-70b-chat-hf/app.py deleted file mode 100644 index a461703287a9bda9c93cfdfbb94d4c3cf90aaba9..0000000000000000000000000000000000000000 --- a/spaces/Nesip/meta-llama-Llama-2-70b-chat-hf/app.py +++ /dev/null @@ -1,3 +0,0 @@ -import gradio as gr - -gr.Interface.load("models/meta-llama/Llama-2-70b-chat-hf").launch() \ No newline at end of file diff --git a/spaces/Nick1/rvc-models/lib/infer_pack/modules/F0Predictor/HarvestF0Predictor.py b/spaces/Nick1/rvc-models/lib/infer_pack/modules/F0Predictor/HarvestF0Predictor.py deleted file mode 100644 index b412ba2814e114ca7bb00b6fd6ef217f63d788a3..0000000000000000000000000000000000000000 --- a/spaces/Nick1/rvc-models/lib/infer_pack/modules/F0Predictor/HarvestF0Predictor.py +++ /dev/null @@ -1,86 +0,0 @@ -from lib.infer_pack.modules.F0Predictor.F0Predictor import F0Predictor -import pyworld -import numpy as np - - -class HarvestF0Predictor(F0Predictor): - def __init__(self, hop_length=512, f0_min=50, f0_max=1100, sampling_rate=44100): - self.hop_length = hop_length - self.f0_min = f0_min - self.f0_max = f0_max - self.sampling_rate = sampling_rate - - def interpolate_f0(self, f0): - """ - 对F0进行插值处理 - """ - - data = np.reshape(f0, (f0.size, 1)) - - vuv_vector = np.zeros((data.size, 1), dtype=np.float32) - vuv_vector[data > 0.0] = 1.0 - vuv_vector[data <= 0.0] = 0.0 - - ip_data = data - - frame_number = data.size - last_value = 0.0 - for i in range(frame_number): - if data[i] <= 0.0: - j = i + 1 - for j in range(i + 1, frame_number): - if data[j] > 0.0: - break - if j < frame_number - 1: - if last_value > 0.0: - step = (data[j] - data[i - 1]) / float(j - i) - for k in range(i, j): - ip_data[k] = data[i - 1] + step * (k - i + 1) - else: - for k in range(i, j): - ip_data[k] = data[j] - else: - for k in range(i, frame_number): - ip_data[k] = last_value - else: - ip_data[i] = data[i] # 这里可能存在一个没有必要的拷贝 - last_value = data[i] - - return ip_data[:, 0], vuv_vector[:, 0] - - def resize_f0(self, x, target_len): - source = np.array(x) - source[source < 0.001] = np.nan - target = np.interp( - np.arange(0, len(source) * target_len, len(source)) / target_len, - np.arange(0, len(source)), - source, - ) - res = np.nan_to_num(target) - return res - - def compute_f0(self, wav, p_len=None): - if p_len is None: - p_len = wav.shape[0] // self.hop_length - f0, t = pyworld.harvest( - wav.astype(np.double), - fs=self.hop_length, - f0_ceil=self.f0_max, - 
f0_floor=self.f0_min, - frame_period=1000 * self.hop_length / self.sampling_rate, - ) - f0 = pyworld.stonemask(wav.astype(np.double), f0, t, self.fs) - return self.interpolate_f0(self.resize_f0(f0, p_len))[0] - - def compute_f0_uv(self, wav, p_len=None): - if p_len is None: - p_len = wav.shape[0] // self.hop_length - f0, t = pyworld.harvest( - wav.astype(np.double), - fs=self.sampling_rate, - f0_floor=self.f0_min, - f0_ceil=self.f0_max, - frame_period=1000 * self.hop_length / self.sampling_rate, - ) - f0 = pyworld.stonemask(wav.astype(np.double), f0, t, self.sampling_rate) - return self.interpolate_f0(self.resize_f0(f0, p_len)) diff --git a/spaces/Nultx/VITS-TTS/text/sanskrit.py b/spaces/Nultx/VITS-TTS/text/sanskrit.py deleted file mode 100644 index 0223aaac384a2f850f5bc20651fc18eb964607d0..0000000000000000000000000000000000000000 --- a/spaces/Nultx/VITS-TTS/text/sanskrit.py +++ /dev/null @@ -1,62 +0,0 @@ -import re -from indic_transliteration import sanscript - - -# List of (iast, ipa) pairs: -_iast_to_ipa = [(re.compile('%s' % x[0]), x[1]) for x in [ - ('a', 'ə'), - ('ā', 'aː'), - ('ī', 'iː'), - ('ū', 'uː'), - ('ṛ', 'ɹ`'), - ('ṝ', 'ɹ`ː'), - ('ḷ', 'l`'), - ('ḹ', 'l`ː'), - ('e', 'eː'), - ('o', 'oː'), - ('k', 'k⁼'), - ('k⁼h', 'kʰ'), - ('g', 'g⁼'), - ('g⁼h', 'gʰ'), - ('ṅ', 'ŋ'), - ('c', 'ʧ⁼'), - ('ʧ⁼h', 'ʧʰ'), - ('j', 'ʥ⁼'), - ('ʥ⁼h', 'ʥʰ'), - ('ñ', 'n^'), - ('ṭ', 't`⁼'), - ('t`⁼h', 't`ʰ'), - ('ḍ', 'd`⁼'), - ('d`⁼h', 'd`ʰ'), - ('ṇ', 'n`'), - ('t', 't⁼'), - ('t⁼h', 'tʰ'), - ('d', 'd⁼'), - ('d⁼h', 'dʰ'), - ('p', 'p⁼'), - ('p⁼h', 'pʰ'), - ('b', 'b⁼'), - ('b⁼h', 'bʰ'), - ('y', 'j'), - ('ś', 'ʃ'), - ('ṣ', 's`'), - ('r', 'ɾ'), - ('l̤', 'l`'), - ('h', 'ɦ'), - ("'", ''), - ('~', '^'), - ('ṃ', '^') -]] - - -def devanagari_to_ipa(text): - text = text.replace('ॐ', 'ओम्') - text = re.sub(r'\s*।\s*$', '.', text) - text = re.sub(r'\s*।\s*', ', ', text) - text = re.sub(r'\s*॥', '.', text) - text = sanscript.transliterate(text, sanscript.DEVANAGARI, sanscript.IAST) - for regex, replacement in _iast_to_ipa: - text = re.sub(regex, replacement, text) - text = re.sub('(.)[`ː]*ḥ', lambda x: x.group(0) - [:-1]+'h'+x.group(1)+'*', text) - return text diff --git a/spaces/OAOA/DifFace/facelib/detection/yolov5face/utils/extract_ckpt.py b/spaces/OAOA/DifFace/facelib/detection/yolov5face/utils/extract_ckpt.py deleted file mode 100644 index 4b8b631348f2d0cdea4e5a3594bb59f3e8f34a0f..0000000000000000000000000000000000000000 --- a/spaces/OAOA/DifFace/facelib/detection/yolov5face/utils/extract_ckpt.py +++ /dev/null @@ -1,5 +0,0 @@ -import torch -import sys -sys.path.insert(0,'./facelib/detection/yolov5face') -model = torch.load('facelib/detection/yolov5face/yolov5n-face.pt', map_location='cpu')['model'] -torch.save(model.state_dict(),'weights/facelib/yolov5n-face.pth') \ No newline at end of file diff --git a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/examples/truncated_bptt/__init__.py b/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/examples/truncated_bptt/__init__.py deleted file mode 100644 index eee484d427a68828462469d133144a8d7c052c40..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/examples/truncated_bptt/__init__.py +++ /dev/null @@ -1,6 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -from . 
import transformer_xl_model, truncated_bptt_lm_task # noqa diff --git a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/fairseq/models/lightconv.py b/spaces/OFA-Sys/OFA-Image_Caption/fairseq/fairseq/models/lightconv.py deleted file mode 100644 index 4edfe359379bc2445c1ae1ada04bd34ca4a32798..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/fairseq/models/lightconv.py +++ /dev/null @@ -1,1019 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import math - -import torch -import torch.nn as nn -import torch.nn.functional as F -from fairseq import utils -from fairseq.models import ( - FairseqEncoder, - FairseqEncoderDecoderModel, - FairseqIncrementalDecoder, - register_model, - register_model_architecture, -) -from fairseq.modules import ( - AdaptiveSoftmax, - DynamicConv, - FairseqDropout, - LayerNorm, - LightweightConv, - MultiheadAttention, - PositionalEmbedding, -) -from fairseq.utils import safe_hasattr - - -@register_model("lightconv") -class LightConvModel(FairseqEncoderDecoderModel): - """ - LightConv and DynamicConv model from `"Pay Less Attention with Lightweight and Dynamic Convolutions" (Wu, et al, 2019) - `_. - To use LightConv please set ``--encoder-conv-type lightweight --decoder-conv-type lightweight`` - To use DynamicConv please set ``--encoder-conv-type dynamic --decoder-conv-type dynamic`` - - Args: - encoder (LightConvEncoder): the encoder - decoder (LightConvDecoder): the decoder - - The LightConv model provides the following named architectures and - command-line arguments: - - .. argparse:: - :ref: fairseq.models.lightconv_parser - :prog: - """ - - @classmethod - def hub_models(cls): - # fmt: off - - def moses_subword(path): - return { - 'path': path, - 'tokenizer': 'moses', - 'bpe': 'subword_nmt', - } - - return { - 'lightconv.no_glu.iwslt14.de-en': moses_subword('https://dl.fbaipublicfiles.com/fairseq/models/dynamicconv/iwslt14.de-en.lightconv.tar.gz'), - 'dynamicconv.no_glu.iwslt14.de-en': moses_subword('https://dl.fbaipublicfiles.com/fairseq/models/dynamicconv/iwslt14.de-en.dynamicconv.tar.gz'), - 'lightconv.no_glu.wmt16.en-de': moses_subword('https://dl.fbaipublicfiles.com/fairseq/models/dynamicconv/wmt16.en-de.joined-dict.lightconv.tar.gz'), - 'dynamicconv.no_glu.wmt16.en-de': moses_subword('https://dl.fbaipublicfiles.com/fairseq/models/dynamicconv/wmt16.en-de.joined-dict.dynamicconv.tar.gz'), - 'lightconv.glu.wmt16.en-de': moses_subword('https://dl.fbaipublicfiles.com/fairseq/models/dynamicconv/wmt16.en-de.joined-dict.lightconv-glu.tar.gz'), - 'dynamicconv.glu.wmt16.en-de': moses_subword('https://dl.fbaipublicfiles.com/fairseq/models/dynamicconv/wmt16.en-de.joined-dict.dynamicconv-glu.tar.gz'), - 'lightconv.glu.wmt17.en-de': moses_subword('https://dl.fbaipublicfiles.com/fairseq/models/dynamicconv/wmt16.en-de.joined-dict.lightconv-glu.tar.gz'), - 'dynamicconv.glu.wmt17.en-de': moses_subword('https://dl.fbaipublicfiles.com/fairseq/models/dynamicconv/wmt16.en-de.joined-dict.dynamicconv-glu.tar.gz'), - 'lightconv.glu.wmt14.en-fr': moses_subword('https://dl.fbaipublicfiles.com/fairseq/models/dynamicconv/wmt14.en-fr.joined-dict.lightconv-glu.tar.gz'), - 'dynamicconv.glu.wmt14.en-fr': moses_subword('https://dl.fbaipublicfiles.com/fairseq/models/dynamicconv/wmt14.en-fr.joined-dict.dynamicconv-glu.tar.gz'), - 'lightconv.glu.wmt17.zh-en': 
moses_subword('https://dl.fbaipublicfiles.com/fairseq/models/dynamicconv/wmt17.zh-en.lightconv-glu.tar.gz'), - 'dynamicconv.glu.wmt17.zh-en': moses_subword('https://dl.fbaipublicfiles.com/fairseq/models/dynamicconv/wmt17.zh-en.dynamicconv-glu.tar.gz'), - } - # fmt: on - - def __init__(self, encoder, decoder): - super().__init__(encoder, decoder) - - @staticmethod - def add_args(parser): - """Add model-specific arguments to the parser.""" - parser.add_argument( - "--dropout", type=float, metavar="D", help="dropout probability" - ) - parser.add_argument( - "--attention-dropout", - type=float, - metavar="D", - help="dropout probability for attention weights", - ) - parser.add_argument( - "--relu-dropout", - type=float, - metavar="D", - help="dropout probability after ReLU in FFN", - ) - parser.add_argument( - "--input-dropout", - type=float, - metavar="D", - help="dropout probability of the inputs", - ) - parser.add_argument( - "--encoder-embed-path", - type=str, - metavar="STR", - help="path to pre-trained encoder embedding", - ) - parser.add_argument( - "--encoder-embed-dim", - type=int, - metavar="N", - help="encoder embedding dimension", - ) - parser.add_argument( - "--encoder-conv-dim", - type=int, - metavar="N", - help="encoder embedding dimension", - ) - parser.add_argument( - "--encoder-ffn-embed-dim", - type=int, - metavar="N", - help="encoder embedding dimension for FFN", - ) - parser.add_argument( - "--encoder-layers", type=int, metavar="N", help="num encoder layers" - ) - parser.add_argument( - "--encoder-attention-heads", - type=int, - metavar="N", - help="num encoder attention heads or LightConv/DynamicConv heads", - ) - parser.add_argument( - "--encoder-normalize-before", - action="store_true", - help="apply layernorm before each encoder block", - ) - parser.add_argument( - "--encoder-learned-pos", - action="store_true", - help="use learned positional embeddings in the encoder", - ) - parser.add_argument( - "--decoder-embed-path", - type=str, - metavar="STR", - help="path to pre-trained decoder embedding", - ) - parser.add_argument( - "--decoder-embed-dim", - type=int, - metavar="N", - help="decoder embedding dimension", - ) - parser.add_argument( - "--decoder-conv-dim", - type=int, - metavar="N", - help="decoder embedding dimension", - ) - parser.add_argument( - "--decoder-ffn-embed-dim", - type=int, - metavar="N", - help="decoder embedding dimension for FFN", - ) - parser.add_argument( - "--decoder-layers", type=int, metavar="N", help="num decoder layers" - ) - parser.add_argument( - "--decoder-attention-heads", - type=int, - metavar="N", - help="num decoder attention heads or LightConv/DynamicConv heads", - ) - parser.add_argument( - "--decoder-learned-pos", - action="store_true", - help="use learned positional embeddings in the decoder", - ) - parser.add_argument( - "--decoder-normalize-before", - action="store_true", - help="apply layernorm before each decoder block", - ) - parser.add_argument( - "--share-decoder-input-output-embed", - action="store_true", - help="share decoder input and output embeddings", - ) - parser.add_argument( - "--share-all-embeddings", - action="store_true", - help="share encoder, decoder and output embeddings" - " (requires shared dictionary and embed dim)", - ) - parser.add_argument( - "--adaptive-softmax-cutoff", - metavar="EXPR", - help="comma separated list of adaptive softmax cutoff points. 
" - "Must be used with adaptive_loss criterion", - ), - parser.add_argument( - "--adaptive-softmax-dropout", - type=float, - metavar="D", - help="sets adaptive softmax dropout for the tail projections", - ) - - """LightConv and DynamicConv arguments""" - parser.add_argument( - "--encoder-kernel-size-list", - type=lambda x: utils.eval_str_list(x, int), - help='list of kernel size (default: "[3,7,15,31,31,31,31]")', - ) - parser.add_argument( - "--decoder-kernel-size-list", - type=lambda x: utils.eval_str_list(x, int), - help='list of kernel size (default: "[3,7,15,31,31,31]")', - ) - parser.add_argument( - "--encoder-glu", type=utils.eval_bool, help="glu after in proj" - ) - parser.add_argument( - "--decoder-glu", type=utils.eval_bool, help="glu after in proj" - ) - parser.add_argument( - "--encoder-conv-type", - default="dynamic", - type=str, - choices=["dynamic", "lightweight"], - help="type of convolution", - ) - parser.add_argument( - "--decoder-conv-type", - default="dynamic", - type=str, - choices=["dynamic", "lightweight"], - help="type of convolution", - ) - parser.add_argument("--weight-softmax", default=True, type=utils.eval_bool) - parser.add_argument( - "--weight-dropout", - type=float, - metavar="D", - help="dropout probability for conv weights", - ) - - @classmethod - def build_model(cls, args, task): - """Build a new model instance.""" - - # make sure all arguments are present in older models - base_architecture(args) - - if not safe_hasattr(args, "max_source_positions"): - args.max_source_positions = 1024 - if not safe_hasattr(args, "max_target_positions"): - args.max_target_positions = 1024 - - src_dict, tgt_dict = task.source_dictionary, task.target_dictionary - - def build_embedding(dictionary, embed_dim, path=None): - num_embeddings = len(dictionary) - padding_idx = dictionary.pad() - emb = Embedding(num_embeddings, embed_dim, padding_idx) - # if provided, load from preloaded dictionaries - if path: - embed_dict = utils.parse_embedding(path) - utils.load_embedding(embed_dict, dictionary, emb) - return emb - - if args.share_all_embeddings: - if src_dict != tgt_dict: - raise RuntimeError( - "--share-all-embeddings requires a joined dictionary" - ) - if args.encoder_embed_dim != args.decoder_embed_dim: - raise RuntimeError( - "--share-all-embeddings requires --encoder-embed-dim to match --decoder-embed-dim" - ) - if args.decoder_embed_path and ( - args.decoder_embed_path != args.encoder_embed_path - ): - raise RuntimeError( - "--share-all-embeddings not compatible with --decoder-embed-path" - ) - encoder_embed_tokens = build_embedding( - src_dict, args.encoder_embed_dim, args.encoder_embed_path - ) - decoder_embed_tokens = encoder_embed_tokens - args.share_decoder_input_output_embed = True - else: - encoder_embed_tokens = build_embedding( - src_dict, args.encoder_embed_dim, args.encoder_embed_path - ) - decoder_embed_tokens = build_embedding( - tgt_dict, args.decoder_embed_dim, args.decoder_embed_path - ) - - encoder = LightConvEncoder(args, src_dict, encoder_embed_tokens) - decoder = LightConvDecoder(args, tgt_dict, decoder_embed_tokens) - return LightConvModel(encoder, decoder) - - -class LightConvEncoder(FairseqEncoder): - """ - LightConv encoder consisting of *args.encoder_layers* layers. Each layer - is a :class:`LightConvEncoderLayer`. 
- - Args: - args (argparse.Namespace): parsed command-line arguments - dictionary (~fairseq.data.Dictionary): encoding dictionary - embed_tokens (torch.nn.Embedding): input embedding - """ - - def __init__(self, args, dictionary, embed_tokens): - super().__init__(dictionary) - self.dropout_module = FairseqDropout( - args.dropout, module_name=self.__class__.__name__ - ) - - embed_dim = embed_tokens.embedding_dim - self.padding_idx = embed_tokens.padding_idx - self.max_source_positions = args.max_source_positions - - self.embed_tokens = embed_tokens - self.embed_scale = math.sqrt(embed_dim) - self.embed_positions = ( - PositionalEmbedding( - args.max_source_positions, - embed_dim, - self.padding_idx, - learned=args.encoder_learned_pos, - ) - if not args.no_token_positional_embeddings - else None - ) - - self.layers = nn.ModuleList([]) - self.layers.extend( - [ - LightConvEncoderLayer( - args, kernel_size=args.encoder_kernel_size_list[i] - ) - for i in range(args.encoder_layers) - ] - ) - self.register_buffer("version", torch.Tensor([2])) - self.normalize = args.encoder_normalize_before - if self.normalize: - self.layer_norm = LayerNorm(embed_dim) - - def forward(self, src_tokens, **unused): - """ - Args: - src_tokens (LongTensor): tokens in the source language of shape - `(batch, src_len)` - - Returns: - dict: - - **encoder_out** (Tensor): the last encoder layer's output of - shape `(src_len, batch, embed_dim)` - - **encoder_padding_mask** (ByteTensor): the positions of - padding elements of shape `(batch, src_len)` - """ - # embed tokens and positions - x = self.embed_scale * self.embed_tokens(src_tokens) - if self.embed_positions is not None: - x += self.embed_positions(src_tokens) - x = self.dropout_module(x) - - # B x T x C -> T x B x C - x = x.transpose(0, 1) - - # compute padding mask - encoder_padding_mask = src_tokens.eq(self.padding_idx) - if not encoder_padding_mask.any(): - encoder_padding_mask = None - - # encoder layers - for layer in self.layers: - x = layer(x, encoder_padding_mask) - - if self.normalize: - x = self.layer_norm(x) - - return { - "encoder_out": x, # T x B x C - "encoder_padding_mask": encoder_padding_mask, # B x T - } - - def reorder_encoder_out(self, encoder_out, new_order): - """ - Reorder encoder output according to *new_order*. - - Args: - encoder_out: output from the ``forward()`` method - new_order (LongTensor): desired order - - Returns: - *encoder_out* rearranged according to *new_order* - """ - if encoder_out["encoder_out"] is not None: - encoder_out["encoder_out"] = encoder_out["encoder_out"].index_select( - 1, new_order - ) - if encoder_out["encoder_padding_mask"] is not None: - encoder_out["encoder_padding_mask"] = encoder_out[ - "encoder_padding_mask" - ].index_select(0, new_order) - return encoder_out - - def max_positions(self): - """Maximum input length supported by the encoder.""" - if self.embed_positions is None: - return self.max_source_positions - return min(self.max_source_positions, self.embed_positions.max_positions) - - -class LightConvDecoder(FairseqIncrementalDecoder): - """ - LightConv decoder consisting of *args.decoder_layers* layers. Each layer - is a :class:`LightConvDecoderLayer`. - - Args: - args (argparse.Namespace): parsed command-line arguments - dictionary (~fairseq.data.Dictionary): decoding dictionary - embed_tokens (torch.nn.Embedding): output embedding - no_encoder_attn (bool, optional): whether to attend to encoder outputs. 
- Default: ``False`` - """ - - def __init__( - self, args, dictionary, embed_tokens, no_encoder_attn=False, final_norm=True - ): - super().__init__(dictionary) - self.dropout_module = FairseqDropout( - args.dropout, module_name=self.__class__.__name__ - ) - self.share_input_output_embed = args.share_decoder_input_output_embed - - input_embed_dim = embed_tokens.embedding_dim - embed_dim = args.decoder_embed_dim - output_embed_dim = args.decoder_output_dim - - padding_idx = embed_tokens.padding_idx - self.max_target_positions = args.max_target_positions - - self.embed_tokens = embed_tokens - self.embed_scale = math.sqrt(embed_dim) # todo: try with input_embed_dim - - self.project_in_dim = ( - Linear(input_embed_dim, embed_dim, bias=False) - if embed_dim != input_embed_dim - else None - ) - - self.embed_positions = ( - PositionalEmbedding( - args.max_target_positions, - embed_dim, - padding_idx, - learned=args.decoder_learned_pos, - ) - if not args.no_token_positional_embeddings - else None - ) - - self.layers = nn.ModuleList([]) - self.layers.extend( - [ - LightConvDecoderLayer( - args, no_encoder_attn, kernel_size=args.decoder_kernel_size_list[i] - ) - for i in range(args.decoder_layers) - ] - ) - - self.adaptive_softmax = None - - self.project_out_dim = ( - Linear(embed_dim, output_embed_dim, bias=False) - if embed_dim != output_embed_dim and not args.tie_adaptive_weights - else None - ) - - if args.adaptive_softmax_cutoff is not None: - self.adaptive_softmax = AdaptiveSoftmax( - len(dictionary), - output_embed_dim, - utils.eval_str_list(args.adaptive_softmax_cutoff, type=int), - dropout=args.adaptive_softmax_dropout, - adaptive_inputs=embed_tokens if args.tie_adaptive_weights else None, - factor=args.adaptive_softmax_factor, - tie_proj=args.tie_adaptive_proj, - ) - elif not self.share_input_output_embed: - self.embed_out = nn.Parameter( - torch.Tensor(len(dictionary), output_embed_dim) - ) - nn.init.normal_(self.embed_out, mean=0, std=output_embed_dim ** -0.5) - self.register_buffer("version", torch.Tensor([2])) - self.normalize = args.decoder_normalize_before and final_norm - if self.normalize: - self.layer_norm = LayerNorm(embed_dim) - - def forward( - self, prev_output_tokens, encoder_out=None, incremental_state=None, **kwargs - ): - """ - Args: - prev_output_tokens (LongTensor): previous decoder outputs of shape - `(batch, tgt_len)`, for teacher forcing - encoder_out (Tensor, optional): output from the encoder, used for - encoder-side attention - incremental_state (dict): dictionary used for storing state during - :ref:`Incremental decoding` - - Returns: - tuple: - - the last decoder layer's output of shape `(batch, tgt_len, - vocab)` - - the last decoder layer's attention weights of shape `(batch, - tgt_len, src_len)` - """ - # embed positions - positions = ( - self.embed_positions( - prev_output_tokens, - incremental_state=incremental_state, - ) - if self.embed_positions is not None - else None - ) - - if incremental_state is not None: - prev_output_tokens = prev_output_tokens[:, -1:] - if positions is not None: - positions = positions[:, -1:] - - # embed tokens and positions - x = self.embed_scale * self.embed_tokens(prev_output_tokens) - - if self.project_in_dim is not None: - x = self.project_in_dim(x) - - if positions is not None: - x += positions - x = self.dropout_module(x) - - # B x T x C -> T x B x C - x = x.transpose(0, 1) - attn = None - - inner_states = [x] - - # decoder layers - for layer in self.layers: - x, attn = layer( - x, - encoder_out["encoder_out"] if encoder_out 
is not None else None, - encoder_out["encoder_padding_mask"] - if encoder_out is not None - else None, - incremental_state, - ) - inner_states.append(x) - - if self.normalize: - x = self.layer_norm(x) - - # T x B x C -> B x T x C - x = x.transpose(0, 1) - - if self.project_out_dim is not None: - x = self.project_out_dim(x) - - if self.adaptive_softmax is None: - # project back to size of vocabulary - if self.share_input_output_embed: - x = F.linear(x, self.embed_tokens.weight) - else: - x = F.linear(x, self.embed_out) - - return x, {"attn": attn, "inner_states": inner_states} - - def max_positions(self): - """Maximum output length supported by the decoder.""" - if self.embed_positions is None: - return self.max_target_positions - return min(self.max_target_positions, self.embed_positions.max_positions) - - def buffered_future_mask(self, tensor): - dim = tensor.size(0) - if ( - not hasattr(self, "_future_mask") - or self._future_mask is None - or self._future_mask.device != tensor.device - ): - self._future_mask = torch.triu( - utils.fill_with_neg_inf(tensor.new(dim, dim)), 1 - ) - if self._future_mask.size(0) < dim: - self._future_mask = torch.triu( - utils.fill_with_neg_inf(self._future_mask.resize_(dim, dim)), 1 - ) - return self._future_mask[:dim, :dim] - - -class LightConvEncoderLayer(nn.Module): - """Encoder layer block. - - Args: - args (argparse.Namespace): parsed command-line arguments - kernel_size: kernel size of the convolution - """ - - def __init__(self, args, kernel_size=0): - super().__init__() - self.embed_dim = args.encoder_embed_dim - self.conv_dim = args.encoder_conv_dim - padding_l = ( - kernel_size // 2 - if kernel_size % 2 == 1 - else ((kernel_size - 1) // 2, kernel_size // 2) - ) - - if args.encoder_glu: - self.linear1 = Linear(self.embed_dim, 2 * self.conv_dim) - self.act = nn.GLU() - else: - self.linear1 = Linear(self.embed_dim, self.conv_dim) - self.act = None - if args.encoder_conv_type == "lightweight": - self.conv = LightweightConv( - self.conv_dim, - kernel_size, - padding_l=padding_l, - weight_softmax=args.weight_softmax, - num_heads=args.encoder_attention_heads, - weight_dropout=args.weight_dropout, - ) - elif args.encoder_conv_type == "dynamic": - self.conv = DynamicConv( - self.conv_dim, - kernel_size, - padding_l=padding_l, - weight_softmax=args.weight_softmax, - num_heads=args.encoder_attention_heads, - weight_dropout=args.weight_dropout, - ) - else: - raise NotImplementedError - self.linear2 = Linear(self.conv_dim, self.embed_dim) - - self.dropout_module = FairseqDropout( - args.dropout, module_name=self.__class__.__name__ - ) - self.relu_dropout_module = FairseqDropout( - args.relu_dropout, module_name=self.__class__.__name__ - ) - self.input_dropout_module = FairseqDropout( - args.input_dropout, module_name=self.__class__.__name__ - ) - self.normalize_before = args.encoder_normalize_before - self.fc1 = Linear(self.embed_dim, args.encoder_ffn_embed_dim) - self.fc2 = Linear(args.encoder_ffn_embed_dim, self.embed_dim) - self.layer_norms = nn.ModuleList([LayerNorm(self.embed_dim) for _ in range(2)]) - - def forward(self, x, encoder_padding_mask): - """ - Args: - x (Tensor): input to the layer of shape `(seq_len, batch, embed_dim)` - encoder_padding_mask (ByteTensor): binary ByteTensor of shape - `(batch, src_len)` where padding elements are indicated by ``1``. 
- - Returns: - encoded output of shape `(batch, src_len, embed_dim)` - """ - residual = x - x = self.maybe_layer_norm(0, x, before=True) - x = self.input_dropout_module(x) - x = self.linear1(x) - if self.act is not None: - x = self.act(x) - if encoder_padding_mask is not None: - x = x.masked_fill(encoder_padding_mask.transpose(0, 1).unsqueeze(2), 0) - x = self.conv(x) - x = self.linear2(x) - x = self.dropout_module(x) - x = residual + x - x = self.maybe_layer_norm(0, x, after=True) - - residual = x - x = self.maybe_layer_norm(1, x, before=True) - x = F.relu(self.fc1(x)) - x = self.relu_dropout_module(x) - x = self.fc2(x) - x = self.dropout_module(x) - x = residual + x - x = self.maybe_layer_norm(1, x, after=True) - return x - - def maybe_layer_norm(self, i, x, before=False, after=False): - assert before ^ after - if after ^ self.normalize_before: - return self.layer_norms[i](x) - else: - return x - - def extra_repr(self): - return ( - "dropout={}, relu_dropout={}, input_dropout={}, normalize_before={}".format( - self.dropout_module.p, - self.relu_dropout_module.p, - self.input_dropout_module.p, - self.normalize_before, - ) - ) - - -class LightConvDecoderLayer(nn.Module): - """Decoder layer block. - - Args: - args (argparse.Namespace): parsed command-line arguments - no_encoder_attn (bool, optional): whether to attend to encoder outputs. - Default: ``False`` - kernel_size: kernel size of the convolution - """ - - def __init__(self, args, no_encoder_attn=False, kernel_size=0): - super().__init__() - self.embed_dim = args.decoder_embed_dim - self.conv_dim = args.decoder_conv_dim - if args.decoder_glu: - self.linear1 = Linear(self.embed_dim, 2 * self.conv_dim) - self.act = nn.GLU() - else: - self.linear1 = Linear(self.embed_dim, self.conv_dim) - self.act = None - if args.decoder_conv_type == "lightweight": - self.conv = LightweightConv( - self.conv_dim, - kernel_size, - padding_l=kernel_size - 1, - weight_softmax=args.weight_softmax, - num_heads=args.decoder_attention_heads, - weight_dropout=args.weight_dropout, - ) - elif args.decoder_conv_type == "dynamic": - self.conv = DynamicConv( - self.conv_dim, - kernel_size, - padding_l=kernel_size - 1, - weight_softmax=args.weight_softmax, - num_heads=args.decoder_attention_heads, - weight_dropout=args.weight_dropout, - ) - else: - raise NotImplementedError - self.linear2 = Linear(self.conv_dim, self.embed_dim) - - self.dropout_module = FairseqDropout( - args.dropout, module_name=self.__class__.__name__ - ) - self.relu_dropout_module = FairseqDropout( - args.relu_dropout, module_name=self.__class__.__name__ - ) - self.input_dropout_module = FairseqDropout( - args.input_dropout, module_name=self.__class__.__name__ - ) - self.normalize_before = args.decoder_normalize_before - - self.conv_layer_norm = LayerNorm(self.embed_dim) - - if no_encoder_attn: - self.encoder_attn = None - self.encoder_attn_layer_norm = None - else: - self.encoder_attn = MultiheadAttention( - self.embed_dim, - args.decoder_attention_heads, - dropout=args.attention_dropout, - encoder_decoder_attention=True, - ) - self.encoder_attn_layer_norm = LayerNorm(self.embed_dim) - - self.fc1 = Linear(self.embed_dim, args.decoder_ffn_embed_dim) - self.fc2 = Linear(args.decoder_ffn_embed_dim, self.embed_dim) - - self.final_layer_norm = LayerNorm(self.embed_dim) - self.need_attn = True - - def forward( - self, - x, - encoder_out, - encoder_padding_mask, - incremental_state, - prev_conv_state=None, - prev_attn_state=None, - conv_mask=None, - conv_padding_mask=None, - ): - """ - Args: - x 
(Tensor): input to the layer of shape `(seq_len, batch, embed_dim)` - encoder_padding_mask (ByteTensor): binary ByteTensor of shape - `(batch, src_len)` where padding elements are indicated by ``1``. - - Returns: - encoded output of shape `(batch, src_len, embed_dim)` - """ - residual = x - x = self.maybe_layer_norm(self.conv_layer_norm, x, before=True) - if prev_conv_state is not None: - if incremental_state is None: - incremental_state = {} - self.conv._set_input_buffer(incremental_state, prev_conv_state) - x = self.input_dropout_module(x) - x = self.linear1(x) - if self.act is not None: - x = self.act(x) - x = self.conv(x, incremental_state=incremental_state) - x = self.linear2(x) - x = self.dropout_module(x) - x = residual + x - x = self.maybe_layer_norm(self.conv_layer_norm, x, after=True) - - attn = None - if self.encoder_attn is not None: - residual = x - x = self.maybe_layer_norm(self.encoder_attn_layer_norm, x, before=True) - if prev_attn_state is not None: - if incremental_state is None: - incremental_state = {} - prev_key, prev_value = prev_attn_state - saved_state = {"prev_key": prev_key, "prev_value": prev_value} - self.encoder_attn._set_input_buffer(incremental_state, saved_state) - x, attn = self.encoder_attn( - query=x, - key=encoder_out, - value=encoder_out, - key_padding_mask=encoder_padding_mask, - incremental_state=incremental_state, - static_kv=True, - need_weights=(not self.training and self.need_attn), - ) - x = self.dropout_module(x) - x = residual + x - x = self.maybe_layer_norm(self.encoder_attn_layer_norm, x, after=True) - - residual = x - x = self.maybe_layer_norm(self.final_layer_norm, x, before=True) - x = F.relu(self.fc1(x)) - x = self.relu_dropout_module(x) - x = self.fc2(x) - x = self.dropout_module(x) - x = residual + x - x = self.maybe_layer_norm(self.final_layer_norm, x, after=True) - return x, attn - - def maybe_layer_norm(self, layer_norm, x, before=False, after=False): - assert before ^ after - if after ^ self.normalize_before: - return layer_norm(x) - else: - return x - - def make_generation_fast_(self, need_attn=False, **kwargs): - self.need_attn = need_attn - - def extra_repr(self): - return ( - "dropout={}, relu_dropout={}, input_dropout={}, normalize_before={}".format( - self.dropout_module.p, - self.relu_dropout_module.p, - self.input_dropout_module.p, - self.normalize_before, - ) - ) - - -def Embedding(num_embeddings, embedding_dim, padding_idx): - m = nn.Embedding(num_embeddings, embedding_dim, padding_idx=padding_idx) - nn.init.normal_(m.weight, mean=0, std=embedding_dim ** -0.5) - nn.init.constant_(m.weight[padding_idx], 0) - return m - - -def Linear(in_features, out_features, bias=True): - m = nn.Linear(in_features, out_features, bias) - nn.init.xavier_uniform_(m.weight) - if bias: - nn.init.constant_(m.bias, 0.0) - return m - - -@register_model_architecture("lightconv", "lightconv") -def base_architecture(args): - args.encoder_embed_path = getattr(args, "encoder_embed_path", None) - args.encoder_embed_dim = getattr(args, "encoder_embed_dim", 512) - args.encoder_ffn_embed_dim = getattr(args, "encoder_ffn_embed_dim", 2048) - args.encoder_layers = getattr(args, "encoder_layers", 7) - args.encoder_attention_heads = getattr(args, "encoder_attention_heads", 8) - args.encoder_normalize_before = getattr(args, "encoder_normalize_before", False) - args.encoder_learned_pos = getattr(args, "encoder_learned_pos", False) - args.decoder_embed_path = getattr(args, "decoder_embed_path", None) - args.decoder_embed_dim = getattr(args, "decoder_embed_dim", 
args.encoder_embed_dim) - args.decoder_ffn_embed_dim = getattr( - args, "decoder_ffn_embed_dim", args.encoder_ffn_embed_dim - ) - args.decoder_layers = getattr(args, "decoder_layers", 6) - args.decoder_attention_heads = getattr(args, "decoder_attention_heads", 8) - args.decoder_normalize_before = getattr(args, "decoder_normalize_before", False) - args.decoder_learned_pos = getattr(args, "decoder_learned_pos", False) - args.attention_dropout = getattr(args, "attention_dropout", 0.0) - args.relu_dropout = getattr(args, "relu_dropout", 0.0) - args.dropout = getattr(args, "dropout", 0.1) - args.adaptive_softmax_cutoff = getattr(args, "adaptive_softmax_cutoff", None) - args.adaptive_softmax_dropout = getattr(args, "adaptive_softmax_dropout", 0) - args.share_decoder_input_output_embed = getattr( - args, "share_decoder_input_output_embed", False - ) - args.share_all_embeddings = getattr(args, "share_all_embeddings", False) - args.no_token_positional_embeddings = getattr( - args, "no_token_positional_embeddings", False - ) - - args.decoder_output_dim = getattr( - args, "decoder_output_dim", args.decoder_embed_dim - ) - args.decoder_input_dim = getattr(args, "decoder_input_dim", args.decoder_embed_dim) - - args.encoder_conv_dim = getattr(args, "encoder_conv_dim", args.encoder_embed_dim) - args.decoder_conv_dim = getattr(args, "decoder_conv_dim", args.decoder_embed_dim) - - args.encoder_kernel_size_list = getattr( - args, "encoder_kernel_size_list", [3, 7, 15, 31, 31, 31, 31] - ) - args.decoder_kernel_size_list = getattr( - args, "decoder_kernel_size_list", [3, 7, 15, 31, 31, 31] - ) - if len(args.encoder_kernel_size_list) == 1: - args.encoder_kernel_size_list = ( - args.encoder_kernel_size_list * args.encoder_layers - ) - if len(args.decoder_kernel_size_list) == 1: - args.decoder_kernel_size_list = ( - args.decoder_kernel_size_list * args.decoder_layers - ) - assert ( - len(args.encoder_kernel_size_list) == args.encoder_layers - ), "encoder_kernel_size_list doesn't match encoder_layers" - assert ( - len(args.decoder_kernel_size_list) == args.decoder_layers - ), "decoder_kernel_size_list doesn't match decoder_layers" - args.encoder_glu = getattr(args, "encoder_glu", True) - args.decoder_glu = getattr(args, "decoder_glu", True) - args.input_dropout = getattr(args, "input_dropout", 0.1) - args.weight_dropout = getattr(args, "weight_dropout", args.attention_dropout) - - -@register_model_architecture("lightconv", "lightconv_iwslt_de_en") -def lightconv_iwslt_de_en(args): - args.encoder_embed_dim = getattr(args, "encoder_embed_dim", 512) - args.encoder_ffn_embed_dim = getattr(args, "encoder_ffn_embed_dim", 1024) - args.encoder_attention_heads = getattr(args, "encoder_attention_heads", 4) - args.encoder_layers = getattr(args, "encoder_layers", 7) - args.decoder_embed_dim = getattr(args, "decoder_embed_dim", 512) - args.decoder_ffn_embed_dim = getattr(args, "decoder_ffn_embed_dim", 1024) - args.decoder_attention_heads = getattr(args, "decoder_attention_heads", 4) - args.decoder_layers = getattr(args, "decoder_layers", 6) - args.attention_dropout = getattr(args, "attention_dropout", 0.1) - args.weight_dropout = getattr(args, "weight_dropout", 0.1) - args.encoder_glu = getattr(args, "encoder_glu", False) - args.decoder_glu = getattr(args, "decoder_glu", False) - args.input_dropout = getattr(args, "input_dropout", 0.0) - base_architecture(args) - - -@register_model_architecture("lightconv", "lightconv_wmt_en_de") -def lightconv_wmt_en_de(args): - base_architecture(args) - - 
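# Editor's note (illustrative sketch, not part of the deleted file above): the
# `lightconv_*` functions here show fairseq's architecture-registration pattern.
# Each named variant overrides a few hyperparameters with
# `getattr(args, name, default)` -- so values passed on the command line always
# win -- and then delegates to `base_architecture(args)` to fill in everything
# else. A custom variant would follow the same shape; the name
# "lightconv_toy_example" and its values below are hypothetical, chosen only to
# illustrate the pattern.

@register_model_architecture("lightconv", "lightconv_toy_example")
def lightconv_toy_example(args):
    # hypothetical small configuration, for illustration only
    args.encoder_embed_dim = getattr(args, "encoder_embed_dim", 256)
    args.decoder_embed_dim = getattr(args, "decoder_embed_dim", 256)
    args.encoder_layers = getattr(args, "encoder_layers", 4)
    args.decoder_layers = getattr(args, "decoder_layers", 4)
    # a single-element kernel list is tiled to one entry per layer inside
    # base_architecture(), so [3] expands to [3, 3, 3, 3] for 4 layers
    args.encoder_kernel_size_list = getattr(args, "encoder_kernel_size_list", [3])
    args.decoder_kernel_size_list = getattr(args, "decoder_kernel_size_list", [3])
    base_architecture(args)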
-@register_model_architecture("lightconv", "lightconv_wmt_en_de_big") -def lightconv_wmt_en_de_big(args): - args.attention_dropout = getattr(args, "attention_dropout", 0.1) - args.encoder_embed_dim = getattr(args, "encoder_embed_dim", 1024) - args.encoder_ffn_embed_dim = getattr(args, "encoder_ffn_embed_dim", 4096) - args.encoder_attention_heads = getattr(args, "encoder_attention_heads", 16) - args.encoder_normalize_before = getattr(args, "encoder_normalize_before", False) - args.decoder_embed_dim = getattr(args, "decoder_embed_dim", 1024) - args.decoder_ffn_embed_dim = getattr(args, "decoder_ffn_embed_dim", 4096) - args.decoder_attention_heads = getattr(args, "decoder_attention_heads", 16) - args.dropout = getattr(args, "dropout", 0.3) - base_architecture(args) - - -@register_model_architecture("lightconv", "lightconv_wmt_en_fr_big") -def lightconv_wmt_en_fr_big(args): - args.dropout = getattr(args, "dropout", 0.1) - lightconv_wmt_en_de_big(args) - - -@register_model_architecture("lightconv", "lightconv_wmt_zh_en_big") -def lightconv_wmt_zh_en_big(args): - args.dropout = getattr(args, "dropout", 0.2) - args.attention_dropout = getattr(args, "attention_dropout", 0.2) - args.weight_dropout = getattr(args, "weight_dropout", 0.2) - lightconv_wmt_en_de_big(args) diff --git a/spaces/OFA-Sys/OFA-Image_Caption/utils/cider/pyciderevalcap/ciderD/ciderD.py b/spaces/OFA-Sys/OFA-Image_Caption/utils/cider/pyciderevalcap/ciderD/ciderD.py deleted file mode 100644 index 280f9890312a76b54695b2a8c456c5d52a87e186..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-Image_Caption/utils/cider/pyciderevalcap/ciderD/ciderD.py +++ /dev/null @@ -1,58 +0,0 @@ -# Filename: ciderD.py -# -# Description: Describes the class to compute the CIDEr-D (Consensus-Based Image Description Evaluation) Metric -# by Vedantam, Zitnick, and Parikh (http://arxiv.org/abs/1411.5726) -# -# Creation Date: Sun Feb 8 14:16:54 2015 -# -# Authors: Ramakrishna Vedantam and Tsung-Yi Lin -from __future__ import absolute_import -from __future__ import division -from __future__ import print_function - -from .ciderD_scorer import CiderScorer -import pdb - -class CiderD: - """ - Main Class to compute the CIDEr metric - - """ - def __init__(self, n=4, sigma=6.0, df="corpus"): - # set cider to sum over 1 to 4-grams - self._n = n - # set the standard deviation parameter for gaussian penalty - self._sigma = sigma - # set which where to compute document frequencies from - self._df = df - self.cider_scorer = CiderScorer(n=self._n, df_mode=self._df) - - def compute_score(self, gts, res): - """ - Main function to compute CIDEr score - :param hypo_for_image (dict) : dictionary with key and value - ref_for_image (dict) : dictionary with key and value - :return: cider (float) : computed CIDEr score for the corpus - """ - - # clear all the previous hypos and refs - tmp_cider_scorer = self.cider_scorer.copy_empty() - tmp_cider_scorer.clear() - for res_id in res: - - hypo = res_id['caption'] - ref = gts[res_id['image_id']] - - # Sanity check. 
- assert(type(hypo) is list) - assert(len(hypo) == 1) - assert(type(ref) is list) - assert(len(ref) > 0) - tmp_cider_scorer += (hypo[0], ref) - - (score, scores) = tmp_cider_scorer.compute_score() - - return score, scores - - def method(self): - return "CIDEr-D" diff --git a/spaces/OFA-Sys/OFA-vqa/fairseq/fairseq/optim/lr_scheduler/pass_through.py b/spaces/OFA-Sys/OFA-vqa/fairseq/fairseq/optim/lr_scheduler/pass_through.py deleted file mode 100644 index 2f93db328c1de9b268e8ee1c0c1cad558fd089aa..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-vqa/fairseq/fairseq/optim/lr_scheduler/pass_through.py +++ /dev/null @@ -1,39 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -from dataclasses import dataclass - -from fairseq.dataclass import FairseqDataclass -from fairseq.optim.lr_scheduler import FairseqLRScheduler, register_lr_scheduler - - -@dataclass -class PassThroughScheduleConfig(FairseqDataclass): - pass - - -@register_lr_scheduler("pass_through", dataclass=PassThroughScheduleConfig) -class PassThroughScheduleSchedule(FairseqLRScheduler): - """Delegate lr scheduling to the optimizer.""" - - def __init__(self, cfg: PassThroughScheduleConfig, optimizer): - super().__init__(cfg, optimizer) - assert ( - hasattr(optimizer, "lr_scheduler") and optimizer.lr_scheduler is not None - ), "Pass-through schedule can only be used with optimizers with their own schedulers" - - def state_dict(self): - return self.optimizer.lr_scheduler.state_dict() - - def load_state_dict(self, state_dict): - self.optimizer.lr_scheduler.load_state_dict(state_dict) - - def step_begin_epoch(self, epoch): - """Update the learning rate at the beginning of the given epoch.""" - return self.optimizer.lr_scheduler.step_begin_epoch(epoch) - - def step_update(self, num_updates): - """Update the learning rate after each update.""" - return self.optimizer.lr_scheduler.step_update(num_updates) diff --git a/spaces/OpenDILabCommunity/DI-sheep/DI-sheep/ui/src/themes/mhl/index.tsx b/spaces/OpenDILabCommunity/DI-sheep/DI-sheep/ui/src/themes/mhl/index.tsx deleted file mode 100644 index c69add7504c51f88d9b865e106b2b775bc642fa4..0000000000000000000000000000000000000000 --- a/spaces/OpenDILabCommunity/DI-sheep/DI-sheep/ui/src/themes/mhl/index.tsx +++ /dev/null @@ -1,26 +0,0 @@ -import React from 'react'; -import { Theme } from '../interface'; -import { DefaultSoundNames, defaultSounds } from '../default'; - -const imagesUrls = import.meta.glob('./images/*.png', { - import: 'default', - eager: true, -}); - -const mhls = Object.entries(imagesUrls).map(([key, value]) => ({ - name: key.slice(9, -4), - // eslint-disable-next-line @typescript-eslint/ban-ts-comment - // @ts-ignore - content: , -})); - -export const mhlTheme: Theme = { - name: 'kitten', - icons: mhls.map(({ name, content }) => ({ - name, - content, - clickSound: 'button-click', - tripleSound: 'triple', - })), - sounds: defaultSounds, -}; diff --git a/spaces/OpenGVLab/InternGPT/iGPT/models/grit_src/third_party/CenterNet2/detectron2/modeling/roi_heads/__init__.py b/spaces/OpenGVLab/InternGPT/iGPT/models/grit_src/third_party/CenterNet2/detectron2/modeling/roi_heads/__init__.py deleted file mode 100644 index d13e9c57235b982f3e0645bc316de2b75755dfda..0000000000000000000000000000000000000000 --- a/spaces/OpenGVLab/InternGPT/iGPT/models/grit_src/third_party/CenterNet2/detectron2/modeling/roi_heads/__init__.py +++ /dev/null @@ -1,29 
+0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -from .box_head import ROI_BOX_HEAD_REGISTRY, build_box_head, FastRCNNConvFCHead -from .keypoint_head import ( - ROI_KEYPOINT_HEAD_REGISTRY, - build_keypoint_head, - BaseKeypointRCNNHead, - KRCNNConvDeconvUpsampleHead, -) -from .mask_head import ( - ROI_MASK_HEAD_REGISTRY, - build_mask_head, - BaseMaskRCNNHead, - MaskRCNNConvUpsampleHead, -) -from .roi_heads import ( - ROI_HEADS_REGISTRY, - ROIHeads, - Res5ROIHeads, - StandardROIHeads, - build_roi_heads, - select_foreground_proposals, -) -from .cascade_rcnn import CascadeROIHeads -from .rotated_fast_rcnn import RROIHeads -from .fast_rcnn import FastRCNNOutputLayers - -from . import cascade_rcnn # isort:skip - -__all__ = list(globals().keys()) diff --git a/spaces/OptimalScale/Robin-33b/lmflow/utils/__init__.py b/spaces/OptimalScale/Robin-33b/lmflow/utils/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/OthmaneJ/transcribe-distil-wav2vec2/README.md b/spaces/OthmaneJ/transcribe-distil-wav2vec2/README.md deleted file mode 100644 index cf1bbff05ef4f6abacc515a9059d09f1f9243509..0000000000000000000000000000000000000000 --- a/spaces/OthmaneJ/transcribe-distil-wav2vec2/README.md +++ /dev/null @@ -1,37 +0,0 @@ ---- -title: Transcribe Distil Wav2vec2 -emoji: 🐠 -colorFrom: red -colorTo: blue -sdk: gradio -app_file: app.py -pinned: false ---- - -# Configuration - -`title`: _string_ -Display title for the Space - -`emoji`: _string_ -Space emoji (emoji-only character allowed) - -`colorFrom`: _string_ -Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray) - -`colorTo`: _string_ -Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray) - -`sdk`: _string_ -Can be either `gradio` or `streamlit` - -`sdk_version` : _string_ -Only applicable for `streamlit` SDK. -See [doc](https://hf.co/docs/hub/spaces) for more info on supported versions. - -`app_file`: _string_ -Path to your main application file (which contains either `gradio` or `streamlit` Python code). -Path is relative to the root of the repository. - -`pinned`: _boolean_ -Whether the Space stays on top of your list. diff --git a/spaces/PAIR/Text2Video-Zero/annotator/uniformer/mmcv/video/processing.py b/spaces/PAIR/Text2Video-Zero/annotator/uniformer/mmcv/video/processing.py deleted file mode 100644 index 3d90b96e0823d5f116755e7f498d25d17017224a..0000000000000000000000000000000000000000 --- a/spaces/PAIR/Text2Video-Zero/annotator/uniformer/mmcv/video/processing.py +++ /dev/null @@ -1,160 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import os -import os.path as osp -import subprocess -import tempfile - -from annotator.uniformer.mmcv.utils import requires_executable - - -@requires_executable('ffmpeg') -def convert_video(in_file, - out_file, - print_cmd=False, - pre_options='', - **kwargs): - """Convert a video with ffmpeg. - - This provides a general api to ffmpeg, the executed command is:: - - `ffmpeg -y -i ` - - Options(kwargs) are mapped to ffmpeg commands with the following rules: - - - key=val: "-key val" - - key=True: "-key" - - key=False: "" - - Args: - in_file (str): Input video filename. - out_file (str): Output video filename. - pre_options (str): Options appears before "-i ". - print_cmd (bool): Whether to print the final ffmpeg command. 
- """ - options = [] - for k, v in kwargs.items(): - if isinstance(v, bool): - if v: - options.append(f'-{k}') - elif k == 'log_level': - assert v in [ - 'quiet', 'panic', 'fatal', 'error', 'warning', 'info', - 'verbose', 'debug', 'trace' - ] - options.append(f'-loglevel {v}') - else: - options.append(f'-{k} {v}') - cmd = f'ffmpeg -y {pre_options} -i {in_file} {" ".join(options)} ' \ - f'{out_file}' - if print_cmd: - print(cmd) - subprocess.call(cmd, shell=True) - - -@requires_executable('ffmpeg') -def resize_video(in_file, - out_file, - size=None, - ratio=None, - keep_ar=False, - log_level='info', - print_cmd=False): - """Resize a video. - - Args: - in_file (str): Input video filename. - out_file (str): Output video filename. - size (tuple): Expected size (w, h), eg, (320, 240) or (320, -1). - ratio (tuple or float): Expected resize ratio, (2, 0.5) means - (w*2, h*0.5). - keep_ar (bool): Whether to keep original aspect ratio. - log_level (str): Logging level of ffmpeg. - print_cmd (bool): Whether to print the final ffmpeg command. - """ - if size is None and ratio is None: - raise ValueError('expected size or ratio must be specified') - if size is not None and ratio is not None: - raise ValueError('size and ratio cannot be specified at the same time') - options = {'log_level': log_level} - if size: - if not keep_ar: - options['vf'] = f'scale={size[0]}:{size[1]}' - else: - options['vf'] = f'scale=w={size[0]}:h={size[1]}:' \ - 'force_original_aspect_ratio=decrease' - else: - if not isinstance(ratio, tuple): - ratio = (ratio, ratio) - options['vf'] = f'scale="trunc(iw*{ratio[0]}):trunc(ih*{ratio[1]})"' - convert_video(in_file, out_file, print_cmd, **options) - - -@requires_executable('ffmpeg') -def cut_video(in_file, - out_file, - start=None, - end=None, - vcodec=None, - acodec=None, - log_level='info', - print_cmd=False): - """Cut a clip from a video. - - Args: - in_file (str): Input video filename. - out_file (str): Output video filename. - start (None or float): Start time (in seconds). - end (None or float): End time (in seconds). - vcodec (None or str): Output video codec, None for unchanged. - acodec (None or str): Output audio codec, None for unchanged. - log_level (str): Logging level of ffmpeg. - print_cmd (bool): Whether to print the final ffmpeg command. - """ - options = {'log_level': log_level} - if vcodec is None: - options['vcodec'] = 'copy' - if acodec is None: - options['acodec'] = 'copy' - if start: - options['ss'] = start - else: - start = 0 - if end: - options['t'] = end - start - convert_video(in_file, out_file, print_cmd, **options) - - -@requires_executable('ffmpeg') -def concat_video(video_list, - out_file, - vcodec=None, - acodec=None, - log_level='info', - print_cmd=False): - """Concatenate multiple videos into a single one. - - Args: - video_list (list): A list of video filenames - out_file (str): Output video filename - vcodec (None or str): Output video codec, None for unchanged - acodec (None or str): Output audio codec, None for unchanged - log_level (str): Logging level of ffmpeg. - print_cmd (bool): Whether to print the final ffmpeg command. 
- """ - tmp_filehandler, tmp_filename = tempfile.mkstemp(suffix='.txt', text=True) - with open(tmp_filename, 'w') as f: - for filename in video_list: - f.write(f'file {osp.abspath(filename)}\n') - options = {'log_level': log_level} - if vcodec is None: - options['vcodec'] = 'copy' - if acodec is None: - options['acodec'] = 'copy' - convert_video( - tmp_filename, - out_file, - print_cmd, - pre_options='-f concat -safe 0', - **options) - os.close(tmp_filehandler) - os.remove(tmp_filename) diff --git a/spaces/PKUWilliamYang/VToonify/vtoonify/model/raft/core/extractor.py b/spaces/PKUWilliamYang/VToonify/vtoonify/model/raft/core/extractor.py deleted file mode 100644 index 9a9c759d1243d4694e8656c2f6f8a37e53edd009..0000000000000000000000000000000000000000 --- a/spaces/PKUWilliamYang/VToonify/vtoonify/model/raft/core/extractor.py +++ /dev/null @@ -1,267 +0,0 @@ -import torch -import torch.nn as nn -import torch.nn.functional as F - - -class ResidualBlock(nn.Module): - def __init__(self, in_planes, planes, norm_fn='group', stride=1): - super(ResidualBlock, self).__init__() - - self.conv1 = nn.Conv2d(in_planes, planes, kernel_size=3, padding=1, stride=stride) - self.conv2 = nn.Conv2d(planes, planes, kernel_size=3, padding=1) - self.relu = nn.ReLU(inplace=True) - - num_groups = planes // 8 - - if norm_fn == 'group': - self.norm1 = nn.GroupNorm(num_groups=num_groups, num_channels=planes) - self.norm2 = nn.GroupNorm(num_groups=num_groups, num_channels=planes) - if not stride == 1: - self.norm3 = nn.GroupNorm(num_groups=num_groups, num_channels=planes) - - elif norm_fn == 'batch': - self.norm1 = nn.BatchNorm2d(planes) - self.norm2 = nn.BatchNorm2d(planes) - if not stride == 1: - self.norm3 = nn.BatchNorm2d(planes) - - elif norm_fn == 'instance': - self.norm1 = nn.InstanceNorm2d(planes) - self.norm2 = nn.InstanceNorm2d(planes) - if not stride == 1: - self.norm3 = nn.InstanceNorm2d(planes) - - elif norm_fn == 'none': - self.norm1 = nn.Sequential() - self.norm2 = nn.Sequential() - if not stride == 1: - self.norm3 = nn.Sequential() - - if stride == 1: - self.downsample = None - - else: - self.downsample = nn.Sequential( - nn.Conv2d(in_planes, planes, kernel_size=1, stride=stride), self.norm3) - - - def forward(self, x): - y = x - y = self.relu(self.norm1(self.conv1(y))) - y = self.relu(self.norm2(self.conv2(y))) - - if self.downsample is not None: - x = self.downsample(x) - - return self.relu(x+y) - - - -class BottleneckBlock(nn.Module): - def __init__(self, in_planes, planes, norm_fn='group', stride=1): - super(BottleneckBlock, self).__init__() - - self.conv1 = nn.Conv2d(in_planes, planes//4, kernel_size=1, padding=0) - self.conv2 = nn.Conv2d(planes//4, planes//4, kernel_size=3, padding=1, stride=stride) - self.conv3 = nn.Conv2d(planes//4, planes, kernel_size=1, padding=0) - self.relu = nn.ReLU(inplace=True) - - num_groups = planes // 8 - - if norm_fn == 'group': - self.norm1 = nn.GroupNorm(num_groups=num_groups, num_channels=planes//4) - self.norm2 = nn.GroupNorm(num_groups=num_groups, num_channels=planes//4) - self.norm3 = nn.GroupNorm(num_groups=num_groups, num_channels=planes) - if not stride == 1: - self.norm4 = nn.GroupNorm(num_groups=num_groups, num_channels=planes) - - elif norm_fn == 'batch': - self.norm1 = nn.BatchNorm2d(planes//4) - self.norm2 = nn.BatchNorm2d(planes//4) - self.norm3 = nn.BatchNorm2d(planes) - if not stride == 1: - self.norm4 = nn.BatchNorm2d(planes) - - elif norm_fn == 'instance': - self.norm1 = nn.InstanceNorm2d(planes//4) - self.norm2 = nn.InstanceNorm2d(planes//4) - 
self.norm3 = nn.InstanceNorm2d(planes) - if not stride == 1: - self.norm4 = nn.InstanceNorm2d(planes) - - elif norm_fn == 'none': - self.norm1 = nn.Sequential() - self.norm2 = nn.Sequential() - self.norm3 = nn.Sequential() - if not stride == 1: - self.norm4 = nn.Sequential() - - if stride == 1: - self.downsample = None - - else: - self.downsample = nn.Sequential( - nn.Conv2d(in_planes, planes, kernel_size=1, stride=stride), self.norm4) - - - def forward(self, x): - y = x - y = self.relu(self.norm1(self.conv1(y))) - y = self.relu(self.norm2(self.conv2(y))) - y = self.relu(self.norm3(self.conv3(y))) - - if self.downsample is not None: - x = self.downsample(x) - - return self.relu(x+y) - -class BasicEncoder(nn.Module): - def __init__(self, output_dim=128, norm_fn='batch', dropout=0.0): - super(BasicEncoder, self).__init__() - self.norm_fn = norm_fn - - if self.norm_fn == 'group': - self.norm1 = nn.GroupNorm(num_groups=8, num_channels=64) - - elif self.norm_fn == 'batch': - self.norm1 = nn.BatchNorm2d(64) - - elif self.norm_fn == 'instance': - self.norm1 = nn.InstanceNorm2d(64) - - elif self.norm_fn == 'none': - self.norm1 = nn.Sequential() - - self.conv1 = nn.Conv2d(3, 64, kernel_size=7, stride=2, padding=3) - self.relu1 = nn.ReLU(inplace=True) - - self.in_planes = 64 - self.layer1 = self._make_layer(64, stride=1) - self.layer2 = self._make_layer(96, stride=2) - self.layer3 = self._make_layer(128, stride=2) - - # output convolution - self.conv2 = nn.Conv2d(128, output_dim, kernel_size=1) - - self.dropout = None - if dropout > 0: - self.dropout = nn.Dropout2d(p=dropout) - - for m in self.modules(): - if isinstance(m, nn.Conv2d): - nn.init.kaiming_normal_(m.weight, mode='fan_out', nonlinearity='relu') - elif isinstance(m, (nn.BatchNorm2d, nn.InstanceNorm2d, nn.GroupNorm)): - if m.weight is not None: - nn.init.constant_(m.weight, 1) - if m.bias is not None: - nn.init.constant_(m.bias, 0) - - def _make_layer(self, dim, stride=1): - layer1 = ResidualBlock(self.in_planes, dim, self.norm_fn, stride=stride) - layer2 = ResidualBlock(dim, dim, self.norm_fn, stride=1) - layers = (layer1, layer2) - - self.in_planes = dim - return nn.Sequential(*layers) - - - def forward(self, x): - - # if input is list, combine batch dimension - is_list = isinstance(x, tuple) or isinstance(x, list) - if is_list: - batch_dim = x[0].shape[0] - x = torch.cat(x, dim=0) - - x = self.conv1(x) - x = self.norm1(x) - x = self.relu1(x) - - x = self.layer1(x) - x = self.layer2(x) - x = self.layer3(x) - - x = self.conv2(x) - - if self.training and self.dropout is not None: - x = self.dropout(x) - - if is_list: - x = torch.split(x, [batch_dim, batch_dim], dim=0) - - return x - - -class SmallEncoder(nn.Module): - def __init__(self, output_dim=128, norm_fn='batch', dropout=0.0): - super(SmallEncoder, self).__init__() - self.norm_fn = norm_fn - - if self.norm_fn == 'group': - self.norm1 = nn.GroupNorm(num_groups=8, num_channels=32) - - elif self.norm_fn == 'batch': - self.norm1 = nn.BatchNorm2d(32) - - elif self.norm_fn == 'instance': - self.norm1 = nn.InstanceNorm2d(32) - - elif self.norm_fn == 'none': - self.norm1 = nn.Sequential() - - self.conv1 = nn.Conv2d(3, 32, kernel_size=7, stride=2, padding=3) - self.relu1 = nn.ReLU(inplace=True) - - self.in_planes = 32 - self.layer1 = self._make_layer(32, stride=1) - self.layer2 = self._make_layer(64, stride=2) - self.layer3 = self._make_layer(96, stride=2) - - self.dropout = None - if dropout > 0: - self.dropout = nn.Dropout2d(p=dropout) - - self.conv2 = nn.Conv2d(96, output_dim, 
kernel_size=1) - - for m in self.modules(): - if isinstance(m, nn.Conv2d): - nn.init.kaiming_normal_(m.weight, mode='fan_out', nonlinearity='relu') - elif isinstance(m, (nn.BatchNorm2d, nn.InstanceNorm2d, nn.GroupNorm)): - if m.weight is not None: - nn.init.constant_(m.weight, 1) - if m.bias is not None: - nn.init.constant_(m.bias, 0) - - def _make_layer(self, dim, stride=1): - layer1 = BottleneckBlock(self.in_planes, dim, self.norm_fn, stride=stride) - layer2 = BottleneckBlock(dim, dim, self.norm_fn, stride=1) - layers = (layer1, layer2) - - self.in_planes = dim - return nn.Sequential(*layers) - - - def forward(self, x): - - # if input is list, combine batch dimension - is_list = isinstance(x, tuple) or isinstance(x, list) - if is_list: - batch_dim = x[0].shape[0] - x = torch.cat(x, dim=0) - - x = self.conv1(x) - x = self.norm1(x) - x = self.relu1(x) - - x = self.layer1(x) - x = self.layer2(x) - x = self.layer3(x) - x = self.conv2(x) - - if self.training and self.dropout is not None: - x = self.dropout(x) - - if is_list: - x = torch.split(x, [batch_dim, batch_dim], dim=0) - - return x diff --git a/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/lilypond/2.24.2/ccache/lily/font-encodings.go b/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/lilypond/2.24.2/ccache/lily/font-encodings.go deleted file mode 100644 index 825c989540a5a15236795b85e29fdce7b8f4af7e..0000000000000000000000000000000000000000 Binary files a/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/lilypond/2.24.2/ccache/lily/font-encodings.go and /dev/null differ diff --git a/spaces/Proxdigestpills1/README/README.md b/spaces/Proxdigestpills1/README/README.md deleted file mode 100644 index 123b6fe172b61efe12ce67aee1e4830d3c1dbd91..0000000000000000000000000000000000000000 --- a/spaces/Proxdigestpills1/README/README.md +++ /dev/null @@ -1,163 +0,0 @@ -[![](https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjr4pXioqlPFZnZnv3a0Szq1CCSRS-gsuATvN-yz86r9qcEkg4hXvZJOp6kodpRsWK7Hh-6ot1fTpgdZAdGg_XtGxDZQfqMXDnZe4QohCQh0Nig9zJ6SwqDaIlWoXTf9cW1wNELCsuejGdGKT_-hOit2dxKGoeskXKC-KrRQPhgObPI0t8N_6GsaCX2/w640-h306/Screenshot%20(438).png)](https://www.glitco.com/get-pro-x-digest) - -What is Pro X Digest? -===================== - -Pro X Digest is a digestive health supplement featuring a blend of enzymes, probiotics, and other ingredients to support healthy digestion. - -Millions of people are diagnosed with a digestive disorder each year. As you get older, your risk of developing a digestive disorder increases. - -Pro X Digest claims to help by using a blend of natural ingredients to target the root cause of digestive discomfort. The blend of enzymes and probiotics can make it easier to break down food, helping your body digest everything you eat. - -Pro X Digest is made in the United States in an FDA-registered, GMP-certified facility. The manufacturer is based in West Jordan, Utah. - -**Pro X Digest Benefits** -------------------------- - -Pro X Digest contains a blend of digestive enzymes and probiotics to support healthy digestion, immunity, and overall health and wellness. 
- -### **[Here are some of the benefits of Pro X Digest, according to the official website:](https://www.glitco.com/get-pro-x-digest)** - -    All natural way to help with your digestive system - -    Keep your digestive system healthy and regular - -    Natural digestive enzymes to break down proteins, fats, oils, and carbs - -    Natural probiotics to support good bacteria, immune function, and overall gut health - -    Backed by 60 day moneyback guarantee - -    Made in the United States in FDA-registered, GMP-certified facility - -Order your supply of Pro X Digest now and start enjoying the benefits! - -**How Does Pro X Digest Work?** -------------------------------- - -Pro X Digest works using a blend of two main categories of ingredients: digestive enzymes and probiotics. The two ingredients work in different ways to support good digestion. - -Digestive enzymes, for example, help to break down the food you eat and extract its nutritional value. If you don’t have sufficient levels of digestive enzymes, then your body struggles to break down certain foods. - -Many people feel bloated after a protein-rich meal or protein shake, for example. This could be due to a lack of protease, a digestive enzyme to help break down protein. Others feel bloated or uncomfortable after dairy products, which could be caused by a lack of the lactase enzyme, which helps to break down the lactose protein in dairy. - -In addition to digestive enzymes, Pro X Digest contains probiotics, or good gut bacteria to help your gut flourish. A healthy gut is filled with billions of living bacteria that contribute to immunity, food breakdown, and overall gut wellness. People with poor gut health tend to have a less diverse gut microbiome than others. People with strong gut health tend to have thriving levels of billions of probiotic bacteria. - -Overall, Pro X Digest contains a blend of proven ingredients to target digestion in multiple ways. There are 3 probiotic strains, 7 digestive enzymes, and 1 fungi to help support digestive health and overall digestive balance. - -### **[Also Read: What Do You Mean by Gut Health Or Probiotic Supplements?](https://www.glitco.com/get-pro-x-digest)** - -**Pro X Digest Ingredients** ----------------------------- - -Pro X Digest contains a blend of two categories of ingredients: digestive enzymes and probiotic supplements. - -Digestive enzymes help to break down the foods you eat, while probiotics help your gut maintain a healthy balance overall. Enzymes can help extract nutrients, while probiotics can support immunity, weight loss, energy, metabolism, and other features linked to digestion. - -All three probiotics in Pro X Digest are part of the Lactobacillus family, including L. acidophilus, L. casei, and L. plantarum. - -Here are all of the ingredients in Pro X Digest and how they work, according to the manufacture: - -### [**Click here to order while supplies last!**](https://www.glitco.com/get-pro-x-digest) - -**Lactobacillus Acidophilus:** Lactobacillus acidophilus promotes the growth of good bacteria and helps treat digestive disorders, according to the manufacturer. Common digestive disorders include irritable bowel syndrome (IBS) or indigestion. Some also have poor probiotic balance because of Crohn’s disease, celiac disease, lactose intolerance, or other conditions. Although L. acidophilus can’t help with all of these, it’s found in many probiotic supplements and prized for its effects on overall gut balance. 
- -**Lactobacillus Casei:** Lactobacillus casei is a common probiotic found in your digestive tract. Like other probiotics, L. casei is considered friendly because it plays a valuable role in digestion and immunity. One study found L. casei increased the activity of natural killer (NK) cells, for example, while other studies have linked L. casei to general digestive health and discomfort. - -**Lactobacillus Plantarum:** The third probiotic strain in Pro X Digest and the third member of the Lactobacillus family, L. plantarum can improve cognitive function and help with gut immunity, according to the manufacturer. Over 70% of your immune system is found in your gut. If your gut bacteria are imbalanced, then your body’s immune system may struggle to defend itself. You need a balanced gut and thriving microflora to maintain good immunity, and Lactobacillus plantarum could help with that. - -**Bromelain**: Bromelain is a digestive enzyme found in pineapple. Many nutritional supplements contain bromelain from pineapple for its effects on digestion and the overall breakdown of food. Studies have linked bromelain to a range of effects – from weight loss to immune function. Today, many people take bromelain supplements daily for overall health and wellness. - -**Papain**: Papain is a digestive enzyme similar to bromelain. However, instead of coming from pineapple, papain comes from papaya. Papain can break down food for better digestion while helping to relieve bloating, constipation, and gas, according to the makers of Pro X Digest. - -**Aspergillus Oryzae:** Aspergillus oryzae is a fungus or mold used in food manufacturing in East Asia. It’s particularly common in fermented foods in Japan and China, for example. The makers of Pro X Digest added this unique ingredient to the formula to improve cognitive function and aid gut immunity. According to the manufacturer, the mold can support brain health and gut immunity, working in a similar way to probiotics. - -**Protease**: Pro X Digest contains protease, an enzyme designed to break down proteins. If you feel bloated or uncomfortable after eating protein, then you may need more protease. Pro X Digest can help your body break down protein, process its nutritional value, and absorb the maximum amount of protein from your foods. - -**Lipase**: Pro X Digest contains lipase, an enzyme to break down fats and oils. Many people feel bloated after eating a meal high in fats and oils. Pro X Digest can help by giving you a strong dose of lipase. Your body normally makes lipase in your pancreas. However, your salivary (spit) glands and stomach also produce lipase. As food enters your mouth, travels through your stomach, and enters your digestive tract, lipase helps to break down food along the way. As Mount Sinai explains, studies show lipase supplements can help reduce bloating, gas, and fullness after large meals. - -**Amylase**: Pro X Digest contains amylase, an enzyme to break down carbs. Like lipase and protease, amylase is designed to help your body process a specific type of ingredient: carbs. Your body produces amylase from its pancreas and salivary glands. Like lipase, amylase helps to break down food as it travels from your mouth throughout your digestive tract. Some people undergo amylase testing if unsure about the cause of their digestive problems. - -**Lactase:** Pro X Digest contains lactase, an enzyme that breaks down dairy. 
Some people naturally have less lactase than others, making it difficult to digest the lactose, or milk sugars, in dairy foods and beverages. Pro X Digest can help by breaking down these milk sugars to help you digest milk products more efficiently. Even if you don’t consume dairy, lactase can contribute to overall digestive comfort. - -**Alpha Galactosidase:** Pro X Digest contains alpha galactosidase, an enzyme involved in the metabolism of glycolipids, a specific type of fat that may contribute to digestive discomfort. A 2007 study showed alpha galactosidase supplementation led to a significant reduction in gas after a large meal. - -The makers of Pro X Digest claim all ingredients are tested by third-party labs to verify purity and potency. The company also assembles all ingredients together in the United States at an FDA-registered, GMP-certified facility. - -**[(Limited Supply) Order Pro X Digest Before Supplies Run Out!!](https://www.glitco.com/get-pro-x-digest)** - -Scientific Evidence for Pro X Digest ------------------------------------- - -As proof Pro X Digest works, the company cites several studies linking each of the ingredients to various health effects. We’ll review some of that scientific evidence below to validate the claims made on the **[Pro X Digest website.](https://www.npmjs.com/package/pro-x-digest-buy-official-site)** - -Pro X Digest contains alpha galactosidase, a digestive enzyme linked to health and wellness. In a 2000 study, researchers found the enzyme could play a valuable role in enzyme therapy. By taking alpha galactosidase enzymes from healthy adults and giving them to patients with enzyme deficiency, researchers found they could restore normal levels of enzymes. Alpha galactosidase appears to be particularly important for breaking down carbs. - -Lactobacillus casei has a long history of use as a probiotic supplement and overall digestive aid. In a 2019 study published in Nutrients, researchers found Lactobacillus casei could be beneficial for modulating gut microbiota. Researchers found people who took a L. casei supplement – like Pro X Digest – tended to have higher levels of L. casei in their system after taking the supplement, and those higher levels were linked to lower rates of diarrhea and other digestive issues. - -Lactobacillus acidophilus is backed by similar studies. A 2020 study found Lactobacillus acidophilus could help manage gastrointestinal disorders. Researchers found ample evidence L. acidophilus could help with acute diarrhea, chronic diarrhea, antibiotic-associated digestive problems, and even immune problems linked to the gut, among other benefits. - -As the National Center for Complementary and Integrative Health explains, bromelain is a group of enzymes found in the fruit and stem of the pineapple plant. Today, some take bromelain to reduce pain and swelling. Others take it for digestive problems. Some early studies have linked bromelain to promising digestive effects, although we need more research to conclusively make this connection. - -Aspergillus oryzae is one of the more unique ingredients in Pro X Digest. You can find plenty of digestive enzyme supplements and probiotic formulas online. However, aspergillus oryzae fills a more unique role. Also known as koji mold, A. oryzae is commonly used in food manufacturing. A 1999 study found the ingredient was commonly used in sake, miso, and soy sauce production in Japan, for example, describing its role as “pivotal” in food manufacturing. 
According to the makers of Pro X Digest, this same koji mold has powerful effects on cognition and digestion. - -Overall, Pro X Digest contains a blend of science-backed digestive enzymes and probiotics designed to support gut health in multiple ways. Although we don’t know specific dose or concentration information, Pro X Digest could work to support gut health by breaking down food, boosting immunity, and helping your digestive system function like normal. - -**[Place your order today by clicking here before stock runs out! >>>](https://www.glitco.com/get-pro-x-digest)** - -**How to Take Pro X Digest** ----------------------------- - -The makers of Pro X Digest recommend taking one capsule of Pro X Digest twice a day. Or, for best results, take it 20 to 30 minutes before a meal: - -    Take 1 capsule (1 serving) of Pro X Digest 2 times per day - -    For best results, take Pro X Digest 20 to 30 minutes before a meal - -Pro X Digest Pricing --------------------- - -Pro X Digest is normally priced at $199 per bottle. As part of a 2023 promotion, however, the manufacturer has reduced the price to just $59 per bottle. You can save even more money by ordering multiple bottles, which drops the price to $39 per bottle and comes bundled with free bonuses. - -[![](https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEihsrKty5zRCY5zotu8ogGdSReEl2tuAw1H1aLDUc_Z9BfoGgQeOMrX92odsEAEvWym2Z7ZQkRfS9xcgcIAsMniD3uUZASP4cIeTszPOwGdseqJK7kpnXKWgrvtQ5A5B6uvpL7MqOOCFFGfP9lsxwFR-lqXEZ92druCVJbPmG6a4ZnQvlOspw_ybTDG/w640-h344/Screenshot%20(440).png)](https://www.glitco.com/get-pro-x-digest) - -### **Here’s how pricing works when ordering online today:** - -    1 Bottle: $59 + Shipping - -    3 Bottles: $147 ($49 Per Bottle) + 1 Free Bonus + Shipping - -    6 Bottles: $234 ($39 Per Bottle) + 1 Free Bonus + Free Shipping - -#### **Order Pro X Digest Right Here At The Best Prices!!** - -Each bottle contains a 30 day supply of Pro X Digest, or 30 servings. You take one serving daily to help with digestion. - -Pro X Digest Refund Policy --------------------------- - -Pro X Digest comes with a 60 day moneyback guarantee. You can request a refund on your purchase within 60 days with no questions asked if you’re unhappy with the supplement for any reason. - -**Returns Address: Health Heroes 8152 S. Welby Park Dr Ste B, West Jordan, UT 84088** - -**About Health Heroes** ------------------------ - -Pro X Digest is made in the United States in an FDA-registered, GMP-certified facility by a Utah-based company named Health Heroes. The company manufactures the supplement using natural ingredients. - -**You can contact the makers of Pro X Digest and the company’s customer service team via the following:** - -    Email: [support@proxdigest.com](https://www.glitco.com/get-pro-x-digest) - -    Phone: 702-859-3292 - -    Registered Address: Health Heroes 8152 S. Welby Park Dr Ste B, West Jordan, UT 84088 - -**Final Word** --------------- - -Pro X Digest is a digestive health supplement available exclusively online. Made by a West Jordan, Utah-based company, Pro X Digest features a blend of digestive enzymes and probiotics to support gut health. - -Millions of Americans deal with bloating and digestive discomfort after meals. In many cases, these problems are linked to low digestive enzyme levels or poor probiotic balance. Pro X Digest aims to solve both of these issues. 
- -To learn more about Pro X Digest and how it works or to buy the digestive health supplement today, **[visit the official website.](https://www.glitco.com/get-pro-x-digest)** \ No newline at end of file diff --git a/spaces/Raghav001/Experiment/README.md b/spaces/Raghav001/Experiment/README.md deleted file mode 100644 index e055b5bb6296ba8cee13d0d5f89ae23c87b9a390..0000000000000000000000000000000000000000 --- a/spaces/Raghav001/Experiment/README.md +++ /dev/null @@ -1,14 +0,0 @@ ---- -title: ChatPDF -emoji: 💻 -colorFrom: gray -colorTo: yellow -sdk: gradio -sdk_version: 3.20.0 -app_file: app.py -pinned: false -license: apache-2.0 -duplicated_from: Raghav001/DocTalk ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/Rami/validate_chat_utd/README.md b/spaces/Rami/validate_chat_utd/README.md deleted file mode 100644 index c80ed2fa95b5cf379345a87e4a8f9da0c9a99857..0000000000000000000000000000000000000000 --- a/spaces/Rami/validate_chat_utd/README.md +++ /dev/null @@ -1,11 +0,0 @@ ---- -title: Validate Chat Utd -emoji: 🌍 -colorFrom: green -colorTo: yellow -sdk: docker -pinned: false -license: openrail ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_vendor/pygments/formatters/svg.py b/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_vendor/pygments/formatters/svg.py deleted file mode 100644 index 075150a4b586d668c1666513fbf90463cdbb11ab..0000000000000000000000000000000000000000 --- a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_vendor/pygments/formatters/svg.py +++ /dev/null @@ -1,188 +0,0 @@ -""" - pygments.formatters.svg - ~~~~~~~~~~~~~~~~~~~~~~~ - - Formatter for SVG output. - - :copyright: Copyright 2006-2022 by the Pygments team, see AUTHORS. - :license: BSD, see LICENSE for details. -""" - -from pip._vendor.pygments.formatter import Formatter -from pip._vendor.pygments.token import Comment -from pip._vendor.pygments.util import get_bool_opt, get_int_opt - -__all__ = ['SvgFormatter'] - - -def escape_html(text): - """Escape &, <, > as well as single and double quotes for HTML.""" - return text.replace('&', '&'). \ - replace('<', '<'). \ - replace('>', '>'). \ - replace('"', '"'). \ - replace("'", ''') - - -class2style = {} - -class SvgFormatter(Formatter): - """ - Format tokens as an SVG graphics file. This formatter is still experimental. - Each line of code is a ```` element with explicit ``x`` and ``y`` - coordinates containing ```` elements with the individual token styles. - - By default, this formatter outputs a full SVG document including doctype - declaration and the ```` root element. - - .. versionadded:: 0.9 - - Additional options accepted: - - `nowrap` - Don't wrap the SVG ```` elements in ```` elements and - don't add a XML declaration and a doctype. If true, the `fontfamily` - and `fontsize` options are ignored. Defaults to ``False``. - - `fontfamily` - The value to give the wrapping ```` element's ``font-family`` - attribute, defaults to ``"monospace"``. - - `fontsize` - The value to give the wrapping ```` element's ``font-size`` - attribute, defaults to ``"14px"``. - - `linenos` - If ``True``, add line numbers (default: ``False``). - - `linenostart` - The line number for the first line (default: ``1``). - - `linenostep` - If set to a number n > 1, only every nth line number is printed. 
- - `linenowidth` - Maximum width devoted to line numbers (default: ``3*ystep``, sufficient - for up to 4-digit line numbers. Increase width for longer code blocks). - - `xoffset` - Starting offset in X direction, defaults to ``0``. - - `yoffset` - Starting offset in Y direction, defaults to the font size if it is given - in pixels, or ``20`` else. (This is necessary since text coordinates - refer to the text baseline, not the top edge.) - - `ystep` - Offset to add to the Y coordinate for each subsequent line. This should - roughly be the text size plus 5. It defaults to that value if the text - size is given in pixels, or ``25`` else. - - `spacehack` - Convert spaces in the source to `` ``, which are non-breaking - spaces. SVG provides the ``xml:space`` attribute to control how - whitespace inside tags is handled, in theory, the ``preserve`` value - could be used to keep all whitespace as-is. However, many current SVG - viewers don't obey that rule, so this option is provided as a workaround - and defaults to ``True``. - """ - name = 'SVG' - aliases = ['svg'] - filenames = ['*.svg'] - - def __init__(self, **options): - Formatter.__init__(self, **options) - self.nowrap = get_bool_opt(options, 'nowrap', False) - self.fontfamily = options.get('fontfamily', 'monospace') - self.fontsize = options.get('fontsize', '14px') - self.xoffset = get_int_opt(options, 'xoffset', 0) - fs = self.fontsize.strip() - if fs.endswith('px'): fs = fs[:-2].strip() - try: - int_fs = int(fs) - except: - int_fs = 20 - self.yoffset = get_int_opt(options, 'yoffset', int_fs) - self.ystep = get_int_opt(options, 'ystep', int_fs + 5) - self.spacehack = get_bool_opt(options, 'spacehack', True) - self.linenos = get_bool_opt(options,'linenos',False) - self.linenostart = get_int_opt(options,'linenostart',1) - self.linenostep = get_int_opt(options,'linenostep',1) - self.linenowidth = get_int_opt(options,'linenowidth', 3*self.ystep) - self._stylecache = {} - - def format_unencoded(self, tokensource, outfile): - """ - Format ``tokensource``, an iterable of ``(tokentype, tokenstring)`` - tuples and write it into ``outfile``. - - For our implementation we put all lines in their own 'line group'. 
- """ - x = self.xoffset - y = self.yoffset - if not self.nowrap: - if self.encoding: - outfile.write('\n' % - self.encoding) - else: - outfile.write('\n') - outfile.write('\n') - outfile.write('\n') - outfile.write('\n' % - (self.fontfamily, self.fontsize)) - - counter = self.linenostart - counter_step = self.linenostep - counter_style = self._get_style(Comment) - line_x = x - - if self.linenos: - if counter % counter_step == 0: - outfile.write('%s' % - (x+self.linenowidth,y,counter_style,counter)) - line_x += self.linenowidth + self.ystep - counter += 1 - - outfile.write('' % (line_x, y)) - for ttype, value in tokensource: - style = self._get_style(ttype) - tspan = style and '' or '' - tspanend = tspan and '' or '' - value = escape_html(value) - if self.spacehack: - value = value.expandtabs().replace(' ', ' ') - parts = value.split('\n') - for part in parts[:-1]: - outfile.write(tspan + part + tspanend) - y += self.ystep - outfile.write('\n') - if self.linenos and counter % counter_step == 0: - outfile.write('%s' % - (x+self.linenowidth,y,counter_style,counter)) - - counter += 1 - outfile.write('' % (line_x,y)) - outfile.write(tspan + parts[-1] + tspanend) - outfile.write('') - - if not self.nowrap: - outfile.write('\n') - - def _get_style(self, tokentype): - if tokentype in self._stylecache: - return self._stylecache[tokentype] - otokentype = tokentype - while not self.style.styles_token(tokentype): - tokentype = tokentype.parent - value = self.style.style_for_token(tokentype) - result = '' - if value['color']: - result = ' fill="#' + value['color'] + '"' - if value['bold']: - result += ' font-weight="bold"' - if value['italic']: - result += ' font-style="italic"' - self._stylecache[otokentype] = result - return result diff --git a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_vendor/rich/style.py b/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_vendor/rich/style.py deleted file mode 100644 index b2e8aff71f50d0d308e3bcb206508912738029ad..0000000000000000000000000000000000000000 --- a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_vendor/rich/style.py +++ /dev/null @@ -1,771 +0,0 @@ -import sys -from functools import lru_cache -from marshal import dumps, loads -from random import randint -from typing import Any, Dict, Iterable, List, Optional, Type, Union, cast - -from . import errors -from .color import Color, ColorParseError, ColorSystem, blend_rgb -from .repr import Result, rich_repr -from .terminal_theme import DEFAULT_TERMINAL_THEME, TerminalTheme - -# Style instances and style definitions are often interchangeable -StyleType = Union[str, "Style"] - - -class _Bit: - """A descriptor to get/set a style attribute bit.""" - - __slots__ = ["bit"] - - def __init__(self, bit_no: int) -> None: - self.bit = 1 << bit_no - - def __get__(self, obj: "Style", objtype: Type["Style"]) -> Optional[bool]: - if obj._set_attributes & self.bit: - return obj._attributes & self.bit != 0 - return None - - -@rich_repr -class Style: - """A terminal style. - - A terminal style consists of a color (`color`), a background color (`bgcolor`), and a number of attributes, such - as bold, italic etc. The attributes have 3 states: they can either be on - (``True``), off (``False``), or not set (``None``). - - Args: - color (Union[Color, str], optional): Color of terminal text. Defaults to None. - bgcolor (Union[Color, str], optional): Color of terminal background. Defaults to None. - bold (bool, optional): Enable bold text. Defaults to None. 
- dim (bool, optional): Enable dim text. Defaults to None. - italic (bool, optional): Enable italic text. Defaults to None. - underline (bool, optional): Enable underlined text. Defaults to None. - blink (bool, optional): Enabled blinking text. Defaults to None. - blink2 (bool, optional): Enable fast blinking text. Defaults to None. - reverse (bool, optional): Enabled reverse text. Defaults to None. - conceal (bool, optional): Enable concealed text. Defaults to None. - strike (bool, optional): Enable strikethrough text. Defaults to None. - underline2 (bool, optional): Enable doubly underlined text. Defaults to None. - frame (bool, optional): Enable framed text. Defaults to None. - encircle (bool, optional): Enable encircled text. Defaults to None. - overline (bool, optional): Enable overlined text. Defaults to None. - link (str, link): Link URL. Defaults to None. - - """ - - _color: Optional[Color] - _bgcolor: Optional[Color] - _attributes: int - _set_attributes: int - _hash: Optional[int] - _null: bool - _meta: Optional[bytes] - - __slots__ = [ - "_color", - "_bgcolor", - "_attributes", - "_set_attributes", - "_link", - "_link_id", - "_ansi", - "_style_definition", - "_hash", - "_null", - "_meta", - ] - - # maps bits on to SGR parameter - _style_map = { - 0: "1", - 1: "2", - 2: "3", - 3: "4", - 4: "5", - 5: "6", - 6: "7", - 7: "8", - 8: "9", - 9: "21", - 10: "51", - 11: "52", - 12: "53", - } - - STYLE_ATTRIBUTES = { - "dim": "dim", - "d": "dim", - "bold": "bold", - "b": "bold", - "italic": "italic", - "i": "italic", - "underline": "underline", - "u": "underline", - "blink": "blink", - "blink2": "blink2", - "reverse": "reverse", - "r": "reverse", - "conceal": "conceal", - "c": "conceal", - "strike": "strike", - "s": "strike", - "underline2": "underline2", - "uu": "underline2", - "frame": "frame", - "encircle": "encircle", - "overline": "overline", - "o": "overline", - } - - def __init__( - self, - *, - color: Optional[Union[Color, str]] = None, - bgcolor: Optional[Union[Color, str]] = None, - bold: Optional[bool] = None, - dim: Optional[bool] = None, - italic: Optional[bool] = None, - underline: Optional[bool] = None, - blink: Optional[bool] = None, - blink2: Optional[bool] = None, - reverse: Optional[bool] = None, - conceal: Optional[bool] = None, - strike: Optional[bool] = None, - underline2: Optional[bool] = None, - frame: Optional[bool] = None, - encircle: Optional[bool] = None, - overline: Optional[bool] = None, - link: Optional[str] = None, - meta: Optional[Dict[str, Any]] = None, - ): - self._ansi: Optional[str] = None - self._style_definition: Optional[str] = None - - def _make_color(color: Union[Color, str]) -> Color: - return color if isinstance(color, Color) else Color.parse(color) - - self._color = None if color is None else _make_color(color) - self._bgcolor = None if bgcolor is None else _make_color(bgcolor) - self._set_attributes = sum( - ( - bold is not None, - dim is not None and 2, - italic is not None and 4, - underline is not None and 8, - blink is not None and 16, - blink2 is not None and 32, - reverse is not None and 64, - conceal is not None and 128, - strike is not None and 256, - underline2 is not None and 512, - frame is not None and 1024, - encircle is not None and 2048, - overline is not None and 4096, - ) - ) - self._attributes = ( - sum( - ( - bold and 1 or 0, - dim and 2 or 0, - italic and 4 or 0, - underline and 8 or 0, - blink and 16 or 0, - blink2 and 32 or 0, - reverse and 64 or 0, - conceal and 128 or 0, - strike and 256 or 0, - underline2 and 512 or 0, 
- frame and 1024 or 0, - encircle and 2048 or 0, - overline and 4096 or 0, - ) - ) - if self._set_attributes - else 0 - ) - - self._link = link - self._link_id = f"{randint(0, 999999)}" if link else "" - self._meta = None if meta is None else dumps(meta) - self._hash: Optional[int] = None - self._null = not (self._set_attributes or color or bgcolor or link or meta) - - @classmethod - def null(cls) -> "Style": - """Create an 'null' style, equivalent to Style(), but more performant.""" - return NULL_STYLE - - @classmethod - def from_color( - cls, color: Optional[Color] = None, bgcolor: Optional[Color] = None - ) -> "Style": - """Create a new style with colors and no attributes. - - Returns: - color (Optional[Color]): A (foreground) color, or None for no color. Defaults to None. - bgcolor (Optional[Color]): A (background) color, or None for no color. Defaults to None. - """ - style: Style = cls.__new__(Style) - style._ansi = None - style._style_definition = None - style._color = color - style._bgcolor = bgcolor - style._set_attributes = 0 - style._attributes = 0 - style._link = None - style._link_id = "" - style._meta = None - style._null = not (color or bgcolor) - style._hash = None - return style - - @classmethod - def from_meta(cls, meta: Optional[Dict[str, Any]]) -> "Style": - """Create a new style with meta data. - - Returns: - meta (Optional[Dict[str, Any]]): A dictionary of meta data. Defaults to None. - """ - style: Style = cls.__new__(Style) - style._ansi = None - style._style_definition = None - style._color = None - style._bgcolor = None - style._set_attributes = 0 - style._attributes = 0 - style._link = None - style._link_id = "" - style._meta = dumps(meta) - style._hash = None - style._null = not (meta) - return style - - @classmethod - def on(cls, meta: Optional[Dict[str, Any]] = None, **handlers: Any) -> "Style": - """Create a blank style with meta information. - - Example: - style = Style.on(click=self.on_click) - - Args: - meta (Optional[Dict[str, Any]], optional): An optional dict of meta information. - **handlers (Any): Keyword arguments are translated in to handlers. - - Returns: - Style: A Style with meta information attached. 
- """ - meta = {} if meta is None else meta - meta.update({f"@{key}": value for key, value in handlers.items()}) - return cls.from_meta(meta) - - bold = _Bit(0) - dim = _Bit(1) - italic = _Bit(2) - underline = _Bit(3) - blink = _Bit(4) - blink2 = _Bit(5) - reverse = _Bit(6) - conceal = _Bit(7) - strike = _Bit(8) - underline2 = _Bit(9) - frame = _Bit(10) - encircle = _Bit(11) - overline = _Bit(12) - - @property - def link_id(self) -> str: - """Get a link id, used in ansi code for links.""" - return self._link_id - - def __str__(self) -> str: - """Re-generate style definition from attributes.""" - if self._style_definition is None: - attributes: List[str] = [] - append = attributes.append - bits = self._set_attributes - if bits & 0b0000000001111: - if bits & 1: - append("bold" if self.bold else "not bold") - if bits & (1 << 1): - append("dim" if self.dim else "not dim") - if bits & (1 << 2): - append("italic" if self.italic else "not italic") - if bits & (1 << 3): - append("underline" if self.underline else "not underline") - if bits & 0b0000111110000: - if bits & (1 << 4): - append("blink" if self.blink else "not blink") - if bits & (1 << 5): - append("blink2" if self.blink2 else "not blink2") - if bits & (1 << 6): - append("reverse" if self.reverse else "not reverse") - if bits & (1 << 7): - append("conceal" if self.conceal else "not conceal") - if bits & (1 << 8): - append("strike" if self.strike else "not strike") - if bits & 0b1111000000000: - if bits & (1 << 9): - append("underline2" if self.underline2 else "not underline2") - if bits & (1 << 10): - append("frame" if self.frame else "not frame") - if bits & (1 << 11): - append("encircle" if self.encircle else "not encircle") - if bits & (1 << 12): - append("overline" if self.overline else "not overline") - if self._color is not None: - append(self._color.name) - if self._bgcolor is not None: - append("on") - append(self._bgcolor.name) - if self._link: - append("link") - append(self._link) - self._style_definition = " ".join(attributes) or "none" - return self._style_definition - - def __bool__(self) -> bool: - """A Style is false if it has no attributes, colors, or links.""" - return not self._null - - def _make_ansi_codes(self, color_system: ColorSystem) -> str: - """Generate ANSI codes for this style. - - Args: - color_system (ColorSystem): Color system. - - Returns: - str: String containing codes. - """ - - if self._ansi is None: - sgr: List[str] = [] - append = sgr.append - _style_map = self._style_map - attributes = self._attributes & self._set_attributes - if attributes: - if attributes & 1: - append(_style_map[0]) - if attributes & 2: - append(_style_map[1]) - if attributes & 4: - append(_style_map[2]) - if attributes & 8: - append(_style_map[3]) - if attributes & 0b0000111110000: - for bit in range(4, 9): - if attributes & (1 << bit): - append(_style_map[bit]) - if attributes & 0b1111000000000: - for bit in range(9, 13): - if attributes & (1 << bit): - append(_style_map[bit]) - if self._color is not None: - sgr.extend(self._color.downgrade(color_system).get_ansi_codes()) - if self._bgcolor is not None: - sgr.extend( - self._bgcolor.downgrade(color_system).get_ansi_codes( - foreground=False - ) - ) - self._ansi = ";".join(sgr) - return self._ansi - - @classmethod - @lru_cache(maxsize=1024) - def normalize(cls, style: str) -> str: - """Normalize a style definition so that styles with the same effect have the same string - representation. - - Args: - style (str): A style definition. 
- - Returns: - str: Normal form of style definition. - """ - try: - return str(cls.parse(style)) - except errors.StyleSyntaxError: - return style.strip().lower() - - @classmethod - def pick_first(cls, *values: Optional[StyleType]) -> StyleType: - """Pick first non-None style.""" - for value in values: - if value is not None: - return value - raise ValueError("expected at least one non-None style") - - def __rich_repr__(self) -> Result: - yield "color", self.color, None - yield "bgcolor", self.bgcolor, None - yield "bold", self.bold, None, - yield "dim", self.dim, None, - yield "italic", self.italic, None - yield "underline", self.underline, None, - yield "blink", self.blink, None - yield "blink2", self.blink2, None - yield "reverse", self.reverse, None - yield "conceal", self.conceal, None - yield "strike", self.strike, None - yield "underline2", self.underline2, None - yield "frame", self.frame, None - yield "encircle", self.encircle, None - yield "link", self.link, None - if self._meta: - yield "meta", self.meta - - def __eq__(self, other: Any) -> bool: - if not isinstance(other, Style): - return NotImplemented - return self.__hash__() == other.__hash__() - - def __ne__(self, other: Any) -> bool: - if not isinstance(other, Style): - return NotImplemented - return self.__hash__() != other.__hash__() - - def __hash__(self) -> int: - if self._hash is not None: - return self._hash - self._hash = hash( - ( - self._color, - self._bgcolor, - self._attributes, - self._set_attributes, - self._link, - self._meta, - ) - ) - return self._hash - - @property - def color(self) -> Optional[Color]: - """The foreground color or None if it is not set.""" - return self._color - - @property - def bgcolor(self) -> Optional[Color]: - """The background color or None if it is not set.""" - return self._bgcolor - - @property - def link(self) -> Optional[str]: - """Link text, if set.""" - return self._link - - @property - def transparent_background(self) -> bool: - """Check if the style specified a transparent background.""" - return self.bgcolor is None or self.bgcolor.is_default - - @property - def background_style(self) -> "Style": - """A Style with background only.""" - return Style(bgcolor=self.bgcolor) - - @property - def meta(self) -> Dict[str, Any]: - """Get meta information (can not be changed after construction).""" - return {} if self._meta is None else cast(Dict[str, Any], loads(self._meta)) - - @property - def without_color(self) -> "Style": - """Get a copy of the style with color removed.""" - if self._null: - return NULL_STYLE - style: Style = self.__new__(Style) - style._ansi = None - style._style_definition = None - style._color = None - style._bgcolor = None - style._attributes = self._attributes - style._set_attributes = self._set_attributes - style._link = self._link - style._link_id = f"{randint(0, 999999)}" if self._link else "" - style._null = False - style._meta = None - style._hash = None - return style - - @classmethod - @lru_cache(maxsize=4096) - def parse(cls, style_definition: str) -> "Style": - """Parse a style definition. - - Args: - style_definition (str): A string containing a style. - - Raises: - errors.StyleSyntaxError: If the style definition syntax is invalid. - - Returns: - `Style`: A Style instance. 
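-
-        Example:
-            An illustrative usage sketch; the style string ``"bold red on white"``
-            is an arbitrary example value, not something defined in this module::
-
-                style = Style.parse("bold red on white")
-                # bold text, red foreground, white background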
- """ - if style_definition.strip() == "none" or not style_definition: - return cls.null() - - STYLE_ATTRIBUTES = cls.STYLE_ATTRIBUTES - color: Optional[str] = None - bgcolor: Optional[str] = None - attributes: Dict[str, Optional[Any]] = {} - link: Optional[str] = None - - words = iter(style_definition.split()) - for original_word in words: - word = original_word.lower() - if word == "on": - word = next(words, "") - if not word: - raise errors.StyleSyntaxError("color expected after 'on'") - try: - Color.parse(word) is None - except ColorParseError as error: - raise errors.StyleSyntaxError( - f"unable to parse {word!r} as background color; {error}" - ) from None - bgcolor = word - - elif word == "not": - word = next(words, "") - attribute = STYLE_ATTRIBUTES.get(word) - if attribute is None: - raise errors.StyleSyntaxError( - f"expected style attribute after 'not', found {word!r}" - ) - attributes[attribute] = False - - elif word == "link": - word = next(words, "") - if not word: - raise errors.StyleSyntaxError("URL expected after 'link'") - link = word - - elif word in STYLE_ATTRIBUTES: - attributes[STYLE_ATTRIBUTES[word]] = True - - else: - try: - Color.parse(word) - except ColorParseError as error: - raise errors.StyleSyntaxError( - f"unable to parse {word!r} as color; {error}" - ) from None - color = word - style = Style(color=color, bgcolor=bgcolor, link=link, **attributes) - return style - - @lru_cache(maxsize=1024) - def get_html_style(self, theme: Optional[TerminalTheme] = None) -> str: - """Get a CSS style rule.""" - theme = theme or DEFAULT_TERMINAL_THEME - css: List[str] = [] - append = css.append - - color = self.color - bgcolor = self.bgcolor - if self.reverse: - color, bgcolor = bgcolor, color - if self.dim: - foreground_color = ( - theme.foreground_color if color is None else color.get_truecolor(theme) - ) - color = Color.from_triplet( - blend_rgb(foreground_color, theme.background_color, 0.5) - ) - if color is not None: - theme_color = color.get_truecolor(theme) - append(f"color: {theme_color.hex}") - append(f"text-decoration-color: {theme_color.hex}") - if bgcolor is not None: - theme_color = bgcolor.get_truecolor(theme, foreground=False) - append(f"background-color: {theme_color.hex}") - if self.bold: - append("font-weight: bold") - if self.italic: - append("font-style: italic") - if self.underline: - append("text-decoration: underline") - if self.strike: - append("text-decoration: line-through") - if self.overline: - append("text-decoration: overline") - return "; ".join(css) - - @classmethod - def combine(cls, styles: Iterable["Style"]) -> "Style": - """Combine styles and get result. - - Args: - styles (Iterable[Style]): Styles to combine. - - Returns: - Style: A new style instance. - """ - iter_styles = iter(styles) - return sum(iter_styles, next(iter_styles)) - - @classmethod - def chain(cls, *styles: "Style") -> "Style": - """Combine styles from positional argument in to a single style. - - Args: - *styles (Iterable[Style]): Styles to combine. - - Returns: - Style: A new style instance. - """ - iter_styles = iter(styles) - return sum(iter_styles, next(iter_styles)) - - def copy(self) -> "Style": - """Get a copy of this style. - - Returns: - Style: A new Style instance with identical attributes. 
- """ - if self._null: - return NULL_STYLE - style: Style = self.__new__(Style) - style._ansi = self._ansi - style._style_definition = self._style_definition - style._color = self._color - style._bgcolor = self._bgcolor - style._attributes = self._attributes - style._set_attributes = self._set_attributes - style._link = self._link - style._link_id = f"{randint(0, 999999)}" if self._link else "" - style._hash = self._hash - style._null = False - style._meta = self._meta - return style - - def update_link(self, link: Optional[str] = None) -> "Style": - """Get a copy with a different value for link. - - Args: - link (str, optional): New value for link. Defaults to None. - - Returns: - Style: A new Style instance. - """ - style: Style = self.__new__(Style) - style._ansi = self._ansi - style._style_definition = self._style_definition - style._color = self._color - style._bgcolor = self._bgcolor - style._attributes = self._attributes - style._set_attributes = self._set_attributes - style._link = link - style._link_id = f"{randint(0, 999999)}" if link else "" - style._hash = None - style._null = False - style._meta = self._meta - return style - - def render( - self, - text: str = "", - *, - color_system: Optional[ColorSystem] = ColorSystem.TRUECOLOR, - legacy_windows: bool = False, - ) -> str: - """Render the ANSI codes for the style. - - Args: - text (str, optional): A string to style. Defaults to "". - color_system (Optional[ColorSystem], optional): Color system to render to. Defaults to ColorSystem.TRUECOLOR. - - Returns: - str: A string containing ANSI style codes. - """ - if not text or color_system is None: - return text - attrs = self._ansi or self._make_ansi_codes(color_system) - rendered = f"\x1b[{attrs}m{text}\x1b[0m" if attrs else text - if self._link and not legacy_windows: - rendered = ( - f"\x1b]8;id={self._link_id};{self._link}\x1b\\{rendered}\x1b]8;;\x1b\\" - ) - return rendered - - def test(self, text: Optional[str] = None) -> None: - """Write text with style directly to terminal. - - This method is for testing purposes only. - - Args: - text (Optional[str], optional): Text to style or None for style name. 
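-
-        Example:
-            Illustrative only; the style and text below are arbitrary values::
-
-                Style.parse("bold magenta").test("hello")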
- - """ - text = text or str(self) - sys.stdout.write(f"{self.render(text)}\n") - - @lru_cache(maxsize=1024) - def _add(self, style: Optional["Style"]) -> "Style": - if style is None or style._null: - return self - if self._null: - return style - new_style: Style = self.__new__(Style) - new_style._ansi = None - new_style._style_definition = None - new_style._color = style._color or self._color - new_style._bgcolor = style._bgcolor or self._bgcolor - new_style._attributes = (self._attributes & ~style._set_attributes) | ( - style._attributes & style._set_attributes - ) - new_style._set_attributes = self._set_attributes | style._set_attributes - new_style._link = style._link or self._link - new_style._link_id = style._link_id or self._link_id - new_style._null = style._null - if self._meta and style._meta: - new_style._meta = dumps({**self.meta, **style.meta}) - else: - new_style._meta = self._meta or style._meta - new_style._hash = None - return new_style - - def __add__(self, style: Optional["Style"]) -> "Style": - combined_style = self._add(style) - return combined_style.copy() if combined_style.link else combined_style - - -NULL_STYLE = Style() - - -class StyleStack: - """A stack of styles.""" - - __slots__ = ["_stack"] - - def __init__(self, default_style: "Style") -> None: - self._stack: List[Style] = [default_style] - - def __repr__(self) -> str: - return f"" - - @property - def current(self) -> Style: - """Get the Style at the top of the stack.""" - return self._stack[-1] - - def push(self, style: Style) -> None: - """Push a new style on to the stack. - - Args: - style (Style): New style to combine with current style. - """ - self._stack.append(self._stack[-1] + style) - - def pop(self) -> Style: - """Pop last style and discard. - - Returns: - Style: New current style (also available as stack.current) - """ - self._stack.pop() - return self._stack[-1] diff --git a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_vendor/urllib3/util/proxy.py b/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_vendor/urllib3/util/proxy.py deleted file mode 100644 index 2199cc7b7f004009493d032720c36d6568f9d89e..0000000000000000000000000000000000000000 --- a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_vendor/urllib3/util/proxy.py +++ /dev/null @@ -1,57 +0,0 @@ -from .ssl_ import create_urllib3_context, resolve_cert_reqs, resolve_ssl_version - - -def connection_requires_http_tunnel( - proxy_url=None, proxy_config=None, destination_scheme=None -): - """ - Returns True if the connection requires an HTTP CONNECT through the proxy. - - :param URL proxy_url: - URL of the proxy. - :param ProxyConfig proxy_config: - Proxy configuration from poolmanager.py - :param str destination_scheme: - The scheme of the destination. (i.e https, http, etc) - """ - # If we're not using a proxy, no way to use a tunnel. - if proxy_url is None: - return False - - # HTTP destinations never require tunneling, we always forward. - if destination_scheme == "http": - return False - - # Support for forwarding with HTTPS proxies and HTTPS destinations. - if ( - proxy_url.scheme == "https" - and proxy_config - and proxy_config.use_forwarding_for_https - ): - return False - - # Otherwise always use a tunnel. - return True - - -def create_proxy_ssl_context( - ssl_version, cert_reqs, ca_certs=None, ca_cert_dir=None, ca_cert_data=None -): - """ - Generates a default proxy ssl context if one hasn't been provided by the - user. 
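-
-    :param ssl_version: SSL version, resolved via ``resolve_ssl_version``.
-    :param cert_reqs: Certificate requirements, resolved via ``resolve_cert_reqs``.
-    :param ca_certs: Optional CA certificate bundle path; together with
-        ``ca_cert_dir`` and ``ca_cert_data``, only used here to decide whether
-        the context should fall back to ``load_default_certs()``.
-    :param ca_cert_dir: Optional directory of CA certificates.
-    :param ca_cert_data: Optional in-memory CA certificate data.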
- """ - ssl_context = create_urllib3_context( - ssl_version=resolve_ssl_version(ssl_version), - cert_reqs=resolve_cert_reqs(cert_reqs), - ) - - if ( - not ca_certs - and not ca_cert_dir - and not ca_cert_data - and hasattr(ssl_context, "load_default_certs") - ): - ssl_context.load_default_certs() - - return ssl_context diff --git a/spaces/Redgon/bingo/src/app/page.tsx b/spaces/Redgon/bingo/src/app/page.tsx deleted file mode 100644 index 0dff3431b098ce4fe282cc83fc87a93a28a43090..0000000000000000000000000000000000000000 --- a/spaces/Redgon/bingo/src/app/page.tsx +++ /dev/null @@ -1,15 +0,0 @@ -import dynamic from 'next/dynamic' - -const DynamicComponentWithNoSSR = dynamic( - () => import('../components/chat'), - { ssr: false } -) - -export default function IndexPage() { - return ( - <> -
-      <DynamicComponentWithNoSSR />
-    </>
- - - ) -} diff --git a/spaces/Reha2704/VToonify/vtoonify/model/stylegan/op_gpu/upfirdn2d.cpp b/spaces/Reha2704/VToonify/vtoonify/model/stylegan/op_gpu/upfirdn2d.cpp deleted file mode 100644 index 73928ece8150f847d98af65a95685a29fcceecde..0000000000000000000000000000000000000000 --- a/spaces/Reha2704/VToonify/vtoonify/model/stylegan/op_gpu/upfirdn2d.cpp +++ /dev/null @@ -1,31 +0,0 @@ -#include -#include - -torch::Tensor upfirdn2d_op(const torch::Tensor &input, - const torch::Tensor &kernel, int up_x, int up_y, - int down_x, int down_y, int pad_x0, int pad_x1, - int pad_y0, int pad_y1); - -#define CHECK_CUDA(x) \ - TORCH_CHECK(x.type().is_cuda(), #x " must be a CUDA tensor") -#define CHECK_CONTIGUOUS(x) \ - TORCH_CHECK(x.is_contiguous(), #x " must be contiguous") -#define CHECK_INPUT(x) \ - CHECK_CUDA(x); \ - CHECK_CONTIGUOUS(x) - -torch::Tensor upfirdn2d(const torch::Tensor &input, const torch::Tensor &kernel, - int up_x, int up_y, int down_x, int down_y, int pad_x0, - int pad_x1, int pad_y0, int pad_y1) { - CHECK_INPUT(input); - CHECK_INPUT(kernel); - - at::DeviceGuard guard(input.device()); - - return upfirdn2d_op(input, kernel, up_x, up_y, down_x, down_y, pad_x0, pad_x1, - pad_y0, pad_y1); -} - -PYBIND11_MODULE(TORCH_EXTENSION_NAME, m) { - m.def("upfirdn2d", &upfirdn2d, "upfirdn2d (CUDA)"); -} \ No newline at end of file diff --git a/spaces/Robert001/UniControl-Demo/annotator/uniformer_base/mmcv/runner/hooks/logger/text.py b/spaces/Robert001/UniControl-Demo/annotator/uniformer_base/mmcv/runner/hooks/logger/text.py deleted file mode 100644 index 87b1a3eca9595a130121526f8b4c29915387ab35..0000000000000000000000000000000000000000 --- a/spaces/Robert001/UniControl-Demo/annotator/uniformer_base/mmcv/runner/hooks/logger/text.py +++ /dev/null @@ -1,256 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import datetime -import os -import os.path as osp -from collections import OrderedDict - -import torch -import torch.distributed as dist - -import annotator.uniformer.mmcv as mmcv -from annotator.uniformer.mmcv.fileio.file_client import FileClient -from annotator.uniformer.mmcv.utils import is_tuple_of, scandir -from ..hook import HOOKS -from .base import LoggerHook - - -@HOOKS.register_module() -class TextLoggerHook(LoggerHook): - """Logger hook in text. - - In this logger hook, the information will be printed on terminal and - saved in json file. - - Args: - by_epoch (bool, optional): Whether EpochBasedRunner is used. - Default: True. - interval (int, optional): Logging interval (every k iterations). - Default: 10. - ignore_last (bool, optional): Ignore the log of last iterations in each - epoch if less than :attr:`interval`. Default: True. - reset_flag (bool, optional): Whether to clear the output buffer after - logging. Default: False. - interval_exp_name (int, optional): Logging interval for experiment - name. This feature is to help users conveniently get the experiment - information from screen or log file. Default: 1000. - out_dir (str, optional): Logs are saved in ``runner.work_dir`` default. - If ``out_dir`` is specified, logs will be copied to a new directory - which is the concatenation of ``out_dir`` and the last level - directory of ``runner.work_dir``. Default: None. - `New in version 1.3.16.` - out_suffix (str or tuple[str], optional): Those filenames ending with - ``out_suffix`` will be copied to ``out_dir``. - Default: ('.log.json', '.log', '.py'). - `New in version 1.3.16.` - keep_local (bool, optional): Whether to keep local log when - :attr:`out_dir` is specified. 
If False, the local log will be - removed. Default: True. - `New in version 1.3.16.` - file_client_args (dict, optional): Arguments to instantiate a - FileClient. See :class:`mmcv.fileio.FileClient` for details. - Default: None. - `New in version 1.3.16.` - """ - - def __init__(self, - by_epoch=True, - interval=10, - ignore_last=True, - reset_flag=False, - interval_exp_name=1000, - out_dir=None, - out_suffix=('.log.json', '.log', '.py'), - keep_local=True, - file_client_args=None): - super(TextLoggerHook, self).__init__(interval, ignore_last, reset_flag, - by_epoch) - self.by_epoch = by_epoch - self.time_sec_tot = 0 - self.interval_exp_name = interval_exp_name - - if out_dir is None and file_client_args is not None: - raise ValueError( - 'file_client_args should be "None" when `out_dir` is not' - 'specified.') - self.out_dir = out_dir - - if not (out_dir is None or isinstance(out_dir, str) - or is_tuple_of(out_dir, str)): - raise TypeError('out_dir should be "None" or string or tuple of ' - 'string, but got {out_dir}') - self.out_suffix = out_suffix - - self.keep_local = keep_local - self.file_client_args = file_client_args - if self.out_dir is not None: - self.file_client = FileClient.infer_client(file_client_args, - self.out_dir) - - def before_run(self, runner): - super(TextLoggerHook, self).before_run(runner) - - if self.out_dir is not None: - self.file_client = FileClient.infer_client(self.file_client_args, - self.out_dir) - # The final `self.out_dir` is the concatenation of `self.out_dir` - # and the last level directory of `runner.work_dir` - basename = osp.basename(runner.work_dir.rstrip(osp.sep)) - self.out_dir = self.file_client.join_path(self.out_dir, basename) - runner.logger.info( - (f'Text logs will be saved to {self.out_dir} by ' - f'{self.file_client.name} after the training process.')) - - self.start_iter = runner.iter - self.json_log_path = osp.join(runner.work_dir, - f'{runner.timestamp}.log.json') - if runner.meta is not None: - self._dump_log(runner.meta, runner) - - def _get_max_memory(self, runner): - device = getattr(runner.model, 'output_device', None) - mem = torch.cuda.max_memory_allocated(device=device) - mem_mb = torch.tensor([mem / (1024 * 1024)], - dtype=torch.int, - device=device) - if runner.world_size > 1: - dist.reduce(mem_mb, 0, op=dist.ReduceOp.MAX) - return mem_mb.item() - - def _log_info(self, log_dict, runner): - # print exp name for users to distinguish experiments - # at every ``interval_exp_name`` iterations and the end of each epoch - if runner.meta is not None and 'exp_name' in runner.meta: - if (self.every_n_iters(runner, self.interval_exp_name)) or ( - self.by_epoch and self.end_of_epoch(runner)): - exp_info = f'Exp name: {runner.meta["exp_name"]}' - runner.logger.info(exp_info) - - if log_dict['mode'] == 'train': - if isinstance(log_dict['lr'], dict): - lr_str = [] - for k, val in log_dict['lr'].items(): - lr_str.append(f'lr_{k}: {val:.3e}') - lr_str = ' '.join(lr_str) - else: - lr_str = f'lr: {log_dict["lr"]:.3e}' - - # by epoch: Epoch [4][100/1000] - # by iter: Iter [100/100000] - if self.by_epoch: - log_str = f'Epoch [{log_dict["epoch"]}]' \ - f'[{log_dict["iter"]}/{len(runner.data_loader)}]\t' - else: - log_str = f'Iter [{log_dict["iter"]}/{runner.max_iters}]\t' - log_str += f'{lr_str}, ' - - if 'time' in log_dict.keys(): - self.time_sec_tot += (log_dict['time'] * self.interval) - time_sec_avg = self.time_sec_tot / ( - runner.iter - self.start_iter + 1) - eta_sec = time_sec_avg * (runner.max_iters - runner.iter - 1) - eta_str = 
str(datetime.timedelta(seconds=int(eta_sec))) - log_str += f'eta: {eta_str}, ' - log_str += f'time: {log_dict["time"]:.3f}, ' \ - f'data_time: {log_dict["data_time"]:.3f}, ' - # statistic memory - if torch.cuda.is_available(): - log_str += f'memory: {log_dict["memory"]}, ' - else: - # val/test time - # here 1000 is the length of the val dataloader - # by epoch: Epoch[val] [4][1000] - # by iter: Iter[val] [1000] - if self.by_epoch: - log_str = f'Epoch({log_dict["mode"]}) ' \ - f'[{log_dict["epoch"]}][{log_dict["iter"]}]\t' - else: - log_str = f'Iter({log_dict["mode"]}) [{log_dict["iter"]}]\t' - - log_items = [] - for name, val in log_dict.items(): - # TODO: resolve this hack - # these items have been in log_str - if name in [ - 'mode', 'Epoch', 'iter', 'lr', 'time', 'data_time', - 'memory', 'epoch' - ]: - continue - if isinstance(val, float): - val = f'{val:.4f}' - log_items.append(f'{name}: {val}') - log_str += ', '.join(log_items) - - runner.logger.info(log_str) - - def _dump_log(self, log_dict, runner): - # dump log in json format - json_log = OrderedDict() - for k, v in log_dict.items(): - json_log[k] = self._round_float(v) - # only append log at last line - if runner.rank == 0: - with open(self.json_log_path, 'a+') as f: - mmcv.dump(json_log, f, file_format='json') - f.write('\n') - - def _round_float(self, items): - if isinstance(items, list): - return [self._round_float(item) for item in items] - elif isinstance(items, float): - return round(items, 5) - else: - return items - - def log(self, runner): - if 'eval_iter_num' in runner.log_buffer.output: - # this doesn't modify runner.iter and is regardless of by_epoch - cur_iter = runner.log_buffer.output.pop('eval_iter_num') - else: - cur_iter = self.get_iter(runner, inner_iter=True) - - log_dict = OrderedDict( - mode=self.get_mode(runner), - epoch=self.get_epoch(runner), - iter=cur_iter) - - # only record lr of the first param group - cur_lr = runner.current_lr() - if isinstance(cur_lr, list): - log_dict['lr'] = cur_lr[0] - else: - assert isinstance(cur_lr, dict) - log_dict['lr'] = {} - for k, lr_ in cur_lr.items(): - assert isinstance(lr_, list) - log_dict['lr'].update({k: lr_[0]}) - - if 'time' in runner.log_buffer.output: - # statistic memory - if torch.cuda.is_available(): - log_dict['memory'] = self._get_max_memory(runner) - - log_dict = dict(log_dict, **runner.log_buffer.output) - - self._log_info(log_dict, runner) - self._dump_log(log_dict, runner) - return log_dict - - def after_run(self, runner): - # copy or upload logs to self.out_dir - if self.out_dir is not None: - for filename in scandir(runner.work_dir, self.out_suffix, True): - local_filepath = osp.join(runner.work_dir, filename) - out_filepath = self.file_client.join_path( - self.out_dir, filename) - with open(local_filepath, 'r') as f: - self.file_client.put_text(f.read(), out_filepath) - - runner.logger.info( - (f'The file {local_filepath} has been uploaded to ' - f'{out_filepath}.')) - - if not self.keep_local: - os.remove(local_filepath) - runner.logger.info( - (f'{local_filepath} was removed due to the ' - '`self.keep_local=False`')) diff --git a/spaces/Robo2000/ClinicalTerminologyAISearch-GR/README.md b/spaces/Robo2000/ClinicalTerminologyAISearch-GR/README.md deleted file mode 100644 index 178e216824738a81d70c12235b2b95e2082df9e0..0000000000000000000000000000000000000000 --- a/spaces/Robo2000/ClinicalTerminologyAISearch-GR/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: ClinicalTerminologyAISearch -emoji: 🐠 -colorFrom: red -colorTo: yellow -sdk: gradio 
-sdk_version: 3.9.1 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/Salesforce/EDICT/my_half_diffusers/models/unet_2d_condition.py b/spaces/Salesforce/EDICT/my_half_diffusers/models/unet_2d_condition.py deleted file mode 100644 index 8546ea4c475ead158f9ae16a0c391c1267d6a4ec..0000000000000000000000000000000000000000 --- a/spaces/Salesforce/EDICT/my_half_diffusers/models/unet_2d_condition.py +++ /dev/null @@ -1,273 +0,0 @@ -from dataclasses import dataclass -from typing import Optional, Tuple, Union - -import torch -import torch.nn as nn - -from ..configuration_utils import ConfigMixin, register_to_config -from ..modeling_utils import ModelMixin -from ..utils import BaseOutput -from .embeddings import TimestepEmbedding, Timesteps -from .unet_blocks import UNetMidBlock2DCrossAttn, get_down_block, get_up_block - - -@dataclass -class UNet2DConditionOutput(BaseOutput): - """ - Args: - sample (`torch.FloatTensor` of shape `(batch_size, num_channels, height, width)`): - Hidden states conditioned on `encoder_hidden_states` input. Output of last layer of model. - """ - - sample: torch.FloatTensor - - -class UNet2DConditionModel(ModelMixin, ConfigMixin): - r""" - UNet2DConditionModel is a conditional 2D UNet model that takes in a noisy sample, conditional state, and a timestep - and returns sample shaped output. - - This model inherits from [`ModelMixin`]. Check the superclass documentation for the generic methods the library - implements for all the model (such as downloading or saving, etc.) - - Parameters: - sample_size (`int`, *optional*): The size of the input sample. - in_channels (`int`, *optional*, defaults to 4): The number of channels in the input sample. - out_channels (`int`, *optional*, defaults to 4): The number of channels in the output. - center_input_sample (`bool`, *optional*, defaults to `False`): Whether to center the input sample. - flip_sin_to_cos (`bool`, *optional*, defaults to `False`): - Whether to flip the sin to cos in the time embedding. - freq_shift (`int`, *optional*, defaults to 0): The frequency shift to apply to the time embedding. - down_block_types (`Tuple[str]`, *optional*, defaults to `("CrossAttnDownBlock2D", "CrossAttnDownBlock2D", "CrossAttnDownBlock2D", "DownBlock2D")`): - The tuple of downsample blocks to use. - up_block_types (`Tuple[str]`, *optional*, defaults to `("UpBlock2D", "CrossAttnUpBlock2D", "CrossAttnUpBlock2D", "CrossAttnUpBlock2D",)`): - The tuple of upsample blocks to use. - block_out_channels (`Tuple[int]`, *optional*, defaults to `(320, 640, 1280, 1280)`): - The tuple of output channels for each block. - layers_per_block (`int`, *optional*, defaults to 2): The number of layers per block. - downsample_padding (`int`, *optional*, defaults to 1): The padding to use for the downsampling convolution. - mid_block_scale_factor (`float`, *optional*, defaults to 1.0): The scale factor to use for the mid block. - act_fn (`str`, *optional*, defaults to `"silu"`): The activation function to use. - norm_num_groups (`int`, *optional*, defaults to 32): The number of groups to use for the normalization. - norm_eps (`float`, *optional*, defaults to 1e-5): The epsilon to use for the normalization. - cross_attention_dim (`int`, *optional*, defaults to 1280): The dimension of the cross attention features. - attention_head_dim (`int`, *optional*, defaults to 8): The dimension of the attention heads. 
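-
-        Example:
-            An illustrative construction sketch; the argument values shown are
-            arbitrary and not taken from any pretrained configuration::
-
-                unet = UNet2DConditionModel(sample_size=64, in_channels=4, out_channels=4)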
- """ - - @register_to_config - def __init__( - self, - sample_size: Optional[int] = None, - in_channels: int = 4, - out_channels: int = 4, - center_input_sample: bool = False, - flip_sin_to_cos: bool = True, - freq_shift: int = 0, - down_block_types: Tuple[str] = ( - "CrossAttnDownBlock2D", - "CrossAttnDownBlock2D", - "CrossAttnDownBlock2D", - "DownBlock2D", - ), - up_block_types: Tuple[str] = ("UpBlock2D", "CrossAttnUpBlock2D", "CrossAttnUpBlock2D", "CrossAttnUpBlock2D"), - block_out_channels: Tuple[int] = (320, 640, 1280, 1280), - layers_per_block: int = 2, - downsample_padding: int = 1, - mid_block_scale_factor: float = 1, - act_fn: str = "silu", - norm_num_groups: int = 32, - norm_eps: float = 1e-5, - cross_attention_dim: int = 1280, - attention_head_dim: int = 8, - ): - super().__init__() - - self.sample_size = sample_size - time_embed_dim = block_out_channels[0] * 4 - - # input - self.conv_in = nn.Conv2d(in_channels, block_out_channels[0], kernel_size=3, padding=(1, 1)) - - # time - self.time_proj = Timesteps(block_out_channels[0], flip_sin_to_cos, freq_shift) - timestep_input_dim = block_out_channels[0] - - self.time_embedding = TimestepEmbedding(timestep_input_dim, time_embed_dim) - - self.down_blocks = nn.ModuleList([]) - self.mid_block = None - self.up_blocks = nn.ModuleList([]) - - # down - output_channel = block_out_channels[0] - for i, down_block_type in enumerate(down_block_types): - input_channel = output_channel - output_channel = block_out_channels[i] - is_final_block = i == len(block_out_channels) - 1 - - down_block = get_down_block( - down_block_type, - num_layers=layers_per_block, - in_channels=input_channel, - out_channels=output_channel, - temb_channels=time_embed_dim, - add_downsample=not is_final_block, - resnet_eps=norm_eps, - resnet_act_fn=act_fn, - cross_attention_dim=cross_attention_dim, - attn_num_head_channels=attention_head_dim, - downsample_padding=downsample_padding, - ) - self.down_blocks.append(down_block) - - # mid - self.mid_block = UNetMidBlock2DCrossAttn( - in_channels=block_out_channels[-1], - temb_channels=time_embed_dim, - resnet_eps=norm_eps, - resnet_act_fn=act_fn, - output_scale_factor=mid_block_scale_factor, - resnet_time_scale_shift="default", - cross_attention_dim=cross_attention_dim, - attn_num_head_channels=attention_head_dim, - resnet_groups=norm_num_groups, - ) - - # up - reversed_block_out_channels = list(reversed(block_out_channels)) - output_channel = reversed_block_out_channels[0] - for i, up_block_type in enumerate(up_block_types): - prev_output_channel = output_channel - output_channel = reversed_block_out_channels[i] - input_channel = reversed_block_out_channels[min(i + 1, len(block_out_channels) - 1)] - - is_final_block = i == len(block_out_channels) - 1 - - up_block = get_up_block( - up_block_type, - num_layers=layers_per_block + 1, - in_channels=input_channel, - out_channels=output_channel, - prev_output_channel=prev_output_channel, - temb_channels=time_embed_dim, - add_upsample=not is_final_block, - resnet_eps=norm_eps, - resnet_act_fn=act_fn, - cross_attention_dim=cross_attention_dim, - attn_num_head_channels=attention_head_dim, - ) - self.up_blocks.append(up_block) - prev_output_channel = output_channel - - # out - self.conv_norm_out = nn.GroupNorm(num_channels=block_out_channels[0], num_groups=norm_num_groups, eps=norm_eps) - self.conv_act = nn.SiLU() - self.conv_out = nn.Conv2d(block_out_channels[0], out_channels, 3, padding=1) - - def set_attention_slice(self, slice_size): - if slice_size is not None and 
self.config.attention_head_dim % slice_size != 0: - raise ValueError( - f"Make sure slice_size {slice_size} is a divisor of " - f"the number of heads used in cross_attention {self.config.attention_head_dim}" - ) - if slice_size is not None and slice_size > self.config.attention_head_dim: - raise ValueError( - f"Chunk_size {slice_size} has to be smaller or equal to " - f"the number of heads used in cross_attention {self.config.attention_head_dim}" - ) - - for block in self.down_blocks: - if hasattr(block, "attentions") and block.attentions is not None: - block.set_attention_slice(slice_size) - - self.mid_block.set_attention_slice(slice_size) - - for block in self.up_blocks: - if hasattr(block, "attentions") and block.attentions is not None: - block.set_attention_slice(slice_size) - - def forward( - self, - sample: torch.FloatTensor, - timestep: Union[torch.Tensor, float, int], - encoder_hidden_states: torch.Tensor, - return_dict: bool = True, - ) -> Union[UNet2DConditionOutput, Tuple]: - """r - Args: - sample (`torch.FloatTensor`): (batch, channel, height, width) noisy inputs tensor - timestep (`torch.FloatTensor` or `float` or `int): (batch) timesteps - encoder_hidden_states (`torch.FloatTensor`): (batch, channel, height, width) encoder hidden states - return_dict (`bool`, *optional*, defaults to `True`): - Whether or not to return a [`models.unet_2d_condition.UNet2DConditionOutput`] instead of a plain tuple. - - Returns: - [`~models.unet_2d_condition.UNet2DConditionOutput`] or `tuple`: - [`~models.unet_2d_condition.UNet2DConditionOutput`] if `return_dict` is True, otherwise a `tuple`. When - returning a tuple, the first element is the sample tensor. - """ - # 0. center input if necessary - if self.config.center_input_sample: - sample = 2 * sample - 1.0 - - # 1. time - timesteps = timestep - if not torch.is_tensor(timesteps): - timesteps = torch.tensor([timesteps], dtype=torch.long, device=sample.device) - elif torch.is_tensor(timesteps) and len(timesteps.shape) == 0: - timesteps = timesteps.to(dtype=torch.float16) - timesteps = timesteps[None].to(device=sample.device) - - # broadcast to batch dimension in a way that's compatible with ONNX/Core ML - timesteps = timesteps.expand(sample.shape[0]) - - t_emb = self.time_proj(timesteps) - # print(t_emb.dtype) - t_emb = t_emb.to(sample.dtype).to(sample.device) - emb = self.time_embedding(t_emb) - - # 2. pre-process - sample = self.conv_in(sample) - - # 3. down - down_block_res_samples = (sample,) - for downsample_block in self.down_blocks: - if hasattr(downsample_block, "attentions") and downsample_block.attentions is not None: - # print(sample.dtype, emb.dtype, encoder_hidden_states.dtype) - sample, res_samples = downsample_block( - hidden_states=sample, temb=emb, encoder_hidden_states=encoder_hidden_states - ) - else: - sample, res_samples = downsample_block(hidden_states=sample, temb=emb) - - down_block_res_samples += res_samples - - # 4. mid - sample = self.mid_block(sample, emb, encoder_hidden_states=encoder_hidden_states) - - # 5. 
up - for upsample_block in self.up_blocks: - res_samples = down_block_res_samples[-len(upsample_block.resnets) :] - down_block_res_samples = down_block_res_samples[: -len(upsample_block.resnets)] - - if hasattr(upsample_block, "attentions") and upsample_block.attentions is not None: - sample = upsample_block( - hidden_states=sample, - temb=emb, - res_hidden_states_tuple=res_samples, - encoder_hidden_states=encoder_hidden_states, - ) - else: - sample = upsample_block(hidden_states=sample, temb=emb, res_hidden_states_tuple=res_samples) - - # 6. post-process - # make sure hidden states is in float32 - # when running in half-precision - sample = self.conv_norm_out(sample).type(sample.dtype) - sample = self.conv_act(sample) - sample = self.conv_out(sample) - - if not return_dict: - return (sample,) - - return UNet2DConditionOutput(sample=sample) diff --git a/spaces/ServerX/PorcoDiaz/go-applio-manager-recode.bat b/spaces/ServerX/PorcoDiaz/go-applio-manager-recode.bat deleted file mode 100644 index 91b8acfc0c69a356fd5b1d77650b2cd728b1072b..0000000000000000000000000000000000000000 --- a/spaces/ServerX/PorcoDiaz/go-applio-manager-recode.bat +++ /dev/null @@ -1,322 +0,0 @@ -@echo off -title Applio Installer - -::: _ _ _____ _ -::: /\ | (_) | __ \ | | -::: / \ _ __ _ __ | |_ ___ | |__) |___ ___ ___ __| | ___ -::: / /\ \ | '_ \| '_ \| | |/ _ \ | _ // _ \/ __/ _ \ / _` |/ _ \ -::: / ____ \| |_) | |_) | | | (_) | | | \ \ __/ (_| (_) | (_| | __/ -::: /_/ \_\ .__/| .__/|_|_|\___/ |_| \_\___|\___\___/ \__,_|\___| -::: | | | | -::: |_| |_| -::: -::: - -setlocal -set "branch=applio-recode" -set "runtime=runtime-recode" -set "repoUrl=https://github.com/IAHispano/Applio-RVC-Fork/archive/refs/heads/%branch%.zip" -set "fixesFolder=fixes" -set "localFixesPy=local_fixes.py" -set "principal=%cd%" -set "URL_BASE=https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main" -set "URL_EXTRA=https://huggingface.co/IAHispano/applio/resolve/main" - -:menu -for /f "delims=: tokens=*" %%A in ('findstr /b ":::" "%~f0"') do @echo(%%A - -echo [1] Reinstall Applio -echo [2] Update Applio -echo [3] Update Applio + Runtime -echo. - -set /p choice=Select an option: -set choice=%choice: =% - -if "%choice%"=="1" ( - cls - echo Starting Applio Reinstaller... - echo. - goto reinstaller - pause - cls - goto menu - -) - -if "%choice%"=="2" ( - cls - echo Starting Applio Updater... - echo. - goto updater - pause - cls - goto menu -) - -if "%choice%"=="3" ( - cls - echo Updating Applio + Runtime... - echo. - goto updaterRuntime - pause - cls - goto menu - -) - -cls -echo Invalid option. Please enter a number from 1 to 3. -echo. -echo Press 'Enter' to access the main menu... -pause>nul -cls -goto menu - -:reinstaller - -echo WARNING: Remember to install Microsoft C++ Build Tools, Redistributable, Python, and Git before continuing. -echo. -echo Step-by-step guide: https://rentry.org/appliolocal -echo Build Tools: https://aka.ms/vs/17/release/vs_BuildTools.exe -echo Redistributable: https://aka.ms/vs/17/release/vc_redist.x64.exe -echo Git: https://github.com/git-for-windows/git/releases/download/v2.42.0.windows.2/Git-2.42.0.2-64-bit.exe -echo Python: Add this route to the windows enviroment variables the user path variable: %principal%\runtime\Scripts -echo. -pause -cls - -echo Downloading ZIP file... -powershell -command "& { Invoke-WebRequest -Uri '%repoUrl%' -OutFile '%principal%\repo.zip' }" -echo. - -echo Extracting ZIP file... 
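-:: Unpack repo.zip into the current folder via the .NET ZipFile API (PowerShell).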
-powershell -command "& { Add-Type -AssemblyName System.IO.Compression.FileSystem ; [System.IO.Compression.ZipFile]::ExtractToDirectory('%principal%\repo.zip', '%principal%') }" -echo. - -echo Copying folder and file structure from subdirectory to main directory... -robocopy "%principal%\Applio-RVC-Fork-%branch%" "%principal%" /E -echo. - -echo Deleting contents of subdirectory (files and folders)... -rmdir "%principal%\Applio-RVC-Fork-%branch%" /S /Q -echo. - -echo Cleaning up... -del "%principal%\repo.zip" -echo. -cls - -echo Proceeding to download the models... -echo. - -echo WARNING: At this point, it's recommended to disable antivirus or firewall, as errors might occur when downloading pretrained models. -pause -cls - -echo Downloading models in the assets folder... -cd "assets" -echo. -echo Downloading the "pretrained" folder... -cd "pretrained" -curl -LJO "%URL_BASE%/pretrained/D32k.pth" -curl -LJO "%URL_BASE%/pretrained/D40k.pth" -curl -LJO "%URL_BASE%/pretrained/D48k.pth" -curl -LJO "%URL_BASE%/pretrained/G32k.pth" -curl -LJO "%URL_BASE%/pretrained/G40k.pth" -curl -LJO "%URL_BASE%/pretrained/G48k.pth" -curl -LJO "%URL_BASE%/pretrained/f0D32k.pth" -curl -LJO "%URL_BASE%/pretrained/f0D40k.pth" -curl -LJO "%URL_BASE%/pretrained/f0D48k.pth" -curl -LJO "%URL_BASE%/pretrained/f0G32k.pth" -curl -LJO "%URL_BASE%/pretrained/f0G40k.pth" -curl -LJO "%URL_BASE%/pretrained/f0G48k.pth" -cd ".." -echo. -cls - -echo Downloading the "pretrained_v2" folder... -cd "pretrained_v2" -curl -LJO "%URL_BASE%/pretrained_v2/D32k.pth" -curl -LJO "%URL_BASE%/pretrained_v2/D40k.pth" -curl -LJO "%URL_BASE%/pretrained_v2/D48k.pth" -curl -LJO "%URL_BASE%/pretrained_v2/G32k.pth" -curl -LJO "%URL_BASE%/pretrained_v2/G40k.pth" -curl -LJO "%URL_BASE%/pretrained_v2/G48k.pth" -curl -LJO "%URL_BASE%/pretrained_v2/f0D32k.pth" -curl -LJO "%URL_BASE%/pretrained_v2/f0D40k.pth" -curl -LJO "%URL_BASE%/pretrained_v2/f0D48k.pth" -curl -LJO "%URL_BASE%/pretrained_v2/f0G32k.pth" -curl -LJO "%URL_BASE%/pretrained_v2/f0G40k.pth" -curl -LJO "%URL_BASE%/pretrained_v2/f0G48k.pth" -cd ".." -echo. -cls - -echo Downloading the hubert_base.pt file... -cd "hubert" -curl -LJO "%URL_BASE%/hubert_base.pt" -cd ".." -echo. -cls - - -echo Downloading the rmvpe.pt file... -cd "rmvpe" -curl -LJO "%URL_BASE%/rmvpe.pt" -echo. -cls - -echo Downloading the rmvpe.onnx file... -curl -LJO "%URL_BASE%/rmvpe.onnx" -cd ".." -cd ".." -echo. -cls - -echo Downloading the rest of the large files - -echo Downloading the "uvr5_weights" folder... -cd "uvr5_weights" -curl -LJO "%URL_BASE%/uvr5_weights/HP2_all_vocals.pth" -curl -LJO "%URL_BASE%/uvr5_weights/HP3_all_vocals.pth" -curl -LJO "%URL_BASE%/uvr5_weights/HP5_only_main_vocal.pth" -curl -LJO "%URL_BASE%/uvr5_weights/VR-DeEchoAggressive.pth" -curl -LJO "%URL_BASE%/uvr5_weights/VR-DeEchoDeReverb.pth" -curl -LJO "%URL_BASE%/uvr5_weights/VR-DeEchoNormal.pth" -cd ".." -echo. -cls - -echo Downloading the ffmpeg.exe file... -curl -LJO "%URL_BASE%/ffmpeg.exe" -echo. -cls - -echo Downloading the ffprobe.exe file... -curl -LJO "%URL_BASE%/ffprobe.exe" -echo. -cls - -echo Downloading the runtime.zip file... -curl -LJO "%URL_EXTRA%/%runtime%.zip" -echo. -cls - -echo Extracting the runtime.zip file, this might take a while... -powershell -Command "Expand-Archive -Path '%runtime%.zip' -DestinationPath '.'" -del %runtime%.zip -echo. -cls - -echo Downloads completed! -echo. - -echo Checking if the local_fixes.py file exists in the Fixes folder... -if exist "%fixesFolder%\%localFixesPy%" ( - echo Running the file... 
- runtime\python.exe "%fixesFolder%\%localFixesPy%" -) else ( - echo The "%localFixesPy%" file was not found in the "Fixes" folder. -) -echo. - -echo Fixes Applied! -echo. - -echo Applio has been reinstalled! -echo. -echo Press 'Enter' to access the main menu... -pause>nul -cls -goto menu - - -:updater - -echo Downloading the ZIP file... -powershell -command "& { Invoke-WebRequest -Uri '%repoUrl%' -OutFile '%principal%\repo.zip' }" -echo. - -echo Extracting ZIP file... -powershell -command "& { Add-Type -AssemblyName System.IO.Compression.FileSystem ; [System.IO.Compression.ZipFile]::ExtractToDirectory('%principal%\repo.zip', '%principal%') }" -echo. - -echo Copying folder and file structure from subdirectory to main directory... -robocopy "%principal%\Applio-RVC-Fork-%branch%" "%principal%" /E -echo. - -echo Deleting contents of the subdirectory (files and folders)... -rmdir "%principal%\Applio-RVC-Fork-%branch%" /S /Q -echo. - -echo Cleaning up... -del "%principal%\repo.zip" -echo. -cls - -echo Verifying if the local_fixes.py file exists in the Fixes folder... -if exist "%fixesFolder%\%localFixesPy%" ( - echo Running the file... - runtime\python.exe "%fixesFolder%\%localFixesPy%" -) else ( - echo The file "%localFixesPy%" was not found in the "Fixes" folder. -) -echo. - -echo Applio has been updated! -echo. -echo Press 'Enter' to access the main menu... -pause>nul -cls -goto menu - - -:updaterRuntime - -echo Downloading the ZIP file... -powershell -command "& { Invoke-WebRequest -Uri '%repoUrl%' -OutFile '%principal%\repo.zip' }" -echo. - -echo Extracting ZIP file... -powershell -command "& { Add-Type -AssemblyName System.IO.Compression.FileSystem ; [System.IO.Compression.ZipFile]::ExtractToDirectory('%principal%\repo.zip', '%principal%') }" -echo. - -echo Copying folder and file structure from subdirectory to main directory... -robocopy "%principal%\Applio-RVC-Fork-%branch%" "%principal%" /E -echo. - -echo Deleting contents of the subdirectory (files and folders)... -rmdir "%principal%\Applio-RVC-Fork-%branch%" /S /Q -echo. - -echo Cleaning up... -del "%principal%\repo.zip" -echo. -cls - -echo Downloading the runtime.zip file... -curl -LJO "%URL_EXTRA%/%runtime%.zip" -echo. -cls -echo Extracting the runtime.zip file, this might take a while... -powershell -Command "Expand-Archive -Path '%runtime%.zip' -DestinationPath '.'" -del runtime.zip -echo. -cls - -echo Verifying if the local_fixes.py file exists in the Fixes folder... -if exist "%fixesFolder%\%localFixesPy%" ( - echo Running the file... - runtime\python.exe "%fixesFolder%\%localFixesPy%" -) else ( - echo The file "%localFixesPy%" was not found in the "Fixes" folder. -) -echo. - -echo Applio has been updated! -echo. -echo Press 'Enter' to access the main menu... 
-pause>nul -cls -goto menu diff --git a/spaces/ShadowDominator/sentence-sentiment-analysis/app.py b/spaces/ShadowDominator/sentence-sentiment-analysis/app.py deleted file mode 100644 index 54d30896615c6e1aa41fe1510c29bb7c2e95af5f..0000000000000000000000000000000000000000 --- a/spaces/ShadowDominator/sentence-sentiment-analysis/app.py +++ /dev/null @@ -1,40 +0,0 @@ -import gradio as gr -import torch -from transformers import AutoTokenizer, AutoModelForSequenceClassification - -tokenizer_sentence_analysis = AutoTokenizer.from_pretrained("finiteautomata/bertweet-base-sentiment-analysis") -model_sentence_analysis = AutoModelForSequenceClassification.from_pretrained("finiteautomata/bertweet-base-sentiment-analysis") -paragraph = """ -I woke up this morning feeling refreshed and excited for the day ahead. -I had a great night's sleep, and I was looking forward to spending time with my family and friends. -I went for a walk in the park, and I enjoyed the beautiful weather. I also stopped by my favorite coffee shop and got a delicious cup of coffee. -I felt so happy and content, and I knew that it was going to be a great day. - -""" -def sentence_sentiment_model(text, tokenizer, model): - inputs = tokenizer(text, padding=True, truncation=True, return_tensors="pt") - with torch.no_grad(): - result = model(inputs['input_ids'], attention_mask=inputs['attention_mask']) - logits = result.logits.detach() - probs = torch.softmax(logits, dim=1) - pos_prob = probs[0][2].item() - neu_prob = probs[0][1].item() - neg_prob = probs[0][0].item() - return {'Positive': [round(float(pos_prob), 2)],"Neutural":[round(float(neu_prob), 2)], 'Negative': [round(float(neg_prob), 2)]} - -def sentence_sentiment(text): - result = sentence_sentiment_model(text,tokenizer_sentence_analysis,model_sentence_analysis) - return result - -with gr.Blocks(title="Sentence",css="footer {visibility: hidden}") as demo: - with gr.Row(): - with gr.Column(): - gr.Markdown("## Sentence sentiment") - with gr.Row(): - with gr.Column(): - inputs = gr.TextArea(label="sentence",value=paragraph,interactive=True) - btn = gr.Button(value="RUN") - with gr.Column(): - output = gr.Label(label="output") - btn.click(fn=sentence_sentiment,inputs=[inputs],outputs=[output]) -demo.launch() \ No newline at end of file diff --git a/spaces/Shakeb100/GroomingGenie_AI/clipseg/training.py b/spaces/Shakeb100/GroomingGenie_AI/clipseg/training.py deleted file mode 100644 index ce12cf443f37e2520658614e15d0e64eb554b7f1..0000000000000000000000000000000000000000 --- a/spaces/Shakeb100/GroomingGenie_AI/clipseg/training.py +++ /dev/null @@ -1,266 +0,0 @@ -import torch -import inspect -import json -import yaml -import math -import os -import sys - -from general_utils import log - -import numpy as np -from functools import partial -from os.path import expanduser, join, isfile, basename - -from torch.cuda.amp import autocast, GradScaler -from torch.optim.lr_scheduler import LambdaLR -from contextlib import nullcontext -from torch.utils.data import DataLoader - -from general_utils import TrainingLogger, get_attribute, filter_args, log, training_config_from_cli_args - - -def cosine_warmup_lr(i, warmup=10, max_iter=90): - """ Cosine LR with Warmup """ - if i < warmup: - return (i+1)/(warmup+1) - else: - return 0.5 + 0.5*math.cos(math.pi*(((i-warmup)/(max_iter- warmup)))) - - -def validate(model, dataset, config): - data_loader = torch.utils.data.DataLoader(dataset, batch_size=4, shuffle=False) - - metric_class, use_metric = config.val_metric_class, config.use_val_metric - loss_fn 
= get_attribute(config.loss) - - model.eval() - model.cuda() - - if metric_class is not None: - metric = get_attribute(metric_class)() - - with torch.no_grad(): - - i, losses = 0, [] - for data_x, data_y in data_loader: - - data_x = [x.cuda() if isinstance(x, torch.Tensor) else x for x in data_x] - data_y = [x.cuda() if isinstance(x, torch.Tensor) else x for x in data_y] - - prompts = model.sample_prompts(data_x[1], prompt_list=('a photo of a {}',)) - pred, visual_q, _, _ = model(data_x[0], prompts, return_features=True) - - if metric_class is not None: - metric.add([pred], data_y) - - # pred = model(data_x[0], prompts) - # loss = loss_fn(pred[0], data_y[0]) - loss = loss_fn(pred, data_y[0]) - losses += [float(loss)] - - i += 1 - - if config.val_max_iterations is not None and i > config.val_max_iterations: - break - - if use_metric is None: - return np.mean(losses), {}, False - else: - metric_scores = {m: s for m, s in zip(metric.names(), metric.value())} if metric is not None else {} - return np.mean(losses), metric_scores, True - - -def main(): - - config = training_config_from_cli_args() - - val_interval, best_val_loss, best_val_score = config.val_interval, float('inf'), float('-inf') - - model_cls = get_attribute(config.model) - _, model_args, _ = filter_args(config, inspect.signature(model_cls).parameters) - model = model_cls(**model_args).cuda() - - dataset_cls = get_attribute(config.dataset) - _, dataset_args, _ = filter_args(config, inspect.signature(dataset_cls).parameters) - - dataset = dataset_cls(**dataset_args) - - log.info(f'Train dataset {dataset.__class__.__name__} (length: {len(dataset)})') - - if val_interval is not None: - dataset_val_args = {k[4:]: v for k,v in config.items() if k.startswith('val_') and k != 'val_interval'} - _, dataset_val_args, _ = filter_args(dataset_val_args, inspect.signature(dataset_cls).parameters) - print('val args', {**dataset_args, **{'split': 'val', 'aug': 0}, **dataset_val_args}) - - dataset_val = dataset_cls(**{**dataset_args, **{'split': 'val', 'aug': 0}, **dataset_val_args}) - - # optimizer - opt_cls = get_attribute(config.optimizer) - if config.optimize == 'torch.optim.SGD': - opt_args = {'momentum': config.momentum if 'momentum' in config else 0} - else: - opt_args = {} - opt = opt_cls(model.parameters(), lr=config.lr, **opt_args) - - if config.lr_scheduler == 'cosine': - assert config.T_max is not None and config.eta_min is not None - lr_scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(opt, config.T_max, config.eta_min) - elif config.lr_scheduler == 'warmup_cosine': - lr_scheduler = LambdaLR(opt, partial(cosine_warmup_lr, max_iter=(config.max_iterations), warmup=config.warmup)) - else: - lr_scheduler = None - - batch_size, max_iterations = config.batch_size, config.max_iterations - - loss_fn = get_attribute(config.loss) - - if config.amp: - log.info('Using AMP') - autocast_fn = autocast - scaler = GradScaler() - else: - autocast_fn, scaler = nullcontext, None - - - save_only_trainable = True - data_loader = DataLoader(dataset, batch_size=batch_size, num_workers=4) - - # disable config when hyperparam. opt. to avoid writing logs. - tracker_config = config if not config.hyperparameter_optimization else None - - with TrainingLogger(log_dir=config.name, model=model, config=tracker_config) as logger: - - i = 0 - while True: - for data_x, data_y in data_loader: - - # between caption and output feature. - # 1. Sample random captions - # 2. 
Check alignment with CLIP - - # randomly mix text and visual support conditionals - if config.mix: - - assert config.mask.startswith('text_and') - - with autocast_fn(): - # data_x[1] = text label - prompts = model.sample_prompts(data_x[1]) - - # model.clip_model() - - text_cond = model.compute_conditional(prompts) - if model.__class__.__name__ == 'CLIPDensePredTMasked': - # when mask=='separate' - visual_s_cond, _, _ = model.visual_forward_masked(data_x[2].cuda(), data_x[3].cuda()) - else: - # data_x[2] = visual prompt - visual_s_cond, _, _ = model.visual_forward(data_x[2].cuda()) - - max_txt = config.mix_text_max if config.mix_text_max is not None else 1 - batch_size = text_cond.shape[0] - - # sample weights for each element in batch - text_weights = torch.distributions.Uniform(config.mix_text_min, max_txt).sample((batch_size,))[:, None] - text_weights = text_weights.cuda() - - if dataset.__class__.__name__ == 'PhraseCut': - # give full weight to text where support_image is invalid - visual_is_valid = data_x[4] if model.__class__.__name__ == 'CLIPDensePredTMasked' else data_x[3] - text_weights = torch.max(text_weights[:,0], 1 - visual_is_valid.float().cuda()).unsqueeze(1) - - cond = text_cond * text_weights + visual_s_cond * (1 - text_weights) - - else: - # no mix - - if model.__class__.__name__ == 'CLIPDensePredTMasked': - # compute conditional vector using CLIP masking - with autocast_fn(): - assert config.mask == 'separate' - cond, _, _ = model.visual_forward_masked(data_x[1].cuda(), data_x[2].cuda()) - else: - cond = data_x[1] - if isinstance(cond, torch.Tensor): - cond = cond.cuda() - - with autocast_fn(): - visual_q = None - - pred, visual_q, _, _ = model(data_x[0].cuda(), cond, return_features=True) - - loss = loss_fn(pred, data_y[0].cuda()) - - if torch.isnan(loss) or torch.isinf(loss): - # skip if loss is nan - log.warning('Training stopped due to inf/nan loss.') - sys.exit(-1) - - extra_loss = 0 - loss += extra_loss - - opt.zero_grad() - - if scaler is None: - loss.backward() - opt.step() - else: - scaler.scale(loss).backward() - scaler.step(opt) - scaler.update() - - if lr_scheduler is not None: - lr_scheduler.step() - if i % 2000 == 0: - current_lr = [g['lr'] for g in opt.param_groups][0] - log.info(f'current lr: {current_lr:.5f} ({len(opt.param_groups)} parameter groups)') - - logger.iter(i=i, loss=loss) - i += 1 - - if i >= max_iterations: - - if not isfile(join(logger.base_path, 'weights.pth')): - # only write if no weights were already written - logger.save_weights(only_trainable=save_only_trainable) - - sys.exit(0) - - - if config.checkpoint_iterations is not None and i in config.checkpoint_iterations: - logger.save_weights(only_trainable=save_only_trainable, weight_file=f'weights_{i}.pth') - - - if val_interval is not None and i % val_interval == val_interval - 1: - - val_loss, val_scores, maximize = validate(model, dataset_val, config) - - if len(val_scores) > 0: - - score_str = f', scores: ' + ', '.join(f'{k}: {v}' for k, v in val_scores.items()) - - if maximize and val_scores[config.use_val_metric] > best_val_score: - logger.save_weights(only_trainable=save_only_trainable) - best_val_score = val_scores[config.use_val_metric] - - elif not maximize and val_scores[config.use_val_metric] < best_val_score: - logger.save_weights(only_trainable=save_only_trainable) - best_val_score = val_scores[config.use_val_metric] - - else: - score_str = '' - # if no score is used, fall back to loss - if val_loss < best_val_loss: - logger.save_weights(only_trainable=save_only_trainable) - 
best_val_loss = val_loss - - log.info(f'Validation loss: {val_loss}' + score_str) - logger.iter(i=i, val_loss=val_loss, extra_loss=float(extra_loss), **val_scores) - model.train() - - print('epoch complete') - - -if __name__ == '__main__': - main() \ No newline at end of file diff --git a/spaces/SuYuanS/AudioCraft_Plus/tests/models/test_multibanddiffusion.py b/spaces/SuYuanS/AudioCraft_Plus/tests/models/test_multibanddiffusion.py deleted file mode 100644 index 2702a3cb5fe402bf96911dbc992d2749cb18a4c0..0000000000000000000000000000000000000000 --- a/spaces/SuYuanS/AudioCraft_Plus/tests/models/test_multibanddiffusion.py +++ /dev/null @@ -1,53 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -import random - -import numpy as np -import torch -from audiocraft.models.multibanddiffusion import MultiBandDiffusion, DiffusionProcess -from audiocraft.models import EncodecModel, DiffusionUnet -from audiocraft.modules import SEANetEncoder, SEANetDecoder -from audiocraft.modules.diffusion_schedule import NoiseSchedule -from audiocraft.quantization import DummyQuantizer - - -class TestMBD: - - def _create_mbd(self, - sample_rate: int, - channels: int, - n_filters: int = 3, - n_residual_layers: int = 1, - ratios: list = [5, 4, 3, 2], - num_steps: int = 1000, - codec_dim: int = 128, - **kwargs): - frame_rate = np.prod(ratios) - encoder = SEANetEncoder(channels=channels, dimension=codec_dim, n_filters=n_filters, - n_residual_layers=n_residual_layers, ratios=ratios) - decoder = SEANetDecoder(channels=channels, dimension=codec_dim, n_filters=n_filters, - n_residual_layers=n_residual_layers, ratios=ratios) - quantizer = DummyQuantizer() - compression_model = EncodecModel(encoder, decoder, quantizer, frame_rate=frame_rate, - sample_rate=sample_rate, channels=channels, **kwargs) - diffusion_model = DiffusionUnet(chin=channels, num_steps=num_steps, codec_dim=codec_dim) - schedule = NoiseSchedule(device='cpu', num_steps=num_steps) - DP = DiffusionProcess(model=diffusion_model, noise_schedule=schedule) - mbd = MultiBandDiffusion(DPs=[DP], codec_model=compression_model) - return mbd - - def test_model(self): - random.seed(1234) - sample_rate = 24_000 - channels = 1 - codec_dim = 128 - mbd = self._create_mbd(sample_rate=sample_rate, channels=channels, codec_dim=codec_dim) - for _ in range(10): - length = random.randrange(1, 10_000) - x = torch.randn(2, channels, length) - res = mbd.regenerate(x, sample_rate) - assert res.shape == x.shape diff --git a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/attrs/filters.py b/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/attrs/filters.py deleted file mode 100644 index 52959005b088f0e5116c8b6acdbcc5937bbaacc8..0000000000000000000000000000000000000000 --- a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/attrs/filters.py +++ /dev/null @@ -1,3 +0,0 @@ -# SPDX-License-Identifier: MIT - -from attr.filters import * # noqa diff --git a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/chromadb/db/__init__.py b/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/chromadb/db/__init__.py deleted file mode 100644 index d90e3db9a585e78c1c3418dc7a7dd05cd08eb332..0000000000000000000000000000000000000000 --- a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/chromadb/db/__init__.py +++ /dev/null @@ -1,142 +0,0 @@ -from abc import abstractmethod -from typing import List, Sequence, 
Optional, Tuple -from uuid import UUID -import numpy.typing as npt -from chromadb.api.types import ( - Embeddings, - Documents, - IDs, - Metadatas, - Metadata, - Where, - WhereDocument, -) -from chromadb.config import Component -from overrides import override - - -class DB(Component): - @abstractmethod - def create_collection( - self, - name: str, - metadata: Optional[Metadata] = None, - get_or_create: bool = False, - ) -> Sequence: # type: ignore - pass - - @abstractmethod - def get_collection(self, name: str) -> Sequence: # type: ignore - pass - - @abstractmethod - def list_collections(self) -> Sequence: # type: ignore - pass - - @abstractmethod - def update_collection( - self, - id: UUID, - new_name: Optional[str] = None, - new_metadata: Optional[Metadata] = None, - ) -> None: - pass - - @abstractmethod - def delete_collection(self, name: str) -> None: - pass - - @abstractmethod - def get_collection_uuid_from_name(self, collection_name: str) -> UUID: - pass - - @abstractmethod - def add( - self, - collection_uuid: UUID, - embeddings: Embeddings, - metadatas: Optional[Metadatas], - documents: Optional[Documents], - ids: List[str], - ) -> List[UUID]: - pass - - @abstractmethod - def add_incremental( - self, collection_uuid: UUID, ids: List[UUID], embeddings: Embeddings - ) -> None: - pass - - @abstractmethod - def get( - self, - where: Where = {}, - collection_name: Optional[str] = None, - collection_uuid: Optional[UUID] = None, - ids: Optional[IDs] = None, - sort: Optional[str] = None, - limit: Optional[int] = None, - offset: Optional[int] = None, - where_document: WhereDocument = {}, - columns: Optional[List[str]] = None, - ) -> Sequence: # type: ignore - pass - - @abstractmethod - def update( - self, - collection_uuid: UUID, - ids: IDs, - embeddings: Optional[Embeddings] = None, - metadatas: Optional[Metadatas] = None, - documents: Optional[Documents] = None, - ) -> bool: - pass - - @abstractmethod - def count(self, collection_id: UUID) -> int: - pass - - @abstractmethod - def delete( - self, - where: Where = {}, - collection_uuid: Optional[UUID] = None, - ids: Optional[IDs] = None, - where_document: WhereDocument = {}, - ) -> List[str]: - pass - - @abstractmethod - @override - def reset(self) -> None: - pass - - @abstractmethod - def get_nearest_neighbors( - self, - collection_uuid: UUID, - where: Where = {}, - embeddings: Optional[Embeddings] = None, - n_results: int = 10, - where_document: WhereDocument = {}, - ) -> Tuple[List[List[UUID]], npt.NDArray]: - pass - - @abstractmethod - def get_by_ids( - self, uuids: List[UUID], columns: Optional[List[str]] = None - ) -> Sequence: # type: ignore - pass - - @abstractmethod - def raw_sql(self, raw_sql): # type: ignore - pass - - @abstractmethod - def create_index(self, collection_uuid: UUID): # type: ignore - pass - - @abstractmethod - def persist(self) -> None: - pass diff --git a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/comm/__init__.py b/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/comm/__init__.py deleted file mode 100644 index eb8f9048b3ac9f81cbc6be23e977731c984c876a..0000000000000000000000000000000000000000 --- a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/comm/__init__.py +++ /dev/null @@ -1,50 +0,0 @@ -"""Comm package. - -Copyright (c) IPython Development Team. -Distributed under the terms of the Modified BSD License. - -This package provides a way to register a Kernel Comm implementation, as per -the Jupyter kernel protocol. 
-It also provides a base Comm implementation and a default CommManager for the IPython case. -""" - -from .base_comm import BaseComm, CommManager - -__version__ = "0.1.3" -__all__ = [ - "create_comm", - "get_comm_manager", - "__version__", -] - -_comm_manager = None - - -class DummyComm(BaseComm): - def publish_msg(self, msg_type, data=None, metadata=None, buffers=None, **keys): - pass - - -def _create_comm(*args, **kwargs): - """Create a Comm. - - This method is intended to be replaced, so that it returns your Comm instance. - """ - return DummyComm(*args, **kwargs) - - -def _get_comm_manager(): - """Get the current Comm manager, creates one if there is none. - - This method is intended to be replaced if needed (if you want to manage multiple CommManagers). - """ - global _comm_manager - - if _comm_manager is None: - _comm_manager = CommManager() - - return _comm_manager - - -create_comm = _create_comm -get_comm_manager = _get_comm_manager diff --git a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/debugpy/_vendored/pydevd/_pydev_bundle/_pydev_sys_patch.py b/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/debugpy/_vendored/pydevd/_pydev_bundle/_pydev_sys_patch.py deleted file mode 100644 index f5067509f4207370dc868da943113695ec67926a..0000000000000000000000000000000000000000 --- a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/debugpy/_vendored/pydevd/_pydev_bundle/_pydev_sys_patch.py +++ /dev/null @@ -1,73 +0,0 @@ -import sys - - -def patch_sys_module(): - - def patched_exc_info(fun): - - def pydev_debugger_exc_info(): - type, value, traceback = fun() - if type == ImportError: - # we should not show frame added by plugin_import call - if traceback and hasattr(traceback, "tb_next"): - return type, value, traceback.tb_next - return type, value, traceback - - return pydev_debugger_exc_info - - system_exc_info = sys.exc_info - sys.exc_info = patched_exc_info(system_exc_info) - if not hasattr(sys, "system_exc_info"): - sys.system_exc_info = system_exc_info - - -def patched_reload(orig_reload): - - def pydev_debugger_reload(module): - orig_reload(module) - if module.__name__ == "sys": - # if sys module was reloaded we should patch it again - patch_sys_module() - - return pydev_debugger_reload - - -def patch_reload(): - import builtins # Py3 - - if hasattr(builtins, "reload"): - sys.builtin_orig_reload = builtins.reload - builtins.reload = patched_reload(sys.builtin_orig_reload) # @UndefinedVariable - try: - import imp - sys.imp_orig_reload = imp.reload - imp.reload = patched_reload(sys.imp_orig_reload) # @UndefinedVariable - except: - pass - else: - try: - import importlib - sys.importlib_orig_reload = importlib.reload # @UndefinedVariable - importlib.reload = patched_reload(sys.importlib_orig_reload) # @UndefinedVariable - except: - pass - - del builtins - - -def cancel_patches_in_sys_module(): - sys.exc_info = sys.system_exc_info # @UndefinedVariable - import builtins # Py3 - - if hasattr(sys, "builtin_orig_reload"): - builtins.reload = sys.builtin_orig_reload - - if hasattr(sys, "imp_orig_reload"): - import imp - imp.reload = sys.imp_orig_reload - - if hasattr(sys, "importlib_orig_reload"): - import importlib - importlib.reload = sys.importlib_orig_reload - - del builtins diff --git a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/debugpy/_vendored/pydevd/pydevd.py b/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/debugpy/_vendored/pydevd/pydevd.py deleted file mode 100644 index 
ae865b1614ad2fe26a07ed097edf741cabe4e3e4..0000000000000000000000000000000000000000 --- a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/debugpy/_vendored/pydevd/pydevd.py +++ /dev/null @@ -1,3489 +0,0 @@ -''' -Entry point module (keep at root): - -This module starts the debugger. -''' -import sys # @NoMove -if sys.version_info[:2] < (3, 6): - raise RuntimeError('The PyDev.Debugger requires Python 3.6 onwards to be run. If you need to use an older Python version, use an older version of the debugger.') -import os - -try: - # Just empty packages to check if they're in the PYTHONPATH. - import _pydev_bundle -except ImportError: - # On the first import of a pydevd module, add pydevd itself to the PYTHONPATH - # if its dependencies cannot be imported. - sys.path.append(os.path.dirname(os.path.abspath(__file__))) - import _pydev_bundle - -# Import this first as it'll check for shadowed modules and will make sure that we import -# things as needed for gevent. -from _pydevd_bundle import pydevd_constants - -import atexit -import dis -import io -from collections import defaultdict -from contextlib import contextmanager -from functools import partial -import itertools -import traceback -import weakref -import getpass as getpass_mod -import functools - -import pydevd_file_utils -from _pydev_bundle import pydev_imports, pydev_log -from _pydev_bundle._pydev_filesystem_encoding import getfilesystemencoding -from _pydev_bundle.pydev_is_thread_alive import is_thread_alive -from _pydev_bundle.pydev_override import overrides -from _pydev_bundle._pydev_saved_modules import threading, time, thread -from _pydevd_bundle import pydevd_extension_utils, pydevd_frame_utils -from _pydevd_bundle.pydevd_filtering import FilesFiltering, glob_matches_path -from _pydevd_bundle import pydevd_io, pydevd_vm_type, pydevd_defaults -from _pydevd_bundle import pydevd_utils -from _pydevd_bundle import pydevd_runpy -from _pydev_bundle.pydev_console_utils import DebugConsoleStdIn -from _pydevd_bundle.pydevd_additional_thread_info import set_additional_thread_info -from _pydevd_bundle.pydevd_breakpoints import ExceptionBreakpoint, get_exception_breakpoint -from _pydevd_bundle.pydevd_comm_constants import (CMD_THREAD_SUSPEND, CMD_STEP_INTO, CMD_SET_BREAK, - CMD_STEP_INTO_MY_CODE, CMD_STEP_OVER, CMD_SMART_STEP_INTO, CMD_RUN_TO_LINE, - CMD_SET_NEXT_STATEMENT, CMD_STEP_RETURN, CMD_ADD_EXCEPTION_BREAK, CMD_STEP_RETURN_MY_CODE, - CMD_STEP_OVER_MY_CODE, constant_to_str, CMD_STEP_INTO_COROUTINE) -from _pydevd_bundle.pydevd_constants import (get_thread_id, get_current_thread_id, - DebugInfoHolder, PYTHON_SUSPEND, STATE_SUSPEND, STATE_RUN, get_frame, - clear_cached_thread_id, INTERACTIVE_MODE_AVAILABLE, SHOW_DEBUG_INFO_ENV, NULL, - NO_FTRACE, IS_IRONPYTHON, JSON_PROTOCOL, IS_CPYTHON, HTTP_JSON_PROTOCOL, USE_CUSTOM_SYS_CURRENT_FRAMES_MAP, call_only_once, - ForkSafeLock, IGNORE_BASENAMES_STARTING_WITH, EXCEPTION_TYPE_UNHANDLED, SUPPORT_GEVENT, - PYDEVD_IPYTHON_COMPATIBLE_DEBUGGING, PYDEVD_IPYTHON_CONTEXT) -from _pydevd_bundle.pydevd_defaults import PydevdCustomization # Note: import alias used on pydev_monkey. 
-from _pydevd_bundle.pydevd_custom_frames import CustomFramesContainer, custom_frames_container_init -from _pydevd_bundle.pydevd_dont_trace_files import DONT_TRACE, PYDEV_FILE, LIB_FILE, DONT_TRACE_DIRS -from _pydevd_bundle.pydevd_extension_api import DebuggerEventHandler -from _pydevd_bundle.pydevd_frame_utils import add_exception_to_frame, remove_exception_from_frame -from _pydevd_bundle.pydevd_net_command_factory_xml import NetCommandFactory -from _pydevd_bundle.pydevd_trace_dispatch import ( - trace_dispatch as _trace_dispatch, global_cache_skips, global_cache_frame_skips, fix_top_level_trace_and_get_trace_func, USING_CYTHON) -from _pydevd_bundle.pydevd_utils import save_main_module, is_current_thread_main_thread, \ - import_attr_from_module -from _pydevd_frame_eval.pydevd_frame_eval_main import ( - frame_eval_func, dummy_trace_dispatch, USING_FRAME_EVAL) -import pydev_ipython # @UnusedImport -from _pydevd_bundle.pydevd_source_mapping import SourceMapping -from _pydevd_bundle.pydevd_concurrency_analyser.pydevd_concurrency_logger import ThreadingLogger, AsyncioLogger, send_concurrency_message, cur_time -from _pydevd_bundle.pydevd_concurrency_analyser.pydevd_thread_wrappers import wrap_threads -from pydevd_file_utils import get_abs_path_real_path_and_base_from_frame, NORM_PATHS_AND_BASE_CONTAINER -from pydevd_file_utils import get_fullname, get_package_dir -from os.path import abspath as os_path_abspath -import pydevd_tracing -from _pydevd_bundle.pydevd_comm import (InternalThreadCommand, InternalThreadCommandForAnyThread, - create_server_socket, FSNotifyThread) -from _pydevd_bundle.pydevd_comm import(InternalConsoleExec, - _queue, ReaderThread, GetGlobalDebugger, get_global_debugger, - set_global_debugger, WriterThread, - start_client, start_server, InternalGetBreakpointException, InternalSendCurrExceptionTrace, - InternalSendCurrExceptionTraceProceeded) -from _pydevd_bundle.pydevd_daemon_thread import PyDBDaemonThread, mark_as_pydevd_daemon_thread -from _pydevd_bundle.pydevd_process_net_command_json import PyDevJsonCommandProcessor -from _pydevd_bundle.pydevd_process_net_command import process_net_command -from _pydevd_bundle.pydevd_net_command import NetCommand, NULL_NET_COMMAND - -from _pydevd_bundle.pydevd_breakpoints import stop_on_unhandled_exception -from _pydevd_bundle.pydevd_collect_bytecode_info import collect_try_except_info, collect_return_info, collect_try_except_info_from_source -from _pydevd_bundle.pydevd_suspended_frames import SuspendedFramesManager -from socket import SHUT_RDWR -from _pydevd_bundle.pydevd_api import PyDevdAPI -from _pydevd_bundle.pydevd_timeout import TimeoutTracker -from _pydevd_bundle.pydevd_thread_lifecycle import suspend_all_threads, mark_thread_suspended - -pydevd_gevent_integration = None - -if SUPPORT_GEVENT: - try: - from _pydevd_bundle import pydevd_gevent_integration - except: - pydev_log.exception( - 'pydevd: GEVENT_SUPPORT is set but gevent is not available in the environment.\n' - 'Please unset GEVENT_SUPPORT from the environment variables or install gevent.') - else: - pydevd_gevent_integration.log_gevent_debug_info() - -if USE_CUSTOM_SYS_CURRENT_FRAMES_MAP: - from _pydevd_bundle.pydevd_constants import constructed_tid_to_last_frame - -__version_info__ = (2, 9, 5) -__version_info_str__ = [] -for v in __version_info__: - __version_info_str__.append(str(v)) - -__version__ = '.'.join(__version_info_str__) - -# IMPORTANT: pydevd_constants must be the 1st thing defined because it'll keep a reference to the original sys._getframe - - -def 
install_breakpointhook(pydevd_breakpointhook=None): - if pydevd_breakpointhook is None: - - def pydevd_breakpointhook(*args, **kwargs): - hookname = os.getenv('PYTHONBREAKPOINT') - if ( - hookname is not None - and len(hookname) > 0 - and hasattr(sys, '__breakpointhook__') - and sys.__breakpointhook__ != pydevd_breakpointhook - ): - sys.__breakpointhook__(*args, **kwargs) - else: - settrace(*args, **kwargs) - - if sys.version_info[0:2] >= (3, 7): - # There are some choices on how to provide the breakpoint hook. Namely, we can provide a - # PYTHONBREAKPOINT which provides the import path for a method to be executed or we - # can override sys.breakpointhook. - # pydevd overrides sys.breakpointhook instead of providing an environment variable because - # it's possible that the debugger starts the user program but is not available in the - # PYTHONPATH (and would thus fail to be imported if PYTHONBREAKPOINT was set to pydevd.settrace). - # Note that the implementation still takes PYTHONBREAKPOINT in account (so, if it was provided - # by someone else, it'd still work). - sys.breakpointhook = pydevd_breakpointhook - else: - if sys.version_info[0] >= 3: - import builtins as __builtin__ # Py3 noqa - else: - import __builtin__ # noqa - - # In older versions, breakpoint() isn't really available, so, install the hook directly - # in the builtins. - __builtin__.breakpoint = pydevd_breakpointhook - sys.__breakpointhook__ = pydevd_breakpointhook - - -# Install the breakpoint hook at import time. -install_breakpointhook() - -from _pydevd_bundle.pydevd_plugin_utils import PluginManager - -threadingEnumerate = threading.enumerate -threadingCurrentThread = threading.current_thread - -try: - 'dummy'.encode('utf-8') # Added because otherwise Jython 2.2.1 wasn't finding the encoding (if it wasn't loaded in the main thread). -except: - pass - -_global_redirect_stdout_to_server = False -_global_redirect_stderr_to_server = False - -file_system_encoding = getfilesystemencoding() - -_CACHE_FILE_TYPE = {} - -pydev_log.debug('Using GEVENT_SUPPORT: %s', pydevd_constants.SUPPORT_GEVENT) -pydev_log.debug('Using GEVENT_SHOW_PAUSED_GREENLETS: %s', pydevd_constants.GEVENT_SHOW_PAUSED_GREENLETS) -pydev_log.debug('pydevd __file__: %s', os.path.abspath(__file__)) -pydev_log.debug('Using PYDEVD_IPYTHON_COMPATIBLE_DEBUGGING: %s', pydevd_constants.PYDEVD_IPYTHON_COMPATIBLE_DEBUGGING) -if pydevd_constants.PYDEVD_IPYTHON_COMPATIBLE_DEBUGGING: - pydev_log.debug('PYDEVD_IPYTHON_CONTEXT: %s', pydevd_constants.PYDEVD_IPYTHON_CONTEXT) - - -#======================================================================================================================= -# PyDBCommandThread -#======================================================================================================================= -class PyDBCommandThread(PyDBDaemonThread): - - def __init__(self, py_db): - PyDBDaemonThread.__init__(self, py_db) - self._py_db_command_thread_event = py_db._py_db_command_thread_event - self.name = 'pydevd.CommandThread' - - @overrides(PyDBDaemonThread._on_run) - def _on_run(self): - # Delay a bit this initialization to wait for the main program to start. 
- self._py_db_command_thread_event.wait(0.3) - - if self._kill_received: - return - - try: - while not self._kill_received: - try: - self.py_db.process_internal_commands() - except: - pydev_log.info('Finishing debug communication...(2)') - self._py_db_command_thread_event.clear() - self._py_db_command_thread_event.wait(0.3) - except: - try: - pydev_log.debug(sys.exc_info()[0]) - except: - # In interpreter shutdown many things can go wrong (any module variables may - # be None, streams can be closed, etc). - pass - - # only got this error in interpreter shutdown - # pydev_log.info('Finishing debug communication...(3)') - - @overrides(PyDBDaemonThread.do_kill_pydev_thread) - def do_kill_pydev_thread(self): - PyDBDaemonThread.do_kill_pydev_thread(self) - # Set flag so that it can exit before the usual timeout. - self._py_db_command_thread_event.set() - - -#======================================================================================================================= -# CheckAliveThread -# Non-daemon thread: guarantees that all data is written even if program is finished -#======================================================================================================================= -class CheckAliveThread(PyDBDaemonThread): - - def __init__(self, py_db): - PyDBDaemonThread.__init__(self, py_db) - self.name = 'pydevd.CheckAliveThread' - self.daemon = False - self._wait_event = threading.Event() - - @overrides(PyDBDaemonThread._on_run) - def _on_run(self): - py_db = self.py_db - - def can_exit(): - with py_db._main_lock: - # Note: it's important to get the lock besides checking that it's empty (this - # means that we're not in the middle of some command processing). - writer = py_db.writer - writer_empty = writer is not None and writer.empty() - - return not py_db.has_user_threads_alive() and writer_empty - - try: - while not self._kill_received: - self._wait_event.wait(0.3) - if can_exit(): - break - - py_db.check_output_redirect() - - if can_exit(): - pydev_log.debug("No threads alive, finishing debug session") - py_db.dispose_and_kill_all_pydevd_threads() - except: - pydev_log.exception() - - def join(self, timeout=None): - # If someone tries to join this thread, mark it to be killed. - # This is the case for CherryPy when auto-reload is turned on. - self.do_kill_pydev_thread() - PyDBDaemonThread.join(self, timeout=timeout) - - @overrides(PyDBDaemonThread.do_kill_pydev_thread) - def do_kill_pydev_thread(self): - PyDBDaemonThread.do_kill_pydev_thread(self) - # Set flag so that it can exit before the usual timeout. - self._wait_event.set() - - -class AbstractSingleNotificationBehavior(object): - ''' - The basic usage should be: - - # Increment the request time for the suspend. - single_notification_behavior.increment_suspend_time() - - # Notify that this is a pause request (when a pause, not a breakpoint). - single_notification_behavior.on_pause() - - # Mark threads to be suspended. - set_suspend(...) - - # On do_wait_suspend, use notify_thread_suspended: - def do_wait_suspend(...): - with single_notification_behavior.notify_thread_suspended(thread_id, thread, reason): - ... 
- ''' - - __slots__ = [ - '_last_resume_notification_time', - '_last_suspend_notification_time', - '_lock', - '_next_request_time', - '_suspend_time_request', - '_suspended_thread_id_to_thread', - '_pause_requested', - '_py_db', - ] - - NOTIFY_OF_PAUSE_TIMEOUT = .5 - - def __init__(self, py_db): - self._py_db = weakref.ref(py_db) - self._next_request_time = partial(next, itertools.count()) - self._last_suspend_notification_time = -1 - self._last_resume_notification_time = -1 - self._suspend_time_request = self._next_request_time() - self._lock = thread.allocate_lock() - self._suspended_thread_id_to_thread = {} - self._pause_requested = False - - def send_suspend_notification(self, thread_id, thread, stop_reason): - raise AssertionError('abstract: subclasses must override.') - - def send_resume_notification(self, thread_id): - raise AssertionError('abstract: subclasses must override.') - - def increment_suspend_time(self): - with self._lock: - self._suspend_time_request = self._next_request_time() - - def on_pause(self): - # Upon a pause, we should force sending new suspend notifications - # if no notification is sent after some time and there's some thread already stopped. - with self._lock: - self._pause_requested = True - global_suspend_time = self._suspend_time_request - py_db = self._py_db() - if py_db is not None: - py_db.timeout_tracker.call_on_timeout( - self.NOTIFY_OF_PAUSE_TIMEOUT, - self._notify_after_timeout, - kwargs={'global_suspend_time': global_suspend_time} - ) - - def _notify_after_timeout(self, global_suspend_time): - with self._lock: - if self._suspended_thread_id_to_thread: - if global_suspend_time > self._last_suspend_notification_time: - self._last_suspend_notification_time = global_suspend_time - # Notify about any thread which is currently suspended. - pydev_log.info('Sending suspend notification after timeout.') - thread_id, thread = next(iter(self._suspended_thread_id_to_thread.items())) - self.send_suspend_notification(thread_id, thread, CMD_THREAD_SUSPEND) - - def on_thread_suspend(self, thread_id, thread, stop_reason): - with self._lock: - pause_requested = self._pause_requested - if pause_requested: - # When a suspend notification is sent, reset the pause flag. - self._pause_requested = False - - self._suspended_thread_id_to_thread[thread_id] = thread - - # CMD_THREAD_SUSPEND should always be a side-effect of a break, so, only - # issue for a CMD_THREAD_SUSPEND if a pause is pending. - if stop_reason != CMD_THREAD_SUSPEND or pause_requested: - if self._suspend_time_request > self._last_suspend_notification_time: - pydev_log.info('Sending suspend notification.') - self._last_suspend_notification_time = self._suspend_time_request - self.send_suspend_notification(thread_id, thread, stop_reason) - else: - pydev_log.info( - 'Suspend not sent (it was already sent). Last suspend % <= Last resume %s', - self._last_suspend_notification_time, - self._last_resume_notification_time, - ) - else: - pydev_log.info( - 'Suspend not sent because stop reason is thread suspend and pause was not requested.', - ) - - def on_thread_resume(self, thread_id, thread): - # on resume (step, continue all): - with self._lock: - self._suspended_thread_id_to_thread.pop(thread_id) - if self._last_resume_notification_time < self._last_suspend_notification_time: - pydev_log.info('Sending resume notification.') - self._last_resume_notification_time = self._last_suspend_notification_time - self.send_resume_notification(thread_id) - else: - pydev_log.info( - 'Resume not sent (it was already sent). 
Last resume %s >= Last suspend %s', - self._last_resume_notification_time, - self._last_suspend_notification_time, - ) - - @contextmanager - def notify_thread_suspended(self, thread_id, thread, stop_reason): - self.on_thread_suspend(thread_id, thread, stop_reason) - try: - yield # At this point the thread must be actually suspended. - finally: - self.on_thread_resume(thread_id, thread) - - -class ThreadsSuspendedSingleNotification(AbstractSingleNotificationBehavior): - - __slots__ = AbstractSingleNotificationBehavior.__slots__ + [ - 'multi_threads_single_notification', '_callbacks', '_callbacks_lock'] - - def __init__(self, py_db): - AbstractSingleNotificationBehavior.__init__(self, py_db) - # If True, pydevd will send a single notification when all threads are suspended/resumed. - self.multi_threads_single_notification = False - self._callbacks_lock = threading.Lock() - self._callbacks = [] - - def add_on_resumed_callback(self, callback): - with self._callbacks_lock: - self._callbacks.append(callback) - - @overrides(AbstractSingleNotificationBehavior.send_resume_notification) - def send_resume_notification(self, thread_id): - py_db = self._py_db() - if py_db is not None: - py_db.writer.add_command(py_db.cmd_factory.make_thread_resume_single_notification(thread_id)) - - with self._callbacks_lock: - callbacks = self._callbacks - self._callbacks = [] - - for callback in callbacks: - callback() - - @overrides(AbstractSingleNotificationBehavior.send_suspend_notification) - def send_suspend_notification(self, thread_id, thread, stop_reason): - py_db = self._py_db() - if py_db is not None: - py_db.writer.add_command( - py_db.cmd_factory.make_thread_suspend_single_notification( - py_db, thread_id, thread, stop_reason)) - - @overrides(AbstractSingleNotificationBehavior.notify_thread_suspended) - @contextmanager - def notify_thread_suspended(self, thread_id, thread, stop_reason): - if self.multi_threads_single_notification: - with AbstractSingleNotificationBehavior.notify_thread_suspended(self, thread_id, thread, stop_reason): - yield - else: - yield - - -class _Authentication(object): - - __slots__ = ['access_token', 'client_access_token', '_authenticated', '_wrong_attempts'] - - def __init__(self): - # A token to be send in the command line or through the settrace api -- when such token - # is given, the first message sent to the IDE must pass the same token to authenticate. - # Note that if a disconnect is sent, the same message must be resent to authenticate. - self.access_token = None - - # This token is the one that the client requires to accept a connection from pydevd - # (it's stored here and just passed back when required, it's not used internally - # for anything else). - self.client_access_token = None - - self._authenticated = None - - self._wrong_attempts = 0 - - def is_authenticated(self): - if self._authenticated is None: - return self.access_token is None - return self._authenticated - - def login(self, access_token): - if self._wrong_attempts >= 10: # A user can fail to authenticate at most 10 times. - return - - self._authenticated = access_token == self.access_token - if not self._authenticated: - self._wrong_attempts += 1 - else: - self._wrong_attempts = 0 - - def logout(self): - self._authenticated = None - self._wrong_attempts = 0 - - -class PyDB(object): - """ Main debugging class - Lots of stuff going on here: - - PyDB starts two threads on startup that connect to remote debugger (RDB) - The threads continuously read & write commands to RDB. 
- PyDB communicates with these threads through command queues. - Every RDB command is processed by calling process_net_command. - Every PyDB net command is sent to the net by posting NetCommand to WriterThread queue - - Some commands need to be executed on the right thread (suspend/resume & friends) - These are placed on the internal command queue. - """ - - # Direct child pids which should not be terminated when terminating processes. - # Note: class instance because it should outlive PyDB instances. - dont_terminate_child_pids = set() - - def __init__(self, set_as_global=True): - if set_as_global: - pydevd_tracing.replace_sys_set_trace_func() - - self.authentication = _Authentication() - - self.reader = None - self.writer = None - self._fsnotify_thread = None - self.created_pydb_daemon_threads = {} - self._waiting_for_connection_thread = None - self._on_configuration_done_event = threading.Event() - self.check_alive_thread = None - self.py_db_command_thread = None - self.quitting = None - self.cmd_factory = NetCommandFactory() - self._cmd_queue = defaultdict(_queue.Queue) # Key is thread id or '*', value is Queue - self.suspended_frames_manager = SuspendedFramesManager() - self._files_filtering = FilesFiltering() - self.timeout_tracker = TimeoutTracker(self) - - # Note: when the source mapping is changed we also have to clear the file types cache - # (because if a given file is a part of the project or not may depend on it being - # defined in the source mapping). - self.source_mapping = SourceMapping(on_source_mapping_changed=self._clear_filters_caches) - - # Determines whether we should terminate child processes when asked to terminate. - self.terminate_child_processes = True - - # Determines whether we should try to do a soft terminate (i.e.: interrupt the main - # thread with a KeyboardInterrupt). - self.terminate_keyboard_interrupt = False - - # Set to True after a keyboard interrupt is requested the first time. - self.keyboard_interrupt_requested = False - - # These are the breakpoints received by the PyDevdAPI. They are meant to store - # the breakpoints in the api -- its actual contents are managed by the api. - self.api_received_breakpoints = {} - - # These are the breakpoints meant to be consumed during runtime. - self.breakpoints = {} - self.function_breakpoint_name_to_breakpoint = {} - - # Set communication protocol - PyDevdAPI().set_protocol(self, 0, PydevdCustomization.DEFAULT_PROTOCOL) - - self.variable_presentation = PyDevdAPI.VariablePresentation() - - # mtime to be raised when breakpoints change - self.mtime = 0 - - self.file_to_id_to_line_breakpoint = {} - self.file_to_id_to_plugin_breakpoint = {} - - # Note: breakpoints dict should not be mutated: a copy should be created - # and later it should be assigned back (to prevent concurrency issues). 
- self.break_on_uncaught_exceptions = {} - self.break_on_caught_exceptions = {} - self.break_on_user_uncaught_exceptions = {} - - self.ready_to_run = False - self._main_lock = thread.allocate_lock() - self._lock_running_thread_ids = thread.allocate_lock() - self._lock_create_fs_notify = thread.allocate_lock() - self._py_db_command_thread_event = threading.Event() - if set_as_global: - CustomFramesContainer._py_db_command_thread_event = self._py_db_command_thread_event - - self.pydb_disposed = False - self._wait_for_threads_to_finish_called = False - self._wait_for_threads_to_finish_called_lock = thread.allocate_lock() - self._wait_for_threads_to_finish_called_event = threading.Event() - - self.terminate_requested = False - self._disposed_lock = thread.allocate_lock() - self.signature_factory = None - self.SetTrace = pydevd_tracing.SetTrace - self.skip_on_exceptions_thrown_in_same_context = False - self.ignore_exceptions_thrown_in_lines_with_ignore_exception = True - - # Suspend debugger even if breakpoint condition raises an exception. - # May be changed with CMD_PYDEVD_JSON_CONFIG. - self.skip_suspend_on_breakpoint_exception = () # By default suspend on any Exception. - self.skip_print_breakpoint_exception = () # By default print on any Exception. - - # By default user can step into properties getter/setter/deleter methods - self.disable_property_trace = False - self.disable_property_getter_trace = False - self.disable_property_setter_trace = False - self.disable_property_deleter_trace = False - - # this is a dict of thread ids pointing to thread ids. Whenever a command is passed to the java end that - # acknowledges that a thread was created, the thread id should be passed here -- and if at some time we do not - # find that thread alive anymore, we must remove it from this list and make the java side know that the thread - # was killed. - self._running_thread_ids = {} - # Note: also access '_enable_thread_notifications' with '_lock_running_thread_ids' - self._enable_thread_notifications = False - - self._set_breakpoints_with_id = False - - # This attribute holds the file-> lines which have an @IgnoreException. - self.filename_to_lines_where_exceptions_are_ignored = {} - - # working with plugins (lazily initialized) - self.plugin = None - self.has_plugin_line_breaks = False - self.has_plugin_exception_breaks = False - self.thread_analyser = None - self.asyncio_analyser = None - - # The GUI event loop that's going to run. - # Possible values: - # matplotlib - Whatever GUI backend matplotlib is using. - # 'wx'/'qt'/'none'/... - GUI toolkits that have bulitin support. See pydevd_ipython/inputhook.py:24. - # Other - A custom function that'll be imported and run. 
- self._gui_event_loop = 'matplotlib' - self._installed_gui_support = False - self.gui_in_use = False - - # GUI event loop support in debugger - self.activate_gui_function = None - - # matplotlib support in debugger and debug console - self.mpl_hooks_in_debug_console = False - self.mpl_modules_for_patching = {} - - self._filename_to_not_in_scope = {} - self.first_breakpoint_reached = False - self._exclude_filters_enabled = self._files_filtering.use_exclude_filters() - self._is_libraries_filter_enabled = self._files_filtering.use_libraries_filter() - self.is_files_filter_enabled = self._exclude_filters_enabled or self._is_libraries_filter_enabled - self.show_return_values = False - self.remove_return_values_flag = False - self.redirect_output = False - # Note that besides the `redirect_output` flag, we also need to consider that someone - # else is already redirecting (i.e.: debugpy). - self.is_output_redirected = False - - # this flag disables frame evaluation even if it's available - self.use_frame_eval = True - - # If True, pydevd will send a single notification when all threads are suspended/resumed. - self._threads_suspended_single_notification = ThreadsSuspendedSingleNotification(self) - - # If True a step command will do a step in one thread and will also resume all other threads. - self.stepping_resumes_all_threads = False - - self._local_thread_trace_func = threading.local() - - self._server_socket_ready_event = threading.Event() - self._server_socket_name = None - - # Bind many locals to the debugger because upon teardown those names may become None - # in the namespace (and thus can't be relied upon unless the reference was previously - # saved). - if IS_IRONPYTHON: - - # A partial() cannot be used in IronPython for sys.settrace. - def new_trace_dispatch(frame, event, arg): - return _trace_dispatch(self, frame, event, arg) - - self.trace_dispatch = new_trace_dispatch - else: - self.trace_dispatch = partial(_trace_dispatch, self) - self.fix_top_level_trace_and_get_trace_func = fix_top_level_trace_and_get_trace_func - self.frame_eval_func = frame_eval_func - self.dummy_trace_dispatch = dummy_trace_dispatch - - # Note: this is different from pydevd_constants.thread_get_ident because we want Jython - # to be None here because it also doesn't have threading._active. - try: - self.threading_get_ident = threading.get_ident # Python 3 - self.threading_active = threading._active - except: - try: - self.threading_get_ident = threading._get_ident # Python 2 noqa - self.threading_active = threading._active - except: - self.threading_get_ident = None # Jython - self.threading_active = None - self.threading_current_thread = threading.currentThread - self.set_additional_thread_info = set_additional_thread_info - self.stop_on_unhandled_exception = stop_on_unhandled_exception - self.collect_return_info = collect_return_info - self.get_exception_breakpoint = get_exception_breakpoint - self._dont_trace_get_file_type = DONT_TRACE.get - self._dont_trace_dirs_get_file_type = DONT_TRACE_DIRS.get - self.PYDEV_FILE = PYDEV_FILE - self.LIB_FILE = LIB_FILE - - self._in_project_scope_cache = {} - self._exclude_by_filter_cache = {} - self._apply_filter_cache = {} - self._ignore_system_exit_codes = set() - - # DAP related - self._dap_messages_listeners = [] - - if set_as_global: - # Set as the global instance only after it's initialized. - set_global_debugger(self) - - pydevd_defaults.on_pydb_init(self) - # Stop the tracing as the last thing before the actual shutdown for a clean exit. 
- atexit.register(stoptrace) - - def collect_try_except_info(self, code_obj): - filename = code_obj.co_filename - try: - if os.path.exists(filename): - pydev_log.debug('Collecting try..except info from source for %s', filename) - try_except_infos = collect_try_except_info_from_source(filename) - if try_except_infos: - # Filter for the current function - max_line = -1 - min_line = sys.maxsize - for _, line in dis.findlinestarts(code_obj): - - if line > max_line: - max_line = line - - if line < min_line: - min_line = line - - try_except_infos = [x for x in try_except_infos if min_line <= x.try_line <= max_line] - return try_except_infos - - except: - pydev_log.exception('Error collecting try..except info from source (%s)', filename) - - pydev_log.debug('Collecting try..except info from bytecode for %s', filename) - return collect_try_except_info(code_obj) - - def setup_auto_reload_watcher(self, enable_auto_reload, watch_dirs, poll_target_time, exclude_patterns, include_patterns): - try: - with self._lock_create_fs_notify: - - # When setting up, dispose of the previous one (if any). - if self._fsnotify_thread is not None: - self._fsnotify_thread.do_kill_pydev_thread() - self._fsnotify_thread = None - - if not enable_auto_reload: - return - - exclude_patterns = tuple(exclude_patterns) - include_patterns = tuple(include_patterns) - - def accept_directory(absolute_filename, cache={}): - try: - return cache[absolute_filename] - except: - if absolute_filename and absolute_filename[-1] not in ('/', '\\'): - # I.e.: for directories we always end with '/' or '\\' so that - # we match exclusions such as "**/node_modules/**" - absolute_filename += os.path.sep - - # First include what we want - for include_pattern in include_patterns: - if glob_matches_path(absolute_filename, include_pattern): - cache[absolute_filename] = True - return True - - # Then exclude what we don't want - for exclude_pattern in exclude_patterns: - if glob_matches_path(absolute_filename, exclude_pattern): - cache[absolute_filename] = False - return False - - # By default track all directories not excluded. - cache[absolute_filename] = True - return True - - def accept_file(absolute_filename, cache={}): - try: - return cache[absolute_filename] - except: - # First include what we want - for include_pattern in include_patterns: - if glob_matches_path(absolute_filename, include_pattern): - cache[absolute_filename] = True - return True - - # Then exclude what we don't want - for exclude_pattern in exclude_patterns: - if glob_matches_path(absolute_filename, exclude_pattern): - cache[absolute_filename] = False - return False - - # By default don't track files not included. 
- cache[absolute_filename] = False - return False - - self._fsnotify_thread = FSNotifyThread(self, PyDevdAPI(), watch_dirs) - watcher = self._fsnotify_thread.watcher - watcher.accept_directory = accept_directory - watcher.accept_file = accept_file - - watcher.target_time_for_single_scan = poll_target_time - watcher.target_time_for_notification = poll_target_time - self._fsnotify_thread.start() - except: - pydev_log.exception('Error setting up auto-reload.') - - def get_arg_ppid(self): - try: - setup = SetupHolder.setup - if setup: - return int(setup.get('ppid', 0)) - except: - pydev_log.exception('Error getting ppid.') - - return 0 - - def wait_for_ready_to_run(self): - while not self.ready_to_run: - # busy wait until we receive run command - self.process_internal_commands() - self._py_db_command_thread_event.clear() - self._py_db_command_thread_event.wait(0.1) - - def on_initialize(self): - ''' - Note: only called when using the DAP (Debug Adapter Protocol). - ''' - self._on_configuration_done_event.clear() - - def on_configuration_done(self): - ''' - Note: only called when using the DAP (Debug Adapter Protocol). - ''' - self._on_configuration_done_event.set() - self._py_db_command_thread_event.set() - - def is_attached(self): - return self._on_configuration_done_event.is_set() - - def on_disconnect(self): - ''' - Note: only called when using the DAP (Debug Adapter Protocol). - ''' - self.authentication.logout() - self._on_configuration_done_event.clear() - - def set_ignore_system_exit_codes(self, ignore_system_exit_codes): - assert isinstance(ignore_system_exit_codes, (list, tuple, set)) - self._ignore_system_exit_codes = set(ignore_system_exit_codes) - - def ignore_system_exit_code(self, system_exit_exc): - if hasattr(system_exit_exc, 'code'): - return system_exit_exc.code in self._ignore_system_exit_codes - else: - return system_exit_exc in self._ignore_system_exit_codes - - def block_until_configuration_done(self, cancel=None): - if cancel is None: - cancel = NULL - - while not cancel.is_set(): - if self._on_configuration_done_event.is_set(): - cancel.set() # Set cancel to prevent reuse - return - - self.process_internal_commands() - self._py_db_command_thread_event.clear() - self._py_db_command_thread_event.wait(1 / 15.) - - def add_fake_frame(self, thread_id, frame_id, frame): - self.suspended_frames_manager.add_fake_frame(thread_id, frame_id, frame) - - def handle_breakpoint_condition(self, info, pybreakpoint, new_frame): - condition = pybreakpoint.condition - try: - if pybreakpoint.handle_hit_condition(new_frame): - return True - - if not condition: - return False - - return eval(condition, new_frame.f_globals, new_frame.f_locals) - except Exception as e: - if not isinstance(e, self.skip_print_breakpoint_exception): - stack_trace = io.StringIO() - etype, value, tb = sys.exc_info() - traceback.print_exception(etype, value, tb.tb_next, file=stack_trace) - - msg = 'Error while evaluating expression in conditional breakpoint: %s\n%s' % ( - condition, stack_trace.getvalue()) - api = PyDevdAPI() - api.send_error_message(self, msg) - - if not isinstance(e, self.skip_suspend_on_breakpoint_exception): - try: - # add exception_type and stacktrace into thread additional info - etype, value, tb = sys.exc_info() - error = ''.join(traceback.format_exception_only(etype, value)) - stack = traceback.extract_stack(f=tb.tb_frame.f_back) - - # On self.set_suspend(thread, CMD_SET_BREAK) this info will be - # sent to the client. 
- info.conditional_breakpoint_exception = \ - ('Condition:\n' + condition + '\n\nError:\n' + error, stack) - except: - pydev_log.exception() - return True - - return False - - finally: - etype, value, tb = None, None, None - - def handle_breakpoint_expression(self, pybreakpoint, info, new_frame): - try: - try: - val = eval(pybreakpoint.expression, new_frame.f_globals, new_frame.f_locals) - except: - val = sys.exc_info()[1] - finally: - if val is not None: - info.pydev_message = str(val) - - def _internal_get_file_type(self, abs_real_path_and_basename): - basename = abs_real_path_and_basename[-1] - if ( - basename.startswith(IGNORE_BASENAMES_STARTING_WITH) or - abs_real_path_and_basename[0].startswith(IGNORE_BASENAMES_STARTING_WITH) - ): - # Note: these are the files that are completely ignored (they aren't shown to the user - # as user nor library code as it's usually just noise in the frame stack). - return self.PYDEV_FILE - file_type = self._dont_trace_get_file_type(basename) - if file_type is not None: - return file_type - - if basename.startswith('__init__.py'): - # i.e.: ignore the __init__ files inside pydevd (the other - # files are ignored just by their name). - abs_path = abs_real_path_and_basename[0] - i = max(abs_path.rfind('/'), abs_path.rfind('\\')) - if i: - abs_path = abs_path[0:i] - i = max(abs_path.rfind('/'), abs_path.rfind('\\')) - if i: - dirname = abs_path[i + 1:] - # At this point, something as: - # "my_path\_pydev_runfiles\__init__.py" - # is now "_pydev_runfiles". - return self._dont_trace_dirs_get_file_type(dirname) - return None - - def dont_trace_external_files(self, abs_path): - ''' - :param abs_path: - The result from get_abs_path_real_path_and_base_from_file or - get_abs_path_real_path_and_base_from_frame. - - :return - True : - If files should NOT be traced. - - False: - If files should be traced. - ''' - # By default all external files are traced. Note: this function is expected to - # be changed for another function in PyDevdAPI.set_dont_trace_start_end_patterns. - return False - - def get_file_type(self, frame, abs_real_path_and_basename=None, _cache_file_type=_CACHE_FILE_TYPE): - ''' - :param abs_real_path_and_basename: - The result from get_abs_path_real_path_and_base_from_file or - get_abs_path_real_path_and_base_from_frame. - - :return - _pydevd_bundle.pydevd_dont_trace_files.PYDEV_FILE: - If it's a file internal to the debugger which shouldn't be - traced nor shown to the user. - - _pydevd_bundle.pydevd_dont_trace_files.LIB_FILE: - If it's a file in a library which shouldn't be traced. - - None: - If it's a regular user file which should be traced. - ''' - if abs_real_path_and_basename is None: - try: - # Make fast path faster! - abs_real_path_and_basename = NORM_PATHS_AND_BASE_CONTAINER[frame.f_code.co_filename] - except: - abs_real_path_and_basename = get_abs_path_real_path_and_base_from_frame(frame) - - # Note 1: we have to take into account that we may have files as '', and that in - # this case the cache key can't rely only on the filename. With the current cache, there's - # still a potential miss if 2 functions which have exactly the same content are compiled - # with '', but in practice as we only separate the one from python -c from the rest - # this shouldn't be a problem in practice. - - # Note 2: firstlineno added to make misses faster in the first comparison. - - # Note 3: this cache key is repeated in pydevd_frame_evaluator.pyx:get_func_code_info (for - # speedups). 
- cache_key = (frame.f_code.co_firstlineno, abs_real_path_and_basename[0], frame.f_code) - try: - return _cache_file_type[cache_key] - except: - if abs_real_path_and_basename[0] == '': - - # Consider it an untraceable file unless there's no back frame (ignoring - # internal files and runpy.py). - f = frame.f_back - while f is not None: - if (self.get_file_type(f) != self.PYDEV_FILE and - pydevd_file_utils.basename(f.f_code.co_filename) not in ('runpy.py', '')): - # We found some back frame that's not internal, which means we must consider - # this a library file. - # This is done because we only want to trace files as if they don't - # have any back frame (which is the case for python -c ...), for all other - # cases we don't want to trace them because we can't show the source to the - # user (at least for now...). - - # Note that we return as a LIB_FILE and not PYDEV_FILE because we still want - # to show it in the stack. - _cache_file_type[cache_key] = LIB_FILE - return LIB_FILE - f = f.f_back - else: - # This is a top-level file (used in python -c), so, trace it as usual... we - # still won't be able to show the sources, but some tests require this to work. - _cache_file_type[cache_key] = None - return None - - file_type = self._internal_get_file_type(abs_real_path_and_basename) - if file_type is None: - if self.dont_trace_external_files(abs_real_path_and_basename[0]): - file_type = PYDEV_FILE - - _cache_file_type[cache_key] = file_type - return file_type - - def is_cache_file_type_empty(self): - return not _CACHE_FILE_TYPE - - def get_cache_file_type(self, _cache=_CACHE_FILE_TYPE): # i.e.: Make it local. - return _cache - - def get_thread_local_trace_func(self): - try: - thread_trace_func = self._local_thread_trace_func.thread_trace_func - except AttributeError: - thread_trace_func = self.trace_dispatch - return thread_trace_func - - def enable_tracing(self, thread_trace_func=None, apply_to_all_threads=False): - ''' - Enables tracing. - - If in regular mode (tracing), will set the tracing function to the tracing - function for this thread -- by default it's `PyDB.trace_dispatch`, but after - `PyDB.enable_tracing` is called with a `thread_trace_func`, the given function will - be the default for the given thread. - - :param bool apply_to_all_threads: - If True we'll set the tracing function in all threads, not only in the current thread. - If False only the tracing for the current function should be changed. - In general apply_to_all_threads should only be true if this is the first time - this function is called on a multi-threaded program (either programmatically or attach - to pid). - ''' - if pydevd_gevent_integration is not None: - pydevd_gevent_integration.enable_gevent_integration() - - if self.frame_eval_func is not None: - self.frame_eval_func() - pydevd_tracing.SetTrace(self.dummy_trace_dispatch) - - if IS_CPYTHON and apply_to_all_threads: - pydevd_tracing.set_trace_to_threads(self.dummy_trace_dispatch) - return - - if apply_to_all_threads: - # If applying to all threads, don't use the local thread trace function. 
- assert thread_trace_func is not None - else: - if thread_trace_func is None: - thread_trace_func = self.get_thread_local_trace_func() - else: - self._local_thread_trace_func.thread_trace_func = thread_trace_func - - pydevd_tracing.SetTrace(thread_trace_func) - if IS_CPYTHON and apply_to_all_threads: - pydevd_tracing.set_trace_to_threads(thread_trace_func) - - def disable_tracing(self): - pydevd_tracing.SetTrace(None) - - def on_breakpoints_changed(self, removed=False): - ''' - When breakpoints change, we have to re-evaluate all the assumptions we've made so far. - ''' - if not self.ready_to_run: - # No need to do anything if we're still not running. - return - - self.mtime += 1 - if not removed: - # When removing breakpoints we can leave tracing as was, but if a breakpoint was added - # we have to reset the tracing for the existing functions to be re-evaluated. - self.set_tracing_for_untraced_contexts() - - def set_tracing_for_untraced_contexts(self): - # Enable the tracing for existing threads (because there may be frames being executed that - # are currently untraced). - - if IS_CPYTHON: - # Note: use sys._current_frames instead of threading.enumerate() because this way - # we also see C/C++ threads, not only the ones visible to the threading module. - tid_to_frame = sys._current_frames() - - ignore_thread_ids = set( - t.ident for t in threadingEnumerate() - if getattr(t, 'is_pydev_daemon_thread', False) or getattr(t, 'pydev_do_not_trace', False) - ) - - for thread_id, frame in tid_to_frame.items(): - if thread_id not in ignore_thread_ids: - self.set_trace_for_frame_and_parents(frame) - - else: - try: - threads = threadingEnumerate() - for t in threads: - if getattr(t, 'is_pydev_daemon_thread', False) or getattr(t, 'pydev_do_not_trace', False): - continue - - additional_info = set_additional_thread_info(t) - frame = additional_info.get_topmost_frame(t) - try: - if frame is not None: - self.set_trace_for_frame_and_parents(frame) - finally: - frame = None - finally: - frame = None - t = None - threads = None - additional_info = None - - @property - def multi_threads_single_notification(self): - return self._threads_suspended_single_notification.multi_threads_single_notification - - @multi_threads_single_notification.setter - def multi_threads_single_notification(self, notify): - self._threads_suspended_single_notification.multi_threads_single_notification = notify - - @property - def threads_suspended_single_notification(self): - return self._threads_suspended_single_notification - - def get_plugin_lazy_init(self): - if self.plugin is None: - self.plugin = PluginManager(self) - return self.plugin - - def in_project_scope(self, frame, absolute_filename=None): - ''' - Note: in general this method should not be used (apply_files_filter should be used - in most cases as it also handles the project scope check). - - :param frame: - The frame we want to check. - - :param absolute_filename: - Must be the result from get_abs_path_real_path_and_base_from_frame(frame)[0] (can - be used to speed this function a bit if it's already available to the caller, but - in general it's not needed). - ''' - try: - if absolute_filename is None: - try: - # Make fast path faster! 
- abs_real_path_and_basename = NORM_PATHS_AND_BASE_CONTAINER[frame.f_code.co_filename] - except: - abs_real_path_and_basename = get_abs_path_real_path_and_base_from_frame(frame) - - absolute_filename = abs_real_path_and_basename[0] - - cache_key = (frame.f_code.co_firstlineno, absolute_filename, frame.f_code) - - return self._in_project_scope_cache[cache_key] - except KeyError: - cache = self._in_project_scope_cache - try: - abs_real_path_and_basename # If we've gotten it previously, use it again. - except NameError: - abs_real_path_and_basename = get_abs_path_real_path_and_base_from_frame(frame) - - # pydevd files are never considered to be in the project scope. - file_type = self.get_file_type(frame, abs_real_path_and_basename) - if file_type == self.PYDEV_FILE: - cache[cache_key] = False - - elif absolute_filename == '': - # Special handling for '' - if file_type == self.LIB_FILE: - cache[cache_key] = False - else: - cache[cache_key] = True - - elif self.source_mapping.has_mapping_entry(absolute_filename): - cache[cache_key] = True - - else: - cache[cache_key] = self._files_filtering.in_project_roots(absolute_filename) - - return cache[cache_key] - - def in_project_roots_filename_uncached(self, absolute_filename): - return self._files_filtering.in_project_roots(absolute_filename) - - def _clear_filters_caches(self): - self._in_project_scope_cache.clear() - self._exclude_by_filter_cache.clear() - self._apply_filter_cache.clear() - self._exclude_filters_enabled = self._files_filtering.use_exclude_filters() - self._is_libraries_filter_enabled = self._files_filtering.use_libraries_filter() - self.is_files_filter_enabled = self._exclude_filters_enabled or self._is_libraries_filter_enabled - - def clear_dont_trace_start_end_patterns_caches(self): - # When start/end patterns are changed we must clear all caches which would be - # affected by a change in get_file_type() and reset the tracing function - # as places which were traced may no longer need to be traced and vice-versa. - self.on_breakpoints_changed() - _CACHE_FILE_TYPE.clear() - self._clear_filters_caches() - self._clear_skip_caches() - - def _exclude_by_filter(self, frame, absolute_filename): - ''' - :return: True if it should be excluded, False if it should be included and None - if no rule matched the given file. - - :note: it'll be normalized as needed inside of this method. - ''' - cache_key = (absolute_filename, frame.f_code.co_name, frame.f_code.co_firstlineno) - try: - return self._exclude_by_filter_cache[cache_key] - except KeyError: - cache = self._exclude_by_filter_cache - - # pydevd files are always filtered out - if self.get_file_type(frame) == self.PYDEV_FILE: - cache[cache_key] = True - else: - module_name = None - if self._files_filtering.require_module: - module_name = frame.f_globals.get('__name__', '') - cache[cache_key] = self._files_filtering.exclude_by_filter(absolute_filename, module_name) - - return cache[cache_key] - - def apply_files_filter(self, frame, original_filename, force_check_project_scope): - ''' - Should only be called if `self.is_files_filter_enabled == True` or `force_check_project_scope == True`. - - Note that it covers both the filter by specific paths includes/excludes as well - as the check which filters out libraries if not in the project scope. - - :param original_filename: - Note can either be the original filename or the absolute version of that filename. - - :param force_check_project_scope: - Check that the file is in the project scope even if the global setting - is off. 
- - :return bool: - True if it should be excluded when stepping and False if it should be - included. - ''' - cache_key = (frame.f_code.co_firstlineno, original_filename, force_check_project_scope, frame.f_code) - try: - return self._apply_filter_cache[cache_key] - except KeyError: - if self.plugin is not None and (self.has_plugin_line_breaks or self.has_plugin_exception_breaks): - # If it's explicitly needed by some plugin, we can't skip it. - if not self.plugin.can_skip(self, frame): - pydev_log.debug_once('File traced (included by plugins): %s', original_filename) - self._apply_filter_cache[cache_key] = False - return False - - if self._exclude_filters_enabled: - absolute_filename = pydevd_file_utils.absolute_path(original_filename) - exclude_by_filter = self._exclude_by_filter(frame, absolute_filename) - if exclude_by_filter is not None: - if exclude_by_filter: - # ignore files matching stepping filters - pydev_log.debug_once('File not traced (excluded by filters): %s', original_filename) - - self._apply_filter_cache[cache_key] = True - return True - else: - pydev_log.debug_once('File traced (explicitly included by filters): %s', original_filename) - - self._apply_filter_cache[cache_key] = False - return False - - if (self._is_libraries_filter_enabled or force_check_project_scope) and not self.in_project_scope(frame): - # ignore library files while stepping - self._apply_filter_cache[cache_key] = True - if force_check_project_scope: - pydev_log.debug_once('File not traced (not in project): %s', original_filename) - else: - pydev_log.debug_once('File not traced (not in project - force_check_project_scope): %s', original_filename) - - return True - - if force_check_project_scope: - pydev_log.debug_once('File traced: %s (force_check_project_scope)', original_filename) - else: - pydev_log.debug_once('File traced: %s', original_filename) - self._apply_filter_cache[cache_key] = False - return False - - def exclude_exception_by_filter(self, exception_breakpoint, trace): - if not exception_breakpoint.ignore_libraries and not self._exclude_filters_enabled: - return False - - if trace is None: - return True - - ignore_libraries = exception_breakpoint.ignore_libraries - exclude_filters_enabled = self._exclude_filters_enabled - - if (ignore_libraries and not self.in_project_scope(trace.tb_frame)) \ - or (exclude_filters_enabled and self._exclude_by_filter( - trace.tb_frame, - pydevd_file_utils.absolute_path(trace.tb_frame.f_code.co_filename))): - return True - - return False - - def set_project_roots(self, project_roots): - self._files_filtering.set_project_roots(project_roots) - self._clear_skip_caches() - self._clear_filters_caches() - - def set_exclude_filters(self, exclude_filters): - self._files_filtering.set_exclude_filters(exclude_filters) - self._clear_skip_caches() - self._clear_filters_caches() - - def set_use_libraries_filter(self, use_libraries_filter): - self._files_filtering.set_use_libraries_filter(use_libraries_filter) - self._clear_skip_caches() - self._clear_filters_caches() - - def get_use_libraries_filter(self): - return self._files_filtering.use_libraries_filter() - - def get_require_module_for_filters(self): - return self._files_filtering.require_module - - def has_user_threads_alive(self): - for t in pydevd_utils.get_non_pydevd_threads(): - if isinstance(t, PyDBDaemonThread): - pydev_log.error_once( - 'Error in debugger: Found PyDBDaemonThread not marked with is_pydev_daemon_thread=True.\n') - - if is_thread_alive(t): - if not t.daemon or hasattr(t, "__pydevd_main_thread"): 
- return True - - return False - - def initialize_network(self, sock, terminate_on_socket_close=True): - assert sock is not None - try: - sock.settimeout(None) # infinite, no timeouts from now on - jython does not have it - except: - pass - curr_reader = getattr(self, 'reader', None) - curr_writer = getattr(self, 'writer', None) - if curr_reader: - curr_reader.do_kill_pydev_thread() - if curr_writer: - curr_writer.do_kill_pydev_thread() - - self.writer = WriterThread(sock, self, terminate_on_socket_close=terminate_on_socket_close) - self.reader = ReaderThread( - sock, - self, - PyDevJsonCommandProcessor=PyDevJsonCommandProcessor, - process_net_command=process_net_command, - terminate_on_socket_close=terminate_on_socket_close - ) - self.writer.start() - self.reader.start() - - time.sleep(0.1) # give threads time to start - - def connect(self, host, port): - if host: - s = start_client(host, port) - else: - s = start_server(port) - - self.initialize_network(s) - - def create_wait_for_connection_thread(self): - if self._waiting_for_connection_thread is not None: - raise AssertionError('There is already another thread waiting for a connection.') - - self._server_socket_ready_event.clear() - self._waiting_for_connection_thread = self._WaitForConnectionThread(self) - self._waiting_for_connection_thread.start() - - def set_server_socket_ready(self): - self._server_socket_ready_event.set() - - def wait_for_server_socket_ready(self): - self._server_socket_ready_event.wait() - - @property - def dap_messages_listeners(self): - return self._dap_messages_listeners - - def add_dap_messages_listener(self, listener): - self._dap_messages_listeners.append(listener) - - class _WaitForConnectionThread(PyDBDaemonThread): - - def __init__(self, py_db): - PyDBDaemonThread.__init__(self, py_db) - self._server_socket = None - - def run(self): - host = SetupHolder.setup['client'] - port = SetupHolder.setup['port'] - - self._server_socket = create_server_socket(host=host, port=port) - self.py_db._server_socket_name = self._server_socket.getsockname() - self.py_db.set_server_socket_ready() - - while not self._kill_received: - try: - s = self._server_socket - if s is None: - return - - s.listen(1) - new_socket, _addr = s.accept() - if self._kill_received: - pydev_log.info("Connection (from wait_for_attach) accepted but ignored as kill was already received.") - return - - pydev_log.info("Connection (from wait_for_attach) accepted.") - reader = getattr(self.py_db, 'reader', None) - if reader is not None: - # This is needed if a new connection is done without the client properly - # sending a disconnect for the previous connection. - api = PyDevdAPI() - api.request_disconnect(self.py_db, resume_threads=False) - - self.py_db.initialize_network(new_socket, terminate_on_socket_close=False) - - except: - if DebugInfoHolder.DEBUG_TRACE_LEVEL > 0: - pydev_log.exception() - pydev_log.debug("Exiting _WaitForConnectionThread: %s\n", port) - - def do_kill_pydev_thread(self): - PyDBDaemonThread.do_kill_pydev_thread(self) - s = self._server_socket - if s is not None: - try: - s.close() - except: - pass - self._server_socket = None - - def get_internal_queue(self, thread_id): - """ returns internal command queue for a given thread. 
- if new queue is created, notify the RDB about it """ - if thread_id.startswith('__frame__'): - thread_id = thread_id[thread_id.rfind('|') + 1:] - return self._cmd_queue[thread_id] - - def post_method_as_internal_command(self, thread_id, method, *args, **kwargs): - if thread_id == '*': - internal_cmd = InternalThreadCommandForAnyThread(thread_id, method, *args, **kwargs) - else: - internal_cmd = InternalThreadCommand(thread_id, method, *args, **kwargs) - self.post_internal_command(internal_cmd, thread_id) - if thread_id == '*': - # Notify so that the command is handled as soon as possible. - self._py_db_command_thread_event.set() - - def post_internal_command(self, int_cmd, thread_id): - """ if thread_id is *, post to the '*' queue""" - queue = self.get_internal_queue(thread_id) - queue.put(int_cmd) - - def enable_output_redirection(self, redirect_stdout, redirect_stderr): - global _global_redirect_stdout_to_server - global _global_redirect_stderr_to_server - - _global_redirect_stdout_to_server = redirect_stdout - _global_redirect_stderr_to_server = redirect_stderr - self.redirect_output = redirect_stdout or redirect_stderr - if _global_redirect_stdout_to_server: - _init_stdout_redirect() - if _global_redirect_stderr_to_server: - _init_stderr_redirect() - - def check_output_redirect(self): - global _global_redirect_stdout_to_server - global _global_redirect_stderr_to_server - - if _global_redirect_stdout_to_server: - _init_stdout_redirect() - - if _global_redirect_stderr_to_server: - _init_stderr_redirect() - - def init_matplotlib_in_debug_console(self): - # import hook and patches for matplotlib support in debug console - from _pydev_bundle.pydev_import_hook import import_hook_manager - if is_current_thread_main_thread(): - for module in list(self.mpl_modules_for_patching): - import_hook_manager.add_module_name(module, self.mpl_modules_for_patching.pop(module)) - - def init_gui_support(self): - if self._installed_gui_support: - return - self._installed_gui_support = True - - # enable_gui and enable_gui_function in activate_matplotlib should be called in main thread. Unlike integrated console, - # in the debug console we have no interpreter instance with exec_queue, but we run this code in the main - # thread and can call it directly. - class _ReturnGuiLoopControlHelper: - _return_control_osc = False - - def return_control(): - # Some of the input hooks (e.g. Qt4Agg) check return control without doing - # a single operation, so we don't return True on every - # call when the debug hook is in place to allow the GUI to run - _ReturnGuiLoopControlHelper._return_control_osc = not _ReturnGuiLoopControlHelper._return_control_osc - return _ReturnGuiLoopControlHelper._return_control_osc - - from pydev_ipython.inputhook import set_return_control_callback, enable_gui - - set_return_control_callback(return_control) - - if self._gui_event_loop == 'matplotlib': - # prepare debugger for matplotlib integration with GUI event loop - from pydev_ipython.matplotlibtools import activate_matplotlib, activate_pylab, activate_pyplot, do_enable_gui - - self.mpl_modules_for_patching = {"matplotlib": lambda: activate_matplotlib(do_enable_gui), - "matplotlib.pyplot": activate_pyplot, - "pylab": activate_pylab } - else: - self.activate_gui_function = enable_gui - - def _activate_gui_if_needed(self): - if self.gui_in_use: - return - - if len(self.mpl_modules_for_patching) > 0: - if is_current_thread_main_thread(): # Note that we call only in the main thread. 
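- # Only patch the matplotlib-related modules which the user code has actually imported
- # (i.e.: those present in sys.modules); each activation function runs once and is then
- # removed from the pending dict.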
- for module in list(self.mpl_modules_for_patching): - if module in sys.modules: - activate_function = self.mpl_modules_for_patching.pop(module, None) - if activate_function is not None: - activate_function() - self.gui_in_use = True - - if self.activate_gui_function: - if is_current_thread_main_thread(): # Only call enable_gui in the main thread. - try: - # First try to activate builtin GUI event loops. - self.activate_gui_function(self._gui_event_loop) - self.activate_gui_function = None - self.gui_in_use = True - except ValueError: - # The user requested a custom GUI event loop, try to import it. - from pydev_ipython.inputhook import set_inputhook - try: - inputhook_function = import_attr_from_module(self._gui_event_loop) - set_inputhook(inputhook_function) - self.gui_in_use = True - except Exception as e: - pydev_log.debug("Cannot activate custom GUI event loop {}: {}".format(self._gui_event_loop, e)) - finally: - self.activate_gui_function = None - - def _call_input_hook(self): - try: - from pydev_ipython.inputhook import get_inputhook - inputhook = get_inputhook() - if inputhook: - inputhook() - except: - pass - - def notify_skipped_step_in_because_of_filters(self, frame): - self.writer.add_command(self.cmd_factory.make_skipped_step_in_because_of_filters(self, frame)) - - def notify_thread_created(self, thread_id, thread, use_lock=True): - if self.writer is None: - # Protect about threads being created before the communication structure is in place - # (note that they will appear later on anyways as pydevd does reconcile live/dead threads - # when processing internal commands, albeit it may take longer and in general this should - # not be usual as it's expected that the debugger is live before other threads are created). - return - - with self._lock_running_thread_ids if use_lock else NULL: - if not self._enable_thread_notifications: - return - - if thread_id in self._running_thread_ids: - return - - additional_info = set_additional_thread_info(thread) - if additional_info.pydev_notify_kill: - # After we notify it should be killed, make sure we don't notify it's alive (on a racing condition - # this could happen as we may notify before the thread is stopped internally). - return - - self._running_thread_ids[thread_id] = thread - - self.writer.add_command(self.cmd_factory.make_thread_created_message(thread)) - - def notify_thread_not_alive(self, thread_id, use_lock=True): - """ if thread is not alive, cancel trace_dispatch processing """ - if self.writer is None: - return - - with self._lock_running_thread_ids if use_lock else NULL: - if not self._enable_thread_notifications: - return - - thread = self._running_thread_ids.pop(thread_id, None) - if thread is None: - return - - additional_info = set_additional_thread_info(thread) - was_notified = additional_info.pydev_notify_kill - if not was_notified: - additional_info.pydev_notify_kill = True - - self.writer.add_command(self.cmd_factory.make_thread_killed_message(thread_id)) - - def set_enable_thread_notifications(self, enable): - with self._lock_running_thread_ids: - if self._enable_thread_notifications != enable: - self._enable_thread_notifications = enable - - if enable: - # As it was previously disabled, we have to notify about existing threads again - # (so, clear the cache related to that). - self._running_thread_ids = {} - - def process_internal_commands(self): - ''' - This function processes internal commands. 
- ''' - # If this method is being called before the debugger is ready to run we should not notify - # about threads and should only process commands sent to all threads. - ready_to_run = self.ready_to_run - - dispose = False - with self._main_lock: - program_threads_alive = {} - if ready_to_run: - self.check_output_redirect() - - all_threads = threadingEnumerate() - program_threads_dead = [] - with self._lock_running_thread_ids: - reset_cache = not self._running_thread_ids - - for t in all_threads: - if getattr(t, 'is_pydev_daemon_thread', False): - pass # I.e.: skip the DummyThreads created from pydev daemon threads - elif isinstance(t, PyDBDaemonThread): - pydev_log.error_once('Error in debugger: Found PyDBDaemonThread not marked with is_pydev_daemon_thread=True.') - - elif is_thread_alive(t): - if reset_cache: - # Fix multiprocessing debug with breakpoints in both main and child processes - # (https://youtrack.jetbrains.com/issue/PY-17092) When the new process is created, the main - # thread in the new process already has the attribute 'pydevd_id', so the new thread doesn't - # get new id with its process number and the debugger loses access to both threads. - # Therefore we should update thread_id for every main thread in the new process. - clear_cached_thread_id(t) - - thread_id = get_thread_id(t) - program_threads_alive[thread_id] = t - - self.notify_thread_created(thread_id, t, use_lock=False) - - # Compute and notify about threads which are no longer alive. - thread_ids = list(self._running_thread_ids.keys()) - for thread_id in thread_ids: - if thread_id not in program_threads_alive: - program_threads_dead.append(thread_id) - - for thread_id in program_threads_dead: - self.notify_thread_not_alive(thread_id, use_lock=False) - - cmds_to_execute = [] - - # Without self._lock_running_thread_ids - if len(program_threads_alive) == 0 and ready_to_run: - dispose = True - else: - # Actually process the commands now (make sure we don't have a lock for _lock_running_thread_ids - # acquired at this point as it could lead to a deadlock if some command evaluated tried to - # create a thread and wait for it -- which would try to notify about it getting that lock). - curr_thread_id = get_current_thread_id(threadingCurrentThread()) - if ready_to_run: - process_thread_ids = (curr_thread_id, '*') - else: - process_thread_ids = ('*',) - - for thread_id in process_thread_ids: - queue = self.get_internal_queue(thread_id) - - # some commands must be processed by the thread itself... if that's the case, - # we will re-add the commands to the queue after executing. - cmds_to_add_back = [] - - try: - while True: - int_cmd = queue.get(False) - - if not self.mpl_hooks_in_debug_console and isinstance(int_cmd, InternalConsoleExec) and not self.gui_in_use: - # add import hooks for matplotlib patches if only debug console was started - try: - self.init_matplotlib_in_debug_console() - self.gui_in_use = True - except: - pydev_log.debug("Matplotlib support in debug console failed", traceback.format_exc()) - self.mpl_hooks_in_debug_console = True - - if int_cmd.can_be_executed_by(curr_thread_id): - cmds_to_execute.append(int_cmd) - else: - pydev_log.verbose("NOT processing internal command: %s ", int_cmd) - cmds_to_add_back.append(int_cmd) - - except _queue.Empty: # @UndefinedVariable - # this is how we exit - for int_cmd in cmds_to_add_back: - queue.put(int_cmd) - - if dispose: - # Note: must be called without the main lock to avoid deadlocks. 
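- # At this point no user (non-pydevd) threads are alive anymore and the program was
- # already started, so the debug session is over and the debugger tears itself down.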
- self.dispose_and_kill_all_pydevd_threads() - else: - # Actually execute the commands without the main lock! - for int_cmd in cmds_to_execute: - pydev_log.verbose("processing internal command: %s", int_cmd) - try: - int_cmd.do_it(self) - except: - pydev_log.exception('Error processing internal command.') - - def consolidate_breakpoints(self, canonical_normalized_filename, id_to_breakpoint, file_to_line_to_breakpoints): - break_dict = {} - for _breakpoint_id, pybreakpoint in id_to_breakpoint.items(): - break_dict[pybreakpoint.line] = pybreakpoint - - file_to_line_to_breakpoints[canonical_normalized_filename] = break_dict - self._clear_skip_caches() - - def _clear_skip_caches(self): - global_cache_skips.clear() - global_cache_frame_skips.clear() - - def add_break_on_exception( - self, - exception, - condition, - expression, - notify_on_handled_exceptions, - notify_on_unhandled_exceptions, - notify_on_user_unhandled_exceptions, - notify_on_first_raise_only, - ignore_libraries=False - ): - try: - eb = ExceptionBreakpoint( - exception, - condition, - expression, - notify_on_handled_exceptions, - notify_on_unhandled_exceptions, - notify_on_user_unhandled_exceptions, - notify_on_first_raise_only, - ignore_libraries - ) - except ImportError: - pydev_log.critical("Error unable to add break on exception for: %s (exception could not be imported).", exception) - return None - - if eb.notify_on_unhandled_exceptions: - cp = self.break_on_uncaught_exceptions.copy() - cp[exception] = eb - pydev_log.info("Exceptions to hook on terminate: %s.", cp) - self.break_on_uncaught_exceptions = cp - - if eb.notify_on_handled_exceptions: - cp = self.break_on_caught_exceptions.copy() - cp[exception] = eb - pydev_log.info("Exceptions to hook always: %s.", cp) - self.break_on_caught_exceptions = cp - - if eb.notify_on_user_unhandled_exceptions: - cp = self.break_on_user_uncaught_exceptions.copy() - cp[exception] = eb - pydev_log.info("Exceptions to hook on user uncaught code: %s.", cp) - self.break_on_user_uncaught_exceptions = cp - - return eb - - def set_suspend(self, thread, stop_reason, suspend_other_threads=False, is_pause=False, original_step_cmd=-1): - ''' - :param thread: - The thread which should be suspended. - - :param stop_reason: - Reason why the thread was suspended. - - :param suspend_other_threads: - Whether to force other threads to be suspended (i.e.: when hitting a breakpoint - with a suspend all threads policy). - - :param is_pause: - If this is a pause to suspend all threads, any thread can be considered as the 'main' - thread paused. - - :param original_step_cmd: - If given we may change the stop reason to this. - ''' - self._threads_suspended_single_notification.increment_suspend_time() - if is_pause: - self._threads_suspended_single_notification.on_pause() - - info = mark_thread_suspended(thread, stop_reason, original_step_cmd=original_step_cmd) - - if is_pause: - # Must set tracing after setting the state to suspend. - frame = info.get_topmost_frame(thread) - if frame is not None: - try: - self.set_trace_for_frame_and_parents(frame) - finally: - frame = None - - # If conditional breakpoint raises any exception during evaluation send the details to the client. 
- if stop_reason == CMD_SET_BREAK and info.conditional_breakpoint_exception is not None: - conditional_breakpoint_exception_tuple = info.conditional_breakpoint_exception - info.conditional_breakpoint_exception = None - self._send_breakpoint_condition_exception(thread, conditional_breakpoint_exception_tuple) - - if not suspend_other_threads and self.multi_threads_single_notification: - # In the mode which gives a single notification when all threads are - # stopped, stop all threads whenever a set_suspend is issued. - suspend_other_threads = True - - if suspend_other_threads: - # Suspend all except the current one (which we're currently suspending already). - suspend_all_threads(self, except_thread=thread) - - def _send_breakpoint_condition_exception(self, thread, conditional_breakpoint_exception_tuple): - """If conditional breakpoint raises an exception during evaluation - send exception details to java - """ - thread_id = get_thread_id(thread) - # conditional_breakpoint_exception_tuple - should contain 2 values (exception_type, stacktrace) - if conditional_breakpoint_exception_tuple and len(conditional_breakpoint_exception_tuple) == 2: - exc_type, stacktrace = conditional_breakpoint_exception_tuple - int_cmd = InternalGetBreakpointException(thread_id, exc_type, stacktrace) - self.post_internal_command(int_cmd, thread_id) - - def send_caught_exception_stack(self, thread, arg, curr_frame_id): - """Sends details on the exception which was caught (and where we stopped) to the java side. - - arg is: exception type, description, traceback object - """ - thread_id = get_thread_id(thread) - int_cmd = InternalSendCurrExceptionTrace(thread_id, arg, curr_frame_id) - self.post_internal_command(int_cmd, thread_id) - - def send_caught_exception_stack_proceeded(self, thread): - """Sends that some thread was resumed and is no longer showing an exception trace. - """ - thread_id = get_thread_id(thread) - int_cmd = InternalSendCurrExceptionTraceProceeded(thread_id) - self.post_internal_command(int_cmd, thread_id) - self.process_internal_commands() - - def send_process_created_message(self): - """Sends a message that a new process has been created. - """ - if self.writer is None or self.cmd_factory is None: - return - cmd = self.cmd_factory.make_process_created_message() - self.writer.add_command(cmd) - - def send_process_about_to_be_replaced(self): - """Sends a message that a new process has been created. - """ - if self.writer is None or self.cmd_factory is None: - return - cmd = self.cmd_factory.make_process_about_to_be_replaced_message() - if cmd is NULL_NET_COMMAND: - return - - sent = [False] - - def after_sent(*args, **kwargs): - sent[0] = True - - cmd.call_after_send(after_sent) - self.writer.add_command(cmd) - - timeout = 5 # Wait up to 5 seconds - initial_time = time.time() - while not sent[0]: - time.sleep(.05) - - if (time.time() - initial_time) > timeout: - pydev_log.critical('pydevd: Sending message related to process being replaced timed-out after %s seconds', timeout) - break - - def set_next_statement(self, frame, event, func_name, next_line): - stop = False - response_msg = "" - old_line = frame.f_lineno - if event == 'line' or event == 'exception': - # If we're already in the correct context, we have to stop it now, because we can act only on - # line events -- if a return was the next statement it wouldn't work (so, we have this code - # repeated at pydevd_frame). 
- - curr_func_name = frame.f_code.co_name - - # global context is set with an empty name - if curr_func_name in ('?', ''): - curr_func_name = '' - - if func_name == '*' or curr_func_name == func_name: - line = next_line - frame.f_trace = self.trace_dispatch - frame.f_lineno = line - stop = True - else: - response_msg = "jump is available only within the bottom frame" - return stop, old_line, response_msg - - def cancel_async_evaluation(self, thread_id, frame_id): - with self._main_lock: - try: - all_threads = threadingEnumerate() - for t in all_threads: - if getattr(t, 'is_pydev_daemon_thread', False) and hasattr(t, 'cancel_event') and t.thread_id == thread_id and \ - t.frame_id == frame_id: - t.cancel_event.set() - except: - pydev_log.exception() - - def find_frame(self, thread_id, frame_id): - """ returns a frame on the thread that has a given frame_id """ - return self.suspended_frames_manager.find_frame(thread_id, frame_id) - - def do_wait_suspend(self, thread, frame, event, arg, exception_type=None): # @UnusedVariable - """ busy waits until the thread state changes to RUN - it expects thread's state as attributes of the thread. - Upon running, processes any outstanding Stepping commands. - - :param exception_type: - If pausing due to an exception, its type. - """ - if USE_CUSTOM_SYS_CURRENT_FRAMES_MAP: - constructed_tid_to_last_frame[thread.ident] = sys._getframe() - self.process_internal_commands() - - thread_id = get_current_thread_id(thread) - - # print('do_wait_suspend %s %s %s %s %s %s (%s)' % (frame.f_lineno, frame.f_code.co_name, frame.f_code.co_filename, event, arg, constant_to_str(thread.additional_info.pydev_step_cmd), constant_to_str(thread.additional_info.pydev_original_step_cmd))) - # print('--- stack ---') - # print(traceback.print_stack(file=sys.stdout)) - # print('--- end stack ---') - - # Send the suspend message - message = thread.additional_info.pydev_message - suspend_type = thread.additional_info.trace_suspend_type - thread.additional_info.trace_suspend_type = 'trace' # Reset to trace mode for next call. 
- stop_reason = thread.stop_reason - - frames_list = None - - if arg is not None and event == 'exception': - # arg must be the exception info (tuple(exc_type, exc, traceback)) - exc_type, exc_desc, trace_obj = arg - if trace_obj is not None: - frames_list = pydevd_frame_utils.create_frames_list_from_traceback(trace_obj, frame, exc_type, exc_desc, exception_type=exception_type) - - if frames_list is None: - frames_list = pydevd_frame_utils.create_frames_list_from_frame(frame) - - if DebugInfoHolder.DEBUG_TRACE_LEVEL > 2: - pydev_log.debug( - 'PyDB.do_wait_suspend\nname: %s (line: %s)\n file: %s\n event: %s\n arg: %s\n step: %s (original step: %s)\n thread: %s, thread id: %s, id(thread): %s', - frame.f_code.co_name, - frame.f_lineno, - frame.f_code.co_filename, - event, - arg, - constant_to_str(thread.additional_info.pydev_step_cmd), - constant_to_str(thread.additional_info.pydev_original_step_cmd), - thread, - thread_id, - id(thread), - ) - for f in frames_list: - pydev_log.debug(' Stack: %s, %s, %s', f.f_code.co_filename, f.f_code.co_name, f.f_lineno) - - with self.suspended_frames_manager.track_frames(self) as frames_tracker: - frames_tracker.track(thread_id, frames_list) - cmd = frames_tracker.create_thread_suspend_command(thread_id, stop_reason, message, suspend_type) - self.writer.add_command(cmd) - - with CustomFramesContainer.custom_frames_lock: # @UndefinedVariable - from_this_thread = [] - - for frame_custom_thread_id, custom_frame in CustomFramesContainer.custom_frames.items(): - if custom_frame.thread_id == thread.ident: - frames_tracker.track(thread_id, pydevd_frame_utils.create_frames_list_from_frame(custom_frame.frame), frame_custom_thread_id=frame_custom_thread_id) - # print('Frame created as thread: %s' % (frame_custom_thread_id,)) - - self.writer.add_command(self.cmd_factory.make_custom_frame_created_message( - frame_custom_thread_id, custom_frame.name)) - - self.writer.add_command( - frames_tracker.create_thread_suspend_command(frame_custom_thread_id, CMD_THREAD_SUSPEND, "", suspend_type)) - - from_this_thread.append(frame_custom_thread_id) - - with self._threads_suspended_single_notification.notify_thread_suspended(thread_id, thread, stop_reason): - keep_suspended = self._do_wait_suspend(thread, frame, event, arg, suspend_type, from_this_thread, frames_tracker) - - frames_list = None - - if keep_suspended: - # This means that we should pause again after a set next statement. 
- self._threads_suspended_single_notification.increment_suspend_time() - self.do_wait_suspend(thread, frame, event, arg, exception_type) - if DebugInfoHolder.DEBUG_TRACE_LEVEL > 2: - pydev_log.debug('Leaving PyDB.do_wait_suspend: %s (%s) %s', thread, thread_id, id(thread)) - - def _do_wait_suspend(self, thread, frame, event, arg, suspend_type, from_this_thread, frames_tracker): - info = thread.additional_info - info.step_in_initial_location = None - keep_suspended = False - - with self._main_lock: # Use lock to check if suspended state changed - activate_gui = info.pydev_state == STATE_SUSPEND and not self.pydb_disposed - - in_main_thread = is_current_thread_main_thread() - if activate_gui and in_main_thread: - # before every stop check if matplotlib modules were imported inside script code - # or some GUI event loop needs to be activated - self._activate_gui_if_needed() - - while True: - with self._main_lock: # Use lock to check if suspended state changed - if info.pydev_state != STATE_SUSPEND or (self.pydb_disposed and not self.terminate_requested): - # Note: we can't exit here if terminate was requested while a breakpoint was hit. - break - - if in_main_thread and self.gui_in_use: - # call input hooks if only GUI is in use - self._call_input_hook() - - self.process_internal_commands() - time.sleep(0.01) - - self.cancel_async_evaluation(get_current_thread_id(thread), str(id(frame))) - - # process any stepping instructions - if info.pydev_step_cmd in (CMD_STEP_INTO, CMD_STEP_INTO_MY_CODE): - info.step_in_initial_location = (frame, frame.f_lineno) - if frame.f_code.co_flags & 0x80: # CO_COROUTINE = 0x80 - # When in a coroutine we switch to CMD_STEP_INTO_COROUTINE. - info.pydev_step_cmd = CMD_STEP_INTO_COROUTINE - info.pydev_step_stop = frame - self.set_trace_for_frame_and_parents(frame) - else: - info.pydev_step_stop = None - self.set_trace_for_frame_and_parents(frame) - - elif info.pydev_step_cmd in (CMD_STEP_OVER, CMD_STEP_OVER_MY_CODE, CMD_SMART_STEP_INTO): - info.pydev_step_stop = frame - self.set_trace_for_frame_and_parents(frame) - - elif info.pydev_step_cmd == CMD_RUN_TO_LINE or info.pydev_step_cmd == CMD_SET_NEXT_STATEMENT: - info.pydev_step_stop = None - self.set_trace_for_frame_and_parents(frame) - stop = False - response_msg = "" - try: - stop, _old_line, response_msg = self.set_next_statement(frame, event, info.pydev_func_name, info.pydev_next_line) - except ValueError as e: - response_msg = "%s" % e - finally: - seq = info.pydev_message - cmd = self.cmd_factory.make_set_next_stmnt_status_message(seq, stop, response_msg) - self.writer.add_command(cmd) - info.pydev_message = '' - - if stop: - # Uninstall the current frames tracker before running it. - frames_tracker.untrack_all() - cmd = self.cmd_factory.make_thread_run_message(get_current_thread_id(thread), info.pydev_step_cmd) - self.writer.add_command(cmd) - info.pydev_state = STATE_SUSPEND - thread.stop_reason = CMD_SET_NEXT_STATEMENT - keep_suspended = True - - else: - # Set next did not work... - info.pydev_original_step_cmd = -1 - info.pydev_step_cmd = -1 - info.pydev_state = STATE_SUSPEND - thread.stop_reason = CMD_THREAD_SUSPEND - # return to the suspend state and wait for other command (without sending any - # additional notification to the client). 
- return self._do_wait_suspend(thread, frame, event, arg, suspend_type, from_this_thread, frames_tracker) - - elif info.pydev_step_cmd in (CMD_STEP_RETURN, CMD_STEP_RETURN_MY_CODE): - back_frame = frame.f_back - force_check_project_scope = info.pydev_step_cmd == CMD_STEP_RETURN_MY_CODE - - if force_check_project_scope or self.is_files_filter_enabled: - while back_frame is not None: - if self.apply_files_filter(back_frame, back_frame.f_code.co_filename, force_check_project_scope): - frame = back_frame - back_frame = back_frame.f_back - else: - break - - if back_frame is not None: - # steps back to the same frame (in a return call it will stop in the 'back frame' for the user) - info.pydev_step_stop = frame - self.set_trace_for_frame_and_parents(frame) - else: - # No back frame?!? -- this happens in jython when we have some frame created from an awt event - # (the previous frame would be the awt event, but this doesn't make part of 'jython', only 'java') - # so, if we're doing a step return in this situation, it's the same as just making it run - info.pydev_step_stop = None - info.pydev_original_step_cmd = -1 - info.pydev_step_cmd = -1 - info.pydev_state = STATE_RUN - - if PYDEVD_IPYTHON_COMPATIBLE_DEBUGGING: - info.pydev_use_scoped_step_frame = False - if info.pydev_step_cmd in ( - CMD_STEP_OVER, CMD_STEP_OVER_MY_CODE, - CMD_STEP_INTO, CMD_STEP_INTO_MY_CODE - ): - # i.e.: We're stepping: check if the stepping should be scoped (i.e.: in ipython - # each line is executed separately in a new frame, in which case we need to consider - # the next line as if it was still in the same frame). - f = frame.f_back - if f and f.f_code.co_name == PYDEVD_IPYTHON_CONTEXT[1]: - f = f.f_back - if f and f.f_code.co_name == PYDEVD_IPYTHON_CONTEXT[2]: - info.pydev_use_scoped_step_frame = True - pydev_log.info('Using (ipython) scoped stepping.') - del f - - del frame - cmd = self.cmd_factory.make_thread_run_message(get_current_thread_id(thread), info.pydev_step_cmd) - self.writer.add_command(cmd) - - with CustomFramesContainer.custom_frames_lock: - # The ones that remained on last_running must now be removed. 
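- # Every custom frame that was reported for this thread while it was suspended is now
- # reported as a killed thread so that the client discards it.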
- for frame_id in from_this_thread: - # print('Removing created frame: %s' % (frame_id,)) - self.writer.add_command(self.cmd_factory.make_thread_killed_message(frame_id)) - - return keep_suspended - - def do_stop_on_unhandled_exception(self, thread, frame, frames_byid, arg): - pydev_log.debug("We are stopping in unhandled exception.") - try: - add_exception_to_frame(frame, arg) - self.send_caught_exception_stack(thread, arg, id(frame)) - try: - self.set_suspend(thread, CMD_ADD_EXCEPTION_BREAK) - self.do_wait_suspend(thread, frame, 'exception', arg, EXCEPTION_TYPE_UNHANDLED) - except: - self.send_caught_exception_stack_proceeded(thread) - except: - pydev_log.exception("We've got an error while stopping in unhandled exception: %s.", arg[0]) - finally: - remove_exception_from_frame(frame) - frame = None - - def set_trace_for_frame_and_parents(self, frame, **kwargs): - disable = kwargs.pop('disable', False) - assert not kwargs - - while frame is not None: - # Don't change the tracing on debugger-related files - file_type = self.get_file_type(frame) - - if file_type is None: - if disable: - pydev_log.debug('Disable tracing of frame: %s - %s', frame.f_code.co_filename, frame.f_code.co_name) - if frame.f_trace is not None and frame.f_trace is not NO_FTRACE: - frame.f_trace = NO_FTRACE - - elif frame.f_trace is not self.trace_dispatch: - pydev_log.debug('Set tracing of frame: %s - %s', frame.f_code.co_filename, frame.f_code.co_name) - frame.f_trace = self.trace_dispatch - else: - pydev_log.debug('SKIP set tracing of frame: %s - %s', frame.f_code.co_filename, frame.f_code.co_name) - - frame = frame.f_back - - del frame - - def _create_pydb_command_thread(self): - curr_pydb_command_thread = self.py_db_command_thread - if curr_pydb_command_thread is not None: - curr_pydb_command_thread.do_kill_pydev_thread() - - new_pydb_command_thread = self.py_db_command_thread = PyDBCommandThread(self) - new_pydb_command_thread.start() - - def _create_check_output_thread(self): - curr_output_checker_thread = self.check_alive_thread - if curr_output_checker_thread is not None: - curr_output_checker_thread.do_kill_pydev_thread() - - check_alive_thread = self.check_alive_thread = CheckAliveThread(self) - check_alive_thread.start() - - def start_auxiliary_daemon_threads(self): - self._create_pydb_command_thread() - self._create_check_output_thread() - - def __wait_for_threads_to_finish(self, timeout): - try: - with self._wait_for_threads_to_finish_called_lock: - wait_for_threads_to_finish_called = self._wait_for_threads_to_finish_called - self._wait_for_threads_to_finish_called = True - - if wait_for_threads_to_finish_called: - # Make sure that we wait for the previous call to be finished. - self._wait_for_threads_to_finish_called_event.wait(timeout=timeout) - else: - try: - - def get_pydb_daemon_threads_to_wait(): - pydb_daemon_threads = set(self.created_pydb_daemon_threads) - pydb_daemon_threads.discard(self.check_alive_thread) - pydb_daemon_threads.discard(threading.current_thread()) - return pydb_daemon_threads - - pydev_log.debug("PyDB.dispose_and_kill_all_pydevd_threads waiting for pydb daemon threads to finish") - started_at = time.time() - # Note: we wait for all except the check_alive_thread (which is not really a daemon - # thread and it can call this method itself). - while time.time() < started_at + timeout: - if len(get_pydb_daemon_threads_to_wait()) == 0: - break - time.sleep(1 / 10.) 
- else: - thread_names = [t.name for t in get_pydb_daemon_threads_to_wait()] - if thread_names: - pydev_log.debug("The following pydb threads may not have finished correctly: %s", - ', '.join(thread_names)) - finally: - self._wait_for_threads_to_finish_called_event.set() - except: - pydev_log.exception() - - def dispose_and_kill_all_pydevd_threads(self, wait=True, timeout=.5): - ''' - When this method is called we finish the debug session, terminate threads - and if this was registered as the global instance, unregister it -- afterwards - it should be possible to create a new instance and set as global to start - a new debug session. - - :param bool wait: - If True we'll wait for the threads to be actually finished before proceeding - (based on the available timeout). - Note that this must be thread-safe and if one thread is waiting the other thread should - also wait. - ''' - try: - back_frame = sys._getframe().f_back - pydev_log.debug( - 'PyDB.dispose_and_kill_all_pydevd_threads (called from: File "%s", line %s, in %s)', - back_frame.f_code.co_filename, back_frame.f_lineno, back_frame.f_code.co_name - ) - back_frame = None - with self._disposed_lock: - disposed = self.pydb_disposed - self.pydb_disposed = True - - if disposed: - if wait: - pydev_log.debug("PyDB.dispose_and_kill_all_pydevd_threads (already disposed - wait)") - self.__wait_for_threads_to_finish(timeout) - else: - pydev_log.debug("PyDB.dispose_and_kill_all_pydevd_threads (already disposed - no wait)") - return - - pydev_log.debug("PyDB.dispose_and_kill_all_pydevd_threads (first call)") - - # Wait until a time when there are no commands being processed to kill the threads. - started_at = time.time() - while time.time() < started_at + timeout: - with self._main_lock: - writer = self.writer - if writer is None or writer.empty(): - pydev_log.debug("PyDB.dispose_and_kill_all_pydevd_threads no commands being processed.") - break - else: - pydev_log.debug("PyDB.dispose_and_kill_all_pydevd_threads timed out waiting for writer to be empty.") - - pydb_daemon_threads = set(self.created_pydb_daemon_threads) - for t in pydb_daemon_threads: - if hasattr(t, 'do_kill_pydev_thread'): - pydev_log.debug("PyDB.dispose_and_kill_all_pydevd_threads killing thread: %s", t) - t.do_kill_pydev_thread() - - if wait: - self.__wait_for_threads_to_finish(timeout) - else: - pydev_log.debug("PyDB.dispose_and_kill_all_pydevd_threads: no wait") - - py_db = get_global_debugger() - if py_db is self: - set_global_debugger(None) - except: - pydev_log.debug("PyDB.dispose_and_kill_all_pydevd_threads: exception") - try: - if DebugInfoHolder.DEBUG_TRACE_LEVEL >= 3: - pydev_log.exception() - except: - pass - finally: - pydev_log.debug("PyDB.dispose_and_kill_all_pydevd_threads: finished") - - def prepare_to_run(self): - ''' Shared code to prepare debugging by installing traces and registering threads ''' - self.patch_threads() - self.start_auxiliary_daemon_threads() - - def patch_threads(self): - try: - # not available in jython! 
- threading.settrace(self.trace_dispatch) # for all future threads - except: - pass - - from _pydev_bundle.pydev_monkey import patch_thread_modules - patch_thread_modules() - - def run(self, file, globals=None, locals=None, is_module=False, set_trace=True): - module_name = None - entry_point_fn = '' - if is_module: - # When launching with `python -m `, python automatically adds - # an empty path to the PYTHONPATH which resolves files in the current - # directory, so, depending how pydevd itself is launched, we may need - # to manually add such an entry to properly resolve modules in the - # current directory (see: https://github.com/Microsoft/ptvsd/issues/1010). - if '' not in sys.path: - sys.path.insert(0, '') - file, _, entry_point_fn = file.partition(':') - module_name = file - filename = get_fullname(file) - if filename is None: - mod_dir = get_package_dir(module_name) - if mod_dir is None: - sys.stderr.write("No module named %s\n" % file) - return - else: - filename = get_fullname("%s.__main__" % module_name) - if filename is None: - sys.stderr.write("No module named %s\n" % file) - return - else: - file = filename - else: - file = filename - mod_dir = os.path.dirname(filename) - main_py = os.path.join(mod_dir, '__main__.py') - main_pyc = os.path.join(mod_dir, '__main__.pyc') - if filename.endswith('__init__.pyc'): - if os.path.exists(main_pyc): - filename = main_pyc - elif os.path.exists(main_py): - filename = main_py - elif filename.endswith('__init__.py'): - if os.path.exists(main_pyc) and not os.path.exists(main_py): - filename = main_pyc - elif os.path.exists(main_py): - filename = main_py - - sys.argv[0] = filename - - if os.path.isdir(file): - new_target = os.path.join(file, '__main__.py') - if os.path.isfile(new_target): - file = new_target - - m = None - if globals is None: - m = save_main_module(file, 'pydevd') - globals = m.__dict__ - try: - globals['__builtins__'] = __builtins__ - except NameError: - pass # Not there on Jython... - - if locals is None: - locals = globals - - # Predefined (writable) attributes: __name__ is the module's name; - # __doc__ is the module's documentation string, or None if unavailable; - # __file__ is the pathname of the file from which the module was loaded, - # if it was loaded from a file. The __file__ attribute is not present for - # C modules that are statically linked into the interpreter; for extension modules - # loaded dynamically from a shared library, it is the pathname of the shared library file. - - # I think this is an ugly hack, bug it works (seems to) for the bug that says that sys.path should be the same in - # debug and run. 
- if sys.path[0] != '' and m is not None and m.__file__.startswith(sys.path[0]): - # print >> sys.stderr, 'Deleting: ', sys.path[0] - del sys.path[0] - - if not is_module: - # now, the local directory has to be added to the pythonpath - # sys.path.insert(0, os.getcwd()) - # Changed: it's not the local directory, but the directory of the file launched - # The file being run must be in the pythonpath (even if it was not before) - sys.path.insert(0, os.path.split(os_path_abspath(file))[0]) - - if set_trace: - self.wait_for_ready_to_run() - - # call prepare_to_run when we already have all information about breakpoints - self.prepare_to_run() - - t = threadingCurrentThread() - thread_id = get_current_thread_id(t) - - if self.thread_analyser is not None: - wrap_threads() - self.thread_analyser.set_start_time(cur_time()) - send_concurrency_message("threading_event", 0, t.name, thread_id, "thread", "start", file, 1, None, parent=thread_id) - - if self.asyncio_analyser is not None: - # we don't have main thread in asyncio graph, so we should add a fake event - send_concurrency_message("asyncio_event", 0, "Task", "Task", "thread", "stop", file, 1, frame=None, parent=None) - - try: - if INTERACTIVE_MODE_AVAILABLE: - self.init_gui_support() - except: - pydev_log.exception("Matplotlib support in debugger failed") - - if hasattr(sys, 'exc_clear'): - # we should clean exception information in Python 2, before user's code execution - sys.exc_clear() - - # Notify that the main thread is created. - self.notify_thread_created(thread_id, t) - - # Note: important: set the tracing right before calling _exec. - if set_trace: - self.enable_tracing() - - return self._exec(is_module, entry_point_fn, module_name, file, globals, locals) - - def _exec(self, is_module, entry_point_fn, module_name, file, globals, locals): - ''' - This function should have frames tracked by unhandled exceptions (the `_exec` name is important). - ''' - if not is_module: - globals = pydevd_runpy.run_path(file, globals, '__main__') - else: - # treat ':' as a separator between module and entry point function - # if there is no entry point we run we same as with -m switch. Otherwise we perform - # an import and execute the entry point - if entry_point_fn: - mod = __import__(module_name, level=0, fromlist=[entry_point_fn], globals=globals, locals=locals) - func = getattr(mod, entry_point_fn) - func() - else: - # Run with the -m switch - globals = pydevd_runpy._run_module_as_main(module_name, alter_argv=False) - return globals - - def wait_for_commands(self, globals): - self._activate_gui_if_needed() - - thread = threading.current_thread() - from _pydevd_bundle import pydevd_frame_utils - frame = pydevd_frame_utils.Frame(None, -1, pydevd_frame_utils.FCode("Console", - os.path.abspath(os.path.dirname(__file__))), globals, globals) - thread_id = get_current_thread_id(thread) - self.add_fake_frame(thread_id, id(frame), frame) - - cmd = self.cmd_factory.make_show_console_message(self, thread_id, frame) - if self.writer is not None: - self.writer.add_command(cmd) - - while True: - if self.gui_in_use: - # call input hooks if only GUI is in use - self._call_input_hook() - self.process_internal_commands() - time.sleep(0.01) - - -class IDAPMessagesListener(object): - - def before_send(self, message_as_dict): - ''' - Called just before a message is sent to the IDE. - - :type message_as_dict: dict - ''' - - def after_receive(self, message_as_dict): - ''' - Called just after a message is received from the IDE. 
- - :type message_as_dict: dict - ''' - - -def add_dap_messages_listener(dap_messages_listener): - ''' - Adds a listener for the DAP (debug adapter protocol) messages. - - :type dap_messages_listener: IDAPMessagesListener - - :note: messages from the xml backend are not notified through this API. - - :note: the notifications are sent from threads and they are not synchronized (so, - it's possible that a message is sent and received from different threads at the same time). - ''' - py_db = get_global_debugger() - if py_db is None: - raise AssertionError('PyDB is still not setup.') - - py_db.add_dap_messages_listener(dap_messages_listener) - - -def send_json_message(msg): - ''' - API to send some custom json message. - - :param dict|pydevd_schema.BaseSchema msg: - The custom message to be sent. - - :return bool: - True if the message was added to the queue to be sent and False otherwise. - ''' - py_db = get_global_debugger() - if py_db is None: - return False - - writer = py_db.writer - if writer is None: - return False - - cmd = NetCommand(-1, 0, msg, is_json=True) - writer.add_command(cmd) - return True - - -def enable_qt_support(qt_support_mode): - from _pydev_bundle import pydev_monkey_qt - pydev_monkey_qt.patch_qt(qt_support_mode) - - -def start_dump_threads_thread(filename_template, timeout, recurrent): - ''' - Helper to dump threads after a timeout. - - :param filename_template: - A template filename, such as 'c:/temp/thread_dump_%s.txt', where the %s will - be replaced by the time for the dump. - :param timeout: - The timeout (in seconds) for the dump. - :param recurrent: - If True we'll keep on doing thread dumps. - ''' - assert filename_template.count('%s') == 1, \ - 'Expected one %%s to appear in: %s' % (filename_template,) - - def _threads_on_timeout(): - try: - while True: - time.sleep(timeout) - filename = filename_template % (time.time(),) - try: - os.makedirs(os.path.dirname(filename)) - except Exception: - pass - with open(filename, 'w') as stream: - dump_threads(stream) - if not recurrent: - return - except Exception: - pydev_log.exception() - - t = threading.Thread(target=_threads_on_timeout) - mark_as_pydevd_daemon_thread(t) - t.start() - - -def dump_threads(stream=None): - ''' - Helper to dump thread info (default is printing to stderr). - ''' - pydevd_utils.dump_threads(stream) - - -def usage(doExit=0): - sys.stdout.write('Usage:\n') - sys.stdout.write('pydevd.py --port N [(--client hostname) | --server] --file executable [file_options]\n') - if doExit: - sys.exit(0) - - -def _init_stdout_redirect(): - pydevd_io.redirect_stream_to_pydb_io_messages(std='stdout') - - -def _init_stderr_redirect(): - pydevd_io.redirect_stream_to_pydb_io_messages(std='stderr') - - -def _enable_attach( - address, - dont_trace_start_patterns=(), - dont_trace_end_patterns=(), - patch_multiprocessing=False, - access_token=None, - client_access_token=None, - ): - ''' - Starts accepting connections at the given host/port. The debugger will not be initialized nor - configured, it'll only start accepting connections (and will have the tracing setup in this - thread). - - Meant to be used with the DAP (Debug Adapter Protocol) with _wait_for_attach(). 
- - :param address: (host, port) - :type address: tuple(str, int) - ''' - host = address[0] - port = int(address[1]) - - if SetupHolder.setup is not None: - if port != SetupHolder.setup['port']: - raise AssertionError('Unable to listen in port: %s (already listening in port: %s)' % (port, SetupHolder.setup['port'])) - settrace( - host=host, - port=port, - suspend=False, - wait_for_ready_to_run=False, - block_until_connected=False, - dont_trace_start_patterns=dont_trace_start_patterns, - dont_trace_end_patterns=dont_trace_end_patterns, - patch_multiprocessing=patch_multiprocessing, - access_token=access_token, - client_access_token=client_access_token, - ) - - py_db = get_global_debugger() - py_db.wait_for_server_socket_ready() - return py_db._server_socket_name - - -def _wait_for_attach(cancel=None): - ''' - Meant to be called after _enable_attach() -- the current thread will only unblock after a - connection is in place and the DAP (Debug Adapter Protocol) sends the ConfigurationDone - request. - ''' - py_db = get_global_debugger() - if py_db is None: - raise AssertionError('Debugger still not created. Please use _enable_attach() before using _wait_for_attach().') - - py_db.block_until_configuration_done(cancel=cancel) - - -def _is_attached(): - ''' - Can be called any time to check if the connection was established and the DAP (Debug Adapter Protocol) has sent - the ConfigurationDone request. - ''' - py_db = get_global_debugger() - return (py_db is not None) and py_db.is_attached() - - -#======================================================================================================================= -# settrace -#======================================================================================================================= -def settrace( - host=None, - stdout_to_server=False, - stderr_to_server=False, - port=5678, - suspend=True, - trace_only_current_thread=False, - overwrite_prev_trace=False, - patch_multiprocessing=False, - stop_at_frame=None, - block_until_connected=True, - wait_for_ready_to_run=True, - dont_trace_start_patterns=(), - dont_trace_end_patterns=(), - access_token=None, - client_access_token=None, - notify_stdin=True, - **kwargs - ): - '''Sets the tracing function with the pydev debug function and initializes needed facilities. - - :param host: the user may specify another host, if the debug server is not in the same machine (default is the local - host) - - :param stdout_to_server: when this is true, the stdout is passed to the debug server - - :param stderr_to_server: when this is true, the stderr is passed to the debug server - so that they are printed in its console and not in this process console. - - :param port: specifies which port to use for communicating with the server (note that the server must be started - in the same port). @note: currently it's hard-coded at 5678 in the client - - :param suspend: whether a breakpoint should be emulated as soon as this function is called. - - :param trace_only_current_thread: determines if only the current thread will be traced or all current and future - threads will also have the tracing enabled. - - :param overwrite_prev_trace: deprecated - - :param patch_multiprocessing: if True we'll patch the functions which create new processes so that launched - processes are debugged. - - :param stop_at_frame: if passed it'll stop at the given frame, otherwise it'll stop in the function which - called this method. 
-
- :param wait_for_ready_to_run: if True, settrace will block until the ready_to_run flag is set to True,
- otherwise, it'll set ready_to_run to True and this function won't block.
-
- Note that if wait_for_ready_to_run == False, there are no guarantees that the debugger is synchronized
- with what's configured in the client (IDE); the only guarantee is that, when leaving this function,
- the debugger will already be connected.
-
- :param dont_trace_start_patterns: if set, then any path that starts with one of the patterns in the collection
- will not be traced
-
- :param dont_trace_end_patterns: if set, then any path that ends with one of the patterns in the collection
- will not be traced
-
- :param access_token: token to be sent from the client (i.e.: IDE) to the debugger when a connection
- is established (verified by the debugger).
-
- :param client_access_token: token to be sent from the debugger to the client (i.e.: IDE) when
- a connection is established (verified by the client).
-
- :param notify_stdin:
- If True, sys.stdin will be patched to notify the client when a message is requested
- from the IDE. This is done so that the client is notified when stdin is read.
- Clients may need this to know whether something being written should be interpreted
- as input to the process or as a command to be evaluated.
- Note that parallel-python has issues with this (because it tries to assert that sys.stdin
- is of a given type instead of just checking that it has what it needs).
- '''
-
- stdout_to_server = stdout_to_server or kwargs.get('stdoutToServer', False) # Backward compatibility
- stderr_to_server = stderr_to_server or kwargs.get('stderrToServer', False) # Backward compatibility
-
- # Internal use (may be used to set the setup info directly for subprocesses).
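- # A minimal sketch of that internal use (illustrative only; `forwarded_setup` is a
- # hypothetical name and the keys mirror the dict built in _locked_settrace below):
- #
- #     forwarded_setup = dict(SetupHolder.setup or {})
- #     settrace(host=forwarded_setup.get('client'), port=forwarded_setup.get('port', 5678),
- #              suspend=False, __setup_holder__=forwarded_setup)
- #
- # i.e. a process which spawns a child can hand over its own setup dict so the child
- # reuses the same connection settings instead of rebuilding them.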
- __setup_holder__ = kwargs.get('__setup_holder__') - - with _set_trace_lock: - _locked_settrace( - host, - stdout_to_server, - stderr_to_server, - port, - suspend, - trace_only_current_thread, - patch_multiprocessing, - stop_at_frame, - block_until_connected, - wait_for_ready_to_run, - dont_trace_start_patterns, - dont_trace_end_patterns, - access_token, - client_access_token, - __setup_holder__=__setup_holder__, - notify_stdin=notify_stdin, - ) - - -_set_trace_lock = ForkSafeLock() - - -def _locked_settrace( - host, - stdout_to_server, - stderr_to_server, - port, - suspend, - trace_only_current_thread, - patch_multiprocessing, - stop_at_frame, - block_until_connected, - wait_for_ready_to_run, - dont_trace_start_patterns, - dont_trace_end_patterns, - access_token, - client_access_token, - __setup_holder__, - notify_stdin, - ): - if patch_multiprocessing: - try: - from _pydev_bundle import pydev_monkey - except: - pass - else: - pydev_monkey.patch_new_process_functions() - - if host is None: - from _pydev_bundle import pydev_localhost - host = pydev_localhost.get_localhost() - - global _global_redirect_stdout_to_server - global _global_redirect_stderr_to_server - - py_db = get_global_debugger() - if __setup_holder__: - SetupHolder.setup = __setup_holder__ - if py_db is None: - py_db = PyDB() - pydevd_vm_type.setup_type() - - if SetupHolder.setup is None: - setup = { - 'client': host, # dispatch expects client to be set to the host address when server is False - 'server': False, - 'port': int(port), - 'multiprocess': patch_multiprocessing, - 'skip-notify-stdin': not notify_stdin, - } - SetupHolder.setup = setup - - if access_token is not None: - py_db.authentication.access_token = access_token - SetupHolder.setup['access-token'] = access_token - if client_access_token is not None: - py_db.authentication.client_access_token = client_access_token - SetupHolder.setup['client-access-token'] = client_access_token - - if block_until_connected: - py_db.connect(host, port) # Note: connect can raise error. - else: - # Create a dummy writer and wait for the real connection. - py_db.writer = WriterThread(NULL, py_db, terminate_on_socket_close=False) - py_db.create_wait_for_connection_thread() - - if dont_trace_start_patterns or dont_trace_end_patterns: - PyDevdAPI().set_dont_trace_start_end_patterns(py_db, dont_trace_start_patterns, dont_trace_end_patterns) - - _global_redirect_stdout_to_server = stdout_to_server - _global_redirect_stderr_to_server = stderr_to_server - - if _global_redirect_stdout_to_server: - _init_stdout_redirect() - - if _global_redirect_stderr_to_server: - _init_stderr_redirect() - - if notify_stdin: - patch_stdin() - - t = threadingCurrentThread() - additional_info = set_additional_thread_info(t) - - if not wait_for_ready_to_run: - py_db.ready_to_run = True - - py_db.wait_for_ready_to_run() - py_db.start_auxiliary_daemon_threads() - - try: - if INTERACTIVE_MODE_AVAILABLE: - py_db.init_gui_support() - except: - pydev_log.exception("Matplotlib support in debugger failed") - - if trace_only_current_thread: - py_db.enable_tracing() - else: - # Trace future threads. 
- py_db.patch_threads() - - py_db.enable_tracing(py_db.trace_dispatch, apply_to_all_threads=True) - - # As this is the first connection, also set tracing for any untraced threads - py_db.set_tracing_for_untraced_contexts() - - py_db.set_trace_for_frame_and_parents(get_frame().f_back) - - with CustomFramesContainer.custom_frames_lock: # @UndefinedVariable - for _frameId, custom_frame in CustomFramesContainer.custom_frames.items(): - py_db.set_trace_for_frame_and_parents(custom_frame.frame) - - else: - # ok, we're already in debug mode, with all set, so, let's just set the break - if access_token is not None: - py_db.authentication.access_token = access_token - if client_access_token is not None: - py_db.authentication.client_access_token = client_access_token - - py_db.set_trace_for_frame_and_parents(get_frame().f_back) - - t = threadingCurrentThread() - additional_info = set_additional_thread_info(t) - - if trace_only_current_thread: - py_db.enable_tracing() - else: - # Trace future threads. - py_db.patch_threads() - py_db.enable_tracing(py_db.trace_dispatch, apply_to_all_threads=True) - - # Suspend as the last thing after all tracing is in place. - if suspend: - if stop_at_frame is not None: - # If the step was set we have to go to run state and - # set the proper frame for it to stop. - additional_info.pydev_state = STATE_RUN - additional_info.pydev_original_step_cmd = CMD_STEP_OVER - additional_info.pydev_step_cmd = CMD_STEP_OVER - additional_info.pydev_step_stop = stop_at_frame - additional_info.suspend_type = PYTHON_SUSPEND - else: - # Ask to break as soon as possible. - py_db.set_suspend(t, CMD_SET_BREAK) - - -def stoptrace(): - pydev_log.debug("pydevd.stoptrace()") - pydevd_tracing.restore_sys_set_trace_func() - sys.settrace(None) - try: - # not available in jython! - threading.settrace(None) # for all future threads - except: - pass - - from _pydev_bundle.pydev_monkey import undo_patch_thread_modules - undo_patch_thread_modules() - - # Either or both standard streams can be closed at this point, - # in which case flush() will fail. 
- try: - sys.stdout.flush() - except: - pass - try: - sys.stderr.flush() - except: - pass - - py_db = get_global_debugger() - - if py_db is not None: - py_db.dispose_and_kill_all_pydevd_threads() - - -class Dispatcher(object): - - def __init__(self): - self.port = None - - def connect(self, host, port): - self.host = host - self.port = port - self.client = start_client(self.host, self.port) - self.reader = DispatchReader(self) - self.reader.pydev_do_not_trace = False # we run reader in the same thread so we don't want to loose tracing - self.reader.run() - - def close(self): - try: - self.reader.do_kill_pydev_thread() - except: - pass - - -class DispatchReader(ReaderThread): - - def __init__(self, dispatcher): - self.dispatcher = dispatcher - - ReaderThread.__init__( - self, - get_global_debugger(), - self.dispatcher.client, - PyDevJsonCommandProcessor=PyDevJsonCommandProcessor, - process_net_command=process_net_command, - ) - - @overrides(ReaderThread._on_run) - def _on_run(self): - dummy_thread = threading.current_thread() - dummy_thread.is_pydev_daemon_thread = False - return ReaderThread._on_run(self) - - @overrides(PyDBDaemonThread.do_kill_pydev_thread) - def do_kill_pydev_thread(self): - if not self._kill_received: - ReaderThread.do_kill_pydev_thread(self) - try: - self.sock.shutdown(SHUT_RDWR) - except: - pass - try: - self.sock.close() - except: - pass - - def process_command(self, cmd_id, seq, text): - if cmd_id == 99: - self.dispatcher.port = int(text) - self._kill_received = True - - -DISPATCH_APPROACH_NEW_CONNECTION = 1 # Used by PyDev -DISPATCH_APPROACH_EXISTING_CONNECTION = 2 # Used by PyCharm -DISPATCH_APPROACH = DISPATCH_APPROACH_NEW_CONNECTION - - -def dispatch(): - setup = SetupHolder.setup - host = setup['client'] - port = setup['port'] - if DISPATCH_APPROACH == DISPATCH_APPROACH_EXISTING_CONNECTION: - dispatcher = Dispatcher() - try: - dispatcher.connect(host, port) - port = dispatcher.port - finally: - dispatcher.close() - return host, port - - -def settrace_forked(setup_tracing=True): - ''' - When creating a fork from a process in the debugger, we need to reset the whole debugger environment! - ''' - from _pydevd_bundle.pydevd_constants import GlobalDebuggerHolder - py_db = GlobalDebuggerHolder.global_dbg - if py_db is not None: - py_db.created_pydb_daemon_threads = {} # Just making sure we won't touch those (paused) threads. - py_db = None - - GlobalDebuggerHolder.global_dbg = None - threading.current_thread().additional_info = None - - # Make sure that we keep the same access tokens for subprocesses started through fork. - setup = SetupHolder.setup - if setup is None: - setup = {} - else: - # i.e.: Get the ppid at this point as it just changed. - # If we later do an exec() it should remain the same ppid. 
- setup[pydevd_constants.ARGUMENT_PPID] = PyDevdAPI().get_ppid() - access_token = setup.get('access-token') - client_access_token = setup.get('client-access-token') - - if setup_tracing: - from _pydevd_frame_eval.pydevd_frame_eval_main import clear_thread_local_info - host, port = dispatch() - - import pydevd_tracing - pydevd_tracing.restore_sys_set_trace_func() - - if setup_tracing: - if port is not None: - custom_frames_container_init() - - if clear_thread_local_info is not None: - clear_thread_local_info() - - settrace( - host, - port=port, - suspend=False, - trace_only_current_thread=False, - overwrite_prev_trace=True, - patch_multiprocessing=True, - access_token=access_token, - client_access_token=client_access_token, - ) - - -@contextmanager -def skip_subprocess_arg_patch(): - ''' - May be used to skip the monkey-patching that pydevd does to - skip changing arguments to embed the debugger into child processes. - - i.e.: - - with pydevd.skip_subprocess_arg_patch(): - subprocess.call(...) - ''' - from _pydev_bundle import pydev_monkey - with pydev_monkey.skip_subprocess_arg_patch(): - yield - - -def add_dont_terminate_child_pid(pid): - ''' - May be used to ask pydevd to skip the termination of some process - when it's asked to terminate (debug adapter protocol only). - - :param int pid: - The pid to be ignored. - - i.e.: - - process = subprocess.Popen(...) - pydevd.add_dont_terminate_child_pid(process.pid) - ''' - py_db = get_global_debugger() - if py_db is not None: - py_db.dont_terminate_child_pids.add(pid) - - -class SetupHolder: - - setup = None - - -def apply_debugger_options(setup_options): - """ - - :type setup_options: dict[str, bool] - """ - default_options = {'save-signatures': False, 'qt-support': ''} - default_options.update(setup_options) - setup_options = default_options - - debugger = get_global_debugger() - if setup_options['save-signatures']: - if pydevd_vm_type.get_vm_type() == pydevd_vm_type.PydevdVmType.JYTHON: - sys.stderr.write("Collecting run-time type information is not supported for Jython\n") - else: - # Only import it if we're going to use it! - from _pydevd_bundle.pydevd_signature import SignatureFactory - debugger.signature_factory = SignatureFactory() - - if setup_options['qt-support']: - enable_qt_support(setup_options['qt-support']) - - -@call_only_once -def patch_stdin(): - _internal_patch_stdin(None, sys, getpass_mod) - - -def _internal_patch_stdin(py_db=None, sys=None, getpass_mod=None): - ''' - Note: don't use this function directly, use `patch_stdin()` instead. - (this function is only meant to be used on test-cases to avoid patching the actual globals). - ''' - # Patch stdin so that we notify when readline() is called. 
- original_sys_stdin = sys.stdin - debug_console_stdin = DebugConsoleStdIn(py_db, original_sys_stdin) - sys.stdin = debug_console_stdin - - _original_getpass = getpass_mod.getpass - - @functools.wraps(_original_getpass) - def getpass(*args, **kwargs): - with DebugConsoleStdIn.notify_input_requested(debug_console_stdin): - try: - curr_stdin = sys.stdin - if curr_stdin is debug_console_stdin: - sys.stdin = original_sys_stdin - return _original_getpass(*args, **kwargs) - finally: - sys.stdin = curr_stdin - - getpass_mod.getpass = getpass - -# Dispatch on_debugger_modules_loaded here, after all primary py_db modules are loaded - - -for handler in pydevd_extension_utils.extensions_of_type(DebuggerEventHandler): - handler.on_debugger_modules_loaded(debugger_version=__version__) - - -def log_to(log_file:str, log_level=3) -> None: - ''' - In pydevd it's possible to log by setting the following environment variables: - - PYDEVD_DEBUG=1 (sets the default log level to 3 along with other default options) - PYDEVD_DEBUG_FILE= - - Note that the file will have the pid of the process added to it (so, logging to - /path/to/file.log would actually start logging to /path/to/file..log -- if subprocesses are - logged, each new subprocess will have the logging set to its own pid). - - Usually setting the environment variable is preferred as it'd log information while - pydevd is still doing its imports and not just after this method is called, but on - cases where this is hard to do this function may be called to set the tracing after - pydevd itself is already imported. - ''' - pydev_log.log_to(log_file, log_level) - - -def _log_initial_info(): - pydev_log.debug("Initial arguments: %s", (sys.argv,)) - pydev_log.debug("Current pid: %s", os.getpid()) - pydev_log.debug("Using cython: %s", USING_CYTHON) - pydev_log.debug("Using frame eval: %s", USING_FRAME_EVAL) - pydev_log.debug("Using gevent mode: %s / imported gevent module support: %s", SUPPORT_GEVENT, bool(pydevd_gevent_integration)) - - -def config(protocol='', debug_mode='', preimport=''): - pydev_log.debug('Config: protocol: %s, debug_mode: %s, preimport: %s', protocol, debug_mode, preimport) - PydevdCustomization.DEFAULT_PROTOCOL = protocol - PydevdCustomization.DEBUG_MODE = debug_mode - PydevdCustomization.PREIMPORT = preimport - - -#======================================================================================================================= -# main -#======================================================================================================================= -def main(): - - # parse the command line. --file is our last argument that is required - _log_initial_info() - original_argv = sys.argv[:] - try: - from _pydevd_bundle.pydevd_command_line_handling import process_command_line - setup = process_command_line(sys.argv) - SetupHolder.setup = setup - except ValueError: - pydev_log.exception() - usage(1) - - preimport = setup.get('preimport') - if preimport: - pydevd_defaults.PydevdCustomization.PREIMPORT = preimport - - debug_mode = setup.get('debug-mode') - if debug_mode: - pydevd_defaults.PydevdCustomization.DEBUG_MODE = debug_mode - - log_trace_level = setup.get('log-level') - - # Note: the logging info could've been changed (this would happen if this is a - # subprocess and the value in the environment variable does not match the value in the - # argument because the user used `pydevd.log_to` instead of supplying the environment - # variable). 
If this is the case, update the logging info and re-log some information - # in the new target. - new_debug_file = setup.get('log-file') - if new_debug_file and DebugInfoHolder.PYDEVD_DEBUG_FILE != new_debug_file: - # The debug file can't be set directly, we need to use log_to() so that the a - # new stream is actually created for the new file. - log_to(new_debug_file, log_trace_level if log_trace_level is not None else 3) - _log_initial_info() # The redirection info just changed, log it again. - - elif log_trace_level is not None: - # The log file was not specified - DebugInfoHolder.DEBUG_TRACE_LEVEL = log_trace_level - pydev_log.debug('Original sys.argv: %s', original_argv) - - if setup['print-in-debugger-startup']: - try: - pid = ' (pid: %s)' % os.getpid() - except: - pid = '' - sys.stderr.write("pydev debugger: starting%s\n" % pid) - - pydev_log.debug("Executing file %s", setup['file']) - pydev_log.debug("arguments: %s", (sys.argv,)) - - pydevd_vm_type.setup_type(setup.get('vm_type', None)) - - port = setup['port'] - host = setup['client'] - f = setup['file'] - fix_app_engine_debug = False - - debugger = get_global_debugger() - if debugger is None: - debugger = PyDB() - - try: - from _pydev_bundle import pydev_monkey - except: - pass # Not usable on jython 2.1 - else: - if setup['multiprocess']: # PyDev - pydev_monkey.patch_new_process_functions() - - elif setup['multiproc']: # PyCharm - pydev_log.debug("Started in multiproc mode\n") - global DISPATCH_APPROACH - DISPATCH_APPROACH = DISPATCH_APPROACH_EXISTING_CONNECTION - - dispatcher = Dispatcher() - try: - dispatcher.connect(host, port) - if dispatcher.port is not None: - port = dispatcher.port - pydev_log.debug("Received port %d\n", port) - pydev_log.info("pydev debugger: process %d is connecting\n" % os.getpid()) - - try: - pydev_monkey.patch_new_process_functions() - except: - pydev_log.exception("Error patching process functions.") - else: - pydev_log.critical("pydev debugger: couldn't get port for new debug process.") - finally: - dispatcher.close() - else: - try: - pydev_monkey.patch_new_process_functions_with_warning() - except: - pydev_log.exception("Error patching process functions.") - - # Only do this patching if we're not running with multiprocess turned on. - if f.find('dev_appserver.py') != -1: - if os.path.basename(f).startswith('dev_appserver.py'): - appserver_dir = os.path.dirname(f) - version_file = os.path.join(appserver_dir, 'VERSION') - if os.path.exists(version_file): - try: - stream = open(version_file, 'r') - try: - for line in stream.read().splitlines(): - line = line.strip() - if line.startswith('release:'): - line = line[8:].strip() - version = line.replace('"', '') - version = version.split('.') - if int(version[0]) > 1: - fix_app_engine_debug = True - - elif int(version[0]) == 1: - if int(version[1]) >= 7: - # Only fix from 1.7 onwards - fix_app_engine_debug = True - break - finally: - stream.close() - except: - pydev_log.exception() - - try: - # In the default run (i.e.: run directly on debug mode), we try to patch stackless as soon as possible - # on a run where we have a remote debug, we may have to be more careful because patching stackless means - # that if the user already had a stackless.set_schedule_callback installed, he'd loose it and would need - # to call it again (because stackless provides no way of getting the last function which was registered - # in set_schedule_callback). 
- # - # So, ideally, if there's an application using stackless and the application wants to use the remote debugger - # and benefit from stackless debugging, the application itself must call: - # - # import pydevd_stackless - # pydevd_stackless.patch_stackless() - # - # itself to be able to benefit from seeing the tasklets created before the remote debugger is attached. - from _pydevd_bundle import pydevd_stackless - pydevd_stackless.patch_stackless() - except: - # It's ok not having stackless there... - try: - if hasattr(sys, 'exc_clear'): - sys.exc_clear() # the exception information should be cleaned in Python 2 - except: - pass - - is_module = setup['module'] - if not setup['skip-notify-stdin']: - patch_stdin() - - if setup[pydevd_constants.ARGUMENT_JSON_PROTOCOL]: - PyDevdAPI().set_protocol(debugger, 0, JSON_PROTOCOL) - - elif setup[pydevd_constants.ARGUMENT_HTTP_JSON_PROTOCOL]: - PyDevdAPI().set_protocol(debugger, 0, HTTP_JSON_PROTOCOL) - - elif setup[pydevd_constants.ARGUMENT_HTTP_PROTOCOL]: - PyDevdAPI().set_protocol(debugger, 0, pydevd_constants.HTTP_PROTOCOL) - - elif setup[pydevd_constants.ARGUMENT_QUOTED_LINE_PROTOCOL]: - PyDevdAPI().set_protocol(debugger, 0, pydevd_constants.QUOTED_LINE_PROTOCOL) - - access_token = setup['access-token'] - if access_token: - debugger.authentication.access_token = access_token - - client_access_token = setup['client-access-token'] - if client_access_token: - debugger.authentication.client_access_token = client_access_token - - if fix_app_engine_debug: - sys.stderr.write("pydev debugger: google app engine integration enabled\n") - curr_dir = os.path.dirname(__file__) - app_engine_startup_file = os.path.join(curr_dir, 'pydev_app_engine_debug_startup.py') - - sys.argv.insert(1, '--python_startup_script=' + app_engine_startup_file) - import json - setup['pydevd'] = __file__ - sys.argv.insert(2, '--python_startup_args=%s' % json.dumps(setup),) - sys.argv.insert(3, '--automatic_restart=no') - sys.argv.insert(4, '--max_module_instances=1') - - # Run the dev_appserver - debugger.run(setup['file'], None, None, is_module, set_trace=False) - else: - if setup['save-threading']: - debugger.thread_analyser = ThreadingLogger() - if setup['save-asyncio']: - debugger.asyncio_analyser = AsyncioLogger() - - apply_debugger_options(setup) - - try: - debugger.connect(host, port) - except: - sys.stderr.write("Could not connect to %s: %s\n" % (host, port)) - pydev_log.exception() - sys.exit(1) - - globals = debugger.run(setup['file'], None, None, is_module) - - if setup['cmd-line']: - debugger.wait_for_commands(globals) - - -if __name__ == '__main__': - main() diff --git a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/docarray/typing/bytes/image_bytes.py b/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/docarray/typing/bytes/image_bytes.py deleted file mode 100644 index a456a493ccbc4a35d8f83a3ec418343a81e19596..0000000000000000000000000000000000000000 --- a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/docarray/typing/bytes/image_bytes.py +++ /dev/null @@ -1,145 +0,0 @@ -from io import BytesIO -from typing import TYPE_CHECKING, Any, Optional, Tuple, Type, TypeVar - -import numpy as np -from pydantic import parse_obj_as -from pydantic.validators import bytes_validator - -from docarray.typing.abstract_type import AbstractType -from docarray.typing.proto_register import _register_proto -from docarray.typing.tensor.image.image_ndarray import ImageNdArray -from docarray.utils._internal.misc import import_library - -if TYPE_CHECKING: - from PIL 
import Image as PILImage - from pydantic.fields import BaseConfig, ModelField - - from docarray.proto import NodeProto - -T = TypeVar('T', bound='ImageBytes') - - -@_register_proto(proto_type_name='image_bytes') -class ImageBytes(bytes, AbstractType): - """ - Bytes that store an image and that can be load into an image tensor - """ - - @classmethod - def validate( - cls: Type[T], - value: Any, - field: 'ModelField', - config: 'BaseConfig', - ) -> T: - value = bytes_validator(value) - return cls(value) - - @classmethod - def from_protobuf(cls: Type[T], pb_msg: T) -> T: - return parse_obj_as(cls, pb_msg) - - def _to_node_protobuf(self: T) -> 'NodeProto': - from docarray.proto import NodeProto - - return NodeProto(blob=self, type=self._proto_type_name) - - def load_pil( - self, - ) -> 'PILImage.Image': - """ - Load the image from the bytes into a `PIL.Image.Image` instance - - --- - - ```python - from pydantic import parse_obj_as - - from docarray import BaseDoc - from docarray.typing import ImageUrl - - img_url = "https://upload.wikimedia.org/wikipedia/commons/8/80/Dag_Sebastian_Ahlander_at_G%C3%B6teborg_Book_Fair_2012b.jpg" - - img_url = parse_obj_as(ImageUrl, img_url) - img = img_url.load_pil() - - from PIL.Image import Image - - assert isinstance(img, Image) - ``` - - --- - :return: a Pillow image - """ - PIL = import_library('PIL', raise_error=True) # noqa: F841 - from PIL import Image as PILImage - - return PILImage.open(BytesIO(self)) - - def load( - self, - width: Optional[int] = None, - height: Optional[int] = None, - axis_layout: Tuple[str, str, str] = ('H', 'W', 'C'), - ) -> ImageNdArray: - """ - Load the image from the [`ImageBytes`][docarray.typing.ImageBytes] into an - [`ImageNdArray`][docarray.typing.ImageNdArray]. - - --- - - ```python - from docarray import BaseDoc - from docarray.typing import ImageNdArray, ImageUrl - - - class MyDoc(BaseDoc): - img_url: ImageUrl - - - doc = MyDoc( - img_url="https://upload.wikimedia.org/wikipedia/commons/8/80/" - "Dag_Sebastian_Ahlander_at_G%C3%B6teborg_Book_Fair_2012b.jpg" - ) - - img_tensor = doc.img_url.load() - assert isinstance(img_tensor, ImageNdArray) - - img_tensor = doc.img_url.load(height=224, width=224) - assert img_tensor.shape == (224, 224, 3) - - layout = ('C', 'W', 'H') - img_tensor = doc.img_url.load(height=100, width=200, axis_layout=layout) - assert img_tensor.shape == (3, 200, 100) - ``` - - --- - - :param width: width of the image tensor. - :param height: height of the image tensor. - :param axis_layout: ordering of the different image axes. 
- 'H' = height, 'W' = width, 'C' = color channel - :return: [`ImageNdArray`][docarray.typing.ImageNdArray] representing the image as RGB values - """ - raw_img = self.load_pil() - - if width or height: - new_width = width or raw_img.width - new_height = height or raw_img.height - raw_img = raw_img.resize((new_width, new_height)) - try: - tensor = np.array(raw_img.convert('RGB')) - except Exception: - tensor = np.array(raw_img) - - img = self._move_channel_axis(tensor, axis_layout=axis_layout) - return parse_obj_as(ImageNdArray, img) - - @staticmethod - def _move_channel_axis( - tensor: np.ndarray, axis_layout: Tuple[str, str, str] = ('H', 'W', 'C') - ) -> np.ndarray: - """Moves channel axis around.""" - channel_to_offset = {'H': 0, 'W': 1, 'C': 2} - permutation = tuple(channel_to_offset[axis] for axis in axis_layout) - return np.transpose(tensor, permutation) diff --git a/spaces/Superlang/ImageProcessor/annotator/oneformer/detectron2/projects/deeplab/resnet.py b/spaces/Superlang/ImageProcessor/annotator/oneformer/detectron2/projects/deeplab/resnet.py deleted file mode 100644 index 28455d123a12f887400c19c263d08cc2ed08522e..0000000000000000000000000000000000000000 --- a/spaces/Superlang/ImageProcessor/annotator/oneformer/detectron2/projects/deeplab/resnet.py +++ /dev/null @@ -1,158 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -import fvcore.nn.weight_init as weight_init -import torch.nn.functional as F - -from annotator.oneformer.detectron2.layers import CNNBlockBase, Conv2d, get_norm -from annotator.oneformer.detectron2.modeling import BACKBONE_REGISTRY -from annotator.oneformer.detectron2.modeling.backbone.resnet import ( - BasicStem, - BottleneckBlock, - DeformBottleneckBlock, - ResNet, -) - - -class DeepLabStem(CNNBlockBase): - """ - The DeepLab ResNet stem (layers before the first residual block). - """ - - def __init__(self, in_channels=3, out_channels=128, norm="BN"): - """ - Args: - norm (str or callable): norm after the first conv layer. - See :func:`layers.get_norm` for supported format. - """ - super().__init__(in_channels, out_channels, 4) - self.in_channels = in_channels - self.conv1 = Conv2d( - in_channels, - out_channels // 2, - kernel_size=3, - stride=2, - padding=1, - bias=False, - norm=get_norm(norm, out_channels // 2), - ) - self.conv2 = Conv2d( - out_channels // 2, - out_channels // 2, - kernel_size=3, - stride=1, - padding=1, - bias=False, - norm=get_norm(norm, out_channels // 2), - ) - self.conv3 = Conv2d( - out_channels // 2, - out_channels, - kernel_size=3, - stride=1, - padding=1, - bias=False, - norm=get_norm(norm, out_channels), - ) - weight_init.c2_msra_fill(self.conv1) - weight_init.c2_msra_fill(self.conv2) - weight_init.c2_msra_fill(self.conv3) - - def forward(self, x): - x = self.conv1(x) - x = F.relu_(x) - x = self.conv2(x) - x = F.relu_(x) - x = self.conv3(x) - x = F.relu_(x) - x = F.max_pool2d(x, kernel_size=3, stride=2, padding=1) - return x - - -@BACKBONE_REGISTRY.register() -def build_resnet_deeplab_backbone(cfg, input_shape): - """ - Create a ResNet instance from config. - Returns: - ResNet: a :class:`ResNet` instance. - """ - # need registration of new blocks/stems? 
- norm = cfg.MODEL.RESNETS.NORM - if cfg.MODEL.RESNETS.STEM_TYPE == "basic": - stem = BasicStem( - in_channels=input_shape.channels, - out_channels=cfg.MODEL.RESNETS.STEM_OUT_CHANNELS, - norm=norm, - ) - elif cfg.MODEL.RESNETS.STEM_TYPE == "deeplab": - stem = DeepLabStem( - in_channels=input_shape.channels, - out_channels=cfg.MODEL.RESNETS.STEM_OUT_CHANNELS, - norm=norm, - ) - else: - raise ValueError("Unknown stem type: {}".format(cfg.MODEL.RESNETS.STEM_TYPE)) - - # fmt: off - freeze_at = cfg.MODEL.BACKBONE.FREEZE_AT - out_features = cfg.MODEL.RESNETS.OUT_FEATURES - depth = cfg.MODEL.RESNETS.DEPTH - num_groups = cfg.MODEL.RESNETS.NUM_GROUPS - width_per_group = cfg.MODEL.RESNETS.WIDTH_PER_GROUP - bottleneck_channels = num_groups * width_per_group - in_channels = cfg.MODEL.RESNETS.STEM_OUT_CHANNELS - out_channels = cfg.MODEL.RESNETS.RES2_OUT_CHANNELS - stride_in_1x1 = cfg.MODEL.RESNETS.STRIDE_IN_1X1 - res4_dilation = cfg.MODEL.RESNETS.RES4_DILATION - res5_dilation = cfg.MODEL.RESNETS.RES5_DILATION - deform_on_per_stage = cfg.MODEL.RESNETS.DEFORM_ON_PER_STAGE - deform_modulated = cfg.MODEL.RESNETS.DEFORM_MODULATED - deform_num_groups = cfg.MODEL.RESNETS.DEFORM_NUM_GROUPS - res5_multi_grid = cfg.MODEL.RESNETS.RES5_MULTI_GRID - # fmt: on - assert res4_dilation in {1, 2}, "res4_dilation cannot be {}.".format(res4_dilation) - assert res5_dilation in {1, 2, 4}, "res5_dilation cannot be {}.".format(res5_dilation) - if res4_dilation == 2: - # Always dilate res5 if res4 is dilated. - assert res5_dilation == 4 - - num_blocks_per_stage = {50: [3, 4, 6, 3], 101: [3, 4, 23, 3], 152: [3, 8, 36, 3]}[depth] - - stages = [] - - # Avoid creating variables without gradients - # It consumes extra memory and may cause allreduce to fail - out_stage_idx = [{"res2": 2, "res3": 3, "res4": 4, "res5": 5}[f] for f in out_features] - max_stage_idx = max(out_stage_idx) - for idx, stage_idx in enumerate(range(2, max_stage_idx + 1)): - if stage_idx == 4: - dilation = res4_dilation - elif stage_idx == 5: - dilation = res5_dilation - else: - dilation = 1 - first_stride = 1 if idx == 0 or dilation > 1 else 2 - stage_kargs = { - "num_blocks": num_blocks_per_stage[idx], - "stride_per_block": [first_stride] + [1] * (num_blocks_per_stage[idx] - 1), - "in_channels": in_channels, - "out_channels": out_channels, - "norm": norm, - } - stage_kargs["bottleneck_channels"] = bottleneck_channels - stage_kargs["stride_in_1x1"] = stride_in_1x1 - stage_kargs["dilation"] = dilation - stage_kargs["num_groups"] = num_groups - if deform_on_per_stage[idx]: - stage_kargs["block_class"] = DeformBottleneckBlock - stage_kargs["deform_modulated"] = deform_modulated - stage_kargs["deform_num_groups"] = deform_num_groups - else: - stage_kargs["block_class"] = BottleneckBlock - if stage_idx == 5: - stage_kargs.pop("dilation") - stage_kargs["dilation_per_block"] = [dilation * mg for mg in res5_multi_grid] - blocks = ResNet.make_stage(**stage_kargs) - in_channels = out_channels - out_channels *= 2 - bottleneck_channels *= 2 - stages.append(blocks) - return ResNet(stem, stages, out_features=out_features).freeze(freeze_at) diff --git a/spaces/Superlang/ImageProcessor/annotator/oneformer/oneformer/modeling/pixel_decoder/__init__.py b/spaces/Superlang/ImageProcessor/annotator/oneformer/oneformer/modeling/pixel_decoder/__init__.py deleted file mode 100644 index 9020c2df23e2af280b7bb168b996ae9eaf312eb8..0000000000000000000000000000000000000000 --- a/spaces/Superlang/ImageProcessor/annotator/oneformer/oneformer/modeling/pixel_decoder/__init__.py +++ /dev/null @@ 
-1 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. diff --git a/spaces/Superlang/ImageProcessor/annotator/uniformer/configs/_base_/schedules/schedule_80k.py b/spaces/Superlang/ImageProcessor/annotator/uniformer/configs/_base_/schedules/schedule_80k.py deleted file mode 100644 index c190cee6bdc7922b688ea75dc8f152fa15c24617..0000000000000000000000000000000000000000 --- a/spaces/Superlang/ImageProcessor/annotator/uniformer/configs/_base_/schedules/schedule_80k.py +++ /dev/null @@ -1,9 +0,0 @@ -# optimizer -optimizer = dict(type='SGD', lr=0.01, momentum=0.9, weight_decay=0.0005) -optimizer_config = dict() -# learning policy -lr_config = dict(policy='poly', power=0.9, min_lr=1e-4, by_epoch=False) -# runtime settings -runner = dict(type='IterBasedRunner', max_iters=80000) -checkpoint_config = dict(by_epoch=False, interval=8000) -evaluation = dict(interval=8000, metric='mIoU') diff --git a/spaces/TabPFN/TabPFNEvaluation/TabPFN/utils.py b/spaces/TabPFN/TabPFNEvaluation/TabPFN/utils.py deleted file mode 100644 index b600696b0baf302858ceabb99a1e4e24986c0624..0000000000000000000000000000000000000000 --- a/spaces/TabPFN/TabPFNEvaluation/TabPFN/utils.py +++ /dev/null @@ -1,236 +0,0 @@ -import os -import math -import argparse -import random -import datetime - -import torch -from torch import nn -from torch.optim.lr_scheduler import LambdaLR -import numpy as np - -# copied from huggingface -def get_cosine_schedule_with_warmup(optimizer, num_warmup_steps, num_training_steps, num_cycles=0.5, last_epoch=-1): - """ Create a schedule with a learning rate that decreases following the - values of the cosine function between 0 and `pi * cycles` after a warmup - period during which it increases linearly between 0 and 1. - """ - - def lr_lambda(current_step): - if current_step < num_warmup_steps: - return float(current_step) / float(max(1, num_warmup_steps)) - progress = float(current_step - num_warmup_steps) / float(max(1, num_training_steps - num_warmup_steps)) - return max(0.0, 0.5 * (1.0 + math.cos(math.pi * float(num_cycles) * 2.0 * progress))) - - return LambdaLR(optimizer, lr_lambda, last_epoch) - -# copied from huggingface -def get_linear_schedule_with_warmup(optimizer, num_warmup_steps, num_training_steps, last_epoch=-1): - """ - Create a schedule with a learning rate that decreases linearly from the initial lr set in the optimizer to 0, after - a warmup period during which it increases linearly from 0 to the initial lr set in the optimizer. - - Args: - optimizer (:class:`~torch.optim.Optimizer`): - The optimizer for which to schedule the learning rate. - num_warmup_steps (:obj:`int`): - The number of steps for the warmup phase. - num_training_steps (:obj:`int`): - The total number of training steps. - last_epoch (:obj:`int`, `optional`, defaults to -1): - The index of the last epoch when resuming training. - - Return: - :obj:`torch.optim.lr_scheduler.LambdaLR` with the appropriate schedule. 
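-
-    A minimal usage sketch (illustrative addition, not part of the original
-    docstring; assumes a ``model`` and a 1000-step training loop)::
-
-        optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3)
-        scheduler = get_linear_schedule_with_warmup(
-            optimizer, num_warmup_steps=100, num_training_steps=1000)
-        for _ in range(1000):
-            ...                   # forward/backward pass and optimizer.step() go here
-            scheduler.step()      # advance the linear warmup/decay schedule by one step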
- """ - - def lr_lambda(current_step: int): - if current_step < num_warmup_steps: - return float(current_step) / float(max(1, num_warmup_steps)) - return max( - 0.0, float(num_training_steps - current_step) / float(max(1, num_training_steps - num_warmup_steps)) - ) - - return LambdaLR(optimizer, lr_lambda, last_epoch) - - -def get_openai_lr(transformer_model): - num_params = sum(p.numel() for p in transformer_model.parameters()) - return 0.003239 - 0.0001395 * math.log(num_params) - - -def get_weighted_single_eval_pos_sampler(max_len): - """ - This gives a sampler that can be used for `single_eval_pos` which yields good performance for all positions p, - where p <= `max_len`. At most `max_len` - 1 examples are shown to the Transformer. - :return: Sampler that can be fed to `train()` as `single_eval_pos_gen`. - """ - return lambda: random.choices(range(max_len), [1 / (max_len - i) for i in range(max_len)])[0] - - -def get_uniform_single_eval_pos_sampler(max_len, min_len=0): - """ - Just sample any evaluation position with the same weight - :return: Sampler that can be fed to `train()` as `single_eval_pos_gen`. - """ - return lambda: random.choices(range(min_len, max_len))[0] - - -class SeqBN(nn.Module): - def __init__(self, d_model): - super().__init__() - self.bn = nn.BatchNorm1d(d_model) - self.d_model = d_model - - def forward(self, x): - assert self.d_model == x.shape[-1] - flat_x = x.view(-1, self.d_model) - flat_x = self.bn(flat_x) - return flat_x.view(*x.shape) - - -def set_locals_in_self(locals): - self = locals['self'] - for var_name, val in locals.items(): - if var_name != 'self': setattr(self, var_name, val) - - -default_device = 'cuda:0' if torch.cuda.is_available() else 'cpu:0' - - -# Copied from StackOverflow, but we do an eval on the values additionally -class StoreDictKeyPair(argparse.Action): - def __init__(self, option_strings, dest, nargs=None, **kwargs): - self._nargs = nargs - super(StoreDictKeyPair, self).__init__(option_strings, dest, nargs=nargs, **kwargs) - - def __call__(self, parser, namespace, values, option_string=None): - my_dict = {} - for kv in values: - k, v = kv.split("=") - try: - my_dict[k] = eval(v) - except NameError: - my_dict[k] = v - setattr(namespace, self.dest, my_dict) - print("dict values: {}".format(my_dict)) - -def get_nan_value(v, set_value_to_nan=0.0): - if random.random() < set_value_to_nan: - return v - else: - return random.choice([-999, 0, 1, 999]) - -def to_ranking(data): - x = (data >= data.unsqueeze(-3)) - x = x.sum(0) - return x -# TODO: Is there a better way to do this? -# 1. Cmparing to unique elements: When all values are different we still get quadratic blowup -# 2. Argsort(Argsort()) returns ranking, but with duplicate values there is an ordering which is problematic -# 3. Argsort(Argsort(Unique))->Scatter seems a bit complicated, doesn't have quadratic blowup, but how fast? 
-def to_ranking_low_mem(data): - x = torch.zeros_like(data) - for col in range(data.shape[-1]): - x_ = (data[:, :, col] >= data[:, :, col].unsqueeze(-2)) - x_ = x_.sum(0) - x[:, :, col] = x_ - return x - -def nan_handling_missing_for_unknown_reason_value(set_value_to_nan=0.0): - return get_nan_value(float('nan'), set_value_to_nan) - -def nan_handling_missing_for_no_reason_value(set_value_to_nan=0.0): - return get_nan_value(float('-inf'), set_value_to_nan) - -def nan_handling_missing_for_a_reason_value(set_value_to_nan=0.0): - return get_nan_value(float('inf'), set_value_to_nan) - -def torch_nanmean(x, axis=0): - num = torch.where(torch.isnan(x), torch.full_like(x, 0), torch.full_like(x, 1)).sum(axis=axis) - value = torch.where(torch.isnan(x), torch.full_like(x, 0), x).sum(axis=axis) - return value / num - -def torch_nanstd(x, axis=0): - num = torch.where(torch.isnan(x), torch.full_like(x, 0), torch.full_like(x, 1)).sum(axis=axis) - value = torch.where(torch.isnan(x), torch.full_like(x, 0), x).sum(axis=axis) - mean = value / num - mean_broadcast = torch.repeat_interleave(mean.unsqueeze(axis), x.shape[axis], dim=axis) - return torch.sqrt(torch.nansum(torch.square(mean_broadcast - x), axis=axis) / (num - 1)) - -def normalize_data(data, normalize_positions=-1): - if normalize_positions > 0: - mean = torch_nanmean(data[:normalize_positions], axis=0) - std = torch_nanstd(data[:normalize_positions], axis=0) + .000001 - else: - mean = torch_nanmean(data, axis=0) - std = torch_nanstd(data, axis=0) + .000001 - data = (data - mean) / std - data = torch.clip(data, min=-100, max=100) - - return data - -def remove_outliers(X, n_sigma=4): - # Expects T, B, H - assert len(X.shape) == 3, "X must be T,B,H" - #for b in range(X.shape[1]): - #for col in range(X.shape[2]): - data = X - data_mean, data_std = torch_nanmean(data, axis=0), torch_nanstd(data, axis=0) - cut_off = data_std * n_sigma - lower, upper = data_mean - cut_off, data_mean + cut_off - - data_clean = X[:].clone() - data_clean[torch.logical_or(data > upper, data < lower)] = np.nan - data_mean, data_std = torch_nanmean(data_clean, axis=0), torch_nanstd(data_clean, axis=0) - cut_off = data_std * n_sigma - lower, upper = data_mean - cut_off, data_mean + cut_off - - X = torch.maximum(-torch.log(1+torch.abs(X)) + lower, X) - X = torch.minimum(torch.log(1+torch.abs(X)) + upper, X) - # print(ds[1][data < lower, col], ds[1][data > upper, col], ds[1][~np.isnan(data), col].shape, data_mean, data_std) - return X - -def bool_mask_to_att_mask(mask): - return mask.float().masked_fill(mask == 0, float('-inf')).masked_fill(mask == 1, float(0.0)) - -def print_on_master_only(is_master): - import builtins as __builtin__ - - builtin_print = __builtin__.print - - def print(*args, **kwargs): - force = kwargs.pop("force", False) - if is_master or force: - builtin_print(*args, **kwargs) - - __builtin__.print = print - -def init_dist(device): - if 'SLURM_PROCID' in os.environ and torch.cuda.device_count() > 1: - assert device != 'cpu:0' - rank = int(os.environ['SLURM_PROCID']) - os.environ['MASTER_ADDR'] = 'localhost' - os.environ['MASTER_PORT'] = '12355' - torch.cuda.set_device(rank) - os.environ['CUDA_VISIBLE_DEVICES'] = str(rank) - torch.distributed.init_process_group(backend="nccl", init_method="env://", timeout=datetime.timedelta(seconds=20), - world_size=torch.cuda.device_count(), rank=rank) - torch.distributed.barrier() - print_on_master_only(rank == 0) - print(f"Distributed training on {torch.cuda.device_count()} GPUs, this is rank {rank}, " - "only I can print, 
but when using print(..., force=True) it will print on all ranks.") - - return True, rank, f'cuda:{rank}' - else: - print('Not using distributed') - # will not change any of the behavior of print, but allows putting the force=True in the print calls - print_on_master_only(True) - return False, 0, device - -# NOP function for python with statements (x = NOP(); with x:) -class NOP(): - def __enter__(self): - pass - def __exit__(self, type, value, traceback): - pass \ No newline at end of file diff --git a/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/distlib/database.py b/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/distlib/database.py deleted file mode 100644 index 5db5d7f507c1d150e6b36f236df7ee61c0f65581..0000000000000000000000000000000000000000 --- a/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/distlib/database.py +++ /dev/null @@ -1,1350 +0,0 @@ -# -*- coding: utf-8 -*- -# -# Copyright (C) 2012-2017 The Python Software Foundation. -# See LICENSE.txt and CONTRIBUTORS.txt. -# -"""PEP 376 implementation.""" - -from __future__ import unicode_literals - -import base64 -import codecs -import contextlib -import hashlib -import logging -import os -import posixpath -import sys -import zipimport - -from . import DistlibException, resources -from .compat import StringIO -from .version import get_scheme, UnsupportedVersionError -from .metadata import (Metadata, METADATA_FILENAME, WHEEL_METADATA_FILENAME, - LEGACY_METADATA_FILENAME) -from .util import (parse_requirement, cached_property, parse_name_and_version, - read_exports, write_exports, CSVReader, CSVWriter) - - -__all__ = ['Distribution', 'BaseInstalledDistribution', - 'InstalledDistribution', 'EggInfoDistribution', - 'DistributionPath'] - - -logger = logging.getLogger(__name__) - -EXPORTS_FILENAME = 'pydist-exports.json' -COMMANDS_FILENAME = 'pydist-commands.json' - -DIST_FILES = ('INSTALLER', METADATA_FILENAME, 'RECORD', 'REQUESTED', - 'RESOURCES', EXPORTS_FILENAME, 'SHARED') - -DISTINFO_EXT = '.dist-info' - - -class _Cache(object): - """ - A simple cache mapping names and .dist-info paths to distributions - """ - def __init__(self): - """ - Initialise an instance. There is normally one for each DistributionPath. - """ - self.name = {} - self.path = {} - self.generated = False - - def clear(self): - """ - Clear the cache, setting it to its initial state. - """ - self.name.clear() - self.path.clear() - self.generated = False - - def add(self, dist): - """ - Add a distribution to the cache. - :param dist: The distribution to add. - """ - if dist.path not in self.path: - self.path[dist.path] = dist - self.name.setdefault(dist.key, []).append(dist) - - -class DistributionPath(object): - """ - Represents a set of distributions installed on a path (typically sys.path). - """ - def __init__(self, path=None, include_egg=False): - """ - Create an instance from a path, optionally including legacy (distutils/ - setuptools/distribute) distributions. - :param path: The path to use, as a list of directories. If not specified, - sys.path is used. - :param include_egg: If True, this instance will look for and return legacy - distributions as well as those based on PEP 376. 
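-
-        A minimal usage sketch (illustrative addition, not part of the original
-        docstring; 'pip' is just an example distribution name)::
-
-            dp = DistributionPath(include_egg=True)
-            dist = dp.get_distribution('pip')
-            if dist is not None:
-                print(dist.name_and_version)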
- """ - if path is None: - path = sys.path - self.path = path - self._include_dist = True - self._include_egg = include_egg - - self._cache = _Cache() - self._cache_egg = _Cache() - self._cache_enabled = True - self._scheme = get_scheme('default') - - def _get_cache_enabled(self): - return self._cache_enabled - - def _set_cache_enabled(self, value): - self._cache_enabled = value - - cache_enabled = property(_get_cache_enabled, _set_cache_enabled) - - def clear_cache(self): - """ - Clears the internal cache. - """ - self._cache.clear() - self._cache_egg.clear() - - - def _yield_distributions(self): - """ - Yield .dist-info and/or .egg(-info) distributions. - """ - # We need to check if we've seen some resources already, because on - # some Linux systems (e.g. some Debian/Ubuntu variants) there are - # symlinks which alias other files in the environment. - seen = set() - for path in self.path: - finder = resources.finder_for_path(path) - if finder is None: - continue - r = finder.find('') - if not r or not r.is_container: - continue - rset = sorted(r.resources) - for entry in rset: - r = finder.find(entry) - if not r or r.path in seen: - continue - try: - if self._include_dist and entry.endswith(DISTINFO_EXT): - possible_filenames = [METADATA_FILENAME, - WHEEL_METADATA_FILENAME, - LEGACY_METADATA_FILENAME] - for metadata_filename in possible_filenames: - metadata_path = posixpath.join(entry, metadata_filename) - pydist = finder.find(metadata_path) - if pydist: - break - else: - continue - - with contextlib.closing(pydist.as_stream()) as stream: - metadata = Metadata(fileobj=stream, scheme='legacy') - logger.debug('Found %s', r.path) - seen.add(r.path) - yield new_dist_class(r.path, metadata=metadata, - env=self) - elif self._include_egg and entry.endswith(('.egg-info', - '.egg')): - logger.debug('Found %s', r.path) - seen.add(r.path) - yield old_dist_class(r.path, self) - except Exception as e: - msg = 'Unable to read distribution at %s, perhaps due to bad metadata: %s' - logger.warning(msg, r.path, e) - import warnings - warnings.warn(msg % (r.path, e), stacklevel=2) - - def _generate_cache(self): - """ - Scan the path for distributions and populate the cache with - those that are found. - """ - gen_dist = not self._cache.generated - gen_egg = self._include_egg and not self._cache_egg.generated - if gen_dist or gen_egg: - for dist in self._yield_distributions(): - if isinstance(dist, InstalledDistribution): - self._cache.add(dist) - else: - self._cache_egg.add(dist) - - if gen_dist: - self._cache.generated = True - if gen_egg: - self._cache_egg.generated = True - - @classmethod - def distinfo_dirname(cls, name, version): - """ - The *name* and *version* parameters are converted into their - filename-escaped form, i.e. any ``'-'`` characters are replaced - with ``'_'`` other than the one in ``'dist-info'`` and the one - separating the name from the version number. - - :parameter name: is converted to a standard distribution name by replacing - any runs of non- alphanumeric characters with a single - ``'-'``. - :type name: string - :parameter version: is converted to a standard version string. Spaces - become dots, and all other non-alphanumeric characters - (except dots) become dashes, with runs of multiple - dashes condensed to a single dash. 
- :type version: string - :returns: directory name - :rtype: string""" - name = name.replace('-', '_') - return '-'.join([name, version]) + DISTINFO_EXT - - def get_distributions(self): - """ - Provides an iterator that looks for distributions and returns - :class:`InstalledDistribution` or - :class:`EggInfoDistribution` instances for each one of them. - - :rtype: iterator of :class:`InstalledDistribution` and - :class:`EggInfoDistribution` instances - """ - if not self._cache_enabled: - for dist in self._yield_distributions(): - yield dist - else: - self._generate_cache() - - for dist in self._cache.path.values(): - yield dist - - if self._include_egg: - for dist in self._cache_egg.path.values(): - yield dist - - def get_distribution(self, name): - """ - Looks for a named distribution on the path. - - This function only returns the first result found, as no more than one - value is expected. If nothing is found, ``None`` is returned. - - :rtype: :class:`InstalledDistribution`, :class:`EggInfoDistribution` - or ``None`` - """ - result = None - name = name.lower() - if not self._cache_enabled: - for dist in self._yield_distributions(): - if dist.key == name: - result = dist - break - else: - self._generate_cache() - - if name in self._cache.name: - result = self._cache.name[name][0] - elif self._include_egg and name in self._cache_egg.name: - result = self._cache_egg.name[name][0] - return result - - def provides_distribution(self, name, version=None): - """ - Iterates over all distributions to find which distributions provide *name*. - If a *version* is provided, it will be used to filter the results. - - This function only returns the first result found, since no more than - one values are expected. If the directory is not found, returns ``None``. - - :parameter version: a version specifier that indicates the version - required, conforming to the format in ``PEP-345`` - - :type name: string - :type version: string - """ - matcher = None - if version is not None: - try: - matcher = self._scheme.matcher('%s (%s)' % (name, version)) - except ValueError: - raise DistlibException('invalid name or version: %r, %r' % - (name, version)) - - for dist in self.get_distributions(): - # We hit a problem on Travis where enum34 was installed and doesn't - # have a provides attribute ... - if not hasattr(dist, 'provides'): - logger.debug('No "provides": %s', dist) - else: - provided = dist.provides - - for p in provided: - p_name, p_ver = parse_name_and_version(p) - if matcher is None: - if p_name == name: - yield dist - break - else: - if p_name == name and matcher.match(p_ver): - yield dist - break - - def get_file_path(self, name, relative_path): - """ - Return the path to a resource file. - """ - dist = self.get_distribution(name) - if dist is None: - raise LookupError('no distribution named %r found' % name) - return dist.get_resource_path(relative_path) - - def get_exported_entries(self, category, name=None): - """ - Return all of the exported entries in a particular category. - - :param category: The category to search for entries. - :param name: If specified, only entries with that name are returned. - """ - for dist in self.get_distributions(): - r = dist.exports - if category in r: - d = r[category] - if name is not None: - if name in d: - yield d[name] - else: - for v in d.values(): - yield v - - -class Distribution(object): - """ - A base class for distributions, whether installed or from indexes. - Either way, it must have some metadata, so that's all that's needed - for construction. 
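-
-    Illustrative sketch (not part of the original docstring; ``dist`` is assumed
-    to be a concrete distribution obtained from a locator or a DistributionPath,
-    and the requirement string is just an example)::
-
-        if dist.matches_requirement('requests (>=2.0)'):
-            print(dist.name_and_version, dist.provides)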
- """ - - build_time_dependency = False - """ - Set to True if it's known to be only a build-time dependency (i.e. - not needed after installation). - """ - - requested = False - """A boolean that indicates whether the ``REQUESTED`` metadata file is - present (in other words, whether the package was installed by user - request or it was installed as a dependency).""" - - def __init__(self, metadata): - """ - Initialise an instance. - :param metadata: The instance of :class:`Metadata` describing this - distribution. - """ - self.metadata = metadata - self.name = metadata.name - self.key = self.name.lower() # for case-insensitive comparisons - self.version = metadata.version - self.locator = None - self.digest = None - self.extras = None # additional features requested - self.context = None # environment marker overrides - self.download_urls = set() - self.digests = {} - - @property - def source_url(self): - """ - The source archive download URL for this distribution. - """ - return self.metadata.source_url - - download_url = source_url # Backward compatibility - - @property - def name_and_version(self): - """ - A utility property which displays the name and version in parentheses. - """ - return '%s (%s)' % (self.name, self.version) - - @property - def provides(self): - """ - A set of distribution names and versions provided by this distribution. - :return: A set of "name (version)" strings. - """ - plist = self.metadata.provides - s = '%s (%s)' % (self.name, self.version) - if s not in plist: - plist.append(s) - return plist - - def _get_requirements(self, req_attr): - md = self.metadata - reqts = getattr(md, req_attr) - logger.debug('%s: got requirements %r from metadata: %r', self.name, req_attr, - reqts) - return set(md.get_requirements(reqts, extras=self.extras, - env=self.context)) - - @property - def run_requires(self): - return self._get_requirements('run_requires') - - @property - def meta_requires(self): - return self._get_requirements('meta_requires') - - @property - def build_requires(self): - return self._get_requirements('build_requires') - - @property - def test_requires(self): - return self._get_requirements('test_requires') - - @property - def dev_requires(self): - return self._get_requirements('dev_requires') - - def matches_requirement(self, req): - """ - Say if this instance matches (fulfills) a requirement. - :param req: The requirement to match. - :rtype req: str - :return: True if it matches, else False. - """ - # Requirement may contain extras - parse to lose those - # from what's passed to the matcher - r = parse_requirement(req) - scheme = get_scheme(self.metadata.scheme) - try: - matcher = scheme.matcher(r.requirement) - except UnsupportedVersionError: - # XXX compat-mode if cannot read the version - logger.warning('could not read version %r - using name only', - req) - name = req.split()[0] - matcher = scheme.matcher(name) - - name = matcher.key # case-insensitive - - result = False - for p in self.provides: - p_name, p_ver = parse_name_and_version(p) - if p_name != name: - continue - try: - result = matcher.match(p_ver) - break - except UnsupportedVersionError: - pass - return result - - def __repr__(self): - """ - Return a textual representation of this instance, - """ - if self.source_url: - suffix = ' [%s]' % self.source_url - else: - suffix = '' - return '' % (self.name, self.version, suffix) - - def __eq__(self, other): - """ - See if this distribution is the same as another. - :param other: The distribution to compare with. To be equal to one - another. 
distributions must have the same type, name, - version and source_url. - :return: True if it is the same, else False. - """ - if type(other) is not type(self): - result = False - else: - result = (self.name == other.name and - self.version == other.version and - self.source_url == other.source_url) - return result - - def __hash__(self): - """ - Compute hash in a way which matches the equality test. - """ - return hash(self.name) + hash(self.version) + hash(self.source_url) - - -class BaseInstalledDistribution(Distribution): - """ - This is the base class for installed distributions (whether PEP 376 or - legacy). - """ - - hasher = None - - def __init__(self, metadata, path, env=None): - """ - Initialise an instance. - :param metadata: An instance of :class:`Metadata` which describes the - distribution. This will normally have been initialised - from a metadata file in the ``path``. - :param path: The path of the ``.dist-info`` or ``.egg-info`` - directory for the distribution. - :param env: This is normally the :class:`DistributionPath` - instance where this distribution was found. - """ - super(BaseInstalledDistribution, self).__init__(metadata) - self.path = path - self.dist_path = env - - def get_hash(self, data, hasher=None): - """ - Get the hash of some data, using a particular hash algorithm, if - specified. - - :param data: The data to be hashed. - :type data: bytes - :param hasher: The name of a hash implementation, supported by hashlib, - or ``None``. Examples of valid values are ``'sha1'``, - ``'sha224'``, ``'sha384'``, '``sha256'``, ``'md5'`` and - ``'sha512'``. If no hasher is specified, the ``hasher`` - attribute of the :class:`InstalledDistribution` instance - is used. If the hasher is determined to be ``None``, MD5 - is used as the hashing algorithm. - :returns: The hash of the data. If a hasher was explicitly specified, - the returned hash will be prefixed with the specified hasher - followed by '='. - :rtype: str - """ - if hasher is None: - hasher = self.hasher - if hasher is None: - hasher = hashlib.md5 - prefix = '' - else: - hasher = getattr(hashlib, hasher) - prefix = '%s=' % self.hasher - digest = hasher(data).digest() - digest = base64.urlsafe_b64encode(digest).rstrip(b'=').decode('ascii') - return '%s%s' % (prefix, digest) - - -class InstalledDistribution(BaseInstalledDistribution): - """ - Created with the *path* of the ``.dist-info`` directory provided to the - constructor. It reads the metadata contained in ``pydist.json`` when it is - instantiated., or uses a passed in Metadata instance (useful for when - dry-run mode is being used). 
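-
-    Illustrative sketch (not part of the original docstring; the .dist-info path
-    below is hypothetical)::
-
-        dist = InstalledDistribution('/usr/lib/python3/dist-packages/foo-1.0.dist-info')
-        for path, hash_value, size in dist.list_installed_files():
-            print(path, size)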
- """ - - hasher = 'sha256' - - def __init__(self, path, metadata=None, env=None): - self.modules = [] - self.finder = finder = resources.finder_for_path(path) - if finder is None: - raise ValueError('finder unavailable for %s' % path) - if env and env._cache_enabled and path in env._cache.path: - metadata = env._cache.path[path].metadata - elif metadata is None: - r = finder.find(METADATA_FILENAME) - # Temporary - for Wheel 0.23 support - if r is None: - r = finder.find(WHEEL_METADATA_FILENAME) - # Temporary - for legacy support - if r is None: - r = finder.find(LEGACY_METADATA_FILENAME) - if r is None: - raise ValueError('no %s found in %s' % (METADATA_FILENAME, - path)) - with contextlib.closing(r.as_stream()) as stream: - metadata = Metadata(fileobj=stream, scheme='legacy') - - super(InstalledDistribution, self).__init__(metadata, path, env) - - if env and env._cache_enabled: - env._cache.add(self) - - r = finder.find('REQUESTED') - self.requested = r is not None - p = os.path.join(path, 'top_level.txt') - if os.path.exists(p): - with open(p, 'rb') as f: - data = f.read().decode('utf-8') - self.modules = data.splitlines() - - def __repr__(self): - return '' % ( - self.name, self.version, self.path) - - def __str__(self): - return "%s %s" % (self.name, self.version) - - def _get_records(self): - """ - Get the list of installed files for the distribution - :return: A list of tuples of path, hash and size. Note that hash and - size might be ``None`` for some entries. The path is exactly - as stored in the file (which is as in PEP 376). - """ - results = [] - r = self.get_distinfo_resource('RECORD') - with contextlib.closing(r.as_stream()) as stream: - with CSVReader(stream=stream) as record_reader: - # Base location is parent dir of .dist-info dir - #base_location = os.path.dirname(self.path) - #base_location = os.path.abspath(base_location) - for row in record_reader: - missing = [None for i in range(len(row), 3)] - path, checksum, size = row + missing - #if not os.path.isabs(path): - # path = path.replace('/', os.sep) - # path = os.path.join(base_location, path) - results.append((path, checksum, size)) - return results - - @cached_property - def exports(self): - """ - Return the information exported by this distribution. - :return: A dictionary of exports, mapping an export category to a dict - of :class:`ExportEntry` instances describing the individual - export entries, and keyed by name. - """ - result = {} - r = self.get_distinfo_resource(EXPORTS_FILENAME) - if r: - result = self.read_exports() - return result - - def read_exports(self): - """ - Read exports data from a file in .ini format. - - :return: A dictionary of exports, mapping an export category to a list - of :class:`ExportEntry` instances describing the individual - export entries. - """ - result = {} - r = self.get_distinfo_resource(EXPORTS_FILENAME) - if r: - with contextlib.closing(r.as_stream()) as stream: - result = read_exports(stream) - return result - - def write_exports(self, exports): - """ - Write a dictionary of exports to a file in .ini format. - :param exports: A dictionary of exports, mapping an export category to - a list of :class:`ExportEntry` instances describing the - individual export entries. - """ - rf = self.get_distinfo_file(EXPORTS_FILENAME) - with open(rf, 'w') as f: - write_exports(exports, f) - - def get_resource_path(self, relative_path): - """ - NOTE: This API may change in the future. - - Return the absolute path to a resource file with the given relative - path. 
- - :param relative_path: The path, relative to .dist-info, of the resource - of interest. - :return: The absolute path where the resource is to be found. - """ - r = self.get_distinfo_resource('RESOURCES') - with contextlib.closing(r.as_stream()) as stream: - with CSVReader(stream=stream) as resources_reader: - for relative, destination in resources_reader: - if relative == relative_path: - return destination - raise KeyError('no resource file with relative path %r ' - 'is installed' % relative_path) - - def list_installed_files(self): - """ - Iterates over the ``RECORD`` entries and returns a tuple - ``(path, hash, size)`` for each line. - - :returns: iterator of (path, hash, size) - """ - for result in self._get_records(): - yield result - - def write_installed_files(self, paths, prefix, dry_run=False): - """ - Writes the ``RECORD`` file, using the ``paths`` iterable passed in. Any - existing ``RECORD`` file is silently overwritten. - - prefix is used to determine when to write absolute paths. - """ - prefix = os.path.join(prefix, '') - base = os.path.dirname(self.path) - base_under_prefix = base.startswith(prefix) - base = os.path.join(base, '') - record_path = self.get_distinfo_file('RECORD') - logger.info('creating %s', record_path) - if dry_run: - return None - with CSVWriter(record_path) as writer: - for path in paths: - if os.path.isdir(path) or path.endswith(('.pyc', '.pyo')): - # do not put size and hash, as in PEP-376 - hash_value = size = '' - else: - size = '%d' % os.path.getsize(path) - with open(path, 'rb') as fp: - hash_value = self.get_hash(fp.read()) - if path.startswith(base) or (base_under_prefix and - path.startswith(prefix)): - path = os.path.relpath(path, base) - writer.writerow((path, hash_value, size)) - - # add the RECORD file itself - if record_path.startswith(base): - record_path = os.path.relpath(record_path, base) - writer.writerow((record_path, '', '')) - return record_path - - def check_installed_files(self): - """ - Checks that the hashes and sizes of the files in ``RECORD`` are - matched by the files themselves. Returns a (possibly empty) list of - mismatches. Each entry in the mismatch list will be a tuple consisting - of the path, 'exists', 'size' or 'hash' according to what didn't match - (existence is checked first, then size, then hash), the expected - value and the actual value. - """ - mismatches = [] - base = os.path.dirname(self.path) - record_path = self.get_distinfo_file('RECORD') - for path, hash_value, size in self.list_installed_files(): - if not os.path.isabs(path): - path = os.path.join(base, path) - if path == record_path: - continue - if not os.path.exists(path): - mismatches.append((path, 'exists', True, False)) - elif os.path.isfile(path): - actual_size = str(os.path.getsize(path)) - if size and actual_size != size: - mismatches.append((path, 'size', size, actual_size)) - elif hash_value: - if '=' in hash_value: - hasher = hash_value.split('=', 1)[0] - else: - hasher = None - - with open(path, 'rb') as f: - actual_hash = self.get_hash(f.read(), hasher) - if actual_hash != hash_value: - mismatches.append((path, 'hash', hash_value, actual_hash)) - return mismatches - - @cached_property - def shared_locations(self): - """ - A dictionary of shared locations whose keys are in the set 'prefix', - 'purelib', 'platlib', 'scripts', 'headers', 'data' and 'namespace'. - The corresponding value is the absolute path of that category for - this distribution, and takes into account any paths selected by the - user at installation time (e.g. 
via command-line arguments). In the - case of the 'namespace' key, this would be a list of absolute paths - for the roots of namespace packages in this distribution. - - The first time this property is accessed, the relevant information is - read from the SHARED file in the .dist-info directory. - """ - result = {} - shared_path = os.path.join(self.path, 'SHARED') - if os.path.isfile(shared_path): - with codecs.open(shared_path, 'r', encoding='utf-8') as f: - lines = f.read().splitlines() - for line in lines: - key, value = line.split('=', 1) - if key == 'namespace': - result.setdefault(key, []).append(value) - else: - result[key] = value - return result - - def write_shared_locations(self, paths, dry_run=False): - """ - Write shared location information to the SHARED file in .dist-info. - :param paths: A dictionary as described in the documentation for - :meth:`shared_locations`. - :param dry_run: If True, the action is logged but no file is actually - written. - :return: The path of the file written to. - """ - shared_path = os.path.join(self.path, 'SHARED') - logger.info('creating %s', shared_path) - if dry_run: - return None - lines = [] - for key in ('prefix', 'lib', 'headers', 'scripts', 'data'): - path = paths[key] - if os.path.isdir(paths[key]): - lines.append('%s=%s' % (key, path)) - for ns in paths.get('namespace', ()): - lines.append('namespace=%s' % ns) - - with codecs.open(shared_path, 'w', encoding='utf-8') as f: - f.write('\n'.join(lines)) - return shared_path - - def get_distinfo_resource(self, path): - if path not in DIST_FILES: - raise DistlibException('invalid path for a dist-info file: ' - '%r at %r' % (path, self.path)) - finder = resources.finder_for_path(self.path) - if finder is None: - raise DistlibException('Unable to get a finder for %s' % self.path) - return finder.find(path) - - def get_distinfo_file(self, path): - """ - Returns a path located under the ``.dist-info`` directory. Returns a - string representing the path. - - :parameter path: a ``'/'``-separated path relative to the - ``.dist-info`` directory or an absolute path; - If *path* is an absolute path and doesn't start - with the ``.dist-info`` directory path, - a :class:`DistlibException` is raised - :type path: str - :rtype: str - """ - # Check if it is an absolute path # XXX use relpath, add tests - if path.find(os.sep) >= 0: - # it's an absolute path? - distinfo_dirname, path = path.split(os.sep)[-2:] - if distinfo_dirname != self.path.split(os.sep)[-1]: - raise DistlibException( - 'dist-info file %r does not belong to the %r %s ' - 'distribution' % (path, self.name, self.version)) - - # The file must be relative - if path not in DIST_FILES: - raise DistlibException('invalid path for a dist-info file: ' - '%r at %r' % (path, self.path)) - - return os.path.join(self.path, path) - - def list_distinfo_files(self): - """ - Iterates over the ``RECORD`` entries and returns paths for each line if - the path is pointing to a file located in the ``.dist-info`` directory - or one of its subdirectories. 
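The SHARED file read above is a simple key=value format in which a repeated 'namespace' key accumulates into a list. A small standalone sketch of that parsing, with hypothetical input lines:

    # Sketch only: same key=value parsing as the shared_locations property above.
    # The sample lines are hypothetical.
    def parse_shared(lines):
        result = {}
        for line in lines:
            key, value = line.split('=', 1)
            if key == 'namespace':
                result.setdefault(key, []).append(value)
            else:
                result[key] = value
        return result

    print(parse_shared([
        'prefix=/usr/local',
        'scripts=/usr/local/bin',
        'namespace=/usr/local/lib/site-packages/nspkg',
    ]))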
- - :returns: iterator of paths - """ - base = os.path.dirname(self.path) - for path, checksum, size in self._get_records(): - # XXX add separator or use real relpath algo - if not os.path.isabs(path): - path = os.path.join(base, path) - if path.startswith(self.path): - yield path - - def __eq__(self, other): - return (isinstance(other, InstalledDistribution) and - self.path == other.path) - - # See http://docs.python.org/reference/datamodel#object.__hash__ - __hash__ = object.__hash__ - - -class EggInfoDistribution(BaseInstalledDistribution): - """Created with the *path* of the ``.egg-info`` directory or file provided - to the constructor. It reads the metadata contained in the file itself, or - if the given path happens to be a directory, the metadata is read from the - file ``PKG-INFO`` under that directory.""" - - requested = True # as we have no way of knowing, assume it was - shared_locations = {} - - def __init__(self, path, env=None): - def set_name_and_version(s, n, v): - s.name = n - s.key = n.lower() # for case-insensitive comparisons - s.version = v - - self.path = path - self.dist_path = env - if env and env._cache_enabled and path in env._cache_egg.path: - metadata = env._cache_egg.path[path].metadata - set_name_and_version(self, metadata.name, metadata.version) - else: - metadata = self._get_metadata(path) - - # Need to be set before caching - set_name_and_version(self, metadata.name, metadata.version) - - if env and env._cache_enabled: - env._cache_egg.add(self) - super(EggInfoDistribution, self).__init__(metadata, path, env) - - def _get_metadata(self, path): - requires = None - - def parse_requires_data(data): - """Create a list of dependencies from a requires.txt file. - - *data*: the contents of a setuptools-produced requires.txt file. - """ - reqs = [] - lines = data.splitlines() - for line in lines: - line = line.strip() - if line.startswith('['): - logger.warning('Unexpected line: quitting requirement scan: %r', - line) - break - r = parse_requirement(line) - if not r: - logger.warning('Not recognised as a requirement: %r', line) - continue - if r.extras: - logger.warning('extra requirements in requires.txt are ' - 'not supported') - if not r.constraints: - reqs.append(r.name) - else: - cons = ', '.join('%s%s' % c for c in r.constraints) - reqs.append('%s (%s)' % (r.name, cons)) - return reqs - - def parse_requires_path(req_path): - """Create a list of dependencies from a requires.txt file. - - *req_path*: the path to a setuptools-produced requires.txt file. 
- """ - - reqs = [] - try: - with codecs.open(req_path, 'r', 'utf-8') as fp: - reqs = parse_requires_data(fp.read()) - except IOError: - pass - return reqs - - tl_path = tl_data = None - if path.endswith('.egg'): - if os.path.isdir(path): - p = os.path.join(path, 'EGG-INFO') - meta_path = os.path.join(p, 'PKG-INFO') - metadata = Metadata(path=meta_path, scheme='legacy') - req_path = os.path.join(p, 'requires.txt') - tl_path = os.path.join(p, 'top_level.txt') - requires = parse_requires_path(req_path) - else: - # FIXME handle the case where zipfile is not available - zipf = zipimport.zipimporter(path) - fileobj = StringIO( - zipf.get_data('EGG-INFO/PKG-INFO').decode('utf8')) - metadata = Metadata(fileobj=fileobj, scheme='legacy') - try: - data = zipf.get_data('EGG-INFO/requires.txt') - tl_data = zipf.get_data('EGG-INFO/top_level.txt').decode('utf-8') - requires = parse_requires_data(data.decode('utf-8')) - except IOError: - requires = None - elif path.endswith('.egg-info'): - if os.path.isdir(path): - req_path = os.path.join(path, 'requires.txt') - requires = parse_requires_path(req_path) - path = os.path.join(path, 'PKG-INFO') - tl_path = os.path.join(path, 'top_level.txt') - metadata = Metadata(path=path, scheme='legacy') - else: - raise DistlibException('path must end with .egg-info or .egg, ' - 'got %r' % path) - - if requires: - metadata.add_requirements(requires) - # look for top-level modules in top_level.txt, if present - if tl_data is None: - if tl_path is not None and os.path.exists(tl_path): - with open(tl_path, 'rb') as f: - tl_data = f.read().decode('utf-8') - if not tl_data: - tl_data = [] - else: - tl_data = tl_data.splitlines() - self.modules = tl_data - return metadata - - def __repr__(self): - return '' % ( - self.name, self.version, self.path) - - def __str__(self): - return "%s %s" % (self.name, self.version) - - def check_installed_files(self): - """ - Checks that the hashes and sizes of the files in ``RECORD`` are - matched by the files themselves. Returns a (possibly empty) list of - mismatches. Each entry in the mismatch list will be a tuple consisting - of the path, 'exists', 'size' or 'hash' according to what didn't match - (existence is checked first, then size, then hash), the expected - value and the actual value. - """ - mismatches = [] - record_path = os.path.join(self.path, 'installed-files.txt') - if os.path.exists(record_path): - for path, _, _ in self.list_installed_files(): - if path == record_path: - continue - if not os.path.exists(path): - mismatches.append((path, 'exists', True, False)) - return mismatches - - def list_installed_files(self): - """ - Iterates over the ``installed-files.txt`` entries and returns a tuple - ``(path, hash, size)`` for each line. 
- - :returns: a list of (path, hash, size) - """ - - def _md5(path): - f = open(path, 'rb') - try: - content = f.read() - finally: - f.close() - return hashlib.md5(content).hexdigest() - - def _size(path): - return os.stat(path).st_size - - record_path = os.path.join(self.path, 'installed-files.txt') - result = [] - if os.path.exists(record_path): - with codecs.open(record_path, 'r', encoding='utf-8') as f: - for line in f: - line = line.strip() - p = os.path.normpath(os.path.join(self.path, line)) - # "./" is present as a marker between installed files - # and installation metadata files - if not os.path.exists(p): - logger.warning('Non-existent file: %s', p) - if p.endswith(('.pyc', '.pyo')): - continue - #otherwise fall through and fail - if not os.path.isdir(p): - result.append((p, _md5(p), _size(p))) - result.append((record_path, None, None)) - return result - - def list_distinfo_files(self, absolute=False): - """ - Iterates over the ``installed-files.txt`` entries and returns paths for - each line if the path is pointing to a file located in the - ``.egg-info`` directory or one of its subdirectories. - - :parameter absolute: If *absolute* is ``True``, each returned path is - transformed into a local absolute path. Otherwise the - raw value from ``installed-files.txt`` is returned. - :type absolute: boolean - :returns: iterator of paths - """ - record_path = os.path.join(self.path, 'installed-files.txt') - if os.path.exists(record_path): - skip = True - with codecs.open(record_path, 'r', encoding='utf-8') as f: - for line in f: - line = line.strip() - if line == './': - skip = False - continue - if not skip: - p = os.path.normpath(os.path.join(self.path, line)) - if p.startswith(self.path): - if absolute: - yield p - else: - yield line - - def __eq__(self, other): - return (isinstance(other, EggInfoDistribution) and - self.path == other.path) - - # See http://docs.python.org/reference/datamodel#object.__hash__ - __hash__ = object.__hash__ - -new_dist_class = InstalledDistribution -old_dist_class = EggInfoDistribution - - -class DependencyGraph(object): - """ - Represents a dependency graph between distributions. - - The dependency relationships are stored in an ``adjacency_list`` that maps - distributions to a list of ``(other, label)`` tuples where ``other`` - is a distribution and the edge is labeled with ``label`` (i.e. the version - specifier, if such was provided). Also, for more efficient traversal, for - every distribution ``x``, a list of predecessors is kept in - ``reverse_list[x]``. An edge from distribution ``a`` to - distribution ``b`` means that ``a`` depends on ``b``. If any missing - dependencies are found, they are stored in ``missing``, which is a - dictionary that maps distributions to a list of requirements that were not - provided by any other distributions. - """ - - def __init__(self): - self.adjacency_list = {} - self.reverse_list = {} - self.missing = {} - - def add_distribution(self, distribution): - """Add the *distribution* to the graph. - - :type distribution: :class:`distutils2.database.InstalledDistribution` - or :class:`distutils2.database.EggInfoDistribution` - """ - self.adjacency_list[distribution] = [] - self.reverse_list[distribution] = [] - #self.missing[distribution] = [] - - def add_edge(self, x, y, label=None): - """Add an edge from distribution *x* to distribution *y* with the given - *label*. 
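DependencyGraph keeps forward edges in adjacency_list and predecessors in reverse_list (add_distribution above, add_edge just below), so both "what does X require" and "who requires X" are cheap lookups. A minimal standalone sketch of that bookkeeping with plain string nodes instead of distribution objects (all names hypothetical):

    # Sketch only: mirrors DependencyGraph's adjacency/reverse bookkeeping.
    adjacency = {}   # node -> list of (other, label) edges
    reverse = {}     # node -> list of predecessor nodes

    def add_node(node):
        adjacency[node] = []
        reverse[node] = []

    def add_edge(x, y, label=None):
        adjacency[x].append((y, label))
        if x not in reverse[y]:   # multiple edges allowed; record predecessor once
            reverse[y].append(x)

    for name in ('app', 'requests', 'urllib3'):
        add_node(name)
    add_edge('app', 'requests', '>=2.0')
    add_edge('requests', 'urllib3')
    print(reverse['urllib3'])   # ['requests'] -- who depends on urllib3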
- - :type x: :class:`distutils2.database.InstalledDistribution` or - :class:`distutils2.database.EggInfoDistribution` - :type y: :class:`distutils2.database.InstalledDistribution` or - :class:`distutils2.database.EggInfoDistribution` - :type label: ``str`` or ``None`` - """ - self.adjacency_list[x].append((y, label)) - # multiple edges are allowed, so be careful - if x not in self.reverse_list[y]: - self.reverse_list[y].append(x) - - def add_missing(self, distribution, requirement): - """ - Add a missing *requirement* for the given *distribution*. - - :type distribution: :class:`distutils2.database.InstalledDistribution` - or :class:`distutils2.database.EggInfoDistribution` - :type requirement: ``str`` - """ - logger.debug('%s missing %r', distribution, requirement) - self.missing.setdefault(distribution, []).append(requirement) - - def _repr_dist(self, dist): - return '%s %s' % (dist.name, dist.version) - - def repr_node(self, dist, level=1): - """Prints only a subgraph""" - output = [self._repr_dist(dist)] - for other, label in self.adjacency_list[dist]: - dist = self._repr_dist(other) - if label is not None: - dist = '%s [%s]' % (dist, label) - output.append(' ' * level + str(dist)) - suboutput = self.repr_node(other, level + 1) - subs = suboutput.split('\n') - output.extend(subs[1:]) - return '\n'.join(output) - - def to_dot(self, f, skip_disconnected=True): - """Writes a DOT output for the graph to the provided file *f*. - - If *skip_disconnected* is set to ``True``, then all distributions - that are not dependent on any other distribution are skipped. - - :type f: has to support ``file``-like operations - :type skip_disconnected: ``bool`` - """ - disconnected = [] - - f.write("digraph dependencies {\n") - for dist, adjs in self.adjacency_list.items(): - if len(adjs) == 0 and not skip_disconnected: - disconnected.append(dist) - for other, label in adjs: - if not label is None: - f.write('"%s" -> "%s" [label="%s"]\n' % - (dist.name, other.name, label)) - else: - f.write('"%s" -> "%s"\n' % (dist.name, other.name)) - if not skip_disconnected and len(disconnected) > 0: - f.write('subgraph disconnected {\n') - f.write('label = "Disconnected"\n') - f.write('bgcolor = red\n') - - for dist in disconnected: - f.write('"%s"' % dist.name) - f.write('\n') - f.write('}\n') - f.write('}\n') - - def topological_sort(self): - """ - Perform a topological sort of the graph. - :return: A tuple, the first element of which is a topologically sorted - list of distributions, and the second element of which is a - list of distributions that cannot be sorted because they have - circular dependencies and so form a cycle. - """ - result = [] - # Make a shallow copy of the adjacency list - alist = {} - for k, v in self.adjacency_list.items(): - alist[k] = v[:] - while True: - # See what we can remove in this run - to_remove = [] - for k, v in list(alist.items())[:]: - if not v: - to_remove.append(k) - del alist[k] - if not to_remove: - # What's left in alist (if anything) is a cycle. 
- break - # Remove from the adjacency list of others - for k, v in alist.items(): - alist[k] = [(d, r) for d, r in v if d not in to_remove] - logger.debug('Moving to result: %s', - ['%s (%s)' % (d.name, d.version) for d in to_remove]) - result.extend(to_remove) - return result, list(alist.keys()) - - def __repr__(self): - """Representation of the graph""" - output = [] - for dist, adjs in self.adjacency_list.items(): - output.append(self.repr_node(dist)) - return '\n'.join(output) - - -def make_graph(dists, scheme='default'): - """Makes a dependency graph from the given distributions. - - :parameter dists: a list of distributions - :type dists: list of :class:`distutils2.database.InstalledDistribution` and - :class:`distutils2.database.EggInfoDistribution` instances - :rtype: a :class:`DependencyGraph` instance - """ - scheme = get_scheme(scheme) - graph = DependencyGraph() - provided = {} # maps names to lists of (version, dist) tuples - - # first, build the graph and find out what's provided - for dist in dists: - graph.add_distribution(dist) - - for p in dist.provides: - name, version = parse_name_and_version(p) - logger.debug('Add to provided: %s, %s, %s', name, version, dist) - provided.setdefault(name, []).append((version, dist)) - - # now make the edges - for dist in dists: - requires = (dist.run_requires | dist.meta_requires | - dist.build_requires | dist.dev_requires) - for req in requires: - try: - matcher = scheme.matcher(req) - except UnsupportedVersionError: - # XXX compat-mode if cannot read the version - logger.warning('could not read version %r - using name only', - req) - name = req.split()[0] - matcher = scheme.matcher(name) - - name = matcher.key # case-insensitive - - matched = False - if name in provided: - for version, provider in provided[name]: - try: - match = matcher.match(version) - except UnsupportedVersionError: - match = False - - if match: - graph.add_edge(dist, provider, req) - matched = True - break - if not matched: - graph.add_missing(dist, req) - return graph - - -def get_dependent_dists(dists, dist): - """Recursively generate a list of distributions from *dists* that are - dependent on *dist*. - - :param dists: a list of distributions - :param dist: a distribution, member of *dists* for which we are interested - """ - if dist not in dists: - raise DistlibException('given distribution %r is not a member ' - 'of the list' % dist.name) - graph = make_graph(dists) - - dep = [dist] # dependent distributions - todo = graph.reverse_list[dist] # list of nodes we should inspect - - while todo: - d = todo.pop() - dep.append(d) - for succ in graph.reverse_list[d]: - if succ not in dep: - todo.append(succ) - - dep.pop(0) # remove dist from dep, was there to prevent infinite loops - return dep - - -def get_required_dists(dists, dist): - """Recursively generate a list of distributions from *dists* that are - required by *dist*. - - :param dists: a list of distributions - :param dist: a distribution, member of *dists* for which we are interested - in finding the dependencies. 
- """ - if dist not in dists: - raise DistlibException('given distribution %r is not a member ' - 'of the list' % dist.name) - graph = make_graph(dists) - - req = set() # required distributions - todo = graph.adjacency_list[dist] # list of nodes we should inspect - seen = set(t[0] for t in todo) # already added to todo - - while todo: - d = todo.pop()[0] - req.add(d) - pred_list = graph.adjacency_list[d] - for pred in pred_list: - d = pred[0] - if d not in req and d not in seen: - seen.add(d) - todo.append(pred) - return req - - -def make_dist(name, version, **kwargs): - """ - A convenience method for making a dist given just a name and version. - """ - summary = kwargs.pop('summary', 'Placeholder for summary') - md = Metadata(**kwargs) - md.name = name - md.version = version - md.summary = summary or 'Placeholder for summary' - return Distribution(md) diff --git a/spaces/TencentARC/VLog/models/grit_src/third_party/CenterNet2/tools/benchmark.py b/spaces/TencentARC/VLog/models/grit_src/third_party/CenterNet2/tools/benchmark.py deleted file mode 100644 index aaac56400148f7b140b7c1356bbbc3b4293e5ce3..0000000000000000000000000000000000000000 --- a/spaces/TencentARC/VLog/models/grit_src/third_party/CenterNet2/tools/benchmark.py +++ /dev/null @@ -1,197 +0,0 @@ -#!/usr/bin/env python -# Copyright (c) Facebook, Inc. and its affiliates. -""" -A script to benchmark builtin models. - -Note: this script has an extra dependency of psutil. -""" - -import itertools -import logging -import psutil -import torch -import tqdm -from fvcore.common.timer import Timer -from torch.nn.parallel import DistributedDataParallel - -from detectron2.checkpoint import DetectionCheckpointer -from detectron2.config import LazyConfig, get_cfg, instantiate -from detectron2.data import ( - DatasetFromList, - build_detection_test_loader, - build_detection_train_loader, -) -from detectron2.data.benchmark import DataLoaderBenchmark -from detectron2.engine import AMPTrainer, SimpleTrainer, default_argument_parser, hooks, launch -from detectron2.modeling import build_model -from detectron2.solver import build_optimizer -from detectron2.utils import comm -from detectron2.utils.collect_env import collect_env_info -from detectron2.utils.events import CommonMetricPrinter -from detectron2.utils.logger import setup_logger - -logger = logging.getLogger("detectron2") - - -def setup(args): - if args.config_file.endswith(".yaml"): - cfg = get_cfg() - cfg.merge_from_file(args.config_file) - cfg.SOLVER.BASE_LR = 0.001 # Avoid NaNs. Not useful in this script anyway. 
- cfg.merge_from_list(args.opts) - cfg.freeze() - else: - cfg = LazyConfig.load(args.config_file) - cfg = LazyConfig.apply_overrides(cfg, args.opts) - setup_logger(distributed_rank=comm.get_rank()) - return cfg - - -def create_data_benchmark(cfg, args): - if args.config_file.endswith(".py"): - dl_cfg = cfg.dataloader.train - dl_cfg._target_ = DataLoaderBenchmark - return instantiate(dl_cfg) - else: - kwargs = build_detection_train_loader.from_config(cfg) - kwargs.pop("aspect_ratio_grouping", None) - kwargs["_target_"] = DataLoaderBenchmark - return instantiate(kwargs) - - -def RAM_msg(): - vram = psutil.virtual_memory() - return "RAM Usage: {:.2f}/{:.2f} GB".format( - (vram.total - vram.available) / 1024 ** 3, vram.total / 1024 ** 3 - ) - - -def benchmark_data(args): - cfg = setup(args) - logger.info("After spawning " + RAM_msg()) - - benchmark = create_data_benchmark(cfg, args) - benchmark.benchmark_distributed(250, 10) - # test for a few more rounds - for k in range(10): - logger.info(f"Iteration {k} " + RAM_msg()) - benchmark.benchmark_distributed(250, 1) - - -def benchmark_data_advanced(args): - # benchmark dataloader with more details to help analyze performance bottleneck - cfg = setup(args) - benchmark = create_data_benchmark(cfg, args) - - if comm.get_rank() == 0: - benchmark.benchmark_dataset(100) - benchmark.benchmark_mapper(100) - benchmark.benchmark_workers(100, warmup=10) - benchmark.benchmark_IPC(100, warmup=10) - if comm.get_world_size() > 1: - benchmark.benchmark_distributed(100) - logger.info("Rerun ...") - benchmark.benchmark_distributed(100) - - -def benchmark_train(args): - cfg = setup(args) - model = build_model(cfg) - logger.info("Model:\n{}".format(model)) - if comm.get_world_size() > 1: - model = DistributedDataParallel( - model, device_ids=[comm.get_local_rank()], broadcast_buffers=False - ) - optimizer = build_optimizer(cfg, model) - checkpointer = DetectionCheckpointer(model, optimizer=optimizer) - checkpointer.load(cfg.MODEL.WEIGHTS) - - cfg.defrost() - cfg.DATALOADER.NUM_WORKERS = 2 - data_loader = build_detection_train_loader(cfg) - dummy_data = list(itertools.islice(data_loader, 100)) - - def f(): - data = DatasetFromList(dummy_data, copy=False, serialize=False) - while True: - yield from data - - max_iter = 400 - trainer = (AMPTrainer if cfg.SOLVER.AMP.ENABLED else SimpleTrainer)(model, f(), optimizer) - trainer.register_hooks( - [ - hooks.IterationTimer(), - hooks.PeriodicWriter([CommonMetricPrinter(max_iter)]), - hooks.TorchProfiler( - lambda trainer: trainer.iter == max_iter - 1, cfg.OUTPUT_DIR, save_tensorboard=True - ), - ] - ) - trainer.train(1, max_iter) - - -@torch.no_grad() -def benchmark_eval(args): - cfg = setup(args) - if args.config_file.endswith(".yaml"): - model = build_model(cfg) - DetectionCheckpointer(model).load(cfg.MODEL.WEIGHTS) - - cfg.defrost() - cfg.DATALOADER.NUM_WORKERS = 0 - data_loader = build_detection_test_loader(cfg, cfg.DATASETS.TEST[0]) - else: - model = instantiate(cfg.model) - model.to(cfg.train.device) - DetectionCheckpointer(model).load(cfg.train.init_checkpoint) - - cfg.dataloader.num_workers = 0 - data_loader = instantiate(cfg.dataloader.test) - - model.eval() - logger.info("Model:\n{}".format(model)) - dummy_data = DatasetFromList(list(itertools.islice(data_loader, 100)), copy=False) - - def f(): - while True: - yield from dummy_data - - for k in range(5): # warmup - model(dummy_data[k]) - - max_iter = 300 - timer = Timer() - with tqdm.tqdm(total=max_iter) as pbar: - for idx, d in enumerate(f()): - if idx == max_iter: 
- break - model(d) - pbar.update() - logger.info("{} iters in {} seconds.".format(max_iter, timer.seconds())) - - -if __name__ == "__main__": - parser = default_argument_parser() - parser.add_argument("--task", choices=["train", "eval", "data", "data_advanced"], required=True) - args = parser.parse_args() - assert not args.eval_only - - logger.info("Environment info:\n" + collect_env_info()) - if "data" in args.task: - print("Initial " + RAM_msg()) - if args.task == "data": - f = benchmark_data - if args.task == "data_advanced": - f = benchmark_data_advanced - elif args.task == "train": - """ - Note: training speed may not be representative. - The training cost of a R-CNN model varies with the content of the data - and the quality of the model. - """ - f = benchmark_train - elif args.task == "eval": - f = benchmark_eval - # only benchmark single-GPU inference. - assert args.num_gpus == 1 and args.num_machines == 1 - launch(f, args.num_gpus, args.num_machines, args.machine_rank, args.dist_url, args=(args,)) diff --git a/spaces/Theivaprakasham/facedetect/app.py b/spaces/Theivaprakasham/facedetect/app.py deleted file mode 100644 index fe9e3af4e99872ff165552be0a1f78ef71a67682..0000000000000000000000000000000000000000 --- a/spaces/Theivaprakasham/facedetect/app.py +++ /dev/null @@ -1,43 +0,0 @@ -import os - -os.system('pip install insightface==0.6.2') - - -import gradio as gr -import numpy as np -import insightface -from insightface.app import FaceAnalysis -from insightface.data import get_image as ins_get_image -from PIL import Image - -import PIL - -app = FaceAnalysis(name="buffalo_sc", providers=['CPUExecutionProvider'], allowed_modules=['detection']) - -article="
Face Detection
" -description = "This Face Detection Project uses InsightFace Library (https://insightface.ai/). We use RetinaFace-500MF model for the Face Detection. Upload an image or click an example image to use." - -def show_preds(input_image, detection_threshold=0.2): - - if detection_threshold<0.05 or detection_threshold==None: detection_threshold = 0.10 - - app.prepare(ctx_id=0, det_size=(640, 640), det_thresh=detection_threshold) - - img = PIL.Image.fromarray(input_image, 'RGB') - basewidth = 900 - wpercent = (basewidth/float(img.size[0])) - hsize = int((float(img.size[1])*float(wpercent))) - img = img.resize((basewidth,hsize), Image.ANTIALIAS) - - #display(img) - faces = app.get(np.array(img)) - detected = app.draw_on(np.array(img), faces) - return detected - -detection_threshold_slider = gr.inputs.Slider(minimum=0, maximum=1, step=0.05, default=0.2, label="Detection Threshold") -outputs = gr.outputs.Image(type="pil") - -examples = [['example1.jpg',0.2], ['example2.jpg',0.2]] - -gr_interface = gr.Interface(fn=show_preds, inputs=["image", detection_threshold_slider], outputs=outputs, title='Face Detection App', article=article,description=description, examples=examples, analytics_enabled = True, enable_queue=True) -gr_interface.launch(inline=False, share=True, debug=True) \ No newline at end of file diff --git a/spaces/Trangluna2002/AI_Cover_Gen/src/download_models.py b/spaces/Trangluna2002/AI_Cover_Gen/src/download_models.py deleted file mode 100644 index 0df2477e4c465eb234bde7501127d2ce2b53f56e..0000000000000000000000000000000000000000 --- a/spaces/Trangluna2002/AI_Cover_Gen/src/download_models.py +++ /dev/null @@ -1,31 +0,0 @@ -from pathlib import Path -import requests - -MDX_DOWNLOAD_LINK = 'https://github.com/TRvlvr/model_repo/releases/download/all_public_uvr_models/' -RVC_DOWNLOAD_LINK = 'https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/' - -BASE_DIR = Path(__file__).resolve().parent.parent -mdxnet_models_dir = BASE_DIR / 'mdxnet_models' -rvc_models_dir = BASE_DIR / 'rvc_models' - - -def dl_model(link, model_name, dir_name): - with requests.get(f'{link}{model_name}') as r: - r.raise_for_status() - with open(dir_name / model_name, 'wb') as f: - for chunk in r.iter_content(chunk_size=8192): - f.write(chunk) - - -if __name__ == '__main__': - mdx_model_names = ['UVR-MDX-NET-Voc_FT.onnx', 'UVR_MDXNET_KARA_2.onnx', 'Reverb_HQ_By_FoxJoy.onnx'] - for model in mdx_model_names: - print(f'Downloading {model}...') - dl_model(MDX_DOWNLOAD_LINK, model, mdxnet_models_dir) - - rvc_model_names = ['hubert_base.pt', 'rmvpe.pt'] - for model in rvc_model_names: - print(f'Downloading {model}...') - dl_model(RVC_DOWNLOAD_LINK, model, rvc_models_dir) - - print('All models downloaded!') diff --git a/spaces/TrustSafeAI/NCTV/assets/css/bootstrap/bootstrap.rtl.min.css b/spaces/TrustSafeAI/NCTV/assets/css/bootstrap/bootstrap.rtl.min.css deleted file mode 100644 index e6fe1f604d2459d416a5c61bbf6a6b3b1479074d..0000000000000000000000000000000000000000 --- a/spaces/TrustSafeAI/NCTV/assets/css/bootstrap/bootstrap.rtl.min.css +++ /dev/null @@ -1,7 +0,0 @@ -@charset "UTF-8";/*! - * Bootstrap v5.1.3 (https://getbootstrap.com/) - * Copyright 2011-2021 The Bootstrap Authors - * Copyright 2011-2021 Twitter, Inc. 
- * Licensed under MIT (https://github.com/twbs/bootstrap/blob/main/LICENSE) - */:root{--bs-blue:#0d6efd;--bs-indigo:#6610f2;--bs-purple:#6f42c1;--bs-pink:#d63384;--bs-red:#dc3545;--bs-orange:#fd7e14;--bs-yellow:#ffc107;--bs-green:#198754;--bs-teal:#20c997;--bs-cyan:#0dcaf0;--bs-white:#fff;--bs-gray:#6c757d;--bs-gray-dark:#343a40;--bs-gray-100:#f8f9fa;--bs-gray-200:#e9ecef;--bs-gray-300:#dee2e6;--bs-gray-400:#ced4da;--bs-gray-500:#adb5bd;--bs-gray-600:#6c757d;--bs-gray-700:#495057;--bs-gray-800:#343a40;--bs-gray-900:#212529;--bs-primary:#0d6efd;--bs-secondary:#6c757d;--bs-success:#198754;--bs-info:#0dcaf0;--bs-warning:#ffc107;--bs-danger:#dc3545;--bs-light:#f8f9fa;--bs-dark:#212529;--bs-primary-rgb:13,110,253;--bs-secondary-rgb:108,117,125;--bs-success-rgb:25,135,84;--bs-info-rgb:13,202,240;--bs-warning-rgb:255,193,7;--bs-danger-rgb:220,53,69;--bs-light-rgb:248,249,250;--bs-dark-rgb:33,37,41;--bs-white-rgb:255,255,255;--bs-black-rgb:0,0,0;--bs-body-color-rgb:33,37,41;--bs-body-bg-rgb:255,255,255;--bs-font-sans-serif:system-ui,-apple-system,"Segoe UI",Roboto,"Helvetica Neue",Arial,"Noto Sans","Liberation Sans",sans-serif,"Apple Color Emoji","Segoe UI Emoji","Segoe UI Symbol","Noto Color Emoji";--bs-font-monospace:SFMono-Regular,Menlo,Monaco,Consolas,"Liberation Mono","Courier New",monospace;--bs-gradient:linear-gradient(180deg, rgba(255, 255, 255, 0.15), rgba(255, 255, 255, 0));--bs-body-font-family:var(--bs-font-sans-serif);--bs-body-font-size:1rem;--bs-body-font-weight:400;--bs-body-line-height:1.5;--bs-body-color:#212529;--bs-body-bg:#fff}*,::after,::before{box-sizing:border-box}@media (prefers-reduced-motion:no-preference){:root{scroll-behavior:smooth}}body{margin:0;font-family:var(--bs-body-font-family);font-size:var(--bs-body-font-size);font-weight:var(--bs-body-font-weight);line-height:var(--bs-body-line-height);color:var(--bs-body-color);text-align:var(--bs-body-text-align);background-color:var(--bs-body-bg);-webkit-text-size-adjust:100%;-webkit-tap-highlight-color:transparent}hr{margin:1rem 0;color:inherit;background-color:currentColor;border:0;opacity:.25}hr:not([size]){height:1px}.h1,.h2,.h3,.h4,.h5,.h6,h1,h2,h3,h4,h5,h6{margin-top:0;margin-bottom:.5rem;font-weight:500;line-height:1.2}.h1,h1{font-size:calc(1.375rem + 1.5vw)}@media (min-width:1200px){.h1,h1{font-size:2.5rem}}.h2,h2{font-size:calc(1.325rem + .9vw)}@media (min-width:1200px){.h2,h2{font-size:2rem}}.h3,h3{font-size:calc(1.3rem + .6vw)}@media (min-width:1200px){.h3,h3{font-size:1.75rem}}.h4,h4{font-size:calc(1.275rem + .3vw)}@media (min-width:1200px){.h4,h4{font-size:1.5rem}}.h5,h5{font-size:1.25rem}.h6,h6{font-size:1rem}p{margin-top:0;margin-bottom:1rem}abbr[data-bs-original-title],abbr[title]{-webkit-text-decoration:underline dotted;text-decoration:underline dotted;cursor:help;-webkit-text-decoration-skip-ink:none;text-decoration-skip-ink:none}address{margin-bottom:1rem;font-style:normal;line-height:inherit}ol,ul{padding-right:2rem}dl,ol,ul{margin-top:0;margin-bottom:1rem}ol ol,ol ul,ul ol,ul ul{margin-bottom:0}dt{font-weight:700}dd{margin-bottom:.5rem;margin-right:0}blockquote{margin:0 0 
1rem}b,strong{font-weight:bolder}.small,small{font-size:.875em}.mark,mark{padding:.2em;background-color:#fcf8e3}sub,sup{position:relative;font-size:.75em;line-height:0;vertical-align:baseline}sub{bottom:-.25em}sup{top:-.5em}a{color:#0d6efd;text-decoration:underline}a:hover{color:#0a58ca}a:not([href]):not([class]),a:not([href]):not([class]):hover{color:inherit;text-decoration:none}code,kbd,pre,samp{font-family:var(--bs-font-monospace);font-size:1em;direction:ltr;unicode-bidi:bidi-override}pre{display:block;margin-top:0;margin-bottom:1rem;overflow:auto;font-size:.875em}pre code{font-size:inherit;color:inherit;word-break:normal}code{font-size:.875em;color:#d63384;word-wrap:break-word}a>code{color:inherit}kbd{padding:.2rem .4rem;font-size:.875em;color:#fff;background-color:#212529;border-radius:.2rem}kbd kbd{padding:0;font-size:1em;font-weight:700}figure{margin:0 0 1rem}img,svg{vertical-align:middle}table{caption-side:bottom;border-collapse:collapse}caption{padding-top:.5rem;padding-bottom:.5rem;color:#6c757d;text-align:right}th{text-align:inherit;text-align:-webkit-match-parent}tbody,td,tfoot,th,thead,tr{border-color:inherit;border-style:solid;border-width:0}label{display:inline-block}button{border-radius:0}button:focus:not(:focus-visible){outline:0}button,input,optgroup,select,textarea{margin:0;font-family:inherit;font-size:inherit;line-height:inherit}button,select{text-transform:none}[role=button]{cursor:pointer}select{word-wrap:normal}select:disabled{opacity:1}[list]::-webkit-calendar-picker-indicator{display:none}[type=button],[type=reset],[type=submit],button{-webkit-appearance:button}[type=button]:not(:disabled),[type=reset]:not(:disabled),[type=submit]:not(:disabled),button:not(:disabled){cursor:pointer}::-moz-focus-inner{padding:0;border-style:none}textarea{resize:vertical}fieldset{min-width:0;padding:0;margin:0;border:0}legend{float:right;width:100%;padding:0;margin-bottom:.5rem;font-size:calc(1.275rem + .3vw);line-height:inherit}@media (min-width:1200px){legend{font-size:1.5rem}}legend+*{clear:right}::-webkit-datetime-edit-day-field,::-webkit-datetime-edit-fields-wrapper,::-webkit-datetime-edit-hour-field,::-webkit-datetime-edit-minute,::-webkit-datetime-edit-month-field,::-webkit-datetime-edit-text,::-webkit-datetime-edit-year-field{padding:0}::-webkit-inner-spin-button{height:auto}[type=search]{outline-offset:-2px;-webkit-appearance:textfield}[type=email],[type=number],[type=tel],[type=url]{direction:ltr}::-webkit-search-decoration{-webkit-appearance:none}::-webkit-color-swatch-wrapper{padding:0}::-webkit-file-upload-button{font:inherit}::file-selector-button{font:inherit}::-webkit-file-upload-button{font:inherit;-webkit-appearance:button}output{display:inline-block}iframe{border:0}summary{display:list-item;cursor:pointer}progress{vertical-align:baseline}[hidden]{display:none!important}.lead{font-size:1.25rem;font-weight:300}.display-1{font-size:calc(1.625rem + 4.5vw);font-weight:300;line-height:1.2}@media (min-width:1200px){.display-1{font-size:5rem}}.display-2{font-size:calc(1.575rem + 3.9vw);font-weight:300;line-height:1.2}@media (min-width:1200px){.display-2{font-size:4.5rem}}.display-3{font-size:calc(1.525rem + 3.3vw);font-weight:300;line-height:1.2}@media (min-width:1200px){.display-3{font-size:4rem}}.display-4{font-size:calc(1.475rem + 2.7vw);font-weight:300;line-height:1.2}@media (min-width:1200px){.display-4{font-size:3.5rem}}.display-5{font-size:calc(1.425rem + 2.1vw);font-weight:300;line-height:1.2}@media 
(min-width:1200px){.display-5{font-size:3rem}}.display-6{font-size:calc(1.375rem + 1.5vw);font-weight:300;line-height:1.2}@media (min-width:1200px){.display-6{font-size:2.5rem}}.list-unstyled{padding-right:0;list-style:none}.list-inline{padding-right:0;list-style:none}.list-inline-item{display:inline-block}.list-inline-item:not(:last-child){margin-left:.5rem}.initialism{font-size:.875em;text-transform:uppercase}.blockquote{margin-bottom:1rem;font-size:1.25rem}.blockquote>:last-child{margin-bottom:0}.blockquote-footer{margin-top:-1rem;margin-bottom:1rem;font-size:.875em;color:#6c757d}.blockquote-footer::before{content:"— "}.img-fluid{max-width:100%;height:auto}.img-thumbnail{padding:.25rem;background-color:#fff;border:1px solid #dee2e6;border-radius:.25rem;max-width:100%;height:auto}.figure{display:inline-block}.figure-img{margin-bottom:.5rem;line-height:1}.figure-caption{font-size:.875em;color:#6c757d}.container,.container-fluid,.container-lg,.container-md,.container-sm,.container-xl,.container-xxl{width:100%;padding-left:var(--bs-gutter-x,.75rem);padding-right:var(--bs-gutter-x,.75rem);margin-left:auto;margin-right:auto}@media (min-width:576px){.container,.container-sm{max-width:540px}}@media (min-width:768px){.container,.container-md,.container-sm{max-width:720px}}@media (min-width:992px){.container,.container-lg,.container-md,.container-sm{max-width:960px}}@media (min-width:1200px){.container,.container-lg,.container-md,.container-sm,.container-xl{max-width:1140px}}@media (min-width:1400px){.container,.container-lg,.container-md,.container-sm,.container-xl,.container-xxl{max-width:1320px}}.row{--bs-gutter-x:1.5rem;--bs-gutter-y:0;display:flex;flex-wrap:wrap;margin-top:calc(-1 * var(--bs-gutter-y));margin-left:calc(-.5 * var(--bs-gutter-x));margin-right:calc(-.5 * var(--bs-gutter-x))}.row>*{flex-shrink:0;width:100%;max-width:100%;padding-left:calc(var(--bs-gutter-x) * .5);padding-right:calc(var(--bs-gutter-x) * .5);margin-top:var(--bs-gutter-y)}.col{flex:1 0 0%}.row-cols-auto>*{flex:0 0 auto;width:auto}.row-cols-1>*{flex:0 0 auto;width:100%}.row-cols-2>*{flex:0 0 auto;width:50%}.row-cols-3>*{flex:0 0 auto;width:33.3333333333%}.row-cols-4>*{flex:0 0 auto;width:25%}.row-cols-5>*{flex:0 0 auto;width:20%}.row-cols-6>*{flex:0 0 auto;width:16.6666666667%}.col-auto{flex:0 0 auto;width:auto}.col-1{flex:0 0 auto;width:8.33333333%}.col-2{flex:0 0 auto;width:16.66666667%}.col-3{flex:0 0 auto;width:25%}.col-4{flex:0 0 auto;width:33.33333333%}.col-5{flex:0 0 auto;width:41.66666667%}.col-6{flex:0 0 auto;width:50%}.col-7{flex:0 0 auto;width:58.33333333%}.col-8{flex:0 0 auto;width:66.66666667%}.col-9{flex:0 0 auto;width:75%}.col-10{flex:0 0 auto;width:83.33333333%}.col-11{flex:0 0 auto;width:91.66666667%}.col-12{flex:0 0 auto;width:100%}.offset-1{margin-right:8.33333333%}.offset-2{margin-right:16.66666667%}.offset-3{margin-right:25%}.offset-4{margin-right:33.33333333%}.offset-5{margin-right:41.66666667%}.offset-6{margin-right:50%}.offset-7{margin-right:58.33333333%}.offset-8{margin-right:66.66666667%}.offset-9{margin-right:75%}.offset-10{margin-right:83.33333333%}.offset-11{margin-right:91.66666667%}.g-0,.gx-0{--bs-gutter-x:0}.g-0,.gy-0{--bs-gutter-y:0}.g-1,.gx-1{--bs-gutter-x:0.25rem}.g-1,.gy-1{--bs-gutter-y:0.25rem}.g-2,.gx-2{--bs-gutter-x:0.5rem}.g-2,.gy-2{--bs-gutter-y:0.5rem}.g-3,.gx-3{--bs-gutter-x:1rem}.g-3,.gy-3{--bs-gutter-y:1rem}.g-4,.gx-4{--bs-gutter-x:1.5rem}.g-4,.gy-4{--bs-gutter-y:1.5rem}.g-5,.gx-5{--bs-gutter-x:3rem}.g-5,.gy-5{--bs-gutter-y:3rem}@media (min-width:576px){.col-sm{flex:1 0 
0%}.row-cols-sm-auto>*{flex:0 0 auto;width:auto}.row-cols-sm-1>*{flex:0 0 auto;width:100%}.row-cols-sm-2>*{flex:0 0 auto;width:50%}.row-cols-sm-3>*{flex:0 0 auto;width:33.3333333333%}.row-cols-sm-4>*{flex:0 0 auto;width:25%}.row-cols-sm-5>*{flex:0 0 auto;width:20%}.row-cols-sm-6>*{flex:0 0 auto;width:16.6666666667%}.col-sm-auto{flex:0 0 auto;width:auto}.col-sm-1{flex:0 0 auto;width:8.33333333%}.col-sm-2{flex:0 0 auto;width:16.66666667%}.col-sm-3{flex:0 0 auto;width:25%}.col-sm-4{flex:0 0 auto;width:33.33333333%}.col-sm-5{flex:0 0 auto;width:41.66666667%}.col-sm-6{flex:0 0 auto;width:50%}.col-sm-7{flex:0 0 auto;width:58.33333333%}.col-sm-8{flex:0 0 auto;width:66.66666667%}.col-sm-9{flex:0 0 auto;width:75%}.col-sm-10{flex:0 0 auto;width:83.33333333%}.col-sm-11{flex:0 0 auto;width:91.66666667%}.col-sm-12{flex:0 0 auto;width:100%}.offset-sm-0{margin-right:0}.offset-sm-1{margin-right:8.33333333%}.offset-sm-2{margin-right:16.66666667%}.offset-sm-3{margin-right:25%}.offset-sm-4{margin-right:33.33333333%}.offset-sm-5{margin-right:41.66666667%}.offset-sm-6{margin-right:50%}.offset-sm-7{margin-right:58.33333333%}.offset-sm-8{margin-right:66.66666667%}.offset-sm-9{margin-right:75%}.offset-sm-10{margin-right:83.33333333%}.offset-sm-11{margin-right:91.66666667%}.g-sm-0,.gx-sm-0{--bs-gutter-x:0}.g-sm-0,.gy-sm-0{--bs-gutter-y:0}.g-sm-1,.gx-sm-1{--bs-gutter-x:0.25rem}.g-sm-1,.gy-sm-1{--bs-gutter-y:0.25rem}.g-sm-2,.gx-sm-2{--bs-gutter-x:0.5rem}.g-sm-2,.gy-sm-2{--bs-gutter-y:0.5rem}.g-sm-3,.gx-sm-3{--bs-gutter-x:1rem}.g-sm-3,.gy-sm-3{--bs-gutter-y:1rem}.g-sm-4,.gx-sm-4{--bs-gutter-x:1.5rem}.g-sm-4,.gy-sm-4{--bs-gutter-y:1.5rem}.g-sm-5,.gx-sm-5{--bs-gutter-x:3rem}.g-sm-5,.gy-sm-5{--bs-gutter-y:3rem}}@media (min-width:768px){.col-md{flex:1 0 0%}.row-cols-md-auto>*{flex:0 0 auto;width:auto}.row-cols-md-1>*{flex:0 0 auto;width:100%}.row-cols-md-2>*{flex:0 0 auto;width:50%}.row-cols-md-3>*{flex:0 0 auto;width:33.3333333333%}.row-cols-md-4>*{flex:0 0 auto;width:25%}.row-cols-md-5>*{flex:0 0 auto;width:20%}.row-cols-md-6>*{flex:0 0 auto;width:16.6666666667%}.col-md-auto{flex:0 0 auto;width:auto}.col-md-1{flex:0 0 auto;width:8.33333333%}.col-md-2{flex:0 0 auto;width:16.66666667%}.col-md-3{flex:0 0 auto;width:25%}.col-md-4{flex:0 0 auto;width:33.33333333%}.col-md-5{flex:0 0 auto;width:41.66666667%}.col-md-6{flex:0 0 auto;width:50%}.col-md-7{flex:0 0 auto;width:58.33333333%}.col-md-8{flex:0 0 auto;width:66.66666667%}.col-md-9{flex:0 0 auto;width:75%}.col-md-10{flex:0 0 auto;width:83.33333333%}.col-md-11{flex:0 0 auto;width:91.66666667%}.col-md-12{flex:0 0 auto;width:100%}.offset-md-0{margin-right:0}.offset-md-1{margin-right:8.33333333%}.offset-md-2{margin-right:16.66666667%}.offset-md-3{margin-right:25%}.offset-md-4{margin-right:33.33333333%}.offset-md-5{margin-right:41.66666667%}.offset-md-6{margin-right:50%}.offset-md-7{margin-right:58.33333333%}.offset-md-8{margin-right:66.66666667%}.offset-md-9{margin-right:75%}.offset-md-10{margin-right:83.33333333%}.offset-md-11{margin-right:91.66666667%}.g-md-0,.gx-md-0{--bs-gutter-x:0}.g-md-0,.gy-md-0{--bs-gutter-y:0}.g-md-1,.gx-md-1{--bs-gutter-x:0.25rem}.g-md-1,.gy-md-1{--bs-gutter-y:0.25rem}.g-md-2,.gx-md-2{--bs-gutter-x:0.5rem}.g-md-2,.gy-md-2{--bs-gutter-y:0.5rem}.g-md-3,.gx-md-3{--bs-gutter-x:1rem}.g-md-3,.gy-md-3{--bs-gutter-y:1rem}.g-md-4,.gx-md-4{--bs-gutter-x:1.5rem}.g-md-4,.gy-md-4{--bs-gutter-y:1.5rem}.g-md-5,.gx-md-5{--bs-gutter-x:3rem}.g-md-5,.gy-md-5{--bs-gutter-y:3rem}}@media (min-width:992px){.col-lg{flex:1 0 0%}.row-cols-lg-auto>*{flex:0 0 
auto;width:auto}.row-cols-lg-1>*{flex:0 0 auto;width:100%}.row-cols-lg-2>*{flex:0 0 auto;width:50%}.row-cols-lg-3>*{flex:0 0 auto;width:33.3333333333%}.row-cols-lg-4>*{flex:0 0 auto;width:25%}.row-cols-lg-5>*{flex:0 0 auto;width:20%}.row-cols-lg-6>*{flex:0 0 auto;width:16.6666666667%}.col-lg-auto{flex:0 0 auto;width:auto}.col-lg-1{flex:0 0 auto;width:8.33333333%}.col-lg-2{flex:0 0 auto;width:16.66666667%}.col-lg-3{flex:0 0 auto;width:25%}.col-lg-4{flex:0 0 auto;width:33.33333333%}.col-lg-5{flex:0 0 auto;width:41.66666667%}.col-lg-6{flex:0 0 auto;width:50%}.col-lg-7{flex:0 0 auto;width:58.33333333%}.col-lg-8{flex:0 0 auto;width:66.66666667%}.col-lg-9{flex:0 0 auto;width:75%}.col-lg-10{flex:0 0 auto;width:83.33333333%}.col-lg-11{flex:0 0 auto;width:91.66666667%}.col-lg-12{flex:0 0 auto;width:100%}.offset-lg-0{margin-right:0}.offset-lg-1{margin-right:8.33333333%}.offset-lg-2{margin-right:16.66666667%}.offset-lg-3{margin-right:25%}.offset-lg-4{margin-right:33.33333333%}.offset-lg-5{margin-right:41.66666667%}.offset-lg-6{margin-right:50%}.offset-lg-7{margin-right:58.33333333%}.offset-lg-8{margin-right:66.66666667%}.offset-lg-9{margin-right:75%}.offset-lg-10{margin-right:83.33333333%}.offset-lg-11{margin-right:91.66666667%}.g-lg-0,.gx-lg-0{--bs-gutter-x:0}.g-lg-0,.gy-lg-0{--bs-gutter-y:0}.g-lg-1,.gx-lg-1{--bs-gutter-x:0.25rem}.g-lg-1,.gy-lg-1{--bs-gutter-y:0.25rem}.g-lg-2,.gx-lg-2{--bs-gutter-x:0.5rem}.g-lg-2,.gy-lg-2{--bs-gutter-y:0.5rem}.g-lg-3,.gx-lg-3{--bs-gutter-x:1rem}.g-lg-3,.gy-lg-3{--bs-gutter-y:1rem}.g-lg-4,.gx-lg-4{--bs-gutter-x:1.5rem}.g-lg-4,.gy-lg-4{--bs-gutter-y:1.5rem}.g-lg-5,.gx-lg-5{--bs-gutter-x:3rem}.g-lg-5,.gy-lg-5{--bs-gutter-y:3rem}}@media (min-width:1200px){.col-xl{flex:1 0 0%}.row-cols-xl-auto>*{flex:0 0 auto;width:auto}.row-cols-xl-1>*{flex:0 0 auto;width:100%}.row-cols-xl-2>*{flex:0 0 auto;width:50%}.row-cols-xl-3>*{flex:0 0 auto;width:33.3333333333%}.row-cols-xl-4>*{flex:0 0 auto;width:25%}.row-cols-xl-5>*{flex:0 0 auto;width:20%}.row-cols-xl-6>*{flex:0 0 auto;width:16.6666666667%}.col-xl-auto{flex:0 0 auto;width:auto}.col-xl-1{flex:0 0 auto;width:8.33333333%}.col-xl-2{flex:0 0 auto;width:16.66666667%}.col-xl-3{flex:0 0 auto;width:25%}.col-xl-4{flex:0 0 auto;width:33.33333333%}.col-xl-5{flex:0 0 auto;width:41.66666667%}.col-xl-6{flex:0 0 auto;width:50%}.col-xl-7{flex:0 0 auto;width:58.33333333%}.col-xl-8{flex:0 0 auto;width:66.66666667%}.col-xl-9{flex:0 0 auto;width:75%}.col-xl-10{flex:0 0 auto;width:83.33333333%}.col-xl-11{flex:0 0 auto;width:91.66666667%}.col-xl-12{flex:0 0 auto;width:100%}.offset-xl-0{margin-right:0}.offset-xl-1{margin-right:8.33333333%}.offset-xl-2{margin-right:16.66666667%}.offset-xl-3{margin-right:25%}.offset-xl-4{margin-right:33.33333333%}.offset-xl-5{margin-right:41.66666667%}.offset-xl-6{margin-right:50%}.offset-xl-7{margin-right:58.33333333%}.offset-xl-8{margin-right:66.66666667%}.offset-xl-9{margin-right:75%}.offset-xl-10{margin-right:83.33333333%}.offset-xl-11{margin-right:91.66666667%}.g-xl-0,.gx-xl-0{--bs-gutter-x:0}.g-xl-0,.gy-xl-0{--bs-gutter-y:0}.g-xl-1,.gx-xl-1{--bs-gutter-x:0.25rem}.g-xl-1,.gy-xl-1{--bs-gutter-y:0.25rem}.g-xl-2,.gx-xl-2{--bs-gutter-x:0.5rem}.g-xl-2,.gy-xl-2{--bs-gutter-y:0.5rem}.g-xl-3,.gx-xl-3{--bs-gutter-x:1rem}.g-xl-3,.gy-xl-3{--bs-gutter-y:1rem}.g-xl-4,.gx-xl-4{--bs-gutter-x:1.5rem}.g-xl-4,.gy-xl-4{--bs-gutter-y:1.5rem}.g-xl-5,.gx-xl-5{--bs-gutter-x:3rem}.g-xl-5,.gy-xl-5{--bs-gutter-y:3rem}}@media (min-width:1400px){.col-xxl{flex:1 0 0%}.row-cols-xxl-auto>*{flex:0 0 auto;width:auto}.row-cols-xxl-1>*{flex:0 0 
auto;width:100%}.row-cols-xxl-2>*{flex:0 0 auto;width:50%}.row-cols-xxl-3>*{flex:0 0 auto;width:33.3333333333%}.row-cols-xxl-4>*{flex:0 0 auto;width:25%}.row-cols-xxl-5>*{flex:0 0 auto;width:20%}.row-cols-xxl-6>*{flex:0 0 auto;width:16.6666666667%}.col-xxl-auto{flex:0 0 auto;width:auto}.col-xxl-1{flex:0 0 auto;width:8.33333333%}.col-xxl-2{flex:0 0 auto;width:16.66666667%}.col-xxl-3{flex:0 0 auto;width:25%}.col-xxl-4{flex:0 0 auto;width:33.33333333%}.col-xxl-5{flex:0 0 auto;width:41.66666667%}.col-xxl-6{flex:0 0 auto;width:50%}.col-xxl-7{flex:0 0 auto;width:58.33333333%}.col-xxl-8{flex:0 0 auto;width:66.66666667%}.col-xxl-9{flex:0 0 auto;width:75%}.col-xxl-10{flex:0 0 auto;width:83.33333333%}.col-xxl-11{flex:0 0 auto;width:91.66666667%}.col-xxl-12{flex:0 0 auto;width:100%}.offset-xxl-0{margin-right:0}.offset-xxl-1{margin-right:8.33333333%}.offset-xxl-2{margin-right:16.66666667%}.offset-xxl-3{margin-right:25%}.offset-xxl-4{margin-right:33.33333333%}.offset-xxl-5{margin-right:41.66666667%}.offset-xxl-6{margin-right:50%}.offset-xxl-7{margin-right:58.33333333%}.offset-xxl-8{margin-right:66.66666667%}.offset-xxl-9{margin-right:75%}.offset-xxl-10{margin-right:83.33333333%}.offset-xxl-11{margin-right:91.66666667%}.g-xxl-0,.gx-xxl-0{--bs-gutter-x:0}.g-xxl-0,.gy-xxl-0{--bs-gutter-y:0}.g-xxl-1,.gx-xxl-1{--bs-gutter-x:0.25rem}.g-xxl-1,.gy-xxl-1{--bs-gutter-y:0.25rem}.g-xxl-2,.gx-xxl-2{--bs-gutter-x:0.5rem}.g-xxl-2,.gy-xxl-2{--bs-gutter-y:0.5rem}.g-xxl-3,.gx-xxl-3{--bs-gutter-x:1rem}.g-xxl-3,.gy-xxl-3{--bs-gutter-y:1rem}.g-xxl-4,.gx-xxl-4{--bs-gutter-x:1.5rem}.g-xxl-4,.gy-xxl-4{--bs-gutter-y:1.5rem}.g-xxl-5,.gx-xxl-5{--bs-gutter-x:3rem}.g-xxl-5,.gy-xxl-5{--bs-gutter-y:3rem}}.table{--bs-table-bg:transparent;--bs-table-accent-bg:transparent;--bs-table-striped-color:#212529;--bs-table-striped-bg:rgba(0, 0, 0, 0.05);--bs-table-active-color:#212529;--bs-table-active-bg:rgba(0, 0, 0, 0.1);--bs-table-hover-color:#212529;--bs-table-hover-bg:rgba(0, 0, 0, 0.075);width:100%;margin-bottom:1rem;color:#212529;vertical-align:top;border-color:#dee2e6}.table>:not(caption)>*>*{padding:.5rem .5rem;background-color:var(--bs-table-bg);border-bottom-width:1px;box-shadow:inset 0 0 0 9999px var(--bs-table-accent-bg)}.table>tbody{vertical-align:inherit}.table>thead{vertical-align:bottom}.table>:not(:first-child){border-top:2px solid currentColor}.caption-top{caption-side:top}.table-sm>:not(caption)>*>*{padding:.25rem .25rem}.table-bordered>:not(caption)>*{border-width:1px 0}.table-bordered>:not(caption)>*>*{border-width:0 
1px}.table-borderless>:not(caption)>*>*{border-bottom-width:0}.table-borderless>:not(:first-child){border-top-width:0}.table-striped>tbody>tr:nth-of-type(odd)>*{--bs-table-accent-bg:var(--bs-table-striped-bg);color:var(--bs-table-striped-color)}.table-active{--bs-table-accent-bg:var(--bs-table-active-bg);color:var(--bs-table-active-color)}.table-hover>tbody>tr:hover>*{--bs-table-accent-bg:var(--bs-table-hover-bg);color:var(--bs-table-hover-color)}.table-primary{--bs-table-bg:#cfe2ff;--bs-table-striped-bg:#c5d7f2;--bs-table-striped-color:#000;--bs-table-active-bg:#bacbe6;--bs-table-active-color:#000;--bs-table-hover-bg:#bfd1ec;--bs-table-hover-color:#000;color:#000;border-color:#bacbe6}.table-secondary{--bs-table-bg:#e2e3e5;--bs-table-striped-bg:#d7d8da;--bs-table-striped-color:#000;--bs-table-active-bg:#cbccce;--bs-table-active-color:#000;--bs-table-hover-bg:#d1d2d4;--bs-table-hover-color:#000;color:#000;border-color:#cbccce}.table-success{--bs-table-bg:#d1e7dd;--bs-table-striped-bg:#c7dbd2;--bs-table-striped-color:#000;--bs-table-active-bg:#bcd0c7;--bs-table-active-color:#000;--bs-table-hover-bg:#c1d6cc;--bs-table-hover-color:#000;color:#000;border-color:#bcd0c7}.table-info{--bs-table-bg:#cff4fc;--bs-table-striped-bg:#c5e8ef;--bs-table-striped-color:#000;--bs-table-active-bg:#badce3;--bs-table-active-color:#000;--bs-table-hover-bg:#bfe2e9;--bs-table-hover-color:#000;color:#000;border-color:#badce3}.table-warning{--bs-table-bg:#fff3cd;--bs-table-striped-bg:#f2e7c3;--bs-table-striped-color:#000;--bs-table-active-bg:#e6dbb9;--bs-table-active-color:#000;--bs-table-hover-bg:#ece1be;--bs-table-hover-color:#000;color:#000;border-color:#e6dbb9}.table-danger{--bs-table-bg:#f8d7da;--bs-table-striped-bg:#eccccf;--bs-table-striped-color:#000;--bs-table-active-bg:#dfc2c4;--bs-table-active-color:#000;--bs-table-hover-bg:#e5c7ca;--bs-table-hover-color:#000;color:#000;border-color:#dfc2c4}.table-light{--bs-table-bg:#f8f9fa;--bs-table-striped-bg:#ecedee;--bs-table-striped-color:#000;--bs-table-active-bg:#dfe0e1;--bs-table-active-color:#000;--bs-table-hover-bg:#e5e6e7;--bs-table-hover-color:#000;color:#000;border-color:#dfe0e1}.table-dark{--bs-table-bg:#212529;--bs-table-striped-bg:#2c3034;--bs-table-striped-color:#fff;--bs-table-active-bg:#373b3e;--bs-table-active-color:#fff;--bs-table-hover-bg:#323539;--bs-table-hover-color:#fff;color:#fff;border-color:#373b3e}.table-responsive{overflow-x:auto;-webkit-overflow-scrolling:touch}@media (max-width:575.98px){.table-responsive-sm{overflow-x:auto;-webkit-overflow-scrolling:touch}}@media (max-width:767.98px){.table-responsive-md{overflow-x:auto;-webkit-overflow-scrolling:touch}}@media (max-width:991.98px){.table-responsive-lg{overflow-x:auto;-webkit-overflow-scrolling:touch}}@media (max-width:1199.98px){.table-responsive-xl{overflow-x:auto;-webkit-overflow-scrolling:touch}}@media (max-width:1399.98px){.table-responsive-xxl{overflow-x:auto;-webkit-overflow-scrolling:touch}}.form-label{margin-bottom:.5rem}.col-form-label{padding-top:calc(.375rem + 1px);padding-bottom:calc(.375rem + 1px);margin-bottom:0;font-size:inherit;line-height:1.5}.col-form-label-lg{padding-top:calc(.5rem + 1px);padding-bottom:calc(.5rem + 1px);font-size:1.25rem}.col-form-label-sm{padding-top:calc(.25rem + 1px);padding-bottom:calc(.25rem + 1px);font-size:.875rem}.form-text{margin-top:.25rem;font-size:.875em;color:#6c757d}.form-control{display:block;width:100%;padding:.375rem 
.75rem;font-size:1rem;font-weight:400;line-height:1.5;color:#212529;background-color:#fff;background-clip:padding-box;border:1px solid #ced4da;-webkit-appearance:none;-moz-appearance:none;appearance:none;border-radius:.25rem;transition:border-color .15s ease-in-out,box-shadow .15s ease-in-out}@media (prefers-reduced-motion:reduce){.form-control{transition:none}}.form-control[type=file]{overflow:hidden}.form-control[type=file]:not(:disabled):not([readonly]){cursor:pointer}.form-control:focus{color:#212529;background-color:#fff;border-color:#86b7fe;outline:0;box-shadow:0 0 0 .25rem rgba(13,110,253,.25)}.form-control::-webkit-date-and-time-value{height:1.5em}.form-control::-moz-placeholder{color:#6c757d;opacity:1}.form-control::placeholder{color:#6c757d;opacity:1}.form-control:disabled,.form-control[readonly]{background-color:#e9ecef;opacity:1}.form-control::-webkit-file-upload-button{padding:.375rem .75rem;margin:-.375rem -.75rem;-webkit-margin-end:.75rem;margin-inline-end:.75rem;color:#212529;background-color:#e9ecef;pointer-events:none;border-color:inherit;border-style:solid;border-width:0;border-inline-end-width:1px;border-radius:0;-webkit-transition:color .15s ease-in-out,background-color .15s ease-in-out,border-color .15s ease-in-out,box-shadow .15s ease-in-out;transition:color .15s ease-in-out,background-color .15s ease-in-out,border-color .15s ease-in-out,box-shadow .15s ease-in-out}.form-control::file-selector-button{padding:.375rem .75rem;margin:-.375rem -.75rem;-webkit-margin-end:.75rem;margin-inline-end:.75rem;color:#212529;background-color:#e9ecef;pointer-events:none;border-color:inherit;border-style:solid;border-width:0;border-inline-end-width:1px;border-radius:0;transition:color .15s ease-in-out,background-color .15s ease-in-out,border-color .15s ease-in-out,box-shadow .15s ease-in-out}@media (prefers-reduced-motion:reduce){.form-control::-webkit-file-upload-button{-webkit-transition:none;transition:none}.form-control::file-selector-button{transition:none}}.form-control:hover:not(:disabled):not([readonly])::-webkit-file-upload-button{background-color:#dde0e3}.form-control:hover:not(:disabled):not([readonly])::file-selector-button{background-color:#dde0e3}.form-control::-webkit-file-upload-button{padding:.375rem .75rem;margin:-.375rem -.75rem;-webkit-margin-end:.75rem;margin-inline-end:.75rem;color:#212529;background-color:#e9ecef;pointer-events:none;border-color:inherit;border-style:solid;border-width:0;border-inline-end-width:1px;border-radius:0;-webkit-transition:color .15s ease-in-out,background-color .15s ease-in-out,border-color .15s ease-in-out,box-shadow .15s ease-in-out;transition:color .15s ease-in-out,background-color .15s ease-in-out,border-color .15s ease-in-out,box-shadow .15s ease-in-out}@media (prefers-reduced-motion:reduce){.form-control::-webkit-file-upload-button{-webkit-transition:none;transition:none}}.form-control:hover:not(:disabled):not([readonly])::-webkit-file-upload-button{background-color:#dde0e3}.form-control-plaintext{display:block;width:100%;padding:.375rem 0;margin-bottom:0;line-height:1.5;color:#212529;background-color:transparent;border:solid transparent;border-width:1px 0}.form-control-plaintext.form-control-lg,.form-control-plaintext.form-control-sm{padding-left:0;padding-right:0}.form-control-sm{min-height:calc(1.5em + .5rem + 2px);padding:.25rem .5rem;font-size:.875rem;border-radius:.2rem}.form-control-sm::-webkit-file-upload-button{padding:.25rem .5rem;margin:-.25rem 
-.5rem;-webkit-margin-end:.5rem;margin-inline-end:.5rem}.form-control-sm::file-selector-button{padding:.25rem .5rem;margin:-.25rem -.5rem;-webkit-margin-end:.5rem;margin-inline-end:.5rem}.form-control-sm::-webkit-file-upload-button{padding:.25rem .5rem;margin:-.25rem -.5rem;-webkit-margin-end:.5rem;margin-inline-end:.5rem}.form-control-lg{min-height:calc(1.5em + 1rem + 2px);padding:.5rem 1rem;font-size:1.25rem;border-radius:.3rem}.form-control-lg::-webkit-file-upload-button{padding:.5rem 1rem;margin:-.5rem -1rem;-webkit-margin-end:1rem;margin-inline-end:1rem}.form-control-lg::file-selector-button{padding:.5rem 1rem;margin:-.5rem -1rem;-webkit-margin-end:1rem;margin-inline-end:1rem}.form-control-lg::-webkit-file-upload-button{padding:.5rem 1rem;margin:-.5rem -1rem;-webkit-margin-end:1rem;margin-inline-end:1rem}textarea.form-control{min-height:calc(1.5em + .75rem + 2px)}textarea.form-control-sm{min-height:calc(1.5em + .5rem + 2px)}textarea.form-control-lg{min-height:calc(1.5em + 1rem + 2px)}.form-control-color{width:3rem;height:auto;padding:.375rem}.form-control-color:not(:disabled):not([readonly]){cursor:pointer}.form-control-color::-moz-color-swatch{height:1.5em;border-radius:.25rem}.form-control-color::-webkit-color-swatch{height:1.5em;border-radius:.25rem}.form-select{display:block;width:100%;padding:.375rem .75rem .375rem 2.25rem;-moz-padding-start:calc(0.75rem - 3px);font-size:1rem;font-weight:400;line-height:1.5;color:#212529;background-color:#fff;background-image:url("data:image/svg+xml,%3csvg xmlns='http://www.w3.org/2000/svg' viewBox='0 0 16 16'%3e%3cpath fill='none' stroke='%23343a40' stroke-linecap='round' stroke-linejoin='round' stroke-width='2' d='M2 5l6 6 6-6'/%3e%3c/svg%3e");background-repeat:no-repeat;background-position:left .75rem center;background-size:16px 12px;border:1px solid #ced4da;border-radius:.25rem;transition:border-color .15s ease-in-out,box-shadow .15s ease-in-out;-webkit-appearance:none;-moz-appearance:none;appearance:none}@media (prefers-reduced-motion:reduce){.form-select{transition:none}}.form-select:focus{border-color:#86b7fe;outline:0;box-shadow:0 0 0 .25rem rgba(13,110,253,.25)}.form-select[multiple],.form-select[size]:not([size="1"]){padding-left:.75rem;background-image:none}.form-select:disabled{background-color:#e9ecef}.form-select:-moz-focusring{color:transparent;text-shadow:0 0 0 #212529}.form-select-sm{padding-top:.25rem;padding-bottom:.25rem;padding-right:.5rem;font-size:.875rem;border-radius:.2rem}.form-select-lg{padding-top:.5rem;padding-bottom:.5rem;padding-right:1rem;font-size:1.25rem;border-radius:.3rem}.form-check{display:block;min-height:1.5rem;padding-right:1.5em;margin-bottom:.125rem}.form-check .form-check-input{float:right;margin-right:-1.5em}.form-check-input{width:1em;height:1em;margin-top:.25em;vertical-align:top;background-color:#fff;background-repeat:no-repeat;background-position:center;background-size:contain;border:1px solid rgba(0,0,0,.25);-webkit-appearance:none;-moz-appearance:none;appearance:none;-webkit-print-color-adjust:exact;color-adjust:exact}.form-check-input[type=checkbox]{border-radius:.25em}.form-check-input[type=radio]{border-radius:50%}.form-check-input:active{filter:brightness(90%)}.form-check-input:focus{border-color:#86b7fe;outline:0;box-shadow:0 0 0 .25rem rgba(13,110,253,.25)}.form-check-input:checked{background-color:#0d6efd;border-color:#0d6efd}.form-check-input:checked[type=checkbox]{background-image:url("data:image/svg+xml,%3csvg xmlns='http://www.w3.org/2000/svg' viewBox='0 0 20 20'%3e%3cpath fill='none' 
stroke='%23fff' stroke-linecap='round' stroke-linejoin='round' stroke-width='3' d='M6 10l3 3l6-6'/%3e%3c/svg%3e")}.form-check-input:checked[type=radio]{background-image:url("data:image/svg+xml,%3csvg xmlns='http://www.w3.org/2000/svg' viewBox='-4 -4 8 8'%3e%3ccircle r='2' fill='%23fff'/%3e%3c/svg%3e")}.form-check-input[type=checkbox]:indeterminate{background-color:#0d6efd;border-color:#0d6efd;background-image:url("data:image/svg+xml,%3csvg xmlns='http://www.w3.org/2000/svg' viewBox='0 0 20 20'%3e%3cpath fill='none' stroke='%23fff' stroke-linecap='round' stroke-linejoin='round' stroke-width='3' d='M6 10h8'/%3e%3c/svg%3e")}.form-check-input:disabled{pointer-events:none;filter:none;opacity:.5}.form-check-input:disabled~.form-check-label,.form-check-input[disabled]~.form-check-label{opacity:.5}.form-switch{padding-right:2.5em}.form-switch .form-check-input{width:2em;margin-right:-2.5em;background-image:url("data:image/svg+xml,%3csvg xmlns='http://www.w3.org/2000/svg' viewBox='-4 -4 8 8'%3e%3ccircle r='3' fill='rgba%280, 0, 0, 0.25%29'/%3e%3c/svg%3e");background-position:right center;border-radius:2em;transition:background-position .15s ease-in-out}@media (prefers-reduced-motion:reduce){.form-switch .form-check-input{transition:none}}.form-switch .form-check-input:focus{background-image:url("data:image/svg+xml,%3csvg xmlns='http://www.w3.org/2000/svg' viewBox='-4 -4 8 8'%3e%3ccircle r='3' fill='%2386b7fe'/%3e%3c/svg%3e")}.form-switch .form-check-input:checked{background-position:left center;background-image:url("data:image/svg+xml,%3csvg xmlns='http://www.w3.org/2000/svg' viewBox='-4 -4 8 8'%3e%3ccircle r='3' fill='%23fff'/%3e%3c/svg%3e")}.form-check-inline{display:inline-block;margin-left:1rem}.btn-check{position:absolute;clip:rect(0,0,0,0);pointer-events:none}.btn-check:disabled+.btn,.btn-check[disabled]+.btn{pointer-events:none;filter:none;opacity:.65}.form-range{width:100%;height:1.5rem;padding:0;background-color:transparent;-webkit-appearance:none;-moz-appearance:none;appearance:none}.form-range:focus{outline:0}.form-range:focus::-webkit-slider-thumb{box-shadow:0 0 0 1px #fff,0 0 0 .25rem rgba(13,110,253,.25)}.form-range:focus::-moz-range-thumb{box-shadow:0 0 0 1px #fff,0 0 0 .25rem rgba(13,110,253,.25)}.form-range::-moz-focus-outer{border:0}.form-range::-webkit-slider-thumb{width:1rem;height:1rem;margin-top:-.25rem;background-color:#0d6efd;border:0;border-radius:1rem;-webkit-transition:background-color .15s ease-in-out,border-color .15s ease-in-out,box-shadow .15s ease-in-out;transition:background-color .15s ease-in-out,border-color .15s ease-in-out,box-shadow .15s ease-in-out;-webkit-appearance:none;appearance:none}@media (prefers-reduced-motion:reduce){.form-range::-webkit-slider-thumb{-webkit-transition:none;transition:none}}.form-range::-webkit-slider-thumb:active{background-color:#b6d4fe}.form-range::-webkit-slider-runnable-track{width:100%;height:.5rem;color:transparent;cursor:pointer;background-color:#dee2e6;border-color:transparent;border-radius:1rem}.form-range::-moz-range-thumb{width:1rem;height:1rem;background-color:#0d6efd;border:0;border-radius:1rem;-moz-transition:background-color .15s ease-in-out,border-color .15s ease-in-out,box-shadow .15s ease-in-out;transition:background-color .15s ease-in-out,border-color .15s ease-in-out,box-shadow .15s ease-in-out;-moz-appearance:none;appearance:none}@media 
(prefers-reduced-motion:reduce){.form-range::-moz-range-thumb{-moz-transition:none;transition:none}}.form-range::-moz-range-thumb:active{background-color:#b6d4fe}.form-range::-moz-range-track{width:100%;height:.5rem;color:transparent;cursor:pointer;background-color:#dee2e6;border-color:transparent;border-radius:1rem}.form-range:disabled{pointer-events:none}.form-range:disabled::-webkit-slider-thumb{background-color:#adb5bd}.form-range:disabled::-moz-range-thumb{background-color:#adb5bd}.form-floating{position:relative}.form-floating>.form-control,.form-floating>.form-select{height:calc(3.5rem + 2px);line-height:1.25}.form-floating>label{position:absolute;top:0;right:0;height:100%;padding:1rem .75rem;pointer-events:none;border:1px solid transparent;transform-origin:100% 0;transition:opacity .1s ease-in-out,transform .1s ease-in-out}@media (prefers-reduced-motion:reduce){.form-floating>label{transition:none}}.form-floating>.form-control{padding:1rem .75rem}.form-floating>.form-control::-moz-placeholder{color:transparent}.form-floating>.form-control::placeholder{color:transparent}.form-floating>.form-control:not(:-moz-placeholder-shown){padding-top:1.625rem;padding-bottom:.625rem}.form-floating>.form-control:focus,.form-floating>.form-control:not(:placeholder-shown){padding-top:1.625rem;padding-bottom:.625rem}.form-floating>.form-control:-webkit-autofill{padding-top:1.625rem;padding-bottom:.625rem}.form-floating>.form-select{padding-top:1.625rem;padding-bottom:.625rem}.form-floating>.form-control:not(:-moz-placeholder-shown)~label{opacity:.65;transform:scale(.85) translateY(-.5rem) translateX(-.15rem)}.form-floating>.form-control:focus~label,.form-floating>.form-control:not(:placeholder-shown)~label,.form-floating>.form-select~label{opacity:.65;transform:scale(.85) translateY(-.5rem) translateX(-.15rem)}.form-floating>.form-control:-webkit-autofill~label{opacity:.65;transform:scale(.85) translateY(-.5rem) translateX(-.15rem)}.input-group{position:relative;display:flex;flex-wrap:wrap;align-items:stretch;width:100%}.input-group>.form-control,.input-group>.form-select{position:relative;flex:1 1 auto;width:1%;min-width:0}.input-group>.form-control:focus,.input-group>.form-select:focus{z-index:3}.input-group .btn{position:relative;z-index:2}.input-group .btn:focus{z-index:3}.input-group-text{display:flex;align-items:center;padding:.375rem .75rem;font-size:1rem;font-weight:400;line-height:1.5;color:#212529;text-align:center;white-space:nowrap;background-color:#e9ecef;border:1px solid #ced4da;border-radius:.25rem}.input-group-lg>.btn,.input-group-lg>.form-control,.input-group-lg>.form-select,.input-group-lg>.input-group-text{padding:.5rem 1rem;font-size:1.25rem;border-radius:.3rem}.input-group-sm>.btn,.input-group-sm>.form-control,.input-group-sm>.form-select,.input-group-sm>.input-group-text{padding:.25rem 
.5rem;font-size:.875rem;border-radius:.2rem}.input-group-lg>.form-select,.input-group-sm>.form-select{padding-left:3rem}.input-group:not(.has-validation)>.dropdown-toggle:nth-last-child(n+3),.input-group:not(.has-validation)>:not(:last-child):not(.dropdown-toggle):not(.dropdown-menu){border-top-left-radius:0;border-bottom-left-radius:0}.input-group.has-validation>.dropdown-toggle:nth-last-child(n+4),.input-group.has-validation>:nth-last-child(n+3):not(.dropdown-toggle):not(.dropdown-menu){border-top-left-radius:0;border-bottom-left-radius:0}.input-group>:not(:first-child):not(.dropdown-menu):not(.valid-tooltip):not(.valid-feedback):not(.invalid-tooltip):not(.invalid-feedback){margin-right:-1px;border-top-right-radius:0;border-bottom-right-radius:0}.valid-feedback{display:none;width:100%;margin-top:.25rem;font-size:.875em;color:#198754}.valid-tooltip{position:absolute;top:100%;z-index:5;display:none;max-width:100%;padding:.25rem .5rem;margin-top:.1rem;font-size:.875rem;color:#fff;background-color:rgba(25,135,84,.9);border-radius:.25rem}.is-valid~.valid-feedback,.is-valid~.valid-tooltip,.was-validated :valid~.valid-feedback,.was-validated :valid~.valid-tooltip{display:block}.form-control.is-valid,.was-validated .form-control:valid{border-color:#198754;padding-left:calc(1.5em + .75rem);background-image:url("data:image/svg+xml,%3csvg xmlns='http://www.w3.org/2000/svg' viewBox='0 0 8 8'%3e%3cpath fill='%23198754' d='M2.3 6.73L.6 4.53c-.4-1.04.46-1.4 1.1-.8l1.1 1.4 3.4-3.8c.6-.63 1.6-.27 1.2.7l-4 4.6c-.43.5-.8.4-1.1.1z'/%3e%3c/svg%3e");background-repeat:no-repeat;background-position:left calc(.375em + .1875rem) center;background-size:calc(.75em + .375rem) calc(.75em + .375rem)}.form-control.is-valid:focus,.was-validated .form-control:valid:focus{border-color:#198754;box-shadow:0 0 0 .25rem rgba(25,135,84,.25)}.was-validated textarea.form-control:valid,textarea.form-control.is-valid{padding-left:calc(1.5em + .75rem);background-position:top calc(.375em + .1875rem) left calc(.375em + .1875rem)}.form-select.is-valid,.was-validated .form-select:valid{border-color:#198754}.form-select.is-valid:not([multiple]):not([size]),.form-select.is-valid:not([multiple])[size="1"],.was-validated .form-select:valid:not([multiple]):not([size]),.was-validated .form-select:valid:not([multiple])[size="1"]{padding-left:4.125rem;background-image:url("data:image/svg+xml,%3csvg xmlns='http://www.w3.org/2000/svg' viewBox='0 0 16 16'%3e%3cpath fill='none' stroke='%23343a40' stroke-linecap='round' stroke-linejoin='round' stroke-width='2' d='M2 5l6 6 6-6'/%3e%3c/svg%3e"),url("data:image/svg+xml,%3csvg xmlns='http://www.w3.org/2000/svg' viewBox='0 0 8 8'%3e%3cpath fill='%23198754' d='M2.3 6.73L.6 4.53c-.4-1.04.46-1.4 1.1-.8l1.1 1.4 3.4-3.8c.6-.63 1.6-.27 1.2.7l-4 4.6c-.43.5-.8.4-1.1.1z'/%3e%3c/svg%3e");background-position:left .75rem center,center left 2.25rem;background-size:16px 12px,calc(.75em + .375rem) calc(.75em + .375rem)}.form-select.is-valid:focus,.was-validated .form-select:valid:focus{border-color:#198754;box-shadow:0 0 0 .25rem rgba(25,135,84,.25)}.form-check-input.is-valid,.was-validated .form-check-input:valid{border-color:#198754}.form-check-input.is-valid:checked,.was-validated .form-check-input:valid:checked{background-color:#198754}.form-check-input.is-valid:focus,.was-validated .form-check-input:valid:focus{box-shadow:0 0 0 .25rem rgba(25,135,84,.25)}.form-check-input.is-valid~.form-check-label,.was-validated .form-check-input:valid~.form-check-label{color:#198754}.form-check-inline 
.form-check-input~.valid-feedback{margin-right:.5em}.input-group .form-control.is-valid,.input-group .form-select.is-valid,.was-validated .input-group .form-control:valid,.was-validated .input-group .form-select:valid{z-index:1}.input-group .form-control.is-valid:focus,.input-group .form-select.is-valid:focus,.was-validated .input-group .form-control:valid:focus,.was-validated .input-group .form-select:valid:focus{z-index:3}.invalid-feedback{display:none;width:100%;margin-top:.25rem;font-size:.875em;color:#dc3545}.invalid-tooltip{position:absolute;top:100%;z-index:5;display:none;max-width:100%;padding:.25rem .5rem;margin-top:.1rem;font-size:.875rem;color:#fff;background-color:rgba(220,53,69,.9);border-radius:.25rem}.is-invalid~.invalid-feedback,.is-invalid~.invalid-tooltip,.was-validated :invalid~.invalid-feedback,.was-validated :invalid~.invalid-tooltip{display:block}.form-control.is-invalid,.was-validated .form-control:invalid{border-color:#dc3545;padding-left:calc(1.5em + .75rem);background-image:url("data:image/svg+xml,%3csvg xmlns='http://www.w3.org/2000/svg' viewBox='0 0 12 12' width='12' height='12' fill='none' stroke='%23dc3545'%3e%3ccircle cx='6' cy='6' r='4.5'/%3e%3cpath stroke-linejoin='round' d='M5.8 3.6h.4L6 6.5z'/%3e%3ccircle cx='6' cy='8.2' r='.6' fill='%23dc3545' stroke='none'/%3e%3c/svg%3e");background-repeat:no-repeat;background-position:left calc(.375em + .1875rem) center;background-size:calc(.75em + .375rem) calc(.75em + .375rem)}.form-control.is-invalid:focus,.was-validated .form-control:invalid:focus{border-color:#dc3545;box-shadow:0 0 0 .25rem rgba(220,53,69,.25)}.was-validated textarea.form-control:invalid,textarea.form-control.is-invalid{padding-left:calc(1.5em + .75rem);background-position:top calc(.375em + .1875rem) left calc(.375em + .1875rem)}.form-select.is-invalid,.was-validated .form-select:invalid{border-color:#dc3545}.form-select.is-invalid:not([multiple]):not([size]),.form-select.is-invalid:not([multiple])[size="1"],.was-validated .form-select:invalid:not([multiple]):not([size]),.was-validated .form-select:invalid:not([multiple])[size="1"]{padding-left:4.125rem;background-image:url("data:image/svg+xml,%3csvg xmlns='http://www.w3.org/2000/svg' viewBox='0 0 16 16'%3e%3cpath fill='none' stroke='%23343a40' stroke-linecap='round' stroke-linejoin='round' stroke-width='2' d='M2 5l6 6 6-6'/%3e%3c/svg%3e"),url("data:image/svg+xml,%3csvg xmlns='http://www.w3.org/2000/svg' viewBox='0 0 12 12' width='12' height='12' fill='none' stroke='%23dc3545'%3e%3ccircle cx='6' cy='6' r='4.5'/%3e%3cpath stroke-linejoin='round' d='M5.8 3.6h.4L6 6.5z'/%3e%3ccircle cx='6' cy='8.2' r='.6' fill='%23dc3545' stroke='none'/%3e%3c/svg%3e");background-position:left .75rem center,center left 2.25rem;background-size:16px 12px,calc(.75em + .375rem) calc(.75em + .375rem)}.form-select.is-invalid:focus,.was-validated .form-select:invalid:focus{border-color:#dc3545;box-shadow:0 0 0 .25rem rgba(220,53,69,.25)}.form-check-input.is-invalid,.was-validated .form-check-input:invalid{border-color:#dc3545}.form-check-input.is-invalid:checked,.was-validated .form-check-input:invalid:checked{background-color:#dc3545}.form-check-input.is-invalid:focus,.was-validated .form-check-input:invalid:focus{box-shadow:0 0 0 .25rem rgba(220,53,69,.25)}.form-check-input.is-invalid~.form-check-label,.was-validated .form-check-input:invalid~.form-check-label{color:#dc3545}.form-check-inline .form-check-input~.invalid-feedback{margin-right:.5em}.input-group .form-control.is-invalid,.input-group 
.form-select.is-invalid,.was-validated .input-group .form-control:invalid,.was-validated .input-group .form-select:invalid{z-index:2}.input-group .form-control.is-invalid:focus,.input-group .form-select.is-invalid:focus,.was-validated .input-group .form-control:invalid:focus,.was-validated .input-group .form-select:invalid:focus{z-index:3}.btn{display:inline-block;font-weight:400;line-height:1.5;color:#212529;text-align:center;text-decoration:none;vertical-align:middle;cursor:pointer;-webkit-user-select:none;-moz-user-select:none;user-select:none;background-color:transparent;border:1px solid transparent;padding:.375rem .75rem;font-size:1rem;border-radius:.25rem;transition:color .15s ease-in-out,background-color .15s ease-in-out,border-color .15s ease-in-out,box-shadow .15s ease-in-out}@media (prefers-reduced-motion:reduce){.btn{transition:none}}.btn:hover{color:#212529}.btn-check:focus+.btn,.btn:focus{outline:0;box-shadow:0 0 0 .25rem rgba(13,110,253,.25)}.btn.disabled,.btn:disabled,fieldset:disabled .btn{pointer-events:none;opacity:.65}.btn-primary{color:#fff;background-color:#0d6efd;border-color:#0d6efd}.btn-primary:hover{color:#fff;background-color:#0b5ed7;border-color:#0a58ca}.btn-check:focus+.btn-primary,.btn-primary:focus{color:#fff;background-color:#0b5ed7;border-color:#0a58ca;box-shadow:0 0 0 .25rem rgba(49,132,253,.5)}.btn-check:active+.btn-primary,.btn-check:checked+.btn-primary,.btn-primary.active,.btn-primary:active,.show>.btn-primary.dropdown-toggle{color:#fff;background-color:#0a58ca;border-color:#0a53be}.btn-check:active+.btn-primary:focus,.btn-check:checked+.btn-primary:focus,.btn-primary.active:focus,.btn-primary:active:focus,.show>.btn-primary.dropdown-toggle:focus{box-shadow:0 0 0 .25rem rgba(49,132,253,.5)}.btn-primary.disabled,.btn-primary:disabled{color:#fff;background-color:#0d6efd;border-color:#0d6efd}.btn-secondary{color:#fff;background-color:#6c757d;border-color:#6c757d}.btn-secondary:hover{color:#fff;background-color:#5c636a;border-color:#565e64}.btn-check:focus+.btn-secondary,.btn-secondary:focus{color:#fff;background-color:#5c636a;border-color:#565e64;box-shadow:0 0 0 .25rem rgba(130,138,145,.5)}.btn-check:active+.btn-secondary,.btn-check:checked+.btn-secondary,.btn-secondary.active,.btn-secondary:active,.show>.btn-secondary.dropdown-toggle{color:#fff;background-color:#565e64;border-color:#51585e}.btn-check:active+.btn-secondary:focus,.btn-check:checked+.btn-secondary:focus,.btn-secondary.active:focus,.btn-secondary:active:focus,.show>.btn-secondary.dropdown-toggle:focus{box-shadow:0 0 0 .25rem rgba(130,138,145,.5)}.btn-secondary.disabled,.btn-secondary:disabled{color:#fff;background-color:#6c757d;border-color:#6c757d}.btn-success{color:#fff;background-color:#198754;border-color:#198754}.btn-success:hover{color:#fff;background-color:#157347;border-color:#146c43}.btn-check:focus+.btn-success,.btn-success:focus{color:#fff;background-color:#157347;border-color:#146c43;box-shadow:0 0 0 .25rem rgba(60,153,110,.5)}.btn-check:active+.btn-success,.btn-check:checked+.btn-success,.btn-success.active,.btn-success:active,.show>.btn-success.dropdown-toggle{color:#fff;background-color:#146c43;border-color:#13653f}.btn-check:active+.btn-success:focus,.btn-check:checked+.btn-success:focus,.btn-success.active:focus,.btn-success:active:focus,.show>.btn-success.dropdown-toggle:focus{box-shadow:0 0 0 .25rem 
rgba(60,153,110,.5)}.btn-success.disabled,.btn-success:disabled{color:#fff;background-color:#198754;border-color:#198754}.btn-info{color:#000;background-color:#0dcaf0;border-color:#0dcaf0}.btn-info:hover{color:#000;background-color:#31d2f2;border-color:#25cff2}.btn-check:focus+.btn-info,.btn-info:focus{color:#000;background-color:#31d2f2;border-color:#25cff2;box-shadow:0 0 0 .25rem rgba(11,172,204,.5)}.btn-check:active+.btn-info,.btn-check:checked+.btn-info,.btn-info.active,.btn-info:active,.show>.btn-info.dropdown-toggle{color:#000;background-color:#3dd5f3;border-color:#25cff2}.btn-check:active+.btn-info:focus,.btn-check:checked+.btn-info:focus,.btn-info.active:focus,.btn-info:active:focus,.show>.btn-info.dropdown-toggle:focus{box-shadow:0 0 0 .25rem rgba(11,172,204,.5)}.btn-info.disabled,.btn-info:disabled{color:#000;background-color:#0dcaf0;border-color:#0dcaf0}.btn-warning{color:#000;background-color:#ffc107;border-color:#ffc107}.btn-warning:hover{color:#000;background-color:#ffca2c;border-color:#ffc720}.btn-check:focus+.btn-warning,.btn-warning:focus{color:#000;background-color:#ffca2c;border-color:#ffc720;box-shadow:0 0 0 .25rem rgba(217,164,6,.5)}.btn-check:active+.btn-warning,.btn-check:checked+.btn-warning,.btn-warning.active,.btn-warning:active,.show>.btn-warning.dropdown-toggle{color:#000;background-color:#ffcd39;border-color:#ffc720}.btn-check:active+.btn-warning:focus,.btn-check:checked+.btn-warning:focus,.btn-warning.active:focus,.btn-warning:active:focus,.show>.btn-warning.dropdown-toggle:focus{box-shadow:0 0 0 .25rem rgba(217,164,6,.5)}.btn-warning.disabled,.btn-warning:disabled{color:#000;background-color:#ffc107;border-color:#ffc107}.btn-danger{color:#fff;background-color:#dc3545;border-color:#dc3545}.btn-danger:hover{color:#fff;background-color:#bb2d3b;border-color:#b02a37}.btn-check:focus+.btn-danger,.btn-danger:focus{color:#fff;background-color:#bb2d3b;border-color:#b02a37;box-shadow:0 0 0 .25rem rgba(225,83,97,.5)}.btn-check:active+.btn-danger,.btn-check:checked+.btn-danger,.btn-danger.active,.btn-danger:active,.show>.btn-danger.dropdown-toggle{color:#fff;background-color:#b02a37;border-color:#a52834}.btn-check:active+.btn-danger:focus,.btn-check:checked+.btn-danger:focus,.btn-danger.active:focus,.btn-danger:active:focus,.show>.btn-danger.dropdown-toggle:focus{box-shadow:0 0 0 .25rem rgba(225,83,97,.5)}.btn-danger.disabled,.btn-danger:disabled{color:#fff;background-color:#dc3545;border-color:#dc3545}.btn-light{color:#000;background-color:#f8f9fa;border-color:#f8f9fa}.btn-light:hover{color:#000;background-color:#f9fafb;border-color:#f9fafb}.btn-check:focus+.btn-light,.btn-light:focus{color:#000;background-color:#f9fafb;border-color:#f9fafb;box-shadow:0 0 0 .25rem rgba(211,212,213,.5)}.btn-check:active+.btn-light,.btn-check:checked+.btn-light,.btn-light.active,.btn-light:active,.show>.btn-light.dropdown-toggle{color:#000;background-color:#f9fafb;border-color:#f9fafb}.btn-check:active+.btn-light:focus,.btn-check:checked+.btn-light:focus,.btn-light.active:focus,.btn-light:active:focus,.show>.btn-light.dropdown-toggle:focus{box-shadow:0 0 0 .25rem rgba(211,212,213,.5)}.btn-light.disabled,.btn-light:disabled{color:#000;background-color:#f8f9fa;border-color:#f8f9fa}.btn-dark{color:#fff;background-color:#212529;border-color:#212529}.btn-dark:hover{color:#fff;background-color:#1c1f23;border-color:#1a1e21}.btn-check:focus+.btn-dark,.btn-dark:focus{color:#fff;background-color:#1c1f23;border-color:#1a1e21;box-shadow:0 0 0 .25rem 
rgba(66,70,73,.5)}.btn-check:active+.btn-dark,.btn-check:checked+.btn-dark,.btn-dark.active,.btn-dark:active,.show>.btn-dark.dropdown-toggle{color:#fff;background-color:#1a1e21;border-color:#191c1f}.btn-check:active+.btn-dark:focus,.btn-check:checked+.btn-dark:focus,.btn-dark.active:focus,.btn-dark:active:focus,.show>.btn-dark.dropdown-toggle:focus{box-shadow:0 0 0 .25rem rgba(66,70,73,.5)}.btn-dark.disabled,.btn-dark:disabled{color:#fff;background-color:#212529;border-color:#212529}.btn-outline-primary{color:#0d6efd;border-color:#0d6efd}.btn-outline-primary:hover{color:#fff;background-color:#0d6efd;border-color:#0d6efd}.btn-check:focus+.btn-outline-primary,.btn-outline-primary:focus{box-shadow:0 0 0 .25rem rgba(13,110,253,.5)}.btn-check:active+.btn-outline-primary,.btn-check:checked+.btn-outline-primary,.btn-outline-primary.active,.btn-outline-primary.dropdown-toggle.show,.btn-outline-primary:active{color:#fff;background-color:#0d6efd;border-color:#0d6efd}.btn-check:active+.btn-outline-primary:focus,.btn-check:checked+.btn-outline-primary:focus,.btn-outline-primary.active:focus,.btn-outline-primary.dropdown-toggle.show:focus,.btn-outline-primary:active:focus{box-shadow:0 0 0 .25rem rgba(13,110,253,.5)}.btn-outline-primary.disabled,.btn-outline-primary:disabled{color:#0d6efd;background-color:transparent}.btn-outline-secondary{color:#6c757d;border-color:#6c757d}.btn-outline-secondary:hover{color:#fff;background-color:#6c757d;border-color:#6c757d}.btn-check:focus+.btn-outline-secondary,.btn-outline-secondary:focus{box-shadow:0 0 0 .25rem rgba(108,117,125,.5)}.btn-check:active+.btn-outline-secondary,.btn-check:checked+.btn-outline-secondary,.btn-outline-secondary.active,.btn-outline-secondary.dropdown-toggle.show,.btn-outline-secondary:active{color:#fff;background-color:#6c757d;border-color:#6c757d}.btn-check:active+.btn-outline-secondary:focus,.btn-check:checked+.btn-outline-secondary:focus,.btn-outline-secondary.active:focus,.btn-outline-secondary.dropdown-toggle.show:focus,.btn-outline-secondary:active:focus{box-shadow:0 0 0 .25rem rgba(108,117,125,.5)}.btn-outline-secondary.disabled,.btn-outline-secondary:disabled{color:#6c757d;background-color:transparent}.btn-outline-success{color:#198754;border-color:#198754}.btn-outline-success:hover{color:#fff;background-color:#198754;border-color:#198754}.btn-check:focus+.btn-outline-success,.btn-outline-success:focus{box-shadow:0 0 0 .25rem rgba(25,135,84,.5)}.btn-check:active+.btn-outline-success,.btn-check:checked+.btn-outline-success,.btn-outline-success.active,.btn-outline-success.dropdown-toggle.show,.btn-outline-success:active{color:#fff;background-color:#198754;border-color:#198754}.btn-check:active+.btn-outline-success:focus,.btn-check:checked+.btn-outline-success:focus,.btn-outline-success.active:focus,.btn-outline-success.dropdown-toggle.show:focus,.btn-outline-success:active:focus{box-shadow:0 0 0 .25rem rgba(25,135,84,.5)}.btn-outline-success.disabled,.btn-outline-success:disabled{color:#198754;background-color:transparent}.btn-outline-info{color:#0dcaf0;border-color:#0dcaf0}.btn-outline-info:hover{color:#000;background-color:#0dcaf0;border-color:#0dcaf0}.btn-check:focus+.btn-outline-info,.btn-outline-info:focus{box-shadow:0 0 0 .25rem 
rgba(13,202,240,.5)}.btn-check:active+.btn-outline-info,.btn-check:checked+.btn-outline-info,.btn-outline-info.active,.btn-outline-info.dropdown-toggle.show,.btn-outline-info:active{color:#000;background-color:#0dcaf0;border-color:#0dcaf0}.btn-check:active+.btn-outline-info:focus,.btn-check:checked+.btn-outline-info:focus,.btn-outline-info.active:focus,.btn-outline-info.dropdown-toggle.show:focus,.btn-outline-info:active:focus{box-shadow:0 0 0 .25rem rgba(13,202,240,.5)}.btn-outline-info.disabled,.btn-outline-info:disabled{color:#0dcaf0;background-color:transparent}.btn-outline-warning{color:#ffc107;border-color:#ffc107}.btn-outline-warning:hover{color:#000;background-color:#ffc107;border-color:#ffc107}.btn-check:focus+.btn-outline-warning,.btn-outline-warning:focus{box-shadow:0 0 0 .25rem rgba(255,193,7,.5)}.btn-check:active+.btn-outline-warning,.btn-check:checked+.btn-outline-warning,.btn-outline-warning.active,.btn-outline-warning.dropdown-toggle.show,.btn-outline-warning:active{color:#000;background-color:#ffc107;border-color:#ffc107}.btn-check:active+.btn-outline-warning:focus,.btn-check:checked+.btn-outline-warning:focus,.btn-outline-warning.active:focus,.btn-outline-warning.dropdown-toggle.show:focus,.btn-outline-warning:active:focus{box-shadow:0 0 0 .25rem rgba(255,193,7,.5)}.btn-outline-warning.disabled,.btn-outline-warning:disabled{color:#ffc107;background-color:transparent}.btn-outline-danger{color:#dc3545;border-color:#dc3545}.btn-outline-danger:hover{color:#fff;background-color:#dc3545;border-color:#dc3545}.btn-check:focus+.btn-outline-danger,.btn-outline-danger:focus{box-shadow:0 0 0 .25rem rgba(220,53,69,.5)}.btn-check:active+.btn-outline-danger,.btn-check:checked+.btn-outline-danger,.btn-outline-danger.active,.btn-outline-danger.dropdown-toggle.show,.btn-outline-danger:active{color:#fff;background-color:#dc3545;border-color:#dc3545}.btn-check:active+.btn-outline-danger:focus,.btn-check:checked+.btn-outline-danger:focus,.btn-outline-danger.active:focus,.btn-outline-danger.dropdown-toggle.show:focus,.btn-outline-danger:active:focus{box-shadow:0 0 0 .25rem rgba(220,53,69,.5)}.btn-outline-danger.disabled,.btn-outline-danger:disabled{color:#dc3545;background-color:transparent}.btn-outline-light{color:#f8f9fa;border-color:#f8f9fa}.btn-outline-light:hover{color:#000;background-color:#f8f9fa;border-color:#f8f9fa}.btn-check:focus+.btn-outline-light,.btn-outline-light:focus{box-shadow:0 0 0 .25rem rgba(248,249,250,.5)}.btn-check:active+.btn-outline-light,.btn-check:checked+.btn-outline-light,.btn-outline-light.active,.btn-outline-light.dropdown-toggle.show,.btn-outline-light:active{color:#000;background-color:#f8f9fa;border-color:#f8f9fa}.btn-check:active+.btn-outline-light:focus,.btn-check:checked+.btn-outline-light:focus,.btn-outline-light.active:focus,.btn-outline-light.dropdown-toggle.show:focus,.btn-outline-light:active:focus{box-shadow:0 0 0 .25rem rgba(248,249,250,.5)}.btn-outline-light.disabled,.btn-outline-light:disabled{color:#f8f9fa;background-color:transparent}.btn-outline-dark{color:#212529;border-color:#212529}.btn-outline-dark:hover{color:#fff;background-color:#212529;border-color:#212529}.btn-check:focus+.btn-outline-dark,.btn-outline-dark:focus{box-shadow:0 0 0 .25rem 
rgba(33,37,41,.5)}.btn-check:active+.btn-outline-dark,.btn-check:checked+.btn-outline-dark,.btn-outline-dark.active,.btn-outline-dark.dropdown-toggle.show,.btn-outline-dark:active{color:#fff;background-color:#212529;border-color:#212529}.btn-check:active+.btn-outline-dark:focus,.btn-check:checked+.btn-outline-dark:focus,.btn-outline-dark.active:focus,.btn-outline-dark.dropdown-toggle.show:focus,.btn-outline-dark:active:focus{box-shadow:0 0 0 .25rem rgba(33,37,41,.5)}.btn-outline-dark.disabled,.btn-outline-dark:disabled{color:#212529;background-color:transparent}.btn-link{font-weight:400;color:#0d6efd;text-decoration:underline}.btn-link:hover{color:#0a58ca}.btn-link.disabled,.btn-link:disabled{color:#6c757d}.btn-group-lg>.btn,.btn-lg{padding:.5rem 1rem;font-size:1.25rem;border-radius:.3rem}.btn-group-sm>.btn,.btn-sm{padding:.25rem .5rem;font-size:.875rem;border-radius:.2rem}.fade{transition:opacity .15s linear}@media (prefers-reduced-motion:reduce){.fade{transition:none}}.fade:not(.show){opacity:0}.collapse:not(.show){display:none}.collapsing{height:0;overflow:hidden;transition:height .35s ease}@media (prefers-reduced-motion:reduce){.collapsing{transition:none}}.collapsing.collapse-horizontal{width:0;height:auto;transition:width .35s ease}@media (prefers-reduced-motion:reduce){.collapsing.collapse-horizontal{transition:none}}.dropdown,.dropend,.dropstart,.dropup{position:relative}.dropdown-toggle{white-space:nowrap}.dropdown-toggle::after{display:inline-block;margin-right:.255em;vertical-align:.255em;content:"";border-top:.3em solid;border-left:.3em solid transparent;border-bottom:0;border-right:.3em solid transparent}.dropdown-toggle:empty::after{margin-right:0}.dropdown-menu{position:absolute;z-index:1000;display:none;min-width:10rem;padding:.5rem 0;margin:0;font-size:1rem;color:#212529;text-align:right;list-style:none;background-color:#fff;background-clip:padding-box;border:1px solid rgba(0,0,0,.15);border-radius:.25rem}.dropdown-menu[data-bs-popper]{top:100%;right:0;margin-top:.125rem}.dropdown-menu-start{--bs-position:start}.dropdown-menu-start[data-bs-popper]{left:auto;right:0}.dropdown-menu-end{--bs-position:end}.dropdown-menu-end[data-bs-popper]{left:0;right:auto}@media (min-width:576px){.dropdown-menu-sm-start{--bs-position:start}.dropdown-menu-sm-start[data-bs-popper]{left:auto;right:0}.dropdown-menu-sm-end{--bs-position:end}.dropdown-menu-sm-end[data-bs-popper]{left:0;right:auto}}@media (min-width:768px){.dropdown-menu-md-start{--bs-position:start}.dropdown-menu-md-start[data-bs-popper]{left:auto;right:0}.dropdown-menu-md-end{--bs-position:end}.dropdown-menu-md-end[data-bs-popper]{left:0;right:auto}}@media (min-width:992px){.dropdown-menu-lg-start{--bs-position:start}.dropdown-menu-lg-start[data-bs-popper]{left:auto;right:0}.dropdown-menu-lg-end{--bs-position:end}.dropdown-menu-lg-end[data-bs-popper]{left:0;right:auto}}@media (min-width:1200px){.dropdown-menu-xl-start{--bs-position:start}.dropdown-menu-xl-start[data-bs-popper]{left:auto;right:0}.dropdown-menu-xl-end{--bs-position:end}.dropdown-menu-xl-end[data-bs-popper]{left:0;right:auto}}@media (min-width:1400px){.dropdown-menu-xxl-start{--bs-position:start}.dropdown-menu-xxl-start[data-bs-popper]{left:auto;right:0}.dropdown-menu-xxl-end{--bs-position:end}.dropdown-menu-xxl-end[data-bs-popper]{left:0;right:auto}}.dropup .dropdown-menu[data-bs-popper]{top:auto;bottom:100%;margin-top:0;margin-bottom:.125rem}.dropup 
.dropdown-toggle::after{display:inline-block;margin-right:.255em;vertical-align:.255em;content:"";border-top:0;border-left:.3em solid transparent;border-bottom:.3em solid;border-right:.3em solid transparent}.dropup .dropdown-toggle:empty::after{margin-right:0}.dropend .dropdown-menu[data-bs-popper]{top:0;left:auto;right:100%;margin-top:0;margin-right:.125rem}.dropend .dropdown-toggle::after{display:inline-block;margin-right:.255em;vertical-align:.255em;content:"";border-top:.3em solid transparent;border-left:0;border-bottom:.3em solid transparent;border-right:.3em solid}.dropend .dropdown-toggle:empty::after{margin-right:0}.dropend .dropdown-toggle::after{vertical-align:0}.dropstart .dropdown-menu[data-bs-popper]{top:0;left:100%;right:auto;margin-top:0;margin-left:.125rem}.dropstart .dropdown-toggle::after{display:inline-block;margin-right:.255em;vertical-align:.255em;content:""}.dropstart .dropdown-toggle::after{display:none}.dropstart .dropdown-toggle::before{display:inline-block;margin-left:.255em;vertical-align:.255em;content:"";border-top:.3em solid transparent;border-left:.3em solid;border-bottom:.3em solid transparent}.dropstart .dropdown-toggle:empty::after{margin-right:0}.dropstart .dropdown-toggle::before{vertical-align:0}.dropdown-divider{height:0;margin:.5rem 0;overflow:hidden;border-top:1px solid rgba(0,0,0,.15)}.dropdown-item{display:block;width:100%;padding:.25rem 1rem;clear:both;font-weight:400;color:#212529;text-align:inherit;text-decoration:none;white-space:nowrap;background-color:transparent;border:0}.dropdown-item:focus,.dropdown-item:hover{color:#1e2125;background-color:#e9ecef}.dropdown-item.active,.dropdown-item:active{color:#fff;text-decoration:none;background-color:#0d6efd}.dropdown-item.disabled,.dropdown-item:disabled{color:#adb5bd;pointer-events:none;background-color:transparent}.dropdown-menu.show{display:block}.dropdown-header{display:block;padding:.5rem 1rem;margin-bottom:0;font-size:.875rem;color:#6c757d;white-space:nowrap}.dropdown-item-text{display:block;padding:.25rem 1rem;color:#212529}.dropdown-menu-dark{color:#dee2e6;background-color:#343a40;border-color:rgba(0,0,0,.15)}.dropdown-menu-dark .dropdown-item{color:#dee2e6}.dropdown-menu-dark .dropdown-item:focus,.dropdown-menu-dark .dropdown-item:hover{color:#fff;background-color:rgba(255,255,255,.15)}.dropdown-menu-dark .dropdown-item.active,.dropdown-menu-dark .dropdown-item:active{color:#fff;background-color:#0d6efd}.dropdown-menu-dark .dropdown-item.disabled,.dropdown-menu-dark .dropdown-item:disabled{color:#adb5bd}.dropdown-menu-dark .dropdown-divider{border-color:rgba(0,0,0,.15)}.dropdown-menu-dark .dropdown-item-text{color:#dee2e6}.dropdown-menu-dark .dropdown-header{color:#adb5bd}.btn-group,.btn-group-vertical{position:relative;display:inline-flex;vertical-align:middle}.btn-group-vertical>.btn,.btn-group>.btn{position:relative;flex:1 1 auto}.btn-group-vertical>.btn-check:checked+.btn,.btn-group-vertical>.btn-check:focus+.btn,.btn-group-vertical>.btn.active,.btn-group-vertical>.btn:active,.btn-group-vertical>.btn:focus,.btn-group-vertical>.btn:hover,.btn-group>.btn-check:checked+.btn,.btn-group>.btn-check:focus+.btn,.btn-group>.btn.active,.btn-group>.btn:active,.btn-group>.btn:focus,.btn-group>.btn:hover{z-index:1}.btn-toolbar{display:flex;flex-wrap:wrap;justify-content:flex-start}.btn-toolbar 
.input-group{width:auto}.btn-group>.btn-group:not(:first-child),.btn-group>.btn:not(:first-child){margin-right:-1px}.btn-group>.btn-group:not(:last-child)>.btn,.btn-group>.btn:not(:last-child):not(.dropdown-toggle){border-top-left-radius:0;border-bottom-left-radius:0}.btn-group>.btn-group:not(:first-child)>.btn,.btn-group>.btn:nth-child(n+3),.btn-group>:not(.btn-check)+.btn{border-top-right-radius:0;border-bottom-right-radius:0}.dropdown-toggle-split{padding-left:.5625rem;padding-right:.5625rem}.dropdown-toggle-split::after,.dropend .dropdown-toggle-split::after,.dropup .dropdown-toggle-split::after{margin-right:0}.dropstart .dropdown-toggle-split::before{margin-left:0}.btn-group-sm>.btn+.dropdown-toggle-split,.btn-sm+.dropdown-toggle-split{padding-left:.375rem;padding-right:.375rem}.btn-group-lg>.btn+.dropdown-toggle-split,.btn-lg+.dropdown-toggle-split{padding-left:.75rem;padding-right:.75rem}.btn-group-vertical{flex-direction:column;align-items:flex-start;justify-content:center}.btn-group-vertical>.btn,.btn-group-vertical>.btn-group{width:100%}.btn-group-vertical>.btn-group:not(:first-child),.btn-group-vertical>.btn:not(:first-child){margin-top:-1px}.btn-group-vertical>.btn-group:not(:last-child)>.btn,.btn-group-vertical>.btn:not(:last-child):not(.dropdown-toggle){border-bottom-left-radius:0;border-bottom-right-radius:0}.btn-group-vertical>.btn-group:not(:first-child)>.btn,.btn-group-vertical>.btn~.btn{border-top-right-radius:0;border-top-left-radius:0}.nav{display:flex;flex-wrap:wrap;padding-right:0;margin-bottom:0;list-style:none}.nav-link{display:block;padding:.5rem 1rem;color:#0d6efd;text-decoration:none;transition:color .15s ease-in-out,background-color .15s ease-in-out,border-color .15s ease-in-out}@media (prefers-reduced-motion:reduce){.nav-link{transition:none}}.nav-link:focus,.nav-link:hover{color:#0a58ca}.nav-link.disabled{color:#6c757d;pointer-events:none;cursor:default}.nav-tabs{border-bottom:1px solid #dee2e6}.nav-tabs .nav-link{margin-bottom:-1px;background:0 0;border:1px solid transparent;border-top-right-radius:.25rem;border-top-left-radius:.25rem}.nav-tabs .nav-link:focus,.nav-tabs .nav-link:hover{border-color:#e9ecef #e9ecef #dee2e6;isolation:isolate}.nav-tabs .nav-link.disabled{color:#6c757d;background-color:transparent;border-color:transparent}.nav-tabs .nav-item.show .nav-link,.nav-tabs .nav-link.active{color:#495057;background-color:#fff;border-color:#dee2e6 #dee2e6 #fff}.nav-tabs .dropdown-menu{margin-top:-1px;border-top-right-radius:0;border-top-left-radius:0}.nav-pills .nav-link{background:0 0;border:0;border-radius:.25rem}.nav-pills .nav-link.active,.nav-pills .show>.nav-link{color:#fff;background-color:#0d6efd}.nav-fill .nav-item,.nav-fill>.nav-link{flex:1 1 auto;text-align:center}.nav-justified .nav-item,.nav-justified>.nav-link{flex-basis:0;flex-grow:1;text-align:center}.nav-fill .nav-item .nav-link,.nav-justified .nav-item 
.nav-link{width:100%}.tab-content>.tab-pane{display:none}.tab-content>.active{display:block}.navbar{position:relative;display:flex;flex-wrap:wrap;align-items:center;justify-content:space-between;padding-top:.5rem;padding-bottom:.5rem}.navbar>.container,.navbar>.container-fluid,.navbar>.container-lg,.navbar>.container-md,.navbar>.container-sm,.navbar>.container-xl,.navbar>.container-xxl{display:flex;flex-wrap:inherit;align-items:center;justify-content:space-between}.navbar-brand{padding-top:.3125rem;padding-bottom:.3125rem;margin-left:1rem;font-size:1.25rem;text-decoration:none;white-space:nowrap}.navbar-nav{display:flex;flex-direction:column;padding-right:0;margin-bottom:0;list-style:none}.navbar-nav .nav-link{padding-left:0;padding-right:0}.navbar-nav .dropdown-menu{position:static}.navbar-text{padding-top:.5rem;padding-bottom:.5rem}.navbar-collapse{flex-basis:100%;flex-grow:1;align-items:center}.navbar-toggler{padding:.25rem .75rem;font-size:1.25rem;line-height:1;background-color:transparent;border:1px solid transparent;border-radius:.25rem;transition:box-shadow .15s ease-in-out}@media (prefers-reduced-motion:reduce){.navbar-toggler{transition:none}}.navbar-toggler:hover{text-decoration:none}.navbar-toggler:focus{text-decoration:none;outline:0;box-shadow:0 0 0 .25rem}.navbar-toggler-icon{display:inline-block;width:1.5em;height:1.5em;vertical-align:middle;background-repeat:no-repeat;background-position:center;background-size:100%}.navbar-nav-scroll{max-height:var(--bs-scroll-height,75vh);overflow-y:auto}@media (min-width:576px){.navbar-expand-sm{flex-wrap:nowrap;justify-content:flex-start}.navbar-expand-sm .navbar-nav{flex-direction:row}.navbar-expand-sm .navbar-nav .dropdown-menu{position:absolute}.navbar-expand-sm .navbar-nav .nav-link{padding-left:.5rem;padding-right:.5rem}.navbar-expand-sm .navbar-nav-scroll{overflow:visible}.navbar-expand-sm .navbar-collapse{display:flex!important;flex-basis:auto}.navbar-expand-sm .navbar-toggler{display:none}.navbar-expand-sm .offcanvas-header{display:none}.navbar-expand-sm .offcanvas{position:inherit;bottom:0;z-index:1000;flex-grow:1;visibility:visible!important;background-color:transparent;border-left:0;border-right:0;transition:none;transform:none}.navbar-expand-sm .offcanvas-bottom,.navbar-expand-sm .offcanvas-top{height:auto;border-top:0;border-bottom:0}.navbar-expand-sm .offcanvas-body{display:flex;flex-grow:0;padding:0;overflow-y:visible}}@media (min-width:768px){.navbar-expand-md{flex-wrap:nowrap;justify-content:flex-start}.navbar-expand-md .navbar-nav{flex-direction:row}.navbar-expand-md .navbar-nav .dropdown-menu{position:absolute}.navbar-expand-md .navbar-nav .nav-link{padding-left:.5rem;padding-right:.5rem}.navbar-expand-md .navbar-nav-scroll{overflow:visible}.navbar-expand-md .navbar-collapse{display:flex!important;flex-basis:auto}.navbar-expand-md .navbar-toggler{display:none}.navbar-expand-md .offcanvas-header{display:none}.navbar-expand-md .offcanvas{position:inherit;bottom:0;z-index:1000;flex-grow:1;visibility:visible!important;background-color:transparent;border-left:0;border-right:0;transition:none;transform:none}.navbar-expand-md .offcanvas-bottom,.navbar-expand-md .offcanvas-top{height:auto;border-top:0;border-bottom:0}.navbar-expand-md .offcanvas-body{display:flex;flex-grow:0;padding:0;overflow-y:visible}}@media (min-width:992px){.navbar-expand-lg{flex-wrap:nowrap;justify-content:flex-start}.navbar-expand-lg .navbar-nav{flex-direction:row}.navbar-expand-lg .navbar-nav .dropdown-menu{position:absolute}.navbar-expand-lg 
.navbar-nav .nav-link{padding-left:.5rem;padding-right:.5rem}.navbar-expand-lg .navbar-nav-scroll{overflow:visible}.navbar-expand-lg .navbar-collapse{display:flex!important;flex-basis:auto}.navbar-expand-lg .navbar-toggler{display:none}.navbar-expand-lg .offcanvas-header{display:none}.navbar-expand-lg .offcanvas{position:inherit;bottom:0;z-index:1000;flex-grow:1;visibility:visible!important;background-color:transparent;border-left:0;border-right:0;transition:none;transform:none}.navbar-expand-lg .offcanvas-bottom,.navbar-expand-lg .offcanvas-top{height:auto;border-top:0;border-bottom:0}.navbar-expand-lg .offcanvas-body{display:flex;flex-grow:0;padding:0;overflow-y:visible}}@media (min-width:1200px){.navbar-expand-xl{flex-wrap:nowrap;justify-content:flex-start}.navbar-expand-xl .navbar-nav{flex-direction:row}.navbar-expand-xl .navbar-nav .dropdown-menu{position:absolute}.navbar-expand-xl .navbar-nav .nav-link{padding-left:.5rem;padding-right:.5rem}.navbar-expand-xl .navbar-nav-scroll{overflow:visible}.navbar-expand-xl .navbar-collapse{display:flex!important;flex-basis:auto}.navbar-expand-xl .navbar-toggler{display:none}.navbar-expand-xl .offcanvas-header{display:none}.navbar-expand-xl .offcanvas{position:inherit;bottom:0;z-index:1000;flex-grow:1;visibility:visible!important;background-color:transparent;border-left:0;border-right:0;transition:none;transform:none}.navbar-expand-xl .offcanvas-bottom,.navbar-expand-xl .offcanvas-top{height:auto;border-top:0;border-bottom:0}.navbar-expand-xl .offcanvas-body{display:flex;flex-grow:0;padding:0;overflow-y:visible}}@media (min-width:1400px){.navbar-expand-xxl{flex-wrap:nowrap;justify-content:flex-start}.navbar-expand-xxl .navbar-nav{flex-direction:row}.navbar-expand-xxl .navbar-nav .dropdown-menu{position:absolute}.navbar-expand-xxl .navbar-nav .nav-link{padding-left:.5rem;padding-right:.5rem}.navbar-expand-xxl .navbar-nav-scroll{overflow:visible}.navbar-expand-xxl .navbar-collapse{display:flex!important;flex-basis:auto}.navbar-expand-xxl .navbar-toggler{display:none}.navbar-expand-xxl .offcanvas-header{display:none}.navbar-expand-xxl .offcanvas{position:inherit;bottom:0;z-index:1000;flex-grow:1;visibility:visible!important;background-color:transparent;border-left:0;border-right:0;transition:none;transform:none}.navbar-expand-xxl .offcanvas-bottom,.navbar-expand-xxl .offcanvas-top{height:auto;border-top:0;border-bottom:0}.navbar-expand-xxl .offcanvas-body{display:flex;flex-grow:0;padding:0;overflow-y:visible}}.navbar-expand{flex-wrap:nowrap;justify-content:flex-start}.navbar-expand .navbar-nav{flex-direction:row}.navbar-expand .navbar-nav .dropdown-menu{position:absolute}.navbar-expand .navbar-nav .nav-link{padding-left:.5rem;padding-right:.5rem}.navbar-expand .navbar-nav-scroll{overflow:visible}.navbar-expand .navbar-collapse{display:flex!important;flex-basis:auto}.navbar-expand .navbar-toggler{display:none}.navbar-expand .offcanvas-header{display:none}.navbar-expand .offcanvas{position:inherit;bottom:0;z-index:1000;flex-grow:1;visibility:visible!important;background-color:transparent;border-left:0;border-right:0;transition:none;transform:none}.navbar-expand .offcanvas-bottom,.navbar-expand .offcanvas-top{height:auto;border-top:0;border-bottom:0}.navbar-expand .offcanvas-body{display:flex;flex-grow:0;padding:0;overflow-y:visible}.navbar-light .navbar-brand{color:rgba(0,0,0,.9)}.navbar-light .navbar-brand:focus,.navbar-light .navbar-brand:hover{color:rgba(0,0,0,.9)}.navbar-light .navbar-nav .nav-link{color:rgba(0,0,0,.55)}.navbar-light .navbar-nav 
.nav-link:focus,.navbar-light .navbar-nav .nav-link:hover{color:rgba(0,0,0,.7)}.navbar-light .navbar-nav .nav-link.disabled{color:rgba(0,0,0,.3)}.navbar-light .navbar-nav .nav-link.active,.navbar-light .navbar-nav .show>.nav-link{color:rgba(0,0,0,.9)}.navbar-light .navbar-toggler{color:rgba(0,0,0,.55);border-color:rgba(0,0,0,.1)}.navbar-light .navbar-toggler-icon{background-image:url("data:image/svg+xml,%3csvg xmlns='http://www.w3.org/2000/svg' viewBox='0 0 30 30'%3e%3cpath stroke='rgba%280, 0, 0, 0.55%29' stroke-linecap='round' stroke-miterlimit='10' stroke-width='2' d='M4 7h22M4 15h22M4 23h22'/%3e%3c/svg%3e")}.navbar-light .navbar-text{color:rgba(0,0,0,.55)}.navbar-light .navbar-text a,.navbar-light .navbar-text a:focus,.navbar-light .navbar-text a:hover{color:rgba(0,0,0,.9)}.navbar-dark .navbar-brand{color:#fff}.navbar-dark .navbar-brand:focus,.navbar-dark .navbar-brand:hover{color:#fff}.navbar-dark .navbar-nav .nav-link{color:rgba(255,255,255,.55)}.navbar-dark .navbar-nav .nav-link:focus,.navbar-dark .navbar-nav .nav-link:hover{color:rgba(255,255,255,.75)}.navbar-dark .navbar-nav .nav-link.disabled{color:rgba(255,255,255,.25)}.navbar-dark .navbar-nav .nav-link.active,.navbar-dark .navbar-nav .show>.nav-link{color:#fff}.navbar-dark .navbar-toggler{color:rgba(255,255,255,.55);border-color:rgba(255,255,255,.1)}.navbar-dark .navbar-toggler-icon{background-image:url("data:image/svg+xml,%3csvg xmlns='http://www.w3.org/2000/svg' viewBox='0 0 30 30'%3e%3cpath stroke='rgba%28255, 255, 255, 0.55%29' stroke-linecap='round' stroke-miterlimit='10' stroke-width='2' d='M4 7h22M4 15h22M4 23h22'/%3e%3c/svg%3e")}.navbar-dark .navbar-text{color:rgba(255,255,255,.55)}.navbar-dark .navbar-text a,.navbar-dark .navbar-text a:focus,.navbar-dark .navbar-text a:hover{color:#fff}.card{position:relative;display:flex;flex-direction:column;min-width:0;word-wrap:break-word;background-color:#fff;background-clip:border-box;border:1px solid rgba(0,0,0,.125);border-radius:.25rem}.card>hr{margin-left:0;margin-right:0}.card>.list-group{border-top:inherit;border-bottom:inherit}.card>.list-group:first-child{border-top-width:0;border-top-right-radius:calc(.25rem - 1px);border-top-left-radius:calc(.25rem - 1px)}.card>.list-group:last-child{border-bottom-width:0;border-bottom-left-radius:calc(.25rem - 1px);border-bottom-right-radius:calc(.25rem - 1px)}.card>.card-header+.list-group,.card>.list-group+.card-footer{border-top:0}.card-body{flex:1 1 auto;padding:1rem 1rem}.card-title{margin-bottom:.5rem}.card-subtitle{margin-top:-.25rem;margin-bottom:0}.card-text:last-child{margin-bottom:0}.card-link+.card-link{margin-right:1rem}.card-header{padding:.5rem 1rem;margin-bottom:0;background-color:rgba(0,0,0,.03);border-bottom:1px solid rgba(0,0,0,.125)}.card-header:first-child{border-radius:calc(.25rem - 1px) calc(.25rem - 1px) 0 0}.card-footer{padding:.5rem 1rem;background-color:rgba(0,0,0,.03);border-top:1px solid rgba(0,0,0,.125)}.card-footer:last-child{border-radius:0 0 calc(.25rem - 1px) calc(.25rem - 1px)}.card-header-tabs{margin-left:-.5rem;margin-bottom:-.5rem;margin-right:-.5rem;border-bottom:0}.card-header-pills{margin-left:-.5rem;margin-right:-.5rem}.card-img-overlay{position:absolute;top:0;left:0;bottom:0;right:0;padding:1rem;border-radius:calc(.25rem - 1px)}.card-img,.card-img-bottom,.card-img-top{width:100%}.card-img,.card-img-top{border-top-right-radius:calc(.25rem - 1px);border-top-left-radius:calc(.25rem - 1px)}.card-img,.card-img-bottom{border-bottom-left-radius:calc(.25rem - 
1px);border-bottom-right-radius:calc(.25rem - 1px)}.card-group>.card{margin-bottom:.75rem}@media (min-width:576px){.card-group{display:flex;flex-flow:row wrap}.card-group>.card{flex:1 0 0%;margin-bottom:0}.card-group>.card+.card{margin-right:0;border-right:0}.card-group>.card:not(:last-child){border-top-left-radius:0;border-bottom-left-radius:0}.card-group>.card:not(:last-child) .card-header,.card-group>.card:not(:last-child) .card-img-top{border-top-left-radius:0}.card-group>.card:not(:last-child) .card-footer,.card-group>.card:not(:last-child) .card-img-bottom{border-bottom-left-radius:0}.card-group>.card:not(:first-child){border-top-right-radius:0;border-bottom-right-radius:0}.card-group>.card:not(:first-child) .card-header,.card-group>.card:not(:first-child) .card-img-top{border-top-right-radius:0}.card-group>.card:not(:first-child) .card-footer,.card-group>.card:not(:first-child) .card-img-bottom{border-bottom-right-radius:0}}.accordion-button{position:relative;display:flex;align-items:center;width:100%;padding:1rem 1.25rem;font-size:1rem;color:#212529;text-align:right;background-color:#fff;border:0;border-radius:0;overflow-anchor:none;transition:color .15s ease-in-out,background-color .15s ease-in-out,border-color .15s ease-in-out,box-shadow .15s ease-in-out,border-radius .15s ease}@media (prefers-reduced-motion:reduce){.accordion-button{transition:none}}.accordion-button:not(.collapsed){color:#0c63e4;background-color:#e7f1ff;box-shadow:inset 0 -1px 0 rgba(0,0,0,.125)}.accordion-button:not(.collapsed)::after{background-image:url("data:image/svg+xml,%3csvg xmlns='http://www.w3.org/2000/svg' viewBox='0 0 16 16' fill='%230c63e4'%3e%3cpath fill-rule='evenodd' d='M1.646 4.646a.5.5 0 0 1 .708 0L8 10.293l5.646-5.647a.5.5 0 0 1 .708.708l-6 6a.5.5 0 0 1-.708 0l-6-6a.5.5 0 0 1 0-.708z'/%3e%3c/svg%3e");transform:rotate(180deg)}.accordion-button::after{flex-shrink:0;width:1.25rem;height:1.25rem;margin-right:auto;content:"";background-image:url("data:image/svg+xml,%3csvg xmlns='http://www.w3.org/2000/svg' viewBox='0 0 16 16' fill='%23212529'%3e%3cpath fill-rule='evenodd' d='M1.646 4.646a.5.5 0 0 1 .708 0L8 10.293l5.646-5.647a.5.5 0 0 1 .708.708l-6 6a.5.5 0 0 1-.708 0l-6-6a.5.5 0 0 1 0-.708z'/%3e%3c/svg%3e");background-repeat:no-repeat;background-size:1.25rem;transition:transform .2s ease-in-out}@media (prefers-reduced-motion:reduce){.accordion-button::after{transition:none}}.accordion-button:hover{z-index:2}.accordion-button:focus{z-index:3;border-color:#86b7fe;outline:0;box-shadow:0 0 0 .25rem rgba(13,110,253,.25)}.accordion-header{margin-bottom:0}.accordion-item{background-color:#fff;border:1px solid rgba(0,0,0,.125)}.accordion-item:first-of-type{border-top-right-radius:.25rem;border-top-left-radius:.25rem}.accordion-item:first-of-type .accordion-button{border-top-right-radius:calc(.25rem - 1px);border-top-left-radius:calc(.25rem - 1px)}.accordion-item:not(:first-of-type){border-top:0}.accordion-item:last-of-type{border-bottom-left-radius:.25rem;border-bottom-right-radius:.25rem}.accordion-item:last-of-type .accordion-button.collapsed{border-bottom-left-radius:calc(.25rem - 1px);border-bottom-right-radius:calc(.25rem - 1px)}.accordion-item:last-of-type .accordion-collapse{border-bottom-left-radius:.25rem;border-bottom-right-radius:.25rem}.accordion-body{padding:1rem 1.25rem}.accordion-flush .accordion-collapse{border-width:0}.accordion-flush .accordion-item{border-left:0;border-right:0;border-radius:0}.accordion-flush .accordion-item:first-child{border-top:0}.accordion-flush 
.accordion-item:last-child{border-bottom:0}.accordion-flush .accordion-item .accordion-button{border-radius:0}.breadcrumb{display:flex;flex-wrap:wrap;padding:0 0;margin-bottom:1rem;list-style:none}.breadcrumb-item+.breadcrumb-item{padding-right:.5rem}.breadcrumb-item+.breadcrumb-item::before{float:right;padding-left:.5rem;color:#6c757d;content:var(--bs-breadcrumb-divider, "/")}.breadcrumb-item.active{color:#6c757d}.pagination{display:flex;padding-right:0;list-style:none}.page-link{position:relative;display:block;color:#0d6efd;text-decoration:none;background-color:#fff;border:1px solid #dee2e6;transition:color .15s ease-in-out,background-color .15s ease-in-out,border-color .15s ease-in-out,box-shadow .15s ease-in-out}@media (prefers-reduced-motion:reduce){.page-link{transition:none}}.page-link:hover{z-index:2;color:#0a58ca;background-color:#e9ecef;border-color:#dee2e6}.page-link:focus{z-index:3;color:#0a58ca;background-color:#e9ecef;outline:0;box-shadow:0 0 0 .25rem rgba(13,110,253,.25)}.page-item:not(:first-child) .page-link{margin-right:-1px}.page-item.active .page-link{z-index:3;color:#fff;background-color:#0d6efd;border-color:#0d6efd}.page-item.disabled .page-link{color:#6c757d;pointer-events:none;background-color:#fff;border-color:#dee2e6}.page-link{padding:.375rem .75rem}.page-item:first-child .page-link{border-top-right-radius:.25rem;border-bottom-right-radius:.25rem}.page-item:last-child .page-link{border-top-left-radius:.25rem;border-bottom-left-radius:.25rem}.pagination-lg .page-link{padding:.75rem 1.5rem;font-size:1.25rem}.pagination-lg .page-item:first-child .page-link{border-top-right-radius:.3rem;border-bottom-right-radius:.3rem}.pagination-lg .page-item:last-child .page-link{border-top-left-radius:.3rem;border-bottom-left-radius:.3rem}.pagination-sm .page-link{padding:.25rem .5rem;font-size:.875rem}.pagination-sm .page-item:first-child .page-link{border-top-right-radius:.2rem;border-bottom-right-radius:.2rem}.pagination-sm .page-item:last-child .page-link{border-top-left-radius:.2rem;border-bottom-left-radius:.2rem}.badge{display:inline-block;padding:.35em .65em;font-size:.75em;font-weight:700;line-height:1;color:#fff;text-align:center;white-space:nowrap;vertical-align:baseline;border-radius:.25rem}.badge:empty{display:none}.btn .badge{position:relative;top:-1px}.alert{position:relative;padding:1rem 1rem;margin-bottom:1rem;border:1px solid transparent;border-radius:.25rem}.alert-heading{color:inherit}.alert-link{font-weight:700}.alert-dismissible{padding-left:3rem}.alert-dismissible .btn-close{position:absolute;top:0;left:0;z-index:2;padding:1.25rem 1rem}.alert-primary{color:#084298;background-color:#cfe2ff;border-color:#b6d4fe}.alert-primary .alert-link{color:#06357a}.alert-secondary{color:#41464b;background-color:#e2e3e5;border-color:#d3d6d8}.alert-secondary .alert-link{color:#34383c}.alert-success{color:#0f5132;background-color:#d1e7dd;border-color:#badbcc}.alert-success .alert-link{color:#0c4128}.alert-info{color:#055160;background-color:#cff4fc;border-color:#b6effb}.alert-info .alert-link{color:#04414d}.alert-warning{color:#664d03;background-color:#fff3cd;border-color:#ffecb5}.alert-warning .alert-link{color:#523e02}.alert-danger{color:#842029;background-color:#f8d7da;border-color:#f5c2c7}.alert-danger .alert-link{color:#6a1a21}.alert-light{color:#636464;background-color:#fefefe;border-color:#fdfdfe}.alert-light .alert-link{color:#4f5050}.alert-dark{color:#141619;background-color:#d3d3d4;border-color:#bcbebf}.alert-dark .alert-link{color:#101214}@-webkit-keyframes 
progress-bar-stripes{0%{background-position-x:1rem}}@keyframes progress-bar-stripes{0%{background-position-x:1rem}}.progress{display:flex;height:1rem;overflow:hidden;font-size:.75rem;background-color:#e9ecef;border-radius:.25rem}.progress-bar{display:flex;flex-direction:column;justify-content:center;overflow:hidden;color:#fff;text-align:center;white-space:nowrap;background-color:#0d6efd;transition:width .6s ease}@media (prefers-reduced-motion:reduce){.progress-bar{transition:none}}.progress-bar-striped{background-image:linear-gradient(-45deg,rgba(255,255,255,.15) 25%,transparent 25%,transparent 50%,rgba(255,255,255,.15) 50%,rgba(255,255,255,.15) 75%,transparent 75%,transparent);background-size:1rem 1rem}.progress-bar-animated{-webkit-animation:1s linear infinite progress-bar-stripes;animation:1s linear infinite progress-bar-stripes}@media (prefers-reduced-motion:reduce){.progress-bar-animated{-webkit-animation:none;animation:none}}.list-group{display:flex;flex-direction:column;padding-right:0;margin-bottom:0;border-radius:.25rem}.list-group-numbered{list-style-type:none;counter-reset:section}.list-group-numbered>li::before{content:counters(section, ".") ". ";counter-increment:section}.list-group-item-action{width:100%;color:#495057;text-align:inherit}.list-group-item-action:focus,.list-group-item-action:hover{z-index:1;color:#495057;text-decoration:none;background-color:#f8f9fa}.list-group-item-action:active{color:#212529;background-color:#e9ecef}.list-group-item{position:relative;display:block;padding:.5rem 1rem;color:#212529;text-decoration:none;background-color:#fff;border:1px solid rgba(0,0,0,.125)}.list-group-item:first-child{border-top-right-radius:inherit;border-top-left-radius:inherit}.list-group-item:last-child{border-bottom-left-radius:inherit;border-bottom-right-radius:inherit}.list-group-item.disabled,.list-group-item:disabled{color:#6c757d;pointer-events:none;background-color:#fff}.list-group-item.active{z-index:2;color:#fff;background-color:#0d6efd;border-color:#0d6efd}.list-group-item+.list-group-item{border-top-width:0}.list-group-item+.list-group-item.active{margin-top:-1px;border-top-width:1px}.list-group-horizontal{flex-direction:row}.list-group-horizontal>.list-group-item:first-child{border-bottom-right-radius:.25rem;border-top-left-radius:0}.list-group-horizontal>.list-group-item:last-child{border-top-left-radius:.25rem;border-bottom-right-radius:0}.list-group-horizontal>.list-group-item.active{margin-top:0}.list-group-horizontal>.list-group-item+.list-group-item{border-top-width:1px;border-right-width:0}.list-group-horizontal>.list-group-item+.list-group-item.active{margin-right:-1px;border-right-width:1px}@media (min-width:576px){.list-group-horizontal-sm{flex-direction:row}.list-group-horizontal-sm>.list-group-item:first-child{border-bottom-right-radius:.25rem;border-top-left-radius:0}.list-group-horizontal-sm>.list-group-item:last-child{border-top-left-radius:.25rem;border-bottom-right-radius:0}.list-group-horizontal-sm>.list-group-item.active{margin-top:0}.list-group-horizontal-sm>.list-group-item+.list-group-item{border-top-width:1px;border-right-width:0}.list-group-horizontal-sm>.list-group-item+.list-group-item.active{margin-right:-1px;border-right-width:1px}}@media 
(min-width:768px){.list-group-horizontal-md{flex-direction:row}.list-group-horizontal-md>.list-group-item:first-child{border-bottom-right-radius:.25rem;border-top-left-radius:0}.list-group-horizontal-md>.list-group-item:last-child{border-top-left-radius:.25rem;border-bottom-right-radius:0}.list-group-horizontal-md>.list-group-item.active{margin-top:0}.list-group-horizontal-md>.list-group-item+.list-group-item{border-top-width:1px;border-right-width:0}.list-group-horizontal-md>.list-group-item+.list-group-item.active{margin-right:-1px;border-right-width:1px}}@media (min-width:992px){.list-group-horizontal-lg{flex-direction:row}.list-group-horizontal-lg>.list-group-item:first-child{border-bottom-right-radius:.25rem;border-top-left-radius:0}.list-group-horizontal-lg>.list-group-item:last-child{border-top-left-radius:.25rem;border-bottom-right-radius:0}.list-group-horizontal-lg>.list-group-item.active{margin-top:0}.list-group-horizontal-lg>.list-group-item+.list-group-item{border-top-width:1px;border-right-width:0}.list-group-horizontal-lg>.list-group-item+.list-group-item.active{margin-right:-1px;border-right-width:1px}}@media (min-width:1200px){.list-group-horizontal-xl{flex-direction:row}.list-group-horizontal-xl>.list-group-item:first-child{border-bottom-right-radius:.25rem;border-top-left-radius:0}.list-group-horizontal-xl>.list-group-item:last-child{border-top-left-radius:.25rem;border-bottom-right-radius:0}.list-group-horizontal-xl>.list-group-item.active{margin-top:0}.list-group-horizontal-xl>.list-group-item+.list-group-item{border-top-width:1px;border-right-width:0}.list-group-horizontal-xl>.list-group-item+.list-group-item.active{margin-right:-1px;border-right-width:1px}}@media (min-width:1400px){.list-group-horizontal-xxl{flex-direction:row}.list-group-horizontal-xxl>.list-group-item:first-child{border-bottom-right-radius:.25rem;border-top-left-radius:0}.list-group-horizontal-xxl>.list-group-item:last-child{border-top-left-radius:.25rem;border-bottom-right-radius:0}.list-group-horizontal-xxl>.list-group-item.active{margin-top:0}.list-group-horizontal-xxl>.list-group-item+.list-group-item{border-top-width:1px;border-right-width:0}.list-group-horizontal-xxl>.list-group-item+.list-group-item.active{margin-right:-1px;border-right-width:1px}}.list-group-flush{border-radius:0}.list-group-flush>.list-group-item{border-width:0 0 
1px}.list-group-flush>.list-group-item:last-child{border-bottom-width:0}.list-group-item-primary{color:#084298;background-color:#cfe2ff}.list-group-item-primary.list-group-item-action:focus,.list-group-item-primary.list-group-item-action:hover{color:#084298;background-color:#bacbe6}.list-group-item-primary.list-group-item-action.active{color:#fff;background-color:#084298;border-color:#084298}.list-group-item-secondary{color:#41464b;background-color:#e2e3e5}.list-group-item-secondary.list-group-item-action:focus,.list-group-item-secondary.list-group-item-action:hover{color:#41464b;background-color:#cbccce}.list-group-item-secondary.list-group-item-action.active{color:#fff;background-color:#41464b;border-color:#41464b}.list-group-item-success{color:#0f5132;background-color:#d1e7dd}.list-group-item-success.list-group-item-action:focus,.list-group-item-success.list-group-item-action:hover{color:#0f5132;background-color:#bcd0c7}.list-group-item-success.list-group-item-action.active{color:#fff;background-color:#0f5132;border-color:#0f5132}.list-group-item-info{color:#055160;background-color:#cff4fc}.list-group-item-info.list-group-item-action:focus,.list-group-item-info.list-group-item-action:hover{color:#055160;background-color:#badce3}.list-group-item-info.list-group-item-action.active{color:#fff;background-color:#055160;border-color:#055160}.list-group-item-warning{color:#664d03;background-color:#fff3cd}.list-group-item-warning.list-group-item-action:focus,.list-group-item-warning.list-group-item-action:hover{color:#664d03;background-color:#e6dbb9}.list-group-item-warning.list-group-item-action.active{color:#fff;background-color:#664d03;border-color:#664d03}.list-group-item-danger{color:#842029;background-color:#f8d7da}.list-group-item-danger.list-group-item-action:focus,.list-group-item-danger.list-group-item-action:hover{color:#842029;background-color:#dfc2c4}.list-group-item-danger.list-group-item-action.active{color:#fff;background-color:#842029;border-color:#842029}.list-group-item-light{color:#636464;background-color:#fefefe}.list-group-item-light.list-group-item-action:focus,.list-group-item-light.list-group-item-action:hover{color:#636464;background-color:#e5e5e5}.list-group-item-light.list-group-item-action.active{color:#fff;background-color:#636464;border-color:#636464}.list-group-item-dark{color:#141619;background-color:#d3d3d4}.list-group-item-dark.list-group-item-action:focus,.list-group-item-dark.list-group-item-action:hover{color:#141619;background-color:#bebebf}.list-group-item-dark.list-group-item-action.active{color:#fff;background-color:#141619;border-color:#141619}.btn-close{box-sizing:content-box;width:1em;height:1em;padding:.25em .25em;color:#000;background:transparent url("data:image/svg+xml,%3csvg xmlns='http://www.w3.org/2000/svg' viewBox='0 0 16 16' fill='%23000'%3e%3cpath d='M.293.293a1 1 0 011.414 0L8 6.586 14.293.293a1 1 0 111.414 1.414L9.414 8l6.293 6.293a1 1 0 01-1.414 1.414L8 9.414l-6.293 6.293a1 1 0 01-1.414-1.414L6.586 8 .293 1.707a1 1 0 010-1.414z'/%3e%3c/svg%3e") center/1em auto no-repeat;border:0;border-radius:.25rem;opacity:.5}.btn-close:hover{color:#000;text-decoration:none;opacity:.75}.btn-close:focus{outline:0;box-shadow:0 0 0 .25rem rgba(13,110,253,.25);opacity:1}.btn-close.disabled,.btn-close:disabled{pointer-events:none;-webkit-user-select:none;-moz-user-select:none;user-select:none;opacity:.25}.btn-close-white{filter:invert(1) grayscale(100%) 
brightness(200%)}.toast{width:350px;max-width:100%;font-size:.875rem;pointer-events:auto;background-color:rgba(255,255,255,.85);background-clip:padding-box;border:1px solid rgba(0,0,0,.1);box-shadow:0 .5rem 1rem rgba(0,0,0,.15);border-radius:.25rem}.toast.showing{opacity:0}.toast:not(.show){display:none}.toast-container{width:-webkit-max-content;width:-moz-max-content;width:max-content;max-width:100%;pointer-events:none}.toast-container>:not(:last-child){margin-bottom:.75rem}.toast-header{display:flex;align-items:center;padding:.5rem .75rem;color:#6c757d;background-color:rgba(255,255,255,.85);background-clip:padding-box;border-bottom:1px solid rgba(0,0,0,.05);border-top-right-radius:calc(.25rem - 1px);border-top-left-radius:calc(.25rem - 1px)}.toast-header .btn-close{margin-left:-.375rem;margin-right:.75rem}.toast-body{padding:.75rem;word-wrap:break-word}.modal{position:fixed;top:0;right:0;z-index:1055;display:none;width:100%;height:100%;overflow-x:hidden;overflow-y:auto;outline:0}.modal-dialog{position:relative;width:auto;margin:.5rem;pointer-events:none}.modal.fade .modal-dialog{transition:transform .3s ease-out;transform:translate(0,-50px)}@media (prefers-reduced-motion:reduce){.modal.fade .modal-dialog{transition:none}}.modal.show .modal-dialog{transform:none}.modal.modal-static .modal-dialog{transform:scale(1.02)}.modal-dialog-scrollable{height:calc(100% - 1rem)}.modal-dialog-scrollable .modal-content{max-height:100%;overflow:hidden}.modal-dialog-scrollable .modal-body{overflow-y:auto}.modal-dialog-centered{display:flex;align-items:center;min-height:calc(100% - 1rem)}.modal-content{position:relative;display:flex;flex-direction:column;width:100%;pointer-events:auto;background-color:#fff;background-clip:padding-box;border:1px solid rgba(0,0,0,.2);border-radius:.3rem;outline:0}.modal-backdrop{position:fixed;top:0;right:0;z-index:1050;width:100vw;height:100vh;background-color:#000}.modal-backdrop.fade{opacity:0}.modal-backdrop.show{opacity:.5}.modal-header{display:flex;flex-shrink:0;align-items:center;justify-content:space-between;padding:1rem 1rem;border-bottom:1px solid #dee2e6;border-top-right-radius:calc(.3rem - 1px);border-top-left-radius:calc(.3rem - 1px)}.modal-header .btn-close{padding:.5rem .5rem;margin:-.5rem auto -.5rem -.5rem}.modal-title{margin-bottom:0;line-height:1.5}.modal-body{position:relative;flex:1 1 auto;padding:1rem}.modal-footer{display:flex;flex-wrap:wrap;flex-shrink:0;align-items:center;justify-content:flex-end;padding:.75rem;border-top:1px solid #dee2e6;border-bottom-left-radius:calc(.3rem - 1px);border-bottom-right-radius:calc(.3rem - 1px)}.modal-footer>*{margin:.25rem}@media (min-width:576px){.modal-dialog{max-width:500px;margin:1.75rem auto}.modal-dialog-scrollable{height:calc(100% - 3.5rem)}.modal-dialog-centered{min-height:calc(100% - 3.5rem)}.modal-sm{max-width:300px}}@media (min-width:992px){.modal-lg,.modal-xl{max-width:800px}}@media (min-width:1200px){.modal-xl{max-width:1140px}}.modal-fullscreen{width:100vw;max-width:none;height:100%;margin:0}.modal-fullscreen .modal-content{height:100%;border:0;border-radius:0}.modal-fullscreen .modal-header{border-radius:0}.modal-fullscreen .modal-body{overflow-y:auto}.modal-fullscreen .modal-footer{border-radius:0}@media (max-width:575.98px){.modal-fullscreen-sm-down{width:100vw;max-width:none;height:100%;margin:0}.modal-fullscreen-sm-down .modal-content{height:100%;border:0;border-radius:0}.modal-fullscreen-sm-down .modal-header{border-radius:0}.modal-fullscreen-sm-down 
.modal-body{overflow-y:auto}.modal-fullscreen-sm-down .modal-footer{border-radius:0}}@media (max-width:767.98px){.modal-fullscreen-md-down{width:100vw;max-width:none;height:100%;margin:0}.modal-fullscreen-md-down .modal-content{height:100%;border:0;border-radius:0}.modal-fullscreen-md-down .modal-header{border-radius:0}.modal-fullscreen-md-down .modal-body{overflow-y:auto}.modal-fullscreen-md-down .modal-footer{border-radius:0}}@media (max-width:991.98px){.modal-fullscreen-lg-down{width:100vw;max-width:none;height:100%;margin:0}.modal-fullscreen-lg-down .modal-content{height:100%;border:0;border-radius:0}.modal-fullscreen-lg-down .modal-header{border-radius:0}.modal-fullscreen-lg-down .modal-body{overflow-y:auto}.modal-fullscreen-lg-down .modal-footer{border-radius:0}}@media (max-width:1199.98px){.modal-fullscreen-xl-down{width:100vw;max-width:none;height:100%;margin:0}.modal-fullscreen-xl-down .modal-content{height:100%;border:0;border-radius:0}.modal-fullscreen-xl-down .modal-header{border-radius:0}.modal-fullscreen-xl-down .modal-body{overflow-y:auto}.modal-fullscreen-xl-down .modal-footer{border-radius:0}}@media (max-width:1399.98px){.modal-fullscreen-xxl-down{width:100vw;max-width:none;height:100%;margin:0}.modal-fullscreen-xxl-down .modal-content{height:100%;border:0;border-radius:0}.modal-fullscreen-xxl-down .modal-header{border-radius:0}.modal-fullscreen-xxl-down .modal-body{overflow-y:auto}.modal-fullscreen-xxl-down .modal-footer{border-radius:0}}.tooltip{position:absolute;z-index:1080;display:block;margin:0;font-family:var(--bs-font-sans-serif);font-style:normal;font-weight:400;line-height:1.5;text-align:right;text-align:start;text-decoration:none;text-shadow:none;text-transform:none;letter-spacing:normal;word-break:normal;word-spacing:normal;white-space:normal;line-break:auto;font-size:.875rem;word-wrap:break-word;opacity:0}.tooltip.show{opacity:.9}.tooltip .tooltip-arrow{position:absolute;display:block;width:.8rem;height:.4rem}.tooltip .tooltip-arrow::before{position:absolute;content:"";border-color:transparent;border-style:solid}.bs-tooltip-auto[data-popper-placement^=top],.bs-tooltip-top{padding:.4rem 0}.bs-tooltip-auto[data-popper-placement^=top] .tooltip-arrow,.bs-tooltip-top .tooltip-arrow{bottom:0}.bs-tooltip-auto[data-popper-placement^=top] .tooltip-arrow::before,.bs-tooltip-top .tooltip-arrow::before{top:-1px;border-width:.4rem .4rem 0;border-top-color:#000}.bs-tooltip-auto[data-popper-placement^=right],.bs-tooltip-end{padding:0 .4rem}.bs-tooltip-auto[data-popper-placement^=right] .tooltip-arrow,.bs-tooltip-end .tooltip-arrow{right:0;width:.4rem;height:.8rem}.bs-tooltip-auto[data-popper-placement^=right] .tooltip-arrow::before,.bs-tooltip-end .tooltip-arrow::before{left:-1px;border-width:.4rem 0 .4rem .4rem;border-left-color:#000}.bs-tooltip-auto[data-popper-placement^=bottom],.bs-tooltip-bottom{padding:.4rem 0}.bs-tooltip-auto[data-popper-placement^=bottom] .tooltip-arrow,.bs-tooltip-bottom .tooltip-arrow{top:0}.bs-tooltip-auto[data-popper-placement^=bottom] .tooltip-arrow::before,.bs-tooltip-bottom .tooltip-arrow::before{bottom:-1px;border-width:0 .4rem .4rem;border-bottom-color:#000}.bs-tooltip-auto[data-popper-placement^=left],.bs-tooltip-start{padding:0 .4rem}.bs-tooltip-auto[data-popper-placement^=left] .tooltip-arrow,.bs-tooltip-start .tooltip-arrow{left:0;width:.4rem;height:.8rem}.bs-tooltip-auto[data-popper-placement^=left] .tooltip-arrow::before,.bs-tooltip-start .tooltip-arrow::before{right:-1px;border-width:.4rem .4rem .4rem 
0;border-right-color:#000}.tooltip-inner{max-width:200px;padding:.25rem .5rem;color:#fff;text-align:center;background-color:#000;border-radius:.25rem}.popover{position:absolute;top:0;left:0;z-index:1070;display:block;max-width:276px;font-family:var(--bs-font-sans-serif);font-style:normal;font-weight:400;line-height:1.5;text-align:right;text-align:start;text-decoration:none;text-shadow:none;text-transform:none;letter-spacing:normal;word-break:normal;word-spacing:normal;white-space:normal;line-break:auto;font-size:.875rem;word-wrap:break-word;background-color:#fff;background-clip:padding-box;border:1px solid rgba(0,0,0,.2);border-radius:.3rem}.popover .popover-arrow{position:absolute;display:block;width:1rem;height:.5rem}.popover .popover-arrow::after,.popover .popover-arrow::before{position:absolute;display:block;content:"";border-color:transparent;border-style:solid}.bs-popover-auto[data-popper-placement^=top]>.popover-arrow,.bs-popover-top>.popover-arrow{bottom:calc(-.5rem - 1px)}.bs-popover-auto[data-popper-placement^=top]>.popover-arrow::before,.bs-popover-top>.popover-arrow::before{bottom:0;border-width:.5rem .5rem 0;border-top-color:rgba(0,0,0,.25)}.bs-popover-auto[data-popper-placement^=top]>.popover-arrow::after,.bs-popover-top>.popover-arrow::after{bottom:1px;border-width:.5rem .5rem 0;border-top-color:#fff}.bs-popover-auto[data-popper-placement^=right]>.popover-arrow,.bs-popover-end>.popover-arrow{right:calc(-.5rem - 1px);width:.5rem;height:1rem}.bs-popover-auto[data-popper-placement^=right]>.popover-arrow::before,.bs-popover-end>.popover-arrow::before{right:0;border-width:.5rem 0 .5rem .5rem;border-left-color:rgba(0,0,0,.25)}.bs-popover-auto[data-popper-placement^=right]>.popover-arrow::after,.bs-popover-end>.popover-arrow::after{right:1px;border-width:.5rem 0 .5rem .5rem;border-left-color:#fff}.bs-popover-auto[data-popper-placement^=bottom]>.popover-arrow,.bs-popover-bottom>.popover-arrow{top:calc(-.5rem - 1px)}.bs-popover-auto[data-popper-placement^=bottom]>.popover-arrow::before,.bs-popover-bottom>.popover-arrow::before{top:0;border-width:0 .5rem .5rem .5rem;border-bottom-color:rgba(0,0,0,.25)}.bs-popover-auto[data-popper-placement^=bottom]>.popover-arrow::after,.bs-popover-bottom>.popover-arrow::after{top:1px;border-width:0 .5rem .5rem .5rem;border-bottom-color:#fff}.bs-popover-auto[data-popper-placement^=bottom] .popover-header::before,.bs-popover-bottom .popover-header::before{position:absolute;top:0;right:50%;display:block;width:1rem;margin-right:-.5rem;content:"";border-bottom:1px solid #f0f0f0}.bs-popover-auto[data-popper-placement^=left]>.popover-arrow,.bs-popover-start>.popover-arrow{left:calc(-.5rem - 1px);width:.5rem;height:1rem}.bs-popover-auto[data-popper-placement^=left]>.popover-arrow::before,.bs-popover-start>.popover-arrow::before{left:0;border-width:.5rem .5rem .5rem 0;border-right-color:rgba(0,0,0,.25)}.bs-popover-auto[data-popper-placement^=left]>.popover-arrow::after,.bs-popover-start>.popover-arrow::after{left:1px;border-width:.5rem .5rem .5rem 0;border-right-color:#fff}.popover-header{padding:.5rem 1rem;margin-bottom:0;font-size:1rem;background-color:#f0f0f0;border-bottom:1px solid rgba(0,0,0,.2);border-top-right-radius:calc(.3rem - 1px);border-top-left-radius:calc(.3rem - 1px)}.popover-header:empty{display:none}.popover-body{padding:1rem 
1rem;color:#212529}.carousel{position:relative}.carousel.pointer-event{touch-action:pan-y}.carousel-inner{position:relative;width:100%;overflow:hidden}.carousel-inner::after{display:block;clear:both;content:""}.carousel-item{position:relative;display:none;float:right;width:100%;margin-left:-100%;-webkit-backface-visibility:hidden;backface-visibility:hidden;transition:transform .6s ease-in-out}@media (prefers-reduced-motion:reduce){.carousel-item{transition:none}}.carousel-item-next,.carousel-item-prev,.carousel-item.active{display:block}.active.carousel-item-end,.carousel-item-next:not(.carousel-item-start){transform:translateX(100%)}.active.carousel-item-start,.carousel-item-prev:not(.carousel-item-end){transform:translateX(-100%)}.carousel-fade .carousel-item{opacity:0;transition-property:opacity;transform:none}.carousel-fade .carousel-item-next.carousel-item-start,.carousel-fade .carousel-item-prev.carousel-item-end,.carousel-fade .carousel-item.active{z-index:1;opacity:1}.carousel-fade .active.carousel-item-end,.carousel-fade .active.carousel-item-start{z-index:0;opacity:0;transition:opacity 0s .6s}@media (prefers-reduced-motion:reduce){.carousel-fade .active.carousel-item-end,.carousel-fade .active.carousel-item-start{transition:none}}.carousel-control-next,.carousel-control-prev{position:absolute;top:0;bottom:0;z-index:1;display:flex;align-items:center;justify-content:center;width:15%;padding:0;color:#fff;text-align:center;background:0 0;border:0;opacity:.5;transition:opacity .15s ease}@media (prefers-reduced-motion:reduce){.carousel-control-next,.carousel-control-prev{transition:none}}.carousel-control-next:focus,.carousel-control-next:hover,.carousel-control-prev:focus,.carousel-control-prev:hover{color:#fff;text-decoration:none;outline:0;opacity:.9}.carousel-control-prev{right:0}.carousel-control-next{left:0}.carousel-control-next-icon,.carousel-control-prev-icon{display:inline-block;width:2rem;height:2rem;background-repeat:no-repeat;background-position:50%;background-size:100% 100%}.carousel-control-next-icon{background-image:url("data:image/svg+xml,%3csvg xmlns='http://www.w3.org/2000/svg' viewBox='0 0 16 16' fill='%23fff'%3e%3cpath d='M11.354 1.646a.5.5 0 0 1 0 .708L5.707 8l5.647 5.646a.5.5 0 0 1-.708.708l-6-6a.5.5 0 0 1 0-.708l6-6a.5.5 0 0 1 .708 0z'/%3e%3c/svg%3e")}.carousel-control-prev-icon{background-image:url("data:image/svg+xml,%3csvg xmlns='http://www.w3.org/2000/svg' viewBox='0 0 16 16' fill='%23fff'%3e%3cpath d='M4.646 1.646a.5.5 0 0 1 .708 0l6 6a.5.5 0 0 1 0 .708l-6 6a.5.5 0 0 1-.708-.708L10.293 8 4.646 2.354a.5.5 0 0 1 0-.708z'/%3e%3c/svg%3e")}.carousel-indicators{position:absolute;left:0;bottom:0;right:0;z-index:2;display:flex;justify-content:center;padding:0;margin-left:15%;margin-bottom:1rem;margin-right:15%;list-style:none}.carousel-indicators [data-bs-target]{box-sizing:content-box;flex:0 1 auto;width:30px;height:3px;padding:0;margin-left:3px;margin-right:3px;text-indent:-999px;cursor:pointer;background-color:#fff;background-clip:padding-box;border:0;border-top:10px solid transparent;border-bottom:10px solid transparent;opacity:.5;transition:opacity .6s ease}@media (prefers-reduced-motion:reduce){.carousel-indicators [data-bs-target]{transition:none}}.carousel-indicators .active{opacity:1}.carousel-caption{position:absolute;left:15%;bottom:1.25rem;right:15%;padding-top:1.25rem;padding-bottom:1.25rem;color:#fff;text-align:center}.carousel-dark .carousel-control-next-icon,.carousel-dark .carousel-control-prev-icon{filter:invert(1) grayscale(100)}.carousel-dark 
.carousel-indicators [data-bs-target]{background-color:#000}.carousel-dark .carousel-caption{color:#000}@-webkit-keyframes spinner-border{to{transform:rotate(360deg)}}@keyframes spinner-border{to{transform:rotate(360deg)}}.spinner-border{display:inline-block;width:2rem;height:2rem;vertical-align:-.125em;border:.25em solid currentColor;border-left-color:transparent;border-radius:50%;-webkit-animation:.75s linear infinite spinner-border;animation:.75s linear infinite spinner-border}.spinner-border-sm{width:1rem;height:1rem;border-width:.2em}@-webkit-keyframes spinner-grow{0%{transform:scale(0)}50%{opacity:1;transform:none}}@keyframes spinner-grow{0%{transform:scale(0)}50%{opacity:1;transform:none}}.spinner-grow{display:inline-block;width:2rem;height:2rem;vertical-align:-.125em;background-color:currentColor;border-radius:50%;opacity:0;-webkit-animation:.75s linear infinite spinner-grow;animation:.75s linear infinite spinner-grow}.spinner-grow-sm{width:1rem;height:1rem}@media (prefers-reduced-motion:reduce){.spinner-border,.spinner-grow{-webkit-animation-duration:1.5s;animation-duration:1.5s}}.offcanvas{position:fixed;bottom:0;z-index:1045;display:flex;flex-direction:column;max-width:100%;visibility:hidden;background-color:#fff;background-clip:padding-box;outline:0;transition:transform .3s ease-in-out}@media (prefers-reduced-motion:reduce){.offcanvas{transition:none}}.offcanvas-backdrop{position:fixed;top:0;right:0;z-index:1040;width:100vw;height:100vh;background-color:#000}.offcanvas-backdrop.fade{opacity:0}.offcanvas-backdrop.show{opacity:.5}.offcanvas-header{display:flex;align-items:center;justify-content:space-between;padding:1rem 1rem}.offcanvas-header .btn-close{padding:.5rem .5rem;margin-top:-.5rem;margin-left:-.5rem;margin-bottom:-.5rem}.offcanvas-title{margin-bottom:0;line-height:1.5}.offcanvas-body{flex-grow:1;padding:1rem 1rem;overflow-y:auto}.offcanvas-start{top:0;right:0;width:400px;border-left:1px solid rgba(0,0,0,.2);transform:translateX(100%)}.offcanvas-end{top:0;left:0;width:400px;border-right:1px solid rgba(0,0,0,.2);transform:translateX(-100%)}.offcanvas-top{top:0;left:0;right:0;height:30vh;max-height:100%;border-bottom:1px solid rgba(0,0,0,.2);transform:translateY(-100%)}.offcanvas-bottom{left:0;right:0;height:30vh;max-height:100%;border-top:1px solid rgba(0,0,0,.2);transform:translateY(100%)}.offcanvas.show{transform:none}.placeholder{display:inline-block;min-height:1em;vertical-align:middle;cursor:wait;background-color:currentColor;opacity:.5}.placeholder.btn::before{display:inline-block;content:""}.placeholder-xs{min-height:.6em}.placeholder-sm{min-height:.8em}.placeholder-lg{min-height:1.2em}.placeholder-glow .placeholder{-webkit-animation:placeholder-glow 2s ease-in-out infinite;animation:placeholder-glow 2s ease-in-out infinite}@-webkit-keyframes placeholder-glow{50%{opacity:.2}}@keyframes placeholder-glow{50%{opacity:.2}}.placeholder-wave{-webkit-mask-image:linear-gradient(130deg,#000 55%,rgba(0,0,0,0.8) 75%,#000 95%);mask-image:linear-gradient(130deg,#000 55%,rgba(0,0,0,0.8) 75%,#000 95%);-webkit-mask-size:200% 100%;mask-size:200% 100%;-webkit-animation:placeholder-wave 2s linear infinite;animation:placeholder-wave 2s linear infinite}@-webkit-keyframes placeholder-wave{100%{-webkit-mask-position:-200% 0%;mask-position:-200% 0%}}@keyframes placeholder-wave{100%{-webkit-mask-position:-200% 0%;mask-position:-200% 
0%}}.clearfix::after{display:block;clear:both;content:""}.link-primary{color:#0d6efd}.link-primary:focus,.link-primary:hover{color:#0a58ca}.link-secondary{color:#6c757d}.link-secondary:focus,.link-secondary:hover{color:#565e64}.link-success{color:#198754}.link-success:focus,.link-success:hover{color:#146c43}.link-info{color:#0dcaf0}.link-info:focus,.link-info:hover{color:#3dd5f3}.link-warning{color:#ffc107}.link-warning:focus,.link-warning:hover{color:#ffcd39}.link-danger{color:#dc3545}.link-danger:focus,.link-danger:hover{color:#b02a37}.link-light{color:#f8f9fa}.link-light:focus,.link-light:hover{color:#f9fafb}.link-dark{color:#212529}.link-dark:focus,.link-dark:hover{color:#1a1e21}.ratio{position:relative;width:100%}.ratio::before{display:block;padding-top:var(--bs-aspect-ratio);content:""}.ratio>*{position:absolute;top:0;right:0;width:100%;height:100%}.ratio-1x1{--bs-aspect-ratio:100%}.ratio-4x3{--bs-aspect-ratio:75%}.ratio-16x9{--bs-aspect-ratio:56.25%}.ratio-21x9{--bs-aspect-ratio:42.8571428571%}.fixed-top{position:fixed;top:0;left:0;right:0;z-index:1030}.fixed-bottom{position:fixed;left:0;bottom:0;right:0;z-index:1030}.sticky-top{position:-webkit-sticky;position:sticky;top:0;z-index:1020}@media (min-width:576px){.sticky-sm-top{position:-webkit-sticky;position:sticky;top:0;z-index:1020}}@media (min-width:768px){.sticky-md-top{position:-webkit-sticky;position:sticky;top:0;z-index:1020}}@media (min-width:992px){.sticky-lg-top{position:-webkit-sticky;position:sticky;top:0;z-index:1020}}@media (min-width:1200px){.sticky-xl-top{position:-webkit-sticky;position:sticky;top:0;z-index:1020}}@media (min-width:1400px){.sticky-xxl-top{position:-webkit-sticky;position:sticky;top:0;z-index:1020}}.hstack{display:flex;flex-direction:row;align-items:center;align-self:stretch}.vstack{display:flex;flex:1 1 auto;flex-direction:column;align-self:stretch}.visually-hidden,.visually-hidden-focusable:not(:focus):not(:focus-within){position:absolute!important;width:1px!important;height:1px!important;padding:0!important;margin:-1px!important;overflow:hidden!important;clip:rect(0,0,0,0)!important;white-space:nowrap!important;border:0!important}.stretched-link::after{position:absolute;top:0;left:0;bottom:0;right:0;z-index:1;content:""}.text-truncate{overflow:hidden;text-overflow:ellipsis;white-space:nowrap}.vr{display:inline-block;align-self:stretch;width:1px;min-height:1em;background-color:currentColor;opacity:.25}.align-baseline{vertical-align:baseline!important}.align-top{vertical-align:top!important}.align-middle{vertical-align:middle!important}.align-bottom{vertical-align:bottom!important}.align-text-bottom{vertical-align:text-bottom!important}.align-text-top{vertical-align:text-top!important}.float-start{float:right!important}.float-end{float:left!important}.float-none{float:none!important}.opacity-0{opacity:0!important}.opacity-25{opacity:.25!important}.opacity-50{opacity:.5!important}.opacity-75{opacity:.75!important}.opacity-100{opacity:1!important}.overflow-auto{overflow:auto!important}.overflow-hidden{overflow:hidden!important}.overflow-visible{overflow:visible!important}.overflow-scroll{overflow:scroll!important}.d-inline{display:inline!important}.d-inline-block{display:inline-block!important}.d-block{display:block!important}.d-grid{display:grid!important}.d-table{display:table!important}.d-table-row{display:table-row!important}.d-table-cell{display:table-cell!important}.d-flex{display:flex!important}.d-inline-flex{display:inline-flex!important}.d-none{display:none!important}.shadow{box-shadow:0 .5rem 
1rem rgba(0,0,0,.15)!important}.shadow-sm{box-shadow:0 .125rem .25rem rgba(0,0,0,.075)!important}.shadow-lg{box-shadow:0 1rem 3rem rgba(0,0,0,.175)!important}.shadow-none{box-shadow:none!important}.position-static{position:static!important}.position-relative{position:relative!important}.position-absolute{position:absolute!important}.position-fixed{position:fixed!important}.position-sticky{position:-webkit-sticky!important;position:sticky!important}.top-0{top:0!important}.top-50{top:50%!important}.top-100{top:100%!important}.bottom-0{bottom:0!important}.bottom-50{bottom:50%!important}.bottom-100{bottom:100%!important}.start-0{right:0!important}.start-50{right:50%!important}.start-100{right:100%!important}.end-0{left:0!important}.end-50{left:50%!important}.end-100{left:100%!important}.translate-middle{transform:translate(50%,-50%)!important}.translate-middle-x{transform:translateX(50%)!important}.translate-middle-y{transform:translateY(-50%)!important}.border{border:1px solid #dee2e6!important}.border-0{border:0!important}.border-top{border-top:1px solid #dee2e6!important}.border-top-0{border-top:0!important}.border-end{border-left:1px solid #dee2e6!important}.border-end-0{border-left:0!important}.border-bottom{border-bottom:1px solid #dee2e6!important}.border-bottom-0{border-bottom:0!important}.border-start{border-right:1px solid #dee2e6!important}.border-start-0{border-right:0!important}.border-primary{border-color:#0d6efd!important}.border-secondary{border-color:#6c757d!important}.border-success{border-color:#198754!important}.border-info{border-color:#0dcaf0!important}.border-warning{border-color:#ffc107!important}.border-danger{border-color:#dc3545!important}.border-light{border-color:#f8f9fa!important}.border-dark{border-color:#212529!important}.border-white{border-color:#fff!important}.border-1{border-width:1px!important}.border-2{border-width:2px!important}.border-3{border-width:3px!important}.border-4{border-width:4px!important}.border-5{border-width:5px!important}.w-25{width:25%!important}.w-50{width:50%!important}.w-75{width:75%!important}.w-100{width:100%!important}.w-auto{width:auto!important}.mw-100{max-width:100%!important}.vw-100{width:100vw!important}.min-vw-100{min-width:100vw!important}.h-25{height:25%!important}.h-50{height:50%!important}.h-75{height:75%!important}.h-100{height:100%!important}.h-auto{height:auto!important}.mh-100{max-height:100%!important}.vh-100{height:100vh!important}.min-vh-100{min-height:100vh!important}.flex-fill{flex:1 1 
auto!important}.flex-row{flex-direction:row!important}.flex-column{flex-direction:column!important}.flex-row-reverse{flex-direction:row-reverse!important}.flex-column-reverse{flex-direction:column-reverse!important}.flex-grow-0{flex-grow:0!important}.flex-grow-1{flex-grow:1!important}.flex-shrink-0{flex-shrink:0!important}.flex-shrink-1{flex-shrink:1!important}.flex-wrap{flex-wrap:wrap!important}.flex-nowrap{flex-wrap:nowrap!important}.flex-wrap-reverse{flex-wrap:wrap-reverse!important}.gap-0{gap:0!important}.gap-1{gap:.25rem!important}.gap-2{gap:.5rem!important}.gap-3{gap:1rem!important}.gap-4{gap:1.5rem!important}.gap-5{gap:3rem!important}.justify-content-start{justify-content:flex-start!important}.justify-content-end{justify-content:flex-end!important}.justify-content-center{justify-content:center!important}.justify-content-between{justify-content:space-between!important}.justify-content-around{justify-content:space-around!important}.justify-content-evenly{justify-content:space-evenly!important}.align-items-start{align-items:flex-start!important}.align-items-end{align-items:flex-end!important}.align-items-center{align-items:center!important}.align-items-baseline{align-items:baseline!important}.align-items-stretch{align-items:stretch!important}.align-content-start{align-content:flex-start!important}.align-content-end{align-content:flex-end!important}.align-content-center{align-content:center!important}.align-content-between{align-content:space-between!important}.align-content-around{align-content:space-around!important}.align-content-stretch{align-content:stretch!important}.align-self-auto{align-self:auto!important}.align-self-start{align-self:flex-start!important}.align-self-end{align-self:flex-end!important}.align-self-center{align-self:center!important}.align-self-baseline{align-self:baseline!important}.align-self-stretch{align-self:stretch!important}.order-first{order:-1!important}.order-0{order:0!important}.order-1{order:1!important}.order-2{order:2!important}.order-3{order:3!important}.order-4{order:4!important}.order-5{order:5!important}.order-last{order:6!important}.m-0{margin:0!important}.m-1{margin:.25rem!important}.m-2{margin:.5rem!important}.m-3{margin:1rem!important}.m-4{margin:1.5rem!important}.m-5{margin:3rem!important}.m-auto{margin:auto!important}.mx-0{margin-left:0!important;margin-right:0!important}.mx-1{margin-left:.25rem!important;margin-right:.25rem!important}.mx-2{margin-left:.5rem!important;margin-right:.5rem!important}.mx-3{margin-left:1rem!important;margin-right:1rem!important}.mx-4{margin-left:1.5rem!important;margin-right:1.5rem!important}.mx-5{margin-left:3rem!important;margin-right:3rem!important}.mx-auto{margin-left:auto!important;margin-right:auto!important}.my-0{margin-top:0!important;margin-bottom:0!important}.my-1{margin-top:.25rem!important;margin-bottom:.25rem!important}.my-2{margin-top:.5rem!important;margin-bottom:.5rem!important}.my-3{margin-top:1rem!important;margin-bottom:1rem!important}.my-4{margin-top:1.5rem!important;margin-bottom:1.5rem!important}.my-5{margin-top:3rem!important;margin-bottom:3rem!important}.my-auto{margin-top:auto!important;margin-bottom:auto!important}.mt-0{margin-top:0!important}.mt-1{margin-top:.25rem!important}.mt-2{margin-top:.5rem!important}.mt-3{margin-top:1rem!important}.mt-4{margin-top:1.5rem!important}.mt-5{margin-top:3rem!important}.mt-auto{margin-top:auto!important}.me-0{margin-left:0!important}.me-1{margin-left:.25rem!important}.me-2{margin-left:.5rem!important}.me-3{margin-left:1rem!important}.me-4{margin-left:1.
5rem!important}.me-5{margin-left:3rem!important}.me-auto{margin-left:auto!important}.mb-0{margin-bottom:0!important}.mb-1{margin-bottom:.25rem!important}.mb-2{margin-bottom:.5rem!important}.mb-3{margin-bottom:1rem!important}.mb-4{margin-bottom:1.5rem!important}.mb-5{margin-bottom:3rem!important}.mb-auto{margin-bottom:auto!important}.ms-0{margin-right:0!important}.ms-1{margin-right:.25rem!important}.ms-2{margin-right:.5rem!important}.ms-3{margin-right:1rem!important}.ms-4{margin-right:1.5rem!important}.ms-5{margin-right:3rem!important}.ms-auto{margin-right:auto!important}.p-0{padding:0!important}.p-1{padding:.25rem!important}.p-2{padding:.5rem!important}.p-3{padding:1rem!important}.p-4{padding:1.5rem!important}.p-5{padding:3rem!important}.px-0{padding-left:0!important;padding-right:0!important}.px-1{padding-left:.25rem!important;padding-right:.25rem!important}.px-2{padding-left:.5rem!important;padding-right:.5rem!important}.px-3{padding-left:1rem!important;padding-right:1rem!important}.px-4{padding-left:1.5rem!important;padding-right:1.5rem!important}.px-5{padding-left:3rem!important;padding-right:3rem!important}.py-0{padding-top:0!important;padding-bottom:0!important}.py-1{padding-top:.25rem!important;padding-bottom:.25rem!important}.py-2{padding-top:.5rem!important;padding-bottom:.5rem!important}.py-3{padding-top:1rem!important;padding-bottom:1rem!important}.py-4{padding-top:1.5rem!important;padding-bottom:1.5rem!important}.py-5{padding-top:3rem!important;padding-bottom:3rem!important}.pt-0{padding-top:0!important}.pt-1{padding-top:.25rem!important}.pt-2{padding-top:.5rem!important}.pt-3{padding-top:1rem!important}.pt-4{padding-top:1.5rem!important}.pt-5{padding-top:3rem!important}.pe-0{padding-left:0!important}.pe-1{padding-left:.25rem!important}.pe-2{padding-left:.5rem!important}.pe-3{padding-left:1rem!important}.pe-4{padding-left:1.5rem!important}.pe-5{padding-left:3rem!important}.pb-0{padding-bottom:0!important}.pb-1{padding-bottom:.25rem!important}.pb-2{padding-bottom:.5rem!important}.pb-3{padding-bottom:1rem!important}.pb-4{padding-bottom:1.5rem!important}.pb-5{padding-bottom:3rem!important}.ps-0{padding-right:0!important}.ps-1{padding-right:.25rem!important}.ps-2{padding-right:.5rem!important}.ps-3{padding-right:1rem!important}.ps-4{padding-right:1.5rem!important}.ps-5{padding-right:3rem!important}.font-monospace{font-family:var(--bs-font-monospace)!important}.fs-1{font-size:calc(1.375rem + 1.5vw)!important}.fs-2{font-size:calc(1.325rem + .9vw)!important}.fs-3{font-size:calc(1.3rem + .6vw)!important}.fs-4{font-size:calc(1.275rem + 
.3vw)!important}.fs-5{font-size:1.25rem!important}.fs-6{font-size:1rem!important}.fst-italic{font-style:italic!important}.fst-normal{font-style:normal!important}.fw-light{font-weight:300!important}.fw-lighter{font-weight:lighter!important}.fw-normal{font-weight:400!important}.fw-bold{font-weight:700!important}.fw-bolder{font-weight:bolder!important}.lh-1{line-height:1!important}.lh-sm{line-height:1.25!important}.lh-base{line-height:1.5!important}.lh-lg{line-height:2!important}.text-start{text-align:right!important}.text-end{text-align:left!important}.text-center{text-align:center!important}.text-decoration-none{text-decoration:none!important}.text-decoration-underline{text-decoration:underline!important}.text-decoration-line-through{text-decoration:line-through!important}.text-lowercase{text-transform:lowercase!important}.text-uppercase{text-transform:uppercase!important}.text-capitalize{text-transform:capitalize!important}.text-wrap{white-space:normal!important}.text-nowrap{white-space:nowrap!important}.text-primary{--bs-text-opacity:1;color:rgba(var(--bs-primary-rgb),var(--bs-text-opacity))!important}.text-secondary{--bs-text-opacity:1;color:rgba(var(--bs-secondary-rgb),var(--bs-text-opacity))!important}.text-success{--bs-text-opacity:1;color:rgba(var(--bs-success-rgb),var(--bs-text-opacity))!important}.text-info{--bs-text-opacity:1;color:rgba(var(--bs-info-rgb),var(--bs-text-opacity))!important}.text-warning{--bs-text-opacity:1;color:rgba(var(--bs-warning-rgb),var(--bs-text-opacity))!important}.text-danger{--bs-text-opacity:1;color:rgba(var(--bs-danger-rgb),var(--bs-text-opacity))!important}.text-light{--bs-text-opacity:1;color:rgba(var(--bs-light-rgb),var(--bs-text-opacity))!important}.text-dark{--bs-text-opacity:1;color:rgba(var(--bs-dark-rgb),var(--bs-text-opacity))!important}.text-black{--bs-text-opacity:1;color:rgba(var(--bs-black-rgb),var(--bs-text-opacity))!important}.text-white{--bs-text-opacity:1;color:rgba(var(--bs-white-rgb),var(--bs-text-opacity))!important}.text-body{--bs-text-opacity:1;color:rgba(var(--bs-body-color-rgb),var(--bs-text-opacity))!important}.text-muted{--bs-text-opacity:1;color:#6c757d!important}.text-black-50{--bs-text-opacity:1;color:rgba(0,0,0,.5)!important}.text-white-50{--bs-text-opacity:1;color:rgba(255,255,255,.5)!important}.text-reset{--bs-text-opacity:1;color:inherit!important}.text-opacity-25{--bs-text-opacity:0.25}.text-opacity-50{--bs-text-opacity:0.5}.text-opacity-75{--bs-text-opacity:0.75}.text-opacity-100{--bs-text-opacity:1}.bg-primary{--bs-bg-opacity:1;background-color:rgba(var(--bs-primary-rgb),var(--bs-bg-opacity))!important}.bg-secondary{--bs-bg-opacity:1;background-color:rgba(var(--bs-secondary-rgb),var(--bs-bg-opacity))!important}.bg-success{--bs-bg-opacity:1;background-color:rgba(var(--bs-success-rgb),var(--bs-bg-opacity))!important}.bg-info{--bs-bg-opacity:1;background-color:rgba(var(--bs-info-rgb),var(--bs-bg-opacity))!important}.bg-warning{--bs-bg-opacity:1;background-color:rgba(var(--bs-warning-rgb),var(--bs-bg-opacity))!important}.bg-danger{--bs-bg-opacity:1;background-color:rgba(var(--bs-danger-rgb),var(--bs-bg-opacity))!important}.bg-light{--bs-bg-opacity:1;background-color:rgba(var(--bs-light-rgb),var(--bs-bg-opacity))!important}.bg-dark{--bs-bg-opacity:1;background-color:rgba(var(--bs-dark-rgb),var(--bs-bg-opacity))!important}.bg-black{--bs-bg-opacity:1;background-color:rgba(var(--bs-black-rgb),var(--bs-bg-opacity))!important}.bg-white{--bs-bg-opacity:1;background-color:rgba(var(--bs-white-rgb),var(--bs-bg-opacity))!important}.b
g-body{--bs-bg-opacity:1;background-color:rgba(var(--bs-body-bg-rgb),var(--bs-bg-opacity))!important}.bg-transparent{--bs-bg-opacity:1;background-color:transparent!important}.bg-opacity-10{--bs-bg-opacity:0.1}.bg-opacity-25{--bs-bg-opacity:0.25}.bg-opacity-50{--bs-bg-opacity:0.5}.bg-opacity-75{--bs-bg-opacity:0.75}.bg-opacity-100{--bs-bg-opacity:1}.bg-gradient{background-image:var(--bs-gradient)!important}.user-select-all{-webkit-user-select:all!important;-moz-user-select:all!important;user-select:all!important}.user-select-auto{-webkit-user-select:auto!important;-moz-user-select:auto!important;user-select:auto!important}.user-select-none{-webkit-user-select:none!important;-moz-user-select:none!important;user-select:none!important}.pe-none{pointer-events:none!important}.pe-auto{pointer-events:auto!important}.rounded{border-radius:.25rem!important}.rounded-0{border-radius:0!important}.rounded-1{border-radius:.2rem!important}.rounded-2{border-radius:.25rem!important}.rounded-3{border-radius:.3rem!important}.rounded-circle{border-radius:50%!important}.rounded-pill{border-radius:50rem!important}.rounded-top{border-top-right-radius:.25rem!important;border-top-left-radius:.25rem!important}.rounded-end{border-top-left-radius:.25rem!important;border-bottom-left-radius:.25rem!important}.rounded-bottom{border-bottom-left-radius:.25rem!important;border-bottom-right-radius:.25rem!important}.rounded-start{border-bottom-right-radius:.25rem!important;border-top-right-radius:.25rem!important}.visible{visibility:visible!important}.invisible{visibility:hidden!important}@media (min-width:576px){.float-sm-start{float:right!important}.float-sm-end{float:left!important}.float-sm-none{float:none!important}.d-sm-inline{display:inline!important}.d-sm-inline-block{display:inline-block!important}.d-sm-block{display:block!important}.d-sm-grid{display:grid!important}.d-sm-table{display:table!important}.d-sm-table-row{display:table-row!important}.d-sm-table-cell{display:table-cell!important}.d-sm-flex{display:flex!important}.d-sm-inline-flex{display:inline-flex!important}.d-sm-none{display:none!important}.flex-sm-fill{flex:1 1 
auto!important}.flex-sm-row{flex-direction:row!important}.flex-sm-column{flex-direction:column!important}.flex-sm-row-reverse{flex-direction:row-reverse!important}.flex-sm-column-reverse{flex-direction:column-reverse!important}.flex-sm-grow-0{flex-grow:0!important}.flex-sm-grow-1{flex-grow:1!important}.flex-sm-shrink-0{flex-shrink:0!important}.flex-sm-shrink-1{flex-shrink:1!important}.flex-sm-wrap{flex-wrap:wrap!important}.flex-sm-nowrap{flex-wrap:nowrap!important}.flex-sm-wrap-reverse{flex-wrap:wrap-reverse!important}.gap-sm-0{gap:0!important}.gap-sm-1{gap:.25rem!important}.gap-sm-2{gap:.5rem!important}.gap-sm-3{gap:1rem!important}.gap-sm-4{gap:1.5rem!important}.gap-sm-5{gap:3rem!important}.justify-content-sm-start{justify-content:flex-start!important}.justify-content-sm-end{justify-content:flex-end!important}.justify-content-sm-center{justify-content:center!important}.justify-content-sm-between{justify-content:space-between!important}.justify-content-sm-around{justify-content:space-around!important}.justify-content-sm-evenly{justify-content:space-evenly!important}.align-items-sm-start{align-items:flex-start!important}.align-items-sm-end{align-items:flex-end!important}.align-items-sm-center{align-items:center!important}.align-items-sm-baseline{align-items:baseline!important}.align-items-sm-stretch{align-items:stretch!important}.align-content-sm-start{align-content:flex-start!important}.align-content-sm-end{align-content:flex-end!important}.align-content-sm-center{align-content:center!important}.align-content-sm-between{align-content:space-between!important}.align-content-sm-around{align-content:space-around!important}.align-content-sm-stretch{align-content:stretch!important}.align-self-sm-auto{align-self:auto!important}.align-self-sm-start{align-self:flex-start!important}.align-self-sm-end{align-self:flex-end!important}.align-self-sm-center{align-self:center!important}.align-self-sm-baseline{align-self:baseline!important}.align-self-sm-stretch{align-self:stretch!important}.order-sm-first{order:-1!important}.order-sm-0{order:0!important}.order-sm-1{order:1!important}.order-sm-2{order:2!important}.order-sm-3{order:3!important}.order-sm-4{order:4!important}.order-sm-5{order:5!important}.order-sm-last{order:6!important}.m-sm-0{margin:0!important}.m-sm-1{margin:.25rem!important}.m-sm-2{margin:.5rem!important}.m-sm-3{margin:1rem!important}.m-sm-4{margin:1.5rem!important}.m-sm-5{margin:3rem!important}.m-sm-auto{margin:auto!important}.mx-sm-0{margin-left:0!important;margin-right:0!important}.mx-sm-1{margin-left:.25rem!important;margin-right:.25rem!important}.mx-sm-2{margin-left:.5rem!important;margin-right:.5rem!important}.mx-sm-3{margin-left:1rem!important;margin-right:1rem!important}.mx-sm-4{margin-left:1.5rem!important;margin-right:1.5rem!important}.mx-sm-5{margin-left:3rem!important;margin-right:3rem!important}.mx-sm-auto{margin-left:auto!important;margin-right:auto!important}.my-sm-0{margin-top:0!important;margin-bottom:0!important}.my-sm-1{margin-top:.25rem!important;margin-bottom:.25rem!important}.my-sm-2{margin-top:.5rem!important;margin-bottom:.5rem!important}.my-sm-3{margin-top:1rem!important;margin-bottom:1rem!important}.my-sm-4{margin-top:1.5rem!important;margin-bottom:1.5rem!important}.my-sm-5{margin-top:3rem!important;margin-bottom:3rem!important}.my-sm-auto{margin-top:auto!important;margin-bottom:auto!important}.mt-sm-0{margin-top:0!important}.mt-sm-1{margin-top:.25rem!important}.mt-sm-2{margin-top:.5rem!important}.mt-sm-3{margin-top:1rem!important}.mt-sm-4{margin-top:1.5rem!importa
nt}.mt-sm-5{margin-top:3rem!important}.mt-sm-auto{margin-top:auto!important}.me-sm-0{margin-left:0!important}.me-sm-1{margin-left:.25rem!important}.me-sm-2{margin-left:.5rem!important}.me-sm-3{margin-left:1rem!important}.me-sm-4{margin-left:1.5rem!important}.me-sm-5{margin-left:3rem!important}.me-sm-auto{margin-left:auto!important}.mb-sm-0{margin-bottom:0!important}.mb-sm-1{margin-bottom:.25rem!important}.mb-sm-2{margin-bottom:.5rem!important}.mb-sm-3{margin-bottom:1rem!important}.mb-sm-4{margin-bottom:1.5rem!important}.mb-sm-5{margin-bottom:3rem!important}.mb-sm-auto{margin-bottom:auto!important}.ms-sm-0{margin-right:0!important}.ms-sm-1{margin-right:.25rem!important}.ms-sm-2{margin-right:.5rem!important}.ms-sm-3{margin-right:1rem!important}.ms-sm-4{margin-right:1.5rem!important}.ms-sm-5{margin-right:3rem!important}.ms-sm-auto{margin-right:auto!important}.p-sm-0{padding:0!important}.p-sm-1{padding:.25rem!important}.p-sm-2{padding:.5rem!important}.p-sm-3{padding:1rem!important}.p-sm-4{padding:1.5rem!important}.p-sm-5{padding:3rem!important}.px-sm-0{padding-left:0!important;padding-right:0!important}.px-sm-1{padding-left:.25rem!important;padding-right:.25rem!important}.px-sm-2{padding-left:.5rem!important;padding-right:.5rem!important}.px-sm-3{padding-left:1rem!important;padding-right:1rem!important}.px-sm-4{padding-left:1.5rem!important;padding-right:1.5rem!important}.px-sm-5{padding-left:3rem!important;padding-right:3rem!important}.py-sm-0{padding-top:0!important;padding-bottom:0!important}.py-sm-1{padding-top:.25rem!important;padding-bottom:.25rem!important}.py-sm-2{padding-top:.5rem!important;padding-bottom:.5rem!important}.py-sm-3{padding-top:1rem!important;padding-bottom:1rem!important}.py-sm-4{padding-top:1.5rem!important;padding-bottom:1.5rem!important}.py-sm-5{padding-top:3rem!important;padding-bottom:3rem!important}.pt-sm-0{padding-top:0!important}.pt-sm-1{padding-top:.25rem!important}.pt-sm-2{padding-top:.5rem!important}.pt-sm-3{padding-top:1rem!important}.pt-sm-4{padding-top:1.5rem!important}.pt-sm-5{padding-top:3rem!important}.pe-sm-0{padding-left:0!important}.pe-sm-1{padding-left:.25rem!important}.pe-sm-2{padding-left:.5rem!important}.pe-sm-3{padding-left:1rem!important}.pe-sm-4{padding-left:1.5rem!important}.pe-sm-5{padding-left:3rem!important}.pb-sm-0{padding-bottom:0!important}.pb-sm-1{padding-bottom:.25rem!important}.pb-sm-2{padding-bottom:.5rem!important}.pb-sm-3{padding-bottom:1rem!important}.pb-sm-4{padding-bottom:1.5rem!important}.pb-sm-5{padding-bottom:3rem!important}.ps-sm-0{padding-right:0!important}.ps-sm-1{padding-right:.25rem!important}.ps-sm-2{padding-right:.5rem!important}.ps-sm-3{padding-right:1rem!important}.ps-sm-4{padding-right:1.5rem!important}.ps-sm-5{padding-right:3rem!important}.text-sm-start{text-align:right!important}.text-sm-end{text-align:left!important}.text-sm-center{text-align:center!important}}@media (min-width:768px){.float-md-start{float:right!important}.float-md-end{float:left!important}.float-md-none{float:none!important}.d-md-inline{display:inline!important}.d-md-inline-block{display:inline-block!important}.d-md-block{display:block!important}.d-md-grid{display:grid!important}.d-md-table{display:table!important}.d-md-table-row{display:table-row!important}.d-md-table-cell{display:table-cell!important}.d-md-flex{display:flex!important}.d-md-inline-flex{display:inline-flex!important}.d-md-none{display:none!important}.flex-md-fill{flex:1 1 
auto!important}.flex-md-row{flex-direction:row!important}.flex-md-column{flex-direction:column!important}.flex-md-row-reverse{flex-direction:row-reverse!important}.flex-md-column-reverse{flex-direction:column-reverse!important}.flex-md-grow-0{flex-grow:0!important}.flex-md-grow-1{flex-grow:1!important}.flex-md-shrink-0{flex-shrink:0!important}.flex-md-shrink-1{flex-shrink:1!important}.flex-md-wrap{flex-wrap:wrap!important}.flex-md-nowrap{flex-wrap:nowrap!important}.flex-md-wrap-reverse{flex-wrap:wrap-reverse!important}.gap-md-0{gap:0!important}.gap-md-1{gap:.25rem!important}.gap-md-2{gap:.5rem!important}.gap-md-3{gap:1rem!important}.gap-md-4{gap:1.5rem!important}.gap-md-5{gap:3rem!important}.justify-content-md-start{justify-content:flex-start!important}.justify-content-md-end{justify-content:flex-end!important}.justify-content-md-center{justify-content:center!important}.justify-content-md-between{justify-content:space-between!important}.justify-content-md-around{justify-content:space-around!important}.justify-content-md-evenly{justify-content:space-evenly!important}.align-items-md-start{align-items:flex-start!important}.align-items-md-end{align-items:flex-end!important}.align-items-md-center{align-items:center!important}.align-items-md-baseline{align-items:baseline!important}.align-items-md-stretch{align-items:stretch!important}.align-content-md-start{align-content:flex-start!important}.align-content-md-end{align-content:flex-end!important}.align-content-md-center{align-content:center!important}.align-content-md-between{align-content:space-between!important}.align-content-md-around{align-content:space-around!important}.align-content-md-stretch{align-content:stretch!important}.align-self-md-auto{align-self:auto!important}.align-self-md-start{align-self:flex-start!important}.align-self-md-end{align-self:flex-end!important}.align-self-md-center{align-self:center!important}.align-self-md-baseline{align-self:baseline!important}.align-self-md-stretch{align-self:stretch!important}.order-md-first{order:-1!important}.order-md-0{order:0!important}.order-md-1{order:1!important}.order-md-2{order:2!important}.order-md-3{order:3!important}.order-md-4{order:4!important}.order-md-5{order:5!important}.order-md-last{order:6!important}.m-md-0{margin:0!important}.m-md-1{margin:.25rem!important}.m-md-2{margin:.5rem!important}.m-md-3{margin:1rem!important}.m-md-4{margin:1.5rem!important}.m-md-5{margin:3rem!important}.m-md-auto{margin:auto!important}.mx-md-0{margin-left:0!important;margin-right:0!important}.mx-md-1{margin-left:.25rem!important;margin-right:.25rem!important}.mx-md-2{margin-left:.5rem!important;margin-right:.5rem!important}.mx-md-3{margin-left:1rem!important;margin-right:1rem!important}.mx-md-4{margin-left:1.5rem!important;margin-right:1.5rem!important}.mx-md-5{margin-left:3rem!important;margin-right:3rem!important}.mx-md-auto{margin-left:auto!important;margin-right:auto!important}.my-md-0{margin-top:0!important;margin-bottom:0!important}.my-md-1{margin-top:.25rem!important;margin-bottom:.25rem!important}.my-md-2{margin-top:.5rem!important;margin-bottom:.5rem!important}.my-md-3{margin-top:1rem!important;margin-bottom:1rem!important}.my-md-4{margin-top:1.5rem!important;margin-bottom:1.5rem!important}.my-md-5{margin-top:3rem!important;margin-bottom:3rem!important}.my-md-auto{margin-top:auto!important;margin-bottom:auto!important}.mt-md-0{margin-top:0!important}.mt-md-1{margin-top:.25rem!important}.mt-md-2{margin-top:.5rem!important}.mt-md-3{margin-top:1rem!important}.mt-md-4{margin-top:1.5rem!importa
nt}.mt-md-5{margin-top:3rem!important}.mt-md-auto{margin-top:auto!important}.me-md-0{margin-left:0!important}.me-md-1{margin-left:.25rem!important}.me-md-2{margin-left:.5rem!important}.me-md-3{margin-left:1rem!important}.me-md-4{margin-left:1.5rem!important}.me-md-5{margin-left:3rem!important}.me-md-auto{margin-left:auto!important}.mb-md-0{margin-bottom:0!important}.mb-md-1{margin-bottom:.25rem!important}.mb-md-2{margin-bottom:.5rem!important}.mb-md-3{margin-bottom:1rem!important}.mb-md-4{margin-bottom:1.5rem!important}.mb-md-5{margin-bottom:3rem!important}.mb-md-auto{margin-bottom:auto!important}.ms-md-0{margin-right:0!important}.ms-md-1{margin-right:.25rem!important}.ms-md-2{margin-right:.5rem!important}.ms-md-3{margin-right:1rem!important}.ms-md-4{margin-right:1.5rem!important}.ms-md-5{margin-right:3rem!important}.ms-md-auto{margin-right:auto!important}.p-md-0{padding:0!important}.p-md-1{padding:.25rem!important}.p-md-2{padding:.5rem!important}.p-md-3{padding:1rem!important}.p-md-4{padding:1.5rem!important}.p-md-5{padding:3rem!important}.px-md-0{padding-left:0!important;padding-right:0!important}.px-md-1{padding-left:.25rem!important;padding-right:.25rem!important}.px-md-2{padding-left:.5rem!important;padding-right:.5rem!important}.px-md-3{padding-left:1rem!important;padding-right:1rem!important}.px-md-4{padding-left:1.5rem!important;padding-right:1.5rem!important}.px-md-5{padding-left:3rem!important;padding-right:3rem!important}.py-md-0{padding-top:0!important;padding-bottom:0!important}.py-md-1{padding-top:.25rem!important;padding-bottom:.25rem!important}.py-md-2{padding-top:.5rem!important;padding-bottom:.5rem!important}.py-md-3{padding-top:1rem!important;padding-bottom:1rem!important}.py-md-4{padding-top:1.5rem!important;padding-bottom:1.5rem!important}.py-md-5{padding-top:3rem!important;padding-bottom:3rem!important}.pt-md-0{padding-top:0!important}.pt-md-1{padding-top:.25rem!important}.pt-md-2{padding-top:.5rem!important}.pt-md-3{padding-top:1rem!important}.pt-md-4{padding-top:1.5rem!important}.pt-md-5{padding-top:3rem!important}.pe-md-0{padding-left:0!important}.pe-md-1{padding-left:.25rem!important}.pe-md-2{padding-left:.5rem!important}.pe-md-3{padding-left:1rem!important}.pe-md-4{padding-left:1.5rem!important}.pe-md-5{padding-left:3rem!important}.pb-md-0{padding-bottom:0!important}.pb-md-1{padding-bottom:.25rem!important}.pb-md-2{padding-bottom:.5rem!important}.pb-md-3{padding-bottom:1rem!important}.pb-md-4{padding-bottom:1.5rem!important}.pb-md-5{padding-bottom:3rem!important}.ps-md-0{padding-right:0!important}.ps-md-1{padding-right:.25rem!important}.ps-md-2{padding-right:.5rem!important}.ps-md-3{padding-right:1rem!important}.ps-md-4{padding-right:1.5rem!important}.ps-md-5{padding-right:3rem!important}.text-md-start{text-align:right!important}.text-md-end{text-align:left!important}.text-md-center{text-align:center!important}}@media (min-width:992px){.float-lg-start{float:right!important}.float-lg-end{float:left!important}.float-lg-none{float:none!important}.d-lg-inline{display:inline!important}.d-lg-inline-block{display:inline-block!important}.d-lg-block{display:block!important}.d-lg-grid{display:grid!important}.d-lg-table{display:table!important}.d-lg-table-row{display:table-row!important}.d-lg-table-cell{display:table-cell!important}.d-lg-flex{display:flex!important}.d-lg-inline-flex{display:inline-flex!important}.d-lg-none{display:none!important}.flex-lg-fill{flex:1 1 
auto!important}.flex-lg-row{flex-direction:row!important}.flex-lg-column{flex-direction:column!important}.flex-lg-row-reverse{flex-direction:row-reverse!important}.flex-lg-column-reverse{flex-direction:column-reverse!important}.flex-lg-grow-0{flex-grow:0!important}.flex-lg-grow-1{flex-grow:1!important}.flex-lg-shrink-0{flex-shrink:0!important}.flex-lg-shrink-1{flex-shrink:1!important}.flex-lg-wrap{flex-wrap:wrap!important}.flex-lg-nowrap{flex-wrap:nowrap!important}.flex-lg-wrap-reverse{flex-wrap:wrap-reverse!important}.gap-lg-0{gap:0!important}.gap-lg-1{gap:.25rem!important}.gap-lg-2{gap:.5rem!important}.gap-lg-3{gap:1rem!important}.gap-lg-4{gap:1.5rem!important}.gap-lg-5{gap:3rem!important}.justify-content-lg-start{justify-content:flex-start!important}.justify-content-lg-end{justify-content:flex-end!important}.justify-content-lg-center{justify-content:center!important}.justify-content-lg-between{justify-content:space-between!important}.justify-content-lg-around{justify-content:space-around!important}.justify-content-lg-evenly{justify-content:space-evenly!important}.align-items-lg-start{align-items:flex-start!important}.align-items-lg-end{align-items:flex-end!important}.align-items-lg-center{align-items:center!important}.align-items-lg-baseline{align-items:baseline!important}.align-items-lg-stretch{align-items:stretch!important}.align-content-lg-start{align-content:flex-start!important}.align-content-lg-end{align-content:flex-end!important}.align-content-lg-center{align-content:center!important}.align-content-lg-between{align-content:space-between!important}.align-content-lg-around{align-content:space-around!important}.align-content-lg-stretch{align-content:stretch!important}.align-self-lg-auto{align-self:auto!important}.align-self-lg-start{align-self:flex-start!important}.align-self-lg-end{align-self:flex-end!important}.align-self-lg-center{align-self:center!important}.align-self-lg-baseline{align-self:baseline!important}.align-self-lg-stretch{align-self:stretch!important}.order-lg-first{order:-1!important}.order-lg-0{order:0!important}.order-lg-1{order:1!important}.order-lg-2{order:2!important}.order-lg-3{order:3!important}.order-lg-4{order:4!important}.order-lg-5{order:5!important}.order-lg-last{order:6!important}.m-lg-0{margin:0!important}.m-lg-1{margin:.25rem!important}.m-lg-2{margin:.5rem!important}.m-lg-3{margin:1rem!important}.m-lg-4{margin:1.5rem!important}.m-lg-5{margin:3rem!important}.m-lg-auto{margin:auto!important}.mx-lg-0{margin-left:0!important;margin-right:0!important}.mx-lg-1{margin-left:.25rem!important;margin-right:.25rem!important}.mx-lg-2{margin-left:.5rem!important;margin-right:.5rem!important}.mx-lg-3{margin-left:1rem!important;margin-right:1rem!important}.mx-lg-4{margin-left:1.5rem!important;margin-right:1.5rem!important}.mx-lg-5{margin-left:3rem!important;margin-right:3rem!important}.mx-lg-auto{margin-left:auto!important;margin-right:auto!important}.my-lg-0{margin-top:0!important;margin-bottom:0!important}.my-lg-1{margin-top:.25rem!important;margin-bottom:.25rem!important}.my-lg-2{margin-top:.5rem!important;margin-bottom:.5rem!important}.my-lg-3{margin-top:1rem!important;margin-bottom:1rem!important}.my-lg-4{margin-top:1.5rem!important;margin-bottom:1.5rem!important}.my-lg-5{margin-top:3rem!important;margin-bottom:3rem!important}.my-lg-auto{margin-top:auto!important;margin-bottom:auto!important}.mt-lg-0{margin-top:0!important}.mt-lg-1{margin-top:.25rem!important}.mt-lg-2{margin-top:.5rem!important}.mt-lg-3{margin-top:1rem!important}.mt-lg-4{margin-top:1.5rem!importa
nt}.mt-lg-5{margin-top:3rem!important}.mt-lg-auto{margin-top:auto!important}.me-lg-0{margin-left:0!important}.me-lg-1{margin-left:.25rem!important}.me-lg-2{margin-left:.5rem!important}.me-lg-3{margin-left:1rem!important}.me-lg-4{margin-left:1.5rem!important}.me-lg-5{margin-left:3rem!important}.me-lg-auto{margin-left:auto!important}.mb-lg-0{margin-bottom:0!important}.mb-lg-1{margin-bottom:.25rem!important}.mb-lg-2{margin-bottom:.5rem!important}.mb-lg-3{margin-bottom:1rem!important}.mb-lg-4{margin-bottom:1.5rem!important}.mb-lg-5{margin-bottom:3rem!important}.mb-lg-auto{margin-bottom:auto!important}.ms-lg-0{margin-right:0!important}.ms-lg-1{margin-right:.25rem!important}.ms-lg-2{margin-right:.5rem!important}.ms-lg-3{margin-right:1rem!important}.ms-lg-4{margin-right:1.5rem!important}.ms-lg-5{margin-right:3rem!important}.ms-lg-auto{margin-right:auto!important}.p-lg-0{padding:0!important}.p-lg-1{padding:.25rem!important}.p-lg-2{padding:.5rem!important}.p-lg-3{padding:1rem!important}.p-lg-4{padding:1.5rem!important}.p-lg-5{padding:3rem!important}.px-lg-0{padding-left:0!important;padding-right:0!important}.px-lg-1{padding-left:.25rem!important;padding-right:.25rem!important}.px-lg-2{padding-left:.5rem!important;padding-right:.5rem!important}.px-lg-3{padding-left:1rem!important;padding-right:1rem!important}.px-lg-4{padding-left:1.5rem!important;padding-right:1.5rem!important}.px-lg-5{padding-left:3rem!important;padding-right:3rem!important}.py-lg-0{padding-top:0!important;padding-bottom:0!important}.py-lg-1{padding-top:.25rem!important;padding-bottom:.25rem!important}.py-lg-2{padding-top:.5rem!important;padding-bottom:.5rem!important}.py-lg-3{padding-top:1rem!important;padding-bottom:1rem!important}.py-lg-4{padding-top:1.5rem!important;padding-bottom:1.5rem!important}.py-lg-5{padding-top:3rem!important;padding-bottom:3rem!important}.pt-lg-0{padding-top:0!important}.pt-lg-1{padding-top:.25rem!important}.pt-lg-2{padding-top:.5rem!important}.pt-lg-3{padding-top:1rem!important}.pt-lg-4{padding-top:1.5rem!important}.pt-lg-5{padding-top:3rem!important}.pe-lg-0{padding-left:0!important}.pe-lg-1{padding-left:.25rem!important}.pe-lg-2{padding-left:.5rem!important}.pe-lg-3{padding-left:1rem!important}.pe-lg-4{padding-left:1.5rem!important}.pe-lg-5{padding-left:3rem!important}.pb-lg-0{padding-bottom:0!important}.pb-lg-1{padding-bottom:.25rem!important}.pb-lg-2{padding-bottom:.5rem!important}.pb-lg-3{padding-bottom:1rem!important}.pb-lg-4{padding-bottom:1.5rem!important}.pb-lg-5{padding-bottom:3rem!important}.ps-lg-0{padding-right:0!important}.ps-lg-1{padding-right:.25rem!important}.ps-lg-2{padding-right:.5rem!important}.ps-lg-3{padding-right:1rem!important}.ps-lg-4{padding-right:1.5rem!important}.ps-lg-5{padding-right:3rem!important}.text-lg-start{text-align:right!important}.text-lg-end{text-align:left!important}.text-lg-center{text-align:center!important}}@media (min-width:1200px){.float-xl-start{float:right!important}.float-xl-end{float:left!important}.float-xl-none{float:none!important}.d-xl-inline{display:inline!important}.d-xl-inline-block{display:inline-block!important}.d-xl-block{display:block!important}.d-xl-grid{display:grid!important}.d-xl-table{display:table!important}.d-xl-table-row{display:table-row!important}.d-xl-table-cell{display:table-cell!important}.d-xl-flex{display:flex!important}.d-xl-inline-flex{display:inline-flex!important}.d-xl-none{display:none!important}.flex-xl-fill{flex:1 1 
auto!important}.flex-xl-row{flex-direction:row!important}.flex-xl-column{flex-direction:column!important}.flex-xl-row-reverse{flex-direction:row-reverse!important}.flex-xl-column-reverse{flex-direction:column-reverse!important}.flex-xl-grow-0{flex-grow:0!important}.flex-xl-grow-1{flex-grow:1!important}.flex-xl-shrink-0{flex-shrink:0!important}.flex-xl-shrink-1{flex-shrink:1!important}.flex-xl-wrap{flex-wrap:wrap!important}.flex-xl-nowrap{flex-wrap:nowrap!important}.flex-xl-wrap-reverse{flex-wrap:wrap-reverse!important}.gap-xl-0{gap:0!important}.gap-xl-1{gap:.25rem!important}.gap-xl-2{gap:.5rem!important}.gap-xl-3{gap:1rem!important}.gap-xl-4{gap:1.5rem!important}.gap-xl-5{gap:3rem!important}.justify-content-xl-start{justify-content:flex-start!important}.justify-content-xl-end{justify-content:flex-end!important}.justify-content-xl-center{justify-content:center!important}.justify-content-xl-between{justify-content:space-between!important}.justify-content-xl-around{justify-content:space-around!important}.justify-content-xl-evenly{justify-content:space-evenly!important}.align-items-xl-start{align-items:flex-start!important}.align-items-xl-end{align-items:flex-end!important}.align-items-xl-center{align-items:center!important}.align-items-xl-baseline{align-items:baseline!important}.align-items-xl-stretch{align-items:stretch!important}.align-content-xl-start{align-content:flex-start!important}.align-content-xl-end{align-content:flex-end!important}.align-content-xl-center{align-content:center!important}.align-content-xl-between{align-content:space-between!important}.align-content-xl-around{align-content:space-around!important}.align-content-xl-stretch{align-content:stretch!important}.align-self-xl-auto{align-self:auto!important}.align-self-xl-start{align-self:flex-start!important}.align-self-xl-end{align-self:flex-end!important}.align-self-xl-center{align-self:center!important}.align-self-xl-baseline{align-self:baseline!important}.align-self-xl-stretch{align-self:stretch!important}.order-xl-first{order:-1!important}.order-xl-0{order:0!important}.order-xl-1{order:1!important}.order-xl-2{order:2!important}.order-xl-3{order:3!important}.order-xl-4{order:4!important}.order-xl-5{order:5!important}.order-xl-last{order:6!important}.m-xl-0{margin:0!important}.m-xl-1{margin:.25rem!important}.m-xl-2{margin:.5rem!important}.m-xl-3{margin:1rem!important}.m-xl-4{margin:1.5rem!important}.m-xl-5{margin:3rem!important}.m-xl-auto{margin:auto!important}.mx-xl-0{margin-left:0!important;margin-right:0!important}.mx-xl-1{margin-left:.25rem!important;margin-right:.25rem!important}.mx-xl-2{margin-left:.5rem!important;margin-right:.5rem!important}.mx-xl-3{margin-left:1rem!important;margin-right:1rem!important}.mx-xl-4{margin-left:1.5rem!important;margin-right:1.5rem!important}.mx-xl-5{margin-left:3rem!important;margin-right:3rem!important}.mx-xl-auto{margin-left:auto!important;margin-right:auto!important}.my-xl-0{margin-top:0!important;margin-bottom:0!important}.my-xl-1{margin-top:.25rem!important;margin-bottom:.25rem!important}.my-xl-2{margin-top:.5rem!important;margin-bottom:.5rem!important}.my-xl-3{margin-top:1rem!important;margin-bottom:1rem!important}.my-xl-4{margin-top:1.5rem!important;margin-bottom:1.5rem!important}.my-xl-5{margin-top:3rem!important;margin-bottom:3rem!important}.my-xl-auto{margin-top:auto!important;margin-bottom:auto!important}.mt-xl-0{margin-top:0!important}.mt-xl-1{margin-top:.25rem!important}.mt-xl-2{margin-top:.5rem!important}.mt-xl-3{margin-top:1rem!important}.mt-xl-4{margin-top:1.5rem!importa
nt}.mt-xl-5{margin-top:3rem!important}.mt-xl-auto{margin-top:auto!important}.me-xl-0{margin-left:0!important}.me-xl-1{margin-left:.25rem!important}.me-xl-2{margin-left:.5rem!important}.me-xl-3{margin-left:1rem!important}.me-xl-4{margin-left:1.5rem!important}.me-xl-5{margin-left:3rem!important}.me-xl-auto{margin-left:auto!important}.mb-xl-0{margin-bottom:0!important}.mb-xl-1{margin-bottom:.25rem!important}.mb-xl-2{margin-bottom:.5rem!important}.mb-xl-3{margin-bottom:1rem!important}.mb-xl-4{margin-bottom:1.5rem!important}.mb-xl-5{margin-bottom:3rem!important}.mb-xl-auto{margin-bottom:auto!important}.ms-xl-0{margin-right:0!important}.ms-xl-1{margin-right:.25rem!important}.ms-xl-2{margin-right:.5rem!important}.ms-xl-3{margin-right:1rem!important}.ms-xl-4{margin-right:1.5rem!important}.ms-xl-5{margin-right:3rem!important}.ms-xl-auto{margin-right:auto!important}.p-xl-0{padding:0!important}.p-xl-1{padding:.25rem!important}.p-xl-2{padding:.5rem!important}.p-xl-3{padding:1rem!important}.p-xl-4{padding:1.5rem!important}.p-xl-5{padding:3rem!important}.px-xl-0{padding-left:0!important;padding-right:0!important}.px-xl-1{padding-left:.25rem!important;padding-right:.25rem!important}.px-xl-2{padding-left:.5rem!important;padding-right:.5rem!important}.px-xl-3{padding-left:1rem!important;padding-right:1rem!important}.px-xl-4{padding-left:1.5rem!important;padding-right:1.5rem!important}.px-xl-5{padding-left:3rem!important;padding-right:3rem!important}.py-xl-0{padding-top:0!important;padding-bottom:0!important}.py-xl-1{padding-top:.25rem!important;padding-bottom:.25rem!important}.py-xl-2{padding-top:.5rem!important;padding-bottom:.5rem!important}.py-xl-3{padding-top:1rem!important;padding-bottom:1rem!important}.py-xl-4{padding-top:1.5rem!important;padding-bottom:1.5rem!important}.py-xl-5{padding-top:3rem!important;padding-bottom:3rem!important}.pt-xl-0{padding-top:0!important}.pt-xl-1{padding-top:.25rem!important}.pt-xl-2{padding-top:.5rem!important}.pt-xl-3{padding-top:1rem!important}.pt-xl-4{padding-top:1.5rem!important}.pt-xl-5{padding-top:3rem!important}.pe-xl-0{padding-left:0!important}.pe-xl-1{padding-left:.25rem!important}.pe-xl-2{padding-left:.5rem!important}.pe-xl-3{padding-left:1rem!important}.pe-xl-4{padding-left:1.5rem!important}.pe-xl-5{padding-left:3rem!important}.pb-xl-0{padding-bottom:0!important}.pb-xl-1{padding-bottom:.25rem!important}.pb-xl-2{padding-bottom:.5rem!important}.pb-xl-3{padding-bottom:1rem!important}.pb-xl-4{padding-bottom:1.5rem!important}.pb-xl-5{padding-bottom:3rem!important}.ps-xl-0{padding-right:0!important}.ps-xl-1{padding-right:.25rem!important}.ps-xl-2{padding-right:.5rem!important}.ps-xl-3{padding-right:1rem!important}.ps-xl-4{padding-right:1.5rem!important}.ps-xl-5{padding-right:3rem!important}.text-xl-start{text-align:right!important}.text-xl-end{text-align:left!important}.text-xl-center{text-align:center!important}}@media (min-width:1400px){.float-xxl-start{float:right!important}.float-xxl-end{float:left!important}.float-xxl-none{float:none!important}.d-xxl-inline{display:inline!important}.d-xxl-inline-block{display:inline-block!important}.d-xxl-block{display:block!important}.d-xxl-grid{display:grid!important}.d-xxl-table{display:table!important}.d-xxl-table-row{display:table-row!important}.d-xxl-table-cell{display:table-cell!important}.d-xxl-flex{display:flex!important}.d-xxl-inline-flex{display:inline-flex!important}.d-xxl-none{display:none!important}.flex-xxl-fill{flex:1 1 
auto!important}.flex-xxl-row{flex-direction:row!important}.flex-xxl-column{flex-direction:column!important}.flex-xxl-row-reverse{flex-direction:row-reverse!important}.flex-xxl-column-reverse{flex-direction:column-reverse!important}.flex-xxl-grow-0{flex-grow:0!important}.flex-xxl-grow-1{flex-grow:1!important}.flex-xxl-shrink-0{flex-shrink:0!important}.flex-xxl-shrink-1{flex-shrink:1!important}.flex-xxl-wrap{flex-wrap:wrap!important}.flex-xxl-nowrap{flex-wrap:nowrap!important}.flex-xxl-wrap-reverse{flex-wrap:wrap-reverse!important}.gap-xxl-0{gap:0!important}.gap-xxl-1{gap:.25rem!important}.gap-xxl-2{gap:.5rem!important}.gap-xxl-3{gap:1rem!important}.gap-xxl-4{gap:1.5rem!important}.gap-xxl-5{gap:3rem!important}.justify-content-xxl-start{justify-content:flex-start!important}.justify-content-xxl-end{justify-content:flex-end!important}.justify-content-xxl-center{justify-content:center!important}.justify-content-xxl-between{justify-content:space-between!important}.justify-content-xxl-around{justify-content:space-around!important}.justify-content-xxl-evenly{justify-content:space-evenly!important}.align-items-xxl-start{align-items:flex-start!important}.align-items-xxl-end{align-items:flex-end!important}.align-items-xxl-center{align-items:center!important}.align-items-xxl-baseline{align-items:baseline!important}.align-items-xxl-stretch{align-items:stretch!important}.align-content-xxl-start{align-content:flex-start!important}.align-content-xxl-end{align-content:flex-end!important}.align-content-xxl-center{align-content:center!important}.align-content-xxl-between{align-content:space-between!important}.align-content-xxl-around{align-content:space-around!important}.align-content-xxl-stretch{align-content:stretch!important}.align-self-xxl-auto{align-self:auto!important}.align-self-xxl-start{align-self:flex-start!important}.align-self-xxl-end{align-self:flex-end!important}.align-self-xxl-center{align-self:center!important}.align-self-xxl-baseline{align-self:baseline!important}.align-self-xxl-stretch{align-self:stretch!important}.order-xxl-first{order:-1!important}.order-xxl-0{order:0!important}.order-xxl-1{order:1!important}.order-xxl-2{order:2!important}.order-xxl-3{order:3!important}.order-xxl-4{order:4!important}.order-xxl-5{order:5!important}.order-xxl-last{order:6!important}.m-xxl-0{margin:0!important}.m-xxl-1{margin:.25rem!important}.m-xxl-2{margin:.5rem!important}.m-xxl-3{margin:1rem!important}.m-xxl-4{margin:1.5rem!important}.m-xxl-5{margin:3rem!important}.m-xxl-auto{margin:auto!important}.mx-xxl-0{margin-left:0!important;margin-right:0!important}.mx-xxl-1{margin-left:.25rem!important;margin-right:.25rem!important}.mx-xxl-2{margin-left:.5rem!important;margin-right:.5rem!important}.mx-xxl-3{margin-left:1rem!important;margin-right:1rem!important}.mx-xxl-4{margin-left:1.5rem!important;margin-right:1.5rem!important}.mx-xxl-5{margin-left:3rem!important;margin-right:3rem!important}.mx-xxl-auto{margin-left:auto!important;margin-right:auto!important}.my-xxl-0{margin-top:0!important;margin-bottom:0!important}.my-xxl-1{margin-top:.25rem!important;margin-bottom:.25rem!important}.my-xxl-2{margin-top:.5rem!important;margin-bottom:.5rem!important}.my-xxl-3{margin-top:1rem!important;margin-bottom:1rem!important}.my-xxl-4{margin-top:1.5rem!important;margin-bottom:1.5rem!important}.my-xxl-5{margin-top:3rem!important;margin-bottom:3rem!important}.my-xxl-auto{margin-top:auto!important;margin-bottom:auto!important}.mt-xxl-0{margin-top:0!important}.mt-xxl-1{margin-top:.25rem!important}.mt-xxl-2{margin-top:.5rem!importa
nt}.mt-xxl-3{margin-top:1rem!important}.mt-xxl-4{margin-top:1.5rem!important}.mt-xxl-5{margin-top:3rem!important}.mt-xxl-auto{margin-top:auto!important}.me-xxl-0{margin-left:0!important}.me-xxl-1{margin-left:.25rem!important}.me-xxl-2{margin-left:.5rem!important}.me-xxl-3{margin-left:1rem!important}.me-xxl-4{margin-left:1.5rem!important}.me-xxl-5{margin-left:3rem!important}.me-xxl-auto{margin-left:auto!important}.mb-xxl-0{margin-bottom:0!important}.mb-xxl-1{margin-bottom:.25rem!important}.mb-xxl-2{margin-bottom:.5rem!important}.mb-xxl-3{margin-bottom:1rem!important}.mb-xxl-4{margin-bottom:1.5rem!important}.mb-xxl-5{margin-bottom:3rem!important}.mb-xxl-auto{margin-bottom:auto!important}.ms-xxl-0{margin-right:0!important}.ms-xxl-1{margin-right:.25rem!important}.ms-xxl-2{margin-right:.5rem!important}.ms-xxl-3{margin-right:1rem!important}.ms-xxl-4{margin-right:1.5rem!important}.ms-xxl-5{margin-right:3rem!important}.ms-xxl-auto{margin-right:auto!important}.p-xxl-0{padding:0!important}.p-xxl-1{padding:.25rem!important}.p-xxl-2{padding:.5rem!important}.p-xxl-3{padding:1rem!important}.p-xxl-4{padding:1.5rem!important}.p-xxl-5{padding:3rem!important}.px-xxl-0{padding-left:0!important;padding-right:0!important}.px-xxl-1{padding-left:.25rem!important;padding-right:.25rem!important}.px-xxl-2{padding-left:.5rem!important;padding-right:.5rem!important}.px-xxl-3{padding-left:1rem!important;padding-right:1rem!important}.px-xxl-4{padding-left:1.5rem!important;padding-right:1.5rem!important}.px-xxl-5{padding-left:3rem!important;padding-right:3rem!important}.py-xxl-0{padding-top:0!important;padding-bottom:0!important}.py-xxl-1{padding-top:.25rem!important;padding-bottom:.25rem!important}.py-xxl-2{padding-top:.5rem!important;padding-bottom:.5rem!important}.py-xxl-3{padding-top:1rem!important;padding-bottom:1rem!important}.py-xxl-4{padding-top:1.5rem!important;padding-bottom:1.5rem!important}.py-xxl-5{padding-top:3rem!important;padding-bottom:3rem!important}.pt-xxl-0{padding-top:0!important}.pt-xxl-1{padding-top:.25rem!important}.pt-xxl-2{padding-top:.5rem!important}.pt-xxl-3{padding-top:1rem!important}.pt-xxl-4{padding-top:1.5rem!important}.pt-xxl-5{padding-top:3rem!important}.pe-xxl-0{padding-left:0!important}.pe-xxl-1{padding-left:.25rem!important}.pe-xxl-2{padding-left:.5rem!important}.pe-xxl-3{padding-left:1rem!important}.pe-xxl-4{padding-left:1.5rem!important}.pe-xxl-5{padding-left:3rem!important}.pb-xxl-0{padding-bottom:0!important}.pb-xxl-1{padding-bottom:.25rem!important}.pb-xxl-2{padding-bottom:.5rem!important}.pb-xxl-3{padding-bottom:1rem!important}.pb-xxl-4{padding-bottom:1.5rem!important}.pb-xxl-5{padding-bottom:3rem!important}.ps-xxl-0{padding-right:0!important}.ps-xxl-1{padding-right:.25rem!important}.ps-xxl-2{padding-right:.5rem!important}.ps-xxl-3{padding-right:1rem!important}.ps-xxl-4{padding-right:1.5rem!important}.ps-xxl-5{padding-right:3rem!important}.text-xxl-start{text-align:right!important}.text-xxl-end{text-align:left!important}.text-xxl-center{text-align:center!important}}@media (min-width:1200px){.fs-1{font-size:2.5rem!important}.fs-2{font-size:2rem!important}.fs-3{font-size:1.75rem!important}.fs-4{font-size:1.5rem!important}}@media 
print{.d-print-inline{display:inline!important}.d-print-inline-block{display:inline-block!important}.d-print-block{display:block!important}.d-print-grid{display:grid!important}.d-print-table{display:table!important}.d-print-table-row{display:table-row!important}.d-print-table-cell{display:table-cell!important}.d-print-flex{display:flex!important}.d-print-inline-flex{display:inline-flex!important}.d-print-none{display:none!important}} -/*# sourceMappingURL=bootstrap.rtl.min.css.map */ \ No newline at end of file diff --git a/spaces/XAI/CHM-Corr/model/chmnet.py b/spaces/XAI/CHM-Corr/model/chmnet.py deleted file mode 100644 index 137bb362ade79b98f5abdaaf98ece060a8195f0e..0000000000000000000000000000000000000000 --- a/spaces/XAI/CHM-Corr/model/chmnet.py +++ /dev/null @@ -1,42 +0,0 @@ -r""" Convolutional Hough Matching Networks """ - -import torch.nn as nn -import torch - -from . import chmlearner as chmlearner -from .base import backbone - - -class CHMNet(nn.Module): - def __init__(self, ktype): - super(CHMNet, self).__init__() - - self.backbone = backbone.resnet101(pretrained=True) - self.learner = chmlearner.CHMLearner(ktype, feat_dim=1024) - - def forward(self, src_img, trg_img): - src_feat, trg_feat = self.extract_features(src_img, trg_img) - correlation = self.learner(src_feat, trg_feat) - return correlation - - def extract_features(self, src_img, trg_img): - feat = self.backbone.conv1.forward(torch.cat([src_img, trg_img], dim=1)) - feat = self.backbone.bn1.forward(feat) - feat = self.backbone.relu.forward(feat) - feat = self.backbone.maxpool.forward(feat) - - for idx in range(1, 5): - feat = self.backbone.__getattr__('layer%d' % idx)(feat) - - if idx == 3: - src_feat = feat.narrow(1, 0, feat.size(1) // 2).clone() - trg_feat = feat.narrow(1, feat.size(1) // 2, feat.size(1) // 2).clone() - return src_feat, trg_feat - - def training_objective(cls, prd_kps, trg_kps, npts): - l2dist = (prd_kps - trg_kps).pow(2).sum(dim=1) - loss = [] - for dist, npt in zip(l2dist, npts): - loss.append(dist[:npt].mean()) - return torch.stack(loss).mean() - diff --git a/spaces/XzJosh/Ava2-Bert-VITS2/data_utils.py b/spaces/XzJosh/Ava2-Bert-VITS2/data_utils.py deleted file mode 100644 index be3a29a93188c5b3386f22e5db29e5e96d78109a..0000000000000000000000000000000000000000 --- a/spaces/XzJosh/Ava2-Bert-VITS2/data_utils.py +++ /dev/null @@ -1,321 +0,0 @@ -import time -import os -import random -import numpy as np -import torch -import torch.utils.data -import commons -from mel_processing import spectrogram_torch, mel_spectrogram_torch, spec_to_mel_torch -from utils import load_wav_to_torch, load_filepaths_and_text -from text import cleaned_text_to_sequence, get_bert - -"""Multi speaker version""" - - -class TextAudioSpeakerLoader(torch.utils.data.Dataset): - """ - 1) loads audio, speaker_id, text pairs - 2) normalizes text and converts them to sequences of integers - 3) computes spectrograms from audio files. 
- """ - - def __init__(self, audiopaths_sid_text, hparams): - self.audiopaths_sid_text = load_filepaths_and_text(audiopaths_sid_text) - self.max_wav_value = hparams.max_wav_value - self.sampling_rate = hparams.sampling_rate - self.filter_length = hparams.filter_length - self.hop_length = hparams.hop_length - self.win_length = hparams.win_length - self.sampling_rate = hparams.sampling_rate - self.spk_map = hparams.spk2id - self.hparams = hparams - - self.use_mel_spec_posterior = getattr(hparams, "use_mel_posterior_encoder", False) - if self.use_mel_spec_posterior: - self.n_mel_channels = getattr(hparams, "n_mel_channels", 80) - - self.cleaned_text = getattr(hparams, "cleaned_text", False) - - self.add_blank = hparams.add_blank - self.min_text_len = getattr(hparams, "min_text_len", 1) - self.max_text_len = getattr(hparams, "max_text_len", 300) - - random.seed(1234) - random.shuffle(self.audiopaths_sid_text) - self._filter() - - def _filter(self): - """ - Filter text & store spec lengths - """ - # Store spectrogram lengths for Bucketing - # wav_length ~= file_size / (wav_channels * Bytes per dim) = file_size / (1 * 2) - # spec_length = wav_length // hop_length - - audiopaths_sid_text_new = [] - lengths = [] - skipped = 0 - for _id, spk, language, text, phones, tone, word2ph in self.audiopaths_sid_text: - audiopath = f'{_id}' - if self.min_text_len <= len(phones) and len(phones) <= self.max_text_len: - phones = phones.split(" ") - tone = [int(i) for i in tone.split(" ")] - word2ph = [int(i) for i in word2ph.split(" ")] - audiopaths_sid_text_new.append([audiopath, spk, language, text, phones, tone, word2ph]) - lengths.append(os.path.getsize(audiopath) // (2 * self.hop_length)) - else: - skipped += 1 - print("skipped: ", skipped, ", total: ", len(self.audiopaths_sid_text)) - self.audiopaths_sid_text = audiopaths_sid_text_new - self.lengths = lengths - - def get_audio_text_speaker_pair(self, audiopath_sid_text): - # separate filename, speaker_id and text - audiopath, sid, language, text, phones, tone, word2ph = audiopath_sid_text - - bert, phones, tone, language = self.get_text(text, word2ph, phones, tone, language, audiopath) - - spec, wav = self.get_audio(audiopath) - sid = torch.LongTensor([int(self.spk_map[sid])]) - return (phones, spec, wav, sid, tone, language, bert) - - def get_audio(self, filename): - audio, sampling_rate = load_wav_to_torch(filename) - if sampling_rate != self.sampling_rate: - raise ValueError("{} {} SR doesn't match target {} SR".format( - sampling_rate, self.sampling_rate)) - audio_norm = audio / self.max_wav_value - audio_norm = audio_norm.unsqueeze(0) - spec_filename = filename.replace(".wav", ".spec.pt") - if self.use_mel_spec_posterior: - spec_filename = spec_filename.replace(".spec.pt", ".mel.pt") - try: - spec = torch.load(spec_filename) - except: - if self.use_mel_spec_posterior: - spec = mel_spectrogram_torch(audio_norm, self.filter_length, - self.n_mel_channels, self.sampling_rate, self.hop_length, - self.win_length, self.hparams.mel_fmin, self.hparams.mel_fmax, center=False) - else: - spec = spectrogram_torch(audio_norm, self.filter_length, - self.sampling_rate, self.hop_length, self.win_length, - center=False) - spec = torch.squeeze(spec, 0) - torch.save(spec, spec_filename) - return spec, audio_norm - - def get_text(self, text, word2ph, phone, tone, language_str, wav_path): - pold = phone - w2pho = [i for i in word2ph] - word2ph = [i for i in word2ph] - phone, tone, language = cleaned_text_to_sequence(phone, tone, language_str) - pold2 = phone - - if 
self.add_blank: - p1 = len(phone) - phone = commons.intersperse(phone, 0) - p2 = len(phone) - t1 = len(tone) - tone = commons.intersperse(tone, 0) - t2 = len(tone) - language = commons.intersperse(language, 0) - for i in range(len(word2ph)): - word2ph[i] = word2ph[i] * 2 - word2ph[0] += 1 - bert_path = wav_path.replace(".wav", ".bert.pt") - try: - bert = torch.load(bert_path) - assert bert.shape[-1] == len(phone) - except: - bert = get_bert(text, word2ph, language_str) - torch.save(bert, bert_path) - #print(bert.shape[-1], bert_path, text, pold) - assert bert.shape[-1] == len(phone) - - assert bert.shape[-1] == len(phone), ( - bert.shape, len(phone), sum(word2ph), p1, p2, t1, t2, pold, pold2, word2ph, text, w2pho) - phone = torch.LongTensor(phone) - tone = torch.LongTensor(tone) - language = torch.LongTensor(language) - return bert, phone, tone, language - - def get_sid(self, sid): - sid = torch.LongTensor([int(sid)]) - return sid - - def __getitem__(self, index): - return self.get_audio_text_speaker_pair(self.audiopaths_sid_text[index]) - - def __len__(self): - return len(self.audiopaths_sid_text) - - -class TextAudioSpeakerCollate(): - """ Zero-pads model inputs and targets - """ - - def __init__(self, return_ids=False): - self.return_ids = return_ids - - def __call__(self, batch): - """Collate's training batch from normalized text, audio and speaker identities - PARAMS - ------ - batch: [text_normalized, spec_normalized, wav_normalized, sid] - """ - # Right zero-pad all one-hot text sequences to max input length - _, ids_sorted_decreasing = torch.sort( - torch.LongTensor([x[1].size(1) for x in batch]), - dim=0, descending=True) - - max_text_len = max([len(x[0]) for x in batch]) - max_spec_len = max([x[1].size(1) for x in batch]) - max_wav_len = max([x[2].size(1) for x in batch]) - - text_lengths = torch.LongTensor(len(batch)) - spec_lengths = torch.LongTensor(len(batch)) - wav_lengths = torch.LongTensor(len(batch)) - sid = torch.LongTensor(len(batch)) - - text_padded = torch.LongTensor(len(batch), max_text_len) - tone_padded = torch.LongTensor(len(batch), max_text_len) - language_padded = torch.LongTensor(len(batch), max_text_len) - bert_padded = torch.FloatTensor(len(batch), 1024, max_text_len) - - spec_padded = torch.FloatTensor(len(batch), batch[0][1].size(0), max_spec_len) - wav_padded = torch.FloatTensor(len(batch), 1, max_wav_len) - text_padded.zero_() - tone_padded.zero_() - language_padded.zero_() - spec_padded.zero_() - wav_padded.zero_() - bert_padded.zero_() - for i in range(len(ids_sorted_decreasing)): - row = batch[ids_sorted_decreasing[i]] - - text = row[0] - text_padded[i, :text.size(0)] = text - text_lengths[i] = text.size(0) - - spec = row[1] - spec_padded[i, :, :spec.size(1)] = spec - spec_lengths[i] = spec.size(1) - - wav = row[2] - wav_padded[i, :, :wav.size(1)] = wav - wav_lengths[i] = wav.size(1) - - sid[i] = row[3] - - tone = row[4] - tone_padded[i, :tone.size(0)] = tone - - language = row[5] - language_padded[i, :language.size(0)] = language - - bert = row[6] - bert_padded[i, :, :bert.size(1)] = bert - - return text_padded, text_lengths, spec_padded, spec_lengths, wav_padded, wav_lengths, sid, tone_padded, language_padded, bert_padded - - -class DistributedBucketSampler(torch.utils.data.distributed.DistributedSampler): - """ - Maintain similar input lengths in a batch. - Length groups are specified by boundaries. - Ex) boundaries = [b1, b2, b3] -> any batch is included either {x | b1 < length(x) <=b2} or {x | b2 < length(x) <= b3}. 
- - It removes samples which are not included in the boundaries. - Ex) boundaries = [b1, b2, b3] -> any x s.t. length(x) <= b1 or length(x) > b3 are discarded. - """ - - def __init__(self, dataset, batch_size, boundaries, num_replicas=None, rank=None, shuffle=True): - super().__init__(dataset, num_replicas=num_replicas, rank=rank, shuffle=shuffle) - self.lengths = dataset.lengths - self.batch_size = batch_size - self.boundaries = boundaries - - self.buckets, self.num_samples_per_bucket = self._create_buckets() - self.total_size = sum(self.num_samples_per_bucket) - self.num_samples = self.total_size // self.num_replicas - - def _create_buckets(self): - buckets = [[] for _ in range(len(self.boundaries) - 1)] - for i in range(len(self.lengths)): - length = self.lengths[i] - idx_bucket = self._bisect(length) - if idx_bucket != -1: - buckets[idx_bucket].append(i) - - for i in range(len(buckets) - 1, 0, -1): - if len(buckets[i]) == 0: - buckets.pop(i) - self.boundaries.pop(i + 1) - - num_samples_per_bucket = [] - for i in range(len(buckets)): - len_bucket = len(buckets[i]) - total_batch_size = self.num_replicas * self.batch_size - rem = (total_batch_size - (len_bucket % total_batch_size)) % total_batch_size - num_samples_per_bucket.append(len_bucket + rem) - return buckets, num_samples_per_bucket - - def __iter__(self): - # deterministically shuffle based on epoch - g = torch.Generator() - g.manual_seed(self.epoch) - - indices = [] - if self.shuffle: - for bucket in self.buckets: - indices.append(torch.randperm(len(bucket), generator=g).tolist()) - else: - for bucket in self.buckets: - indices.append(list(range(len(bucket)))) - - batches = [] - for i in range(len(self.buckets)): - bucket = self.buckets[i] - len_bucket = len(bucket) - if (len_bucket == 0): - continue - ids_bucket = indices[i] - num_samples_bucket = self.num_samples_per_bucket[i] - - # add extra samples to make it evenly divisible - rem = num_samples_bucket - len_bucket - ids_bucket = ids_bucket + ids_bucket * (rem // len_bucket) + ids_bucket[:(rem % len_bucket)] - - # subsample - ids_bucket = ids_bucket[self.rank::self.num_replicas] - - # batching - for j in range(len(ids_bucket) // self.batch_size): - batch = [bucket[idx] for idx in ids_bucket[j * self.batch_size:(j + 1) * self.batch_size]] - batches.append(batch) - - if self.shuffle: - batch_ids = torch.randperm(len(batches), generator=g).tolist() - batches = [batches[i] for i in batch_ids] - self.batches = batches - - assert len(self.batches) * self.batch_size == self.num_samples - return iter(self.batches) - - def _bisect(self, x, lo=0, hi=None): - if hi is None: - hi = len(self.boundaries) - 1 - - if hi > lo: - mid = (hi + lo) // 2 - if self.boundaries[mid] < x and x <= self.boundaries[mid + 1]: - return mid - elif x <= self.boundaries[mid]: - return self._bisect(x, lo, mid) - else: - return self._bisect(x, mid + 1, hi) - else: - return -1 - - def __len__(self): - return self.num_samples // self.batch_size diff --git a/spaces/XzJosh/Bekki-Bert-VITS2/text/english.py b/spaces/XzJosh/Bekki-Bert-VITS2/text/english.py deleted file mode 100644 index 781d0a56cef71f66fc67db51d76538be90d3ddd2..0000000000000000000000000000000000000000 --- a/spaces/XzJosh/Bekki-Bert-VITS2/text/english.py +++ /dev/null @@ -1,138 +0,0 @@ -import pickle -import os -import re -from g2p_en import G2p -from string import punctuation - -from text import symbols - -current_file_path = os.path.dirname(__file__) -CMU_DICT_PATH = os.path.join(current_file_path, 'cmudict.rep') -CACHE_PATH = 
os.path.join(current_file_path, 'cmudict_cache.pickle') -_g2p = G2p() - -arpa = {'AH0', 'S', 'AH1', 'EY2', 'AE2', 'EH0', 'OW2', 'UH0', 'NG', 'B', 'G', 'AY0', 'M', 'AA0', 'F', 'AO0', 'ER2', 'UH1', 'IY1', 'AH2', 'DH', 'IY0', 'EY1', 'IH0', 'K', 'N', 'W', 'IY2', 'T', 'AA1', 'ER1', 'EH2', 'OY0', 'UH2', 'UW1', 'Z', 'AW2', 'AW1', 'V', 'UW2', 'AA2', 'ER', 'AW0', 'UW0', 'R', 'OW1', 'EH1', 'ZH', 'AE0', 'IH2', 'IH', 'Y', 'JH', 'P', 'AY1', 'EY0', 'OY2', 'TH', 'HH', 'D', 'ER0', 'CH', 'AO1', 'AE1', 'AO2', 'OY1', 'AY2', 'IH1', 'OW0', 'L', 'SH'} - - -def post_replace_ph(ph): - rep_map = { - ':': ',', - ';': ',', - ',': ',', - '。': '.', - '!': '!', - '?': '?', - '\n': '.', - "·": ",", - '、': ",", - '...': '…', - 'v': "V" - } - if ph in rep_map.keys(): - ph = rep_map[ph] - if ph in symbols: - return ph - if ph not in symbols: - ph = 'UNK' - return ph - -def read_dict(): - g2p_dict = {} - start_line = 49 - with open(CMU_DICT_PATH) as f: - line = f.readline() - line_index = 1 - while line: - if line_index >= start_line: - line = line.strip() - word_split = line.split(' ') - word = word_split[0] - - syllable_split = word_split[1].split(' - ') - g2p_dict[word] = [] - for syllable in syllable_split: - phone_split = syllable.split(' ') - g2p_dict[word].append(phone_split) - - line_index = line_index + 1 - line = f.readline() - - return g2p_dict - - -def cache_dict(g2p_dict, file_path): - with open(file_path, 'wb') as pickle_file: - pickle.dump(g2p_dict, pickle_file) - - -def get_dict(): - if os.path.exists(CACHE_PATH): - with open(CACHE_PATH, 'rb') as pickle_file: - g2p_dict = pickle.load(pickle_file) - else: - g2p_dict = read_dict() - cache_dict(g2p_dict, CACHE_PATH) - - return g2p_dict - -eng_dict = get_dict() - -def refine_ph(phn): - tone = 0 - if re.search(r'\d$', phn): - tone = int(phn[-1]) + 1 - phn = phn[:-1] - return phn.lower(), tone - -def refine_syllables(syllables): - tones = [] - phonemes = [] - for phn_list in syllables: - for i in range(len(phn_list)): - phn = phn_list[i] - phn, tone = refine_ph(phn) - phonemes.append(phn) - tones.append(tone) - return phonemes, tones - - -def text_normalize(text): - # todo: eng text normalize - return text - -def g2p(text): - - phones = [] - tones = [] - words = re.split(r"([,;.\-\?\!\s+])", text) - for w in words: - if w.upper() in eng_dict: - phns, tns = refine_syllables(eng_dict[w.upper()]) - phones += phns - tones += tns - else: - phone_list = list(filter(lambda p: p != " ", _g2p(w))) - for ph in phone_list: - if ph in arpa: - ph, tn = refine_ph(ph) - phones.append(ph) - tones.append(tn) - else: - phones.append(ph) - tones.append(0) - # todo: implement word2ph - word2ph = [1 for i in phones] - - phones = [post_replace_ph(i) for i in phones] - return phones, tones, word2ph - -if __name__ == "__main__": - # print(get_dict()) - # print(eng_word_to_phoneme("hello")) - print(g2p("In this paper, we propose 1 DSPGAN, a GAN-based universal vocoder.")) - # all_phones = set() - # for k, syllables in eng_dict.items(): - # for group in syllables: - # for ph in group: - # all_phones.add(ph) - # print(all_phones) \ No newline at end of file diff --git a/spaces/XzJosh/Carol-Bert-VITS2/train_ms.py b/spaces/XzJosh/Carol-Bert-VITS2/train_ms.py deleted file mode 100644 index 5d109003d40497ea4493e7c73f47c1eb7370a81e..0000000000000000000000000000000000000000 --- a/spaces/XzJosh/Carol-Bert-VITS2/train_ms.py +++ /dev/null @@ -1,402 +0,0 @@ -import os -import json -import argparse -import itertools -import math -import torch -import shutil -from torch import nn, optim -from torch.nn 
import functional as F -from torch.utils.data import DataLoader -from torch.utils.tensorboard import SummaryWriter -import torch.multiprocessing as mp -import torch.distributed as dist -from torch.nn.parallel import DistributedDataParallel as DDP -from torch.cuda.amp import autocast, GradScaler -from tqdm import tqdm -import logging -logging.getLogger('numba').setLevel(logging.WARNING) -import commons -import utils -from data_utils import ( - TextAudioSpeakerLoader, - TextAudioSpeakerCollate, - DistributedBucketSampler -) -from models import ( - SynthesizerTrn, - MultiPeriodDiscriminator, - DurationDiscriminator, -) -from losses import ( - generator_loss, - discriminator_loss, - feature_loss, - kl_loss -) -from mel_processing import mel_spectrogram_torch, spec_to_mel_torch -from text.symbols import symbols - -torch.backends.cudnn.benchmark = True -torch.backends.cuda.matmul.allow_tf32 = True -torch.backends.cudnn.allow_tf32 = True -torch.set_float32_matmul_precision('medium') -global_step = 0 - - -def main(): - """Assume Single Node Multi GPUs Training Only""" - assert torch.cuda.is_available(), "CPU training is not allowed." - - n_gpus = torch.cuda.device_count() - os.environ['MASTER_ADDR'] = 'localhost' - os.environ['MASTER_PORT'] = '65280' - - hps = utils.get_hparams() - if not hps.cont: - shutil.copy('./pretrained_models/D_0.pth','./logs/OUTPUT_MODEL/D_0.pth') - shutil.copy('./pretrained_models/G_0.pth','./logs/OUTPUT_MODEL/G_0.pth') - shutil.copy('./pretrained_models/DUR_0.pth','./logs/OUTPUT_MODEL/DUR_0.pth') - mp.spawn(run, nprocs=n_gpus, args=(n_gpus, hps,)) - - -def run(rank, n_gpus, hps): - global global_step - if rank == 0: - logger = utils.get_logger(hps.model_dir) - logger.info(hps) - utils.check_git_hash(hps.model_dir) - writer = SummaryWriter(log_dir=hps.model_dir) - writer_eval = SummaryWriter(log_dir=os.path.join(hps.model_dir, "eval")) - - dist.init_process_group(backend= 'gloo' if os.name == 'nt' else 'nccl', init_method='env://', world_size=n_gpus, rank=rank) - torch.manual_seed(hps.train.seed) - torch.cuda.set_device(rank) - - train_dataset = TextAudioSpeakerLoader(hps.data.training_files, hps.data) - train_sampler = DistributedBucketSampler( - train_dataset, - hps.train.batch_size, - [32, 300, 400, 500, 600, 700, 800, 900, 1000], - num_replicas=n_gpus, - rank=rank, - shuffle=True) - collate_fn = TextAudioSpeakerCollate() - train_loader = DataLoader(train_dataset, num_workers=2, shuffle=False, pin_memory=True, - collate_fn=collate_fn, batch_sampler=train_sampler) - if rank == 0: - eval_dataset = TextAudioSpeakerLoader(hps.data.validation_files, hps.data) - eval_loader = DataLoader(eval_dataset, num_workers=0, shuffle=False, - batch_size=1, pin_memory=True, - drop_last=False, collate_fn=collate_fn) - if "use_noise_scaled_mas" in hps.model.keys() and hps.model.use_noise_scaled_mas == True: - print("Using noise scaled MAS for VITS2") - use_noise_scaled_mas = True - mas_noise_scale_initial = 0.01 - noise_scale_delta = 2e-6 - else: - print("Using normal MAS for VITS1") - use_noise_scaled_mas = False - mas_noise_scale_initial = 0.0 - noise_scale_delta = 0.0 - if "use_duration_discriminator" in hps.model.keys() and hps.model.use_duration_discriminator == True: - print("Using duration discriminator for VITS2") - use_duration_discriminator = True - net_dur_disc = DurationDiscriminator( - hps.model.hidden_channels, - hps.model.hidden_channels, - 3, - 0.1, - gin_channels=hps.model.gin_channels if hps.data.n_speakers != 0 else 0, - ).cuda(rank) - if "use_spk_conditioned_encoder" 
in hps.model.keys() and hps.model.use_spk_conditioned_encoder == True: - if hps.data.n_speakers == 0: - raise ValueError("n_speakers must be > 0 when using spk conditioned encoder to train multi-speaker model") - use_spk_conditioned_encoder = True - else: - print("Using normal encoder for VITS1") - use_spk_conditioned_encoder = False - - net_g = SynthesizerTrn( - len(symbols), - hps.data.filter_length // 2 + 1, - hps.train.segment_size // hps.data.hop_length, - n_speakers=hps.data.n_speakers, - mas_noise_scale_initial = mas_noise_scale_initial, - noise_scale_delta = noise_scale_delta, - **hps.model).cuda(rank) - - freeze_enc = getattr(hps.model, "freeze_enc", False) - if freeze_enc: - print("freeze encoder !!!") - for param in net_g.enc_p.parameters(): - param.requires_grad = False - - net_d = MultiPeriodDiscriminator(hps.model.use_spectral_norm).cuda(rank) - optim_g = torch.optim.AdamW( - filter(lambda p: p.requires_grad, net_g.parameters()), - hps.train.learning_rate, - betas=hps.train.betas, - eps=hps.train.eps) - optim_d = torch.optim.AdamW( - net_d.parameters(), - hps.train.learning_rate, - betas=hps.train.betas, - eps=hps.train.eps) - if net_dur_disc is not None: - optim_dur_disc = torch.optim.AdamW( - net_dur_disc.parameters(), - hps.train.learning_rate, - betas=hps.train.betas, - eps=hps.train.eps) - else: - optim_dur_disc = None - net_g = DDP(net_g, device_ids=[rank], find_unused_parameters=True) - net_d = DDP(net_d, device_ids=[rank], find_unused_parameters=True) - if net_dur_disc is not None: - net_dur_disc = DDP(net_dur_disc, device_ids=[rank], find_unused_parameters=True) - - pretrain_dir = None - if pretrain_dir is None: - try: - if net_dur_disc is not None: - _, optim_dur_disc, _, epoch_str = utils.load_checkpoint(utils.latest_checkpoint_path(hps.model_dir, "DUR_*.pth"), net_dur_disc, optim_dur_disc, skip_optimizer=not hps.cont) - _, optim_g, _, epoch_str = utils.load_checkpoint(utils.latest_checkpoint_path(hps.model_dir, "G_*.pth"), net_g, - optim_g, skip_optimizer=not hps.cont) - _, optim_d, _, epoch_str = utils.load_checkpoint(utils.latest_checkpoint_path(hps.model_dir, "D_*.pth"), net_d, - optim_d, skip_optimizer=not hps.cont) - - epoch_str = max(epoch_str, 1) - global_step = (epoch_str - 1) * len(train_loader) - except Exception as e: - print(e) - epoch_str = 1 - global_step = 0 - else: - _, _, _, epoch_str = utils.load_checkpoint(utils.latest_checkpoint_path(pretrain_dir, "G_*.pth"), net_g, - optim_g, True) - _, _, _, epoch_str = utils.load_checkpoint(utils.latest_checkpoint_path(pretrain_dir, "D_*.pth"), net_d, - optim_d, True) - - - - scheduler_g = torch.optim.lr_scheduler.ExponentialLR(optim_g, gamma=hps.train.lr_decay, last_epoch=epoch_str - 2) - scheduler_d = torch.optim.lr_scheduler.ExponentialLR(optim_d, gamma=hps.train.lr_decay, last_epoch=epoch_str - 2) - if net_dur_disc is not None: - scheduler_dur_disc = torch.optim.lr_scheduler.ExponentialLR(optim_dur_disc, gamma=hps.train.lr_decay, last_epoch=epoch_str-2) - else: - scheduler_dur_disc = None - scaler = GradScaler(enabled=hps.train.fp16_run) - - for epoch in range(epoch_str, hps.train.epochs + 1): - if rank == 0: - train_and_evaluate(rank, epoch, hps, [net_g, net_d, net_dur_disc], [optim_g, optim_d, optim_dur_disc], [scheduler_g, scheduler_d, scheduler_dur_disc], scaler, [train_loader, eval_loader], logger, [writer, writer_eval]) - else: - train_and_evaluate(rank, epoch, hps, [net_g, net_d, net_dur_disc], [optim_g, optim_d, optim_dur_disc], [scheduler_g, scheduler_d, scheduler_dur_disc], scaler, 
[train_loader, None], None, None) - scheduler_g.step() - scheduler_d.step() - if net_dur_disc is not None: - scheduler_dur_disc.step() - - -def train_and_evaluate(rank, epoch, hps, nets, optims, schedulers, scaler, loaders, logger, writers): - net_g, net_d, net_dur_disc = nets - optim_g, optim_d, optim_dur_disc = optims - scheduler_g, scheduler_d, scheduler_dur_disc = schedulers - train_loader, eval_loader = loaders - if writers is not None: - writer, writer_eval = writers - - train_loader.batch_sampler.set_epoch(epoch) - global global_step - - net_g.train() - net_d.train() - if net_dur_disc is not None: - net_dur_disc.train() - for batch_idx, (x, x_lengths, spec, spec_lengths, y, y_lengths, speakers, tone, language, bert) in tqdm(enumerate(train_loader)): - if net_g.module.use_noise_scaled_mas: - current_mas_noise_scale = net_g.module.mas_noise_scale_initial - net_g.module.noise_scale_delta * global_step - net_g.module.current_mas_noise_scale = max(current_mas_noise_scale, 0.0) - x, x_lengths = x.cuda(rank, non_blocking=True), x_lengths.cuda(rank, non_blocking=True) - spec, spec_lengths = spec.cuda(rank, non_blocking=True), spec_lengths.cuda(rank, non_blocking=True) - y, y_lengths = y.cuda(rank, non_blocking=True), y_lengths.cuda(rank, non_blocking=True) - speakers = speakers.cuda(rank, non_blocking=True) - tone = tone.cuda(rank, non_blocking=True) - language = language.cuda(rank, non_blocking=True) - bert = bert.cuda(rank, non_blocking=True) - - with autocast(enabled=hps.train.fp16_run): - y_hat, l_length, attn, ids_slice, x_mask, z_mask, \ - (z, z_p, m_p, logs_p, m_q, logs_q), (hidden_x, logw, logw_) = net_g(x, x_lengths, spec, spec_lengths, speakers, tone, language, bert) - mel = spec_to_mel_torch( - spec, - hps.data.filter_length, - hps.data.n_mel_channels, - hps.data.sampling_rate, - hps.data.mel_fmin, - hps.data.mel_fmax) - y_mel = commons.slice_segments(mel, ids_slice, hps.train.segment_size // hps.data.hop_length) - y_hat_mel = mel_spectrogram_torch( - y_hat.squeeze(1), - hps.data.filter_length, - hps.data.n_mel_channels, - hps.data.sampling_rate, - hps.data.hop_length, - hps.data.win_length, - hps.data.mel_fmin, - hps.data.mel_fmax - ) - - y = commons.slice_segments(y, ids_slice * hps.data.hop_length, hps.train.segment_size) # slice - - # Discriminator - y_d_hat_r, y_d_hat_g, _, _ = net_d(y, y_hat.detach()) - with autocast(enabled=False): - loss_disc, losses_disc_r, losses_disc_g = discriminator_loss(y_d_hat_r, y_d_hat_g) - loss_disc_all = loss_disc - if net_dur_disc is not None: - y_dur_hat_r, y_dur_hat_g = net_dur_disc(hidden_x.detach(), x_mask.detach(), logw.detach(), logw_.detach()) - with autocast(enabled=False): - # TODO: I think need to mean using the mask, but for now, just mean all - loss_dur_disc, losses_dur_disc_r, losses_dur_disc_g = discriminator_loss(y_dur_hat_r, y_dur_hat_g) - loss_dur_disc_all = loss_dur_disc - optim_dur_disc.zero_grad() - scaler.scale(loss_dur_disc_all).backward() - scaler.unscale_(optim_dur_disc) - grad_norm_dur_disc = commons.clip_grad_value_(net_dur_disc.parameters(), None) - scaler.step(optim_dur_disc) - - optim_d.zero_grad() - scaler.scale(loss_disc_all).backward() - scaler.unscale_(optim_d) - grad_norm_d = commons.clip_grad_value_(net_d.parameters(), None) - scaler.step(optim_d) - - with autocast(enabled=hps.train.fp16_run): - # Generator - y_d_hat_r, y_d_hat_g, fmap_r, fmap_g = net_d(y, y_hat) - if net_dur_disc is not None: - y_dur_hat_r, y_dur_hat_g = net_dur_disc(hidden_x, x_mask, logw, logw_) - with autocast(enabled=False): - loss_dur = 
torch.sum(l_length.float()) - loss_mel = F.l1_loss(y_mel, y_hat_mel) * hps.train.c_mel - loss_kl = kl_loss(z_p, logs_q, m_p, logs_p, z_mask) * hps.train.c_kl - - loss_fm = feature_loss(fmap_r, fmap_g) - loss_gen, losses_gen = generator_loss(y_d_hat_g) - loss_gen_all = loss_gen + loss_fm + loss_mel + loss_dur + loss_kl - if net_dur_disc is not None: - loss_dur_gen, losses_dur_gen = generator_loss(y_dur_hat_g) - loss_gen_all += loss_dur_gen - optim_g.zero_grad() - scaler.scale(loss_gen_all).backward() - scaler.unscale_(optim_g) - grad_norm_g = commons.clip_grad_value_(net_g.parameters(), None) - scaler.step(optim_g) - scaler.update() - - if rank == 0: - if global_step % hps.train.log_interval == 0: - lr = optim_g.param_groups[0]['lr'] - losses = [loss_disc, loss_gen, loss_fm, loss_mel, loss_dur, loss_kl] - logger.info('Train Epoch: {} [{:.0f}%]'.format( - epoch, - 100. * batch_idx / len(train_loader))) - logger.info([x.item() for x in losses] + [global_step, lr]) - - scalar_dict = {"loss/g/total": loss_gen_all, "loss/d/total": loss_disc_all, "learning_rate": lr, - "grad_norm_d": grad_norm_d, "grad_norm_g": grad_norm_g} - scalar_dict.update( - {"loss/g/fm": loss_fm, "loss/g/mel": loss_mel, "loss/g/dur": loss_dur, "loss/g/kl": loss_kl}) - scalar_dict.update({"loss/g/{}".format(i): v for i, v in enumerate(losses_gen)}) - scalar_dict.update({"loss/d_r/{}".format(i): v for i, v in enumerate(losses_disc_r)}) - scalar_dict.update({"loss/d_g/{}".format(i): v for i, v in enumerate(losses_disc_g)}) - - image_dict = { - "slice/mel_org": utils.plot_spectrogram_to_numpy(y_mel[0].data.cpu().numpy()), - "slice/mel_gen": utils.plot_spectrogram_to_numpy(y_hat_mel[0].data.cpu().numpy()), - "all/mel": utils.plot_spectrogram_to_numpy(mel[0].data.cpu().numpy()), - "all/attn": utils.plot_alignment_to_numpy(attn[0, 0].data.cpu().numpy()) - } - utils.summarize( - writer=writer, - global_step=global_step, - images=image_dict, - scalars=scalar_dict) - - if global_step % hps.train.eval_interval == 0: - evaluate(hps, net_g, eval_loader, writer_eval) - utils.save_checkpoint(net_g, optim_g, hps.train.learning_rate, epoch, - os.path.join(hps.model_dir, "G_{}.pth".format(global_step))) - utils.save_checkpoint(net_d, optim_d, hps.train.learning_rate, epoch, - os.path.join(hps.model_dir, "D_{}.pth".format(global_step))) - if net_dur_disc is not None: - utils.save_checkpoint(net_dur_disc, optim_dur_disc, hps.train.learning_rate, epoch, os.path.join(hps.model_dir, "DUR_{}.pth".format(global_step))) - keep_ckpts = getattr(hps.train, 'keep_ckpts', 5) - if keep_ckpts > 0: - utils.clean_checkpoints(path_to_models=hps.model_dir, n_ckpts_to_keep=keep_ckpts, sort_by_time=True) - - - global_step += 1 - - if rank == 0: - logger.info('====> Epoch: {}'.format(epoch)) - - - -def evaluate(hps, generator, eval_loader, writer_eval): - generator.eval() - image_dict = {} - audio_dict = {} - print("Evaluating ...") - with torch.no_grad(): - for batch_idx, (x, x_lengths, spec, spec_lengths, y, y_lengths, speakers, tone, language, bert) in enumerate(eval_loader): - x, x_lengths = x.cuda(), x_lengths.cuda() - spec, spec_lengths = spec.cuda(), spec_lengths.cuda() - y, y_lengths = y.cuda(), y_lengths.cuda() - speakers = speakers.cuda() - bert = bert.cuda() - tone = tone.cuda() - language = language.cuda() - for use_sdp in [True, False]: - y_hat, attn, mask, *_ = generator.module.infer(x, x_lengths, speakers, tone, language, bert, y=spec, max_len=1000, sdp_ratio=0.0 if not use_sdp else 1.0) - y_hat_lengths = mask.sum([1, 2]).long() * 
hps.data.hop_length - - mel = spec_to_mel_torch( - spec, - hps.data.filter_length, - hps.data.n_mel_channels, - hps.data.sampling_rate, - hps.data.mel_fmin, - hps.data.mel_fmax) - y_hat_mel = mel_spectrogram_torch( - y_hat.squeeze(1).float(), - hps.data.filter_length, - hps.data.n_mel_channels, - hps.data.sampling_rate, - hps.data.hop_length, - hps.data.win_length, - hps.data.mel_fmin, - hps.data.mel_fmax - ) - image_dict.update({ - f"gen/mel_{batch_idx}": utils.plot_spectrogram_to_numpy(y_hat_mel[0].cpu().numpy()) - }) - audio_dict.update({ - f"gen/audio_{batch_idx}_{use_sdp}": y_hat[0, :, :y_hat_lengths[0]] - }) - image_dict.update({f"gt/mel_{batch_idx}": utils.plot_spectrogram_to_numpy(mel[0].cpu().numpy())}) - audio_dict.update({f"gt/audio_{batch_idx}": y[0, :, :y_lengths[0]]}) - - utils.summarize( - writer=writer_eval, - global_step=global_step, - images=image_dict, - audios=audio_dict, - audio_sampling_rate=hps.data.sampling_rate - ) - generator.train() - -if __name__ == "__main__": - main() diff --git a/spaces/XzJosh/LittleTaffy-Bert-VITS2/utils.py b/spaces/XzJosh/LittleTaffy-Bert-VITS2/utils.py deleted file mode 100644 index c6aa6cfc64c33e2eed33e9845239e831fc1c4a1a..0000000000000000000000000000000000000000 --- a/spaces/XzJosh/LittleTaffy-Bert-VITS2/utils.py +++ /dev/null @@ -1,293 +0,0 @@ -import os -import glob -import sys -import argparse -import logging -import json -import subprocess -import numpy as np -from scipy.io.wavfile import read -import torch - -MATPLOTLIB_FLAG = False - -logging.basicConfig(stream=sys.stdout, level=logging.DEBUG) -logger = logging - - -def load_checkpoint(checkpoint_path, model, optimizer=None, skip_optimizer=False): - assert os.path.isfile(checkpoint_path) - checkpoint_dict = torch.load(checkpoint_path, map_location='cpu') - iteration = checkpoint_dict['iteration'] - learning_rate = checkpoint_dict['learning_rate'] - if optimizer is not None and not skip_optimizer and checkpoint_dict['optimizer'] is not None: - optimizer.load_state_dict(checkpoint_dict['optimizer']) - elif optimizer is None and not skip_optimizer: - #else: #Disable this line if Infer ,and enable the line upper - new_opt_dict = optimizer.state_dict() - new_opt_dict_params = new_opt_dict['param_groups'][0]['params'] - new_opt_dict['param_groups'] = checkpoint_dict['optimizer']['param_groups'] - new_opt_dict['param_groups'][0]['params'] = new_opt_dict_params - optimizer.load_state_dict(new_opt_dict) - saved_state_dict = checkpoint_dict['model'] - if hasattr(model, 'module'): - state_dict = model.module.state_dict() - else: - state_dict = model.state_dict() - new_state_dict = {} - for k, v in state_dict.items(): - try: - #assert "emb_g" not in k - # print("load", k) - new_state_dict[k] = saved_state_dict[k] - assert saved_state_dict[k].shape == v.shape, (saved_state_dict[k].shape, v.shape) - except: - print("error, %s is not in the checkpoint" % k) - new_state_dict[k] = v - if hasattr(model, 'module'): - model.module.load_state_dict(new_state_dict, strict=False) - else: - model.load_state_dict(new_state_dict, strict=False) - print("load ") - logger.info("Loaded checkpoint '{}' (iteration {})".format( - checkpoint_path, iteration)) - return model, optimizer, learning_rate, iteration - - -def save_checkpoint(model, optimizer, learning_rate, iteration, checkpoint_path): - logger.info("Saving model and optimizer state at iteration {} to {}".format( - iteration, checkpoint_path)) - if hasattr(model, 'module'): - state_dict = model.module.state_dict() - else: - state_dict = model.state_dict() 
- torch.save({'model': state_dict, - 'iteration': iteration, - 'optimizer': optimizer.state_dict(), - 'learning_rate': learning_rate}, checkpoint_path) - - -def summarize(writer, global_step, scalars={}, histograms={}, images={}, audios={}, audio_sampling_rate=22050): - for k, v in scalars.items(): - writer.add_scalar(k, v, global_step) - for k, v in histograms.items(): - writer.add_histogram(k, v, global_step) - for k, v in images.items(): - writer.add_image(k, v, global_step, dataformats='HWC') - for k, v in audios.items(): - writer.add_audio(k, v, global_step, audio_sampling_rate) - - -def latest_checkpoint_path(dir_path, regex="G_*.pth"): - f_list = glob.glob(os.path.join(dir_path, regex)) - f_list.sort(key=lambda f: int("".join(filter(str.isdigit, f)))) - x = f_list[-1] - print(x) - return x - - -def plot_spectrogram_to_numpy(spectrogram): - global MATPLOTLIB_FLAG - if not MATPLOTLIB_FLAG: - import matplotlib - matplotlib.use("Agg") - MATPLOTLIB_FLAG = True - mpl_logger = logging.getLogger('matplotlib') - mpl_logger.setLevel(logging.WARNING) - import matplotlib.pylab as plt - import numpy as np - - fig, ax = plt.subplots(figsize=(10, 2)) - im = ax.imshow(spectrogram, aspect="auto", origin="lower", - interpolation='none') - plt.colorbar(im, ax=ax) - plt.xlabel("Frames") - plt.ylabel("Channels") - plt.tight_layout() - - fig.canvas.draw() - data = np.fromstring(fig.canvas.tostring_rgb(), dtype=np.uint8, sep='') - data = data.reshape(fig.canvas.get_width_height()[::-1] + (3,)) - plt.close() - return data - - -def plot_alignment_to_numpy(alignment, info=None): - global MATPLOTLIB_FLAG - if not MATPLOTLIB_FLAG: - import matplotlib - matplotlib.use("Agg") - MATPLOTLIB_FLAG = True - mpl_logger = logging.getLogger('matplotlib') - mpl_logger.setLevel(logging.WARNING) - import matplotlib.pylab as plt - import numpy as np - - fig, ax = plt.subplots(figsize=(6, 4)) - im = ax.imshow(alignment.transpose(), aspect='auto', origin='lower', - interpolation='none') - fig.colorbar(im, ax=ax) - xlabel = 'Decoder timestep' - if info is not None: - xlabel += '\n\n' + info - plt.xlabel(xlabel) - plt.ylabel('Encoder timestep') - plt.tight_layout() - - fig.canvas.draw() - data = np.fromstring(fig.canvas.tostring_rgb(), dtype=np.uint8, sep='') - data = data.reshape(fig.canvas.get_width_height()[::-1] + (3,)) - plt.close() - return data - - -def load_wav_to_torch(full_path): - sampling_rate, data = read(full_path) - return torch.FloatTensor(data.astype(np.float32)), sampling_rate - - -def load_filepaths_and_text(filename, split="|"): - with open(filename, encoding='utf-8') as f: - filepaths_and_text = [line.strip().split(split) for line in f] - return filepaths_and_text - - -def get_hparams(init=True): - parser = argparse.ArgumentParser() - parser.add_argument('-c', '--config', type=str, default="./configs/base.json", - help='JSON file for configuration') - parser.add_argument('-m', '--model', type=str, default="./OUTPUT_MODEL", - help='Model name') - parser.add_argument('--cont', dest='cont', action="store_true", default=False, help="whether to continue training on the latest checkpoint") - - args = parser.parse_args() - model_dir = os.path.join("./logs", args.model) - - if not os.path.exists(model_dir): - os.makedirs(model_dir) - - config_path = args.config - config_save_path = os.path.join(model_dir, "config.json") - if init: - with open(config_path, "r") as f: - data = f.read() - with open(config_save_path, "w") as f: - f.write(data) - else: - with open(config_save_path, "r") as f: - data = f.read() - config 
= json.loads(data) - - hparams = HParams(**config) - hparams.model_dir = model_dir - hparams.cont = args.cont - return hparams - - -def clean_checkpoints(path_to_models='logs/44k/', n_ckpts_to_keep=2, sort_by_time=True): - """Freeing up space by deleting saved ckpts - - Arguments: - path_to_models -- Path to the model directory - n_ckpts_to_keep -- Number of ckpts to keep, excluding G_0.pth and D_0.pth - sort_by_time -- True -> chronologically delete ckpts - False -> lexicographically delete ckpts - """ - import re - ckpts_files = [f for f in os.listdir(path_to_models) if os.path.isfile(os.path.join(path_to_models, f))] - name_key = (lambda _f: int(re.compile('._(\d+)\.pth').match(_f).group(1))) - time_key = (lambda _f: os.path.getmtime(os.path.join(path_to_models, _f))) - sort_key = time_key if sort_by_time else name_key - x_sorted = lambda _x: sorted([f for f in ckpts_files if f.startswith(_x) and not f.endswith('_0.pth')], - key=sort_key) - to_del = [os.path.join(path_to_models, fn) for fn in - (x_sorted('G')[:-n_ckpts_to_keep] + x_sorted('D')[:-n_ckpts_to_keep])] - del_info = lambda fn: logger.info(f".. Free up space by deleting ckpt {fn}") - del_routine = lambda x: [os.remove(x), del_info(x)] - rs = [del_routine(fn) for fn in to_del] - -def get_hparams_from_dir(model_dir): - config_save_path = os.path.join(model_dir, "config.json") - with open(config_save_path, "r") as f: - data = f.read() - config = json.loads(data) - - hparams = HParams(**config) - hparams.model_dir = model_dir - return hparams - - -def get_hparams_from_file(config_path): - with open(config_path, "r") as f: - data = f.read() - config = json.loads(data) - - hparams = HParams(**config) - return hparams - - -def check_git_hash(model_dir): - source_dir = os.path.dirname(os.path.realpath(__file__)) - if not os.path.exists(os.path.join(source_dir, ".git")): - logger.warn("{} is not a git repository, therefore hash value comparison will be ignored.".format( - source_dir - )) - return - - cur_hash = subprocess.getoutput("git rev-parse HEAD") - - path = os.path.join(model_dir, "githash") - if os.path.exists(path): - saved_hash = open(path).read() - if saved_hash != cur_hash: - logger.warn("git hash values are different. 
{}(saved) != {}(current)".format( - saved_hash[:8], cur_hash[:8])) - else: - open(path, "w").write(cur_hash) - - -def get_logger(model_dir, filename="train.log"): - global logger - logger = logging.getLogger(os.path.basename(model_dir)) - logger.setLevel(logging.DEBUG) - - formatter = logging.Formatter("%(asctime)s\t%(name)s\t%(levelname)s\t%(message)s") - if not os.path.exists(model_dir): - os.makedirs(model_dir) - h = logging.FileHandler(os.path.join(model_dir, filename)) - h.setLevel(logging.DEBUG) - h.setFormatter(formatter) - logger.addHandler(h) - return logger - - -class HParams(): - def __init__(self, **kwargs): - for k, v in kwargs.items(): - if type(v) == dict: - v = HParams(**v) - self[k] = v - - def keys(self): - return self.__dict__.keys() - - def items(self): - return self.__dict__.items() - - def values(self): - return self.__dict__.values() - - def __len__(self): - return len(self.__dict__) - - def __getitem__(self, key): - return getattr(self, key) - - def __setitem__(self, key, value): - return setattr(self, key, value) - - def __contains__(self, key): - return key in self.__dict__ - - def __repr__(self): - return self.__dict__.__repr__() diff --git a/spaces/Yassine/Stego/stc_embed_c.h b/spaces/Yassine/Stego/stc_embed_c.h deleted file mode 100644 index 15f690baec70682118843e9478c7d62564e77946..0000000000000000000000000000000000000000 --- a/spaces/Yassine/Stego/stc_embed_c.h +++ /dev/null @@ -1,22 +0,0 @@ -#ifndef STC_EMBED_C_H -#define STC_EMBED_C_H - -#include "common.h" -/* Inputs: - cover - the binary cover vector - coverlength - length of the cover vector - message - the binary message to be hidden - messagelength - length of the message - profile - the vector of distortion weights (either double if usedouble = true, or u8 id usedouble = false) - usedouble - true = use double precision weight, false = use u8 weights - stego - pointer to an array of length 'coverlength' to receive the stego message; this parameter can be NULL - constr_height - the constraint height of the matrix; the higher, the better the efficiency but the greater the embedding time - -Return value: - On success, the function returns the total distortion introduced by the embedding. - On error, the function returns -1. -*/ - -double stc_embed(const u8 *cover, int coverlength, const u8 *message, int messagelength, const void *profile, bool usedouble, u8 *stego, int constr_height = 10); - -#endif diff --git a/spaces/YeOldHermit/Super-Resolution-Anime-Diffusion/diffusers/schedulers/scheduling_euler_discrete.py b/spaces/YeOldHermit/Super-Resolution-Anime-Diffusion/diffusers/schedulers/scheduling_euler_discrete.py deleted file mode 100644 index 9cb4a1eaa565acbf51970911248e1bf0d604c979..0000000000000000000000000000000000000000 --- a/spaces/YeOldHermit/Super-Resolution-Anime-Diffusion/diffusers/schedulers/scheduling_euler_discrete.py +++ /dev/null @@ -1,287 +0,0 @@ -# Copyright 2022 Katherine Crowson and The HuggingFace Team. All rights reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. 
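Before moving on to the scheduler sources, here is a minimal usage sketch (not part of the original repository) for the hyper-parameter helpers defined in the utility module above. It assumes that module is importable as `utils`, which is how the Gradio app later in this diff consumes it, and that the config follows the usual VITS layout with nested "data"/"train" sections; the literal values below are made up for illustration.

# Minimal sketch: how HParams / get_hparams_from_file are typically used.
import utils  # assumed name of the utility module shown above

# Nested dicts are wrapped recursively into HParams objects.
hps = utils.HParams(**{
    "data": {"sampling_rate": 22050, "filter_length": 1024},
    "train": {"batch_size": 16},
})
print(hps.data.sampling_rate)       # attribute access -> 22050
print(hps["train"]["batch_size"])   # item access -> 16
print("model" in hps)               # __contains__ -> False

# Reading a real config goes through get_hparams_from_file, which wraps json.loads:
# hps = utils.get_hparams_from_file("./configs/base.json")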
- -from dataclasses import dataclass -from typing import List, Optional, Tuple, Union - -import numpy as np -import torch - -from ..configuration_utils import ConfigMixin, register_to_config -from ..utils import _COMPATIBLE_STABLE_DIFFUSION_SCHEDULERS, BaseOutput, logging -from .scheduling_utils import SchedulerMixin - - -logger = logging.get_logger(__name__) # pylint: disable=invalid-name - - -@dataclass -# Copied from diffusers.schedulers.scheduling_ddpm.DDPMSchedulerOutput with DDPM->EulerDiscrete -class EulerDiscreteSchedulerOutput(BaseOutput): - """ - Output class for the scheduler's step function output. - - Args: - prev_sample (`torch.FloatTensor` of shape `(batch_size, num_channels, height, width)` for images): - Computed sample (x_{t-1}) of previous timestep. `prev_sample` should be used as next model input in the - denoising loop. - pred_original_sample (`torch.FloatTensor` of shape `(batch_size, num_channels, height, width)` for images): - The predicted denoised sample (x_{0}) based on the model output from the current timestep. - `pred_original_sample` can be used to preview progress or for guidance. - """ - - prev_sample: torch.FloatTensor - pred_original_sample: Optional[torch.FloatTensor] = None - - -class EulerDiscreteScheduler(SchedulerMixin, ConfigMixin): - """ - Euler scheduler (Algorithm 2) from Karras et al. (2022) https://arxiv.org/abs/2206.00364. . Based on the original - k-diffusion implementation by Katherine Crowson: - https://github.com/crowsonkb/k-diffusion/blob/481677d114f6ea445aa009cf5bd7a9cdee909e47/k_diffusion/sampling.py#L51 - - [`~ConfigMixin`] takes care of storing all config attributes that are passed in the scheduler's `__init__` - function, such as `num_train_timesteps`. They can be accessed via `scheduler.config.num_train_timesteps`. - [`SchedulerMixin`] provides general loading and saving functionality via the [`SchedulerMixin.save_pretrained`] and - [`~SchedulerMixin.from_pretrained`] functions. - - Args: - num_train_timesteps (`int`): number of diffusion steps used to train the model. - beta_start (`float`): the starting `beta` value of inference. - beta_end (`float`): the final `beta` value. - beta_schedule (`str`): - the beta schedule, a mapping from a beta range to a sequence of betas for stepping the model. Choose from - `linear` or `scaled_linear`. - trained_betas (`np.ndarray`, optional): - option to pass an array of betas directly to the constructor to bypass `beta_start`, `beta_end` etc. - prediction_type (`str`, default `epsilon`, optional): - prediction type of the scheduler function, one of `epsilon` (predicting the noise of the diffusion - process), `sample` (directly predicting the noisy sample`) or `v_prediction` (see section 2.4 - https://imagen.research.google/video/paper.pdf) - - """ - - _compatibles = _COMPATIBLE_STABLE_DIFFUSION_SCHEDULERS.copy() - order = 1 - - @register_to_config - def __init__( - self, - num_train_timesteps: int = 1000, - beta_start: float = 0.0001, - beta_end: float = 0.02, - beta_schedule: str = "linear", - trained_betas: Optional[Union[np.ndarray, List[float]]] = None, - prediction_type: str = "epsilon", - ): - if trained_betas is not None: - self.betas = torch.tensor(trained_betas, dtype=torch.float32) - elif beta_schedule == "linear": - self.betas = torch.linspace(beta_start, beta_end, num_train_timesteps, dtype=torch.float32) - elif beta_schedule == "scaled_linear": - # this schedule is very specific to the latent diffusion model. 
- self.betas = ( - torch.linspace(beta_start**0.5, beta_end**0.5, num_train_timesteps, dtype=torch.float32) ** 2 - ) - else: - raise NotImplementedError(f"{beta_schedule} does is not implemented for {self.__class__}") - - self.alphas = 1.0 - self.betas - self.alphas_cumprod = torch.cumprod(self.alphas, dim=0) - - sigmas = np.array(((1 - self.alphas_cumprod) / self.alphas_cumprod) ** 0.5) - sigmas = np.concatenate([sigmas[::-1], [0.0]]).astype(np.float32) - self.sigmas = torch.from_numpy(sigmas) - - # standard deviation of the initial noise distribution - self.init_noise_sigma = self.sigmas.max() - - # setable values - self.num_inference_steps = None - timesteps = np.linspace(0, num_train_timesteps - 1, num_train_timesteps, dtype=float)[::-1].copy() - self.timesteps = torch.from_numpy(timesteps) - self.is_scale_input_called = False - - def scale_model_input( - self, sample: torch.FloatTensor, timestep: Union[float, torch.FloatTensor] - ) -> torch.FloatTensor: - """ - Scales the denoising model input by `(sigma**2 + 1) ** 0.5` to match the Euler algorithm. - - Args: - sample (`torch.FloatTensor`): input sample - timestep (`float` or `torch.FloatTensor`): the current timestep in the diffusion chain - - Returns: - `torch.FloatTensor`: scaled input sample - """ - if isinstance(timestep, torch.Tensor): - timestep = timestep.to(self.timesteps.device) - step_index = (self.timesteps == timestep).nonzero().item() - sigma = self.sigmas[step_index] - sample = sample / ((sigma**2 + 1) ** 0.5) - self.is_scale_input_called = True - return sample - - def set_timesteps(self, num_inference_steps: int, device: Union[str, torch.device] = None): - """ - Sets the timesteps used for the diffusion chain. Supporting function to be run before inference. - - Args: - num_inference_steps (`int`): - the number of diffusion steps used when generating samples with a pre-trained model. - device (`str` or `torch.device`, optional): - the device to which the timesteps should be moved to. If `None`, the timesteps are not moved. - """ - self.num_inference_steps = num_inference_steps - - timesteps = np.linspace(0, self.config.num_train_timesteps - 1, num_inference_steps, dtype=float)[::-1].copy() - sigmas = np.array(((1 - self.alphas_cumprod) / self.alphas_cumprod) ** 0.5) - sigmas = np.interp(timesteps, np.arange(0, len(sigmas)), sigmas) - sigmas = np.concatenate([sigmas, [0.0]]).astype(np.float32) - self.sigmas = torch.from_numpy(sigmas).to(device=device) - if str(device).startswith("mps"): - # mps does not support float64 - self.timesteps = torch.from_numpy(timesteps).to(device, dtype=torch.float32) - else: - self.timesteps = torch.from_numpy(timesteps).to(device=device) - - def step( - self, - model_output: torch.FloatTensor, - timestep: Union[float, torch.FloatTensor], - sample: torch.FloatTensor, - s_churn: float = 0.0, - s_tmin: float = 0.0, - s_tmax: float = float("inf"), - s_noise: float = 1.0, - generator: Optional[torch.Generator] = None, - return_dict: bool = True, - ) -> Union[EulerDiscreteSchedulerOutput, Tuple]: - """ - Predict the sample at the previous timestep by reversing the SDE. Core function to propagate the diffusion - process from the learned model outputs (most often the predicted noise). - - Args: - model_output (`torch.FloatTensor`): direct output from learned diffusion model. - timestep (`float`): current timestep in the diffusion chain. - sample (`torch.FloatTensor`): - current instance of sample being created by diffusion process. 
- s_churn (`float`) - s_tmin (`float`) - s_tmax (`float`) - s_noise (`float`) - generator (`torch.Generator`, optional): Random number generator. - return_dict (`bool`): option for returning tuple rather than EulerDiscreteSchedulerOutput class - - Returns: - [`~schedulers.scheduling_utils.EulerDiscreteSchedulerOutput`] or `tuple`: - [`~schedulers.scheduling_utils.EulerDiscreteSchedulerOutput`] if `return_dict` is True, otherwise a - `tuple`. When returning a tuple, the first element is the sample tensor. - - """ - - if ( - isinstance(timestep, int) - or isinstance(timestep, torch.IntTensor) - or isinstance(timestep, torch.LongTensor) - ): - raise ValueError( - "Passing integer indices (e.g. from `enumerate(timesteps)`) as timesteps to" - " `EulerDiscreteScheduler.step()` is not supported. Make sure to pass" - " one of the `scheduler.timesteps` as a timestep.", - ) - - if not self.is_scale_input_called: - logger.warning( - "The `scale_model_input` function should be called before `step` to ensure correct denoising. " - "See `StableDiffusionPipeline` for a usage example." - ) - - if isinstance(timestep, torch.Tensor): - timestep = timestep.to(self.timesteps.device) - - step_index = (self.timesteps == timestep).nonzero().item() - sigma = self.sigmas[step_index] - - gamma = min(s_churn / (len(self.sigmas) - 1), 2**0.5 - 1) if s_tmin <= sigma <= s_tmax else 0.0 - - device = model_output.device - if device.type == "mps": - # randn does not work reproducibly on mps - noise = torch.randn(model_output.shape, dtype=model_output.dtype, device="cpu", generator=generator).to( - device - ) - else: - noise = torch.randn(model_output.shape, dtype=model_output.dtype, device=device, generator=generator).to( - device - ) - - eps = noise * s_noise - sigma_hat = sigma * (gamma + 1) - - if gamma > 0: - sample = sample + eps * (sigma_hat**2 - sigma**2) ** 0.5 - - # 1. compute predicted original sample (x_0) from sigma-scaled predicted noise - if self.config.prediction_type == "epsilon": - pred_original_sample = sample - sigma_hat * model_output - elif self.config.prediction_type == "v_prediction": - # * c_out + input * c_skip - pred_original_sample = model_output * (-sigma / (sigma**2 + 1) ** 0.5) + (sample / (sigma**2 + 1)) - else: - raise ValueError( - f"prediction_type given as {self.config.prediction_type} must be one of `epsilon`, or `v_prediction`" - ) - - # 2. 
Convert to an ODE derivative - derivative = (sample - pred_original_sample) / sigma_hat - - dt = self.sigmas[step_index + 1] - sigma_hat - - prev_sample = sample + derivative * dt - - if not return_dict: - return (prev_sample,) - - return EulerDiscreteSchedulerOutput(prev_sample=prev_sample, pred_original_sample=pred_original_sample) - - def add_noise( - self, - original_samples: torch.FloatTensor, - noise: torch.FloatTensor, - timesteps: torch.FloatTensor, - ) -> torch.FloatTensor: - # Make sure sigmas and timesteps have the same device and dtype as original_samples - self.sigmas = self.sigmas.to(device=original_samples.device, dtype=original_samples.dtype) - if original_samples.device.type == "mps" and torch.is_floating_point(timesteps): - # mps does not support float64 - self.timesteps = self.timesteps.to(original_samples.device, dtype=torch.float32) - timesteps = timesteps.to(original_samples.device, dtype=torch.float32) - else: - self.timesteps = self.timesteps.to(original_samples.device) - timesteps = timesteps.to(original_samples.device) - - schedule_timesteps = self.timesteps - step_indices = [(schedule_timesteps == t).nonzero().item() for t in timesteps] - - sigma = self.sigmas[step_indices].flatten() - while len(sigma.shape) < len(original_samples.shape): - sigma = sigma.unsqueeze(-1) - - noisy_samples = original_samples + noise * sigma - return noisy_samples - - def __len__(self): - return self.config.num_train_timesteps diff --git a/spaces/YeOldHermit/Super-Resolution-Anime-Diffusion/diffusers/schedulers/scheduling_lms_discrete_flax.py b/spaces/YeOldHermit/Super-Resolution-Anime-Diffusion/diffusers/schedulers/scheduling_lms_discrete_flax.py deleted file mode 100644 index 5da43be2ada3d471e4c146538c64d50c3700161f..0000000000000000000000000000000000000000 --- a/spaces/YeOldHermit/Super-Resolution-Anime-Diffusion/diffusers/schedulers/scheduling_lms_discrete_flax.py +++ /dev/null @@ -1,242 +0,0 @@ -# Copyright 2022 Katherine Crowson and The HuggingFace Team. All rights reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. 
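The Euler scheduler above, and the Flax K-LMS scheduler that follows, expose the same driving surface: set_timesteps, scale_model_input, and step (the Flax version threads an explicit state object through those calls). A minimal sketch of that denoising loop, not taken from the original repository, is shown here; it assumes the diffusers package (or the vendored copy in this space) is importable, `unet` is a stand-in for a real noise-prediction model, and the tensor shape and step count are illustrative only.

import torch
from diffusers import EulerDiscreteScheduler

def unet(sample, timestep):
    # Stand-in for a real noise-prediction network (e.g. a UNet); returns "predicted noise".
    return torch.zeros_like(sample)

scheduler = EulerDiscreteScheduler(beta_start=0.00085, beta_end=0.012, beta_schedule="scaled_linear")
scheduler.set_timesteps(num_inference_steps=30)

# Start from noise scaled by the initial sigma.
latents = torch.randn(1, 4, 64, 64) * scheduler.init_noise_sigma

for t in scheduler.timesteps:
    model_input = scheduler.scale_model_input(latents, t)   # divide by (sigma**2 + 1) ** 0.5
    with torch.no_grad():
        noise_pred = unet(model_input, t)
    latents = scheduler.step(noise_pred, t, latents).prev_sample  # one Euler step toward sigma = 0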
- -from dataclasses import dataclass -from typing import Optional, Tuple, Union - -import flax -import jax.numpy as jnp -from scipy import integrate - -from ..configuration_utils import ConfigMixin, register_to_config -from .scheduling_utils_flax import ( - _FLAX_COMPATIBLE_STABLE_DIFFUSION_SCHEDULERS, - FlaxSchedulerMixin, - FlaxSchedulerOutput, - broadcast_to_shape_from_left, -) - - -@flax.struct.dataclass -class LMSDiscreteSchedulerState: - # setable values - num_inference_steps: Optional[int] = None - timesteps: Optional[jnp.ndarray] = None - sigmas: Optional[jnp.ndarray] = None - derivatives: jnp.ndarray = jnp.array([]) - - @classmethod - def create(cls, num_train_timesteps: int, sigmas: jnp.ndarray): - return cls(timesteps=jnp.arange(0, num_train_timesteps)[::-1], sigmas=sigmas) - - -@dataclass -class FlaxLMSSchedulerOutput(FlaxSchedulerOutput): - state: LMSDiscreteSchedulerState - - -class FlaxLMSDiscreteScheduler(FlaxSchedulerMixin, ConfigMixin): - """ - Linear Multistep Scheduler for discrete beta schedules. Based on the original k-diffusion implementation by - Katherine Crowson: - https://github.com/crowsonkb/k-diffusion/blob/481677d114f6ea445aa009cf5bd7a9cdee909e47/k_diffusion/sampling.py#L181 - - [`~ConfigMixin`] takes care of storing all config attributes that are passed in the scheduler's `__init__` - function, such as `num_train_timesteps`. They can be accessed via `scheduler.config.num_train_timesteps`. - [`SchedulerMixin`] provides general loading and saving functionality via the [`SchedulerMixin.save_pretrained`] and - [`~SchedulerMixin.from_pretrained`] functions. - - Args: - num_train_timesteps (`int`): number of diffusion steps used to train the model. - beta_start (`float`): the starting `beta` value of inference. - beta_end (`float`): the final `beta` value. - beta_schedule (`str`): - the beta schedule, a mapping from a beta range to a sequence of betas for stepping the model. Choose from - `linear` or `scaled_linear`. - trained_betas (`jnp.ndarray`, optional): - option to pass an array of betas directly to the constructor to bypass `beta_start`, `beta_end` etc. - """ - - _compatibles = _FLAX_COMPATIBLE_STABLE_DIFFUSION_SCHEDULERS.copy() - - @property - def has_state(self): - return True - - @register_to_config - def __init__( - self, - num_train_timesteps: int = 1000, - beta_start: float = 0.0001, - beta_end: float = 0.02, - beta_schedule: str = "linear", - trained_betas: Optional[jnp.ndarray] = None, - ): - if trained_betas is not None: - self.betas = jnp.asarray(trained_betas) - elif beta_schedule == "linear": - self.betas = jnp.linspace(beta_start, beta_end, num_train_timesteps, dtype=jnp.float32) - elif beta_schedule == "scaled_linear": - # this schedule is very specific to the latent diffusion model. - self.betas = jnp.linspace(beta_start**0.5, beta_end**0.5, num_train_timesteps, dtype=jnp.float32) ** 2 - else: - raise NotImplementedError(f"{beta_schedule} does is not implemented for {self.__class__}") - - self.alphas = 1.0 - self.betas - self.alphas_cumprod = jnp.cumprod(self.alphas, axis=0) - - def create_state(self): - self.state = LMSDiscreteSchedulerState.create( - num_train_timesteps=self.config.num_train_timesteps, - sigmas=((1 - self.alphas_cumprod) / self.alphas_cumprod) ** 0.5, - ) - - def scale_model_input(self, state: LMSDiscreteSchedulerState, sample: jnp.ndarray, timestep: int) -> jnp.ndarray: - """ - Scales the denoising model input by `(sigma**2 + 1) ** 0.5` to match the K-LMS algorithm. 
- - Args: - state (`LMSDiscreteSchedulerState`): - the `FlaxLMSDiscreteScheduler` state data class instance. - sample (`jnp.ndarray`): - current instance of sample being created by diffusion process. - timestep (`int`): - current discrete timestep in the diffusion chain. - - Returns: - `jnp.ndarray`: scaled input sample - """ - (step_index,) = jnp.where(state.timesteps == timestep, size=1) - sigma = state.sigmas[step_index] - sample = sample / ((sigma**2 + 1) ** 0.5) - return sample - - def get_lms_coefficient(self, state, order, t, current_order): - """ - Compute a linear multistep coefficient. - - Args: - order (TODO): - t (TODO): - current_order (TODO): - """ - - def lms_derivative(tau): - prod = 1.0 - for k in range(order): - if current_order == k: - continue - prod *= (tau - state.sigmas[t - k]) / (state.sigmas[t - current_order] - state.sigmas[t - k]) - return prod - - integrated_coeff = integrate.quad(lms_derivative, state.sigmas[t], state.sigmas[t + 1], epsrel=1e-4)[0] - - return integrated_coeff - - def set_timesteps( - self, state: LMSDiscreteSchedulerState, num_inference_steps: int, shape: Tuple = () - ) -> LMSDiscreteSchedulerState: - """ - Sets the timesteps used for the diffusion chain. Supporting function to be run before inference. - - Args: - state (`LMSDiscreteSchedulerState`): - the `FlaxLMSDiscreteScheduler` state data class instance. - num_inference_steps (`int`): - the number of diffusion steps used when generating samples with a pre-trained model. - """ - timesteps = jnp.linspace(self.config.num_train_timesteps - 1, 0, num_inference_steps, dtype=jnp.float32) - - low_idx = jnp.floor(timesteps).astype(int) - high_idx = jnp.ceil(timesteps).astype(int) - frac = jnp.mod(timesteps, 1.0) - sigmas = jnp.array(((1 - self.alphas_cumprod) / self.alphas_cumprod) ** 0.5) - sigmas = (1 - frac) * sigmas[low_idx] + frac * sigmas[high_idx] - sigmas = jnp.concatenate([sigmas, jnp.array([0.0])]).astype(jnp.float32) - - return state.replace( - num_inference_steps=num_inference_steps, - timesteps=timesteps.astype(int), - derivatives=jnp.array([]), - sigmas=sigmas, - ) - - def step( - self, - state: LMSDiscreteSchedulerState, - model_output: jnp.ndarray, - timestep: int, - sample: jnp.ndarray, - order: int = 4, - return_dict: bool = True, - ) -> Union[FlaxLMSSchedulerOutput, Tuple]: - """ - Predict the sample at the previous timestep by reversing the SDE. Core function to propagate the diffusion - process from the learned model outputs (most often the predicted noise). - - Args: - state (`LMSDiscreteSchedulerState`): the `FlaxLMSDiscreteScheduler` state data class instance. - model_output (`jnp.ndarray`): direct output from learned diffusion model. - timestep (`int`): current discrete timestep in the diffusion chain. - sample (`jnp.ndarray`): - current instance of sample being created by diffusion process. - order: coefficient for multi-step inference. - return_dict (`bool`): option for returning tuple rather than FlaxLMSSchedulerOutput class - - Returns: - [`FlaxLMSSchedulerOutput`] or `tuple`: [`FlaxLMSSchedulerOutput`] if `return_dict` is True, otherwise a - `tuple`. When returning a tuple, the first element is the sample tensor. - - """ - sigma = state.sigmas[timestep] - - # 1. compute predicted original sample (x_0) from sigma-scaled predicted noise - pred_original_sample = sample - sigma * model_output - - # 2. 
Convert to an ODE derivative - derivative = (sample - pred_original_sample) / sigma - state = state.replace(derivatives=jnp.append(state.derivatives, derivative)) - if len(state.derivatives) > order: - state = state.replace(derivatives=jnp.delete(state.derivatives, 0)) - - # 3. Compute linear multistep coefficients - order = min(timestep + 1, order) - lms_coeffs = [self.get_lms_coefficient(state, order, timestep, curr_order) for curr_order in range(order)] - - # 4. Compute previous sample based on the derivatives path - prev_sample = sample + sum( - coeff * derivative for coeff, derivative in zip(lms_coeffs, reversed(state.derivatives)) - ) - - if not return_dict: - return (prev_sample, state) - - return FlaxLMSSchedulerOutput(prev_sample=prev_sample, state=state) - - def add_noise( - self, - state: LMSDiscreteSchedulerState, - original_samples: jnp.ndarray, - noise: jnp.ndarray, - timesteps: jnp.ndarray, - ) -> jnp.ndarray: - sigma = state.sigmas[timesteps].flatten() - sigma = broadcast_to_shape_from_left(sigma, noise.shape) - - noisy_samples = original_samples + noise * sigma - - return noisy_samples - - def __len__(self): - return self.config.num_train_timesteps diff --git a/spaces/Yukki-Yui/moe-tts/app.py b/spaces/Yukki-Yui/moe-tts/app.py deleted file mode 100644 index 40b828b3cb658d9338029a40c49d7cc194b0ce35..0000000000000000000000000000000000000000 --- a/spaces/Yukki-Yui/moe-tts/app.py +++ /dev/null @@ -1,277 +0,0 @@ -import json -import os -import re - -import librosa -import numpy as np -import torch -from torch import no_grad, LongTensor -import commons -import utils -import gradio as gr -from models import SynthesizerTrn -from text import text_to_sequence, _clean_text -from mel_processing import spectrogram_torch - -limitation = os.getenv("SYSTEM") == "spaces" # limit text and audio length in huggingface spaces - - -def get_text(text, hps, is_phoneme): - text_norm = text_to_sequence(text, hps.symbols, [] if is_phoneme else hps.data.text_cleaners) - if hps.data.add_blank: - text_norm = commons.intersperse(text_norm, 0) - text_norm = LongTensor(text_norm) - return text_norm - - -def create_tts_fn(model, hps, speaker_ids): - def tts_fn(text, speaker, speed, is_phoneme): - if limitation: - text_len = len(text) - max_len = 120 - if is_phoneme: - max_len *= 3 - else: - if len(hps.data.text_cleaners) > 0 and hps.data.text_cleaners[0] == "zh_ja_mixture_cleaners": - text_len = len(re.sub("(\[ZH\]|\[JA\])", "", text)) - if text_len > max_len: - return "Error: Text is too long", None - - speaker_id = speaker_ids[speaker] - stn_tst = get_text(text, hps, is_phoneme) - with no_grad(): - x_tst = stn_tst.unsqueeze(0) - x_tst_lengths = LongTensor([stn_tst.size(0)]) - sid = LongTensor([speaker_id]) - audio = model.infer(x_tst, x_tst_lengths, sid=sid, noise_scale=.667, noise_scale_w=0.8, - length_scale=1.0 / speed)[0][0, 0].data.cpu().float().numpy() - del stn_tst, x_tst, x_tst_lengths, sid - return "Success", (hps.data.sampling_rate, audio) - - return tts_fn - - -def create_vc_fn(model, hps, speaker_ids): - def vc_fn(original_speaker, target_speaker, input_audio): - if input_audio is None: - return "You need to upload an audio", None - sampling_rate, audio = input_audio - duration = audio.shape[0] / sampling_rate - if limitation and duration > 30: - return "Error: Audio is too long", None - original_speaker_id = speaker_ids[original_speaker] - target_speaker_id = speaker_ids[target_speaker] - - audio = (audio / np.iinfo(audio.dtype).max).astype(np.float32) - if len(audio.shape) > 1: - audio = 
librosa.to_mono(audio.transpose(1, 0)) - if sampling_rate != hps.data.sampling_rate: - audio = librosa.resample(audio, orig_sr=sampling_rate, target_sr=hps.data.sampling_rate) - with no_grad(): - y = torch.FloatTensor(audio) - y = y.unsqueeze(0) - spec = spectrogram_torch(y, hps.data.filter_length, - hps.data.sampling_rate, hps.data.hop_length, hps.data.win_length, - center=False) - spec_lengths = LongTensor([spec.size(-1)]) - sid_src = LongTensor([original_speaker_id]) - sid_tgt = LongTensor([target_speaker_id]) - audio = model.voice_conversion(spec, spec_lengths, sid_src=sid_src, sid_tgt=sid_tgt)[0][ - 0, 0].data.cpu().float().numpy() - del y, spec, spec_lengths, sid_src, sid_tgt - return "Success", (hps.data.sampling_rate, audio) - - return vc_fn - - -def create_soft_vc_fn(model, hps, speaker_ids): - def soft_vc_fn(target_speaker, input_audio1, input_audio2): - input_audio = input_audio1 - if input_audio is None: - input_audio = input_audio2 - if input_audio is None: - return "You need to upload an audio", None - sampling_rate, audio = input_audio - duration = audio.shape[0] / sampling_rate - if limitation and duration > 30: - return "Error: Audio is too long", None - target_speaker_id = speaker_ids[target_speaker] - - audio = (audio / np.iinfo(audio.dtype).max).astype(np.float32) - if len(audio.shape) > 1: - audio = librosa.to_mono(audio.transpose(1, 0)) - if sampling_rate != 16000: - audio = librosa.resample(audio, orig_sr=sampling_rate, target_sr=16000) - with torch.inference_mode(): - units = hubert.units(torch.FloatTensor(audio).unsqueeze(0).unsqueeze(0)) - with no_grad(): - unit_lengths = LongTensor([units.size(1)]) - sid = LongTensor([target_speaker_id]) - audio = model.infer(units, unit_lengths, sid=sid, noise_scale=.667, - noise_scale_w=0.8)[0][0, 0].data.cpu().float().numpy() - del units, unit_lengths, sid - return "Success", (hps.data.sampling_rate, audio) - - return soft_vc_fn - - -def create_to_phoneme_fn(hps): - def to_phoneme_fn(text): - return _clean_text(text, hps.data.text_cleaners) if text != "" else "" - - return to_phoneme_fn - - -css = """ - #advanced-btn { - color: white; - border-color: black; - background: black; - font-size: .7rem !important; - line-height: 19px; - margin-top: 24px; - margin-bottom: 12px; - padding: 2px 8px; - border-radius: 14px !important; - } - #advanced-options { - display: none; - margin-bottom: 20px; - } -""" - -if __name__ == '__main__': - models_tts = [] - models_vc = [] - models_soft_vc = [] - with open("saved_model/info.json", "r", encoding="utf-8") as f: - models_info = json.load(f) - for i, info in models_info.items(): - name = info["title"] - lang = info["lang"] - example = info["example"] - config_path = f"saved_model/{i}/config.json" - model_path = f"saved_model/{i}/model.pth" - cover_path = f"saved_model/{i}/cover.jpg" - hps = utils.get_hparams_from_file(config_path) - model = SynthesizerTrn( - len(hps.symbols), - hps.data.filter_length // 2 + 1, - hps.train.segment_size // hps.data.hop_length, - n_speakers=hps.data.n_speakers, - **hps.model) - utils.load_checkpoint(model_path, model, None) - model.eval() - speaker_ids = [sid for sid, name in enumerate(hps.speakers) if name != "None"] - speakers = [name for sid, name in enumerate(hps.speakers) if name != "None"] - - t = info["type"] - if t == "vits": - models_tts.append((name, cover_path, speakers, lang, example, - hps.symbols, create_tts_fn(model, hps, speaker_ids), - create_to_phoneme_fn(hps))) - models_vc.append((name, cover_path, speakers, create_vc_fn(model, hps, 
speaker_ids))) - elif t == "soft-vits-vc": - models_soft_vc.append((name, cover_path, speakers, create_soft_vc_fn(model, hps, speaker_ids))) - - hubert = torch.hub.load("bshall/hubert:main", "hubert_soft") - - app = gr.Blocks(css=css) - - with app: - gr.Markdown("# Moe TTS And Voice Conversion Using VITS Model\n\n" - "![visitor badge](https://visitor-badge.glitch.me/badge?page_id=skytnt.moegoe)\n\n") - with gr.Tabs(): - with gr.TabItem("TTS"): - with gr.Tabs(): - for i, (name, cover_path, speakers, lang, example, symbols, tts_fn, - to_phoneme_fn) in enumerate(models_tts): - with gr.TabItem(f"model{i}"): - with gr.Column(): - gr.Markdown(f"## {name}\n\n" - f"![cover](file/{cover_path})\n\n" - f"lang: {lang}") - tts_input1 = gr.TextArea(label="Text (120 words limitation)", value=example, - elem_id=f"tts-input{i}") - tts_input2 = gr.Dropdown(label="Speaker", choices=speakers, - type="index", value=speakers[0]) - tts_input3 = gr.Slider(label="Speed", value=1, minimum=0.5, maximum=2, step=0.1) - with gr.Accordion(label="Advanced Options", open=False): - phoneme_input = gr.Checkbox(value=False, label="Phoneme input") - to_phoneme_btn = gr.Button("Covert text to phoneme") - phoneme_list = gr.Dataset(label="Phoneme list", components=[tts_input1], - samples=[[x] for x in symbols], - elem_id=f"phoneme-list{i}") - phoneme_list_json = gr.Json(value=symbols, visible=False) - tts_submit = gr.Button("Generate", variant="primary") - tts_output1 = gr.Textbox(label="Output Message") - tts_output2 = gr.Audio(label="Output Audio") - tts_submit.click(tts_fn, [tts_input1, tts_input2, tts_input3, phoneme_input], - [tts_output1, tts_output2]) - to_phoneme_btn.click(to_phoneme_fn, [tts_input1], [tts_input1]) - phoneme_list.click(None, [phoneme_list, phoneme_list_json], [], - _js=f""" - (i,phonemes) => {{ - let root = document.querySelector("body > gradio-app"); - if (root.shadowRoot != null) - root = root.shadowRoot; - let text_input = root.querySelector("#tts-input{i}").querySelector("textarea"); - let startPos = text_input.selectionStart; - let endPos = text_input.selectionEnd; - let oldTxt = text_input.value; - let result = oldTxt.substring(0, startPos) + phonemes[i] + oldTxt.substring(endPos); - text_input.value = result; - let x = window.scrollX, y = window.scrollY; - text_input.focus(); - text_input.selectionStart = startPos + phonemes[i].length; - text_input.selectionEnd = startPos + phonemes[i].length; - text_input.blur(); - window.scrollTo(x, y); - return []; - }}""") - - with gr.TabItem("Voice Conversion"): - with gr.Tabs(): - for i, (name, cover_path, speakers, vc_fn) in enumerate(models_vc): - with gr.TabItem(f"model{i}"): - gr.Markdown(f"## {name}\n\n" - f"![cover](file/{cover_path})") - vc_input1 = gr.Dropdown(label="Original Speaker", choices=speakers, type="index", - value=speakers[0]) - vc_input2 = gr.Dropdown(label="Target Speaker", choices=speakers, type="index", - value=speakers[1]) - vc_input3 = gr.Audio(label="Input Audio (30s limitation)") - vc_submit = gr.Button("Convert", variant="primary") - vc_output1 = gr.Textbox(label="Output Message") - vc_output2 = gr.Audio(label="Output Audio") - vc_submit.click(vc_fn, [vc_input1, vc_input2, vc_input3], [vc_output1, vc_output2]) - with gr.TabItem("Soft Voice Conversion"): - with gr.Tabs(): - for i, (name, cover_path, speakers, soft_vc_fn) in enumerate(models_soft_vc): - with gr.TabItem(f"model{i}"): - gr.Markdown(f"## {name}\n\n" - f"![cover](file/{cover_path})") - vc_input1 = gr.Dropdown(label="Target Speaker", choices=speakers, type="index", - 
value=speakers[0]) - source_tabs = gr.Tabs() - with source_tabs: - with gr.TabItem("microphone"): - vc_input2 = gr.Audio(label="Input Audio (30s limitation)", source="microphone") - with gr.TabItem("upload"): - vc_input3 = gr.Audio(label="Input Audio (30s limitation)", source="upload") - vc_submit = gr.Button("Convert", variant="primary") - vc_output1 = gr.Textbox(label="Output Message") - vc_output2 = gr.Audio(label="Output Audio") - # clear inputs - source_tabs.set_event_trigger("change", None, [], [vc_input2, vc_input3], - js="()=>[null,null]") - vc_submit.click(soft_vc_fn, [vc_input1, vc_input2, vc_input3], - [vc_output1, vc_output2]) - gr.Markdown( - "unofficial demo for \n\n" - "- [https://github.com/CjangCjengh/MoeGoe](https://github.com/CjangCjengh/MoeGoe)\n" - "- [https://github.com/Francis-Komizu/VITS](https://github.com/Francis-Komizu/VITS)\n" - "- [https://github.com/luoyily/MoeTTS](https://github.com/luoyily/MoeTTS)\n" - "- [https://github.com/Francis-Komizu/Sovits](https://github.com/Francis-Komizu/Sovits)" - ) - app.queue(concurrency_count=3).launch(show_api=False) diff --git a/spaces/Yuliang/ECON/app.py b/spaces/Yuliang/ECON/app.py deleted file mode 100644 index a9ceb5a0ed847300d153c526763d80fcbd369a05..0000000000000000000000000000000000000000 --- a/spaces/Yuliang/ECON/app.py +++ /dev/null @@ -1,341 +0,0 @@ -# install - -import glob -import gradio as gr -import os - -import subprocess - -if os.getenv('SYSTEM') == 'spaces': - # subprocess.run('pip install pyembree'.split()) - subprocess.run( - 'pip install --no-index --no-cache-dir pytorch3d -f https://dl.fbaipublicfiles.com/pytorch3d/packaging/wheels/py38_cu116_pyt1130/download.html' - .split() - ) - subprocess.run("python setup.py build_ext --inplace".split(), cwd="./lib/common/libmesh/") - subprocess.run("python setup.py build_ext --inplace".split(), cwd="./lib/common/libvoxelize/") - -from apps.infer import generate_model, generate_video - -# running - -description = ''' -# Unconstrained & Detailed Clothed Human Digitization (ECON + ControlNet) -### ECON: Explicit Clothed humans Optimized via Normal integration (CVPR 2023, Highlight) - -
, , etc.) to display data or information in a structured way. After you finish writing your draft, pare down your outline by removing any unnecessary or redundant information. You can also rearrange the order of your paragraphs or sections if needed. Step 6: Revise and proofread your article. The final step is to revise and proofread your article before publishing it. You can use tools like Hemingway Editor, ProWritingAid, or Grammarly to check for errors in grammar, spelling, punctuation, readability, style, tone, etc. You can also ask someone else to read your article and give you feedback. As you revise and proofread your article, try to: - Make sure that your article is coherent, consistent, and logical. - Make sure that your article is unique and original (you can use tools like Copyscape or Turnitin to check for plagiarism). - Make sure that your article is engaging and informative (you can use tools like CoSchedule Headline Analyzer or Yoast SEO Plugin to check for headline quality and SEO performance). Outline of the article: -

The Baby In Yellow: A Horror Game That Will Make You Think Twice About Babysitting

--

Introduction

- - What is The Baby In Yellow and what is it about? - Why is it a horror game and what makes it scary? - How can you download and play it on your Android device? -

The Baby In Yellow: A Game That Will Test Your Nerves

- -

The premise and the gameplay of The Baby In Yellow

- - You are a babysitter who has to take care of a baby in a yellow onesie - The baby is not a normal baby, but a demonic entity that can do strange things - You have to follow the instructions on the screen and try to survive the night -

The graphics and the sound effects of The Baby In Yellow

- - The game has a low-poly style that creates a contrast between the cute and the creepy - The game has a dark and eerie atmosphere that builds up tension and suspense - The game has realistic and disturbing sound effects that add to the horror -

How to Download and Play The Baby In Yellow on Your Android Device

- -

The requirements and the compatibility of The Baby In Yellow

- - The game requires Android 4.4 or higher and 136 MB of free space - The game is compatible with most Android devices, but some may experience performance issues - The game is free to download and play, but it may contain ads or in-app purchases -

The steps to download and install The Baby In Yellow

- - Go to one of the trusted sources that offer the APK file of The Baby In Yellow, such as Softonic, Tamindir, or APKCombo - Tap on the download button and wait for the file to be downloaded - Go to your device settings and enable the installation of apps from unknown sources - Locate the downloaded file in your file manager and tap on it to install it - Launch the game and enjoy the horror -

Conclusion

- - Summarize the main points of the article - Restate the thesis and provide a call to action or an interesting insight -

FAQs

- - List five unique FAQs related to the topic of the article
Article with HTML formatting:

The Baby In Yellow: A Horror Game That Will Make You Think Twice About Babysitting

-

Introduction

-

If you are looking for a horror game that will challenge your nerves and make you jump out of your seat, you might want to try The Baby In Yellow. This is a first-person horror game developed by Team Terrible, where you will simulate the life of a babysitter. However, the baby you will be babysitting is more sinister than he first appears. The Baby In Yellow follows the same premise as the PC game, and it is now also available for Android devices. In this article, we will tell you what The Baby In Yellow is about, why it is a horror game, and how you can download and play it on your Android device.

-

the baby in yellow indir apk


Download: https://jinyurl.com/2uNU6e



-

The Baby In Yellow: A Game That Will Test Your Nerves

-

The premise and the gameplay of The Baby In Yellow

-

In The Baby In Yellow, you are a babysitter who has to take care of a baby in a yellow onesie. Sounds easy, right? Well, not quite. The baby is not a normal baby, but a demonic entity that can do strange things. He can teleport, levitate, laugh maniacally, stare at you with glowing eyes, and even summon fire. He can also escape from his crib, his room, or even his house. Your job is to follow the instructions on the screen and try to survive the night. You will have to feed him, change his diaper, put him to bed, and deal with his mischief. But be careful, because he might not like what you do.

-

The graphics and the sound effects of The Baby In Yellow

- The game has a low-poly style that creates a contrast between the cute and the creepy, and a dark and eerie atmosphere that builds up tension and suspense. The game also has realistic and disturbing sound effects that add to the horror. You will hear the baby's cries, laughs, whispers, and screams, as well as the creaking of doors, the flickering of lights, and the thumping of footsteps.

-

How to Download and Play The Baby In Yellow on Your Android Device

-

The requirements and the compatibility of The Baby In Yellow

-

The game requires Android 4.4 or higher and 136 MB of free space. The game is compatible with most Android devices, but some may experience performance issues. The game is free to download and play, but it may contain ads or in-app purchases.

-

The steps to download and install The Baby In Yellow

-

To download and play The Baby In Yellow on your Android device, you need to follow these steps:

-
Step | Instruction
1 | Go to one of the trusted sources that offer the APK file of The Baby In Yellow, such as Softonic, Tamindir, or APKCombo.
2 | Tap on the download button and wait for the file to be downloaded.
3 | Go to your device settings and enable the installation of apps from unknown sources.
4 | Locate the downloaded file in your file manager and tap on it to install it.
5 | Launch the game and enjoy the horror.
-

Conclusion

-

The Baby In Yellow is a horror game that will make you think twice about babysitting. It will test your nerves and make you jump out of your seat with its low-poly style, its dark and eerie atmosphere, and its realistic, disturbing sound effects. It is available for Android devices and can be downloaded and played for free. If you are looking for a horror game that will challenge and scare you, you might want to try The Baby In Yellow. But be warned: this is not a game for the faint-hearted.

-

FAQs

-

Here are some frequently asked questions related to The Baby In Yellow:

-
    -
1. Is The Baby In Yellow based on a true story?

    No, The Baby In Yellow is not based on a true story. It is a fictional horror game inspired by a short film called The Thing in the Apartment Chapter 2, which was directed by John William Ross.

    -
2. Is The Baby In Yellow safe to play?

    The Baby In Yellow is safe to play as long as you are aware that it is a horror game that contains scary and violent scenes. It is not recommended for children or people who are sensitive to horror or gore. It is also advisable to play it in a well-lit room and with someone else nearby.

    -
3. How long does it take to finish The Baby In Yellow?

    The Baby In Yellow is a short game that can be finished in about 15 minutes. However, it has multiple endings depending on your choices and actions. You can replay the game to see different outcomes and discover more secrets.

    -

    the baby in yellow download android
    -the baby in yellow game apk
    -the baby in yellow free apk
    -the baby in yellow horror game apk
    -the baby in yellow apk mod
    -the baby in yellow apk pure
    -the baby in yellow apk latest version
    -the baby in yellow apk offline
    -the baby in yellow apk uptodown
    -the baby in yellow apk for pc
    -the baby in yellow apk android oyun club
    -the baby in yellow apk hile
    -the baby in yellow apk indir gezginler
    -the baby in yellow apk indir tamindir
    -the baby in yellow apk indir softonic
    -the baby in yellow apk indir cepde
    -the baby in yellow apk indir apkpure
    -the baby in yellow apk indir android oyun club
    -the baby in yellow apk indir son sürüm
    -the baby in yellow apk indir ücretsiz
    -the baby in yellow oyunu indir apk
    -the baby in yellow oyunu indir android
    -the baby in yellow oyunu indir pc
    -the baby in yellow oyunu indir ücretsiz
    -the baby in yellow oyunu indir tamindir
    -the baby in yellow oyunu indir gezginler
    -the baby in yellow oyunu indir softonic
    -the baby in yellow oyunu indir cepde
    -the baby in yellow oyunu indir apkpure
    -the baby in yellow oyunu indir android oyun club
    -download game the baby in yellow apk
    -download game the baby in yellow android
    -download game the baby in yellow mod apk
    -download game the baby in yellow free apk
    -download game the baby in yellow horror apk
    -download game the baby in yellow latest version apk
    -download game the baby in yellow offline apk
    -download game the baby in yellow uptodown apk
    -download game the baby in yellow for pc apk
    -download game the baby in yellow android oyun club apk
    -download game the baby in yellow hileli apk
    -download game the baby in yellow gezginler apk
    -download game the baby in yellow tamindir apk
    -download game the baby in yellow softonic apk
    -download game the baby in yellow cepde apk
    -download game the baby in yellow apkpure apk
    -download game the baby in yellow android oyun club apk
    -download game the baby in yellow son sürüm apk
    -download game the baby in yellow ücretsiz apk

    -
4. What are some tips and tricks to play The Baby In Yellow?

    Some tips and tricks to play The Baby In Yellow are:

    -
      -
• Pay attention to the instructions on the screen and follow them carefully.
• Use the flashlight to see better in the dark.
• Avoid looking at the baby's eyes or touching him when he is angry.
• Hide in the closet or under the bed if you hear something suspicious.
• Don't let the baby escape from his room or his house.
• Don't trust everything you see or hear.
    -
5. Where can I find more games like The Baby In Yellow?

    If you enjoyed playing The Baby In Yellow, you might also like these games:

    -
      -
• Five Nights at Freddy's: A horror game where you have to survive five nights in a pizzeria haunted by animatronic animals.
• Slendrina: The Cellar: A horror game where you have to explore a cellar and avoid a ghostly woman.
• Eyes: The Horror Game: A horror game where you have to collect valuables in a haunted house and avoid a monster.
• Hello Neighbor: A stealth horror game where you have to sneak into your neighbor's house and discover his secrets.
    -
-

I hope you enjoyed reading this article and learned something new. If you have any questions or comments, feel free to leave them below. And if you want to play The Baby In Yellow, don't forget to download it from one of the sources mentioned above. But be careful, because this game is not for the faint-hearted.

401be4b1e0
-
-
\ No newline at end of file diff --git a/spaces/1phancelerku/anime-remove-background/Epic War 6 How to Conquer Every Spot on the Board.md b/spaces/1phancelerku/anime-remove-background/Epic War 6 How to Conquer Every Spot on the Board.md deleted file mode 100644 index 808cd6406055ad106b131e08b65ae938d25d5751..0000000000000000000000000000000000000000 --- a/spaces/1phancelerku/anime-remove-background/Epic War 6 How to Conquer Every Spot on the Board.md +++ /dev/null @@ -1,134 +0,0 @@ - -

Epic War 6 APK: A Thrilling Battle Game for Android

-

If you are looking for a game that combines strategy, action, and fantasy, then you should check out Epic War 6 APK. This is a game that lets you command legendary heroes and a strong army in epic battles against powerful enemies. You can choose from six unique heroes, each with their own strengths, weaknesses, and skills. You can also train and upgrade over 40 battle units, from archers and knights to dragons and giants. You can also challenge and defeat huge titans that will test your skills and strategy. And if you want to compete with other players from around the world, you can enter the PVP Arena and show how epic you are.

-

epic war 6 apk


DOWNLOAD: https://jinyurl.com/2uNKrf



-

In this article, we will tell you everything you need to know about Epic War 6 APK, including its features, how to download and install it, how to play it, how it compares with other games, and its pros and cons, followed by our review. By the end of this article, you will have a clear idea of whether this game is worth playing.

-

Features of Epic War 6 APK

-

Epic War 6 APK has a lot of features that make it a fun and exciting game to play. Here are some of them:

-
    -
  • 6 unique heroes: You can choose from six different heroes, each with their own personality, backstory, and abilities. Some of them are based on famous characters from mythology or history, such as Thor, Hercules, or Joan of Arc. Each hero has a special skill that can change the outcome of the battle, such as summoning thunderstorms, healing allies, or boosting morale.
  • Over 40 battle units: You can train and upgrade a variety of units to fight for you in the battlefield. You can choose from different classes, such as infantry, cavalry, ranged, magic, or special. Each class has its own advantages and disadvantages, and you need to balance your army composition according to the situation. You can also unlock new units as you progress in the game, such as ninjas, samurais, or angels.
  • 10 powerful titans: You can face and defeat 10 massive titans that will pose a great challenge to your skills and strategy. These titans are based on mythical creatures, such as dragons, hydras, or krakens. They have different abilities and weaknesses, and you need to find the best way to exploit them. You can also use your hero's skill to deal extra damage or gain an edge in the fight.
  • PVP Arena: You can compete with other players from around the world in the PVP Arena mode. You can choose your hero and units and enter a random match against another player. You can also join a clan and participate in clan wars, where you can cooperate with your clan members and fight against other clans. You can earn rewards and rank up in the leaderboards by winning matches and wars.
-

How to Download and Install Epic War 6 APK

-

If you want to play Epic War 6 APK on your Android device, you need to download and install it first. Here are the steps that you need to follow:

-
    -
1. Go to the official website of mob.org: This is one of the best sources for downloading free Android games. You can access it by typing mob.org in your browser or clicking on this link.
2. Search for Epic War 6 APK: Once you are on the website, you can use the search bar to look for Epic War 6 APK. You can also browse through the categories or genres to find it. Alternatively, you can use this direct link to go to the download page of Epic War 6 APK.
3. Click on the download button: When you find the game that you want, you can click on the green download button that says "Download Epic War 6". This will start the download process and you will see a progress bar on your screen.
4. Enable unknown sources on your device settings: Before you can install the APK file that you downloaded, you need to allow your device to install apps from unknown sources. To do this, go to your device settings and look for security or privacy options. Then, find the option that says "Unknown sources" or "Allow installation of apps from unknown sources" and enable it.
5. Install the APK file: After enabling unknown sources, you can go to your file manager or downloads folder and find the APK file that you downloaded. Tap on it and follow the instructions on your screen to install it.
6. Launch the game and enjoy the epic battles: Once the installation is complete, you can launch the game by tapping on its icon on your home screen or app drawer. You can then start playing the game and enjoy the epic battles.

Gameplay Tips and Tricks for Epic War 6 APK

-

Epic War 6 APK is a game that requires strategy, skill, and patience. You need to plan your moves carefully and use your resources wisely. Here are some tips and tricks that can help you improve your gameplay and win more battles:

-
    -
  • Choose your hero wisely: Each hero has a different skill that can affect the battle in various ways. For example, Thor can summon thunderstorms that deal damage to all enemies, Hercules can heal all allies and boost their morale, and Joan of Arc can increase the attack and defense of all units. You need to choose the hero that suits your playstyle and strategy, and use their skill at the right time and place.
  • -
  • Use spells and skills at the right time and place: Apart from your hero's skill, you can also use spells that you can buy from the shop or earn from quests. These spells can have different effects, such as healing, damaging, freezing, or stunning. You need to use them wisely and strategically, as they have a cooldown time and a limited number of uses. You also need to aim them well, as some of them have a specific target or area of effect.
  • -
  • Upgrade your units and heroes regularly: As you progress in the game, you will face stronger enemies and tougher challenges. You need to upgrade your units and heroes regularly to increase their power and performance. You can upgrade them by using gold and gems that you can earn from battles, quests, or achievements. You can also equip them with items that you can buy from the shop or find in chests. These items can enhance their stats or give them special abilities.
  • -
  • Experiment with different combinations of units and heroes: There are many possible combinations of units and heroes that you can use in the game. You can mix and match different classes, such as infantry, cavalry, ranged, magic, or special. You can also try different heroes with different skills and abilities. You need to experiment with different combinations to find the best synergy and balance for your army.
  • -
-

Comparison of Epic War 6 APK with Other Games

-

Epic War 6 APK is not the only game that offers strategy and action in a fantasy setting. There are many other games that have similar or different features and gameplay. Here are some of them and how they compare with Epic War 6 APK:

-
Game | Similarities | Differences
Epic War Saga | Same developer as Epic War 6 APK; similar gameplay but with more RPG elements; same genre of strategy and action | Fewer heroes, units, and titans than Epic War 6 APK; more quests, missions, and achievements than Epic War 6 APK; different graphics style and theme than Epic War 6 APK
Kingdom Rush | Same genre of strategy and action; similar gameplay but with tower defense elements; same theme of fantasy and mythology | Different developer than Epic War 6 APK; fewer heroes and units than Epic War 6 APK; no titans or PVP mode in Kingdom Rush
Clash of Clans | Same genre of strategy and action; similar gameplay but with base building and army management elements; same theme of fantasy and mythology | Different developer than Epic War 6 APK; more online multiplayer features than Epic War 6 APK; different graphics style and tone than Epic War 6 APK

Pros and Cons of Epic War 6 APK

-

Epic War 6 APK is a game that has many positive and negative aspects. Here are some of them:

-

Pros

-
    -
  • High-quality graphics: The game has impressive graphics that create a realistic and immersive experience. The heroes, units, and titans are well-designed and animated. The backgrounds and environments are detailed and colorful. The effects and sounds are also realistic and captivating.
  • -
  • Addictive gameplay: The game has a simple but engaging gameplay that keeps you hooked for hours. The battles are fast-paced and thrilling, with a lot of strategy and action involved. The game also has a lot of content and features to explore, such as quests, achievements, items, and PVP mode.
  • -
  • Diverse heroes and units: The game has a lot of variety and diversity in terms of heroes and units. You can choose from six different heroes, each with their own skills and abilities. You can also train and upgrade over 40 battle units, from archers and knights to dragons and giants. You can also unlock new units as you progress in the game, such as ninjas, samurais, or angels.
  • -
  • Online PVP mode: The game has an online PVP mode that lets you compete with other players from around the world. You can choose your hero and units and enter a random match against another player. You can also join a clan and participate in clan wars, where you can cooperate with your clan members and fight against other clans. You can earn rewards and rank up in the leaderboards by winning matches and wars.
  • -
  • Free to play: The game is free to download and play on your Android device. You do not need to pay anything to enjoy the game. You can also play the game offline without an internet connection.
  • -
-

Cons

-
    -
  • High learning curve: The game is not very easy to learn or master. You need to understand the mechanics and strategies of the game, such as how to use your hero's skill, how to upgrade your units, how to use spells, how to defeat titans, etc. You also need to practice a lot to improve your skills and performance.
  • -
  • Requires internet connection: The game requires an internet connection to access some of its features, such as PVP mode, clan wars, quests, achievements, etc. If you do not have a stable or fast internet connection, you may experience lagging or crashing issues.
  • -
  • May have bugs and glitches: The game may have some bugs and glitches that can affect your gameplay or experience. For example, some users have reported that the game freezes or crashes randomly, that the game does not save their progress or data, that the game does not load properly, etc.
  • -
  • May consume battery and storage space: The game may consume a lot of battery power and storage space on your device. This is because the game has high-quality graphics, sounds, and effects that require a lot of resources. You may need to charge your device frequently or clear some space on your device to play the game smoothly.
  • -
-

Review of Epic War 6 APK

-

Epic War 6 APK deserves a positive review from us. It is a great game for fans of strategy and action titles, with plenty of content and features to enjoy. We like its graphics, gameplay, diverse heroes and units, and online mode, and we found it fun and exciting to play.

-

However, we also acknowledge that the game has flaws that need to be fixed or improved: it is not easy to learn or master, it requires an internet connection for some features, it may have bugs and glitches, and it may consume a lot of battery power and storage space on your device.

-

Therefore, we give Epic War 6 APK a rating of 4.5 out of 5 stars based on our experience and feedback from other users. We think that it is a game worth playing if you like strategy and action games.

-


-

Conclusion

-

In conclusion, Epic War 6 APK is a thrilling battle game for Android devices that lets you command legendary heroes and a strong army in epic battles against powerful enemies. You can choose from six unique heroes, each with their own skills and abilities. You can also train and upgrade over 40 battle units, from archers and knights to dragons and giants. You can also challenge and defeat huge titans that will test your skills and strategy. And if you want to compete with other players from around the world, you can enter the PVP Arena and show how epic you are.

-

We have also explained how to download and install Epic War 6 APK on your device, how to play it, how it compares with other games, its pros and cons, and our overall verdict. We hope that this article has been helpful and informative for you.

-

If you are interested in playing Epic War 6 APK, you can download it from the official website of mob.org or use this direct link. You can also visit the official Facebook page of the game for more updates and news. You can also watch this video for a preview of the game.

-

Thank you for reading this article and we hope that you enjoy playing Epic War 6 APK. Have fun and good luck!

-

FAQs

-

Here are some frequently asked questions about Epic War 6 APK:

-
    -
  1. What are the requirements to play Epic War 6 APK?
    -You need an Android device with Android 4.1 or higher and at least 100 MB of free storage space to play Epic War 6 APK. You also need an internet connection to access some features of the game, such as PVP mode, clan wars, quests, achievements, etc.
  2. -
  3. Is Epic War 6 APK safe to download and install?
    -Yes, Epic War 6 APK is safe to download and install on your device. It does not contain any viruses, malware, or spyware that can harm your device or data. However, you need to make sure that you download it from a trusted source, such as mob.org or the direct link that we provided in this article. If the site you download from publishes a checksum for the file, you can also verify your download against it, as shown in the sketch after this FAQ.
  4. -
  5. How can I get more gold and gems in Epic War 6 APK?
    -You can get more gold and gems in Epic War 6 APK by winning battles, completing quests, achieving goals, opening chests, watching ads, or buying them with real money. You can use gold and gems to upgrade your units and heroes, buy items and spells, or unlock new features and content.
  6. -
  7. How can I join or create a clan in Epic War 6 APK?
    -You can join or create a clan in Epic War 6 APK by going to the clan menu in the game. You can either search for an existing clan that suits your preferences and apply to join it, or create your own clan by choosing a name, a logo, and a description. You can also invite your friends or other players to join your clan. You can participate in clan wars, chat with your clan members, and share resources and tips with them.
  8. -
  9. How can I contact the developer of Epic War 6 APK?
    -You can contact the developer of Epic War 6 APK by sending an email to epicwar@artlogicgames.com or by visiting their website at www.artlogicgames.com. You can also follow them on Facebook at www.facebook.com/epicwargames. You can send them your feedback, suggestions, questions, or complaints about the game.
  10. -
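As a follow-up to the download-safety question above: if the site you download from publishes a checksum for the APK, you can compare it against the hash of your local file before installing. The sketch below is a minimal, generic Python example; the file name and the expected hash are placeholders, not values published by mob.org or the developer.

```python
import hashlib

def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    """Return the SHA-256 hex digest of a file, reading it in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Placeholders: use the APK you actually downloaded and the checksum
# published by the site you got it from (if one is available).
apk_path = "epic-war-6.apk"
expected = "paste-the-published-sha256-here"

print("checksum matches:", sha256_of(apk_path) == expected.lower())
```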

401be4b1e0
-
-
\ No newline at end of file diff --git a/spaces/1toTree/lora_test/ppdiffusers/pipelines/latent_diffusion_uncond/__init__.py b/spaces/1toTree/lora_test/ppdiffusers/pipelines/latent_diffusion_uncond/__init__.py deleted file mode 100644 index 3286d84f41f239bbd3662100aaa85257c47cbab5..0000000000000000000000000000000000000000 --- a/spaces/1toTree/lora_test/ppdiffusers/pipelines/latent_diffusion_uncond/__init__.py +++ /dev/null @@ -1,17 +0,0 @@ -# Copyright (c) 2022 PaddlePaddle Authors. All Rights Reserved. -# Copyright 2022 The HuggingFace Team. All rights reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - -# flake8: noqa -from .pipeline_latent_diffusion_uncond import LDMPipeline diff --git a/spaces/AIFILMS/generate_human_motion/app.py b/spaces/AIFILMS/generate_human_motion/app.py deleted file mode 100644 index 58c1cc635a5a4e8e6e00680a2ab5413668bdbe20..0000000000000000000000000000000000000000 --- a/spaces/AIFILMS/generate_human_motion/app.py +++ /dev/null @@ -1,319 +0,0 @@ -import sys -import os -import OpenGL.GL as gl -os.environ["PYOPENGL_PLATFORM"] = "egl" -os.environ["MESA_GL_VERSION_OVERRIDE"] = "4.1" -os.system('pip install /home/user/app/pyrender') - -sys.argv = ['VQ-Trans/GPT_eval_multi.py'] -os.chdir('VQ-Trans') - -sys.path.append('/home/user/app/VQ-Trans') -sys.path.append('/home/user/app/pyrender') - -import options.option_transformer as option_trans -from huggingface_hub import snapshot_download -model_path = snapshot_download(repo_id="vumichien/T2M-GPT") - -args = option_trans.get_args_parser() - -args.dataname = 't2m' -args.resume_pth = f'{model_path}/VQVAE/net_last.pth' -args.resume_trans = f'{model_path}/VQTransformer_corruption05/net_best_fid.pth' -args.down_t = 2 -args.depth = 3 -args.block_size = 51 - -import clip -import torch -import numpy as np -import models.vqvae as vqvae -import models.t2m_trans as trans -from utils.motion_process import recover_from_ric -import visualization.plot_3d_global as plot_3d -from models.rotation2xyz import Rotation2xyz -import numpy as np -from trimesh import Trimesh -import gc - -import torch -from visualize.simplify_loc2rot import joints2smpl -import pyrender -# import matplotlib.pyplot as plt - -import io -import imageio -from shapely import geometry -import trimesh -from pyrender.constants import RenderFlags -import math -# import ffmpeg -# from PIL import Image -import hashlib -import gradio as gr -import moviepy.editor as mp - -## load clip model and datasets -is_cuda = torch.cuda.is_available() -device = torch.device("cuda" if is_cuda else "cpu") -print(device) -clip_model, clip_preprocess = clip.load("ViT-B/32", device=device, jit=False, download_root='./') # Must set jit=False for training - -if is_cuda: - clip.model.convert_weights(clip_model) - -clip_model.eval() -for p in clip_model.parameters(): - p.requires_grad = False - -net = vqvae.HumanVQVAE(args, ## use args to define different parameters in different quantizers - args.nb_code, - args.code_dim, - args.output_emb_width, - args.down_t, - args.stride_t, - args.width, - 
args.depth, - args.dilation_growth_rate) - - -trans_encoder = trans.Text2Motion_Transformer(num_vq=args.nb_code, - embed_dim=1024, - clip_dim=args.clip_dim, - block_size=args.block_size, - num_layers=9, - n_head=16, - drop_out_rate=args.drop_out_rate, - fc_rate=args.ff_rate) - - -print('loading checkpoint from {}'.format(args.resume_pth)) -ckpt = torch.load(args.resume_pth, map_location='cpu') -net.load_state_dict(ckpt['net'], strict=True) -net.eval() - -print('loading transformer checkpoint from {}'.format(args.resume_trans)) -ckpt = torch.load(args.resume_trans, map_location='cpu') -trans_encoder.load_state_dict(ckpt['trans'], strict=True) -trans_encoder.eval() - -mean = torch.from_numpy(np.load(f'{model_path}/meta/mean.npy')) -std = torch.from_numpy(np.load(f'{model_path}/meta/std.npy')) - -if is_cuda: - net.cuda() - trans_encoder.cuda() - mean = mean.cuda() - std = std.cuda() - -def render(motions, device_id=0, name='test_vis'): - frames, njoints, nfeats = motions.shape - MINS = motions.min(axis=0).min(axis=0) - MAXS = motions.max(axis=0).max(axis=0) - - height_offset = MINS[1] - motions[:, :, 1] -= height_offset - trajec = motions[:, 0, [0, 2]] - is_cuda = torch.cuda.is_available() - # device = torch.device("cuda" if is_cuda else "cpu") - j2s = joints2smpl(num_frames=frames, device_id=0, cuda=is_cuda) - rot2xyz = Rotation2xyz(device=device) - faces = rot2xyz.smpl_model.faces - - if not os.path.exists(f'output/{name}_pred.pt'): - print(f'Running SMPLify, it may take a few minutes.') - motion_tensor, opt_dict = j2s.joint2smpl(motions) # [nframes, njoints, 3] - - vertices = rot2xyz(torch.tensor(motion_tensor).clone(), mask=None, - pose_rep='rot6d', translation=True, glob=True, - jointstype='vertices', - vertstrans=True) - vertices = vertices.detach().cpu() - torch.save(vertices, f'output/{name}_pred.pt') - else: - vertices = torch.load(f'output/{name}_pred.pt') - frames = vertices.shape[3] # shape: 1, nb_frames, 3, nb_joints - print(vertices.shape) - MINS = torch.min(torch.min(vertices[0], axis=0)[0], axis=1)[0] - MAXS = torch.max(torch.max(vertices[0], axis=0)[0], axis=1)[0] - - out_list = [] - - minx = MINS[0] - 0.5 - maxx = MAXS[0] + 0.5 - minz = MINS[2] - 0.5 - maxz = MAXS[2] + 0.5 - polygon = geometry.Polygon([[minx, minz], [minx, maxz], [maxx, maxz], [maxx, minz]]) - polygon_mesh = trimesh.creation.extrude_polygon(polygon, 1e-5) - - vid = [] - for i in range(frames): - if i % 10 == 0: - print(i) - - mesh = Trimesh(vertices=vertices[0, :, :, i].squeeze().tolist(), faces=faces) - - base_color = (0.11, 0.53, 0.8, 0.5) - ## OPAQUE rendering without alpha - ## BLEND rendering consider alpha - material = pyrender.MetallicRoughnessMaterial( - metallicFactor=0.7, - alphaMode='OPAQUE', - baseColorFactor=base_color - ) - - - mesh = pyrender.Mesh.from_trimesh(mesh, material=material) - - polygon_mesh.visual.face_colors = [0, 0, 0, 0.21] - polygon_render = pyrender.Mesh.from_trimesh(polygon_mesh, smooth=False) - - bg_color = [1, 1, 1, 0.8] - scene = pyrender.Scene(bg_color=bg_color, ambient_light=(0.4, 0.4, 0.4)) - - sx, sy, tx, ty = [0.75, 0.75, 0, 0.10] - - camera = pyrender.PerspectiveCamera(yfov=(np.pi / 3.0)) - - light = pyrender.DirectionalLight(color=[1,1,1], intensity=300) - - scene.add(mesh) - - c = np.pi / 2 - - scene.add(polygon_render, pose=np.array([[ 1, 0, 0, 0], - - [ 0, np.cos(c), -np.sin(c), MINS[1].cpu().numpy()], - - [ 0, np.sin(c), np.cos(c), 0], - - [ 0, 0, 0, 1]])) - - light_pose = np.eye(4) - light_pose[:3, 3] = [0, -1, 1] - scene.add(light, pose=light_pose.copy()) - - 
light_pose[:3, 3] = [0, 1, 1] - scene.add(light, pose=light_pose.copy()) - - light_pose[:3, 3] = [1, 1, 2] - scene.add(light, pose=light_pose.copy()) - - - c = -np.pi / 6 - - scene.add(camera, pose=[[ 1, 0, 0, (minx+maxx).cpu().numpy()/2], - - [ 0, np.cos(c), -np.sin(c), 1.5], - - [ 0, np.sin(c), np.cos(c), max(4, minz.cpu().numpy()+(1.5-MINS[1].cpu().numpy())*2, (maxx-minx).cpu().numpy())], - - [ 0, 0, 0, 1] - ]) - - # render scene - r = pyrender.OffscreenRenderer(960, 960) - - color, _ = r.render(scene, flags=RenderFlags.RGBA) - # Image.fromarray(color).save(outdir+'/'+name+'_'+str(i)+'.png') - - vid.append(color) - - r.delete() - - out = np.stack(vid, axis=0) - imageio.mimwrite(f'output/results.gif', out, fps=20) - out_video = mp.VideoFileClip(f'output/results.gif') - out_video.write_videofile("output/results.mp4") - del out, vertices - return f'output/results.mp4' - -def predict(clip_text, method='fast'): - gc.collect() - if torch.cuda.is_available(): - text = clip.tokenize([clip_text], truncate=True).cuda() - else: - text = clip.tokenize([clip_text], truncate=True) - feat_clip_text = clip_model.encode_text(text).float() - index_motion = trans_encoder.sample(feat_clip_text[0:1], False) - pred_pose = net.forward_decoder(index_motion) - pred_xyz = recover_from_ric((pred_pose*std+mean).float(), 22) - output_name = hashlib.md5(clip_text.encode()).hexdigest() - if method == 'fast': - xyz = pred_xyz.reshape(1, -1, 22, 3) - pose_vis = plot_3d.draw_to_batch(xyz.detach().cpu().numpy(), title_batch=None, outname=[f'output/results.gif']) - out_video = mp.VideoFileClip("output/results.gif") - out_video.write_videofile("output/results.mp4") - return f'output/results.mp4' - elif method == 'slow': - output_path = render(pred_xyz.detach().cpu().numpy().squeeze(axis=0), device_id=0, name=output_name) - return output_path - - -# ---- Gradio Layout ----- -text_prompt = gr.Textbox(label="Text prompt", lines=1, interactive=True) -video_out = gr.Video(label="Motion", mirror_webcam=False, interactive=False) -demo = gr.Blocks() -demo.encrypt = False - -with demo: - gr.Markdown(''' -
-

Generating Human Motion from Textual Descriptions (T2M-GPT)

- This space uses T2M-GPT models based on Vector Quantised-Variational AutoEncoder (VQ-VAE) and Generative Pre-trained Transformer (GPT) for human motion generation from textual descriptions🤗 -
- ''') - with gr.Row(): - with gr.Column(): - gr.Markdown(''' -
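    # A minimal sketch (not called by the app) of the text-to-motion flow this demo
    # wraps, using the module-level objects defined above (clip_model, trans_encoder,
    # net, mean, std). Device handling is omitted for brevity -- see predict() above
    # for the CUDA path; shapes in the comments are illustrative only.
    def text_to_motion_sketch(clip_text):
        tokens = clip.tokenize([clip_text], truncate=True)             # CLIP token ids for the prompt
        feat = clip_model.encode_text(tokens).float()                  # text feature from CLIP
        index_motion = trans_encoder.sample(feat[0:1], False)          # transformer samples motion codebook indices
        pred_pose = net.forward_decoder(index_motion)                  # VQ-VAE decoder reconstructs normalized motion features
        return recover_from_ric((pred_pose * std + mean).float(), 22)  # de-normalize and recover 22 joint positions per frame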
- Demo Slow -
a man starts off in an upright position with both arms extended out by his sides, he then brings his arms down to his body and claps his hands together. after this he walks down and to the left where he proceeds to sit on a seat -
-
- ''') - with gr.Column(): - gr.Markdown(''' -
- Demo Slow 2 -
a person puts their hands together, leans forwards slightly then swings the arms from right to left -
-
- ''') - with gr.Column(): - gr.Markdown(''' -
- Demo Slow 3 -
a man is practicing the waltz with a partner -
-
- ''') - with gr.Row(): - with gr.Column(): - gr.Markdown(''' - ### Generate human motion by **T2M-GPT** - ##### Step 1. Give prompt text describing human motion - ##### Step 2. Choice method to render output (Fast: Sketch skeleton; Slow: SMPL mesh, only work with GPU and running time around 2 mins) - ##### Step 3. Generate output and enjoy - ''') - with gr.Column(): - with gr.Row(): - text_prompt.render() - method = gr.Dropdown(["slow", "fast"], label="Method", value="slow") - with gr.Row(): - generate_btn = gr.Button("Generate") - generate_btn.click(predict, [text_prompt, method], [video_out], api_name="generate") - print(video_out) - with gr.Row(): - video_out.render() - with gr.Row(): - gr.Markdown(''' - ### You can test by following examples: - ''') - examples = gr.Examples(examples= - [ "a person jogs in place, slowly at first, then increases speed. they then back up and squat down.", - "a man steps forward and does a handstand", - "a man rises from the ground, walks in a circle and sits back down on the ground"], - label="Examples", inputs=[text_prompt]) - -demo.launch(debug=True) diff --git a/spaces/AIZerotoHero-Health4All/02-ClinicalTerminology/README.md b/spaces/AIZerotoHero-Health4All/02-ClinicalTerminology/README.md deleted file mode 100644 index 28796cea638944008464739ccfd3773687e64b3b..0000000000000000000000000000000000000000 --- a/spaces/AIZerotoHero-Health4All/02-ClinicalTerminology/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: 02 ClinicalTerminology -emoji: 🐠 -colorFrom: red -colorTo: green -sdk: gradio -sdk_version: 3.12.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/ASJMO/freegpt/client/css/buttons.css b/spaces/ASJMO/freegpt/client/css/buttons.css deleted file mode 100644 index e13f52d9a0414daaa80518bd205913a645a29563..0000000000000000000000000000000000000000 --- a/spaces/ASJMO/freegpt/client/css/buttons.css +++ /dev/null @@ -1,4 +0,0 @@ -.buttons { - display: flex; - justify-content: left; -} diff --git a/spaces/AchyuthGamer/OpenGPT/g4f/Provider/Providers/GptGod.py b/spaces/AchyuthGamer/OpenGPT/g4f/Provider/Providers/GptGod.py deleted file mode 100644 index 662884ddbec5ebffa03aae98a36727ff2cb6c366..0000000000000000000000000000000000000000 --- a/spaces/AchyuthGamer/OpenGPT/g4f/Provider/Providers/GptGod.py +++ /dev/null @@ -1,51 +0,0 @@ -from __future__ import annotations -import secrets, json -from aiohttp import ClientSession -from typing import AsyncGenerator -from .base_provider import AsyncGeneratorProvider -from .helper import format_prompt - -class GptGod(AsyncGeneratorProvider): - url = "https://gptgod.site" - supports_gpt_35_turbo = True - working = True - - @classmethod - async def create_async_generator( - cls, - model: str, - messages: list[dict[str, str]], - **kwargs - ) -> AsyncGenerator: - headers = { - "User-Agent": "Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:109.0) Gecko/20100101 Firefox/118.0", - "Accept": "text/event-stream", - "Accept-Language": "de,en-US;q=0.7,en;q=0.3", - "Accept-Encoding": "gzip, deflate, br", - "Alt-Used": "gptgod.site", - "Connection": "keep-alive", - "Referer": "https://gptgod.site/", - "Sec-Fetch-Dest": "empty", - "Sec-Fetch-Mode": "cors", - "Sec-Fetch-Site": "same-origin", - "Pragma": "no-cache", - "Cache-Control": "no-cache", - } - async with ClientSession(headers=headers) as session: - prompt = format_prompt(messages) - data = { - "content": prompt, - "id": secrets.token_hex(16).zfill(32) - } - async with 
session.get(f"{cls.url}/api/session/free/gpt3p5", params=data) as response: - response.raise_for_status() - event = None - async for line in response.content: - if line.startswith(b'event: '): - event = line[7:-1] - elif event == b"data" and line.startswith(b"data: "): - data = json.loads(line[6:-1]) - if data: - yield data - elif event == b"done": - break \ No newline at end of file diff --git a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/maker/builders/Builders.js b/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/maker/builders/Builders.js deleted file mode 100644 index bedfa7a49c502236aa2dbb9f26cdfd45b98b8cd1..0000000000000000000000000000000000000000 --- a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/maker/builders/Builders.js +++ /dev/null @@ -1,79 +0,0 @@ -import CreateImage from './CreateImage.js'; -import CreateSprite from './CreateSprite.js'; -import CreateVideo from './CreateVideo.js'; -import CreateText from './CreateText.js'; -import CreateBBCodeText from './CreateBBCodeText.js'; -import CreateRoundRectangle from './CreateRoundRectangle.js'; -import CreateNinePatch from './CreateNinePatch.js'; -import CreateNinePatch2 from './CreateNinePatch2.js'; -import CreateCanvas from './CreateCanvas.js'; -import CreateCircleMaskImage from './CreateCircleMaskImage.js'; -import CreateSpace from './CreateSpace.js'; - -import CreateSizer from './CreateSizer.js'; -import CreateFixWidthSizer from './CreateFixWidthSizer.js'; -import CreateGridSizer from './CreateGridSizer.js'; -import CreateOverlapSizer from './CreateOverlapSizer.js'; - -import CreateButtons from './CreateButtons.js'; -import CreateFixWidthButtons from './CreateFixWidthButtons.js'; -import CreateGridButtons from './CreateGridButtons.js'; - -import CreateLabel from './CreateLabel.js'; -import CreateBadgeLabel from './CreateBadgeLabel.js'; -import CreateDialog from './CreateDialog.js'; -import CreateTextBox from './CreateTextBox.js'; -import CreateSlider from './CreateSlider.js'; -import CreateNumberBar from './CreateNumberBar.js'; -import CreateScrollBar from './CreateScrollBar.js'; -import CreateTextArea from './CreateTextArea.js'; -import CreatePages from './CreatePages.js'; -import CreateToast from './CreateToast.js'; -import CreateKnob from './CreateKnob.js'; -import CreateHolyGrail from './CreateHolyGrail.js'; -import CreateMenu from './CreateMenu.js'; - -var Builders = { - Image: CreateImage, - Sprite: CreateSprite, - Video: CreateVideo, - Text: CreateText, - BBCodeText: CreateBBCodeText, - RoundRectangle: CreateRoundRectangle, - Ninepatch: CreateNinePatch, - Ninepatch2: CreateNinePatch2, - Canvas: CreateCanvas, - CircleMaskImage: CreateCircleMaskImage, - Space: CreateSpace, - - Sizer: CreateSizer, - FixWidthSizer: CreateFixWidthSizer, - GridSizer: CreateGridSizer, - OverlapSizer: CreateOverlapSizer, - - Buttons: CreateButtons, - FixWidthButtons: CreateFixWidthButtons, - GridButtons: CreateGridButtons, - - Label: CreateLabel, - BadgeLabel: CreateBadgeLabel, - Dialog: CreateDialog, - TextBox: CreateTextBox, - Slider: CreateSlider, - NumberBar: CreateNumberBar, - ScrollBar: CreateScrollBar, - TextArea: CreateTextArea, - Pages: CreatePages, - Toast: CreateToast, - Knob: CreateKnob, - HolyGrail: CreateHolyGrail, - Menu: CreateMenu, -}; - -/* -function(scene, data, view, styles, customBuilders) { - return gameObject; -} -*/ - -export default Builders; diff --git a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/maker/builders/CreateLabel.js 
b/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/maker/builders/CreateLabel.js deleted file mode 100644 index 8c20f7f845f90be917a21a9cc0596c3cd8afabe5..0000000000000000000000000000000000000000 --- a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/maker/builders/CreateLabel.js +++ /dev/null @@ -1,8 +0,0 @@ -import CreateAnyLabel from './utils/CreateAnyLabel.js'; -import Label from '../../label/Label.js'; - -var CreateLabel = function (scene, data, view, styles, customBuilders) { - return CreateAnyLabel(scene, data, view, styles, customBuilders, Label); -} - -export default CreateLabel; \ No newline at end of file diff --git a/spaces/Allie7/Nose/Dockerfile b/spaces/Allie7/Nose/Dockerfile deleted file mode 100644 index e903078eb67547d100c8e5548b2d7959ce565413..0000000000000000000000000000000000000000 --- a/spaces/Allie7/Nose/Dockerfile +++ /dev/null @@ -1,21 +0,0 @@ -FROM node: 18-bullseye-slim - -RUN apt-get update && \ - -apt-get install -y git - -RUN git clone https://gitgud.io/khanon/oai-reverse-proxy.git/app - -WORKDIR /app - -RUN npm install - -COPY Dockerfile greeting.md env* ./ - -RUN npm run build - -EXPOSE 7860 - -ENV NODE_ENV=production - -CMD ["npm", "start" ] \ No newline at end of file diff --git a/spaces/AnTo2209/3D_Zeroshot_Neural_Style_Transfer/src/model/__init__.py b/spaces/AnTo2209/3D_Zeroshot_Neural_Style_Transfer/src/model/__init__.py deleted file mode 100644 index b6602d66834efa27a8b88c5eb92ed901389bd9ca..0000000000000000000000000000000000000000 --- a/spaces/AnTo2209/3D_Zeroshot_Neural_Style_Transfer/src/model/__init__.py +++ /dev/null @@ -1,6 +0,0 @@ -from src.model.styleRF import StyleRF -from src.utils.registry import Registry - -MODEL_REGISTRY = Registry("MODEL") - -MODEL_REGISTRY.register(StyleRF) \ No newline at end of file diff --git a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/examples/community/stable_diffusion_tensorrt_img2img.py b/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/examples/community/stable_diffusion_tensorrt_img2img.py deleted file mode 100644 index 67c7c2d00fbf53f26e42aa96dc5e049ea3b3d796..0000000000000000000000000000000000000000 --- a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/examples/community/stable_diffusion_tensorrt_img2img.py +++ /dev/null @@ -1,1055 +0,0 @@ -# -# Copyright 2023 The HuggingFace Inc. team. -# SPDX-FileCopyrightText: Copyright (c) 1993-2023 NVIDIA CORPORATION & AFFILIATES. All rights reserved. -# SPDX-License-Identifier: Apache-2.0 -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. 
- -import gc -import os -from collections import OrderedDict -from copy import copy -from typing import List, Optional, Union - -import numpy as np -import onnx -import onnx_graphsurgeon as gs -import PIL -import tensorrt as trt -import torch -from huggingface_hub import snapshot_download -from onnx import shape_inference -from polygraphy import cuda -from polygraphy.backend.common import bytes_from_path -from polygraphy.backend.onnx.loader import fold_constants -from polygraphy.backend.trt import ( - CreateConfig, - Profile, - engine_from_bytes, - engine_from_network, - network_from_onnx_path, - save_engine, -) -from polygraphy.backend.trt import util as trt_util -from transformers import CLIPFeatureExtractor, CLIPTextModel, CLIPTokenizer - -from diffusers.models import AutoencoderKL, UNet2DConditionModel -from diffusers.pipelines.stable_diffusion import ( - StableDiffusionImg2ImgPipeline, - StableDiffusionPipelineOutput, - StableDiffusionSafetyChecker, -) -from diffusers.schedulers import DDIMScheduler -from diffusers.utils import DIFFUSERS_CACHE, logging - - -""" -Installation instructions -python3 -m pip install --upgrade transformers diffusers>=0.16.0 -python3 -m pip install --upgrade tensorrt>=8.6.1 -python3 -m pip install --upgrade polygraphy>=0.47.0 onnx-graphsurgeon --extra-index-url https://pypi.ngc.nvidia.com -python3 -m pip install onnxruntime -""" - -TRT_LOGGER = trt.Logger(trt.Logger.ERROR) -logger = logging.get_logger(__name__) # pylint: disable=invalid-name - -# Map of numpy dtype -> torch dtype -numpy_to_torch_dtype_dict = { - np.uint8: torch.uint8, - np.int8: torch.int8, - np.int16: torch.int16, - np.int32: torch.int32, - np.int64: torch.int64, - np.float16: torch.float16, - np.float32: torch.float32, - np.float64: torch.float64, - np.complex64: torch.complex64, - np.complex128: torch.complex128, -} -if np.version.full_version >= "1.24.0": - numpy_to_torch_dtype_dict[np.bool_] = torch.bool -else: - numpy_to_torch_dtype_dict[np.bool] = torch.bool - -# Map of torch dtype -> numpy dtype -torch_to_numpy_dtype_dict = {value: key for (key, value) in numpy_to_torch_dtype_dict.items()} - - -def device_view(t): - return cuda.DeviceView(ptr=t.data_ptr(), shape=t.shape, dtype=torch_to_numpy_dtype_dict[t.dtype]) - - -def preprocess_image(image): - """ - image: torch.Tensor - """ - w, h = image.size - w, h = (x - x % 32 for x in (w, h)) # resize to integer multiple of 32 - image = image.resize((w, h)) - image = np.array(image).astype(np.float32) / 255.0 - image = image[None].transpose(0, 3, 1, 2) - image = torch.from_numpy(image).contiguous() - return 2.0 * image - 1.0 - - -class Engine: - def __init__(self, engine_path): - self.engine_path = engine_path - self.engine = None - self.context = None - self.buffers = OrderedDict() - self.tensors = OrderedDict() - - def __del__(self): - [buf.free() for buf in self.buffers.values() if isinstance(buf, cuda.DeviceArray)] - del self.engine - del self.context - del self.buffers - del self.tensors - - def build( - self, - onnx_path, - fp16, - input_profile=None, - enable_preview=False, - enable_all_tactics=False, - timing_cache=None, - workspace_size=0, - ): - logger.warning(f"Building TensorRT engine for {onnx_path}: {self.engine_path}") - p = Profile() - if input_profile: - for name, dims in input_profile.items(): - assert len(dims) == 3 - p.add(name, min=dims[0], opt=dims[1], max=dims[2]) - - config_kwargs = {} - - config_kwargs["preview_features"] = [trt.PreviewFeature.DISABLE_EXTERNAL_TACTIC_SOURCES_FOR_CORE_0805] - if enable_preview: - # 
Faster dynamic shapes made optional since it increases engine build time. - config_kwargs["preview_features"].append(trt.PreviewFeature.FASTER_DYNAMIC_SHAPES_0805) - if workspace_size > 0: - config_kwargs["memory_pool_limits"] = {trt.MemoryPoolType.WORKSPACE: workspace_size} - if not enable_all_tactics: - config_kwargs["tactic_sources"] = [] - - engine = engine_from_network( - network_from_onnx_path(onnx_path, flags=[trt.OnnxParserFlag.NATIVE_INSTANCENORM]), - config=CreateConfig(fp16=fp16, profiles=[p], load_timing_cache=timing_cache, **config_kwargs), - save_timing_cache=timing_cache, - ) - save_engine(engine, path=self.engine_path) - - def load(self): - logger.warning(f"Loading TensorRT engine: {self.engine_path}") - self.engine = engine_from_bytes(bytes_from_path(self.engine_path)) - - def activate(self): - self.context = self.engine.create_execution_context() - - def allocate_buffers(self, shape_dict=None, device="cuda"): - for idx in range(trt_util.get_bindings_per_profile(self.engine)): - binding = self.engine[idx] - if shape_dict and binding in shape_dict: - shape = shape_dict[binding] - else: - shape = self.engine.get_binding_shape(binding) - dtype = trt.nptype(self.engine.get_binding_dtype(binding)) - if self.engine.binding_is_input(binding): - self.context.set_binding_shape(idx, shape) - tensor = torch.empty(tuple(shape), dtype=numpy_to_torch_dtype_dict[dtype]).to(device=device) - self.tensors[binding] = tensor - self.buffers[binding] = cuda.DeviceView(ptr=tensor.data_ptr(), shape=shape, dtype=dtype) - - def infer(self, feed_dict, stream): - start_binding, end_binding = trt_util.get_active_profile_bindings(self.context) - # shallow copy of ordered dict - device_buffers = copy(self.buffers) - for name, buf in feed_dict.items(): - assert isinstance(buf, cuda.DeviceView) - device_buffers[name] = buf - bindings = [0] * start_binding + [buf.ptr for buf in device_buffers.values()] - noerror = self.context.execute_async_v2(bindings=bindings, stream_handle=stream.ptr) - if not noerror: - raise ValueError("ERROR: inference failed.") - - return self.tensors - - -class Optimizer: - def __init__(self, onnx_graph): - self.graph = gs.import_onnx(onnx_graph) - - def cleanup(self, return_onnx=False): - self.graph.cleanup().toposort() - if return_onnx: - return gs.export_onnx(self.graph) - - def select_outputs(self, keep, names=None): - self.graph.outputs = [self.graph.outputs[o] for o in keep] - if names: - for i, name in enumerate(names): - self.graph.outputs[i].name = name - - def fold_constants(self, return_onnx=False): - onnx_graph = fold_constants(gs.export_onnx(self.graph), allow_onnxruntime_shape_inference=True) - self.graph = gs.import_onnx(onnx_graph) - if return_onnx: - return onnx_graph - - def infer_shapes(self, return_onnx=False): - onnx_graph = gs.export_onnx(self.graph) - if onnx_graph.ByteSize() > 2147483648: - raise TypeError("ERROR: model size exceeds supported 2GB limit") - else: - onnx_graph = shape_inference.infer_shapes(onnx_graph) - - self.graph = gs.import_onnx(onnx_graph) - if return_onnx: - return onnx_graph - - -class BaseModel: - def __init__(self, model, fp16=False, device="cuda", max_batch_size=16, embedding_dim=768, text_maxlen=77): - self.model = model - self.name = "SD Model" - self.fp16 = fp16 - self.device = device - - self.min_batch = 1 - self.max_batch = max_batch_size - self.min_image_shape = 256 # min image resolution: 256x256 - self.max_image_shape = 1024 # max image resolution: 1024x1024 - self.min_latent_shape = self.min_image_shape // 8 - 
self.max_latent_shape = self.max_image_shape // 8 - - self.embedding_dim = embedding_dim - self.text_maxlen = text_maxlen - - def get_model(self): - return self.model - - def get_input_names(self): - pass - - def get_output_names(self): - pass - - def get_dynamic_axes(self): - return None - - def get_sample_input(self, batch_size, image_height, image_width): - pass - - def get_input_profile(self, batch_size, image_height, image_width, static_batch, static_shape): - return None - - def get_shape_dict(self, batch_size, image_height, image_width): - return None - - def optimize(self, onnx_graph): - opt = Optimizer(onnx_graph) - opt.cleanup() - opt.fold_constants() - opt.infer_shapes() - onnx_opt_graph = opt.cleanup(return_onnx=True) - return onnx_opt_graph - - def check_dims(self, batch_size, image_height, image_width): - assert batch_size >= self.min_batch and batch_size <= self.max_batch - assert image_height % 8 == 0 or image_width % 8 == 0 - latent_height = image_height // 8 - latent_width = image_width // 8 - assert latent_height >= self.min_latent_shape and latent_height <= self.max_latent_shape - assert latent_width >= self.min_latent_shape and latent_width <= self.max_latent_shape - return (latent_height, latent_width) - - def get_minmax_dims(self, batch_size, image_height, image_width, static_batch, static_shape): - min_batch = batch_size if static_batch else self.min_batch - max_batch = batch_size if static_batch else self.max_batch - latent_height = image_height // 8 - latent_width = image_width // 8 - min_image_height = image_height if static_shape else self.min_image_shape - max_image_height = image_height if static_shape else self.max_image_shape - min_image_width = image_width if static_shape else self.min_image_shape - max_image_width = image_width if static_shape else self.max_image_shape - min_latent_height = latent_height if static_shape else self.min_latent_shape - max_latent_height = latent_height if static_shape else self.max_latent_shape - min_latent_width = latent_width if static_shape else self.min_latent_shape - max_latent_width = latent_width if static_shape else self.max_latent_shape - return ( - min_batch, - max_batch, - min_image_height, - max_image_height, - min_image_width, - max_image_width, - min_latent_height, - max_latent_height, - min_latent_width, - max_latent_width, - ) - - -def getOnnxPath(model_name, onnx_dir, opt=True): - return os.path.join(onnx_dir, model_name + (".opt" if opt else "") + ".onnx") - - -def getEnginePath(model_name, engine_dir): - return os.path.join(engine_dir, model_name + ".plan") - - -def build_engines( - models: dict, - engine_dir, - onnx_dir, - onnx_opset, - opt_image_height, - opt_image_width, - opt_batch_size=1, - force_engine_rebuild=False, - static_batch=False, - static_shape=True, - enable_preview=False, - enable_all_tactics=False, - timing_cache=None, - max_workspace_size=0, -): - built_engines = {} - if not os.path.isdir(onnx_dir): - os.makedirs(onnx_dir) - if not os.path.isdir(engine_dir): - os.makedirs(engine_dir) - - # Export models to ONNX - for model_name, model_obj in models.items(): - engine_path = getEnginePath(model_name, engine_dir) - if force_engine_rebuild or not os.path.exists(engine_path): - logger.warning("Building Engines...") - logger.warning("Engine build can take a while to complete") - onnx_path = getOnnxPath(model_name, onnx_dir, opt=False) - onnx_opt_path = getOnnxPath(model_name, onnx_dir) - if force_engine_rebuild or not os.path.exists(onnx_opt_path): - if force_engine_rebuild or not 
os.path.exists(onnx_path): - logger.warning(f"Exporting model: {onnx_path}") - model = model_obj.get_model() - with torch.inference_mode(), torch.autocast("cuda"): - inputs = model_obj.get_sample_input(opt_batch_size, opt_image_height, opt_image_width) - torch.onnx.export( - model, - inputs, - onnx_path, - export_params=True, - opset_version=onnx_opset, - do_constant_folding=True, - input_names=model_obj.get_input_names(), - output_names=model_obj.get_output_names(), - dynamic_axes=model_obj.get_dynamic_axes(), - ) - del model - torch.cuda.empty_cache() - gc.collect() - else: - logger.warning(f"Found cached model: {onnx_path}") - - # Optimize onnx - if force_engine_rebuild or not os.path.exists(onnx_opt_path): - logger.warning(f"Generating optimizing model: {onnx_opt_path}") - onnx_opt_graph = model_obj.optimize(onnx.load(onnx_path)) - onnx.save(onnx_opt_graph, onnx_opt_path) - else: - logger.warning(f"Found cached optimized model: {onnx_opt_path} ") - - # Build TensorRT engines - for model_name, model_obj in models.items(): - engine_path = getEnginePath(model_name, engine_dir) - engine = Engine(engine_path) - onnx_path = getOnnxPath(model_name, onnx_dir, opt=False) - onnx_opt_path = getOnnxPath(model_name, onnx_dir) - - if force_engine_rebuild or not os.path.exists(engine.engine_path): - engine.build( - onnx_opt_path, - fp16=True, - input_profile=model_obj.get_input_profile( - opt_batch_size, - opt_image_height, - opt_image_width, - static_batch=static_batch, - static_shape=static_shape, - ), - enable_preview=enable_preview, - timing_cache=timing_cache, - workspace_size=max_workspace_size, - ) - built_engines[model_name] = engine - - # Load and activate TensorRT engines - for model_name, model_obj in models.items(): - engine = built_engines[model_name] - engine.load() - engine.activate() - - return built_engines - - -def runEngine(engine, feed_dict, stream): - return engine.infer(feed_dict, stream) - - -class CLIP(BaseModel): - def __init__(self, model, device, max_batch_size, embedding_dim): - super(CLIP, self).__init__( - model=model, device=device, max_batch_size=max_batch_size, embedding_dim=embedding_dim - ) - self.name = "CLIP" - - def get_input_names(self): - return ["input_ids"] - - def get_output_names(self): - return ["text_embeddings", "pooler_output"] - - def get_dynamic_axes(self): - return {"input_ids": {0: "B"}, "text_embeddings": {0: "B"}} - - def get_input_profile(self, batch_size, image_height, image_width, static_batch, static_shape): - self.check_dims(batch_size, image_height, image_width) - min_batch, max_batch, _, _, _, _, _, _, _, _ = self.get_minmax_dims( - batch_size, image_height, image_width, static_batch, static_shape - ) - return { - "input_ids": [(min_batch, self.text_maxlen), (batch_size, self.text_maxlen), (max_batch, self.text_maxlen)] - } - - def get_shape_dict(self, batch_size, image_height, image_width): - self.check_dims(batch_size, image_height, image_width) - return { - "input_ids": (batch_size, self.text_maxlen), - "text_embeddings": (batch_size, self.text_maxlen, self.embedding_dim), - } - - def get_sample_input(self, batch_size, image_height, image_width): - self.check_dims(batch_size, image_height, image_width) - return torch.zeros(batch_size, self.text_maxlen, dtype=torch.int32, device=self.device) - - def optimize(self, onnx_graph): - opt = Optimizer(onnx_graph) - opt.select_outputs([0]) # delete graph output#1 - opt.cleanup() - opt.fold_constants() - opt.infer_shapes() - opt.select_outputs([0], names=["text_embeddings"]) # rename network 
output - opt_onnx_graph = opt.cleanup(return_onnx=True) - return opt_onnx_graph - - -def make_CLIP(model, device, max_batch_size, embedding_dim, inpaint=False): - return CLIP(model, device=device, max_batch_size=max_batch_size, embedding_dim=embedding_dim) - - -class UNet(BaseModel): - def __init__( - self, model, fp16=False, device="cuda", max_batch_size=16, embedding_dim=768, text_maxlen=77, unet_dim=4 - ): - super(UNet, self).__init__( - model=model, - fp16=fp16, - device=device, - max_batch_size=max_batch_size, - embedding_dim=embedding_dim, - text_maxlen=text_maxlen, - ) - self.unet_dim = unet_dim - self.name = "UNet" - - def get_input_names(self): - return ["sample", "timestep", "encoder_hidden_states"] - - def get_output_names(self): - return ["latent"] - - def get_dynamic_axes(self): - return { - "sample": {0: "2B", 2: "H", 3: "W"}, - "encoder_hidden_states": {0: "2B"}, - "latent": {0: "2B", 2: "H", 3: "W"}, - } - - def get_input_profile(self, batch_size, image_height, image_width, static_batch, static_shape): - latent_height, latent_width = self.check_dims(batch_size, image_height, image_width) - ( - min_batch, - max_batch, - _, - _, - _, - _, - min_latent_height, - max_latent_height, - min_latent_width, - max_latent_width, - ) = self.get_minmax_dims(batch_size, image_height, image_width, static_batch, static_shape) - return { - "sample": [ - (2 * min_batch, self.unet_dim, min_latent_height, min_latent_width), - (2 * batch_size, self.unet_dim, latent_height, latent_width), - (2 * max_batch, self.unet_dim, max_latent_height, max_latent_width), - ], - "encoder_hidden_states": [ - (2 * min_batch, self.text_maxlen, self.embedding_dim), - (2 * batch_size, self.text_maxlen, self.embedding_dim), - (2 * max_batch, self.text_maxlen, self.embedding_dim), - ], - } - - def get_shape_dict(self, batch_size, image_height, image_width): - latent_height, latent_width = self.check_dims(batch_size, image_height, image_width) - return { - "sample": (2 * batch_size, self.unet_dim, latent_height, latent_width), - "encoder_hidden_states": (2 * batch_size, self.text_maxlen, self.embedding_dim), - "latent": (2 * batch_size, 4, latent_height, latent_width), - } - - def get_sample_input(self, batch_size, image_height, image_width): - latent_height, latent_width = self.check_dims(batch_size, image_height, image_width) - dtype = torch.float16 if self.fp16 else torch.float32 - return ( - torch.randn( - 2 * batch_size, self.unet_dim, latent_height, latent_width, dtype=torch.float32, device=self.device - ), - torch.tensor([1.0], dtype=torch.float32, device=self.device), - torch.randn(2 * batch_size, self.text_maxlen, self.embedding_dim, dtype=dtype, device=self.device), - ) - - -def make_UNet(model, device, max_batch_size, embedding_dim, inpaint=False): - return UNet( - model, - fp16=True, - device=device, - max_batch_size=max_batch_size, - embedding_dim=embedding_dim, - unet_dim=(9 if inpaint else 4), - ) - - -class VAE(BaseModel): - def __init__(self, model, device, max_batch_size, embedding_dim): - super(VAE, self).__init__( - model=model, device=device, max_batch_size=max_batch_size, embedding_dim=embedding_dim - ) - self.name = "VAE decoder" - - def get_input_names(self): - return ["latent"] - - def get_output_names(self): - return ["images"] - - def get_dynamic_axes(self): - return {"latent": {0: "B", 2: "H", 3: "W"}, "images": {0: "B", 2: "8H", 3: "8W"}} - - def get_input_profile(self, batch_size, image_height, image_width, static_batch, static_shape): - latent_height, latent_width = 
self.check_dims(batch_size, image_height, image_width) - ( - min_batch, - max_batch, - _, - _, - _, - _, - min_latent_height, - max_latent_height, - min_latent_width, - max_latent_width, - ) = self.get_minmax_dims(batch_size, image_height, image_width, static_batch, static_shape) - return { - "latent": [ - (min_batch, 4, min_latent_height, min_latent_width), - (batch_size, 4, latent_height, latent_width), - (max_batch, 4, max_latent_height, max_latent_width), - ] - } - - def get_shape_dict(self, batch_size, image_height, image_width): - latent_height, latent_width = self.check_dims(batch_size, image_height, image_width) - return { - "latent": (batch_size, 4, latent_height, latent_width), - "images": (batch_size, 3, image_height, image_width), - } - - def get_sample_input(self, batch_size, image_height, image_width): - latent_height, latent_width = self.check_dims(batch_size, image_height, image_width) - return torch.randn(batch_size, 4, latent_height, latent_width, dtype=torch.float32, device=self.device) - - -def make_VAE(model, device, max_batch_size, embedding_dim, inpaint=False): - return VAE(model, device=device, max_batch_size=max_batch_size, embedding_dim=embedding_dim) - - -class TorchVAEEncoder(torch.nn.Module): - def __init__(self, model): - super().__init__() - self.vae_encoder = model - - def forward(self, x): - return self.vae_encoder.encode(x).latent_dist.sample() - - -class VAEEncoder(BaseModel): - def __init__(self, model, device, max_batch_size, embedding_dim): - super(VAEEncoder, self).__init__( - model=model, device=device, max_batch_size=max_batch_size, embedding_dim=embedding_dim - ) - self.name = "VAE encoder" - - def get_model(self): - vae_encoder = TorchVAEEncoder(self.model) - return vae_encoder - - def get_input_names(self): - return ["images"] - - def get_output_names(self): - return ["latent"] - - def get_dynamic_axes(self): - return {"images": {0: "B", 2: "8H", 3: "8W"}, "latent": {0: "B", 2: "H", 3: "W"}} - - def get_input_profile(self, batch_size, image_height, image_width, static_batch, static_shape): - assert batch_size >= self.min_batch and batch_size <= self.max_batch - min_batch = batch_size if static_batch else self.min_batch - max_batch = batch_size if static_batch else self.max_batch - self.check_dims(batch_size, image_height, image_width) - ( - min_batch, - max_batch, - min_image_height, - max_image_height, - min_image_width, - max_image_width, - _, - _, - _, - _, - ) = self.get_minmax_dims(batch_size, image_height, image_width, static_batch, static_shape) - - return { - "images": [ - (min_batch, 3, min_image_height, min_image_width), - (batch_size, 3, image_height, image_width), - (max_batch, 3, max_image_height, max_image_width), - ] - } - - def get_shape_dict(self, batch_size, image_height, image_width): - latent_height, latent_width = self.check_dims(batch_size, image_height, image_width) - return { - "images": (batch_size, 3, image_height, image_width), - "latent": (batch_size, 4, latent_height, latent_width), - } - - def get_sample_input(self, batch_size, image_height, image_width): - self.check_dims(batch_size, image_height, image_width) - return torch.randn(batch_size, 3, image_height, image_width, dtype=torch.float32, device=self.device) - - -def make_VAEEncoder(model, device, max_batch_size, embedding_dim, inpaint=False): - return VAEEncoder(model, device=device, max_batch_size=max_batch_size, embedding_dim=embedding_dim) - - -class TensorRTStableDiffusionImg2ImgPipeline(StableDiffusionImg2ImgPipeline): - r""" - Pipeline for image-to-image 
generation using TensorRT accelerated Stable Diffusion. - - This model inherits from [`StableDiffusionImg2ImgPipeline`]. Check the superclass documentation for the generic methods the - library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) - - Args: - vae ([`AutoencoderKL`]): - Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations. - text_encoder ([`CLIPTextModel`]): - Frozen text-encoder. Stable Diffusion uses the text portion of - [CLIP](https://huggingface.co/docs/transformers/model_doc/clip#transformers.CLIPTextModel), specifically - the [clip-vit-large-patch14](https://huggingface.co/openai/clip-vit-large-patch14) variant. - tokenizer (`CLIPTokenizer`): - Tokenizer of class - [CLIPTokenizer](https://huggingface.co/docs/transformers/v4.21.0/en/model_doc/clip#transformers.CLIPTokenizer). - unet ([`UNet2DConditionModel`]): Conditional U-Net architecture to denoise the encoded image latents. - scheduler ([`SchedulerMixin`]): - A scheduler to be used in combination with `unet` to denoise the encoded image latents. Can be one of - [`DDIMScheduler`], [`LMSDiscreteScheduler`], or [`PNDMScheduler`]. - safety_checker ([`StableDiffusionSafetyChecker`]): - Classification module that estimates whether generated images could be considered offensive or harmful. - Please, refer to the [model card](https://huggingface.co/runwayml/stable-diffusion-v1-5) for details. - feature_extractor ([`CLIPFeatureExtractor`]): - Model that extracts features from generated images to be used as inputs for the `safety_checker`. - """ - - def __init__( - self, - vae: AutoencoderKL, - text_encoder: CLIPTextModel, - tokenizer: CLIPTokenizer, - unet: UNet2DConditionModel, - scheduler: DDIMScheduler, - safety_checker: StableDiffusionSafetyChecker, - feature_extractor: CLIPFeatureExtractor, - requires_safety_checker: bool = True, - stages=["clip", "unet", "vae", "vae_encoder"], - image_height: int = 512, - image_width: int = 512, - max_batch_size: int = 16, - # ONNX export parameters - onnx_opset: int = 17, - onnx_dir: str = "onnx", - # TensorRT engine build parameters - engine_dir: str = "engine", - build_preview_features: bool = True, - force_engine_rebuild: bool = False, - timing_cache: str = "timing_cache", - ): - super().__init__( - vae, text_encoder, tokenizer, unet, scheduler, safety_checker, feature_extractor, requires_safety_checker - ) - - self.vae.forward = self.vae.decode - - self.stages = stages - self.image_height, self.image_width = image_height, image_width - self.inpaint = False - self.onnx_opset = onnx_opset - self.onnx_dir = onnx_dir - self.engine_dir = engine_dir - self.force_engine_rebuild = force_engine_rebuild - self.timing_cache = timing_cache - self.build_static_batch = False - self.build_dynamic_shape = False - self.build_preview_features = build_preview_features - - self.max_batch_size = max_batch_size - # TODO: Restrict batch size to 4 for larger image dimensions as a WAR for TensorRT limitation. 
- if self.build_dynamic_shape or self.image_height > 512 or self.image_width > 512: - self.max_batch_size = 4 - - self.stream = None # loaded in loadResources() - self.models = {} # loaded in __loadModels() - self.engine = {} # loaded in build_engines() - - def __loadModels(self): - # Load pipeline models - self.embedding_dim = self.text_encoder.config.hidden_size - models_args = { - "device": self.torch_device, - "max_batch_size": self.max_batch_size, - "embedding_dim": self.embedding_dim, - "inpaint": self.inpaint, - } - if "clip" in self.stages: - self.models["clip"] = make_CLIP(self.text_encoder, **models_args) - if "unet" in self.stages: - self.models["unet"] = make_UNet(self.unet, **models_args) - if "vae" in self.stages: - self.models["vae"] = make_VAE(self.vae, **models_args) - if "vae_encoder" in self.stages: - self.models["vae_encoder"] = make_VAEEncoder(self.vae, **models_args) - - @classmethod - def set_cached_folder(cls, pretrained_model_name_or_path: Optional[Union[str, os.PathLike]], **kwargs): - cache_dir = kwargs.pop("cache_dir", DIFFUSERS_CACHE) - resume_download = kwargs.pop("resume_download", False) - proxies = kwargs.pop("proxies", None) - local_files_only = kwargs.pop("local_files_only", False) - use_auth_token = kwargs.pop("use_auth_token", None) - revision = kwargs.pop("revision", None) - - cls.cached_folder = ( - pretrained_model_name_or_path - if os.path.isdir(pretrained_model_name_or_path) - else snapshot_download( - pretrained_model_name_or_path, - cache_dir=cache_dir, - resume_download=resume_download, - proxies=proxies, - local_files_only=local_files_only, - use_auth_token=use_auth_token, - revision=revision, - ) - ) - - def to(self, torch_device: Optional[Union[str, torch.device]] = None, silence_dtype_warnings: bool = False): - super().to(torch_device, silence_dtype_warnings=silence_dtype_warnings) - - self.onnx_dir = os.path.join(self.cached_folder, self.onnx_dir) - self.engine_dir = os.path.join(self.cached_folder, self.engine_dir) - self.timing_cache = os.path.join(self.cached_folder, self.timing_cache) - - # set device - self.torch_device = self._execution_device - logger.warning(f"Running inference on device: {self.torch_device}") - - # load models - self.__loadModels() - - # build engines - self.engine = build_engines( - self.models, - self.engine_dir, - self.onnx_dir, - self.onnx_opset, - opt_image_height=self.image_height, - opt_image_width=self.image_width, - force_engine_rebuild=self.force_engine_rebuild, - static_batch=self.build_static_batch, - static_shape=not self.build_dynamic_shape, - enable_preview=self.build_preview_features, - timing_cache=self.timing_cache, - ) - - return self - - def __initialize_timesteps(self, timesteps, strength): - self.scheduler.set_timesteps(timesteps) - offset = self.scheduler.steps_offset if hasattr(self.scheduler, "steps_offset") else 0 - init_timestep = int(timesteps * strength) + offset - init_timestep = min(init_timestep, timesteps) - t_start = max(timesteps - init_timestep + offset, 0) - timesteps = self.scheduler.timesteps[t_start:].to(self.torch_device) - return timesteps, t_start - - def __preprocess_images(self, batch_size, images=()): - init_images = [] - for image in images: - image = image.to(self.torch_device).float() - image = image.repeat(batch_size, 1, 1, 1) - init_images.append(image) - return tuple(init_images) - - def __encode_image(self, init_image): - init_latents = runEngine(self.engine["vae_encoder"], {"images": device_view(init_image)}, self.stream)[ - "latent" - ] - init_latents = 0.18215 
* init_latents - return init_latents - - def __encode_prompt(self, prompt, negative_prompt): - r""" - Encodes the prompt into text encoder hidden states. - - Args: - prompt (`str` or `List[str]`, *optional*): - prompt to be encoded - negative_prompt (`str` or `List[str]`, *optional*): - The prompt or prompts not to guide the image generation. If not defined, one has to pass - `negative_prompt_embeds`. instead. If not defined, one has to pass `negative_prompt_embeds`. instead. - Ignored when not using guidance (i.e., ignored if `guidance_scale` is less than `1`). - """ - # Tokenize prompt - text_input_ids = ( - self.tokenizer( - prompt, - padding="max_length", - max_length=self.tokenizer.model_max_length, - truncation=True, - return_tensors="pt", - ) - .input_ids.type(torch.int32) - .to(self.torch_device) - ) - - text_input_ids_inp = device_view(text_input_ids) - # NOTE: output tensor for CLIP must be cloned because it will be overwritten when called again for negative prompt - text_embeddings = runEngine(self.engine["clip"], {"input_ids": text_input_ids_inp}, self.stream)[ - "text_embeddings" - ].clone() - - # Tokenize negative prompt - uncond_input_ids = ( - self.tokenizer( - negative_prompt, - padding="max_length", - max_length=self.tokenizer.model_max_length, - truncation=True, - return_tensors="pt", - ) - .input_ids.type(torch.int32) - .to(self.torch_device) - ) - uncond_input_ids_inp = device_view(uncond_input_ids) - uncond_embeddings = runEngine(self.engine["clip"], {"input_ids": uncond_input_ids_inp}, self.stream)[ - "text_embeddings" - ] - - # Concatenate the unconditional and text embeddings into a single batch to avoid doing two forward passes for classifier free guidance - text_embeddings = torch.cat([uncond_embeddings, text_embeddings]).to(dtype=torch.float16) - - return text_embeddings - - def __denoise_latent( - self, latents, text_embeddings, timesteps=None, step_offset=0, mask=None, masked_image_latents=None - ): - if not isinstance(timesteps, torch.Tensor): - timesteps = self.scheduler.timesteps - for step_index, timestep in enumerate(timesteps): - # Expand the latents if we are doing classifier free guidance - latent_model_input = torch.cat([latents] * 2) - latent_model_input = self.scheduler.scale_model_input(latent_model_input, timestep) - if isinstance(mask, torch.Tensor): - latent_model_input = torch.cat([latent_model_input, mask, masked_image_latents], dim=1) - - # Predict the noise residual - timestep_float = timestep.float() if timestep.dtype != torch.float32 else timestep - - sample_inp = device_view(latent_model_input) - timestep_inp = device_view(timestep_float) - embeddings_inp = device_view(text_embeddings) - noise_pred = runEngine( - self.engine["unet"], - {"sample": sample_inp, "timestep": timestep_inp, "encoder_hidden_states": embeddings_inp}, - self.stream, - )["latent"] - - # Perform guidance - noise_pred_uncond, noise_pred_text = noise_pred.chunk(2) - noise_pred = noise_pred_uncond + self.guidance_scale * (noise_pred_text - noise_pred_uncond) - - latents = self.scheduler.step(noise_pred, timestep, latents).prev_sample - - latents = 1.0 / 0.18215 * latents - return latents - - def __decode_latent(self, latents): - images = runEngine(self.engine["vae"], {"latent": device_view(latents)}, self.stream)["images"] - images = (images / 2 + 0.5).clamp(0, 1) - return images.cpu().permute(0, 2, 3, 1).float().numpy() - - def __loadResources(self, image_height, image_width, batch_size): - self.stream = cuda.Stream() - - # Allocate buffers for TensorRT engine bindings 
- for model_name, obj in self.models.items(): - self.engine[model_name].allocate_buffers( - shape_dict=obj.get_shape_dict(batch_size, image_height, image_width), device=self.torch_device - ) - - @torch.no_grad() - def __call__( - self, - prompt: Union[str, List[str]] = None, - image: Union[torch.FloatTensor, PIL.Image.Image] = None, - strength: float = 0.8, - num_inference_steps: int = 50, - guidance_scale: float = 7.5, - negative_prompt: Optional[Union[str, List[str]]] = None, - generator: Optional[Union[torch.Generator, List[torch.Generator]]] = None, - ): - r""" - Function invoked when calling the pipeline for generation. - - Args: - prompt (`str` or `List[str]`, *optional*): - The prompt or prompts to guide the image generation. If not defined, one has to pass `prompt_embeds`. - instead. - image (`PIL.Image.Image`): - `Image`, or tensor representing an image batch which will be inpainted, *i.e.* parts of the image will - be masked out with `mask_image` and repainted according to `prompt`. - strength (`float`, *optional*, defaults to 0.8): - Conceptually, indicates how much to transform the reference `image`. Must be between 0 and 1. `image` - will be used as a starting point, adding more noise to it the larger the `strength`. The number of - denoising steps depends on the amount of noise initially added. When `strength` is 1, added noise will - be maximum and the denoising process will run for the full number of iterations specified in - `num_inference_steps`. A value of 1, therefore, essentially ignores `image`. - num_inference_steps (`int`, *optional*, defaults to 50): - The number of denoising steps. More denoising steps usually lead to a higher quality image at the - expense of slower inference. - guidance_scale (`float`, *optional*, defaults to 7.5): - Guidance scale as defined in [Classifier-Free Diffusion Guidance](https://arxiv.org/abs/2207.12598). - `guidance_scale` is defined as `w` of equation 2. of [Imagen - Paper](https://arxiv.org/pdf/2205.11487.pdf). Guidance scale is enabled by setting `guidance_scale > - 1`. Higher guidance scale encourages to generate images that are closely linked to the text `prompt`, - usually at the expense of lower image quality. - negative_prompt (`str` or `List[str]`, *optional*): - The prompt or prompts not to guide the image generation. If not defined, one has to pass - `negative_prompt_embeds`. instead. If not defined, one has to pass `negative_prompt_embeds`. instead. - Ignored when not using guidance (i.e., ignored if `guidance_scale` is less than `1`). - generator (`torch.Generator` or `List[torch.Generator]`, *optional*): - One or a list of [torch generator(s)](https://pytorch.org/docs/stable/generated/torch.Generator.html) - to make generation deterministic. 
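The `strength` argument documented above determines how much of the scheduler's timestep sequence is actually executed for image-to-image generation. A hedged sketch of the usual truncation rule; the same arithmetic appears verbatim in the Kandinsky `get_timesteps` helper later in this diff:

```python
def truncate_timesteps(timesteps, num_inference_steps: int, strength: float):
    # strength = 1.0 keeps the full schedule (pure-noise start); values near 0
    # keep only the final few denoising steps and stay close to the input image.
    init_timestep = min(int(num_inference_steps * strength), num_inference_steps)
    t_start = max(num_inference_steps - init_timestep, 0)
    return timesteps[t_start:], t_start

# Example: 50 steps at strength 0.8 -> the last 40 timesteps are run.
timesteps, t_start = truncate_timesteps(list(range(999, -1, -20)), 50, 0.8)
assert len(timesteps) == 40 and t_start == 10
```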
- - """ - self.generator = generator - self.denoising_steps = num_inference_steps - self.guidance_scale = guidance_scale - - # Pre-compute latent input scales and linear multistep coefficients - self.scheduler.set_timesteps(self.denoising_steps, device=self.torch_device) - - # Define call parameters - if prompt is not None and isinstance(prompt, str): - batch_size = 1 - prompt = [prompt] - elif prompt is not None and isinstance(prompt, list): - batch_size = len(prompt) - else: - raise ValueError(f"Expected prompt to be of type list or str but got {type(prompt)}") - - if negative_prompt is None: - negative_prompt = [""] * batch_size - - if negative_prompt is not None and isinstance(negative_prompt, str): - negative_prompt = [negative_prompt] - - assert len(prompt) == len(negative_prompt) - - if batch_size > self.max_batch_size: - raise ValueError( - f"Batch size {len(prompt)} is larger than allowed {self.max_batch_size}. If dynamic shape is used, then maximum batch size is 4" - ) - - # load resources - self.__loadResources(self.image_height, self.image_width, batch_size) - - with torch.inference_mode(), torch.autocast("cuda"), trt.Runtime(TRT_LOGGER): - # Initialize timesteps - timesteps, t_start = self.__initialize_timesteps(self.denoising_steps, strength) - latent_timestep = timesteps[:1].repeat(batch_size) - - # Pre-process input image - if isinstance(image, PIL.Image.Image): - image = preprocess_image(image) - init_image = self.__preprocess_images(batch_size, (image,))[0] - - # VAE encode init image - init_latents = self.__encode_image(init_image) - - # Add noise to latents using timesteps - noise = torch.randn( - init_latents.shape, generator=self.generator, device=self.torch_device, dtype=torch.float32 - ) - latents = self.scheduler.add_noise(init_latents, noise, latent_timestep) - - # CLIP text encoder - text_embeddings = self.__encode_prompt(prompt, negative_prompt) - - # UNet denoiser - latents = self.__denoise_latent(latents, text_embeddings, timesteps=timesteps, step_offset=t_start) - - # VAE decode latent - images = self.__decode_latent(latents) - - images = self.numpy_to_pil(images) - return StableDiffusionPipelineOutput(images=images, nsfw_content_detected=None) diff --git a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/scripts/convert_models_diffuser_to_diffusers.py b/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/scripts/convert_models_diffuser_to_diffusers.py deleted file mode 100644 index cc5321e33fe088c652f6014c6dab813bb8d5f246..0000000000000000000000000000000000000000 --- a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/scripts/convert_models_diffuser_to_diffusers.py +++ /dev/null @@ -1,100 +0,0 @@ -import json -import os - -import torch - -from diffusers import UNet1DModel - - -os.makedirs("hub/hopper-medium-v2/unet/hor32", exist_ok=True) -os.makedirs("hub/hopper-medium-v2/unet/hor128", exist_ok=True) - -os.makedirs("hub/hopper-medium-v2/value_function", exist_ok=True) - - -def unet(hor): - if hor == 128: - down_block_types = ("DownResnetBlock1D", "DownResnetBlock1D", "DownResnetBlock1D") - block_out_channels = (32, 128, 256) - up_block_types = ("UpResnetBlock1D", "UpResnetBlock1D") - - elif hor == 32: - down_block_types = ("DownResnetBlock1D", "DownResnetBlock1D", "DownResnetBlock1D", "DownResnetBlock1D") - block_out_channels = (32, 64, 128, 256) - up_block_types = ("UpResnetBlock1D", "UpResnetBlock1D", "UpResnetBlock1D") - model = torch.load(f"/Users/bglickenhaus/Documents/diffuser/temporal_unet-hopper-mediumv2-hor{hor}.torch") - state_dict = 
model.state_dict() - config = { - "down_block_types": down_block_types, - "block_out_channels": block_out_channels, - "up_block_types": up_block_types, - "layers_per_block": 1, - "use_timestep_embedding": True, - "out_block_type": "OutConv1DBlock", - "norm_num_groups": 8, - "downsample_each_block": False, - "in_channels": 14, - "out_channels": 14, - "extra_in_channels": 0, - "time_embedding_type": "positional", - "flip_sin_to_cos": False, - "freq_shift": 1, - "sample_size": 65536, - "mid_block_type": "MidResTemporalBlock1D", - "act_fn": "mish", - } - hf_value_function = UNet1DModel(**config) - print(f"length of state dict: {len(state_dict.keys())}") - print(f"length of value function dict: {len(hf_value_function.state_dict().keys())}") - mapping = dict(zip(model.state_dict().keys(), hf_value_function.state_dict().keys())) - for k, v in mapping.items(): - state_dict[v] = state_dict.pop(k) - hf_value_function.load_state_dict(state_dict) - - torch.save(hf_value_function.state_dict(), f"hub/hopper-medium-v2/unet/hor{hor}/diffusion_pytorch_model.bin") - with open(f"hub/hopper-medium-v2/unet/hor{hor}/config.json", "w") as f: - json.dump(config, f) - - -def value_function(): - config = { - "in_channels": 14, - "down_block_types": ("DownResnetBlock1D", "DownResnetBlock1D", "DownResnetBlock1D", "DownResnetBlock1D"), - "up_block_types": (), - "out_block_type": "ValueFunction", - "mid_block_type": "ValueFunctionMidBlock1D", - "block_out_channels": (32, 64, 128, 256), - "layers_per_block": 1, - "downsample_each_block": True, - "sample_size": 65536, - "out_channels": 14, - "extra_in_channels": 0, - "time_embedding_type": "positional", - "use_timestep_embedding": True, - "flip_sin_to_cos": False, - "freq_shift": 1, - "norm_num_groups": 8, - "act_fn": "mish", - } - - model = torch.load("/Users/bglickenhaus/Documents/diffuser/value_function-hopper-mediumv2-hor32.torch") - state_dict = model - hf_value_function = UNet1DModel(**config) - print(f"length of state dict: {len(state_dict.keys())}") - print(f"length of value function dict: {len(hf_value_function.state_dict().keys())}") - - mapping = dict(zip(state_dict.keys(), hf_value_function.state_dict().keys())) - for k, v in mapping.items(): - state_dict[v] = state_dict.pop(k) - - hf_value_function.load_state_dict(state_dict) - - torch.save(hf_value_function.state_dict(), "hub/hopper-medium-v2/value_function/diffusion_pytorch_model.bin") - with open("hub/hopper-medium-v2/value_function/config.json", "w") as f: - json.dump(config, f) - - -if __name__ == "__main__": - unet(32) - # unet(128) - value_function() diff --git a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/__init__.py b/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/__init__.py deleted file mode 100644 index c860b95f609c5c94d327df5d5f6541b87cd44488..0000000000000000000000000000000000000000 --- a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/__init__.py +++ /dev/null @@ -1,291 +0,0 @@ -__version__ = "0.19.3" - -from .configuration_utils import ConfigMixin -from .utils import ( - OptionalDependencyNotAvailable, - is_flax_available, - is_inflect_available, - is_invisible_watermark_available, - is_k_diffusion_available, - is_k_diffusion_version, - is_librosa_available, - is_note_seq_available, - is_onnx_available, - is_scipy_available, - is_torch_available, - is_torchsde_available, - is_transformers_available, - is_transformers_version, - is_unidecode_available, - logging, -) - - -try: - if not is_onnx_available(): - raise 
OptionalDependencyNotAvailable() -except OptionalDependencyNotAvailable: - from .utils.dummy_onnx_objects import * # noqa F403 -else: - from .pipelines import OnnxRuntimeModel - -try: - if not is_torch_available(): - raise OptionalDependencyNotAvailable() -except OptionalDependencyNotAvailable: - from .utils.dummy_pt_objects import * # noqa F403 -else: - from .models import ( - AsymmetricAutoencoderKL, - AutoencoderKL, - ControlNetModel, - ModelMixin, - MultiAdapter, - PriorTransformer, - T2IAdapter, - T5FilmDecoder, - Transformer2DModel, - UNet1DModel, - UNet2DConditionModel, - UNet2DModel, - UNet3DConditionModel, - VQModel, - ) - from .optimization import ( - get_constant_schedule, - get_constant_schedule_with_warmup, - get_cosine_schedule_with_warmup, - get_cosine_with_hard_restarts_schedule_with_warmup, - get_linear_schedule_with_warmup, - get_polynomial_decay_schedule_with_warmup, - get_scheduler, - ) - from .pipelines import ( - AudioPipelineOutput, - AutoPipelineForImage2Image, - AutoPipelineForInpainting, - AutoPipelineForText2Image, - ConsistencyModelPipeline, - DanceDiffusionPipeline, - DDIMPipeline, - DDPMPipeline, - DiffusionPipeline, - DiTPipeline, - ImagePipelineOutput, - KarrasVePipeline, - LDMPipeline, - LDMSuperResolutionPipeline, - PNDMPipeline, - RePaintPipeline, - ScoreSdeVePipeline, - ) - from .schedulers import ( - CMStochasticIterativeScheduler, - DDIMInverseScheduler, - DDIMParallelScheduler, - DDIMScheduler, - DDPMParallelScheduler, - DDPMScheduler, - DEISMultistepScheduler, - DPMSolverMultistepInverseScheduler, - DPMSolverMultistepScheduler, - DPMSolverSinglestepScheduler, - EulerAncestralDiscreteScheduler, - EulerDiscreteScheduler, - HeunDiscreteScheduler, - IPNDMScheduler, - KarrasVeScheduler, - KDPM2AncestralDiscreteScheduler, - KDPM2DiscreteScheduler, - PNDMScheduler, - RePaintScheduler, - SchedulerMixin, - ScoreSdeVeScheduler, - UnCLIPScheduler, - UniPCMultistepScheduler, - VQDiffusionScheduler, - ) - from .training_utils import EMAModel - -try: - if not (is_torch_available() and is_scipy_available()): - raise OptionalDependencyNotAvailable() -except OptionalDependencyNotAvailable: - from .utils.dummy_torch_and_scipy_objects import * # noqa F403 -else: - from .schedulers import LMSDiscreteScheduler - -try: - if not (is_torch_available() and is_torchsde_available()): - raise OptionalDependencyNotAvailable() -except OptionalDependencyNotAvailable: - from .utils.dummy_torch_and_torchsde_objects import * # noqa F403 -else: - from .schedulers import DPMSolverSDEScheduler - -try: - if not (is_torch_available() and is_transformers_available()): - raise OptionalDependencyNotAvailable() -except OptionalDependencyNotAvailable: - from .utils.dummy_torch_and_transformers_objects import * # noqa F403 -else: - from .pipelines import ( - AltDiffusionImg2ImgPipeline, - AltDiffusionPipeline, - AudioLDMPipeline, - CycleDiffusionPipeline, - IFImg2ImgPipeline, - IFImg2ImgSuperResolutionPipeline, - IFInpaintingPipeline, - IFInpaintingSuperResolutionPipeline, - IFPipeline, - IFSuperResolutionPipeline, - ImageTextPipelineOutput, - KandinskyCombinedPipeline, - KandinskyImg2ImgCombinedPipeline, - KandinskyImg2ImgPipeline, - KandinskyInpaintCombinedPipeline, - KandinskyInpaintPipeline, - KandinskyPipeline, - KandinskyPriorPipeline, - KandinskyV22CombinedPipeline, - KandinskyV22ControlnetImg2ImgPipeline, - KandinskyV22ControlnetPipeline, - KandinskyV22Img2ImgCombinedPipeline, - KandinskyV22Img2ImgPipeline, - KandinskyV22InpaintCombinedPipeline, - KandinskyV22InpaintPipeline, - 
KandinskyV22Pipeline, - KandinskyV22PriorEmb2EmbPipeline, - KandinskyV22PriorPipeline, - LDMTextToImagePipeline, - PaintByExamplePipeline, - SemanticStableDiffusionPipeline, - ShapEImg2ImgPipeline, - ShapEPipeline, - StableDiffusionAdapterPipeline, - StableDiffusionAttendAndExcitePipeline, - StableDiffusionControlNetImg2ImgPipeline, - StableDiffusionControlNetInpaintPipeline, - StableDiffusionControlNetPipeline, - StableDiffusionDepth2ImgPipeline, - StableDiffusionDiffEditPipeline, - StableDiffusionImageVariationPipeline, - StableDiffusionImg2ImgPipeline, - StableDiffusionInpaintPipeline, - StableDiffusionInpaintPipelineLegacy, - StableDiffusionInstructPix2PixPipeline, - StableDiffusionLatentUpscalePipeline, - StableDiffusionLDM3DPipeline, - StableDiffusionModelEditingPipeline, - StableDiffusionPanoramaPipeline, - StableDiffusionParadigmsPipeline, - StableDiffusionPipeline, - StableDiffusionPipelineSafe, - StableDiffusionPix2PixZeroPipeline, - StableDiffusionSAGPipeline, - StableDiffusionUpscalePipeline, - StableDiffusionXLControlNetPipeline, - StableDiffusionXLImg2ImgPipeline, - StableDiffusionXLInpaintPipeline, - StableDiffusionXLInstructPix2PixPipeline, - StableDiffusionXLPipeline, - StableUnCLIPImg2ImgPipeline, - StableUnCLIPPipeline, - TextToVideoSDPipeline, - TextToVideoZeroPipeline, - UnCLIPImageVariationPipeline, - UnCLIPPipeline, - UniDiffuserModel, - UniDiffuserPipeline, - UniDiffuserTextDecoder, - VersatileDiffusionDualGuidedPipeline, - VersatileDiffusionImageVariationPipeline, - VersatileDiffusionPipeline, - VersatileDiffusionTextToImagePipeline, - VideoToVideoSDPipeline, - VQDiffusionPipeline, - ) - -try: - if not (is_torch_available() and is_transformers_available() and is_k_diffusion_available()): - raise OptionalDependencyNotAvailable() -except OptionalDependencyNotAvailable: - from .utils.dummy_torch_and_transformers_and_k_diffusion_objects import * # noqa F403 -else: - from .pipelines import StableDiffusionKDiffusionPipeline - -try: - if not (is_torch_available() and is_transformers_available() and is_onnx_available()): - raise OptionalDependencyNotAvailable() -except OptionalDependencyNotAvailable: - from .utils.dummy_torch_and_transformers_and_onnx_objects import * # noqa F403 -else: - from .pipelines import ( - OnnxStableDiffusionImg2ImgPipeline, - OnnxStableDiffusionInpaintPipeline, - OnnxStableDiffusionInpaintPipelineLegacy, - OnnxStableDiffusionPipeline, - OnnxStableDiffusionUpscalePipeline, - StableDiffusionOnnxPipeline, - ) - -try: - if not (is_torch_available() and is_librosa_available()): - raise OptionalDependencyNotAvailable() -except OptionalDependencyNotAvailable: - from .utils.dummy_torch_and_librosa_objects import * # noqa F403 -else: - from .pipelines import AudioDiffusionPipeline, Mel - -try: - if not (is_transformers_available() and is_torch_available() and is_note_seq_available()): - raise OptionalDependencyNotAvailable() -except OptionalDependencyNotAvailable: - from .utils.dummy_transformers_and_torch_and_note_seq_objects import * # noqa F403 -else: - from .pipelines import SpectrogramDiffusionPipeline - -try: - if not is_flax_available(): - raise OptionalDependencyNotAvailable() -except OptionalDependencyNotAvailable: - from .utils.dummy_flax_objects import * # noqa F403 -else: - from .models.controlnet_flax import FlaxControlNetModel - from .models.modeling_flax_utils import FlaxModelMixin - from .models.unet_2d_condition_flax import FlaxUNet2DConditionModel - from .models.vae_flax import FlaxAutoencoderKL - from .pipelines import 
FlaxDiffusionPipeline - from .schedulers import ( - FlaxDDIMScheduler, - FlaxDDPMScheduler, - FlaxDPMSolverMultistepScheduler, - FlaxKarrasVeScheduler, - FlaxLMSDiscreteScheduler, - FlaxPNDMScheduler, - FlaxSchedulerMixin, - FlaxScoreSdeVeScheduler, - ) - - -try: - if not (is_flax_available() and is_transformers_available()): - raise OptionalDependencyNotAvailable() -except OptionalDependencyNotAvailable: - from .utils.dummy_flax_and_transformers_objects import * # noqa F403 -else: - from .pipelines import ( - FlaxStableDiffusionControlNetPipeline, - FlaxStableDiffusionImg2ImgPipeline, - FlaxStableDiffusionInpaintPipeline, - FlaxStableDiffusionPipeline, - ) - -try: - if not (is_note_seq_available()): - raise OptionalDependencyNotAvailable() -except OptionalDependencyNotAvailable: - from .utils.dummy_note_seq_objects import * # noqa F403 -else: - from .pipelines import MidiProcessor diff --git a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/pipelines/kandinsky2_2/pipeline_kandinsky2_2_controlnet_img2img.py b/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/pipelines/kandinsky2_2/pipeline_kandinsky2_2_controlnet_img2img.py deleted file mode 100644 index 53918fede7c2d4e9aaec8c7549630811c21e5bb7..0000000000000000000000000000000000000000 --- a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/pipelines/kandinsky2_2/pipeline_kandinsky2_2_controlnet_img2img.py +++ /dev/null @@ -1,409 +0,0 @@ -# Copyright 2023 The HuggingFace Team. All rights reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - -from typing import Callable, List, Optional, Union - -import numpy as np -import PIL -import torch -from PIL import Image - -from ...models import UNet2DConditionModel, VQModel -from ...schedulers import DDPMScheduler -from ...utils import ( - is_accelerate_available, - is_accelerate_version, - logging, - randn_tensor, - replace_example_docstring, -) -from ..pipeline_utils import DiffusionPipeline, ImagePipelineOutput - - -logger = logging.get_logger(__name__) # pylint: disable=invalid-name - -EXAMPLE_DOC_STRING = """ - Examples: - ```py - >>> import torch - >>> import numpy as np - - >>> from diffusers import KandinskyV22PriorEmb2EmbPipeline, KandinskyV22ControlnetImg2ImgPipeline - >>> from transformers import pipeline - >>> from diffusers.utils import load_image - - - >>> def make_hint(image, depth_estimator): - ... image = depth_estimator(image)["depth"] - ... image = np.array(image) - ... image = image[:, :, None] - ... image = np.concatenate([image, image, image], axis=2) - ... detected_map = torch.from_numpy(image).float() / 255.0 - ... hint = detected_map.permute(2, 0, 1) - ... return hint - - - >>> depth_estimator = pipeline("depth-estimation") - - >>> pipe_prior = KandinskyV22PriorEmb2EmbPipeline.from_pretrained( - ... "kandinsky-community/kandinsky-2-2-prior", torch_dtype=torch.float16 - ... ) - >>> pipe_prior = pipe_prior.to("cuda") - - >>> pipe = KandinskyV22ControlnetImg2ImgPipeline.from_pretrained( - ... 
"kandinsky-community/kandinsky-2-2-controlnet-depth", torch_dtype=torch.float16 - ... ) - >>> pipe = pipe.to("cuda") - - >>> img = load_image( - ... "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main" - ... "/kandinsky/cat.png" - ... ).resize((768, 768)) - - - >>> hint = make_hint(img, depth_estimator).unsqueeze(0).half().to("cuda") - - >>> prompt = "A robot, 4k photo" - >>> negative_prior_prompt = "lowres, text, error, cropped, worst quality, low quality, jpeg artifacts, ugly, duplicate, morbid, mutilated, out of frame, extra fingers, mutated hands, poorly drawn hands, poorly drawn face, mutation, deformed, blurry, dehydrated, bad anatomy, bad proportions, extra limbs, cloned face, disfigured, gross proportions, malformed limbs, missing arms, missing legs, extra arms, extra legs, fused fingers, too many fingers, long neck, username, watermark, signature" - - >>> generator = torch.Generator(device="cuda").manual_seed(43) - - >>> img_emb = pipe_prior(prompt=prompt, image=img, strength=0.85, generator=generator) - >>> negative_emb = pipe_prior(prompt=negative_prior_prompt, image=img, strength=1, generator=generator) - - >>> images = pipe( - ... image=img, - ... strength=0.5, - ... image_embeds=img_emb.image_embeds, - ... negative_image_embeds=negative_emb.image_embeds, - ... hint=hint, - ... num_inference_steps=50, - ... generator=generator, - ... height=768, - ... width=768, - ... ).images - - >>> images[0].save("robot_cat.png") - ``` -""" - - -# Copied from diffusers.pipelines.kandinsky2_2.pipeline_kandinsky2_2.downscale_height_and_width -def downscale_height_and_width(height, width, scale_factor=8): - new_height = height // scale_factor**2 - if height % scale_factor**2 != 0: - new_height += 1 - new_width = width // scale_factor**2 - if width % scale_factor**2 != 0: - new_width += 1 - return new_height * scale_factor, new_width * scale_factor - - -# Copied from diffusers.pipelines.kandinsky.pipeline_kandinsky_img2img.prepare_image -def prepare_image(pil_image, w=512, h=512): - pil_image = pil_image.resize((w, h), resample=Image.BICUBIC, reducing_gap=1) - arr = np.array(pil_image.convert("RGB")) - arr = arr.astype(np.float32) / 127.5 - 1 - arr = np.transpose(arr, [2, 0, 1]) - image = torch.from_numpy(arr).unsqueeze(0) - return image - - -class KandinskyV22ControlnetImg2ImgPipeline(DiffusionPipeline): - """ - Pipeline for image-to-image generation using Kandinsky - - This model inherits from [`DiffusionPipeline`]. Check the superclass documentation for the generic methods the - library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) - - Args: - scheduler ([`DDIMScheduler`]): - A scheduler to be used in combination with `unet` to generate image latents. - unet ([`UNet2DConditionModel`]): - Conditional U-Net architecture to denoise the image embedding. - movq ([`VQModel`]): - MoVQ Decoder to generate the image from the latents. 
- """ - - def __init__( - self, - unet: UNet2DConditionModel, - scheduler: DDPMScheduler, - movq: VQModel, - ): - super().__init__() - - self.register_modules( - unet=unet, - scheduler=scheduler, - movq=movq, - ) - self.movq_scale_factor = 2 ** (len(self.movq.config.block_out_channels) - 1) - - # Copied from diffusers.pipelines.kandinsky.pipeline_kandinsky_img2img.KandinskyImg2ImgPipeline.get_timesteps - def get_timesteps(self, num_inference_steps, strength, device): - # get the original timestep using init_timestep - init_timestep = min(int(num_inference_steps * strength), num_inference_steps) - - t_start = max(num_inference_steps - init_timestep, 0) - timesteps = self.scheduler.timesteps[t_start:] - - return timesteps, num_inference_steps - t_start - - # Copied from diffusers.pipelines.kandinsky2_2.pipeline_kandinsky2_2_img2img.KandinskyV22Img2ImgPipeline.prepare_latents - def prepare_latents(self, image, timestep, batch_size, num_images_per_prompt, dtype, device, generator=None): - if not isinstance(image, (torch.Tensor, PIL.Image.Image, list)): - raise ValueError( - f"`image` has to be of type `torch.Tensor`, `PIL.Image.Image` or list but is {type(image)}" - ) - - image = image.to(device=device, dtype=dtype) - - batch_size = batch_size * num_images_per_prompt - - if image.shape[1] == 4: - init_latents = image - - else: - if isinstance(generator, list) and len(generator) != batch_size: - raise ValueError( - f"You have passed a list of generators of length {len(generator)}, but requested an effective batch" - f" size of {batch_size}. Make sure the batch size matches the length of the generators." - ) - - elif isinstance(generator, list): - init_latents = [ - self.movq.encode(image[i : i + 1]).latent_dist.sample(generator[i]) for i in range(batch_size) - ] - init_latents = torch.cat(init_latents, dim=0) - else: - init_latents = self.movq.encode(image).latent_dist.sample(generator) - - init_latents = self.movq.config.scaling_factor * init_latents - - init_latents = torch.cat([init_latents], dim=0) - - shape = init_latents.shape - noise = randn_tensor(shape, generator=generator, device=device, dtype=dtype) - - # get latents - init_latents = self.scheduler.add_noise(init_latents, noise, timestep) - - latents = init_latents - - return latents - - # Copied from diffusers.pipelines.kandinsky2_2.pipeline_kandinsky2_2.KandinskyV22Pipeline.enable_model_cpu_offload - def enable_model_cpu_offload(self, gpu_id=0): - r""" - Offloads all models to CPU using accelerate, reducing memory usage with a low impact on performance. Compared - to `enable_sequential_cpu_offload`, this method moves one whole model at a time to the GPU when its `forward` - method is called, and the model remains in GPU until the next model runs. Memory savings are lower than with - `enable_sequential_cpu_offload`, but performance is much better due to the iterative execution of the `unet`. - """ - if is_accelerate_available() and is_accelerate_version(">=", "0.17.0.dev0"): - from accelerate import cpu_offload_with_hook - else: - raise ImportError("`enable_model_cpu_offload` requires `accelerate v0.17.0` or higher.") - - device = torch.device(f"cuda:{gpu_id}") - - if self.device.type != "cpu": - self.to("cpu", silence_dtype_warnings=True) - torch.cuda.empty_cache() # otherwise we don't see the memory savings (but they probably exist) - - hook = None - for cpu_offloaded_model in [self.unet, self.movq]: - _, hook = cpu_offload_with_hook(cpu_offloaded_model, device, prev_module_hook=hook) - - # We'll offload the last model manually. 
- self.final_offload_hook = hook - - @torch.no_grad() - @replace_example_docstring(EXAMPLE_DOC_STRING) - def __call__( - self, - image_embeds: Union[torch.FloatTensor, List[torch.FloatTensor]], - image: Union[torch.FloatTensor, PIL.Image.Image, List[torch.FloatTensor], List[PIL.Image.Image]], - negative_image_embeds: Union[torch.FloatTensor, List[torch.FloatTensor]], - hint: torch.FloatTensor, - height: int = 512, - width: int = 512, - num_inference_steps: int = 100, - guidance_scale: float = 4.0, - strength: float = 0.3, - num_images_per_prompt: int = 1, - generator: Optional[Union[torch.Generator, List[torch.Generator]]] = None, - output_type: Optional[str] = "pil", - callback: Optional[Callable[[int, int, torch.FloatTensor], None]] = None, - callback_steps: int = 1, - return_dict: bool = True, - ): - """ - Function invoked when calling the pipeline for generation. - - Args: - image_embeds (`torch.FloatTensor` or `List[torch.FloatTensor]`): - The clip image embeddings for text prompt, that will be used to condition the image generation. - image (`torch.FloatTensor`, `PIL.Image.Image`, `np.ndarray`, `List[torch.FloatTensor]`, `List[PIL.Image.Image]`, or `List[np.ndarray]`): - `Image`, or tensor representing an image batch, that will be used as the starting point for the - process. Can also accpet image latents as `image`, if passing latents directly, it will not be encoded - again. - strength (`float`, *optional*, defaults to 0.8): - Conceptually, indicates how much to transform the reference `image`. Must be between 0 and 1. `image` - will be used as a starting point, adding more noise to it the larger the `strength`. The number of - denoising steps depends on the amount of noise initially added. When `strength` is 1, added noise will - be maximum and the denoising process will run for the full number of iterations specified in - `num_inference_steps`. A value of 1, therefore, essentially ignores `image`. - hint (`torch.FloatTensor`): - The controlnet condition. - negative_image_embeds (`torch.FloatTensor` or `List[torch.FloatTensor]`): - The clip image embeddings for negative text prompt, will be used to condition the image generation. - height (`int`, *optional*, defaults to 512): - The height in pixels of the generated image. - width (`int`, *optional*, defaults to 512): - The width in pixels of the generated image. - num_inference_steps (`int`, *optional*, defaults to 100): - The number of denoising steps. More denoising steps usually lead to a higher quality image at the - expense of slower inference. - guidance_scale (`float`, *optional*, defaults to 4.0): - Guidance scale as defined in [Classifier-Free Diffusion Guidance](https://arxiv.org/abs/2207.12598). - `guidance_scale` is defined as `w` of equation 2. of [Imagen - Paper](https://arxiv.org/pdf/2205.11487.pdf). Guidance scale is enabled by setting `guidance_scale > - 1`. Higher guidance scale encourages to generate images that are closely linked to the text `prompt`, - usually at the expense of lower image quality. - num_images_per_prompt (`int`, *optional*, defaults to 1): - The number of images to generate per prompt. - generator (`torch.Generator` or `List[torch.Generator]`, *optional*): - One or a list of [torch generator(s)](https://pytorch.org/docs/stable/generated/torch.Generator.html) - to make generation deterministic. - output_type (`str`, *optional*, defaults to `"pil"`): - The output format of the generate image. Choose between: `"pil"` (`PIL.Image.Image`), `"np"` - (`np.array`) or `"pt"` (`torch.Tensor`). 
- callback (`Callable`, *optional*): - A function that calls every `callback_steps` steps during inference. The function is called with the - following arguments: `callback(step: int, timestep: int, latents: torch.FloatTensor)`. - callback_steps (`int`, *optional*, defaults to 1): - The frequency at which the `callback` function is called. If not specified, the callback is called at - every step. - return_dict (`bool`, *optional*, defaults to `True`): - Whether or not to return a [`~pipelines.ImagePipelineOutput`] instead of a plain tuple. - - Examples: - - Returns: - [`~pipelines.ImagePipelineOutput`] or `tuple` - """ - device = self._execution_device - - do_classifier_free_guidance = guidance_scale > 1.0 - - if isinstance(image_embeds, list): - image_embeds = torch.cat(image_embeds, dim=0) - if isinstance(negative_image_embeds, list): - negative_image_embeds = torch.cat(negative_image_embeds, dim=0) - if isinstance(hint, list): - hint = torch.cat(hint, dim=0) - - batch_size = image_embeds.shape[0] - - if do_classifier_free_guidance: - image_embeds = image_embeds.repeat_interleave(num_images_per_prompt, dim=0) - negative_image_embeds = negative_image_embeds.repeat_interleave(num_images_per_prompt, dim=0) - hint = hint.repeat_interleave(num_images_per_prompt, dim=0) - - image_embeds = torch.cat([negative_image_embeds, image_embeds], dim=0).to( - dtype=self.unet.dtype, device=device - ) - hint = torch.cat([hint, hint], dim=0).to(dtype=self.unet.dtype, device=device) - - if not isinstance(image, list): - image = [image] - if not all(isinstance(i, (PIL.Image.Image, torch.Tensor)) for i in image): - raise ValueError( - f"Input is in incorrect format: {[type(i) for i in image]}. Currently, we only support PIL image and pytorch tensor" - ) - - image = torch.cat([prepare_image(i, width, height) for i in image], dim=0) - image = image.to(dtype=image_embeds.dtype, device=device) - - latents = self.movq.encode(image)["latents"] - latents = latents.repeat_interleave(num_images_per_prompt, dim=0) - self.scheduler.set_timesteps(num_inference_steps, device=device) - timesteps, num_inference_steps = self.get_timesteps(num_inference_steps, strength, device) - latent_timestep = timesteps[:1].repeat(batch_size * num_images_per_prompt) - height, width = downscale_height_and_width(height, width, self.movq_scale_factor) - latents = self.prepare_latents( - latents, latent_timestep, batch_size, num_images_per_prompt, image_embeds.dtype, device, generator - ) - for i, t in enumerate(self.progress_bar(timesteps)): - # expand the latents if we are doing classifier free guidance - latent_model_input = torch.cat([latents] * 2) if do_classifier_free_guidance else latents - - added_cond_kwargs = {"image_embeds": image_embeds, "hint": hint} - noise_pred = self.unet( - sample=latent_model_input, - timestep=t, - encoder_hidden_states=None, - added_cond_kwargs=added_cond_kwargs, - return_dict=False, - )[0] - - if do_classifier_free_guidance: - noise_pred, variance_pred = noise_pred.split(latents.shape[1], dim=1) - noise_pred_uncond, noise_pred_text = noise_pred.chunk(2) - _, variance_pred_text = variance_pred.chunk(2) - noise_pred = noise_pred_uncond + guidance_scale * (noise_pred_text - noise_pred_uncond) - noise_pred = torch.cat([noise_pred, variance_pred_text], dim=1) - - if not ( - hasattr(self.scheduler.config, "variance_type") - and self.scheduler.config.variance_type in ["learned", "learned_range"] - ): - noise_pred, _ = noise_pred.split(latents.shape[1], dim=1) - - # compute the previous noisy sample x_t -> x_t-1 - 
- latents = self.scheduler.step( - noise_pred, - t, - latents, - generator=generator, - )[0] - - if callback is not None and i % callback_steps == 0: - callback(i, t, latents) - - # post-processing - image = self.movq.decode(latents, force_not_quantize=True)["sample"] - - # Offload last model to CPU - if hasattr(self, "final_offload_hook") and self.final_offload_hook is not None: - self.final_offload_hook.offload() - - if output_type not in ["pt", "np", "pil"]: - raise ValueError(f"Only the output types `pt`, `pil` and `np` are supported not output_type={output_type}") - - if output_type in ["np", "pil"]: - image = image * 0.5 + 0.5 - image = image.clamp(0, 1) - image = image.cpu().permute(0, 2, 3, 1).float().numpy() - - if output_type == "pil": - image = self.numpy_to_pil(image) - - if not return_dict: - return (image,) - - return ImagePipelineOutput(images=image) diff --git a/spaces/Andy1621/uniformer_image_detection/configs/dcn/mask_rcnn_r50_fpn_mdconv_c3-c5_1x_coco.py b/spaces/Andy1621/uniformer_image_detection/configs/dcn/mask_rcnn_r50_fpn_mdconv_c3-c5_1x_coco.py deleted file mode 100644 index 5ca2a67cde62bff078b7c4c0d696a585265e4c3a..0000000000000000000000000000000000000000 --- a/spaces/Andy1621/uniformer_image_detection/configs/dcn/mask_rcnn_r50_fpn_mdconv_c3-c5_1x_coco.py +++ /dev/null @@ -1,5 +0,0 @@ -_base_ = '../mask_rcnn/mask_rcnn_r50_fpn_1x_coco.py' -model = dict( - backbone=dict( - dcn=dict(type='DCNv2', deform_groups=1, fallback_on_stride=False), - stage_with_dcn=(False, True, True, True))) diff --git a/spaces/Andy1621/uniformer_image_detection/mmdet/models/dense_heads/anchor_free_head.py b/spaces/Andy1621/uniformer_image_detection/mmdet/models/dense_heads/anchor_free_head.py deleted file mode 100644 index 1814a0cc4f577f470f74f025440073a0aaa1ebd0..0000000000000000000000000000000000000000 --- a/spaces/Andy1621/uniformer_image_detection/mmdet/models/dense_heads/anchor_free_head.py +++ /dev/null @@ -1,340 +0,0 @@ -from abc import abstractmethod - -import torch -import torch.nn as nn -from mmcv.cnn import ConvModule, bias_init_with_prob, normal_init -from mmcv.runner import force_fp32 - -from mmdet.core import multi_apply -from ..builder import HEADS, build_loss -from .base_dense_head import BaseDenseHead -from .dense_test_mixins import BBoxTestMixin - - -@HEADS.register_module() -class AnchorFreeHead(BaseDenseHead, BBoxTestMixin): - """Anchor-free head (FCOS, Fovea, RepPoints, etc.). - - Args: - num_classes (int): Number of categories excluding the background - category. - in_channels (int): Number of channels in the input feature map. - feat_channels (int): Number of hidden channels. Used in child classes. - stacked_convs (int): Number of stacking convs of the head. - strides (tuple): Downsample factor of each feature map. - dcn_on_last_conv (bool): If true, use dcn in the last layer of - towers. Default: False. - conv_bias (bool | str): If specified as `auto`, it will be decided by - the norm_cfg. Bias of conv will be set as True if `norm_cfg` is - None, otherwise False. Default: "auto". - loss_cls (dict): Config of classification loss. - loss_bbox (dict): Config of localization loss. - conv_cfg (dict): Config dict for convolution layer. Default: None. - norm_cfg (dict): Config dict for normalization layer. Default: None. - train_cfg (dict): Training config of anchor head. - test_cfg (dict): Testing config of anchor head. 
- """ # noqa: W605 - - _version = 1 - - def __init__(self, - num_classes, - in_channels, - feat_channels=256, - stacked_convs=4, - strides=(4, 8, 16, 32, 64), - dcn_on_last_conv=False, - conv_bias='auto', - loss_cls=dict( - type='FocalLoss', - use_sigmoid=True, - gamma=2.0, - alpha=0.25, - loss_weight=1.0), - loss_bbox=dict(type='IoULoss', loss_weight=1.0), - conv_cfg=None, - norm_cfg=None, - train_cfg=None, - test_cfg=None): - super(AnchorFreeHead, self).__init__() - self.num_classes = num_classes - self.cls_out_channels = num_classes - self.in_channels = in_channels - self.feat_channels = feat_channels - self.stacked_convs = stacked_convs - self.strides = strides - self.dcn_on_last_conv = dcn_on_last_conv - assert conv_bias == 'auto' or isinstance(conv_bias, bool) - self.conv_bias = conv_bias - self.loss_cls = build_loss(loss_cls) - self.loss_bbox = build_loss(loss_bbox) - self.train_cfg = train_cfg - self.test_cfg = test_cfg - self.conv_cfg = conv_cfg - self.norm_cfg = norm_cfg - self.fp16_enabled = False - - self._init_layers() - - def _init_layers(self): - """Initialize layers of the head.""" - self._init_cls_convs() - self._init_reg_convs() - self._init_predictor() - - def _init_cls_convs(self): - """Initialize classification conv layers of the head.""" - self.cls_convs = nn.ModuleList() - for i in range(self.stacked_convs): - chn = self.in_channels if i == 0 else self.feat_channels - if self.dcn_on_last_conv and i == self.stacked_convs - 1: - conv_cfg = dict(type='DCNv2') - else: - conv_cfg = self.conv_cfg - self.cls_convs.append( - ConvModule( - chn, - self.feat_channels, - 3, - stride=1, - padding=1, - conv_cfg=conv_cfg, - norm_cfg=self.norm_cfg, - bias=self.conv_bias)) - - def _init_reg_convs(self): - """Initialize bbox regression conv layers of the head.""" - self.reg_convs = nn.ModuleList() - for i in range(self.stacked_convs): - chn = self.in_channels if i == 0 else self.feat_channels - if self.dcn_on_last_conv and i == self.stacked_convs - 1: - conv_cfg = dict(type='DCNv2') - else: - conv_cfg = self.conv_cfg - self.reg_convs.append( - ConvModule( - chn, - self.feat_channels, - 3, - stride=1, - padding=1, - conv_cfg=conv_cfg, - norm_cfg=self.norm_cfg, - bias=self.conv_bias)) - - def _init_predictor(self): - """Initialize predictor layers of the head.""" - self.conv_cls = nn.Conv2d( - self.feat_channels, self.cls_out_channels, 3, padding=1) - self.conv_reg = nn.Conv2d(self.feat_channels, 4, 3, padding=1) - - def init_weights(self): - """Initialize weights of the head.""" - for m in self.cls_convs: - if isinstance(m.conv, nn.Conv2d): - normal_init(m.conv, std=0.01) - for m in self.reg_convs: - if isinstance(m.conv, nn.Conv2d): - normal_init(m.conv, std=0.01) - bias_cls = bias_init_with_prob(0.01) - normal_init(self.conv_cls, std=0.01, bias=bias_cls) - normal_init(self.conv_reg, std=0.01) - - def _load_from_state_dict(self, state_dict, prefix, local_metadata, strict, - missing_keys, unexpected_keys, error_msgs): - """Hack some keys of the model state dict so that can load checkpoints - of previous version.""" - version = local_metadata.get('version', None) - if version is None: - # the key is different in early versions - # for example, 'fcos_cls' become 'conv_cls' now - bbox_head_keys = [ - k for k in state_dict.keys() if k.startswith(prefix) - ] - ori_predictor_keys = [] - new_predictor_keys = [] - # e.g. 
'fcos_cls' or 'fcos_reg' - for key in bbox_head_keys: - ori_predictor_keys.append(key) - key = key.split('.') - conv_name = None - if key[1].endswith('cls'): - conv_name = 'conv_cls' - elif key[1].endswith('reg'): - conv_name = 'conv_reg' - elif key[1].endswith('centerness'): - conv_name = 'conv_centerness' - else: - assert NotImplementedError - if conv_name is not None: - key[1] = conv_name - new_predictor_keys.append('.'.join(key)) - else: - ori_predictor_keys.pop(-1) - for i in range(len(new_predictor_keys)): - state_dict[new_predictor_keys[i]] = state_dict.pop( - ori_predictor_keys[i]) - super()._load_from_state_dict(state_dict, prefix, local_metadata, - strict, missing_keys, unexpected_keys, - error_msgs) - - def forward(self, feats): - """Forward features from the upstream network. - - Args: - feats (tuple[Tensor]): Features from the upstream network, each is - a 4D-tensor. - - Returns: - tuple: Usually contain classification scores and bbox predictions. - cls_scores (list[Tensor]): Box scores for each scale level, - each is a 4D-tensor, the channel number is - num_points * num_classes. - bbox_preds (list[Tensor]): Box energies / deltas for each scale - level, each is a 4D-tensor, the channel number is - num_points * 4. - """ - return multi_apply(self.forward_single, feats)[:2] - - def forward_single(self, x): - """Forward features of a single scale level. - - Args: - x (Tensor): FPN feature maps of the specified stride. - - Returns: - tuple: Scores for each class, bbox predictions, features - after classification and regression conv layers, some - models needs these features like FCOS. - """ - cls_feat = x - reg_feat = x - - for cls_layer in self.cls_convs: - cls_feat = cls_layer(cls_feat) - cls_score = self.conv_cls(cls_feat) - - for reg_layer in self.reg_convs: - reg_feat = reg_layer(reg_feat) - bbox_pred = self.conv_reg(reg_feat) - return cls_score, bbox_pred, cls_feat, reg_feat - - @abstractmethod - @force_fp32(apply_to=('cls_scores', 'bbox_preds')) - def loss(self, - cls_scores, - bbox_preds, - gt_bboxes, - gt_labels, - img_metas, - gt_bboxes_ignore=None): - """Compute loss of the head. - - Args: - cls_scores (list[Tensor]): Box scores for each scale level, - each is a 4D-tensor, the channel number is - num_points * num_classes. - bbox_preds (list[Tensor]): Box energies / deltas for each scale - level, each is a 4D-tensor, the channel number is - num_points * 4. - gt_bboxes (list[Tensor]): Ground truth bboxes for each image with - shape (num_gts, 4) in [tl_x, tl_y, br_x, br_y] format. - gt_labels (list[Tensor]): class indices corresponding to each box - img_metas (list[dict]): Meta information of each image, e.g., - image size, scaling factor, etc. - gt_bboxes_ignore (None | list[Tensor]): specify which bounding - boxes can be ignored when computing the loss. - """ - - raise NotImplementedError - - @abstractmethod - @force_fp32(apply_to=('cls_scores', 'bbox_preds')) - def get_bboxes(self, - cls_scores, - bbox_preds, - img_metas, - cfg=None, - rescale=None): - """Transform network output for a batch into bbox predictions. - - Args: - cls_scores (list[Tensor]): Box scores for each scale level - Has shape (N, num_points * num_classes, H, W) - bbox_preds (list[Tensor]): Box energies / deltas for each scale - level with shape (N, num_points * 4, H, W) - img_metas (list[dict]): Meta information of each image, e.g., - image size, scaling factor, etc. 
- cfg (mmcv.Config): Test / postprocessing configuration, - if None, test_cfg would be used - rescale (bool): If True, return boxes in original image space - """ - - raise NotImplementedError - - @abstractmethod - def get_targets(self, points, gt_bboxes_list, gt_labels_list): - """Compute regression, classification and centerness targets for points - in multiple images. - - Args: - points (list[Tensor]): Points of each fpn level, each has shape - (num_points, 2). - gt_bboxes_list (list[Tensor]): Ground truth bboxes of each image, - each has shape (num_gt, 4). - gt_labels_list (list[Tensor]): Ground truth labels of each box, - each has shape (num_gt,). - """ - raise NotImplementedError - - def _get_points_single(self, - featmap_size, - stride, - dtype, - device, - flatten=False): - """Get points of a single scale level.""" - h, w = featmap_size - x_range = torch.arange(w, dtype=dtype, device=device) - y_range = torch.arange(h, dtype=dtype, device=device) - y, x = torch.meshgrid(y_range, x_range) - if flatten: - y = y.flatten() - x = x.flatten() - return y, x - - def get_points(self, featmap_sizes, dtype, device, flatten=False): - """Get points according to feature map sizes. - - Args: - featmap_sizes (list[tuple]): Multi-level feature map sizes. - dtype (torch.dtype): Type of points. - device (torch.device): Device of points. - - Returns: - tuple: points of each image. - """ - mlvl_points = [] - for i in range(len(featmap_sizes)): - mlvl_points.append( - self._get_points_single(featmap_sizes[i], self.strides[i], - dtype, device, flatten)) - return mlvl_points - - def aug_test(self, feats, img_metas, rescale=False): - """Test function with test time augmentation. - - Args: - feats (list[Tensor]): the outer list indicates test-time - augmentations and inner Tensor should have a shape NxCxHxW, - which contains features for all images in the batch. - img_metas (list[list[dict]]): the outer list indicates test-time - augs (multiscale, flip, etc.) and the inner list indicates - images in a batch. each dict has image information. - rescale (bool, optional): Whether to rescale the results. - Defaults to False. 
- - Returns: - list[ndarray]: bbox results of each class - """ - return self.aug_test_bboxes(feats, img_metas, rescale=rescale) diff --git a/spaces/Andy1621/uniformer_image_segmentation/configs/fcn/fcn_r50-d8_480x480_40k_pascal_context_59.py b/spaces/Andy1621/uniformer_image_segmentation/configs/fcn/fcn_r50-d8_480x480_40k_pascal_context_59.py deleted file mode 100644 index 4a8180038be33fba9c3229ee3c017f2f0628544f..0000000000000000000000000000000000000000 --- a/spaces/Andy1621/uniformer_image_segmentation/configs/fcn/fcn_r50-d8_480x480_40k_pascal_context_59.py +++ /dev/null @@ -1,10 +0,0 @@ -_base_ = [ - '../_base_/models/fcn_r50-d8.py', - '../_base_/datasets/pascal_context_59.py', '../_base_/default_runtime.py', - '../_base_/schedules/schedule_40k.py' -] -model = dict( - decode_head=dict(num_classes=59), - auxiliary_head=dict(num_classes=59), - test_cfg=dict(mode='slide', crop_size=(480, 480), stride=(320, 320))) -optimizer = dict(type='SGD', lr=0.004, momentum=0.9, weight_decay=0.0001) diff --git a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/rich/_stack.py b/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/rich/_stack.py deleted file mode 100644 index 194564e761ddae165b39ef6598877e2e3820af0a..0000000000000000000000000000000000000000 --- a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/rich/_stack.py +++ /dev/null @@ -1,16 +0,0 @@ -from typing import List, TypeVar - -T = TypeVar("T") - - -class Stack(List[T]): - """A small shim over builtin list.""" - - @property - def top(self) -> T: - """Get top of stack.""" - return self[-1] - - def push(self, item: T) -> None: - """Push an item on to the stack (append in stack nomenclature).""" - self.append(item) diff --git a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/setuptools/extension.py b/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/setuptools/extension.py deleted file mode 100644 index 58c023f6b4479c631f382e5062932793d2bee26b..0000000000000000000000000000000000000000 --- a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/setuptools/extension.py +++ /dev/null @@ -1,148 +0,0 @@ -import re -import functools -import distutils.core -import distutils.errors -import distutils.extension - -from .monkey import get_unpatched - - -def _have_cython(): - """ - Return True if Cython can be imported. - """ - cython_impl = 'Cython.Distutils.build_ext' - try: - # from (cython_impl) import build_ext - __import__(cython_impl, fromlist=['build_ext']).build_ext - return True - except Exception: - pass - return False - - -# for compatibility -have_pyrex = _have_cython - -_Extension = get_unpatched(distutils.core.Extension) - - -class Extension(_Extension): - """ - Describes a single extension module. - - This means that all source files will be compiled into a single binary file - ``.`` (with ```` derived from ``name`` and - ```` defined by one of the values in - ``importlib.machinery.EXTENSION_SUFFIXES``). - - In the case ``.pyx`` files are passed as ``sources and`` ``Cython`` is **not** - installed in the build environment, ``setuptools`` may also try to look for the - equivalent ``.cpp`` or ``.c`` files. - - :arg str name: - the full name of the extension, including any packages -- ie. 
- *not* a filename or pathname, but Python dotted name - - :arg list[str] sources: - list of source filenames, relative to the distribution root - (where the setup script lives), in Unix form (slash-separated) - for portability. Source files may be C, C++, SWIG (.i), - platform-specific resource files, or whatever else is recognized - by the "build_ext" command as source for a Python extension. - - :keyword list[str] include_dirs: - list of directories to search for C/C++ header files (in Unix - form for portability) - - :keyword list[tuple[str, str|None]] define_macros: - list of macros to define; each macro is defined using a 2-tuple: - the first item corresponding to the name of the macro and the second - item either a string with its value or None to - define it without a particular value (equivalent of "#define - FOO" in source or -DFOO on Unix C compiler command line) - - :keyword list[str] undef_macros: - list of macros to undefine explicitly - - :keyword list[str] library_dirs: - list of directories to search for C/C++ libraries at link time - - :keyword list[str] libraries: - list of library names (not filenames or paths) to link against - - :keyword list[str] runtime_library_dirs: - list of directories to search for C/C++ libraries at run time - (for shared extensions, this is when the extension is loaded). - Setting this will cause an exception during build on Windows - platforms. - - :keyword list[str] extra_objects: - list of extra files to link with (eg. object files not implied - by 'sources', static library that must be explicitly specified, - binary resource files, etc.) - - :keyword list[str] extra_compile_args: - any extra platform- and compiler-specific information to use - when compiling the source files in 'sources'. For platforms and - compilers where "command line" makes sense, this is typically a - list of command-line arguments, but for other platforms it could - be anything. - - :keyword list[str] extra_link_args: - any extra platform- and compiler-specific information to use - when linking object files together to create the extension (or - to create a new static Python interpreter). Similar - interpretation as for 'extra_compile_args'. - - :keyword list[str] export_symbols: - list of symbols to be exported from a shared extension. Not - used on all platforms, and not generally necessary for Python - extensions, which typically export exactly one symbol: "init" + - extension_name. - - :keyword list[str] swig_opts: - any extra options to pass to SWIG if a source file has the .i - extension. - - :keyword list[str] depends: - list of files that the extension depends on - - :keyword str language: - extension language (i.e. "c", "c++", "objc"). Will be detected - from the source extensions if not provided. - - :keyword bool optional: - specifies that a build failure in the extension should not abort the - build process, but simply not install the failing extension. - - :keyword bool py_limited_api: - opt-in flag for the usage of :doc:`Python's limited API `. - - :raises setuptools.errors.PlatformError: if 'runtime_library_dirs' is - specified on Windows. (since v63) - """ - - def __init__(self, name, sources, *args, **kw): - # The *args is needed for compatibility as calls may use positional - # arguments. py_limited_api may be set only via keyword. 
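The long keyword list documented on `Extension` above is easiest to see in a concrete `setup.py`. A minimal, hypothetical example (package, source, and macro names are invented for illustration):

```python
from setuptools import Extension, setup

ext = Extension(
    name="mypkg._fastops",                    # dotted module name, not a file path
    sources=["src/fastops.c"],                # sources relative to the project root
    include_dirs=["include"],                 # extra header search paths
    define_macros=[("USE_DOUBLE", "1"),       # -DUSE_DOUBLE=1
                   ("NDEBUG", None)],         # -DNDEBUG (no value)
    libraries=["m"],                          # link against libm
    extra_compile_args=["-O3"],
)

setup(name="mypkg", version="0.1", ext_modules=[ext])
```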
- self.py_limited_api = kw.pop("py_limited_api", False) - super().__init__(name, sources, *args, **kw) - - def _convert_pyx_sources_to_lang(self): - """ - Replace sources with .pyx extensions to sources with the target - language extension. This mechanism allows language authors to supply - pre-converted sources but to prefer the .pyx sources. - """ - if _have_cython(): - # the build has Cython, so allow it to compile the .pyx files - return - lang = self.language or '' - target_ext = '.cpp' if lang.lower() == 'c++' else '.c' - sub = functools.partial(re.sub, '.pyx$', target_ext) - self.sources = list(map(sub, self.sources)) - - -class Library(Extension): - """Just like a regular Extension, but built as a library instead""" diff --git a/spaces/AtomdffAI/wechatgpt4atom/README.md b/spaces/AtomdffAI/wechatgpt4atom/README.md deleted file mode 100644 index a060c61d40b2162b8e7cdf6100991a8a45cc5b9a..0000000000000000000000000000000000000000 --- a/spaces/AtomdffAI/wechatgpt4atom/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: wechat-bot -emoji: 👀 -colorFrom: red -colorTo: yellow -sdk: gradio -sdk_version: 3.19.1 -app_file: app.py -pinned: false -duplicated_from: lewisliuX123/wechatgpt3 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference \ No newline at end of file diff --git a/spaces/Awesimo/jojogan/e4e/training/__init__.py b/spaces/Awesimo/jojogan/e4e/training/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/detectron2/modeling/meta_arch/fcos.py b/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/detectron2/modeling/meta_arch/fcos.py deleted file mode 100644 index 55cdb76e836214a2b5a7a4a5a5c47e3382dee86d..0000000000000000000000000000000000000000 --- a/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/detectron2/modeling/meta_arch/fcos.py +++ /dev/null @@ -1,303 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. - -import logging -from typing import List, Optional, Tuple -import torch -from fvcore.nn import sigmoid_focal_loss_jit -from torch import Tensor, nn -from torch.nn import functional as F - -from detectron2.layers import ShapeSpec, batched_nms -from detectron2.structures import Boxes, ImageList, Instances, pairwise_point_box_distance -from detectron2.utils.events import get_event_storage - -from ..anchor_generator import DefaultAnchorGenerator -from ..backbone import Backbone -from ..box_regression import Box2BoxTransformLinear, _dense_box_regression_loss -from .dense_detector import DenseDetector -from .retinanet import RetinaNetHead - -__all__ = ["FCOS"] - - -logger = logging.getLogger(__name__) - - -class FCOS(DenseDetector): - """ - Implement FCOS in :paper:`fcos`. - """ - - def __init__( - self, - *, - backbone: Backbone, - head: nn.Module, - head_in_features: Optional[List[str]] = None, - box2box_transform=None, - num_classes, - center_sampling_radius: float = 1.5, - focal_loss_alpha=0.25, - focal_loss_gamma=2.0, - test_score_thresh=0.2, - test_topk_candidates=1000, - test_nms_thresh=0.6, - max_detections_per_image=100, - pixel_mean, - pixel_std, - ): - """ - Args: - center_sampling_radius: radius of the "center" of a groundtruth box, - within which all anchor points are labeled positive. - Other arguments mean the same as in :class:`RetinaNet`. 
- """ - super().__init__( - backbone, head, head_in_features, pixel_mean=pixel_mean, pixel_std=pixel_std - ) - - self.num_classes = num_classes - - # FCOS uses one anchor point per location. - # We represent the anchor point by a box whose size equals the anchor stride. - feature_shapes = backbone.output_shape() - fpn_strides = [feature_shapes[k].stride for k in self.head_in_features] - self.anchor_generator = DefaultAnchorGenerator( - sizes=[[k] for k in fpn_strides], aspect_ratios=[1.0], strides=fpn_strides - ) - - # FCOS parameterizes box regression by a linear transform, - # where predictions are normalized by anchor stride (equal to anchor size). - if box2box_transform is None: - box2box_transform = Box2BoxTransformLinear(normalize_by_size=True) - self.box2box_transform = box2box_transform - - self.center_sampling_radius = float(center_sampling_radius) - - # Loss parameters: - self.focal_loss_alpha = focal_loss_alpha - self.focal_loss_gamma = focal_loss_gamma - - # Inference parameters: - self.test_score_thresh = test_score_thresh - self.test_topk_candidates = test_topk_candidates - self.test_nms_thresh = test_nms_thresh - self.max_detections_per_image = max_detections_per_image - - def forward_training(self, images, features, predictions, gt_instances): - # Transpose the Hi*Wi*A dimension to the middle: - pred_logits, pred_anchor_deltas, pred_centerness = self._transpose_dense_predictions( - predictions, [self.num_classes, 4, 1] - ) - anchors = self.anchor_generator(features) - gt_labels, gt_boxes = self.label_anchors(anchors, gt_instances) - return self.losses( - anchors, pred_logits, gt_labels, pred_anchor_deltas, gt_boxes, pred_centerness - ) - - @torch.no_grad() - def match_anchors(self, anchors: List[Boxes], gt_instances: List[Instances]): - """ - Match anchors with ground truth boxes. - - Args: - anchors: #level boxes, from the highest resolution to lower resolution - gt_instances: ground truth instances per image - - Returns: - List[Tensor]: - #image tensors, each is a vector of matched gt - indices (or -1 for unmatched anchors) for all anchors. - """ - num_anchors_per_level = [len(x) for x in anchors] - anchors = Boxes.cat(anchors) # Rx4 - anchor_centers = anchors.get_centers() # Rx2 - anchor_sizes = anchors.tensor[:, 2] - anchors.tensor[:, 0] # R - - lower_bound = anchor_sizes * 4 - lower_bound[: num_anchors_per_level[0]] = 0 - upper_bound = anchor_sizes * 8 - upper_bound[-num_anchors_per_level[-1] :] = float("inf") - - matched_indices = [] - for gt_per_image in gt_instances: - gt_centers = gt_per_image.gt_boxes.get_centers() # Nx2 - # FCOS with center sampling: anchor point must be close enough to gt center. - pairwise_match = (anchor_centers[:, None, :] - gt_centers[None, :, :]).abs_().max( - dim=2 - ).values < self.center_sampling_radius * anchor_sizes[:, None] - pairwise_dist = pairwise_point_box_distance(anchor_centers, gt_per_image.gt_boxes) - - # The original FCOS anchor matching rule: anchor point must be inside gt - pairwise_match &= pairwise_dist.min(dim=2).values > 0 - - # Multilevel anchor matching in FCOS: each anchor is only responsible - # for certain scale range. 
- pairwise_dist = pairwise_dist.max(dim=2).values - pairwise_match &= (pairwise_dist > lower_bound[:, None]) & ( - pairwise_dist < upper_bound[:, None] - ) - - # Match the GT box with minimum area, if there are multiple GT matches - gt_areas = gt_per_image.gt_boxes.area() # N - pairwise_match = pairwise_match.to(torch.float32) * (1e8 - gt_areas[None, :]) - min_values, matched_idx = pairwise_match.max(dim=1) # R, per-anchor match - matched_idx[min_values < 1e-5] = -1 # Unmatched anchors are assigned -1 - - matched_indices.append(matched_idx) - return matched_indices - - @torch.no_grad() - def label_anchors(self, anchors, gt_instances): - """ - Same interface as :meth:`RetinaNet.label_anchors`, but implemented with FCOS - anchor matching rule. - - Unlike RetinaNet, there are no ignored anchors. - """ - matched_indices = self.match_anchors(anchors, gt_instances) - - matched_labels, matched_boxes = [], [] - for gt_index, gt_per_image in zip(matched_indices, gt_instances): - label = gt_per_image.gt_classes[gt_index.clip(min=0)] - label[gt_index < 0] = self.num_classes # background - - matched_gt_boxes = gt_per_image.gt_boxes[gt_index.clip(min=0)] - - matched_labels.append(label) - matched_boxes.append(matched_gt_boxes) - return matched_labels, matched_boxes - - def losses( - self, anchors, pred_logits, gt_labels, pred_anchor_deltas, gt_boxes, pred_centerness - ): - """ - This method is almost identical to :meth:`RetinaNet.losses`, with an extra - "loss_centerness" in the returned dict. - """ - num_images = len(gt_labels) - gt_labels = torch.stack(gt_labels) # (N, R) - - pos_mask = (gt_labels >= 0) & (gt_labels != self.num_classes) - num_pos_anchors = pos_mask.sum().item() - get_event_storage().put_scalar("num_pos_anchors", num_pos_anchors / num_images) - normalizer = self._ema_update("loss_normalizer", max(num_pos_anchors, 1), 300) - - # classification and regression loss - gt_labels_target = F.one_hot(gt_labels, num_classes=self.num_classes + 1)[ - :, :, :-1 - ] # no loss for the last (background) class - loss_cls = sigmoid_focal_loss_jit( - torch.cat(pred_logits, dim=1), - gt_labels_target.to(pred_logits[0].dtype), - alpha=self.focal_loss_alpha, - gamma=self.focal_loss_gamma, - reduction="sum", - ) - - loss_box_reg = _dense_box_regression_loss( - anchors, - self.box2box_transform, - pred_anchor_deltas, - [x.tensor for x in gt_boxes], - pos_mask, - box_reg_loss_type="giou", - ) - - ctrness_targets = self.compute_ctrness_targets(anchors, gt_boxes) # NxR - pred_centerness = torch.cat(pred_centerness, dim=1).squeeze(dim=2) # NxR - ctrness_loss = F.binary_cross_entropy_with_logits( - pred_centerness[pos_mask], ctrness_targets[pos_mask], reduction="sum" - ) - return { - "loss_fcos_cls": loss_cls / normalizer, - "loss_fcos_loc": loss_box_reg / normalizer, - "loss_fcos_ctr": ctrness_loss / normalizer, - } - - def compute_ctrness_targets(self, anchors, gt_boxes): # NxR - anchors = Boxes.cat(anchors).tensor # Rx4 - reg_targets = [self.box2box_transform.get_deltas(anchors, m.tensor) for m in gt_boxes] - reg_targets = torch.stack(reg_targets, dim=0) # NxRx4 - if len(reg_targets) == 0: - return reg_targets.new_zeros(len(reg_targets)) - left_right = reg_targets[:, :, [0, 2]] - top_bottom = reg_targets[:, :, [1, 3]] - ctrness = (left_right.min(dim=-1)[0] / left_right.max(dim=-1)[0]) * ( - top_bottom.min(dim=-1)[0] / top_bottom.max(dim=-1)[0] - ) - return torch.sqrt(ctrness) - - def forward_inference( - self, images: ImageList, features: List[Tensor], predictions: List[List[Tensor]] - ): - pred_logits, 
pred_anchor_deltas, pred_centerness = self._transpose_dense_predictions( - predictions, [self.num_classes, 4, 1] - ) - anchors = self.anchor_generator(features) - - results: List[Instances] = [] - for img_idx, image_size in enumerate(images.image_sizes): - scores_per_image = [ - # Multiply and sqrt centerness & classification scores - # (See eqn. 4 in https://arxiv.org/abs/2006.09214) - torch.sqrt(x[img_idx].sigmoid_() * y[img_idx].sigmoid_()) - for x, y in zip(pred_logits, pred_centerness) - ] - deltas_per_image = [x[img_idx] for x in pred_anchor_deltas] - results_per_image = self.inference_single_image( - anchors, scores_per_image, deltas_per_image, image_size - ) - results.append(results_per_image) - return results - - def inference_single_image( - self, - anchors: List[Boxes], - box_cls: List[Tensor], - box_delta: List[Tensor], - image_size: Tuple[int, int], - ): - """ - Identical to :meth:`RetinaNet.inference_single_image. - """ - pred = self._decode_multi_level_predictions( - anchors, - box_cls, - box_delta, - self.test_score_thresh, - self.test_topk_candidates, - image_size, - ) - keep = batched_nms( - pred.pred_boxes.tensor, pred.scores, pred.pred_classes, self.test_nms_thresh - ) - return pred[keep[: self.max_detections_per_image]] - - -class FCOSHead(RetinaNetHead): - """ - The head used in :paper:`fcos`. It adds an additional centerness - prediction branch on top of :class:`RetinaNetHead`. - """ - - def __init__(self, *, input_shape: List[ShapeSpec], conv_dims: List[int], **kwargs): - super().__init__(input_shape=input_shape, conv_dims=conv_dims, num_anchors=1, **kwargs) - # Unlike original FCOS, we do not add an additional learnable scale layer - # because it's found to have no benefits after normalizing regression targets by stride. - self._num_features = len(input_shape) - self.ctrness = nn.Conv2d(conv_dims[-1], 1, kernel_size=3, stride=1, padding=1) - torch.nn.init.normal_(self.ctrness.weight, std=0.01) - torch.nn.init.constant_(self.ctrness.bias, 0) - - def forward(self, features): - assert len(features) == self._num_features - logits = [] - bbox_reg = [] - ctrness = [] - for feature in features: - logits.append(self.cls_score(self.cls_subnet(feature))) - bbox_feature = self.bbox_subnet(feature) - bbox_reg.append(self.bbox_pred(bbox_feature)) - ctrness.append(self.ctrness(bbox_feature)) - return logits, bbox_reg, ctrness diff --git a/spaces/Benson/text-generation/Examples/Blockman Ir Aventura Hack Apk 2022 Cubos Ilimitados.md b/spaces/Benson/text-generation/Examples/Blockman Ir Aventura Hack Apk 2022 Cubos Ilimitados.md deleted file mode 100644 index 936b803a744f204b10a5cd5ac8a05433ad086bec..0000000000000000000000000000000000000000 --- a/spaces/Benson/text-generation/Examples/Blockman Ir Aventura Hack Apk 2022 Cubos Ilimitados.md +++ /dev/null @@ -1,83 +0,0 @@ -
-

Blockman Go aventura Hack APK 2022 cubos ilimitados

-

¿Te encanta jugar juegos basados en bloques con tus amigos? ¿Quieres explorar diferentes mundos y completar varios desafíos? Si es así, entonces deberías probar Blockman Go Adventure, un juego divertido y adictivo que te permite crear tu propio avatar, personalizar tus ajustes y unirte a millones de jugadores en línea. ¡Pero espera, hay más! También puede utilizar Blockman Go Aventura Hack APK, una versión modificada del juego que le da cubos ilimitados, monedas, gemas, y otros recursos. En este artículo, le diremos todo lo que necesita saber sobre Blockman Go Adventure y Blockman Go Adventure Hack APK, incluyendo sus características, cómo jugarlos, y algunos consejos y trucos para aprovechar al máximo su experiencia de juego. ¡Vamos a empezar!

-

blockman ir aventura hack apk 2022 cubos ilimitados


Download ★★★★★ https://bltlly.com/2v6KrL



-

¿Qué es Blockman Go Adventure?

-

Blockman Go Adventure es un juego online gratuito que combina elementos de sandbox, aventura y juegos sociales. Está desarrollado por Blockman GO Studio, un equipo de desarrolladores de juegos creativos y apasionados que tienen como objetivo proporcionar juegos de alta calidad para jugadores de todas las edades. Blockman Go Adventure es uno de sus juegos más populares, con más de 10 millones de descargas en Google Play Store y una calificación de 4.4 estrellas.

-

Características de Blockman Go Adventure

-

Blockman Go Adventure tiene muchas características que lo hacen un juego agradable y atractivo para todos. Algunas de estas características son:

-
    -
  • Múltiples minijuegos: Puedes elegir entre más de 100 minijuegos que se adapten a tus preferencias y habilidades. Si te gustan los juegos de carreras, disparos, parkour o rompecabezas, encontrarás algo que te interesa en Blockman Go Adventure.
  • -
  • Diversos mundos: Puedes explorar diferentes mundos que tienen sus propios temas, entornos y desafíos. Puedes visitar el castillo medieval, la ciudad futurista, la isla tropical, y más.
  • - -
  • Interacción social: Puedes chatear con otros jugadores en tiempo real usando mensajes de voz o texto. También puedes hacer amigos, enviar regalos y unirte a clanes.
  • -
  • Sistema de recompensas: Puedes ganar monedas y gemas jugando minijuegos, completando tareas e iniciando sesión diariamente. Puedes usar estas monedas para comprar nuevos artículos para tu avatar o actualizar los existentes.
  • -
-

Cómo jugar Blockman Go Aventura

-

Jugar Blockman Go Adventure es fácil y divertido. Estos son los pasos a seguir:

-
    -
  1. Descargar e instalar el juego desde Google Play Store o App Store.
  2. -
  3. Crea una cuenta o inicia sesión con la existente.
  4. -
  5. Selecciona un mini-juego desde el lobby o crea tu propia habitación.
  6. -
  7. Invita a tus amigos o únete a otros jugadores en línea.
  8. -
  9. Disfruta del juego y chatea con otros jugadores.
  10. -
-

¿Qué es Blockman Go Aventura Hack APK?

-

Blockman Go Aventura Hack APK es una versión modificada del juego original que le da acceso a recursos y características ilimitadas. No está disponible en las tiendas de aplicaciones oficiales, pero se puede descargar desde sitios web de terceros. Sin embargo, debe tener cuidado al descargar estos archivos, ya que pueden contener virus o malware que pueden dañar su dispositivo o robar su información personal.

-

Beneficios de Blockman Go Aventura Hack APK

Algunos de los beneficios de Blockman Go Aventura Hack APK son:

-

-
    -
  • Cubos ilimitados: Puedes obtener cubos ilimitados, que son la moneda premium del juego. Puedes usar cubos para comprar artículos especiales, como membresía VIP, bolsas de la suerte y pieles exclusivas.
  • -
  • Monedas y gemas ilimitadas: También puedes obtener monedas y gemas ilimitadas, que son las monedas regulares del juego. Puedes usar monedas y gemas para comprar más atuendos, peinados, accesorios y pieles para tu avatar.
  • - -
  • Libre y fácil de usar: Usted no necesita raíz o jailbreak su dispositivo para utilizar Blockman Go Aventura Hack APK. Solo tienes que descargar e instalar el archivo, y estás listo para ir. No necesitas pagar nada ni completar ninguna encuesta para usar el hack.
  • -
-

Cómo descargar e instalar Blockman Go Aventura Hack APK

-

Si desea probar Blockman Go Aventura Hack APK, es necesario seguir estos pasos:

-
    -
  1. Ir a un sitio web confiable que ofrece Blockman Go Aventura Hack APK, tales como [HackDL] o [APKPure].
  2. -
  3. Haga clic en el botón de descarga y espere a que se descargue el archivo.
  4. -
  5. Ir a la configuración de su dispositivo y permitir la instalación de aplicaciones de fuentes desconocidas.
  6. -
  7. Busque el archivo descargado y toque en él para iniciar el proceso de instalación.
  8. -
  9. Siga las instrucciones en la pantalla y espere a que se complete la instalación.
  10. -
  11. Iniciar el juego y disfrutar del hack.
  12. -
-

Consejos y trucos para Blockman Go Aventura

-

Para hacer tu experiencia de juego más divertida y gratificante, aquí hay algunos consejos y trucos que puedes usar en Blockman Go Adventure:

-

Usa el menú mod para personalizar tu juego

-

Si usted está utilizando Blockman Go Aventura Hack APK, puede utilizar el menú mod para cambiar la configuración de juego de acuerdo a sus preferencias. Por ejemplo, puede aumentar su velocidad, saltar más alto, volar en el aire o volverse invisible. También puede deshabilitar algunas funciones que no le gustan, como anuncios, protección contra van o actualización automática. Sin embargo, debes tener cuidado al usar el menú mod, ya que algunos ajustes pueden causar fallas o errores en el juego. También debes evitar usarlo en salas públicas, ya que otros jugadores pueden reportarte por hacer trampa.

- -

Únete a un clan y juega con amigos

Otra forma de disfrutar de Blockman Go Adventure es unirse a un clan y jugar con amigos. Un clan es un grupo de jugadores que comparten un interés o objetivo común en el juego. Puedes unirte a un clan existente o crear uno propio. Al unirte a un clan, puedes chatear con otros miembros, enviar regalos, participar en guerras de clanes y ganar puntos de clan. También puedes invitar a tus amigos a unirse a tu clan o jugar con ellos en habitaciones privadas. Jugar con amigos puede hacer que el juego sea más divertido y social.

-

Conclusión

-

Blockman Go Adventure es un gran juego para cualquiera que ame los juegos basados en bloques con mucha variedad y creatividad. Puedes jugar diferentes minijuegos, explorar diferentes mundos, personalizar tu avatar e interactuar con otros jugadores en línea. También puede utilizar Blockman Go Aventura Hack APK para obtener recursos ilimitados y características que pueden mejorar su experiencia de juego. Sin embargo, debe tener cuidado al descargar e instalar estos archivos, ya que pueden contener virus o malware que pueden dañar su dispositivo o robar su información personal. También debe utilizar el truco de forma responsable y no abusar de él en las salas públicas o contra otros jugadores.

-

Resumen de los puntos principales

En este artículo, hemos cubierto los siguientes puntos:

-
    -
  • Blockman Go Adventure es un juego en línea gratuito que combina elementos de sandbox, aventura y juegos sociales.
  • Blockman Go Aventura Hack APK es una versión modificada del juego que le da cubos ilimitados, monedas, gemas y otros recursos.
  • -
  • Puede descargar e instalar Blockman Go Aventura Hack APK de sitios web de terceros, pero usted debe tener cuidado con los virus y el malware.
  • -
  • Puedes usar el menú mod para personalizar la configuración de tu juego, como velocidad, gravedad, invisibilidad y más.
  • -
  • Puedes recoger monedas y gemas para desbloquear nuevos objetos para tu avatar o actualizar los existentes.
  • - -
-

Llamada a la acción

-

Si usted está interesado en jugar Blockman Go Aventura o Blockman Go Aventura Hack APK, puede descargarlos de los enlaces a continuación. También puede visitar el sitio web oficial o las páginas de redes sociales de Blockman GO Studio para obtener más información sobre sus juegos y actualizaciones. ¡Diviértete y disfruta de la aventura!

-
    -
  • Descargar Blockman Go Adventure de Google Play Store
  • -
  • Descargar Blockman Go Adventure de App Store
  • -
  • Descargar Blockman Go aventura Hack APK de HackDL
  • -
  • Descargar Blockman Go aventura Hack APK de APKPure
  • -
  • Visite el sitio web oficial de Blockman GO Studio
  • -
  • Sigue a Blockman GO Studio en Twitter
  • -
  • https://bltlly.com/2v6LBt



    -

    ¿Por qué jugar Ultimate Car Driving Simulator en PC?

    -

    Si bien Ultimate Car Driving Simulator es un gran juego para jugar en su dispositivo móvil, es posible que se pregunte por qué debe jugar en su PC. Bueno, hay muchas razones para hacerlo, como:

    -
      -
    • Mejores gráficos y calidad de sonido: Jugando Ultimate Car Driving Simulator en PC le permitirá disfrutar de las impresionantes imágenes y efectos de sonido realistas del juego en alta resolución y pantalla completa. Usted será capaz de apreciar los detalles de los coches, los entornos, los efectos meteorológicos, etc. más claramente y sumergirse en el mundo del juego.
    • - -
    • Pantalla más grande y más divertido: Jugar Ultimate Car Driving Simulator en PC también hará que su experiencia de juego sea más divertida y agradable. Puedes jugar el juego en una pantalla más grande y compartirlo con tus amigos y familiares. También puede grabar su juego, tomar capturas de pantalla, transmitir en línea, chatear con otros jugadores, etc. con facilidad.
    • -
    -

    Como puedes ver, jugar Ultimate Car Driving Simulator en PC tiene muchas ventajas que mejorarán tu experiencia de juego. Entonces, ¿cómo se puede descargar y jugar Ultimate Car Driving Simulator en PC? Hay dos métodos principales que explicaremos en las siguientes secciones.

    -

    Cómo jugar último coche conducción simulador en PC con Windows 11

    -

    Si tienes un PC con Windows 11, estás de suerte porque puedes usar la función nativa de emulación de Android que viene con el nuevo sistema operativo. Esta función le permite ejecutar aplicaciones y juegos de Android en su PC sin ningún software o hardware adicional. Estos son los pasos para jugar Ultimate Car Driving Simulator en PC con Windows 11:

    -
      -
    1. Abra la aplicación de Microsoft Store en su PC con Windows 11 y busque Simulador de conducción de automóviles definitivo. Alternativamente, puedes usar este enlace para ir directamente a la página del juego.
    2. -
    3. Haga clic en el botón Instalar para descargar e instalar el juego en su PC. Es posible que necesite iniciar sesión con su cuenta de Microsoft si aún no lo ha hecho.
    4. -
    5. Inicie el juego desde el menú Inicio o el acceso directo del escritorio. Verá una ventana emergente que le pide que habilite las aplicaciones de Android en su PC. Haga clic en Activar.
    6. -
    7. Inicia sesión con tu cuenta de Google para acceder a los Servicios de Google Play y sincronizar tus datos de juego y logros. Puede usar una cuenta existente o crear una nueva.
    8. - -
    -

    ¡Eso es todo! Has descargado y jugado con éxito Ultimate Car Driving Simulator en PC con la función de emulación nativa de Windows 11 para Android. Sin embargo, si no tiene un PC con Windows 11 o prefiere otro método, puede usar un emulador de Android para PC en su lugar.

    -

    Cómo jugar Ultimate Car Driving Simulator en PC con emuladores de Android

    -

    Un emulador de Android es un programa de software que simula un dispositivo Android en su PC. Le permite ejecutar aplicaciones y juegos de Android en su PC con características y funciones similares como un dispositivo Android real. Hay muchos emuladores de Android para PC disponibles en línea, pero no todos ellos son compatibles o optimizados para juegos. Por lo tanto, hemos seleccionado algunos de los mejores emuladores de Android para PC que puede utilizar para jugar Ultimate Car Driving Simulator en PC. Son:

    - -
NombreDescripciónProsContras
BluestacksUn emulador de Android popular y potente para PC que ha sido diseñado para juegos. Tiene una interfaz fácil de usar y muchas características y opciones para mejorar su experiencia de juego -
    -
  • Soporta juegos de gama alta con gráficos y rendimiento altos
  • -
  • Ofrece una variedad de modos de juego, como Eco Mode, Multi-Instance, Macro Recorder, etc.
  • -
  • Tiene una tienda de aplicaciones incorporada y un centro de juegos con miles de juegos
  • -
  • Permite personalizar los controles, ajustes y preferencias del emulador y el juego
  • -
  • Tiene una gran y activa comunidad de usuarios y desarrolladores
  • -
-
-
    -
  • Requiere un PC de gama alta con al menos 4GB de RAM y una GPU dedicada
  • -
  • Consume muchos recursos de CPU y memoria
  • -
  • Puede tener problemas de compatibilidad con algunos juegos o aplicaciones
  • -
  • Puede tener anuncios o ventanas emergentes que pueden ser molestos o intrusivos
  • -
  • Puede tener riesgos de seguridad o privacidad si no se descarga desde el sitio web oficial
  • -
-
Un emulador de Android rápido y suave para PC que también está diseñado para juegos. Tiene una interfaz simple e intuitiva y muchas características y opciones para mejorar tu experiencia de juego -
    -
  • Soporta la mayoría de los juegos con altos gráficos y rendimiento
  • -
  • Ofrece una variedad de modos de juego, como Control de teclado, Registro de guiones, Multi-Drive, etc.
  • -
  • Tiene una tienda de aplicaciones incorporada y un centro de juegos con miles de juegos
  • -
  • Permite personalizar los controles, ajustes y preferencias del emulador y el juego
  • -
  • Tiene una gran y activa comunidad de usuarios y desarrolladores
  • -
-
-
    -
  • Requiere un PC de gama alta con al menos 2GB de RAM y una GPU dedicada
  • -
  • Consume muchos recursos de CPU y memoria
  • -
  • Puede tener problemas de compatibilidad con algunos juegos o aplicaciones
  • -
  • Puede tener anuncios o ventanas emergentes que pueden ser molestos o intrusivos
  • -
  • Puede tener riesgos de seguridad o privacidad si no se descarga desde el sitio web oficial
  • -
-
GameloopUn emulador de Android potente y optimizado para PC que está especialmente diseñado para juegos. Tiene una interfaz moderna y elegante y un montón de características y opciones para mejorar su experiencia de juego -
    -
  • Soporta la mayoría de los juegos con gráficos y rendimiento altos, especialmente juegos FPS y MOBA
  • -
  • Ofrece una variedad de modos de juego, como Modo Turbo, Modo Inteligente, Modo Esports, etc.
  • -
  • Tiene una tienda de aplicaciones incorporada y un centro de juegos con miles de juegos
  • -
  • Permite personalizar los controles, ajustes y preferencias del emulador y el juego
  • -
  • Tiene una gran y activa comunidad de usuarios y desarrolladores
-
    -
  • Requiere un PC de gama alta con al menos 4GB de RAM y una GPU dedicada
  • -
  • Consume muchos recursos de CPU y memoria
  • -
  • Puede tener problemas de compatibilidad con algunos juegos o aplicaciones
  • -
  • Puede tener anuncios o ventanas emergentes que pueden ser molestos o intrusivos
  • - -
-
CategoríaRecetaTiempo de cocción
AperitivosPatatas fritas de freidora de aire40 minutos
AperitivosEspárragos de freidora de aire20 minutos
Platos principalesChuletas de cerdo de freidora de aire20 minutos
Platos principalesPizza de freidora de aire10 minutos
PostresAire freidora Mini pastel de chocolate oscuro25 minutos
PostresCruasanes de queso crema de cereza con freidora de aire15 minutos
Plataforma o servicioCaracterísticas y beneficios
Música de Apple- Ofrece descargas ilimitadas y transmisiones de más de 75 millones de canciones, incluyendo "Amantes y Mejores Amigos" por Azana.
- Soporta la escucha sin conexión en múltiples dispositivos.
- Proporciona recomendaciones personalizadas, listas de reproducción, estaciones de radio y podcasts.
- Cuesta $9.99 por mes para los individuos, $14.99 por mes para las familias, o $4.99 por mes para los estudiantes.
- Ofrece una prueba gratuita durante tres meses.
Spotify- Ofrece transmisiones ilimitadas de más de 70 millones de canciones, incluyendo "Amantes y mejores amigos" por Azana.
- Permite descargas de hasta 10.000 canciones por dispositivo para usuarios premium.
- Proporciona recomendaciones personalizadas, listas de reproducción, estaciones de radio, podcasts y videos.
- Cuesta $9.99 por mes para los individuos, $14.99 por mes para las familias, o $4.99 por mes para los estudiantes.
- Ofrece una versión gratuita con anuncios y características limitadas.
Música de YouTube- Ofrece transmisiones ilimitadas de más de 60 millones de canciones, incluyendo "Amantes y mejores amigos" por Azana.
- Permite descargas de hasta 100.000 canciones por dispositivo para usuarios premium.
- Proporciona recomendaciones personalizadas, listas de reproducción, estaciones de radio, podcasts y videos.
- Cuesta $9.99 por mes para individuos o $14.99 por mes para familias.
- Ofrece una versión gratuita con anuncios y características limitadas.
Deezer
- - - -
- -
-
    -
  • Colab Notebook Google Colab
  • -
  • Blender Plugin Blender
  • -
  • Docker Image Docker
  • -
  • Windows Setup
  • -
- -
-Twitter Follow
- -
- - - -
- - -#### Citation -``` -@inproceedings{xiu2023econ, - title = {{ECON: Explicit Clothed humans Optimized via Normal integration}}, - author = {Xiu, Yuliang and Yang, Jinlong and Cao, Xu and Tzionas, Dimitrios and Black, Michael J.}, - booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)}, - month = {June}, - year = {2023}, -} -``` - - -
- -More - -#### Acknowledgments: -- [controlnet-openpose](https://huggingface.co/spaces/diffusers/controlnet-openpose) -- [TEXTure](https://huggingface.co/spaces/TEXTurePaper/TEXTure) - - -#### Image Credits - -* [Pinterest](https://www.pinterest.com/search/pins/?q=parkour&rs=sitelinks_searchbox) - -#### Related works - -* [ICON @ MPI-IS](https://icon.is.tue.mpg.de/) -* [MonoPort @ USC](https://xiuyuliang.cn/monoport) -* [Phorhum @ Google](https://phorhum.github.io/) -* [PIFuHD @ Meta](https://shunsukesaito.github.io/PIFuHD/) -* [PaMIR @ Tsinghua](http://www.liuyebin.com/pamir/pamir.html) - -
- -
- -

Generate pose & prompt-guided images / Upload photos / Use examples → Submit Image (~3min) → Generate Video (~3min)

-

ECON is only suitable for humanoid images and will not work well on cartoons with non-human shapes.

-
-''' - -from controlnet_aux import OpenposeDetector -from diffusers import StableDiffusionControlNetPipeline, ControlNetModel -from diffusers import UniPCMultistepScheduler -import gradio as gr -import torch -import base64 -from io import BytesIO -from PIL import Image - -# live conditioning -canvas_html = "" -load_js = """ -async () => { - const url = "https://huggingface.co/datasets/radames/gradio-components/raw/main/pose-gradio.js" - fetch(url) - .then(res => res.text()) - .then(text => { - const script = document.createElement('script'); - script.type = "module" - script.src = URL.createObjectURL(new Blob([text], { type: 'application/javascript' })); - document.head.appendChild(script); - }); -} -""" -get_js_image = """ -async (image_in_img, prompt, image_file_live_opt, live_conditioning) => { - const canvasEl = document.getElementById("canvas-root"); - const data = canvasEl? canvasEl._data : null; - return [image_in_img, prompt, image_file_live_opt, data] -} -""" - -# Constants -low_threshold = 100 -high_threshold = 200 -default_step = 50 -cached = False - -# Models -pose_model = OpenposeDetector.from_pretrained("lllyasviel/ControlNet") -controlnet = ControlNetModel.from_pretrained( - "lllyasviel/sd-controlnet-openpose", torch_dtype=torch.float16 -) -pipe = StableDiffusionControlNetPipeline.from_pretrained( - "runwayml/stable-diffusion-v1-5", - controlnet=controlnet, - safety_checker=None, - torch_dtype=torch.float16 -) -pipe.scheduler = UniPCMultistepScheduler.from_config(pipe.scheduler.config) - -# This command loads the individual model components on GPU on-demand. So, we don't -# need to explicitly call pipe.to("cuda"). -pipe.enable_model_cpu_offload() - -# xformers -pipe.enable_xformers_memory_efficient_attention() - -# Generator seed, -generator = torch.manual_seed(0) - -hint_prompts = ''' -Hints:
-best quality, extremely detailed, solid color background, -super detail, high detail, edge lighting, soft focus, -light and dark contrast, 8k, edge lighting, 3d, c4d, -blender, oc renderer, ultra high definition, 3d rendering -''' - - -def get_pose(image): - return pose_model(image) - - -# def generate_texture(input_shape, text, seed, guidance_scale): -# iface = gr.Interface.load("spaces/TEXTurePaper/TEXTure") -# output_shape = iface(input_shape, text, seed, guidance_scale) -# return output_shape - - -def generate_images(image, prompt, image_file_live_opt='file', live_conditioning=None): - if image is None and 'image' not in live_conditioning: - raise gr.Error("Please provide an image") - try: - if image_file_live_opt == 'file': - pose = get_pose(image) - elif image_file_live_opt == 'webcam': - base64_img = live_conditioning['image'] - image_data = base64.b64decode(base64_img.split(',')[1]) - pose = Image.open(BytesIO(image_data)).convert('RGB').resize((512, 512)) - output = pipe( - prompt, - pose, - generator=generator, - num_images_per_prompt=3, - num_inference_steps=20, - ) - all_outputs = [] - all_outputs.append(pose) - for image in output.images: - all_outputs.append(image) - return all_outputs, all_outputs - except Exception as e: - raise gr.Error(str(e)) - - -def toggle(choice): - if choice == "file": - return gr.update(visible=True, value=None), gr.update(visible=False, value=None) - elif choice == "webcam": - return gr.update(visible=False, value=None), gr.update(visible=True, value=canvas_html) - - -examples_pose = glob.glob('examples/pose/*') -examples_cloth = glob.glob('examples/cloth/*') - -with gr.Blocks() as demo: - gr.Markdown(description) - - out_lst = [] - with gr.Row(): - with gr.Column(): - with gr.Row(): - - live_conditioning = gr.JSON(value={}, visible=False) - - with gr.Column(): - image_file_live_opt = gr.Radio(["file", "webcam"], - value="file", - label="How would you like to upload your image?") - - with gr.Row(): - image_in_img = gr.Image( - source="upload", visible=True, type="pil", label="Image for Pose" - ) - canvas = gr.HTML(None, elem_id="canvas_html", visible=False) - - image_file_live_opt.change( - fn=toggle, - inputs=[image_file_live_opt], - outputs=[image_in_img, canvas], - queue=False - ) - prompt = gr.Textbox( - label="Enter your prompt to synthesise the image", - max_lines=10, - placeholder="best quality, extremely detailed", - ) - - gr.Markdown(hint_prompts) - - with gr.Column(): - gallery = gr.Gallery(label="Generated Images").style(grid=[2], height="auto") - gallery_cache = gr.State() - - gr.Markdown( - ''' -
- Click the target generated image for Reconstruction.
- ↓ -
- ''' - ) - - inp = gr.Image(type="filepath", label="Input Image for Reconstruction") - fitting_step = gr.inputs.Slider( - 10, - 100, - step=10, - label='Fitting steps (Slower yet Better-aligned SMPL-X)', - default=default_step - ) - - with gr.Row(): - btn_sample = gr.Button("Generate Image") - btn_submit = gr.Button("Submit Image (~3min)") - - btn_sample.click( - fn=generate_images, - inputs=[image_in_img, prompt, image_file_live_opt, live_conditioning], - outputs=[gallery, gallery_cache], - _js=get_js_image - ) - - def get_select_index(cache, evt: gr.SelectData): - return cache[evt.index] - - gallery.select( - fn=get_select_index, - inputs=[gallery_cache], - outputs=[inp], - ) - - with gr.Row(): - - gr.Examples( - examples=list(examples_pose), - inputs=[inp], - cache_examples=cached, - fn=None, - outputs=None, - label="Hard Pose Examples" - ) - - gr.Examples( - examples=list(examples_cloth), - inputs=[inp], - cache_examples=cached, - fn=None, - outputs=None, - label="Loose Cloth Examples" - ) - - out_vid = gr.Video(label="Shared on Twitter with #ECON") - - with gr.Column(): - overlap_inp = gr.Image(type="filepath", label="Image Normal Overlap").style(height=400) - out_final = gr.Model3D( - clear_color=[0.0, 0.0, 0.0, 0.0], label="Clothed human", elem_id="avatar" - ) - out_smpl = gr.Model3D( - clear_color=[0.0, 0.0, 0.0, 0.0], label="SMPL-X body", elem_id="avatar" - ) - - vis_tensor_path = gr.State() - - with gr.Row(): - btn_video = gr.Button("Generate Video (~3min)") - - out_lst = [out_smpl, out_final, overlap_inp, vis_tensor_path] - - btn_video.click( - fn=generate_video, - inputs=[vis_tensor_path], - outputs=[out_vid], - ) - - btn_submit.click(fn=generate_model, inputs=[inp, fitting_step], outputs=out_lst) - - demo.load(None, None, None, _js=load_js) - -if __name__ == "__main__": - - demo.queue(concurrency_count=1) - demo.launch(debug=True, enable_queue=True) \ No newline at end of file diff --git a/spaces/Zengyf-CVer/Streamlit_YOLOv5_Model2x/utils/callbacks.py b/spaces/Zengyf-CVer/Streamlit_YOLOv5_Model2x/utils/callbacks.py deleted file mode 100644 index 166d8938322d4b35783be4068ae9561f66c94749..0000000000000000000000000000000000000000 --- a/spaces/Zengyf-CVer/Streamlit_YOLOv5_Model2x/utils/callbacks.py +++ /dev/null @@ -1,76 +0,0 @@ -# YOLOv5 🚀 by Ultralytics, GPL-3.0 license -""" -Callback utils -""" - -import threading - - -class Callbacks: - """" - Handles all registered callbacks for YOLOv5 Hooks - """ - - def __init__(self): - # Define the available callbacks - self._callbacks = { - 'on_pretrain_routine_start': [], - 'on_pretrain_routine_end': [], - 'on_train_start': [], - 'on_train_epoch_start': [], - 'on_train_batch_start': [], - 'optimizer_step': [], - 'on_before_zero_grad': [], - 'on_train_batch_end': [], - 'on_train_epoch_end': [], - 'on_val_start': [], - 'on_val_batch_start': [], - 'on_val_image_end': [], - 'on_val_batch_end': [], - 'on_val_end': [], - 'on_fit_epoch_end': [], # fit = train + val - 'on_model_save': [], - 'on_train_end': [], - 'on_params_update': [], - 'teardown': [],} - self.stop_training = False # set True to interrupt training - - def register_action(self, hook, name='', callback=None): - """ - Register a new action to a callback hook - - Args: - hook: The callback hook name to register the action to - name: The name of the action for later reference - callback: The callback to fire - """ - assert hook in self._callbacks, f"hook '{hook}' not found in callbacks {self._callbacks}" - assert callable(callback), f"callback '{callback}' is not callable" - 
self._callbacks[hook].append({'name': name, 'callback': callback}) - - def get_registered_actions(self, hook=None): - """" - Returns all the registered actions by callback hook - - Args: - hook: The name of the hook to check, defaults to all - """ - return self._callbacks[hook] if hook else self._callbacks - - def run(self, hook, *args, thread=False, **kwargs): - """ - Loop through the registered actions and fire all callbacks on main thread - - Args: - hook: The name of the hook to check, defaults to all - args: Arguments to receive from YOLOv5 - thread: (boolean) Run callbacks in daemon thread - kwargs: Keyword Arguments to receive from YOLOv5 - """ - - assert hook in self._callbacks, f"hook '{hook}' not found in callbacks {self._callbacks}" - for logger in self._callbacks[hook]: - if thread: - threading.Thread(target=logger['callback'], args=args, kwargs=kwargs, daemon=True).start() - else: - logger['callback'](*args, **kwargs) diff --git a/spaces/abhijithkota/my_gen_ai_page/app.py b/spaces/abhijithkota/my_gen_ai_page/app.py deleted file mode 100644 index a362dcc7d0ddd1eee86961f1bc3db6d894fbd3d5..0000000000000000000000000000000000000000 --- a/spaces/abhijithkota/my_gen_ai_page/app.py +++ /dev/null @@ -1,34 +0,0 @@ -import os -import gradio as gr -from langchain.chat_models import ChatOpenAI -from langchain import LLMChain, PromptTemplate -from langchain.memory import ConversationBufferMemory - -OPENAI_API_KEY=os.getenv('OPENAI_API_KEY') - -template = """You are a helpful assistant to answer all user queries. -{chat_history} -User: {user_message} -Chatbot:""" - -prompt = PromptTemplate( - input_variables=["chat_history", "user_message"], template=template -) - -memory = ConversationBufferMemory(memory_key="chat_history") - -llm_chain = LLMChain( - llm=ChatOpenAI(temperature='0.5', model_name="gpt-3.5-turbo"), - prompt=prompt, - verbose=True, - memory=memory, -) - -def get_text_response(user_message,history): - response = llm_chain.predict(user_message = user_message) - return response - -demo = gr.ChatInterface(get_text_response) - -if __name__ == "__main__": - demo.launch() #To create a public link, set `share=True` in `launch()`. To enable errors and logs, set `debug=True` in `launch()`. 
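For reference, here is a minimal usage sketch of the `Callbacks` helper from the deleted YOLOv5 `utils/callbacks.py` above. The hook name `'on_train_start'` and the method signatures come from that class definition; the import path and the `log_start` function are illustrative assumptions, not part of the original files.

```python
# Sketch only: exercising the Callbacks class defined in the deleted callbacks.py.
from utils.callbacks import Callbacks  # assumed import path, mirroring the deleted file's location


def log_start(*args, **kwargs):
    # Illustrative action fired when the 'on_train_start' hook runs.
    print("training started", args, kwargs)


callbacks = Callbacks()
# register_action() asserts the hook exists in _callbacks and that the callback is callable.
callbacks.register_action("on_train_start", name="log_start", callback=log_start)

# run() loops over the actions registered for the hook and fires each one;
# pass thread=True to run them in daemon threads instead of on the main thread.
callbacks.run("on_train_start", 42, epoch=0)
```

In the upstream YOLOv5 code the trainer fires hooks such as `'on_train_batch_end'` from the training loop in the same way.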
diff --git a/spaces/abhishek/sketch-to-image/annotator/uniformer/mmdet_null/models/backbones/__init__.py b/spaces/abhishek/sketch-to-image/annotator/uniformer/mmdet_null/models/backbones/__init__.py deleted file mode 100644 index e54b088acf644d285ecbeb1440c414e722b9db58..0000000000000000000000000000000000000000 --- a/spaces/abhishek/sketch-to-image/annotator/uniformer/mmdet_null/models/backbones/__init__.py +++ /dev/null @@ -1,20 +0,0 @@ -from .darknet import Darknet -from .detectors_resnet import DetectoRS_ResNet -from .detectors_resnext import DetectoRS_ResNeXt -from .hourglass import HourglassNet -from .hrnet import HRNet -from .regnet import RegNet -from .res2net import Res2Net -from .resnest import ResNeSt -from .resnet import ResNet, ResNetV1d -from .resnext import ResNeXt -from .ssd_vgg import SSDVGG -from .trident_resnet import TridentResNet -from .swin_transformer import SwinTransformer -from .uniformer import UniFormer - -__all__ = [ - 'RegNet', 'ResNet', 'ResNetV1d', 'ResNeXt', 'SSDVGG', 'HRNet', 'Res2Net', - 'HourglassNet', 'DetectoRS_ResNet', 'DetectoRS_ResNeXt', 'Darknet', - 'ResNeSt', 'TridentResNet', 'SwinTransformer', 'UniFormer' -] diff --git a/spaces/abhishek/sketch-to-image/annotator/uniformer/mmdet_null/models/necks/nasfcos_fpn.py b/spaces/abhishek/sketch-to-image/annotator/uniformer/mmdet_null/models/necks/nasfcos_fpn.py deleted file mode 100644 index 2daf79ef591373499184c624ccd27fb7456dec06..0000000000000000000000000000000000000000 --- a/spaces/abhishek/sketch-to-image/annotator/uniformer/mmdet_null/models/necks/nasfcos_fpn.py +++ /dev/null @@ -1,161 +0,0 @@ -import torch.nn as nn -import torch.nn.functional as F -from mmcv.cnn import ConvModule, caffe2_xavier_init -from mmcv.ops.merge_cells import ConcatCell - -from ..builder import NECKS - - -@NECKS.register_module() -class NASFCOS_FPN(nn.Module): - """FPN structure in NASFPN. - - Implementation of paper `NAS-FCOS: Fast Neural Architecture Search for - Object Detection `_ - - Args: - in_channels (List[int]): Number of input channels per scale. - out_channels (int): Number of output channels (used at each scale) - num_outs (int): Number of output scales. - start_level (int): Index of the start input backbone level used to - build the feature pyramid. Default: 0. - end_level (int): Index of the end input backbone level (exclusive) to - build the feature pyramid. Default: -1, which means the last level. - add_extra_convs (bool): It decides whether to add conv - layers on top of the original feature maps. Default to False. - If True, its actual mode is specified by `extra_convs_on_inputs`. - conv_cfg (dict): dictionary to construct and config conv layer. - norm_cfg (dict): dictionary to construct and config norm layer. 
- """ - - def __init__(self, - in_channels, - out_channels, - num_outs, - start_level=1, - end_level=-1, - add_extra_convs=False, - conv_cfg=None, - norm_cfg=None): - super(NASFCOS_FPN, self).__init__() - assert isinstance(in_channels, list) - self.in_channels = in_channels - self.out_channels = out_channels - self.num_ins = len(in_channels) - self.num_outs = num_outs - self.norm_cfg = norm_cfg - self.conv_cfg = conv_cfg - - if end_level == -1: - self.backbone_end_level = self.num_ins - assert num_outs >= self.num_ins - start_level - else: - self.backbone_end_level = end_level - assert end_level <= len(in_channels) - assert num_outs == end_level - start_level - self.start_level = start_level - self.end_level = end_level - self.add_extra_convs = add_extra_convs - - self.adapt_convs = nn.ModuleList() - for i in range(self.start_level, self.backbone_end_level): - adapt_conv = ConvModule( - in_channels[i], - out_channels, - 1, - stride=1, - padding=0, - bias=False, - norm_cfg=dict(type='BN'), - act_cfg=dict(type='ReLU', inplace=False)) - self.adapt_convs.append(adapt_conv) - - # C2 is omitted according to the paper - extra_levels = num_outs - self.backbone_end_level + self.start_level - - def build_concat_cell(with_input1_conv, with_input2_conv): - cell_conv_cfg = dict( - kernel_size=1, padding=0, bias=False, groups=out_channels) - return ConcatCell( - in_channels=out_channels, - out_channels=out_channels, - with_out_conv=True, - out_conv_cfg=cell_conv_cfg, - out_norm_cfg=dict(type='BN'), - out_conv_order=('norm', 'act', 'conv'), - with_input1_conv=with_input1_conv, - with_input2_conv=with_input2_conv, - input_conv_cfg=conv_cfg, - input_norm_cfg=norm_cfg, - upsample_mode='nearest') - - # Denote c3=f0, c4=f1, c5=f2 for convince - self.fpn = nn.ModuleDict() - self.fpn['c22_1'] = build_concat_cell(True, True) - self.fpn['c22_2'] = build_concat_cell(True, True) - self.fpn['c32'] = build_concat_cell(True, False) - self.fpn['c02'] = build_concat_cell(True, False) - self.fpn['c42'] = build_concat_cell(True, True) - self.fpn['c36'] = build_concat_cell(True, True) - self.fpn['c61'] = build_concat_cell(True, True) # f9 - self.extra_downsamples = nn.ModuleList() - for i in range(extra_levels): - extra_act_cfg = None if i == 0 \ - else dict(type='ReLU', inplace=False) - self.extra_downsamples.append( - ConvModule( - out_channels, - out_channels, - 3, - stride=2, - padding=1, - act_cfg=extra_act_cfg, - order=('act', 'norm', 'conv'))) - - def forward(self, inputs): - """Forward function.""" - feats = [ - adapt_conv(inputs[i + self.start_level]) - for i, adapt_conv in enumerate(self.adapt_convs) - ] - - for (i, module_name) in enumerate(self.fpn): - idx_1, idx_2 = int(module_name[1]), int(module_name[2]) - res = self.fpn[module_name](feats[idx_1], feats[idx_2]) - feats.append(res) - - ret = [] - for (idx, input_idx) in zip([9, 8, 7], [1, 2, 3]): # add P3, P4, P5 - feats1, feats2 = feats[idx], feats[5] - feats2_resize = F.interpolate( - feats2, - size=feats1.size()[2:], - mode='bilinear', - align_corners=False) - - feats_sum = feats1 + feats2_resize - ret.append( - F.interpolate( - feats_sum, - size=inputs[input_idx].size()[2:], - mode='bilinear', - align_corners=False)) - - for submodule in self.extra_downsamples: - ret.append(submodule(ret[-1])) - - return tuple(ret) - - def init_weights(self): - """Initialize the weights of module.""" - for module in self.fpn.values(): - if hasattr(module, 'conv_out'): - caffe2_xavier_init(module.out_conv.conv) - - for modules in [ - self.adapt_convs.modules(), - 
self.extra_downsamples.modules() - ]: - for module in modules: - if isinstance(module, nn.Conv2d): - caffe2_xavier_init(module) diff --git a/spaces/abidlabs/voice-verification/app.py b/spaces/abidlabs/voice-verification/app.py deleted file mode 100644 index e9b74898909a74aca77fe698ad9414d67d2be84e..0000000000000000000000000000000000000000 --- a/spaces/abidlabs/voice-verification/app.py +++ /dev/null @@ -1,122 +0,0 @@ -import gradio as gr -import torch -from torchaudio.sox_effects import apply_effects_file -from transformers import AutoFeatureExtractor, AutoModelForAudioXVector -import os - -device = torch.device("cuda" if torch.cuda.is_available() else "cpu") - -STYLE = """ - -""" -OUTPUT_OK = ( - STYLE - + """ -
-

The speakers are

-

{:.1f}%

-

similar

-

Welcome, human!

-
(You must get at least 85% to be considered the same person)
-
-""" -) -OUTPUT_FAIL = ( - STYLE - + """ -
-

The speakers are

-

{:.1f}%

-

similar

-

You shall not pass!

-
(You must get at least 85% to be considered the same person)
-
-""" -) - -EFFECTS = [ - ["remix", "-"], - ["channels", "1"], - ["rate", "16000"], - ["gain", "-1.0"], - ["silence", "1", "0.1", "0.1%", "-1", "0.1", "0.1%"], - ["trim", "0", "10"], -] - -THRESHOLD = 0.85 - -model_name = "microsoft/unispeech-sat-base-plus-sv" -feature_extractor = AutoFeatureExtractor.from_pretrained(model_name) -model = AutoModelForAudioXVector.from_pretrained(model_name).to(device) -cosine_sim = torch.nn.CosineSimilarity(dim=-1) - - -def similarity_fn(path1, path2): - if not (path1 and path2): - return 'ERROR: Please record audio for *both* speakers!' - - wav1, _ = apply_effects_file(path1, EFFECTS) - wav2, _ = apply_effects_file(path2, EFFECTS) - print(wav1.shape, wav2.shape) - - input1 = feature_extractor(wav1.squeeze(0), return_tensors="pt", sampling_rate=16000).input_values.to(device) - input2 = feature_extractor(wav2.squeeze(0), return_tensors="pt", sampling_rate=16000).input_values.to(device) - - with torch.no_grad(): - emb1 = model(input1).embeddings - emb2 = model(input2).embeddings - emb1 = torch.nn.functional.normalize(emb1, dim=-1).cpu() - emb2 = torch.nn.functional.normalize(emb2, dim=-1).cpu() - similarity = cosine_sim(emb1, emb2).numpy()[0] - - if similarity >= THRESHOLD: - output = OUTPUT_OK.format(similarity * 100) - else: - output = OUTPUT_FAIL.format(similarity * 100) - - return output - - -inputs = [ - gr.inputs.Audio(source="microphone", type="filepath", optional=True, label="Speaker #1"), - gr.inputs.Audio(source="microphone", type="filepath", optional=True, label="Speaker #2"), -] -output = gr.outputs.HTML(label="") - - -description = ( - "This demo from Microsoft will compare two speech samples and determine if they are from the same speaker. " - "Try it with your own voice! If you find an incorrect prediction, you can click FLAG to save the recordings to a public dataset: " - "https://huggingface.co/datasets/abidlabs/voice-verification-adversarial-dataset, " - "consisting of samples on which the model makes mistakes, which may further improve research in this field. Disclaimer: this will " - "save the recordings to a PUBLIC dataset so please be careful about what you FLAG." -) -article = ( - "

" - "🎙️ Learn more about UniSpeech-SAT | " - "📚 UniSpeech-SAT paper | " - "📚 X-Vector paper" - "

" -) -examples = [ - ["samples/cate_blanch.mp3", "samples/cate_blanch_2.mp3"], - ["samples/cate_blanch.mp3", "samples/kirsten_dunst.wav"], -] - -HF_TOKEN = os.getenv('HF_TOKEN') -hf_saver = gr.HuggingFaceDatasetSaver(HF_TOKEN, "voice-verification-adversarial-dataset") - - -interface = gr.Interface( - fn=similarity_fn, - inputs=inputs, - outputs=output, - description=description, - title="Break this voice verification model!", - layout="horizontal", - theme="huggingface", - live=False, - examples=examples, - article="[Link to dataset](https://huggingface.co/datasets/abidlabs/voice-verification-adversarial-dataset)", -) -interface.launch(enable_queue=True) diff --git a/spaces/abrar-lohia/text-2-character-anim/VQTrans/visualize/joints2smpl/src/customloss.py b/spaces/abrar-lohia/text-2-character-anim/VQTrans/visualize/joints2smpl/src/customloss.py deleted file mode 100644 index 880ab4861c58cec9faeb086e430fde7387c5cc9e..0000000000000000000000000000000000000000 --- a/spaces/abrar-lohia/text-2-character-anim/VQTrans/visualize/joints2smpl/src/customloss.py +++ /dev/null @@ -1,222 +0,0 @@ -import torch -import torch.nn.functional as F -from visualize.joints2smpl.src import config - -# Guassian -def gmof(x, sigma): - """ - Geman-McClure error function - """ - x_squared = x ** 2 - sigma_squared = sigma ** 2 - return (sigma_squared * x_squared) / (sigma_squared + x_squared) - -# angle prior -def angle_prior(pose): - """ - Angle prior that penalizes unnatural bending of the knees and elbows - """ - # We subtract 3 because pose does not include the global rotation of the model - return torch.exp( - pose[:, [55 - 3, 58 - 3, 12 - 3, 15 - 3]] * torch.tensor([1., -1., -1, -1.], device=pose.device)) ** 2 - - -def perspective_projection(points, rotation, translation, - focal_length, camera_center): - """ - This function computes the perspective projection of a set of points. - Input: - points (bs, N, 3): 3D points - rotation (bs, 3, 3): Camera rotation - translation (bs, 3): Camera translation - focal_length (bs,) or scalar: Focal length - camera_center (bs, 2): Camera center - """ - batch_size = points.shape[0] - K = torch.zeros([batch_size, 3, 3], device=points.device) - K[:, 0, 0] = focal_length - K[:, 1, 1] = focal_length - K[:, 2, 2] = 1. 
- K[:, :-1, -1] = camera_center - - # Transform points - points = torch.einsum('bij,bkj->bki', rotation, points) - points = points + translation.unsqueeze(1) - - # Apply perspective distortion - projected_points = points / points[:, :, -1].unsqueeze(-1) - - # Apply camera intrinsics - projected_points = torch.einsum('bij,bkj->bki', K, projected_points) - - return projected_points[:, :, :-1] - - -def body_fitting_loss(body_pose, betas, model_joints, camera_t, camera_center, - joints_2d, joints_conf, pose_prior, - focal_length=5000, sigma=100, pose_prior_weight=4.78, - shape_prior_weight=5, angle_prior_weight=15.2, - output='sum'): - """ - Loss function for body fitting - """ - batch_size = body_pose.shape[0] - rotation = torch.eye(3, device=body_pose.device).unsqueeze(0).expand(batch_size, -1, -1) - - projected_joints = perspective_projection(model_joints, rotation, camera_t, - focal_length, camera_center) - - # Weighted robust reprojection error - reprojection_error = gmof(projected_joints - joints_2d, sigma) - reprojection_loss = (joints_conf ** 2) * reprojection_error.sum(dim=-1) - - # Pose prior loss - pose_prior_loss = (pose_prior_weight ** 2) * pose_prior(body_pose, betas) - - # Angle prior for knees and elbows - angle_prior_loss = (angle_prior_weight ** 2) * angle_prior(body_pose).sum(dim=-1) - - # Regularizer to prevent betas from taking large values - shape_prior_loss = (shape_prior_weight ** 2) * (betas ** 2).sum(dim=-1) - - total_loss = reprojection_loss.sum(dim=-1) + pose_prior_loss + angle_prior_loss + shape_prior_loss - - if output == 'sum': - return total_loss.sum() - elif output == 'reprojection': - return reprojection_loss - - -# --- get camera fitting loss ----- -def camera_fitting_loss(model_joints, camera_t, camera_t_est, camera_center, - joints_2d, joints_conf, - focal_length=5000, depth_loss_weight=100): - """ - Loss function for camera optimization. 
- """ - # Project model joints - batch_size = model_joints.shape[0] - rotation = torch.eye(3, device=model_joints.device).unsqueeze(0).expand(batch_size, -1, -1) - projected_joints = perspective_projection(model_joints, rotation, camera_t, - focal_length, camera_center) - - # get the indexed four - op_joints = ['OP RHip', 'OP LHip', 'OP RShoulder', 'OP LShoulder'] - op_joints_ind = [config.JOINT_MAP[joint] for joint in op_joints] - gt_joints = ['RHip', 'LHip', 'RShoulder', 'LShoulder'] - gt_joints_ind = [config.JOINT_MAP[joint] for joint in gt_joints] - - reprojection_error_op = (joints_2d[:, op_joints_ind] - - projected_joints[:, op_joints_ind]) ** 2 - reprojection_error_gt = (joints_2d[:, gt_joints_ind] - - projected_joints[:, gt_joints_ind]) ** 2 - - # Check if for each example in the batch all 4 OpenPose detections are valid, otherwise use the GT detections - # OpenPose joints are more reliable for this task, so we prefer to use them if possible - is_valid = (joints_conf[:, op_joints_ind].min(dim=-1)[0][:, None, None] > 0).float() - reprojection_loss = (is_valid * reprojection_error_op + (1 - is_valid) * reprojection_error_gt).sum(dim=(1, 2)) - - # Loss that penalizes deviation from depth estimate - depth_loss = (depth_loss_weight ** 2) * (camera_t[:, 2] - camera_t_est[:, 2]) ** 2 - - total_loss = reprojection_loss + depth_loss - return total_loss.sum() - - - - # #####--- body fitiing loss ----- -def body_fitting_loss_3d(body_pose, preserve_pose, - betas, model_joints, camera_translation, - j3d, pose_prior, - joints3d_conf, - sigma=100, pose_prior_weight=4.78*1.5, - shape_prior_weight=5.0, angle_prior_weight=15.2, - joint_loss_weight=500.0, - pose_preserve_weight=0.0, - use_collision=False, - model_vertices=None, model_faces=None, - search_tree=None, pen_distance=None, filter_faces=None, - collision_loss_weight=1000 - ): - """ - Loss function for body fitting - """ - batch_size = body_pose.shape[0] - - #joint3d_loss = (joint_loss_weight ** 2) * gmof((model_joints + camera_translation) - j3d, sigma).sum(dim=-1) - - joint3d_error = gmof((model_joints + camera_translation) - j3d, sigma) - - joint3d_loss_part = (joints3d_conf ** 2) * joint3d_error.sum(dim=-1) - joint3d_loss = ((joint_loss_weight ** 2) * joint3d_loss_part).sum(dim=-1) - - # Pose prior loss - pose_prior_loss = (pose_prior_weight ** 2) * pose_prior(body_pose, betas) - # Angle prior for knees and elbows - angle_prior_loss = (angle_prior_weight ** 2) * angle_prior(body_pose).sum(dim=-1) - # Regularizer to prevent betas from taking large values - shape_prior_loss = (shape_prior_weight ** 2) * (betas ** 2).sum(dim=-1) - - collision_loss = 0.0 - # Calculate the loss due to interpenetration - if use_collision: - triangles = torch.index_select( - model_vertices, 1, - model_faces).view(batch_size, -1, 3, 3) - - with torch.no_grad(): - collision_idxs = search_tree(triangles) - - # Remove unwanted collisions - if filter_faces is not None: - collision_idxs = filter_faces(collision_idxs) - - if collision_idxs.ge(0).sum().item() > 0: - collision_loss = torch.sum(collision_loss_weight * pen_distance(triangles, collision_idxs)) - - pose_preserve_loss = (pose_preserve_weight ** 2) * ((body_pose - preserve_pose) ** 2).sum(dim=-1) - - # print('joint3d_loss', joint3d_loss.shape) - # print('pose_prior_loss', pose_prior_loss.shape) - # print('angle_prior_loss', angle_prior_loss.shape) - # print('shape_prior_loss', shape_prior_loss.shape) - # print('collision_loss', collision_loss) - # print('pose_preserve_loss', pose_preserve_loss.shape) - - 
total_loss = joint3d_loss + pose_prior_loss + angle_prior_loss + shape_prior_loss + collision_loss + pose_preserve_loss - - return total_loss.sum() - - -# #####--- get camera fitting loss ----- -def camera_fitting_loss_3d(model_joints, camera_t, camera_t_est, - j3d, joints_category="orig", depth_loss_weight=100.0): - """ - Loss function for camera optimization. - """ - model_joints = model_joints + camera_t - # # get the indexed four - # op_joints = ['OP RHip', 'OP LHip', 'OP RShoulder', 'OP LShoulder'] - # op_joints_ind = [config.JOINT_MAP[joint] for joint in op_joints] - # - # j3d_error_loss = (j3d[:, op_joints_ind] - - # model_joints[:, op_joints_ind]) ** 2 - - gt_joints = ['RHip', 'LHip', 'RShoulder', 'LShoulder'] - gt_joints_ind = [config.JOINT_MAP[joint] for joint in gt_joints] - - if joints_category=="orig": - select_joints_ind = [config.JOINT_MAP[joint] for joint in gt_joints] - elif joints_category=="AMASS": - select_joints_ind = [config.AMASS_JOINT_MAP[joint] for joint in gt_joints] - else: - print("NO SUCH JOINTS CATEGORY!") - - j3d_error_loss = (j3d[:, select_joints_ind] - - model_joints[:, gt_joints_ind]) ** 2 - - # Loss that penalizes deviation from depth estimate - depth_loss = (depth_loss_weight**2) * (camera_t - camera_t_est)**2 - - total_loss = j3d_error_loss + depth_loss - return total_loss.sum() diff --git a/spaces/achyuth1344/stable-diffusion-web-ui/header_patch.py b/spaces/achyuth1344/stable-diffusion-web-ui/header_patch.py deleted file mode 100644 index aa8c592aca6649bf2ca293de6cb58566c7791cf3..0000000000000000000000000000000000000000 --- a/spaces/achyuth1344/stable-diffusion-web-ui/header_patch.py +++ /dev/null @@ -1,29 +0,0 @@ - with gr.Box(visible=is_spaces): - if(is_spaces and is_shared_ui): - gr.HTML(f''' -
-

🚨 We have a new version ✨. Please try it and add a comment if you find any issues: https://huggingface.co/spaces/achyuth1344/stable-diffusion-webui

-
-

🚧 (WIP) Automatic1111 Stable Diffusion Web UI on 🤗 Hugging Face Spaces | Running model: Linaqruf/anything-v3.0

-

You can duplicate this Space to run it privately without a queue and load additional checkpoints.  Duplicate Space  Open In Colab  Become A Patreon  Buy a Coffee

-

📝 How to add a private model or embedding? 📺 Tutorial Video: https://www.youtube.com/channel/UCs_1ej3ysIROjWZVmNw-tyw 🐣 Please follow me for new updates: https://twitter.com/pixiejourney

-
- ''') - elif(is_spaces): - import torch - if(not torch.cuda.is_available()): - gr.HTML(f''' -
-

🚧 (WIP) Private Automatic1111 Stable Diffusion Web UI on 🤗 Hugging Face Spaces

-

This Space is currently running on CPU; this WebUI may not run on CPU 🥶. You can upgrade to a GPU in the Settings tab  Open In Colab  Become A Patreon  Buy a Coffee

-

📝 How to add a private model or embedding? 📺 Tutorial Video: https://www.youtube.com/channel/UCs_1ej3ysIROjWZVmNw-tyw 🐣 Please follow me for new updates: https://twitter.com/pixiejourney

-
- ''') - else: - gr.HTML(f''' -
-

🚧 (WIP) Private Automatic1111 Stable Diffusion Web UI on 🤗 Hugging Face Spaces

-

It is running on a GPU 🔥. Don't forget to remove the GPU attribution once you are done playing with it  Open In Colab  Become A Patreon  Buy a Coffee

-

📝 How to add a private model or embedding? 📺 Tutorial Video: https://www.youtube.com/channel/UCs_1ej3ysIROjWZVmNw-tyw 🐣 Please follow me for new updates: https://twitter.com/pixiejourney

-
- ''') diff --git a/spaces/adpro/dpt-depth15/app.py b/spaces/adpro/dpt-depth15/app.py deleted file mode 100644 index d53cd25e9a32ed9f2b8c670cb4e9b6f00b05ec82..0000000000000000000000000000000000000000 --- a/spaces/adpro/dpt-depth15/app.py +++ /dev/null @@ -1,45 +0,0 @@ -import gradio as gr -from transformers import DPTFeatureExtractor, DPTForDepthEstimation -import torch -import numpy as np -from PIL import Image - -#torch.hub.download_url_to_file('http://images.cocodataset.org/val2017/000000039769.jpg', 'cats.jpg') - -feature_extractor = DPTFeatureExtractor.from_pretrained("Intel/dpt-large") -model = DPTForDepthEstimation.from_pretrained("Intel/dpt-large") - -def process_image(image): - # prepare image for the model - encoding = feature_extractor(image, return_tensors="pt") - - # forward pass - with torch.no_grad(): - outputs = model(**encoding) - predicted_depth = outputs.predicted_depth - - # interpolate to original size - prediction = torch.nn.functional.interpolate( - predicted_depth.unsqueeze(1), - size=image.size[::-1], - mode="bicubic", - align_corners=False, - ).squeeze() - output = prediction.cpu().numpy() - formatted = (output * 255 / np.max(output)).astype('uint8') - img = Image.fromarray(formatted) - return img - - return result - -title = "Demo: zero-shot depth estimation with DPT" -description = "Demo for Intel's DPT, a Dense Prediction Transformer for state-of-the-art dense prediction tasks such as semantic segmentation and depth estimation." - - -iface = gr.Interface(fn=process_image, - inputs=gr.inputs.Image(type="pil"), - outputs=gr.outputs.Image(type="pil", label="predicted depth"), - title=title, - description=description, - enable_queue=True) -iface.launch(debug=True) \ No newline at end of file diff --git a/spaces/akdeniz27/zero-shot-text-classification-with-multilingual-t5/mT5Model.py b/spaces/akdeniz27/zero-shot-text-classification-with-multilingual-t5/mT5Model.py deleted file mode 100644 index 3059bf798365c1c0320462b43ba6652cd5e0f7c3..0000000000000000000000000000000000000000 --- a/spaces/akdeniz27/zero-shot-text-classification-with-multilingual-t5/mT5Model.py +++ /dev/null @@ -1,66 +0,0 @@ -from torch.nn.functional import softmax -from transformers import MT5ForConditionalGeneration, MT5Tokenizer -import streamlit as st - -def process_nli(premise: str, hypothesis: str): - """ process to required xnli format with task prefix """ - return "".join(['xnli: premise: ', premise, ' hypothesis: ', hypothesis]) - -@st.cache(allow_output_mutation=True) -def setModel(model_name): - tokenizer = MT5Tokenizer.from_pretrained(model_name) - model = MT5ForConditionalGeneration.from_pretrained(model_name) - model.eval() - return model, tokenizer - -def runModel(model_name, sequence_to_classify, candidate_labels, hypothesis_template): - ENTAILS_LABEL = "▁0" - NEUTRAL_LABEL = "▁1" - CONTRADICTS_LABEL = "▁2" - - model, tokenizer = setModel(model_name) - - label_inds = tokenizer.convert_tokens_to_ids([ENTAILS_LABEL, NEUTRAL_LABEL, CONTRADICTS_LABEL]) - - # construct sequence of premise, hypothesis pairs - pairs = [(sequence_to_classify, hypothesis_template.format(label)) for label in candidate_labels] - # format for mt5 xnli task - seqs = [process_nli(premise=premise, hypothesis=hypothesis) for premise, hypothesis in pairs] - - inputs = tokenizer.batch_encode_plus(seqs, return_tensors="pt", padding=True) - out = model.generate(**inputs, output_scores=True, return_dict_in_generate=True, num_beams=1) - - # sanity check that our sequences are expected length (1 + start token + end token = 
3) - for i, seq in enumerate(out.sequences): - assert len(seq) == 3 - - # get the scores for our only token of interest - # we'll now treat these like the output logits of a `*ForSequenceClassification` model - scores = out.scores[0] - - # scores has a size of the model's vocab. - # However, for this task we have a fixed set of labels - # sanity check that these labels are always the top 3 scoring - for i, sequence_scores in enumerate(scores): - top_scores = sequence_scores.argsort()[-3:] - assert set(top_scores.tolist()) == set(label_inds) - - # cut down scores to our task labels - scores = scores[:, label_inds] - - # new indices of entailment and contradiction in scores - entailment_ind = 0 - contradiction_ind = 2 - - # we can show, per item, the entailment vs contradiction probas - entail_vs_contra_scores = scores[:, [entailment_ind, contradiction_ind]] - entail_vs_contra_probas = softmax(entail_vs_contra_scores, dim=1) - - # or we can show probas similar to `ZeroShotClassificationPipeline` - # this gives a zero-shot classification style output across labels - entail_scores = scores[:, entailment_ind] - entail_probas = softmax(entail_scores, dim=0) - - dd = dict(zip(candidate_labels, entail_probas.tolist())) - ddd = dict(sorted(dd.items(), key = lambda x: x[1], reverse = True)) - return ddd \ No newline at end of file diff --git a/spaces/akhaliq/Detic/detic/evaluation/oideval.py b/spaces/akhaliq/Detic/detic/evaluation/oideval.py deleted file mode 100644 index e60125aec21f1f32f054cac51cdfb85368c53895..0000000000000000000000000000000000000000 --- a/spaces/akhaliq/Detic/detic/evaluation/oideval.py +++ /dev/null @@ -1,699 +0,0 @@ -# Part of the code is from https://github.com/tensorflow/models/blob/master/research/object_detection/metrics/oid_challenge_evaluation.py -# Copyright 2018 The TensorFlow Authors. All Rights Reserved. -# The original code is under Apache License, Version 2.0 (the "License"); -# Part of the code is from https://github.com/lvis-dataset/lvis-api/blob/master/lvis/eval.py -# Copyright (c) 2019, Agrim Gupta and Ross Girshick -# Modified by Xingyi Zhou -# This script re-implement OpenImages evaluation in detectron2 -# The code is from https://github.com/xingyizhou/UniDet/blob/master/projects/UniDet/unidet/evaluation/oideval.py -# The original code is under Apache-2.0 License -# Copyright (c) Facebook, Inc. and its affiliates. -import os -import datetime -import logging -import itertools -from collections import OrderedDict -from collections import defaultdict -import copy -import json -import numpy as np -import torch -from tabulate import tabulate - -from lvis.lvis import LVIS -from lvis.results import LVISResults - -import pycocotools.mask as mask_utils - -from fvcore.common.file_io import PathManager -import detectron2.utils.comm as comm -from detectron2.data import MetadataCatalog -from detectron2.evaluation.coco_evaluation import instances_to_coco_json -from detectron2.utils.logger import create_small_table -from detectron2.evaluation import DatasetEvaluator - -def compute_average_precision(precision, recall): - """Compute Average Precision according to the definition in VOCdevkit. - Precision is modified to ensure that it does not decrease as recall - decrease. - Args: - precision: A float [N, 1] numpy array of precisions - recall: A float [N, 1] numpy array of recalls - Raises: - ValueError: if the input is not of the correct format - Returns: - average_precison: The area under the precision recall curve. NaN if - precision and recall are None. 
- """ - if precision is None: - if recall is not None: - raise ValueError("If precision is None, recall must also be None") - return np.NAN - - if not isinstance(precision, np.ndarray) or not isinstance( - recall, np.ndarray): - raise ValueError("precision and recall must be numpy array") - if precision.dtype != np.float or recall.dtype != np.float: - raise ValueError("input must be float numpy array.") - if len(precision) != len(recall): - raise ValueError("precision and recall must be of the same size.") - if not precision.size: - return 0.0 - if np.amin(precision) < 0 or np.amax(precision) > 1: - raise ValueError("Precision must be in the range of [0, 1].") - if np.amin(recall) < 0 or np.amax(recall) > 1: - raise ValueError("recall must be in the range of [0, 1].") - if not all(recall[i] <= recall[i + 1] for i in range(len(recall) - 1)): - raise ValueError("recall must be a non-decreasing array") - - recall = np.concatenate([[0], recall, [1]]) - precision = np.concatenate([[0], precision, [0]]) - - for i in range(len(precision) - 2, -1, -1): - precision[i] = np.maximum(precision[i], precision[i + 1]) - indices = np.where(recall[1:] != recall[:-1])[0] + 1 - average_precision = np.sum( - (recall[indices] - recall[indices - 1]) * precision[indices]) - return average_precision - -class OIDEval: - def __init__( - self, lvis_gt, lvis_dt, iou_type="bbox", expand_pred_label=False, - oid_hierarchy_path='./datasets/oid/annotations/challenge-2019-label500-hierarchy.json'): - """Constructor for OIDEval. - Args: - lvis_gt (LVIS class instance, or str containing path of annotation file) - lvis_dt (LVISResult class instance, or str containing path of result file, - or list of dict) - iou_type (str): segm or bbox evaluation - """ - self.logger = logging.getLogger(__name__) - - if iou_type not in ["bbox", "segm"]: - raise ValueError("iou_type: {} is not supported.".format(iou_type)) - - if isinstance(lvis_gt, LVIS): - self.lvis_gt = lvis_gt - elif isinstance(lvis_gt, str): - self.lvis_gt = LVIS(lvis_gt) - else: - raise TypeError("Unsupported type {} of lvis_gt.".format(lvis_gt)) - - if isinstance(lvis_dt, LVISResults): - self.lvis_dt = lvis_dt - elif isinstance(lvis_dt, (str, list)): - # self.lvis_dt = LVISResults(self.lvis_gt, lvis_dt, max_dets=-1) - self.lvis_dt = LVISResults(self.lvis_gt, lvis_dt) - else: - raise TypeError("Unsupported type {} of lvis_dt.".format(lvis_dt)) - - if expand_pred_label: - oid_hierarchy = json.load(open(oid_hierarchy_path, 'r')) - cat_info = self.lvis_gt.dataset['categories'] - freebase2id = {x['freebase_id']: x['id'] for x in cat_info} - id2freebase = {x['id']: x['freebase_id'] for x in cat_info} - id2name = {x['id']: x['name'] for x in cat_info} - - fas = defaultdict(set) - def dfs(hierarchy, cur_id): - all_childs = set() - all_keyed_child = {} - if 'Subcategory' in hierarchy: - for x in hierarchy['Subcategory']: - childs = dfs(x, freebase2id[x['LabelName']]) - all_childs.update(childs) - if cur_id != -1: - for c in all_childs: - fas[c].add(cur_id) - all_childs.add(cur_id) - return all_childs - dfs(oid_hierarchy, -1) - - expanded_pred = [] - id_count = 0 - for d in self.lvis_dt.dataset['annotations']: - cur_id = d['category_id'] - ids = [cur_id] + [x for x in fas[cur_id]] - for cat_id in ids: - new_box = copy.deepcopy(d) - id_count = id_count + 1 - new_box['id'] = id_count - new_box['category_id'] = cat_id - expanded_pred.append(new_box) - - print('Expanding original {} preds to {} preds'.format( - len(self.lvis_dt.dataset['annotations']), - len(expanded_pred) - )) - 
self.lvis_dt.dataset['annotations'] = expanded_pred - self.lvis_dt._create_index() - - # per-image per-category evaluation results - self.eval_imgs = defaultdict(list) - self.eval = {} # accumulated evaluation results - self._gts = defaultdict(list) # gt for evaluation - self._dts = defaultdict(list) # dt for evaluation - self.params = Params(iou_type=iou_type) # parameters - self.results = OrderedDict() - self.ious = {} # ious between all gts and dts - - self.params.img_ids = sorted(self.lvis_gt.get_img_ids()) - self.params.cat_ids = sorted(self.lvis_gt.get_cat_ids()) - - def _to_mask(self, anns, lvis): - for ann in anns: - rle = lvis.ann_to_rle(ann) - ann["segmentation"] = rle - - def _prepare(self): - """Prepare self._gts and self._dts for evaluation based on params.""" - - cat_ids = self.params.cat_ids if self.params.cat_ids else None - - gts = self.lvis_gt.load_anns( - self.lvis_gt.get_ann_ids(img_ids=self.params.img_ids, cat_ids=cat_ids) - ) - dts = self.lvis_dt.load_anns( - self.lvis_dt.get_ann_ids(img_ids=self.params.img_ids, cat_ids=cat_ids) - ) - # convert ground truth to mask if iou_type == 'segm' - if self.params.iou_type == "segm": - self._to_mask(gts, self.lvis_gt) - self._to_mask(dts, self.lvis_dt) - - for gt in gts: - self._gts[gt["image_id"], gt["category_id"]].append(gt) - - # For federated dataset evaluation we will filter out all dt for an - # image which belong to categories not present in gt and not present in - # the negative list for an image. In other words detector is not penalized - # for categories about which we don't have gt information about their - # presence or absence in an image. - img_data = self.lvis_gt.load_imgs(ids=self.params.img_ids) - # per image map of categories not present in image - img_nl = {d["id"]: d["neg_category_ids"] for d in img_data} - # per image list of categories present in image - img_pl = {d["id"]: d["pos_category_ids"] for d in img_data} - # img_pl = defaultdict(set) - for ann in gts: - # img_pl[ann["image_id"]].add(ann["category_id"]) - assert ann["category_id"] in img_pl[ann["image_id"]] - # print('check pos ids OK.') - - for dt in dts: - img_id, cat_id = dt["image_id"], dt["category_id"] - if cat_id not in img_nl[img_id] and cat_id not in img_pl[img_id]: - continue - self._dts[img_id, cat_id].append(dt) - - def evaluate(self): - """ - Run per image evaluation on given images and store results - (a list of dict) in self.eval_imgs. - """ - self.logger.info("Running per image evaluation.") - self.logger.info("Evaluate annotation type *{}*".format(self.params.iou_type)) - - self.params.img_ids = list(np.unique(self.params.img_ids)) - - if self.params.use_cats: - cat_ids = self.params.cat_ids - else: - cat_ids = [-1] - - self._prepare() - - self.ious = { - (img_id, cat_id): self.compute_iou(img_id, cat_id) - for img_id in self.params.img_ids - for cat_id in cat_ids - } - - # loop through images, area range, max detection number - print('Evaluating ...') - self.eval_imgs = [ - self.evaluate_img_google(img_id, cat_id, area_rng) - for cat_id in cat_ids - for area_rng in self.params.area_rng - for img_id in self.params.img_ids - ] - - def _get_gt_dt(self, img_id, cat_id): - """Create gt, dt which are list of anns/dets. If use_cats is true - only anns/dets corresponding to tuple (img_id, cat_id) will be - used. Else, all anns/dets in image are used and cat_id is not used. 
- """ - if self.params.use_cats: - gt = self._gts[img_id, cat_id] - dt = self._dts[img_id, cat_id] - else: - gt = [ - _ann - for _cat_id in self.params.cat_ids - for _ann in self._gts[img_id, cat_id] - ] - dt = [ - _ann - for _cat_id in self.params.cat_ids - for _ann in self._dts[img_id, cat_id] - ] - return gt, dt - - def compute_iou(self, img_id, cat_id): - gt, dt = self._get_gt_dt(img_id, cat_id) - - if len(gt) == 0 and len(dt) == 0: - return [] - - # Sort detections in decreasing order of score. - idx = np.argsort([-d["score"] for d in dt], kind="mergesort") - dt = [dt[i] for i in idx] - - # iscrowd = [int(False)] * len(gt) - iscrowd = [int('iscrowd' in g and g['iscrowd'] > 0) for g in gt] - - if self.params.iou_type == "segm": - ann_type = "segmentation" - elif self.params.iou_type == "bbox": - ann_type = "bbox" - else: - raise ValueError("Unknown iou_type for iou computation.") - gt = [g[ann_type] for g in gt] - dt = [d[ann_type] for d in dt] - - # compute iou between each dt and gt region - # will return array of shape len(dt), len(gt) - ious = mask_utils.iou(dt, gt, iscrowd) - return ious - - def evaluate_img_google(self, img_id, cat_id, area_rng): - gt, dt = self._get_gt_dt(img_id, cat_id) - if len(gt) == 0 and len(dt) == 0: - return None - - if len(dt) == 0: - return { - "image_id": img_id, - "category_id": cat_id, - "area_rng": area_rng, - "dt_ids": [], - "dt_matches": np.array([], dtype=np.int32).reshape(1, -1), - "dt_scores": [], - "dt_ignore": np.array([], dtype=np.int32).reshape(1, -1), - 'num_gt': len(gt) - } - - no_crowd_inds = [i for i, g in enumerate(gt) \ - if ('iscrowd' not in g) or g['iscrowd'] == 0] - crowd_inds = [i for i, g in enumerate(gt) \ - if 'iscrowd' in g and g['iscrowd'] == 1] - dt_idx = np.argsort([-d["score"] for d in dt], kind="mergesort") - - if len(self.ious[img_id, cat_id]) > 0: - ious = self.ious[img_id, cat_id] - iou = ious[:, no_crowd_inds] - iou = iou[dt_idx] - ioa = ious[:, crowd_inds] - ioa = ioa[dt_idx] - else: - iou = np.zeros((len(dt_idx), 0)) - ioa = np.zeros((len(dt_idx), 0)) - scores = np.array([dt[i]['score'] for i in dt_idx]) - - num_detected_boxes = len(dt) - tp_fp_labels = np.zeros(num_detected_boxes, dtype=bool) - is_matched_to_group_of = np.zeros(num_detected_boxes, dtype=bool) - - def compute_match_iou(iou): - max_overlap_gt_ids = np.argmax(iou, axis=1) - is_gt_detected = np.zeros(iou.shape[1], dtype=bool) - for i in range(num_detected_boxes): - gt_id = max_overlap_gt_ids[i] - is_evaluatable = (not tp_fp_labels[i] and - iou[i, gt_id] >= 0.5 and - not is_matched_to_group_of[i]) - if is_evaluatable: - if not is_gt_detected[gt_id]: - tp_fp_labels[i] = True - is_gt_detected[gt_id] = True - - def compute_match_ioa(ioa): - scores_group_of = np.zeros(ioa.shape[1], dtype=float) - tp_fp_labels_group_of = np.ones( - ioa.shape[1], dtype=float) - max_overlap_group_of_gt_ids = np.argmax(ioa, axis=1) - for i in range(num_detected_boxes): - gt_id = max_overlap_group_of_gt_ids[i] - is_evaluatable = (not tp_fp_labels[i] and - ioa[i, gt_id] >= 0.5 and - not is_matched_to_group_of[i]) - if is_evaluatable: - is_matched_to_group_of[i] = True - scores_group_of[gt_id] = max(scores_group_of[gt_id], scores[i]) - selector = np.where((scores_group_of > 0) & (tp_fp_labels_group_of > 0)) - scores_group_of = scores_group_of[selector] - tp_fp_labels_group_of = tp_fp_labels_group_of[selector] - - return scores_group_of, tp_fp_labels_group_of - - if iou.shape[1] > 0: - compute_match_iou(iou) - - scores_box_group_of = np.ndarray([0], dtype=float) - 
tp_fp_labels_box_group_of = np.ndarray([0], dtype=float) - - if ioa.shape[1] > 0: - scores_box_group_of, tp_fp_labels_box_group_of = compute_match_ioa(ioa) - - valid_entries = (~is_matched_to_group_of) - - scores = np.concatenate( - (scores[valid_entries], scores_box_group_of)) - tp_fps = np.concatenate( - (tp_fp_labels[valid_entries].astype(float), - tp_fp_labels_box_group_of)) - - return { - "image_id": img_id, - "category_id": cat_id, - "area_rng": area_rng, - "dt_matches": np.array([1 if x > 0 else 0 for x in tp_fps], dtype=np.int32).reshape(1, -1), - "dt_scores": [x for x in scores], - "dt_ignore": np.array([0 for x in scores], dtype=np.int32).reshape(1, -1), - 'num_gt': len(gt) - } - - def accumulate(self): - """Accumulate per image evaluation results and store the result in - self.eval. - """ - self.logger.info("Accumulating evaluation results.") - - if not self.eval_imgs: - self.logger.warn("Please run evaluate first.") - - if self.params.use_cats: - cat_ids = self.params.cat_ids - else: - cat_ids = [-1] - - num_thrs = 1 - num_recalls = 1 - - num_cats = len(cat_ids) - num_area_rngs = 1 - num_imgs = len(self.params.img_ids) - - # -1 for absent categories - precision = -np.ones( - (num_thrs, num_recalls, num_cats, num_area_rngs) - ) - recall = -np.ones((num_thrs, num_cats, num_area_rngs)) - - # Initialize dt_pointers - dt_pointers = {} - for cat_idx in range(num_cats): - dt_pointers[cat_idx] = {} - for area_idx in range(num_area_rngs): - dt_pointers[cat_idx][area_idx] = {} - - # Per category evaluation - for cat_idx in range(num_cats): - Nk = cat_idx * num_area_rngs * num_imgs - for area_idx in range(num_area_rngs): - Na = area_idx * num_imgs - E = [ - self.eval_imgs[Nk + Na + img_idx] - for img_idx in range(num_imgs) - ] - # Remove elements which are None - E = [e for e in E if not e is None] - if len(E) == 0: - continue - - dt_scores = np.concatenate([e["dt_scores"] for e in E], axis=0) - dt_idx = np.argsort(-dt_scores, kind="mergesort") - dt_scores = dt_scores[dt_idx] - dt_m = np.concatenate([e["dt_matches"] for e in E], axis=1)[:, dt_idx] - dt_ig = np.concatenate([e["dt_ignore"] for e in E], axis=1)[:, dt_idx] - - num_gt = sum([e['num_gt'] for e in E]) - if num_gt == 0: - continue - - tps = np.logical_and(dt_m, np.logical_not(dt_ig)) - fps = np.logical_and(np.logical_not(dt_m), np.logical_not(dt_ig)) - tp_sum = np.cumsum(tps, axis=1).astype(dtype=np.float) - fp_sum = np.cumsum(fps, axis=1).astype(dtype=np.float) - - dt_pointers[cat_idx][area_idx] = { - "tps": tps, - "fps": fps, - } - - for iou_thr_idx, (tp, fp) in enumerate(zip(tp_sum, fp_sum)): - tp = np.array(tp) - fp = np.array(fp) - num_tp = len(tp) - rc = tp / num_gt - - if num_tp: - recall[iou_thr_idx, cat_idx, area_idx] = rc[ - -1 - ] - else: - recall[iou_thr_idx, cat_idx, area_idx] = 0 - - # np.spacing(1) ~= eps - pr = tp / (fp + tp + np.spacing(1)) - pr = pr.tolist() - - for i in range(num_tp - 1, 0, -1): - if pr[i] > pr[i - 1]: - pr[i - 1] = pr[i] - - mAP = compute_average_precision( - np.array(pr, np.float).reshape(-1), - np.array(rc, np.float).reshape(-1)) - precision[iou_thr_idx, :, cat_idx, area_idx] = mAP - - self.eval = { - "params": self.params, - "counts": [num_thrs, num_recalls, num_cats, num_area_rngs], - "date": datetime.datetime.now().strftime("%Y-%m-%d %H:%M:%S"), - "precision": precision, - "recall": recall, - "dt_pointers": dt_pointers, - } - - def _summarize(self, summary_type): - s = self.eval["precision"] - if len(s[s > -1]) == 0: - mean_s = -1 - else: - mean_s = np.mean(s[s > -1]) - # 
print(s.reshape(1, 1, -1, 1)) - return mean_s - - def summarize(self): - """Compute and display summary metrics for evaluation results.""" - if not self.eval: - raise RuntimeError("Please run accumulate() first.") - - max_dets = self.params.max_dets - self.results["AP50"] = self._summarize('ap') - - def run(self): - """Wrapper function which calculates the results.""" - self.evaluate() - self.accumulate() - self.summarize() - - def print_results(self): - template = " {:<18} {} @[ IoU={:<9} | area={:>6s} | maxDets={:>3d} catIds={:>3s}] = {:0.3f}" - - for key, value in self.results.items(): - max_dets = self.params.max_dets - if "AP" in key: - title = "Average Precision" - _type = "(AP)" - else: - title = "Average Recall" - _type = "(AR)" - - if len(key) > 2 and key[2].isdigit(): - iou_thr = (float(key[2:]) / 100) - iou = "{:0.2f}".format(iou_thr) - else: - iou = "{:0.2f}:{:0.2f}".format( - self.params.iou_thrs[0], self.params.iou_thrs[-1] - ) - - cat_group_name = "all" - area_rng = "all" - - print(template.format(title, _type, iou, area_rng, max_dets, cat_group_name, value)) - - def get_results(self): - if not self.results: - self.logger.warn("results is empty. Call run().") - return self.results - - -class Params: - def __init__(self, iou_type): - self.img_ids = [] - self.cat_ids = [] - # np.arange causes trouble. the data point on arange is slightly - # larger than the true value - self.iou_thrs = np.linspace( - 0.5, 0.95, int(np.round((0.95 - 0.5) / 0.05)) + 1, endpoint=True - ) - self.google_style = True - # print('Using google style PR curve') - self.iou_thrs = self.iou_thrs[:1] - self.max_dets = 1000 - - self.area_rng = [ - [0 ** 2, 1e5 ** 2], - ] - self.area_rng_lbl = ["all"] - self.use_cats = 1 - self.iou_type = iou_type - - -class OIDEvaluator(DatasetEvaluator): - def __init__(self, dataset_name, cfg, distributed, output_dir=None): - self._distributed = distributed - self._output_dir = output_dir - - self._cpu_device = torch.device("cpu") - self._logger = logging.getLogger(__name__) - - self._metadata = MetadataCatalog.get(dataset_name) - json_file = PathManager.get_local_path(self._metadata.json_file) - self._oid_api = LVIS(json_file) - # Test set json files do not contain annotations (evaluation must be - # performed using the LVIS evaluation server). 
- self._do_evaluation = len(self._oid_api.get_ann_ids()) > 0 - self._mask_on = cfg.MODEL.MASK_ON - - def reset(self): - self._predictions = [] - self._oid_results = [] - - def process(self, inputs, outputs): - for input, output in zip(inputs, outputs): - prediction = {"image_id": input["image_id"]} - instances = output["instances"].to(self._cpu_device) - prediction["instances"] = instances_to_coco_json( - instances, input["image_id"]) - self._predictions.append(prediction) - - def evaluate(self): - if self._distributed: - comm.synchronize() - self._predictions = comm.gather(self._predictions, dst=0) - self._predictions = list(itertools.chain(*self._predictions)) - - if not comm.is_main_process(): - return - - if len(self._predictions) == 0: - self._logger.warning("[LVISEvaluator] Did not receive valid predictions.") - return {} - - self._logger.info("Preparing results in the OID format ...") - self._oid_results = list( - itertools.chain(*[x["instances"] for x in self._predictions])) - - # unmap the category ids for LVIS (from 0-indexed to 1-indexed) - for result in self._oid_results: - result["category_id"] += 1 - - PathManager.mkdirs(self._output_dir) - file_path = os.path.join( - self._output_dir, "oid_instances_results.json") - self._logger.info("Saving results to {}".format(file_path)) - with PathManager.open(file_path, "w") as f: - f.write(json.dumps(self._oid_results)) - f.flush() - - if not self._do_evaluation: - self._logger.info("Annotations are not available for evaluation.") - return - - self._logger.info("Evaluating predictions ...") - self._results = OrderedDict() - res, mAP = _evaluate_predictions_on_oid( - self._oid_api, - file_path, - eval_seg=self._mask_on, - class_names=self._metadata.get("thing_classes"), - ) - self._results['bbox'] = res - mAP_out_path = os.path.join(self._output_dir, "oid_mAP.npy") - self._logger.info('Saving mAP to' + mAP_out_path) - np.save(mAP_out_path, mAP) - return copy.deepcopy(self._results) - -def _evaluate_predictions_on_oid( - oid_gt, oid_results_path, eval_seg=False, - class_names=None): - logger = logging.getLogger(__name__) - metrics = ["AP50", "AP50_expand"] - - results = {} - oid_eval = OIDEval(oid_gt, oid_results_path, 'bbox', expand_pred_label=False) - oid_eval.run() - oid_eval.print_results() - results["AP50"] = oid_eval.get_results()["AP50"] - - if eval_seg: - oid_eval = OIDEval(oid_gt, oid_results_path, 'segm', expand_pred_label=False) - oid_eval.run() - oid_eval.print_results() - results["AP50_segm"] = oid_eval.get_results()["AP50"] - else: - oid_eval = OIDEval(oid_gt, oid_results_path, 'bbox', expand_pred_label=True) - oid_eval.run() - oid_eval.print_results() - results["AP50_expand"] = oid_eval.get_results()["AP50"] - - mAP = np.zeros(len(class_names)) - 1 - precisions = oid_eval.eval['precision'] - assert len(class_names) == precisions.shape[2] - results_per_category = [] - id2apiid = sorted(oid_gt.get_cat_ids()) - inst_aware_ap, inst_count = 0, 0 - for idx, name in enumerate(class_names): - precision = precisions[:, :, idx, 0] - precision = precision[precision > -1] - ap = np.mean(precision) if precision.size else float("nan") - inst_num = len(oid_gt.get_ann_ids(cat_ids=[id2apiid[idx]])) - if inst_num > 0: - results_per_category.append(("{} {}".format( - name.replace(' ', '_'), - inst_num if inst_num < 1000 else '{:.1f}k'.format(inst_num / 1000)), - float(ap * 100))) - inst_aware_ap += inst_num * ap - inst_count += inst_num - mAP[idx] = ap - # logger.info("{} {} {:.2f}".format(name, inst_num, ap * 100)) - inst_aware_ap = 
inst_aware_ap * 100 / inst_count - N_COLS = min(6, len(results_per_category) * 2) - results_flatten = list(itertools.chain(*results_per_category)) - results_2d = itertools.zip_longest(*[results_flatten[i::N_COLS] for i in range(N_COLS)]) - table = tabulate( - results_2d, - tablefmt="pipe", - floatfmt=".3f", - headers=["category", "AP"] * (N_COLS // 2), - numalign="left", - ) - logger.info("Per-category {} AP: \n".format('bbox') + table) - logger.info("Instance-aware {} AP: {:.4f}".format('bbox', inst_aware_ap)) - - logger.info("Evaluation results for bbox: \n" + \ - create_small_table(results)) - return results, mAP \ No newline at end of file diff --git a/spaces/akhaliq/JoJoGAN/e4e/models/encoders/__init__.py b/spaces/akhaliq/JoJoGAN/e4e/models/encoders/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/akhaliq/lama/saicinpainting/training/losses/adversarial.py b/spaces/akhaliq/lama/saicinpainting/training/losses/adversarial.py deleted file mode 100644 index d6db2967ce5074d94ed3b4c51fc743ff2f7831b1..0000000000000000000000000000000000000000 --- a/spaces/akhaliq/lama/saicinpainting/training/losses/adversarial.py +++ /dev/null @@ -1,177 +0,0 @@ -from typing import Tuple, Dict, Optional - -import torch -import torch.nn as nn -import torch.nn.functional as F - - -class BaseAdversarialLoss: - def pre_generator_step(self, real_batch: torch.Tensor, fake_batch: torch.Tensor, - generator: nn.Module, discriminator: nn.Module): - """ - Prepare for generator step - :param real_batch: Tensor, a batch of real samples - :param fake_batch: Tensor, a batch of samples produced by generator - :param generator: - :param discriminator: - :return: None - """ - - def pre_discriminator_step(self, real_batch: torch.Tensor, fake_batch: torch.Tensor, - generator: nn.Module, discriminator: nn.Module): - """ - Prepare for discriminator step - :param real_batch: Tensor, a batch of real samples - :param fake_batch: Tensor, a batch of samples produced by generator - :param generator: - :param discriminator: - :return: None - """ - - def generator_loss(self, real_batch: torch.Tensor, fake_batch: torch.Tensor, - discr_real_pred: torch.Tensor, discr_fake_pred: torch.Tensor, - mask: Optional[torch.Tensor] = None) \ - -> Tuple[torch.Tensor, Dict[str, torch.Tensor]]: - """ - Calculate generator loss - :param real_batch: Tensor, a batch of real samples - :param fake_batch: Tensor, a batch of samples produced by generator - :param discr_real_pred: Tensor, discriminator output for real_batch - :param discr_fake_pred: Tensor, discriminator output for fake_batch - :param mask: Tensor, actual mask, which was at input of generator when making fake_batch - :return: total generator loss along with some values that might be interesting to log - """ - raise NotImplemented() - - def discriminator_loss(self, real_batch: torch.Tensor, fake_batch: torch.Tensor, - discr_real_pred: torch.Tensor, discr_fake_pred: torch.Tensor, - mask: Optional[torch.Tensor] = None) \ - -> Tuple[torch.Tensor, Dict[str, torch.Tensor]]: - """ - Calculate discriminator loss and call .backward() on it - :param real_batch: Tensor, a batch of real samples - :param fake_batch: Tensor, a batch of samples produced by generator - :param discr_real_pred: Tensor, discriminator output for real_batch - :param discr_fake_pred: Tensor, discriminator output for fake_batch - :param mask: Tensor, actual mask, which was at input of generator when making fake_batch - :return: total 
discriminator loss along with some values that might be interesting to log - """ - raise NotImplemented() - - def interpolate_mask(self, mask, shape): - assert mask is not None - assert self.allow_scale_mask or shape == mask.shape[-2:] - if shape != mask.shape[-2:] and self.allow_scale_mask: - if self.mask_scale_mode == 'maxpool': - mask = F.adaptive_max_pool2d(mask, shape) - else: - mask = F.interpolate(mask, size=shape, mode=self.mask_scale_mode) - return mask - -def make_r1_gp(discr_real_pred, real_batch): - if torch.is_grad_enabled(): - grad_real = torch.autograd.grad(outputs=discr_real_pred.sum(), inputs=real_batch, create_graph=True)[0] - grad_penalty = (grad_real.view(grad_real.shape[0], -1).norm(2, dim=1) ** 2).mean() - else: - grad_penalty = 0 - real_batch.requires_grad = False - - return grad_penalty - -class NonSaturatingWithR1(BaseAdversarialLoss): - def __init__(self, gp_coef=5, weight=1, mask_as_fake_target=False, allow_scale_mask=False, - mask_scale_mode='nearest', extra_mask_weight_for_gen=0, - use_unmasked_for_gen=True, use_unmasked_for_discr=True): - self.gp_coef = gp_coef - self.weight = weight - # use for discr => use for gen; - # otherwise we teach only the discr to pay attention to very small difference - assert use_unmasked_for_gen or (not use_unmasked_for_discr) - # mask as target => use unmasked for discr: - # if we don't care about unmasked regions at all - # then it doesn't matter if the value of mask_as_fake_target is true or false - assert use_unmasked_for_discr or (not mask_as_fake_target) - self.use_unmasked_for_gen = use_unmasked_for_gen - self.use_unmasked_for_discr = use_unmasked_for_discr - self.mask_as_fake_target = mask_as_fake_target - self.allow_scale_mask = allow_scale_mask - self.mask_scale_mode = mask_scale_mode - self.extra_mask_weight_for_gen = extra_mask_weight_for_gen - - def generator_loss(self, real_batch: torch.Tensor, fake_batch: torch.Tensor, - discr_real_pred: torch.Tensor, discr_fake_pred: torch.Tensor, - mask=None) \ - -> Tuple[torch.Tensor, Dict[str, torch.Tensor]]: - fake_loss = F.softplus(-discr_fake_pred) - if (self.mask_as_fake_target and self.extra_mask_weight_for_gen > 0) or \ - not self.use_unmasked_for_gen: # == if masked region should be treated differently - mask = self.interpolate_mask(mask, discr_fake_pred.shape[-2:]) - if not self.use_unmasked_for_gen: - fake_loss = fake_loss * mask - else: - pixel_weights = 1 + mask * self.extra_mask_weight_for_gen - fake_loss = fake_loss * pixel_weights - - return fake_loss.mean() * self.weight, dict() - - def pre_discriminator_step(self, real_batch: torch.Tensor, fake_batch: torch.Tensor, - generator: nn.Module, discriminator: nn.Module): - real_batch.requires_grad = True - - def discriminator_loss(self, real_batch: torch.Tensor, fake_batch: torch.Tensor, - discr_real_pred: torch.Tensor, discr_fake_pred: torch.Tensor, - mask=None) \ - -> Tuple[torch.Tensor, Dict[str, torch.Tensor]]: - - real_loss = F.softplus(-discr_real_pred) - grad_penalty = make_r1_gp(discr_real_pred, real_batch) * self.gp_coef - fake_loss = F.softplus(discr_fake_pred) - - if not self.use_unmasked_for_discr or self.mask_as_fake_target: - # == if masked region should be treated differently - mask = self.interpolate_mask(mask, discr_fake_pred.shape[-2:]) - # use_unmasked_for_discr=False only makes sense for fakes; - # for reals there is no difference beetween two regions - fake_loss = fake_loss * mask - if self.mask_as_fake_target: - fake_loss = fake_loss + (1 - mask) * F.softplus(-discr_fake_pred) - - sum_discr_loss = 
real_loss + grad_penalty + fake_loss - metrics = dict(discr_real_out=discr_real_pred.mean(), - discr_fake_out=discr_fake_pred.mean(), - discr_real_gp=grad_penalty) - return sum_discr_loss.mean(), metrics - -class BCELoss(BaseAdversarialLoss): - def __init__(self, weight): - self.weight = weight - self.bce_loss = nn.BCEWithLogitsLoss() - - def generator_loss(self, discr_fake_pred: torch.Tensor) -> Tuple[torch.Tensor, Dict[str, torch.Tensor]]: - real_mask_gt = torch.zeros(discr_fake_pred.shape).to(discr_fake_pred.device) - fake_loss = self.bce_loss(discr_fake_pred, real_mask_gt) * self.weight - return fake_loss, dict() - - def pre_discriminator_step(self, real_batch: torch.Tensor, fake_batch: torch.Tensor, - generator: nn.Module, discriminator: nn.Module): - real_batch.requires_grad = True - - def discriminator_loss(self, - mask: torch.Tensor, - discr_real_pred: torch.Tensor, - discr_fake_pred: torch.Tensor) -> Tuple[torch.Tensor, Dict[str, torch.Tensor]]: - - real_mask_gt = torch.zeros(discr_real_pred.shape).to(discr_real_pred.device) - sum_discr_loss = (self.bce_loss(discr_real_pred, real_mask_gt) + self.bce_loss(discr_fake_pred, mask)) / 2 - metrics = dict(discr_real_out=discr_real_pred.mean(), - discr_fake_out=discr_fake_pred.mean(), - discr_real_gp=0) - return sum_discr_loss, metrics - - -def make_discrim_loss(kind, **kwargs): - if kind == 'r1': - return NonSaturatingWithR1(**kwargs) - elif kind == 'bce': - return BCELoss(**kwargs) - raise ValueError(f'Unknown adversarial loss kind {kind}') diff --git a/spaces/akhaliq/midi-ddsp/app.py b/spaces/akhaliq/midi-ddsp/app.py deleted file mode 100644 index ca319782d61137a811cb65af327e9e7aaf261cad..0000000000000000000000000000000000000000 --- a/spaces/akhaliq/midi-ddsp/app.py +++ /dev/null @@ -1,27 +0,0 @@ -import gradio as gr -import os -from pathlib import Path - -os.system("midi_ddsp_download_model_weights") - -def inference(audio): - os.system("midi_ddsp_synthesize --midi_path "+audio.name) - return Path(audio.name).stem+"/0_violin.wav" - -title = "Midi-DDSP" -description = "Gradio demo for MIDI-DDSP: Detailed Control of Musical Performance via Hierarchical Modeling. To use it, simply upload your midi file, or click one of the examples to load them. Read more at the links below." - -article = "
MIDI-DDSP: Detailed Control of Musical Performance via Hierarchical Modeling | Github Repo
" - -examples=[['input.mid']] - -gr.Interface( - inference, - gr.inputs.File(type="file", label="Input"), - [gr.outputs.Audio(type="file", label="Output")], - title=title, - description=description, - article=article, - examples=examples, - enable_queue=True - ).launch(debug=True) \ No newline at end of file diff --git a/spaces/alexray/btc_predictor/venv/lib/python3.10/site-packages/pip/_vendor/pygments/formatters/latex.py b/spaces/alexray/btc_predictor/venv/lib/python3.10/site-packages/pip/_vendor/pygments/formatters/latex.py deleted file mode 100644 index 60e98921f9baefa47bd51c84ad024b2edab6576a..0000000000000000000000000000000000000000 --- a/spaces/alexray/btc_predictor/venv/lib/python3.10/site-packages/pip/_vendor/pygments/formatters/latex.py +++ /dev/null @@ -1,511 +0,0 @@ -""" - pygments.formatters.latex - ~~~~~~~~~~~~~~~~~~~~~~~~~ - - Formatter for LaTeX fancyvrb output. - - :copyright: Copyright 2006-2021 by the Pygments team, see AUTHORS. - :license: BSD, see LICENSE for details. -""" - -from io import StringIO - -from pip._vendor.pygments.formatter import Formatter -from pip._vendor.pygments.lexer import Lexer, do_insertions -from pip._vendor.pygments.token import Token, STANDARD_TYPES -from pip._vendor.pygments.util import get_bool_opt, get_int_opt - - -__all__ = ['LatexFormatter'] - - -def escape_tex(text, commandprefix): - return text.replace('\\', '\x00'). \ - replace('{', '\x01'). \ - replace('}', '\x02'). \ - replace('\x00', r'\%sZbs{}' % commandprefix). \ - replace('\x01', r'\%sZob{}' % commandprefix). \ - replace('\x02', r'\%sZcb{}' % commandprefix). \ - replace('^', r'\%sZca{}' % commandprefix). \ - replace('_', r'\%sZus{}' % commandprefix). \ - replace('&', r'\%sZam{}' % commandprefix). \ - replace('<', r'\%sZlt{}' % commandprefix). \ - replace('>', r'\%sZgt{}' % commandprefix). \ - replace('#', r'\%sZsh{}' % commandprefix). \ - replace('%', r'\%sZpc{}' % commandprefix). \ - replace('$', r'\%sZdl{}' % commandprefix). \ - replace('-', r'\%sZhy{}' % commandprefix). \ - replace("'", r'\%sZsq{}' % commandprefix). \ - replace('"', r'\%sZdq{}' % commandprefix). \ - replace('~', r'\%sZti{}' % commandprefix) - - -DOC_TEMPLATE = r''' -\documentclass{%(docclass)s} -\usepackage{fancyvrb} -\usepackage{color} -\usepackage[%(encoding)s]{inputenc} -%(preamble)s - -%(styledefs)s - -\begin{document} - -\section*{%(title)s} - -%(code)s -\end{document} -''' - -## Small explanation of the mess below :) -# -# The previous version of the LaTeX formatter just assigned a command to -# each token type defined in the current style. That obviously is -# problematic if the highlighted code is produced for a different style -# than the style commands themselves. -# -# This version works much like the HTML formatter which assigns multiple -# CSS classes to each tag, from the most specific to the least -# specific token type, thus falling back to the parent token type if one -# is not defined. Here, the classes are there too and use the same short -# forms given in token.STANDARD_TYPES. -# -# Highlighted code now only uses one custom command, which by default is -# \PY and selectable by the commandprefix option (and in addition the -# escapes \PYZat, \PYZlb and \PYZrb which haven't been renamed for -# backwards compatibility purposes). -# -# \PY has two arguments: the classes, separated by +, and the text to -# render in that style. The classes are resolved into the respective -# style commands by magic, which serves to ignore unknown classes. -# -# The magic macros are: -# * \PY@it, \PY@bf, etc. 
are unconditionally wrapped around the text -# to render in \PY@do. Their definition determines the style. -# * \PY@reset resets \PY@it etc. to do nothing. -# * \PY@toks parses the list of classes, using magic inspired by the -# keyval package (but modified to use plusses instead of commas -# because fancyvrb redefines commas inside its environments). -# * \PY@tok processes one class, calling the \PY@tok@classname command -# if it exists. -# * \PY@tok@classname sets the \PY@it etc. to reflect the chosen style -# for its class. -# * \PY resets the style, parses the classnames and then calls \PY@do. -# -# Tip: to read this code, print it out in substituted form using e.g. -# >>> print STYLE_TEMPLATE % {'cp': 'PY'} - -STYLE_TEMPLATE = r''' -\makeatletter -\def\%(cp)s@reset{\let\%(cp)s@it=\relax \let\%(cp)s@bf=\relax%% - \let\%(cp)s@ul=\relax \let\%(cp)s@tc=\relax%% - \let\%(cp)s@bc=\relax \let\%(cp)s@ff=\relax} -\def\%(cp)s@tok#1{\csname %(cp)s@tok@#1\endcsname} -\def\%(cp)s@toks#1+{\ifx\relax#1\empty\else%% - \%(cp)s@tok{#1}\expandafter\%(cp)s@toks\fi} -\def\%(cp)s@do#1{\%(cp)s@bc{\%(cp)s@tc{\%(cp)s@ul{%% - \%(cp)s@it{\%(cp)s@bf{\%(cp)s@ff{#1}}}}}}} -\def\%(cp)s#1#2{\%(cp)s@reset\%(cp)s@toks#1+\relax+\%(cp)s@do{#2}} - -%(styles)s - -\def\%(cp)sZbs{\char`\\} -\def\%(cp)sZus{\char`\_} -\def\%(cp)sZob{\char`\{} -\def\%(cp)sZcb{\char`\}} -\def\%(cp)sZca{\char`\^} -\def\%(cp)sZam{\char`\&} -\def\%(cp)sZlt{\char`\<} -\def\%(cp)sZgt{\char`\>} -\def\%(cp)sZsh{\char`\#} -\def\%(cp)sZpc{\char`\%%} -\def\%(cp)sZdl{\char`\$} -\def\%(cp)sZhy{\char`\-} -\def\%(cp)sZsq{\char`\'} -\def\%(cp)sZdq{\char`\"} -\def\%(cp)sZti{\char`\~} -%% for compatibility with earlier versions -\def\%(cp)sZat{@} -\def\%(cp)sZlb{[} -\def\%(cp)sZrb{]} -\makeatother -''' - - -def _get_ttype_name(ttype): - fname = STANDARD_TYPES.get(ttype) - if fname: - return fname - aname = '' - while fname is None: - aname = ttype[-1] + aname - ttype = ttype.parent - fname = STANDARD_TYPES.get(ttype) - return fname + aname - - -class LatexFormatter(Formatter): - r""" - Format tokens as LaTeX code. This needs the `fancyvrb` and `color` - standard packages. - - Without the `full` option, code is formatted as one ``Verbatim`` - environment, like this: - - .. sourcecode:: latex - - \begin{Verbatim}[commandchars=\\\{\}] - \PY{k}{def }\PY{n+nf}{foo}(\PY{n}{bar}): - \PY{k}{pass} - \end{Verbatim} - - The special command used here (``\PY``) and all the other macros it needs - are output by the `get_style_defs` method. - - With the `full` option, a complete LaTeX document is output, including - the command definitions in the preamble. - - The `get_style_defs()` method of a `LatexFormatter` returns a string - containing ``\def`` commands defining the macros needed inside the - ``Verbatim`` environments. - - Additional options accepted: - - `style` - The style to use, can be a string or a Style subclass (default: - ``'default'``). - - `full` - Tells the formatter to output a "full" document, i.e. a complete - self-contained document (default: ``False``). - - `title` - If `full` is true, the title that should be used to caption the - document (default: ``''``). - - `docclass` - If the `full` option is enabled, this is the document class to use - (default: ``'article'``). - - `preamble` - If the `full` option is enabled, this can be further preamble commands, - e.g. ``\usepackage`` (default: ``''``). - - `linenos` - If set to ``True``, output line numbers (default: ``False``). - - `linenostart` - The line number for the first line (default: ``1``). 
- - `linenostep` - If set to a number n > 1, only every nth line number is printed. - - `verboptions` - Additional options given to the Verbatim environment (see the *fancyvrb* - docs for possible values) (default: ``''``). - - `commandprefix` - The LaTeX commands used to produce colored output are constructed - using this prefix and some letters (default: ``'PY'``). - - .. versionadded:: 0.7 - .. versionchanged:: 0.10 - The default is now ``'PY'`` instead of ``'C'``. - - `texcomments` - If set to ``True``, enables LaTeX comment lines. That is, LaTex markup - in comment tokens is not escaped so that LaTeX can render it (default: - ``False``). - - .. versionadded:: 1.2 - - `mathescape` - If set to ``True``, enables LaTeX math mode escape in comments. That - is, ``'$...$'`` inside a comment will trigger math mode (default: - ``False``). - - .. versionadded:: 1.2 - - `escapeinside` - If set to a string of length 2, enables escaping to LaTeX. Text - delimited by these 2 characters is read as LaTeX code and - typeset accordingly. It has no effect in string literals. It has - no effect in comments if `texcomments` or `mathescape` is - set. (default: ``''``). - - .. versionadded:: 2.0 - - `envname` - Allows you to pick an alternative environment name replacing Verbatim. - The alternate environment still has to support Verbatim's option syntax. - (default: ``'Verbatim'``). - - .. versionadded:: 2.0 - """ - name = 'LaTeX' - aliases = ['latex', 'tex'] - filenames = ['*.tex'] - - def __init__(self, **options): - Formatter.__init__(self, **options) - self.docclass = options.get('docclass', 'article') - self.preamble = options.get('preamble', '') - self.linenos = get_bool_opt(options, 'linenos', False) - self.linenostart = abs(get_int_opt(options, 'linenostart', 1)) - self.linenostep = abs(get_int_opt(options, 'linenostep', 1)) - self.verboptions = options.get('verboptions', '') - self.nobackground = get_bool_opt(options, 'nobackground', False) - self.commandprefix = options.get('commandprefix', 'PY') - self.texcomments = get_bool_opt(options, 'texcomments', False) - self.mathescape = get_bool_opt(options, 'mathescape', False) - self.escapeinside = options.get('escapeinside', '') - if len(self.escapeinside) == 2: - self.left = self.escapeinside[0] - self.right = self.escapeinside[1] - else: - self.escapeinside = '' - self.envname = options.get('envname', 'Verbatim') - - self._create_stylesheet() - - def _create_stylesheet(self): - t2n = self.ttype2name = {Token: ''} - c2d = self.cmd2def = {} - cp = self.commandprefix - - def rgbcolor(col): - if col: - return ','.join(['%.2f' % (int(col[i] + col[i + 1], 16) / 255.0) - for i in (0, 2, 4)]) - else: - return '1,1,1' - - for ttype, ndef in self.style: - name = _get_ttype_name(ttype) - cmndef = '' - if ndef['bold']: - cmndef += r'\let\$$@bf=\textbf' - if ndef['italic']: - cmndef += r'\let\$$@it=\textit' - if ndef['underline']: - cmndef += r'\let\$$@ul=\underline' - if ndef['roman']: - cmndef += r'\let\$$@ff=\textrm' - if ndef['sans']: - cmndef += r'\let\$$@ff=\textsf' - if ndef['mono']: - cmndef += r'\let\$$@ff=\textsf' - if ndef['color']: - cmndef += (r'\def\$$@tc##1{\textcolor[rgb]{%s}{##1}}' % - rgbcolor(ndef['color'])) - if ndef['border']: - cmndef += (r'\def\$$@bc##1{{\setlength{\fboxsep}{\string -\fboxrule}' - r'\fcolorbox[rgb]{%s}{%s}{\strut ##1}}}' % - (rgbcolor(ndef['border']), - rgbcolor(ndef['bgcolor']))) - elif ndef['bgcolor']: - cmndef += (r'\def\$$@bc##1{{\setlength{\fboxsep}{0pt}' - r'\colorbox[rgb]{%s}{\strut ##1}}}' % - 
rgbcolor(ndef['bgcolor'])) - if cmndef == '': - continue - cmndef = cmndef.replace('$$', cp) - t2n[ttype] = name - c2d[name] = cmndef - - def get_style_defs(self, arg=''): - """ - Return the command sequences needed to define the commands - used to format text in the verbatim environment. ``arg`` is ignored. - """ - cp = self.commandprefix - styles = [] - for name, definition in self.cmd2def.items(): - styles.append(r'\@namedef{%s@tok@%s}{%s}' % (cp, name, definition)) - return STYLE_TEMPLATE % {'cp': self.commandprefix, - 'styles': '\n'.join(styles)} - - def format_unencoded(self, tokensource, outfile): - # TODO: add support for background colors - t2n = self.ttype2name - cp = self.commandprefix - - if self.full: - realoutfile = outfile - outfile = StringIO() - - outfile.write('\\begin{' + self.envname + '}[commandchars=\\\\\\{\\}') - if self.linenos: - start, step = self.linenostart, self.linenostep - outfile.write(',numbers=left' + - (start and ',firstnumber=%d' % start or '') + - (step and ',stepnumber=%d' % step or '')) - if self.mathescape or self.texcomments or self.escapeinside: - outfile.write(',codes={\\catcode`\\$=3\\catcode`\\^=7' - '\\catcode`\\_=8\\relax}') - if self.verboptions: - outfile.write(',' + self.verboptions) - outfile.write(']\n') - - for ttype, value in tokensource: - if ttype in Token.Comment: - if self.texcomments: - # Try to guess comment starting lexeme and escape it ... - start = value[0:1] - for i in range(1, len(value)): - if start[0] != value[i]: - break - start += value[i] - - value = value[len(start):] - start = escape_tex(start, cp) - - # ... but do not escape inside comment. - value = start + value - elif self.mathescape: - # Only escape parts not inside a math environment. - parts = value.split('$') - in_math = False - for i, part in enumerate(parts): - if not in_math: - parts[i] = escape_tex(part, cp) - in_math = not in_math - value = '$'.join(parts) - elif self.escapeinside: - text = value - value = '' - while text: - a, sep1, text = text.partition(self.left) - if sep1: - b, sep2, text = text.partition(self.right) - if sep2: - value += escape_tex(a, cp) + b - else: - value += escape_tex(a + sep1 + b, cp) - else: - value += escape_tex(a, cp) - else: - value = escape_tex(value, cp) - elif ttype not in Token.Escape: - value = escape_tex(value, cp) - styles = [] - while ttype is not Token: - try: - styles.append(t2n[ttype]) - except KeyError: - # not in current style - styles.append(_get_ttype_name(ttype)) - ttype = ttype.parent - styleval = '+'.join(reversed(styles)) - if styleval: - spl = value.split('\n') - for line in spl[:-1]: - if line: - outfile.write("\\%s{%s}{%s}" % (cp, styleval, line)) - outfile.write('\n') - if spl[-1]: - outfile.write("\\%s{%s}{%s}" % (cp, styleval, spl[-1])) - else: - outfile.write(value) - - outfile.write('\\end{' + self.envname + '}\n') - - if self.full: - encoding = self.encoding or 'utf8' - # map known existings encodings from LaTeX distribution - encoding = { - 'utf_8': 'utf8', - 'latin_1': 'latin1', - 'iso_8859_1': 'latin1', - }.get(encoding.replace('-', '_'), encoding) - realoutfile.write(DOC_TEMPLATE % - dict(docclass = self.docclass, - preamble = self.preamble, - title = self.title, - encoding = encoding, - styledefs = self.get_style_defs(), - code = outfile.getvalue())) - - -class LatexEmbeddedLexer(Lexer): - """ - This lexer takes one lexer as argument, the lexer for the language - being formatted, and the left and right delimiters for escaped text. 
- - First everything is scanned using the language lexer to obtain - strings and comments. All other consecutive tokens are merged and - the resulting text is scanned for escaped segments, which are given - the Token.Escape type. Finally text that is not escaped is scanned - again with the language lexer. - """ - def __init__(self, left, right, lang, **options): - self.left = left - self.right = right - self.lang = lang - Lexer.__init__(self, **options) - - def get_tokens_unprocessed(self, text): - # find and remove all the escape tokens (replace with an empty string) - # this is very similar to DelegatingLexer.get_tokens_unprocessed. - buffered = '' - insertions = [] - insertion_buf = [] - for i, t, v in self._find_safe_escape_tokens(text): - if t is None: - if insertion_buf: - insertions.append((len(buffered), insertion_buf)) - insertion_buf = [] - buffered += v - else: - insertion_buf.append((i, t, v)) - if insertion_buf: - insertions.append((len(buffered), insertion_buf)) - return do_insertions(insertions, - self.lang.get_tokens_unprocessed(buffered)) - - def _find_safe_escape_tokens(self, text): - """ find escape tokens that are not in strings or comments """ - for i, t, v in self._filter_to( - self.lang.get_tokens_unprocessed(text), - lambda t: t in Token.Comment or t in Token.String - ): - if t is None: - for i2, t2, v2 in self._find_escape_tokens(v): - yield i + i2, t2, v2 - else: - yield i, None, v - - def _filter_to(self, it, pred): - """ Keep only the tokens that match `pred`, merge the others together """ - buf = '' - idx = 0 - for i, t, v in it: - if pred(t): - if buf: - yield idx, None, buf - buf = '' - yield i, t, v - else: - if not buf: - idx = i - buf += v - if buf: - yield idx, None, buf - - def _find_escape_tokens(self, text): - """ Find escape tokens within text, give token=None otherwise """ - index = 0 - while text: - a, sep1, text = text.partition(self.left) - if a: - yield index, None, a - index += len(a) - if sep1: - b, sep2, text = text.partition(self.right) - if sep2: - yield index + len(sep1), Token.Escape, b - index += len(sep1) + len(b) + len(sep2) - else: - yield index, Token.Error, sep1 - index += len(sep1) - text = b diff --git a/spaces/allandclive/Uganda_MMS/uroman/bin/uroman-quick.pl b/spaces/allandclive/Uganda_MMS/uroman/bin/uroman-quick.pl deleted file mode 100644 index 3c2bb6a84e891d68e7ee996dd72d154e8820c05d..0000000000000000000000000000000000000000 --- a/spaces/allandclive/Uganda_MMS/uroman/bin/uroman-quick.pl +++ /dev/null @@ -1,58 +0,0 @@ -#!/usr/bin/perl -w - -# uroman Nov. 12, 2015 - July 25, 2016 -# version v0.7 -# Author: Ulf Hermjakob - -# Usage: uroman-quick.pl {-l [tur|uig|ukr|yid]} < STDIN -# currently only for Arabic script languages, incl. 
Uyghur - -$|=1; - -use FindBin; -use Cwd "abs_path"; -use File::Basename qw(dirname); -use File::Spec; - -my $bin_dir = abs_path(dirname($0)); -my $root_dir = File::Spec->catfile($bin_dir, File::Spec->updir()); -my $data_dir = File::Spec->catfile($root_dir, "data"); -my $lib_dir = File::Spec->catfile($root_dir, "lib"); - -use lib "$FindBin::Bin/../lib"; -use NLP::Romanizer; -use NLP::UTF8; -$romanizer = NLP::Romanizer; -%ht = (); -$lang_code = ""; - -while (@ARGV) { - $arg = shift @ARGV; - if ($arg =~ /^-+(l|lc|lang-code)$/) { - $lang_code = lc (shift @ARGV || "") - } else { - print STDERR "Ignoring unrecognized arg $arg\n"; - } -} - -$romanization_table_arabic_block_filename = File::Spec->catfile($data_dir, "romanization-table-arabic-block.txt"); -$romanization_table_filename = File::Spec->catfile($data_dir, "romanization-table.txt"); - -$romanizer->load_romanization_table(*ht, $romanization_table_arabic_block_filename); -$romanizer->load_romanization_table(*ht, $romanization_table_filename); - -$line_number = 0; -while (<>) { - $line_number++; - my $line = $_; - print $romanizer->quick_romanize($line, $lang_code, *ht) . "\n"; - if ($line_number =~ /0000$/) { - print STDERR $line_number; - } elsif ($line_number =~ /000$/) { - print STDERR "."; - } -} -print STDERR "\n"; - -exit 0; - diff --git a/spaces/allknowingroger/Image-Models-Test99/app.py b/spaces/allknowingroger/Image-Models-Test99/app.py deleted file mode 100644 index e750602b328b667c0f4310e70697272428627897..0000000000000000000000000000000000000000 --- a/spaces/allknowingroger/Image-Models-Test99/app.py +++ /dev/null @@ -1,144 +0,0 @@ -import gradio as gr -# import os -# import sys -# from pathlib import Path -import time - -models =[ - "leofto/lora-trained-xl-colab-hermes-mini-kelly", - "digiplay/HIJKLMix_v1", - "amanvarm/lora-trained-xl-colab", - "digiplay/fantasticmix2.5D_v4.5", - "ajaygupta/ajaygupta", - "snaoi/lora-trained-xl-colab", - "digiplay/fantasticmix2.5D_v4.0", - "sdfhg5243/pepe", - "metametaivan/anime-test", -] - - -model_functions = {} -model_idx = 1 -for model_path in models: - try: - model_functions[model_idx] = gr.Interface.load(f"models/{model_path}", live=False, preprocess=True, postprocess=False) - except Exception as error: - def the_fn(txt): - return None - model_functions[model_idx] = gr.Interface(fn=the_fn, inputs=["text"], outputs=["image"]) - model_idx+=1 - - -def send_it_idx(idx): - def send_it_fn(prompt): - output = (model_functions.get(str(idx)) or model_functions.get(str(1)))(prompt) - return output - return send_it_fn - -def get_prompts(prompt_text): - return prompt_text - -def clear_it(val): - if int(val) != 0: - val = 0 - else: - val = 0 - pass - return val - -def all_task_end(cnt,t_stamp): - to = t_stamp + 60 - et = time.time() - if et > to and t_stamp != 0: - d = gr.update(value=0) - tog = gr.update(value=1) - #print(f'to: {to} et: {et}') - else: - if cnt != 0: - d = gr.update(value=et) - else: - d = gr.update(value=0) - tog = gr.update(value=0) - #print (f'passing: to: {to} et: {et}') - pass - return d, tog - -def all_task_start(): - print("\n\n\n\n\n\n\n") - t = time.gmtime() - t_stamp = time.time() - current_time = time.strftime("%H:%M:%S", t) - return gr.update(value=t_stamp), gr.update(value=t_stamp), gr.update(value=0) - -def clear_fn(): - nn = len(models) - return tuple([None, *[None for _ in range(nn)]]) - - - -with gr.Blocks(title="SD Models") as my_interface: - with gr.Column(scale=12): - # with gr.Row(): - # gr.Markdown("""- Primary prompt: 你想画的内容(英文单词,如 a cat, 加英文逗号效果更好;点 Improve 
按钮进行完善)\n- Real prompt: 完善后的提示词,出现后再点右边的 Run 按钮开始运行""") - with gr.Row(): - with gr.Row(scale=6): - primary_prompt=gr.Textbox(label="Prompt", value="") - # real_prompt=gr.Textbox(label="Real prompt") - with gr.Row(scale=6): - # improve_prompts_btn=gr.Button("Improve") - with gr.Row(): - run=gr.Button("Run",variant="primary") - clear_btn=gr.Button("Clear") - with gr.Row(): - sd_outputs = {} - model_idx = 1 - for model_path in models: - with gr.Column(scale=3, min_width=320): - with gr.Box(): - sd_outputs[model_idx] = gr.Image(label=model_path) - pass - model_idx += 1 - pass - pass - - with gr.Row(visible=False): - start_box=gr.Number(interactive=False) - end_box=gr.Number(interactive=False) - tog_box=gr.Textbox(value=0,interactive=False) - - start_box.change( - all_task_end, - [start_box, end_box], - [start_box, tog_box], - every=1, - show_progress=False) - - primary_prompt.submit(all_task_start, None, [start_box, end_box, tog_box]) - run.click(all_task_start, None, [start_box, end_box, tog_box]) - runs_dict = {} - model_idx = 1 - for model_path in models: - runs_dict[model_idx] = run.click(model_functions[model_idx], inputs=[primary_prompt], outputs=[sd_outputs[model_idx]]) - model_idx += 1 - pass - pass - - # improve_prompts_btn_clicked=improve_prompts_btn.click( - # get_prompts, - # inputs=[primary_prompt], - # outputs=[primary_prompt], - # cancels=list(runs_dict.values())) - clear_btn.click( - clear_fn, - None, - [primary_prompt, *list(sd_outputs.values())], - cancels=[*list(runs_dict.values())]) - tog_box.change( - clear_it, - tog_box, - tog_box, - cancels=[*list(runs_dict.values())]) - -my_interface.queue(concurrency_count=600, status_update_rate=1) -my_interface.launch(inline=True, show_api=False) - \ No newline at end of file diff --git a/spaces/allknowingroger/huggingface/assets/index-4c4fac98.css b/spaces/allknowingroger/huggingface/assets/index-4c4fac98.css deleted file mode 100644 index 79f233cc816beae61069a0feb08fb8fa0e410fd8..0000000000000000000000000000000000000000 --- a/spaces/allknowingroger/huggingface/assets/index-4c4fac98.css +++ /dev/null @@ -1 +0,0 @@ -*,:before,:after{box-sizing:border-box;border-width:0;border-style:solid;border-color:#e5e7eb}:before,:after{--tw-content: ""}html{line-height:1.5;-webkit-text-size-adjust:100%;-moz-tab-size:4;-o-tab-size:4;tab-size:4;font-family:ui-sans-serif,system-ui,-apple-system,BlinkMacSystemFont,Segoe UI,Roboto,Helvetica Neue,Arial,Noto Sans,sans-serif,"Apple Color Emoji","Segoe UI Emoji",Segoe UI Symbol,"Noto Color Emoji";font-feature-settings:normal}body{margin:0;line-height:inherit}hr{height:0;color:inherit;border-top-width:1px}abbr:where([title]){-webkit-text-decoration:underline dotted;text-decoration:underline dotted}h1,h2,h3,h4,h5,h6{font-size:inherit;font-weight:inherit}a{color:inherit;text-decoration:inherit}b,strong{font-weight:bolder}code,kbd,samp,pre{font-family:ui-monospace,SFMono-Regular,Menlo,Monaco,Consolas,Liberation Mono,Courier 
New,monospace;font-size:1em}small{font-size:80%}sub,sup{font-size:75%;line-height:0;position:relative;vertical-align:baseline}sub{bottom:-.25em}sup{top:-.5em}table{text-indent:0;border-color:inherit;border-collapse:collapse}button,input,optgroup,select,textarea{font-family:inherit;font-size:100%;font-weight:inherit;line-height:inherit;color:inherit;margin:0;padding:0}button,select{text-transform:none}button,[type=button],[type=reset],[type=submit]{-webkit-appearance:button;background-color:transparent;background-image:none}:-moz-focusring{outline:auto}:-moz-ui-invalid{box-shadow:none}progress{vertical-align:baseline}::-webkit-inner-spin-button,::-webkit-outer-spin-button{height:auto}[type=search]{-webkit-appearance:textfield;outline-offset:-2px}::-webkit-search-decoration{-webkit-appearance:none}::-webkit-file-upload-button{-webkit-appearance:button;font:inherit}summary{display:list-item}blockquote,dl,dd,h1,h2,h3,h4,h5,h6,hr,figure,p,pre{margin:0}fieldset{margin:0;padding:0}legend{padding:0}ol,ul,menu{list-style:none;margin:0;padding:0}textarea{resize:vertical}input::-moz-placeholder,textarea::-moz-placeholder{opacity:1;color:#9ca3af}input::placeholder,textarea::placeholder{opacity:1;color:#9ca3af}button,[role=button]{cursor:pointer}:disabled{cursor:default}img,svg,video,canvas,audio,iframe,embed,object{display:block;vertical-align:middle}img,video{max-width:100%;height:auto}[hidden]{display:none}*,:before,:after{--tw-border-spacing-x: 0;--tw-border-spacing-y: 0;--tw-translate-x: 0;--tw-translate-y: 0;--tw-rotate: 0;--tw-skew-x: 0;--tw-skew-y: 0;--tw-scale-x: 1;--tw-scale-y: 1;--tw-pan-x: ;--tw-pan-y: ;--tw-pinch-zoom: ;--tw-scroll-snap-strictness: proximity;--tw-ordinal: ;--tw-slashed-zero: ;--tw-numeric-figure: ;--tw-numeric-spacing: ;--tw-numeric-fraction: ;--tw-ring-inset: ;--tw-ring-offset-width: 0px;--tw-ring-offset-color: #fff;--tw-ring-color: rgb(59 130 246 / .5);--tw-ring-offset-shadow: 0 0 #0000;--tw-ring-shadow: 0 0 #0000;--tw-shadow: 0 0 #0000;--tw-shadow-colored: 0 0 #0000;--tw-blur: ;--tw-brightness: ;--tw-contrast: ;--tw-grayscale: ;--tw-hue-rotate: ;--tw-invert: ;--tw-saturate: ;--tw-sepia: ;--tw-drop-shadow: ;--tw-backdrop-blur: ;--tw-backdrop-brightness: ;--tw-backdrop-contrast: ;--tw-backdrop-grayscale: ;--tw-backdrop-hue-rotate: ;--tw-backdrop-invert: ;--tw-backdrop-opacity: ;--tw-backdrop-saturate: ;--tw-backdrop-sepia: }::backdrop{--tw-border-spacing-x: 0;--tw-border-spacing-y: 0;--tw-translate-x: 0;--tw-translate-y: 0;--tw-rotate: 0;--tw-skew-x: 0;--tw-skew-y: 0;--tw-scale-x: 1;--tw-scale-y: 1;--tw-pan-x: ;--tw-pan-y: ;--tw-pinch-zoom: ;--tw-scroll-snap-strictness: proximity;--tw-ordinal: ;--tw-slashed-zero: ;--tw-numeric-figure: ;--tw-numeric-spacing: ;--tw-numeric-fraction: ;--tw-ring-inset: ;--tw-ring-offset-width: 0px;--tw-ring-offset-color: #fff;--tw-ring-color: rgb(59 130 246 / .5);--tw-ring-offset-shadow: 0 0 #0000;--tw-ring-shadow: 0 0 #0000;--tw-shadow: 0 0 #0000;--tw-shadow-colored: 0 0 #0000;--tw-blur: ;--tw-brightness: ;--tw-contrast: ;--tw-grayscale: ;--tw-hue-rotate: ;--tw-invert: ;--tw-saturate: ;--tw-sepia: ;--tw-drop-shadow: ;--tw-backdrop-blur: ;--tw-backdrop-brightness: ;--tw-backdrop-contrast: ;--tw-backdrop-grayscale: ;--tw-backdrop-hue-rotate: ;--tw-backdrop-invert: ;--tw-backdrop-opacity: ;--tw-backdrop-saturate: ;--tw-backdrop-sepia: }.container{width:100%}@media (min-width: 640px){.container{max-width:640px}}@media (min-width: 768px){.container{max-width:768px}}@media (min-width: 1024px){.container{max-width:1024px}}@media (min-width: 
1280px){.container{max-width:1280px}}@media (min-width: 1536px){.container{max-width:1536px}}.block{display:block}.flex{display:flex}.table{display:table}.hidden{display:none}.h-full{height:100%}.min-h-screen{min-height:100vh}.w-2\/3{width:66.666667%}.w-full{width:100%}.cursor-not-allowed{cursor:not-allowed}.cursor-pointer{cursor:pointer}.cursor-wait{cursor:wait}.flex-col{flex-direction:column}.items-center{align-items:center}.justify-center{justify-content:center}.space-y-12>:not([hidden])~:not([hidden]){--tw-space-y-reverse: 0;margin-top:calc(3rem * calc(1 - var(--tw-space-y-reverse)));margin-bottom:calc(3rem * var(--tw-space-y-reverse))}.overflow-auto{overflow:auto}.whitespace-pre-wrap{white-space:pre-wrap}.border-4{border-width:4px}.border-yellow-200{--tw-border-opacity: 1;border-color:rgb(254 240 138 / var(--tw-border-opacity))}.bg-yellow-200{--tw-bg-opacity: 1;background-color:rgb(254 240 138 / var(--tw-bg-opacity))}.bg-yellow-500{--tw-bg-opacity: 1;background-color:rgb(234 179 8 / var(--tw-bg-opacity))}.p-6{padding:1.5rem}.py-24{padding-top:6rem;padding-bottom:6rem}.py-6{padding-top:1.5rem;padding-bottom:1.5rem}.text-center{text-align:center}.text-6xl{font-size:3.75rem;line-height:1}.text-xl{font-size:1.25rem;line-height:1.75rem}.opacity-50{opacity:.5}.filter{filter:var(--tw-blur) var(--tw-brightness) var(--tw-contrast) var(--tw-grayscale) var(--tw-hue-rotate) var(--tw-invert) var(--tw-saturate) var(--tw-sepia) var(--tw-drop-shadow)}*,*:before,*:after{box-sizing:inherit;-webkit-user-select:inherit;-moz-user-select:inherit;user-select:inherit}html,body,#root{box-sizing:border-box;height:100%;min-height:100vh;width:100%;min-width:100vw;margin:0;padding:0;-webkit-user-select:none;-moz-user-select:none;user-select:none}input::-webkit-file-upload-button{display:none}@media (min-width: 1024px){.lg\:w-1\/3{width:33.333333%}} diff --git a/spaces/almakedon/faster-whisper-webui/src/config.py b/spaces/almakedon/faster-whisper-webui/src/config.py deleted file mode 100644 index bd2b51478c39ce91fa55e2a8d801d9a7cf6d662e..0000000000000000000000000000000000000000 --- a/spaces/almakedon/faster-whisper-webui/src/config.py +++ /dev/null @@ -1,154 +0,0 @@ -from enum import Enum -import urllib - -import os -from typing import List -from urllib.parse import urlparse -import json5 -import torch - -from tqdm import tqdm - -class ModelConfig: - def __init__(self, name: str, url: str, path: str = None, type: str = "whisper"): - """ - Initialize a model configuration. - - name: Name of the model - url: URL to download the model from - path: Path to the model file. If not set, the model will be downloaded from the URL. - type: Type of model. Can be whisper or huggingface. 
- """ - self.name = name - self.url = url - self.path = path - self.type = type - -VAD_INITIAL_PROMPT_MODE_VALUES=["prepend_all_segments", "prepend_first_segment", "json_prompt_mode"] - -class VadInitialPromptMode(Enum): - PREPEND_ALL_SEGMENTS = 1 - PREPREND_FIRST_SEGMENT = 2 - JSON_PROMPT_MODE = 3 - - @staticmethod - def from_string(s: str): - normalized = s.lower() if s is not None else None - - if normalized == "prepend_all_segments": - return VadInitialPromptMode.PREPEND_ALL_SEGMENTS - elif normalized == "prepend_first_segment": - return VadInitialPromptMode.PREPREND_FIRST_SEGMENT - elif normalized == "json_prompt_mode": - return VadInitialPromptMode.JSON_PROMPT_MODE - elif normalized is not None and normalized != "": - raise ValueError(f"Invalid value for VadInitialPromptMode: {s}") - else: - return None - -class ApplicationConfig: - def __init__(self, models: List[ModelConfig] = [], input_audio_max_duration: int = 600, - share: bool = False, server_name: str = None, server_port: int = 7860, - queue_concurrency_count: int = 1, delete_uploaded_files: bool = True, - whisper_implementation: str = "whisper", - default_model_name: str = "medium", default_vad: str = "silero-vad", - vad_parallel_devices: str = "", vad_cpu_cores: int = 1, vad_process_timeout: int = 1800, - auto_parallel: bool = False, output_dir: str = None, - model_dir: str = None, device: str = None, - verbose: bool = True, task: str = "transcribe", language: str = None, - vad_initial_prompt_mode: str = "prepend_first_segment ", - vad_merge_window: float = 5, vad_max_merge_size: float = 30, - vad_padding: float = 1, vad_prompt_window: float = 3, - temperature: float = 0, best_of: int = 5, beam_size: int = 5, - patience: float = None, length_penalty: float = None, - suppress_tokens: str = "-1", initial_prompt: str = None, - condition_on_previous_text: bool = True, fp16: bool = True, - compute_type: str = "float16", - temperature_increment_on_fallback: float = 0.2, compression_ratio_threshold: float = 2.4, - logprob_threshold: float = -1.0, no_speech_threshold: float = 0.6, - # Word timestamp settings - word_timestamps: bool = False, prepend_punctuations: str = "\"\'“¿([{-", - append_punctuations: str = "\"\'.。,,!!??::”)]}、", - highlight_words: bool = False): - - self.models = models - - # WebUI settings - self.input_audio_max_duration = input_audio_max_duration - self.share = share - self.server_name = server_name - self.server_port = server_port - self.queue_concurrency_count = queue_concurrency_count - self.delete_uploaded_files = delete_uploaded_files - - self.whisper_implementation = whisper_implementation - self.default_model_name = default_model_name - self.default_vad = default_vad - self.vad_parallel_devices = vad_parallel_devices - self.vad_cpu_cores = vad_cpu_cores - self.vad_process_timeout = vad_process_timeout - self.auto_parallel = auto_parallel - self.output_dir = output_dir - - self.model_dir = model_dir - self.device = device - self.verbose = verbose - self.task = task - self.language = language - self.vad_initial_prompt_mode = vad_initial_prompt_mode - self.vad_merge_window = vad_merge_window - self.vad_max_merge_size = vad_max_merge_size - self.vad_padding = vad_padding - self.vad_prompt_window = vad_prompt_window - self.temperature = temperature - self.best_of = best_of - self.beam_size = beam_size - self.patience = patience - self.length_penalty = length_penalty - self.suppress_tokens = suppress_tokens - self.initial_prompt = initial_prompt - self.condition_on_previous_text = condition_on_previous_text - 
self.fp16 = fp16 - self.compute_type = compute_type - self.temperature_increment_on_fallback = temperature_increment_on_fallback - self.compression_ratio_threshold = compression_ratio_threshold - self.logprob_threshold = logprob_threshold - self.no_speech_threshold = no_speech_threshold - - # Word timestamp settings - self.word_timestamps = word_timestamps - self.prepend_punctuations = prepend_punctuations - self.append_punctuations = append_punctuations - self.highlight_words = highlight_words - - def get_model_names(self): - return [ x.name for x in self.models ] - - def update(self, **new_values): - result = ApplicationConfig(**self.__dict__) - - for key, value in new_values.items(): - setattr(result, key, value) - return result - - @staticmethod - def create_default(**kwargs): - app_config = ApplicationConfig.parse_file(os.environ.get("WHISPER_WEBUI_CONFIG", "config.json5")) - - # Update with kwargs - if len(kwargs) > 0: - app_config = app_config.update(**kwargs) - return app_config - - @staticmethod - def parse_file(config_path: str): - import json5 - - with open(config_path, "r", encoding="utf-8") as f: - # Load using json5 - data = json5.load(f) - data_models = data.pop("models", []) - - models = [ ModelConfig(**x) for x in data_models ] - - return ApplicationConfig(models, **data) diff --git a/spaces/almakedon/faster-whisper-webui/src/source.py b/spaces/almakedon/faster-whisper-webui/src/source.py deleted file mode 100644 index e304e278bfae8ef289c999fc76311ce01b547991..0000000000000000000000000000000000000000 --- a/spaces/almakedon/faster-whisper-webui/src/source.py +++ /dev/null @@ -1,80 +0,0 @@ -# Gradio seems to truncate files without keeping the extension, so we need to truncate the file prefix ourself -import os -import pathlib -from typing import List -import zipfile - -import ffmpeg -from more_itertools import unzip - -from src.download import ExceededMaximumDuration, download_url - -MAX_FILE_PREFIX_LENGTH = 17 - -class AudioSource: - def __init__(self, source_path, source_name = None, audio_duration = None): - self.source_path = source_path - self.source_name = source_name - self._audio_duration = audio_duration - - # Load source name if not provided - if (self.source_name is None): - file_path = pathlib.Path(self.source_path) - self.source_name = file_path.name - - def get_audio_duration(self): - if self._audio_duration is None: - self._audio_duration = float(ffmpeg.probe(self.source_path)["format"]["duration"]) - - return self._audio_duration - - def get_full_name(self): - return self.source_name - - def get_short_name(self, max_length: int = MAX_FILE_PREFIX_LENGTH): - file_path = pathlib.Path(self.source_name) - short_name = file_path.stem[:max_length] + file_path.suffix - - return short_name - - def __str__(self) -> str: - return self.source_path - -class AudioSourceCollection: - def __init__(self, sources: List[AudioSource]): - self.sources = sources - - def __iter__(self): - return iter(self.sources) - -def get_audio_source_collection(urlData: str, multipleFiles: List, microphoneData: str, input_audio_max_duration: float = -1) -> List[AudioSource]: - output: List[AudioSource] = [] - - if urlData: - # Download from YouTube. This could also be a playlist or a channel. 
- output.extend([ AudioSource(x) for x in download_url(urlData, input_audio_max_duration, playlistItems=None) ]) - else: - # Add input files - if (multipleFiles is not None): - output.extend([ AudioSource(x.name) for x in multipleFiles ]) - if (microphoneData is not None): - output.append(AudioSource(microphoneData)) - - total_duration = 0 - - # Calculate total audio length. We do this even if input_audio_max_duration - # is disabled to ensure that all the audio files are valid. - for source in output: - audioDuration = ffmpeg.probe(source.source_path)["format"]["duration"] - total_duration += float(audioDuration) - - # Save audio duration - source._audio_duration = float(audioDuration) - - # Ensure the total duration of the audio is not too long - if input_audio_max_duration > 0: - if float(total_duration) > input_audio_max_duration: - raise ExceededMaximumDuration(videoDuration=total_duration, maxDuration=input_audio_max_duration, message="Video(s) is too long") - - # Return a list of audio sources - return output \ No newline at end of file diff --git a/spaces/alwayse/MMD_MP_Text_Dection/dataTST.py b/spaces/alwayse/MMD_MP_Text_Dection/dataTST.py deleted file mode 100644 index ffbf6e302d5f60f38a7d3c6dc0ab5097b1eba1cf..0000000000000000000000000000000000000000 --- a/spaces/alwayse/MMD_MP_Text_Dection/dataTST.py +++ /dev/null @@ -1,63 +0,0 @@ -import numpy as np -import torch -import random -from meta_train import mmdPreModel -from collections import namedtuple -import joblib -from transformers import RobertaTokenizer, RobertaModel - - -def api_init(): - - random.seed(0) - np.random.seed(0) - torch.manual_seed(0) - torch.cuda.manual_seed(0) - torch.cuda.manual_seed_all(0) - torch.backends.cudnn.benchmark = False - torch.backends.cudnn.deterministic = True - - model_name = 'roberta-base-openai-detector' - model_path_api = f'.' 
- token_num, hidden_size = 100, 768 - - Config = namedtuple('Config', ['in_dim', 'hid_dim', 'dropout', 'out_dim', 'token_num']) - config = Config( - in_dim=hidden_size, - token_num=token_num, - hid_dim=512, - dropout=0.2, - out_dim=300,) - - net = mmdPreModel(config=config, num_mlp=0, transformer_flag=True, num_hidden_layers=1) - - # load the features and models - feature_ref_for_test_filename = f'{model_path_api}/feature_ref_for_test.pt' - model_filename = f'{model_path_api}/logistic_regression_model.pkl' - net_filename = f'{model_path_api}/net.pt' - - load_ref_data = torch.load(feature_ref_for_test_filename,map_location=torch.device('cpu')) # cpu - loaded_model = joblib.load(model_filename) # cpu - checkpoint = torch.load(net_filename,map_location=torch.device('cpu')) - net.load_state_dict(checkpoint['net']) - sigma, sigma0_u, ep = checkpoint['sigma'], checkpoint['sigma0_u'], checkpoint['ep'] - - # generic generative model - cache_dir = ".cache" - base_tokenizer = RobertaTokenizer.from_pretrained(model_name, cache_dir=cache_dir) - base_model = RobertaModel.from_pretrained(model_name, output_hidden_states=True, cache_dir=cache_dir) - - # whether load the model to gpu - gpu_using = False - - DEVICE = torch.device("cpu") - if gpu_using: - DEVICE = torch.device("cuda:0") - net = net.to(DEVICE) - sigma, sigma0_u, ep = sigma.to(DEVICE), sigma0_u.to(DEVICE), ep.to(DEVICE) - load_ref_data = load_ref_data.to(DEVICE) - base_model = base_model.to(DEVICE) - num_ref = 5000 - feature_ref = load_ref_data[np.random.permutation(load_ref_data.shape[0])][:num_ref].to(DEVICE) - - return base_model, base_tokenizer, net, feature_ref, sigma, sigma0_u, ep, loaded_model, DEVICE diff --git a/spaces/ammarnasr/Sem-GAN-Bird-Image-Generator/README.md b/spaces/ammarnasr/Sem-GAN-Bird-Image-Generator/README.md deleted file mode 100644 index fe4053762a305b8d60e2dd4ea5708bc1d5cca808..0000000000000000000000000000000000000000 --- a/spaces/ammarnasr/Sem-GAN-Bird-Image-Generator/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Sem GAN Bird Image Generator -emoji: 📉 -colorFrom: indigo -colorTo: yellow -sdk: streamlit -sdk_version: 1.21.0 -app_file: app.py -pinned: false -license: openrail ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/antonbol/vocal_remover/lib/spec_utils.py b/spaces/antonbol/vocal_remover/lib/spec_utils.py deleted file mode 100644 index af34f70d8cf4f47a3c4d0e8dc9506be567018bb5..0000000000000000000000000000000000000000 --- a/spaces/antonbol/vocal_remover/lib/spec_utils.py +++ /dev/null @@ -1,227 +0,0 @@ -import os - -import librosa -import numpy as np -import soundfile as sf - - -def crop_center(h1, h2): - h1_shape = h1.size() - h2_shape = h2.size() - - if h1_shape[3] == h2_shape[3]: - return h1 - elif h1_shape[3] < h2_shape[3]: - raise ValueError('h1_shape[3] must be greater than h2_shape[3]') - - # s_freq = (h2_shape[2] - h1_shape[2]) // 2 - # e_freq = s_freq + h1_shape[2] - s_time = (h1_shape[3] - h2_shape[3]) // 2 - e_time = s_time + h2_shape[3] - h1 = h1[:, :, :, s_time:e_time] - - return h1 - - -def wave_to_spectrogram(wave, hop_length, n_fft): - wave_left = np.asfortranarray(wave[0]) - wave_right = np.asfortranarray(wave[1]) - - spec_left = librosa.stft(wave_left, n_fft, hop_length=hop_length) - spec_right = librosa.stft(wave_right, n_fft, hop_length=hop_length) - spec = np.asfortranarray([spec_left, spec_right]) - - return spec - - -def spectrogram_to_image(spec, mode='magnitude'): - if mode == 'magnitude': - if 
np.iscomplexobj(spec): - y = np.abs(spec) - else: - y = spec - y = np.log10(y ** 2 + 1e-8) - elif mode == 'phase': - if np.iscomplexobj(spec): - y = np.angle(spec) - else: - y = spec - - y -= y.min() - y *= 255 / y.max() - img = np.uint8(y) - - if y.ndim == 3: - img = img.transpose(1, 2, 0) - img = np.concatenate([ - np.max(img, axis=2, keepdims=True), img - ], axis=2) - - return img - - -def aggressively_remove_vocal(X, y, weight): - X_mag = np.abs(X) - y_mag = np.abs(y) - # v_mag = np.abs(X_mag - y_mag) - v_mag = X_mag - y_mag - v_mag *= v_mag > y_mag - - y_mag = np.clip(y_mag - v_mag * weight, 0, np.inf) - - return y_mag * np.exp(1.j * np.angle(y)) - - -def merge_artifacts(y_mask, thres=0.05, min_range=64, fade_size=32): - if min_range < fade_size * 2: - raise ValueError('min_range must be >= fade_size * 2') - - idx = np.where(y_mask.min(axis=(0, 1)) > thres)[0] - start_idx = np.insert(idx[np.where(np.diff(idx) != 1)[0] + 1], 0, idx[0]) - end_idx = np.append(idx[np.where(np.diff(idx) != 1)[0]], idx[-1]) - artifact_idx = np.where(end_idx - start_idx > min_range)[0] - weight = np.zeros_like(y_mask) - if len(artifact_idx) > 0: - start_idx = start_idx[artifact_idx] - end_idx = end_idx[artifact_idx] - old_e = None - for s, e in zip(start_idx, end_idx): - if old_e is not None and s - old_e < fade_size: - s = old_e - fade_size * 2 - - if s != 0: - weight[:, :, s:s + fade_size] = np.linspace(0, 1, fade_size) - else: - s -= fade_size - - if e != y_mask.shape[2]: - weight[:, :, e - fade_size:e] = np.linspace(1, 0, fade_size) - else: - e += fade_size - - weight[:, :, s + fade_size:e - fade_size] = 1 - old_e = e - - v_mask = 1 - y_mask - y_mask += weight * v_mask - - return y_mask - - -def align_wave_head_and_tail(a, b, sr): - a, _ = librosa.effects.trim(a) - b, _ = librosa.effects.trim(b) - - a_mono = a[:, :sr * 4].sum(axis=0) - b_mono = b[:, :sr * 4].sum(axis=0) - - a_mono -= a_mono.mean() - b_mono -= b_mono.mean() - - offset = len(a_mono) - 1 - delay = np.argmax(np.correlate(a_mono, b_mono, 'full')) - offset - - if delay > 0: - a = a[:, delay:] - else: - b = b[:, np.abs(delay):] - - if a.shape[1] < b.shape[1]: - b = b[:, :a.shape[1]] - else: - a = a[:, :b.shape[1]] - - return a, b - - -def cache_or_load(mix_path, inst_path, sr, hop_length, n_fft): - mix_basename = os.path.splitext(os.path.basename(mix_path))[0] - inst_basename = os.path.splitext(os.path.basename(inst_path))[0] - - cache_dir = 'sr{}_hl{}_nf{}'.format(sr, hop_length, n_fft) - mix_cache_dir = os.path.join(os.path.dirname(mix_path), cache_dir) - inst_cache_dir = os.path.join(os.path.dirname(inst_path), cache_dir) - os.makedirs(mix_cache_dir, exist_ok=True) - os.makedirs(inst_cache_dir, exist_ok=True) - - mix_cache_path = os.path.join(mix_cache_dir, mix_basename + '.npy') - inst_cache_path = os.path.join(inst_cache_dir, inst_basename + '.npy') - - if os.path.exists(mix_cache_path) and os.path.exists(inst_cache_path): - X = np.load(mix_cache_path) - y = np.load(inst_cache_path) - else: - X, _ = librosa.load( - mix_path, sr, False, dtype=np.float32, res_type='kaiser_fast') - y, _ = librosa.load( - inst_path, sr, False, dtype=np.float32, res_type='kaiser_fast') - - X, y = align_wave_head_and_tail(X, y, sr) - - X = wave_to_spectrogram(X, hop_length, n_fft) - y = wave_to_spectrogram(y, hop_length, n_fft) - - np.save(mix_cache_path, X) - np.save(inst_cache_path, y) - - return X, y, mix_cache_path, inst_cache_path - - -def spectrogram_to_wave(spec, hop_length=1024): - if spec.ndim == 2: - wave = librosa.istft(spec, hop_length=hop_length) - 
elif spec.ndim == 3: - spec_left = np.asfortranarray(spec[0]) - spec_right = np.asfortranarray(spec[1]) - - wave_left = librosa.istft(spec_left, hop_length=hop_length) - wave_right = librosa.istft(spec_right, hop_length=hop_length) - wave = np.asfortranarray([wave_left, wave_right]) - - return wave - - -if __name__ == "__main__": - import cv2 - import sys - - bins = 2048 // 2 + 1 - freq_to_bin = 2 * bins / 44100 - unstable_bins = int(200 * freq_to_bin) - stable_bins = int(22050 * freq_to_bin) - reduction_weight = np.concatenate([ - np.linspace(0, 1, unstable_bins, dtype=np.float32)[:, None], - np.linspace(1, 0, stable_bins - unstable_bins, dtype=np.float32)[:, None], - np.zeros((bins - stable_bins, 1)) - ], axis=0) * 0.2 - - X, _ = librosa.load( - sys.argv[1], 44100, False, dtype=np.float32, res_type='kaiser_fast') - y, _ = librosa.load( - sys.argv[2], 44100, False, dtype=np.float32, res_type='kaiser_fast') - - X, y = align_wave_head_and_tail(X, y, 44100) - X_spec = wave_to_spectrogram(X, 1024, 2048) - y_spec = wave_to_spectrogram(y, 1024, 2048) - - X_mag = np.abs(X_spec) - y_mag = np.abs(y_spec) - # v_mag = np.abs(X_mag - y_mag) - v_mag = X_mag - y_mag - v_mag *= v_mag > y_mag - - # y_mag = np.clip(y_mag - v_mag * reduction_weight, 0, np.inf) - y_spec = y_mag * np.exp(1j * np.angle(y_spec)) - v_spec = v_mag * np.exp(1j * np.angle(X_spec)) - - X_image = spectrogram_to_image(X_mag) - y_image = spectrogram_to_image(y_mag) - v_image = spectrogram_to_image(v_mag) - - cv2.imwrite('test_X.jpg', X_image) - cv2.imwrite('test_y.jpg', y_image) - cv2.imwrite('test_v.jpg', v_image) - - sf.write('test_X.wav', spectrogram_to_wave(X_spec).T, 44100) - sf.write('test_y.wav', spectrogram_to_wave(y_spec).T, 44100) - sf.write('test_v.wav', spectrogram_to_wave(v_spec).T, 44100) diff --git a/spaces/antonelli/outsidellms/d.py b/spaces/antonelli/outsidellms/d.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/anuragshas/Hindi_ASR/app.py b/spaces/anuragshas/Hindi_ASR/app.py deleted file mode 100644 index 206e16baf286400c9b65f557b29c19ad2a5cb9c1..0000000000000000000000000000000000000000 --- a/spaces/anuragshas/Hindi_ASR/app.py +++ /dev/null @@ -1,43 +0,0 @@ -import gradio as gr -import librosa -from transformers import AutoFeatureExtractor, pipeline - - -def load_and_fix_data(input_file, model_sampling_rate): - speech, sample_rate = librosa.load(input_file) - if len(speech.shape) > 1: - speech = speech[:, 0] + speech[:, 1] - if sample_rate != model_sampling_rate: - speech = librosa.resample(speech, sample_rate, model_sampling_rate) - return speech - - -feature_extractor = AutoFeatureExtractor.from_pretrained( - "anuragshas/wav2vec2-xls-r-1b-hi-with-lm" -) -sampling_rate = feature_extractor.sampling_rate - -asr = pipeline( - "automatic-speech-recognition", model="anuragshas/wav2vec2-xls-r-1b-hi-with-lm" -) - - -def predict_and_ctc_lm_decode(input_file): - speech = load_and_fix_data(input_file, sampling_rate) - transcribed_text = asr(speech, chunk_length_s=5, stride_length_s=1) - return transcribed_text["text"] - - -gr.Interface( - predict_and_ctc_lm_decode, - inputs=[ - gr.inputs.Audio(source="microphone", type="filepath", label="Record your audio") - ], - outputs=[gr.outputs.Textbox()], - examples=[["example1.wav"]], - title="Hindi ASR using Wav2Vec2-1B with LM", - article="
visitor badge
", - description="Built during Robust Speech Event", - layout="horizontal", - theme="huggingface", -).launch(enable_queue=True, cache_examples=True) diff --git a/spaces/artificialguybr/video-dubbing/TTS/recipes/vctk/fast_speech/train_fast_speech.py b/spaces/artificialguybr/video-dubbing/TTS/recipes/vctk/fast_speech/train_fast_speech.py deleted file mode 100644 index 3db7ff7afe7770ed6489650f24db5567eaaadb4f..0000000000000000000000000000000000000000 --- a/spaces/artificialguybr/video-dubbing/TTS/recipes/vctk/fast_speech/train_fast_speech.py +++ /dev/null @@ -1,96 +0,0 @@ -import os - -from trainer import Trainer, TrainerArgs - -from TTS.config import BaseAudioConfig, BaseDatasetConfig -from TTS.tts.configs.fast_speech_config import FastSpeechConfig -from TTS.tts.datasets import load_tts_samples -from TTS.tts.models.forward_tts import ForwardTTS -from TTS.tts.utils.speakers import SpeakerManager -from TTS.tts.utils.text.tokenizer import TTSTokenizer -from TTS.utils.audio import AudioProcessor - -output_path = os.path.dirname(os.path.abspath(__file__)) -dataset_config = BaseDatasetConfig(formatter="vctk", meta_file_train="", path=os.path.join(output_path, "../VCTK/")) - -audio_config = BaseAudioConfig( - sample_rate=22050, - do_trim_silence=True, - trim_db=23.0, - signal_norm=False, - mel_fmin=0.0, - mel_fmax=8000, - spec_gain=1.0, - log_func="np.log", - ref_level_db=20, - preemphasis=0.0, -) - -config = FastSpeechConfig( - run_name="fast_speech_vctk", - audio=audio_config, - batch_size=32, - eval_batch_size=16, - num_loader_workers=8, - num_eval_loader_workers=4, - compute_input_seq_cache=True, - precompute_num_workers=4, - run_eval=True, - test_delay_epochs=-1, - epochs=1000, - text_cleaner="english_cleaners", - use_phonemes=True, - phoneme_language="en-us", - phoneme_cache_path=os.path.join(output_path, "phoneme_cache"), - print_step=50, - print_eval=False, - mixed_precision=False, - min_text_len=0, - max_text_len=500, - min_audio_len=0, - max_audio_len=500000, - output_path=output_path, - datasets=[dataset_config], - use_speaker_embedding=True, -) - -## INITIALIZE THE AUDIO PROCESSOR -# Audio processor is used for feature extraction and audio I/O. -# It mainly serves to the dataloader and the training loggers. -ap = AudioProcessor.init_from_config(config) - -# INITIALIZE THE TOKENIZER -# Tokenizer is used to convert text to sequences of token IDs. -# If characters are not defined in the config, default characters are passed to the config -tokenizer, config = TTSTokenizer.init_from_config(config) - -# LOAD DATA SAMPLES -# Each sample is a list of ```[text, audio_file_path, speaker_name]``` -# You can define your custom sample loader returning the list of samples. -# Or define your custom formatter and pass it to the `load_tts_samples`. -# Check `TTS.tts.datasets.load_tts_samples` for more details. 
-train_samples, eval_samples = load_tts_samples( - dataset_config, - eval_split=True, - eval_split_max_size=config.eval_split_max_size, - eval_split_size=config.eval_split_size, -) - -# init speaker manager for multi-speaker training -# it maps speaker-id to speaker-name in the model and data-loader -speaker_manager = SpeakerManager() -speaker_manager.set_ids_from_data(train_samples + eval_samples, parse_key="speaker_name") -config.model_args.num_speakers = speaker_manager.num_speakers - -# init model -model = ForwardTTS(config, ap, tokenizer, speaker_manager=speaker_manager) - -# INITIALIZE THE TRAINER -# Trainer provides a generic API to train all the 🐸TTS models with all its perks like mixed-precision training, -# distributed training, etc. -trainer = Trainer( - TrainerArgs(), config, output_path, model=model, train_samples=train_samples, eval_samples=eval_samples -) - -# AND... 3,2,1... 🚀 -trainer.fit() diff --git a/spaces/artificialguybr/video-dubbing/TTS/tests/vocoder_tests/test_wavegrad_train.py b/spaces/artificialguybr/video-dubbing/TTS/tests/vocoder_tests/test_wavegrad_train.py deleted file mode 100644 index fe56ee783f36b89879af78e58316b19ff0e23f54..0000000000000000000000000000000000000000 --- a/spaces/artificialguybr/video-dubbing/TTS/tests/vocoder_tests/test_wavegrad_train.py +++ /dev/null @@ -1,43 +0,0 @@ -import glob -import os -import shutil - -from tests import get_device_id, get_tests_output_path, run_cli -from TTS.vocoder.configs import WavegradConfig - -config_path = os.path.join(get_tests_output_path(), "test_vocoder_config.json") -output_path = os.path.join(get_tests_output_path(), "train_outputs") - -config = WavegradConfig( - batch_size=8, - eval_batch_size=8, - num_loader_workers=0, - num_eval_loader_workers=0, - run_eval=True, - test_delay_epochs=-1, - epochs=1, - seq_len=8192, - eval_split_size=1, - print_step=1, - print_eval=True, - data_path="tests/data/ljspeech", - output_path=output_path, - test_noise_schedule={"min_val": 1e-6, "max_val": 1e-2, "num_steps": 2}, -) -config.audio.do_trim_silence = True -config.audio.trim_db = 60 -config.save_json(config_path) - -# train the model for one epoch -command_train = f"CUDA_VISIBLE_DEVICES='{get_device_id()}' python TTS/bin/train_vocoder.py --config_path {config_path} " -run_cli(command_train) - -# Find latest folder -continue_path = max(glob.glob(os.path.join(output_path, "*/")), key=os.path.getmtime) - -# restore the model and continue training for one more epoch -command_train = ( - f"CUDA_VISIBLE_DEVICES='{get_device_id()}' python TTS/bin/train_vocoder.py --continue_path {continue_path} " -) -run_cli(command_train) -shutil.rmtree(continue_path) diff --git a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/fairseq/data/noising.py b/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/fairseq/data/noising.py deleted file mode 100644 index e92e83c2cd2e2950d387f93ae8a80acbc12f909f..0000000000000000000000000000000000000000 --- a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/fairseq/data/noising.py +++ /dev/null @@ -1,334 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
- -import numpy as np -import torch -from fairseq.data import data_utils - - -class WordNoising(object): - """Generate a noisy version of a sentence, without changing words themselves.""" - - def __init__(self, dictionary, bpe_cont_marker="@@", bpe_end_marker=None): - self.dictionary = dictionary - self.bpe_end = None - if bpe_cont_marker: - self.bpe_end = np.array( - [ - not self.dictionary[i].endswith(bpe_cont_marker) - for i in range(len(self.dictionary)) - ] - ) - elif bpe_end_marker: - self.bpe_end = np.array( - [ - self.dictionary[i].endswith(bpe_end_marker) - for i in range(len(self.dictionary)) - ] - ) - - self.get_word_idx = ( - self._get_bpe_word_idx if self.bpe_end is not None else self._get_token_idx - ) - - def noising(self, x, lengths, noising_prob=0.0): - raise NotImplementedError() - - def _get_bpe_word_idx(self, x): - """ - Given a list of BPE tokens, for every index in the tokens list, - return the index of the word grouping that it belongs to. - For example, for input x corresponding to ["how", "are", "y@@", "ou"], - return [[0], [1], [2], [2]]. - """ - # x: (T x B) - bpe_end = self.bpe_end[x] - - if x.size(0) == 1 and x.size(1) == 1: - # Special case when we only have one word in x. If x = [[N]], - # bpe_end is a scalar (bool) instead of a 2-dim array of bools, - # which makes the sum operation below fail. - return np.array([[0]]) - - # do a reduce front sum to generate word ids - word_idx = bpe_end[::-1].cumsum(0)[::-1] - word_idx = word_idx.max(0)[None, :] - word_idx - return word_idx - - def _get_token_idx(self, x): - """ - This is to extend noising functions to be able to apply to non-bpe - tokens, e.g. word or characters. - """ - x = torch.t(x) - word_idx = np.array([range(len(x_i)) for x_i in x]) - return np.transpose(word_idx) - - -class WordDropout(WordNoising): - """Randomly drop input words. If not passing blank_idx (default is None), - then dropped words will be removed. Otherwise, it will be replaced by the - blank_idx.""" - - def __init__( - self, - dictionary, - default_dropout_prob=0.1, - bpe_cont_marker="@@", - bpe_end_marker=None, - ): - super().__init__(dictionary, bpe_cont_marker, bpe_end_marker) - self.default_dropout_prob = default_dropout_prob - - def noising(self, x, lengths, dropout_prob=None, blank_idx=None): - if dropout_prob is None: - dropout_prob = self.default_dropout_prob - # x: (T x B), lengths: B - if dropout_prob == 0: - return x, lengths - - assert 0 < dropout_prob < 1 - - # be sure to drop entire words - word_idx = self.get_word_idx(x) - sentences = [] - modified_lengths = [] - for i in range(lengths.size(0)): - # Since dropout probabilities need to apply over non-pad tokens, - # it is not trivial to generate the keep mask without consider - # input lengths; otherwise, this could be done outside the loop - - # We want to drop whole words based on word_idx grouping - num_words = max(word_idx[:, i]) + 1 - - # ith example: [x0, x1, ..., eos, pad, ..., pad] - # We should only generate keep probs for non-EOS tokens. Thus if the - # input sentence ends in EOS, the last word idx is not included in - # the dropout mask generation and we append True to always keep EOS. - # Otherwise, just generate the dropout mask for all word idx - # positions. - has_eos = x[lengths[i] - 1, i] == self.dictionary.eos() - if has_eos: # has eos? 
- keep = np.random.rand(num_words - 1) >= dropout_prob - keep = np.append(keep, [True]) # keep EOS symbol - else: - keep = np.random.rand(num_words) >= dropout_prob - - words = x[: lengths[i], i].tolist() - - # TODO: speed up the following loop - # drop words from the input according to keep - new_s = [ - w if keep[word_idx[j, i]] else blank_idx for j, w in enumerate(words) - ] - new_s = [w for w in new_s if w is not None] - # we need to have at least one word in the sentence (more than the - # start / end sentence symbols) - if len(new_s) <= 1: - # insert at beginning in case the only token left is EOS - # EOS should be at end of list. - new_s.insert(0, words[np.random.randint(0, len(words))]) - assert len(new_s) >= 1 and ( - not has_eos # Either don't have EOS at end or last token is EOS - or (len(new_s) >= 2 and new_s[-1] == self.dictionary.eos()) - ), "New sentence is invalid." - sentences.append(new_s) - modified_lengths.append(len(new_s)) - # re-construct input - modified_lengths = torch.LongTensor(modified_lengths) - modified_x = torch.LongTensor( - modified_lengths.max(), modified_lengths.size(0) - ).fill_(self.dictionary.pad()) - for i in range(modified_lengths.size(0)): - modified_x[: modified_lengths[i], i].copy_(torch.LongTensor(sentences[i])) - - return modified_x, modified_lengths - - -class WordShuffle(WordNoising): - """Shuffle words by no more than k positions.""" - - def __init__( - self, - dictionary, - default_max_shuffle_distance=3, - bpe_cont_marker="@@", - bpe_end_marker=None, - ): - super().__init__(dictionary, bpe_cont_marker, bpe_end_marker) - self.default_max_shuffle_distance = 3 - - def noising(self, x, lengths, max_shuffle_distance=None): - if max_shuffle_distance is None: - max_shuffle_distance = self.default_max_shuffle_distance - # x: (T x B), lengths: B - if max_shuffle_distance == 0: - return x, lengths - - # max_shuffle_distance < 1 will return the same sequence - assert max_shuffle_distance > 1 - - # define noise word scores - noise = np.random.uniform( - 0, - max_shuffle_distance, - size=(x.size(0), x.size(1)), - ) - noise[0] = -1 # do not move start sentence symbol - # be sure to shuffle entire words - word_idx = self.get_word_idx(x) - x2 = x.clone() - for i in range(lengths.size(0)): - length_no_eos = lengths[i] - if x[lengths[i] - 1, i] == self.dictionary.eos(): - length_no_eos = lengths[i] - 1 - # generate a random permutation - scores = word_idx[:length_no_eos, i] + noise[word_idx[:length_no_eos, i], i] - # ensure no reordering inside a word - scores += 1e-6 * np.arange(length_no_eos.item()) - permutation = scores.argsort() - # shuffle words - x2[:length_no_eos, i].copy_( - x2[:length_no_eos, i][torch.from_numpy(permutation)] - ) - return x2, lengths - - -class UnsupervisedMTNoising(WordNoising): - """ - Implements the default configuration for noising in UnsupervisedMT - (github.com/facebookresearch/UnsupervisedMT) - """ - - def __init__( - self, - dictionary, - max_word_shuffle_distance, - word_dropout_prob, - word_blanking_prob, - bpe_cont_marker="@@", - bpe_end_marker=None, - ): - super().__init__(dictionary) - self.max_word_shuffle_distance = max_word_shuffle_distance - self.word_dropout_prob = word_dropout_prob - self.word_blanking_prob = word_blanking_prob - - self.word_dropout = WordDropout( - dictionary=dictionary, - bpe_cont_marker=bpe_cont_marker, - bpe_end_marker=bpe_end_marker, - ) - self.word_shuffle = WordShuffle( - dictionary=dictionary, - bpe_cont_marker=bpe_cont_marker, - bpe_end_marker=bpe_end_marker, - ) - - def noising(self, x, 
lengths): - # 1. Word Shuffle - noisy_src_tokens, noisy_src_lengths = self.word_shuffle.noising( - x=x, - lengths=lengths, - max_shuffle_distance=self.max_word_shuffle_distance, - ) - # 2. Word Dropout - noisy_src_tokens, noisy_src_lengths = self.word_dropout.noising( - x=noisy_src_tokens, - lengths=noisy_src_lengths, - dropout_prob=self.word_dropout_prob, - ) - # 3. Word Blanking - noisy_src_tokens, noisy_src_lengths = self.word_dropout.noising( - x=noisy_src_tokens, - lengths=noisy_src_lengths, - dropout_prob=self.word_blanking_prob, - blank_idx=self.dictionary.unk(), - ) - - return noisy_src_tokens - - -class NoisingDataset(torch.utils.data.Dataset): - def __init__( - self, - src_dataset, - src_dict, - seed, - noiser=None, - noising_class=UnsupervisedMTNoising, - **kwargs - ): - """ - Wrap a :class:`~torch.utils.data.Dataset` and apply noise to the - samples based on the supplied noising configuration. - - Args: - src_dataset (~torch.utils.data.Dataset): dataset to wrap. - to build self.src_dataset -- - a LanguagePairDataset with src dataset as the source dataset and - None as the target dataset. Should NOT have padding so that - src_lengths are accurately calculated by language_pair_dataset - collate function. - We use language_pair_dataset here to encapsulate the tgt_dataset - so we can re-use the LanguagePairDataset collater to format the - batches in the structure that SequenceGenerator expects. - src_dict (~fairseq.data.Dictionary): source dictionary - seed (int): seed to use when generating random noise - noiser (WordNoising): a pre-initialized :class:`WordNoising` - instance. If this is None, a new instance will be created using - *noising_class* and *kwargs*. - noising_class (class, optional): class to use to initialize a - default :class:`WordNoising` instance. - kwargs (dict, optional): arguments to initialize the default - :class:`WordNoising` instance given by *noiser*. - """ - self.src_dataset = src_dataset - self.src_dict = src_dict - self.seed = seed - self.noiser = ( - noiser - if noiser is not None - else noising_class( - dictionary=src_dict, - **kwargs, - ) - ) - self.sizes = src_dataset.sizes - - def __getitem__(self, index): - """ - Returns a single noisy sample. Multiple samples are fed to the collater - create a noising dataset batch. - """ - src_tokens = self.src_dataset[index] - src_lengths = torch.LongTensor([len(src_tokens)]) - src_tokens = src_tokens.unsqueeze(0) - - # Transpose src tokens to fit expected shape of x in noising function - # (batch size, sequence length) -> (sequence length, batch size) - src_tokens_t = torch.t(src_tokens) - - with data_utils.numpy_seed(self.seed + index): - noisy_src_tokens = self.noiser.noising(src_tokens_t, src_lengths) - - # Transpose back to expected src_tokens format - # (sequence length, 1) -> (1, sequence length) - noisy_src_tokens = torch.t(noisy_src_tokens) - return noisy_src_tokens[0] - - def __len__(self): - """ - The length of the noising dataset is the length of src. 
- """ - return len(self.src_dataset) - - @property - def supports_prefetch(self): - return self.src_dataset.supports_prefetch - - def prefetch(self, indices): - if self.src_dataset.supports_prefetch: - self.src_dataset.prefetch(indices) diff --git a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/fairseq/examples/MMPT/mmpt/processors/how2retriprocessor.py b/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/fairseq/examples/MMPT/mmpt/processors/how2retriprocessor.py deleted file mode 100644 index b5a7730ec0bbe91d9997564214fffb10d0aef519..0000000000000000000000000000000000000000 --- a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/fairseq/examples/MMPT/mmpt/processors/how2retriprocessor.py +++ /dev/null @@ -1,100 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -from .how2processor import ( - ShardedHow2MetaProcessor, - ShardedVideoProcessor, - ShardedTextProcessor, - VariedLenAligner, - OverlappedAligner -) - - -class ShardedHow2VideoRetriMetaProcessor(ShardedHow2MetaProcessor): - def __init__(self, config): - super().__init__(config) - self.num_video_per_batch = config.num_video_per_batch - self.cands = [ - self.data[batch_offset:batch_offset + self.num_video_per_batch] - for batch_offset in - range(0, (len(self.data) // (8 * self.num_video_per_batch)) * 8 * self.num_video_per_batch, self.num_video_per_batch)] - - def __len__(self): - return len(self.cands) - - def set_candidates(self, cands): - # no changes on num of batches. - print(len(self.cands), "->", len(cands)) - # assert len(self.cands) == len(cands) - self.cands = cands - - def __getitem__(self, idx): - video_ids = self.cands[idx] - assert isinstance(video_ids, list) - sharded_video_idxs = [] - for video_id in video_ids: - shard_id, video_idx = self.video_id_to_shard[video_id] - sharded_video_idxs.append((video_id, -1, shard_id, video_idx)) - return sharded_video_idxs, sharded_video_idxs - - -class ShardedVideoRetriVideoProcessor(ShardedVideoProcessor): - """In retrival case the video_id - is a list of tuples: `(shard_id, video_idx)` .""" - - def __call__(self, sharded_video_idxs): - assert isinstance(sharded_video_idxs, list) - cand_feats = [] - for shared_video_idx in sharded_video_idxs: - feat = super().__call__(shared_video_idx) - cand_feats.append(feat) - return cand_feats - - -class ShardedVideoRetriTextProcessor(ShardedTextProcessor): - """In retrival case the video_id - is a list of tuples: `(shard_id, video_idx)` .""" - - def __call__(self, sharded_video_idxs): - assert isinstance(sharded_video_idxs, list) - cand_caps = [] - for shared_video_idx in sharded_video_idxs: - caps = super().__call__(shared_video_idx) - cand_caps.append(caps) - return cand_caps - - -class VideoRetriAligner(VariedLenAligner): - # Retritask will trim dim-0. - def __call__(self, sharded_video_idxs, video_features, text_features): - from transformers import default_data_collator - batch, video_ids = [], [] - for video_id, video_feature, text_feature in \ - zip(sharded_video_idxs, video_features, text_features): - sub_batch = super().__call__(video_id, video_feature, text_feature) - batch.append(sub_batch) - if isinstance(video_id, tuple): - video_id = video_id[0] - video_ids.append(video_id) - batch = default_data_collator(batch) - batch["video_id"] = video_ids - return batch - - -class VideoRetriOverlappedAligner(OverlappedAligner): - # Retritask will trim dim-0. 
- def __call__(self, sharded_video_idxs, video_features, text_features): - from transformers import default_data_collator - batch, video_ids = [], [] - for video_id, video_feature, text_feature in \ - zip(sharded_video_idxs, video_features, text_features): - sub_batch = super().__call__(video_id, video_feature, text_feature) - batch.append(sub_batch) - if isinstance(video_id, tuple): - video_id = video_id[0] - video_ids.append(video_id) - batch = default_data_collator(batch) - batch["video_id"] = video_ids - return batch diff --git a/spaces/arxnov/anotest/hubert_model.py b/spaces/arxnov/anotest/hubert_model.py deleted file mode 100644 index 6c7f8716c268d0f371f5a9f7995f59bd4b9082d1..0000000000000000000000000000000000000000 --- a/spaces/arxnov/anotest/hubert_model.py +++ /dev/null @@ -1,221 +0,0 @@ -import copy -from typing import Optional, Tuple -import random - -import torch -import torch.nn as nn -import torch.nn.functional as F -from torch.nn.modules.utils import consume_prefix_in_state_dict_if_present - -class Hubert(nn.Module): - def __init__(self, num_label_embeddings: int = 100, mask: bool = True): - super().__init__() - self._mask = mask - self.feature_extractor = FeatureExtractor() - self.feature_projection = FeatureProjection() - self.positional_embedding = PositionalConvEmbedding() - self.norm = nn.LayerNorm(768) - self.dropout = nn.Dropout(0.1) - self.encoder = TransformerEncoder( - nn.TransformerEncoderLayer( - 768, 12, 3072, activation="gelu", batch_first=True - ), - 12, - ) - self.proj = nn.Linear(768, 256) - - self.masked_spec_embed = nn.Parameter(torch.FloatTensor(768).uniform_()) - self.label_embedding = nn.Embedding(num_label_embeddings, 256) - - def mask(self, x: torch.Tensor) -> Tuple[torch.Tensor, torch.Tensor]: - mask = None - if self.training and self._mask: - mask = _compute_mask((x.size(0), x.size(1)), 0.8, 10, x.device, 2) - x[mask] = self.masked_spec_embed.to(x.dtype) - return x, mask - - def encode( - self, x: torch.Tensor, layer: Optional[int] = None - ) -> Tuple[torch.Tensor, torch.Tensor]: - x = self.feature_extractor(x) - x = self.feature_projection(x.transpose(1, 2)) - x, mask = self.mask(x) - x = x + self.positional_embedding(x) - x = self.dropout(self.norm(x)) - x = self.encoder(x, output_layer=layer) - return x, mask - - def logits(self, x: torch.Tensor) -> torch.Tensor: - logits = torch.cosine_similarity( - x.unsqueeze(2), - self.label_embedding.weight.unsqueeze(0).unsqueeze(0), - dim=-1, - ) - return logits / 0.1 - - def forward(self, x: torch.Tensor) -> Tuple[torch.Tensor, torch.Tensor]: - x, mask = self.encode(x) - x = self.proj(x) - logits = self.logits(x) - return logits, mask - - -class HubertSoft(Hubert): - def __init__(self): - super().__init__() - - @torch.inference_mode() - def units(self, wav: torch.Tensor) -> torch.Tensor: - wav = F.pad(wav, ((400 - 320) // 2, (400 - 320) // 2)) - x, _ = self.encode(wav) - return self.proj(x) - - -class FeatureExtractor(nn.Module): - def __init__(self): - super().__init__() - self.conv0 = nn.Conv1d(1, 512, 10, 5, bias=False) - self.norm0 = nn.GroupNorm(512, 512) - self.conv1 = nn.Conv1d(512, 512, 3, 2, bias=False) - self.conv2 = nn.Conv1d(512, 512, 3, 2, bias=False) - self.conv3 = nn.Conv1d(512, 512, 3, 2, bias=False) - self.conv4 = nn.Conv1d(512, 512, 3, 2, bias=False) - self.conv5 = nn.Conv1d(512, 512, 2, 2, bias=False) - self.conv6 = nn.Conv1d(512, 512, 2, 2, bias=False) - - def forward(self, x: torch.Tensor) -> torch.Tensor: - x = F.gelu(self.norm0(self.conv0(x))) - x = F.gelu(self.conv1(x)) - x = 
F.gelu(self.conv2(x)) - x = F.gelu(self.conv3(x)) - x = F.gelu(self.conv4(x)) - x = F.gelu(self.conv5(x)) - x = F.gelu(self.conv6(x)) - return x - - -class FeatureProjection(nn.Module): - def __init__(self): - super().__init__() - self.norm = nn.LayerNorm(512) - self.projection = nn.Linear(512, 768) - self.dropout = nn.Dropout(0.1) - - def forward(self, x: torch.Tensor) -> torch.Tensor: - x = self.norm(x) - x = self.projection(x) - x = self.dropout(x) - return x - - -class PositionalConvEmbedding(nn.Module): - def __init__(self): - super().__init__() - self.conv = nn.Conv1d( - 768, - 768, - kernel_size=128, - padding=128 // 2, - groups=16, - ) - self.conv = nn.utils.weight_norm(self.conv, name="weight", dim=2) - - def forward(self, x: torch.Tensor) -> torch.Tensor: - x = self.conv(x.transpose(1, 2)) - x = F.gelu(x[:, :, :-1]) - return x.transpose(1, 2) - - -class TransformerEncoder(nn.Module): - def __init__( - self, encoder_layer: nn.TransformerEncoderLayer, num_layers: int - ) -> None: - super(TransformerEncoder, self).__init__() - self.layers = nn.ModuleList( - [copy.deepcopy(encoder_layer) for _ in range(num_layers)] - ) - self.num_layers = num_layers - - def forward( - self, - src: torch.Tensor, - mask: torch.Tensor = None, - src_key_padding_mask: torch.Tensor = None, - output_layer: Optional[int] = None, - ) -> torch.Tensor: - output = src - for layer in self.layers[:output_layer]: - output = layer( - output, src_mask=mask, src_key_padding_mask=src_key_padding_mask - ) - return output - - -def _compute_mask( - shape: Tuple[int, int], - mask_prob: float, - mask_length: int, - device: torch.device, - min_masks: int = 0, -) -> torch.Tensor: - batch_size, sequence_length = shape - - if mask_length < 1: - raise ValueError("`mask_length` has to be bigger than 0.") - - if mask_length > sequence_length: - raise ValueError( - f"`mask_length` has to be smaller than `sequence_length`, but got `mask_length`: {mask_length} and `sequence_length`: {sequence_length}`" - ) - - # compute number of masked spans in batch - num_masked_spans = int(mask_prob * sequence_length / mask_length + random.random()) - num_masked_spans = max(num_masked_spans, min_masks) - - # make sure num masked indices <= sequence_length - if num_masked_spans * mask_length > sequence_length: - num_masked_spans = sequence_length // mask_length - - # SpecAugment mask to fill - mask = torch.zeros((batch_size, sequence_length), device=device, dtype=torch.bool) - - # uniform distribution to sample from, make sure that offset samples are < sequence_length - uniform_dist = torch.ones( - (batch_size, sequence_length - (mask_length - 1)), device=device - ) - - # get random indices to mask - mask_indices = torch.multinomial(uniform_dist, num_masked_spans) - - # expand masked indices to masked spans - mask_indices = ( - mask_indices.unsqueeze(dim=-1) - .expand((batch_size, num_masked_spans, mask_length)) - .reshape(batch_size, num_masked_spans * mask_length) - ) - offsets = ( - torch.arange(mask_length, device=device)[None, None, :] - .expand((batch_size, num_masked_spans, mask_length)) - .reshape(batch_size, num_masked_spans * mask_length) - ) - mask_idxs = mask_indices + offsets - - # scatter indices to mask - mask = mask.scatter(1, mask_idxs, True) - - return mask - - -def hubert_soft( - path: str -) -> HubertSoft: - r"""HuBERT-Soft from `"A Comparison of Discrete and Soft Speech Units for Improved Voice Conversion"`. 
- Args: - path (str): path of a pretrained model - """ - hubert = HubertSoft() - checkpoint = torch.load(path) - consume_prefix_in_state_dict_if_present(checkpoint, "module.") - hubert.load_state_dict(checkpoint) - hubert.eval() - return hubert diff --git a/spaces/asd998877/TsGpt/modules/__init__.py b/spaces/asd998877/TsGpt/modules/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/asd998877/TsGpt/modules/base_model.py b/spaces/asd998877/TsGpt/modules/base_model.py deleted file mode 100644 index 2b55623f6b0989f60d818be6e0e77f5948484b82..0000000000000000000000000000000000000000 --- a/spaces/asd998877/TsGpt/modules/base_model.py +++ /dev/null @@ -1,561 +0,0 @@ -from __future__ import annotations -from typing import TYPE_CHECKING, List - -import logging -import json -import commentjson as cjson -import os -import sys -import requests -import urllib3 -import traceback - -from tqdm import tqdm -import colorama -from duckduckgo_search import ddg -import asyncio -import aiohttp -from enum import Enum - -from .presets import * -from .llama_func import * -from .utils import * -from . import shared -from .config import retrieve_proxy - - -class ModelType(Enum): - Unknown = -1 - OpenAI = 0 - ChatGLM = 1 - LLaMA = 2 - XMChat = 3 - - @classmethod - def get_type(cls, model_name: str): - model_type = None - model_name_lower = model_name.lower() - if "gpt" in model_name_lower: - model_type = ModelType.OpenAI - elif "chatglm" in model_name_lower: - model_type = ModelType.ChatGLM - elif "llama" in model_name_lower or "alpaca" in model_name_lower: - model_type = ModelType.LLaMA - elif "xmchat" in model_name_lower: - model_type = ModelType.XMChat - else: - model_type = ModelType.Unknown - return model_type - - -class BaseLLMModel: - def __init__( - self, - model_name, - system_prompt="", - temperature=1.0, - top_p=1.0, - n_choices=1, - stop=None, - max_generation_token=None, - presence_penalty=0, - frequency_penalty=0, - logit_bias=None, - user="", - ) -> None: - self.history = [] - self.all_token_counts = [] - self.model_name = model_name - self.model_type = ModelType.get_type(model_name) - try: - self.token_upper_limit = MODEL_TOKEN_LIMIT[model_name] - except KeyError: - self.token_upper_limit = DEFAULT_TOKEN_LIMIT - self.interrupted = False - self.system_prompt = system_prompt - self.api_key = None - self.need_api_key = False - self.single_turn = False - - self.temperature = temperature - self.top_p = top_p - self.n_choices = n_choices - self.stop_sequence = stop - self.max_generation_token = None - self.presence_penalty = presence_penalty - self.frequency_penalty = frequency_penalty - self.logit_bias = logit_bias - self.user_identifier = user - - def get_answer_stream_iter(self): - """stream predict, need to be implemented - conversations are stored in self.history, with the most recent question, in OpenAI format - should return a generator, each time give the next word (str) in the answer - """ - logging.warning("stream predict not implemented, using at once predict instead") - response, _ = self.get_answer_at_once() - yield response - - def get_answer_at_once(self): - """predict at once, need to be implemented - conversations are stored in self.history, with the most recent question, in OpenAI format - Should return: - the answer (str) - total token count (int) - """ - logging.warning("at once predict not implemented, using stream predict instead") - response_iter = self.get_answer_stream_iter() - count = 0 - 
for response in response_iter: - count += 1 - return response, sum(self.all_token_counts) + count - - def billing_info(self): - """get billing infomation, inplement if needed""" - logging.warning("billing info not implemented, using default") - return BILLING_NOT_APPLICABLE_MSG - - def count_token(self, user_input): - """get token count from input, implement if needed""" - logging.warning("token count not implemented, using default") - return len(user_input) - - def stream_next_chatbot(self, inputs, chatbot, fake_input=None, display_append=""): - def get_return_value(): - return chatbot, status_text - - status_text = i18n("开始实时传输回答……") - if fake_input: - chatbot.append((fake_input, "")) - else: - chatbot.append((inputs, "")) - - user_token_count = self.count_token(inputs) - self.all_token_counts.append(user_token_count) - logging.debug(f"输入token计数: {user_token_count}") - - stream_iter = self.get_answer_stream_iter() - - for partial_text in stream_iter: - chatbot[-1] = (chatbot[-1][0], partial_text + display_append) - self.all_token_counts[-1] += 1 - status_text = self.token_message() - yield get_return_value() - if self.interrupted: - self.recover() - break - self.history.append(construct_assistant(partial_text)) - - def next_chatbot_at_once(self, inputs, chatbot, fake_input=None, display_append=""): - if fake_input: - chatbot.append((fake_input, "")) - else: - chatbot.append((inputs, "")) - if fake_input is not None: - user_token_count = self.count_token(fake_input) - else: - user_token_count = self.count_token(inputs) - self.all_token_counts.append(user_token_count) - ai_reply, total_token_count = self.get_answer_at_once() - self.history.append(construct_assistant(ai_reply)) - if fake_input is not None: - self.history[-2] = construct_user(fake_input) - chatbot[-1] = (chatbot[-1][0], ai_reply + display_append) - if fake_input is not None: - self.all_token_counts[-1] += count_token(construct_assistant(ai_reply)) - else: - self.all_token_counts[-1] = total_token_count - sum(self.all_token_counts) - status_text = self.token_message() - return chatbot, status_text - - def handle_file_upload(self, files, chatbot): - """if the model accepts multi modal input, implement this function""" - status = gr.Markdown.update() - if files: - construct_index(self.api_key, file_src=files) - status = "索引构建完成" - return gr.Files.update(), chatbot, status - - def prepare_inputs(self, real_inputs, use_websearch, files, reply_language, chatbot): - fake_inputs = None - display_append = [] - limited_context = False - fake_inputs = real_inputs - if files: - from llama_index.indices.vector_store.base_query import GPTVectorStoreIndexQuery - from llama_index.indices.query.schema import QueryBundle - from langchain.embeddings.huggingface import HuggingFaceEmbeddings - from langchain.chat_models import ChatOpenAI - from llama_index import ( - GPTSimpleVectorIndex, - ServiceContext, - LangchainEmbedding, - OpenAIEmbedding, - ) - limited_context = True - msg = "加载索引中……" - logging.info(msg) - # yield chatbot + [(inputs, "")], msg - index = construct_index(self.api_key, file_src=files) - assert index is not None, "获取索引失败" - msg = "索引获取成功,生成回答中……" - logging.info(msg) - if local_embedding or self.model_type != ModelType.OpenAI: - embed_model = LangchainEmbedding(HuggingFaceEmbeddings(model_name = "sentence-transformers/distiluse-base-multilingual-cased-v2")) - else: - embed_model = OpenAIEmbedding() - # yield chatbot + [(inputs, "")], msg - with retrieve_proxy(): - prompt_helper = PromptHelper( - max_input_size=4096, - 
num_output=5, - max_chunk_overlap=20, - chunk_size_limit=600, - ) - from llama_index import ServiceContext - - service_context = ServiceContext.from_defaults( - prompt_helper=prompt_helper, embed_model=embed_model - ) - query_object = GPTVectorStoreIndexQuery( - index.index_struct, - service_context=service_context, - similarity_top_k=5, - vector_store=index._vector_store, - docstore=index._docstore, - ) - query_bundle = QueryBundle(real_inputs) - nodes = query_object.retrieve(query_bundle) - reference_results = [n.node.text for n in nodes] - reference_results = add_source_numbers(reference_results, use_source=False) - display_append = add_details(reference_results) - display_append = "\n\n" + "".join(display_append) - real_inputs = ( - replace_today(PROMPT_TEMPLATE) - .replace("{query_str}", real_inputs) - .replace("{context_str}", "\n\n".join(reference_results)) - .replace("{reply_language}", reply_language) - ) - elif use_websearch: - limited_context = True - search_results = ddg(real_inputs, max_results=5) - reference_results = [] - for idx, result in enumerate(search_results): - logging.debug(f"搜索结果{idx + 1}:{result}") - domain_name = urllib3.util.parse_url(result["href"]).host - reference_results.append([result["body"], result["href"]]) - display_append.append( - # f"{idx+1}. [{domain_name}]({result['href']})\n" - f"
<li><a href=\"{result['href']}\" target=\"_blank\">{domain_name}</a></li>\n" - ) - reference_results = add_source_numbers(reference_results) - display_append = "<ol>\n\n" + "".join(display_append) + "</ol>
    " - real_inputs = ( - replace_today(WEBSEARCH_PTOMPT_TEMPLATE) - .replace("{query}", real_inputs) - .replace("{web_results}", "\n\n".join(reference_results)) - .replace("{reply_language}", reply_language) - ) - else: - display_append = "" - return limited_context, fake_inputs, display_append, real_inputs, chatbot - - def predict( - self, - inputs, - chatbot, - stream=False, - use_websearch=False, - files=None, - reply_language="中文", - should_check_token_count=True, - ): # repetition_penalty, top_k - - status_text = "开始生成回答……" - logging.info( - "输入为:" + colorama.Fore.BLUE + f"{inputs}" + colorama.Style.RESET_ALL - ) - if should_check_token_count: - yield chatbot + [(inputs, "")], status_text - if reply_language == "跟随问题语言(不稳定)": - reply_language = "the same language as the question, such as English, 中文, 日本語, Español, Français, or Deutsch." - - limited_context, fake_inputs, display_append, inputs, chatbot = self.prepare_inputs(real_inputs=inputs, use_websearch=use_websearch, files=files, reply_language=reply_language, chatbot=chatbot) - yield chatbot + [(fake_inputs, "")], status_text - - if ( - self.need_api_key and - self.api_key is None - and not shared.state.multi_api_key - ): - status_text = STANDARD_ERROR_MSG + NO_APIKEY_MSG - logging.info(status_text) - chatbot.append((inputs, "")) - if len(self.history) == 0: - self.history.append(construct_user(inputs)) - self.history.append("") - self.all_token_counts.append(0) - else: - self.history[-2] = construct_user(inputs) - yield chatbot + [(inputs, "")], status_text - return - elif len(inputs.strip()) == 0: - status_text = STANDARD_ERROR_MSG + NO_INPUT_MSG - logging.info(status_text) - yield chatbot + [(inputs, "")], status_text - return - - if self.single_turn: - self.history = [] - self.all_token_counts = [] - self.history.append(construct_user(inputs)) - - try: - if stream: - logging.debug("使用流式传输") - iter = self.stream_next_chatbot( - inputs, - chatbot, - fake_input=fake_inputs, - display_append=display_append, - ) - for chatbot, status_text in iter: - yield chatbot, status_text - else: - logging.debug("不使用流式传输") - chatbot, status_text = self.next_chatbot_at_once( - inputs, - chatbot, - fake_input=fake_inputs, - display_append=display_append, - ) - yield chatbot, status_text - except Exception as e: - traceback.print_exc() - status_text = STANDARD_ERROR_MSG + str(e) - yield chatbot, status_text - - if len(self.history) > 1 and self.history[-1]["content"] != inputs: - logging.info( - "回答为:" - + colorama.Fore.BLUE - + f"{self.history[-1]['content']}" - + colorama.Style.RESET_ALL - ) - - if limited_context: - # self.history = self.history[-4:] - # self.all_token_counts = self.all_token_counts[-2:] - self.history = [] - self.all_token_counts = [] - - max_token = self.token_upper_limit - TOKEN_OFFSET - - if sum(self.all_token_counts) > max_token and should_check_token_count: - count = 0 - while ( - sum(self.all_token_counts) - > self.token_upper_limit * REDUCE_TOKEN_FACTOR - and sum(self.all_token_counts) > 0 - ): - count += 1 - del self.all_token_counts[0] - del self.history[:2] - logging.info(status_text) - status_text = f"为了防止token超限,模型忘记了早期的 {count} 轮对话" - yield chatbot, status_text - - def retry( - self, - chatbot, - stream=False, - use_websearch=False, - files=None, - reply_language="中文", - ): - logging.debug("重试中……") - if len(self.history) > 0: - inputs = self.history[-2]["content"] - del self.history[-2:] - self.all_token_counts.pop() - elif len(chatbot) > 0: - inputs = chatbot[-1][0] - else: - yield chatbot, 
f"{STANDARD_ERROR_MSG}上下文是空的" - return - - iter = self.predict( - inputs, - chatbot, - stream=stream, - use_websearch=use_websearch, - files=files, - reply_language=reply_language, - ) - for x in iter: - yield x - logging.debug("重试完毕") - - # def reduce_token_size(self, chatbot): - # logging.info("开始减少token数量……") - # chatbot, status_text = self.next_chatbot_at_once( - # summarize_prompt, - # chatbot - # ) - # max_token_count = self.token_upper_limit * REDUCE_TOKEN_FACTOR - # num_chat = find_n(self.all_token_counts, max_token_count) - # logging.info(f"previous_token_count: {self.all_token_counts}, keeping {num_chat} chats") - # chatbot = chatbot[:-1] - # self.history = self.history[-2*num_chat:] if num_chat > 0 else [] - # self.all_token_counts = self.all_token_counts[-num_chat:] if num_chat > 0 else [] - # msg = f"保留了最近{num_chat}轮对话" - # logging.info(msg) - # logging.info("减少token数量完毕") - # return chatbot, msg + "," + self.token_message(self.all_token_counts if len(self.all_token_counts) > 0 else [0]) - - def interrupt(self): - self.interrupted = True - - def recover(self): - self.interrupted = False - - def set_token_upper_limit(self, new_upper_limit): - self.token_upper_limit = new_upper_limit - print(f"token上限设置为{new_upper_limit}") - - def set_temperature(self, new_temperature): - self.temperature = new_temperature - - def set_top_p(self, new_top_p): - self.top_p = new_top_p - - def set_n_choices(self, new_n_choices): - self.n_choices = new_n_choices - - def set_stop_sequence(self, new_stop_sequence: str): - new_stop_sequence = new_stop_sequence.split(",") - self.stop_sequence = new_stop_sequence - - def set_max_tokens(self, new_max_tokens): - self.max_generation_token = new_max_tokens - - def set_presence_penalty(self, new_presence_penalty): - self.presence_penalty = new_presence_penalty - - def set_frequency_penalty(self, new_frequency_penalty): - self.frequency_penalty = new_frequency_penalty - - def set_logit_bias(self, logit_bias): - logit_bias = logit_bias.split() - bias_map = {} - encoding = tiktoken.get_encoding("cl100k_base") - for line in logit_bias: - word, bias_amount = line.split(":") - if word: - for token in encoding.encode(word): - bias_map[token] = float(bias_amount) - self.logit_bias = bias_map - - def set_user_identifier(self, new_user_identifier): - self.user_identifier = new_user_identifier - - def set_system_prompt(self, new_system_prompt): - self.system_prompt = new_system_prompt - - def set_key(self, new_access_key): - self.api_key = new_access_key.strip() - msg = i18n("API密钥更改为了") + hide_middle_chars(self.api_key) - logging.info(msg) - return self.api_key, msg - - def set_single_turn(self, new_single_turn): - self.single_turn = new_single_turn - - def reset(self): - self.history = [] - self.all_token_counts = [] - self.interrupted = False - return [], self.token_message([0]) - - def delete_first_conversation(self): - if self.history: - del self.history[:2] - del self.all_token_counts[0] - return self.token_message() - - def delete_last_conversation(self, chatbot): - if len(chatbot) > 0 and STANDARD_ERROR_MSG in chatbot[-1][1]: - msg = "由于包含报错信息,只删除chatbot记录" - chatbot.pop() - return chatbot, self.history - if len(self.history) > 0: - self.history.pop() - self.history.pop() - if len(chatbot) > 0: - msg = "删除了一组chatbot对话" - chatbot.pop() - if len(self.all_token_counts) > 0: - msg = "删除了一组对话的token计数记录" - self.all_token_counts.pop() - msg = "删除了一组对话" - return chatbot, msg - - def token_message(self, token_lst=None): - if token_lst is None: - token_lst = 
self.all_token_counts - token_sum = 0 - for i in range(len(token_lst)): - token_sum += sum(token_lst[: i + 1]) - return i18n("Token 计数: ") + f"{sum(token_lst)}" + i18n(",本次对话累计消耗了 ") + f"{token_sum} tokens" - - def save_chat_history(self, filename, chatbot, user_name): - if filename == "": - return - if not filename.endswith(".json"): - filename += ".json" - return save_file(filename, self.system_prompt, self.history, chatbot, user_name) - - def export_markdown(self, filename, chatbot, user_name): - if filename == "": - return - if not filename.endswith(".md"): - filename += ".md" - return save_file(filename, self.system_prompt, self.history, chatbot, user_name) - - def load_chat_history(self, filename, chatbot, user_name): - logging.debug(f"{user_name} 加载对话历史中……") - if type(filename) != str: - filename = filename.name - try: - with open(os.path.join(HISTORY_DIR, user_name, filename), "r") as f: - json_s = json.load(f) - try: - if type(json_s["history"][0]) == str: - logging.info("历史记录格式为旧版,正在转换……") - new_history = [] - for index, item in enumerate(json_s["history"]): - if index % 2 == 0: - new_history.append(construct_user(item)) - else: - new_history.append(construct_assistant(item)) - json_s["history"] = new_history - logging.info(new_history) - except: - # 没有对话历史 - pass - logging.debug(f"{user_name} 加载对话历史完毕") - self.history = json_s["history"] - return filename, json_s["system"], json_s["chatbot"] - except FileNotFoundError: - logging.warning(f"{user_name} 没有找到对话历史文件,不执行任何操作") - return filename, self.system_prompt, chatbot - - def like(self): - """like the last response, implement if needed - """ - return gr.update() - - def dislike(self): - """dislike the last response, implement if needed - """ - return gr.update() diff --git a/spaces/at2507/SM_NLP_RecoSys/Data/Mentor_interviews/Biren Ghandi.html b/spaces/at2507/SM_NLP_RecoSys/Data/Mentor_interviews/Biren Ghandi.html deleted file mode 100644 index e0a6aac046bc1c95db7b204e8240ead9cf2d41d9..0000000000000000000000000000000000000000 --- a/spaces/at2507/SM_NLP_RecoSys/Data/Mentor_interviews/Biren Ghandi.html +++ /dev/null @@ -1,134 +0,0 @@ - - - - Biren Ghandi - - - - -
    -

    Biren Ghandi

    - -
    -
    How did you hear about SM?
    • going through a lot of articles, and found a blog post about us
    • then researched us and liked it

    Brief background
    • mechanical engineer
    • sales and marketing throughout my career (12 years)
    • oil and gas company in India
    • UT Austin DS course - graduating this month
    • Now at TD as a data scientist, data analytics team

    Mentorship exp
    • last 7-8 months been mentoring ppl on the mentoring club
    • running his own digital academy, to train folks in different tools - Python, R, SQL, PowerBI
    • for some people he designs a curriculum

    What do beginners need and how can you help?
    • lack structure, where to begin, and what to learn
    • hop around from course to course, tutorial to tutorial
    • how to measure progress
    • see DS as magic, don't see the domain aspect - solving a business problem
    • you're not building models in a silo
    • people don't see the big picture
    • I understand business
    • Most ppl's apprehension is not being able to code (If I can do it, anybody can do it)
      • where to start, where to end, and how to approach problems
    -
    -
    Questions about SM:
    • What's the size of the platform? How many mentors?
    • What's the time commitment?
    -
    - -
    - - - \ No newline at end of file diff --git a/spaces/autotrain-projects/llm-merge-adapter/app.py b/spaces/autotrain-projects/llm-merge-adapter/app.py deleted file mode 100644 index 282988cde1e116c1b7785dfb980567ff2a47b5ad..0000000000000000000000000000000000000000 --- a/spaces/autotrain-projects/llm-merge-adapter/app.py +++ /dev/null @@ -1,60 +0,0 @@ -import gradio as gr -from transformers import AutoModelForCausalLM, AutoTokenizer -from peft import PeftModel -import torch - - -def merge(base_model, trained_adapter, token): - base = AutoModelForCausalLM.from_pretrained( - base_model, torch_dtype=torch.float16, low_cpu_mem_usage=True, token=token - ) - model = PeftModel.from_pretrained(base, trained_adapter, token=token) - try: - tokenizer = AutoTokenizer.from_pretrained(base_model, token=token) - except RecursionError: - tokenizer = AutoTokenizer.from_pretrained( - base_model, unk_token="", token=token - ) - - model = model.merge_and_unload() - - print("Saving target model") - model.push_to_hub(trained_adapter, token=token) - tokenizer.push_to_hub(trained_adapter, token=token) - return gr.Markdown.update( - value="Model successfully merged and pushed! Please shutdown/pause this space" - ) - - -with gr.Blocks() as demo: - gr.Markdown("## AutoTrain Merge Adapter") - gr.Markdown("Please duplicate this space and attach a GPU in order to use it.") - token = gr.Textbox( - label="Hugging Face Write Token", - value="", - lines=1, - max_lines=1, - interactive=True, - type="password", - ) - base_model = gr.Textbox( - label="Base Model (e.g. meta-llama/Llama-2-7b-chat-hf)", - value="", - lines=1, - max_lines=1, - interactive=True, - ) - trained_adapter = gr.Textbox( - label="Trained Adapter Model (e.g. username/autotrain-my-llama)", - value="", - lines=1, - max_lines=1, - interactive=True, - ) - submit = gr.Button(value="Merge & Push") - op = gr.Markdown(interactive=False) - submit.click(merge, inputs=[base_model, trained_adapter, token], outputs=[op]) - - -if __name__ == "__main__": - demo.launch() diff --git a/spaces/awacke1/AutoMLUsingStreamlit-Plotly/app.py b/spaces/awacke1/AutoMLUsingStreamlit-Plotly/app.py deleted file mode 100644 index 9a535fc892d806115a655a01f4d59f985c58b58b..0000000000000000000000000000000000000000 --- a/spaces/awacke1/AutoMLUsingStreamlit-Plotly/app.py +++ /dev/null @@ -1,28 +0,0 @@ -import streamlit as st -import pandas as pd -import plotly.express as px - -st.set_page_config(page_title="AutoML Streamlit App", page_icon=":robot:", layout="wide") - -st.title("AutoML Streamlit App") - -# Upload a CSV dataset -uploaded_file = st.file_uploader("Upload your dataset", type=["csv"]) -if uploaded_file is not None: - # Load the dataset and display the first 5 rows - df = pd.read_csv(uploaded_file) - st.dataframe(df.head()) - - # Generate a treemap or sunburst plot based on data types - numerical_cols = df.select_dtypes(include=["float", "int"]).columns - categorical_cols = df.select_dtypes(include=["object"]).columns - - if len(numerical_cols) >= 2: - fig = px.scatter_matrix(df, dimensions=numerical_cols) - st.plotly_chart(fig) - elif len(categorical_cols) >= 2: - fig = px.treemap(df, path=categorical_cols) - st.plotly_chart(fig) - else: - fig = px.sunburst(df, path=categorical_cols + numerical_cols) - st.plotly_chart(fig) \ No newline at end of file diff --git a/spaces/awacke1/BigScienceBloomRootsMemory/README.md b/spaces/awacke1/BigScienceBloomRootsMemory/README.md deleted file mode 100644 index 
9765e36c83fe44aae471e2309ab3a598d5ecdea6..0000000000000000000000000000000000000000 --- a/spaces/awacke1/BigScienceBloomRootsMemory/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: BigScienceBloomRootsMemory -emoji: ⚡ -colorFrom: gray -colorTo: pink -sdk: gradio -sdk_version: 3.18.0 -app_file: app.py -pinned: false -license: apache-2.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/awacke1/Generative-AI-Writers-Dashboard/index.html b/spaces/awacke1/Generative-AI-Writers-Dashboard/index.html deleted file mode 100644 index 5c70f9ce323b8ff11da7462206a35aa09ef8f313..0000000000000000000000000000000000000000 --- a/spaces/awacke1/Generative-AI-Writers-Dashboard/index.html +++ /dev/null @@ -1,26 +0,0 @@ - - - - - - My static Space - - - - - - - - - - diff --git a/spaces/awacke1/PandasDataframeAutoFilter/app.py b/spaces/awacke1/PandasDataframeAutoFilter/app.py deleted file mode 100644 index 17f245490473b27473ffb105bee2dfe12f72c276..0000000000000000000000000000000000000000 --- a/spaces/awacke1/PandasDataframeAutoFilter/app.py +++ /dev/null @@ -1,20 +0,0 @@ -import pandas as pd - -def create_dataframe(file_path): - # Read the CSV file into a Pandas dataframe - df = pd.read_csv(file_path) - - # Create dynamic filters for each field - filters = {} - for col in df.columns: - filters[col] = df[col].unique().tolist() - - return df, filters - -if __name__ == '__main__': - file_path = 'Carddata.csv' - df, filters = create_dataframe(file_path) - print('Dataframe:') - print(df) - print('\nFilters:') - print(filters) diff --git a/spaces/awacke1/PermutationsAndSequencesGPT/README.md b/spaces/awacke1/PermutationsAndSequencesGPT/README.md deleted file mode 100644 index e55ca3f992a564b30591219315ca9aa0f6595c0f..0000000000000000000000000000000000000000 --- a/spaces/awacke1/PermutationsAndSequencesGPT/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: PermutationsAndSequencesGPT -emoji: 🚀 -colorFrom: purple -colorTo: yellow -sdk: streamlit -sdk_version: 1.21.0 -app_file: app.py -pinned: false -license: mit ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/awacke1/StreamlitHeatmapKMeansCluster/app.py b/spaces/awacke1/StreamlitHeatmapKMeansCluster/app.py deleted file mode 100644 index 040b6f71a2b254f3826994176b1140b08ce6ef8a..0000000000000000000000000000000000000000 --- a/spaces/awacke1/StreamlitHeatmapKMeansCluster/app.py +++ /dev/null @@ -1,73 +0,0 @@ -import streamlit as st -import nltk -from transformers import pipeline -from sentence_transformers import SentenceTransformer -from scipy.spatial.distance import cosine -import numpy as np -import seaborn as sns -import matplotlib.pyplot as plt -from sklearn.cluster import KMeans -import tensorflow as tf -import tensorflow_hub as hub - - -def cluster_examples(messages, embed, nc=3): - km = KMeans( - n_clusters=nc, init='random', - n_init=10, max_iter=300, - tol=1e-04, random_state=0 - ) - km = km.fit_predict(embed) - for n in range(nc): - idxs = [i for i in range(len(km)) if km[i] == n] - ms = [messages[i] for i in idxs] - st.markdown ("CLUSTER : %d"%n) - for m in ms: - st.markdown (m) - - -def plot_heatmap(labels, heatmap, rotation=90): - sns.set(font_scale=1.2) - fig, ax = plt.subplots() - g = sns.heatmap( - heatmap, - xticklabels=labels, - yticklabels=labels, - vmin=-1, - vmax=1, - cmap="coolwarm") - g.set_xticklabels(labels, rotation=rotation) - g.set_title("Textual Similarity") - st.pyplot(fig) - -# 
Streamlit text boxes -text = st.text_area('Enter sentences:', value="Behavior right this is a kind of Heisenberg uncertainty principle situation if I told you, then you behave differently. What would be the impressive thing is you have talked about winning a nobel prize in a system winning a nobel prize. Adjusting it and then making your own. That is when I fell in love with computers. I realized that they were a very magical device. Can go to sleep come back the next day and it is solved. You know that feels magical to me.") - -nc = st.slider('Select a number of clusters:', min_value=1, max_value=15, value=3) - -model_type = st.radio("Choose model:", ('Sentence Transformer', 'Universal Sentence Encoder'), index=0) - -# Model setup -if model_type == "Sentence Transformer": - model = SentenceTransformer('paraphrase-distilroberta-base-v1') -elif model_type == "Universal Sentence Encoder": - model_url = "https://tfhub.dev/google/universal-sentence-encoder-large/5" - model = hub.load(model_url) - -nltk.download('punkt') - -# Run model -if text: - sentences = nltk.tokenize.sent_tokenize(text) - if model_type == "Sentence Transformer": - embed = model.encode(sentences) - elif model_type == "Universal Sentence Encoder": - embed = model(sentences).numpy() - sim = np.zeros([len(embed), len(embed)]) - for i,em in enumerate(embed): - for j,ea in enumerate(embed): - sim[i][j] = 1.0-cosine(em,ea) - st.subheader("Similarity Heatmap") - plot_heatmap(sentences, sim) - st.subheader("Results from K-Means Clustering") - cluster_examples(sentences, embed, nc) \ No newline at end of file diff --git a/spaces/banana-projects/web3d/node_modules/three/examples/js/ShaderToon.js b/spaces/banana-projects/web3d/node_modules/three/examples/js/ShaderToon.js deleted file mode 100644 index 4566ce9a91bbf3fb99e4a01567339834e6829949..0000000000000000000000000000000000000000 --- a/spaces/banana-projects/web3d/node_modules/three/examples/js/ShaderToon.js +++ /dev/null @@ -1,331 +0,0 @@ -/** - * @author mrdoob / http://mrdoob.com/ - * @author alteredq / http://alteredqualia.com/ - * - * ShaderToon currently contains: - * - * toon1 - * toon2 - * hatching - * dotted - */ - -THREE.ShaderToon = { - - 'toon1' : { - - uniforms: { - - "uDirLightPos": { value: new THREE.Vector3() }, - "uDirLightColor": { value: new THREE.Color( 0xeeeeee ) }, - - "uAmbientLightColor": { value: new THREE.Color( 0x050505 ) }, - - "uBaseColor": { value: new THREE.Color( 0xffffff ) } - - }, - - vertexShader: [ - - "varying vec3 vNormal;", - "varying vec3 vRefract;", - - "void main() {", - - "vec4 worldPosition = modelMatrix * vec4( position, 1.0 );", - "vec4 mvPosition = modelViewMatrix * vec4( position, 1.0 );", - "vec3 worldNormal = normalize ( mat3( modelMatrix[0].xyz, modelMatrix[1].xyz, modelMatrix[2].xyz ) * normal );", - - "vNormal = normalize( normalMatrix * normal );", - - "vec3 I = worldPosition.xyz - cameraPosition;", - "vRefract = refract( normalize( I ), worldNormal, 1.02 );", - - "gl_Position = projectionMatrix * mvPosition;", - - "}" - - ].join( "\n" ), - - fragmentShader: [ - - "uniform vec3 uBaseColor;", - - "uniform vec3 uDirLightPos;", - "uniform vec3 uDirLightColor;", - - "uniform vec3 uAmbientLightColor;", - - "varying vec3 vNormal;", - - "varying vec3 vRefract;", - - "void main() {", - - "float directionalLightWeighting = max( dot( normalize( vNormal ), uDirLightPos ), 0.0);", - "vec3 lightWeighting = uAmbientLightColor + uDirLightColor * directionalLightWeighting;", - - "float intensity = smoothstep( - 0.5, 1.0, pow( 
length(lightWeighting), 20.0 ) );", - "intensity += length(lightWeighting) * 0.2;", - - "float cameraWeighting = dot( normalize( vNormal ), vRefract );", - "intensity += pow( 1.0 - length( cameraWeighting ), 6.0 );", - "intensity = intensity * 0.2 + 0.3;", - - "if ( intensity < 0.50 ) {", - - "gl_FragColor = vec4( 2.0 * intensity * uBaseColor, 1.0 );", - - "} else {", - - "gl_FragColor = vec4( 1.0 - 2.0 * ( 1.0 - intensity ) * ( 1.0 - uBaseColor ), 1.0 );", - - "}", - - "}" - - ].join( "\n" ) - - }, - - 'toon2' : { - - uniforms: { - - "uDirLightPos": { value: new THREE.Vector3() }, - "uDirLightColor": { value: new THREE.Color( 0xeeeeee ) }, - - "uAmbientLightColor": { value: new THREE.Color( 0x050505 ) }, - - "uBaseColor": { value: new THREE.Color( 0xeeeeee ) }, - "uLineColor1": { value: new THREE.Color( 0x808080 ) }, - "uLineColor2": { value: new THREE.Color( 0x000000 ) }, - "uLineColor3": { value: new THREE.Color( 0x000000 ) }, - "uLineColor4": { value: new THREE.Color( 0x000000 ) } - - }, - - vertexShader: [ - - "varying vec3 vNormal;", - - "void main() {", - - "gl_Position = projectionMatrix * modelViewMatrix * vec4( position, 1.0 );", - "vNormal = normalize( normalMatrix * normal );", - - "}" - - ].join( "\n" ), - - fragmentShader: [ - - "uniform vec3 uBaseColor;", - "uniform vec3 uLineColor1;", - "uniform vec3 uLineColor2;", - "uniform vec3 uLineColor3;", - "uniform vec3 uLineColor4;", - - "uniform vec3 uDirLightPos;", - "uniform vec3 uDirLightColor;", - - "uniform vec3 uAmbientLightColor;", - - "varying vec3 vNormal;", - - "void main() {", - - "float camera = max( dot( normalize( vNormal ), vec3( 0.0, 0.0, 1.0 ) ), 0.4);", - "float light = max( dot( normalize( vNormal ), uDirLightPos ), 0.0);", - - "gl_FragColor = vec4( uBaseColor, 1.0 );", - - "if ( length(uAmbientLightColor + uDirLightColor * light) < 1.00 ) {", - - "gl_FragColor *= vec4( uLineColor1, 1.0 );", - - "}", - - "if ( length(uAmbientLightColor + uDirLightColor * camera) < 0.50 ) {", - - "gl_FragColor *= vec4( uLineColor2, 1.0 );", - - "}", - - "}" - - ].join( "\n" ) - - }, - - 'hatching' : { - - uniforms: { - - "uDirLightPos": { value: new THREE.Vector3() }, - "uDirLightColor": { value: new THREE.Color( 0xeeeeee ) }, - - "uAmbientLightColor": { value: new THREE.Color( 0x050505 ) }, - - "uBaseColor": { value: new THREE.Color( 0xffffff ) }, - "uLineColor1": { value: new THREE.Color( 0x000000 ) }, - "uLineColor2": { value: new THREE.Color( 0x000000 ) }, - "uLineColor3": { value: new THREE.Color( 0x000000 ) }, - "uLineColor4": { value: new THREE.Color( 0x000000 ) } - - }, - - vertexShader: [ - - "varying vec3 vNormal;", - - "void main() {", - - "gl_Position = projectionMatrix * modelViewMatrix * vec4( position, 1.0 );", - "vNormal = normalize( normalMatrix * normal );", - - "}" - - ].join( "\n" ), - - fragmentShader: [ - - "uniform vec3 uBaseColor;", - "uniform vec3 uLineColor1;", - "uniform vec3 uLineColor2;", - "uniform vec3 uLineColor3;", - "uniform vec3 uLineColor4;", - - "uniform vec3 uDirLightPos;", - "uniform vec3 uDirLightColor;", - - "uniform vec3 uAmbientLightColor;", - - "varying vec3 vNormal;", - - "void main() {", - - "float directionalLightWeighting = max( dot( normalize(vNormal), uDirLightPos ), 0.0);", - "vec3 lightWeighting = uAmbientLightColor + uDirLightColor * directionalLightWeighting;", - - "gl_FragColor = vec4( uBaseColor, 1.0 );", - - "if ( length(lightWeighting) < 1.00 ) {", - - "if ( mod(gl_FragCoord.x + gl_FragCoord.y, 10.0) == 0.0) {", - - "gl_FragColor = vec4( uLineColor1, 1.0 );", - - "}", - - 
"}", - - "if ( length(lightWeighting) < 0.75 ) {", - - "if (mod(gl_FragCoord.x - gl_FragCoord.y, 10.0) == 0.0) {", - - "gl_FragColor = vec4( uLineColor2, 1.0 );", - - "}", - "}", - - "if ( length(lightWeighting) < 0.50 ) {", - - "if (mod(gl_FragCoord.x + gl_FragCoord.y - 5.0, 10.0) == 0.0) {", - - "gl_FragColor = vec4( uLineColor3, 1.0 );", - - "}", - "}", - - "if ( length(lightWeighting) < 0.3465 ) {", - - "if (mod(gl_FragCoord.x - gl_FragCoord.y - 5.0, 10.0) == 0.0) {", - - "gl_FragColor = vec4( uLineColor4, 1.0 );", - - "}", - "}", - - "}" - - ].join( "\n" ) - - }, - - 'dotted' : { - - uniforms: { - - "uDirLightPos": { value: new THREE.Vector3() }, - "uDirLightColor": { value: new THREE.Color( 0xeeeeee ) }, - - "uAmbientLightColor": { value: new THREE.Color( 0x050505 ) }, - - "uBaseColor": { value: new THREE.Color( 0xffffff ) }, - "uLineColor1": { value: new THREE.Color( 0x000000 ) } - - }, - - vertexShader: [ - - "varying vec3 vNormal;", - - "void main() {", - - "gl_Position = projectionMatrix * modelViewMatrix * vec4( position, 1.0 );", - "vNormal = normalize( normalMatrix * normal );", - - "}" - - ].join( "\n" ), - - fragmentShader: [ - - "uniform vec3 uBaseColor;", - "uniform vec3 uLineColor1;", - "uniform vec3 uLineColor2;", - "uniform vec3 uLineColor3;", - "uniform vec3 uLineColor4;", - - "uniform vec3 uDirLightPos;", - "uniform vec3 uDirLightColor;", - - "uniform vec3 uAmbientLightColor;", - - "varying vec3 vNormal;", - - "void main() {", - - "float directionalLightWeighting = max( dot( normalize(vNormal), uDirLightPos ), 0.0);", - "vec3 lightWeighting = uAmbientLightColor + uDirLightColor * directionalLightWeighting;", - - "gl_FragColor = vec4( uBaseColor, 1.0 );", - - "if ( length(lightWeighting) < 1.00 ) {", - - "if ( ( mod(gl_FragCoord.x, 4.001) + mod(gl_FragCoord.y, 4.0) ) > 6.00 ) {", - - "gl_FragColor = vec4( uLineColor1, 1.0 );", - - "}", - - "}", - - "if ( length(lightWeighting) < 0.50 ) {", - - "if ( ( mod(gl_FragCoord.x + 2.0, 4.001) + mod(gl_FragCoord.y + 2.0, 4.0) ) > 6.00 ) {", - - "gl_FragColor = vec4( uLineColor1, 1.0 );", - - "}", - - "}", - - "}" - - ].join( "\n" ) - - } - -}; diff --git a/spaces/banana-projects/web3d/node_modules/three/examples/jsm/pmrem/PMREMGenerator.d.ts b/spaces/banana-projects/web3d/node_modules/three/examples/jsm/pmrem/PMREMGenerator.d.ts deleted file mode 100644 index 7ca98ee587e241690494381df4843d7dec1f4a54..0000000000000000000000000000000000000000 --- a/spaces/banana-projects/web3d/node_modules/three/examples/jsm/pmrem/PMREMGenerator.d.ts +++ /dev/null @@ -1,16 +0,0 @@ -import { - Renderer, - RenderTarget, - Texture, - CubeTexture -} from '../../../src/Three'; - -export class PMREMGenerator { - cubeLods:CubeTexture[]; - - constructor(sourceTexture:Texture, samplesPerLevel?:number, resolution?:number); - update(renderer:Renderer): void; - renderToCubeMapTarget(renderer:Renderer, renderTarget:any): void; - renderToCubeMapTargetFace(renderer:Renderer, renderTarget:RenderTarget, faceIndex:number): void; - dispose(): void; -} diff --git a/spaces/banana-projects/web3d/node_modules/three/src/cameras/CubeCamera.js b/spaces/banana-projects/web3d/node_modules/three/src/cameras/CubeCamera.js deleted file mode 100644 index ba46c9eeebd3f84d9e9c6c26dc9736b32570364b..0000000000000000000000000000000000000000 --- a/spaces/banana-projects/web3d/node_modules/three/src/cameras/CubeCamera.js +++ /dev/null @@ -1,116 +0,0 @@ -import { Object3D } from '../core/Object3D.js'; -import { WebGLRenderTargetCube } from '../renderers/WebGLRenderTargetCube.js'; 
-import { LinearFilter, RGBFormat } from '../constants.js'; -import { Vector3 } from '../math/Vector3.js'; -import { PerspectiveCamera } from './PerspectiveCamera.js'; - -/** - * Camera for rendering cube maps - * - renders scene into axis-aligned cube - * - * @author alteredq / http://alteredqualia.com/ - */ - -function CubeCamera( near, far, cubeResolution, options ) { - - Object3D.call( this ); - - this.type = 'CubeCamera'; - - var fov = 90, aspect = 1; - - var cameraPX = new PerspectiveCamera( fov, aspect, near, far ); - cameraPX.up.set( 0, - 1, 0 ); - cameraPX.lookAt( new Vector3( 1, 0, 0 ) ); - this.add( cameraPX ); - - var cameraNX = new PerspectiveCamera( fov, aspect, near, far ); - cameraNX.up.set( 0, - 1, 0 ); - cameraNX.lookAt( new Vector3( - 1, 0, 0 ) ); - this.add( cameraNX ); - - var cameraPY = new PerspectiveCamera( fov, aspect, near, far ); - cameraPY.up.set( 0, 0, 1 ); - cameraPY.lookAt( new Vector3( 0, 1, 0 ) ); - this.add( cameraPY ); - - var cameraNY = new PerspectiveCamera( fov, aspect, near, far ); - cameraNY.up.set( 0, 0, - 1 ); - cameraNY.lookAt( new Vector3( 0, - 1, 0 ) ); - this.add( cameraNY ); - - var cameraPZ = new PerspectiveCamera( fov, aspect, near, far ); - cameraPZ.up.set( 0, - 1, 0 ); - cameraPZ.lookAt( new Vector3( 0, 0, 1 ) ); - this.add( cameraPZ ); - - var cameraNZ = new PerspectiveCamera( fov, aspect, near, far ); - cameraNZ.up.set( 0, - 1, 0 ); - cameraNZ.lookAt( new Vector3( 0, 0, - 1 ) ); - this.add( cameraNZ ); - - options = options || { format: RGBFormat, magFilter: LinearFilter, minFilter: LinearFilter }; - - this.renderTarget = new WebGLRenderTargetCube( cubeResolution, cubeResolution, options ); - this.renderTarget.texture.name = "CubeCamera"; - - this.update = function ( renderer, scene ) { - - if ( this.parent === null ) this.updateMatrixWorld(); - - var currentRenderTarget = renderer.getRenderTarget(); - - var renderTarget = this.renderTarget; - var generateMipmaps = renderTarget.texture.generateMipmaps; - - renderTarget.texture.generateMipmaps = false; - - renderer.setRenderTarget( renderTarget, 0 ); - renderer.render( scene, cameraPX ); - - renderer.setRenderTarget( renderTarget, 1 ); - renderer.render( scene, cameraNX ); - - renderer.setRenderTarget( renderTarget, 2 ); - renderer.render( scene, cameraPY ); - - renderer.setRenderTarget( renderTarget, 3 ); - renderer.render( scene, cameraNY ); - - renderer.setRenderTarget( renderTarget, 4 ); - renderer.render( scene, cameraPZ ); - - renderTarget.texture.generateMipmaps = generateMipmaps; - - renderer.setRenderTarget( renderTarget, 5 ); - renderer.render( scene, cameraNZ ); - - renderer.setRenderTarget( currentRenderTarget ); - - }; - - this.clear = function ( renderer, color, depth, stencil ) { - - var currentRenderTarget = renderer.getRenderTarget(); - - var renderTarget = this.renderTarget; - - for ( var i = 0; i < 6; i ++ ) { - - renderer.setRenderTarget( renderTarget, i ); - - renderer.clear( color, depth, stencil ); - - } - - renderer.setRenderTarget( currentRenderTarget ); - - }; - -} - -CubeCamera.prototype = Object.create( Object3D.prototype ); -CubeCamera.prototype.constructor = CubeCamera; - - -export { CubeCamera }; diff --git a/spaces/banana-projects/web3d/node_modules/three/src/renderers/shaders/ShaderChunk/uv2_pars_vertex.glsl.js b/spaces/banana-projects/web3d/node_modules/three/src/renderers/shaders/ShaderChunk/uv2_pars_vertex.glsl.js deleted file mode 100644 index 6a5a77167691219c9e921f1b6fd259af761100e7..0000000000000000000000000000000000000000 --- 
a/spaces/banana-projects/web3d/node_modules/three/src/renderers/shaders/ShaderChunk/uv2_pars_vertex.glsl.js +++ /dev/null @@ -1,8 +0,0 @@ -export default /* glsl */` -#if defined( USE_LIGHTMAP ) || defined( USE_AOMAP ) - - attribute vec2 uv2; - varying vec2 vUv2; - -#endif -`; diff --git a/spaces/beihai/PDF-Table-Extractor/.history/app_20220620151019.py b/spaces/beihai/PDF-Table-Extractor/.history/app_20220620151019.py deleted file mode 100644 index fcc7582a02bc8baf33d55bbc88ed4e8d4151656f..0000000000000000000000000000000000000000 --- a/spaces/beihai/PDF-Table-Extractor/.history/app_20220620151019.py +++ /dev/null @@ -1,40 +0,0 @@ -#-*- coding : utf-8-*- -import pandas as pd -import streamlit as st -import os,base64,subprocess -from subprocess import STDOUT #os process manipuation - -@st.cache -def gh(): - """install ghostscript on the linux machine""" - proc = subprocess.Popen('apt-get install -y ghostscript', shell=True, stdin=None, stdout=open(os.devnull,"wb"), stderr=STDOUT, executable="/bin/bash") - proc.wait() - -gh() - -import camelot as cam # extracting tables from PDFs - -st.title("PDF Table Extractor") - -input_pdf = st.file_uploader(label = "", type = 'pdf') - -page_number = st.text_input("请填写表格所在PDF页码,eg: 3", value = 1) - -if input_pdf is not None: - # byte object into a PDF file - with open("input.pdf", "wb") as f: - base64_pdf = base64.b64encode(input_pdf.read()).decode('utf-8') - f.write(base64.b64decode(base64_pdf)) - f.close() - - # read the pdf and parse it using stream - tables = cam.read_pdf("input.pdf", pages=page_number) - result = pd.ExcelWriter('result.xlsx', engine='xlsxwriter') - tables[0].to_excel(result,index=False) - # for i in range(0,len(tables)): - # table = tables[i].df - # sheetname = str(i) - # table.to_excel(result, sheetname,index=False) - - with open('result.xlsx','rb') as f: - st.download_button('提取完成,点击下载!', f,file_name='result.xlsx',mime="application/vnd.ms-excel") \ No newline at end of file diff --git "a/spaces/betterme/mestreamlit/pages/889_\346\234\272\345\231\250\347\233\221\346\216\247.py" "b/spaces/betterme/mestreamlit/pages/889_\346\234\272\345\231\250\347\233\221\346\216\247.py" deleted file mode 100644 index 44e0da2a239ec2e86205ce42b7b7dad1867db125..0000000000000000000000000000000000000000 --- "a/spaces/betterme/mestreamlit/pages/889_\346\234\272\345\231\250\347\233\221\346\216\247.py" +++ /dev/null @@ -1,118 +0,0 @@ -#!/usr/bin/env python -# -*- coding: utf-8 -*- -# @Project : Python. 
-# @File : 991_streamlit_apex_charts -# @Time : 2022/10/17 上午10:48 -# @Author : yuanjie -# @WeChat : meutils -# @Software : PyCharm -# @Description : - - -import psutil -import streamlit as st -import time -import datetime -from streamlit_autorefresh import st_autorefresh -from streamlit_apex_charts import bar_chart, pie_chart -import pandas as pd -import platform -import os - - -st.set_page_config(page_title="系统信息查看器", page_icon="💻", layout="wide") - -#st_autorefresh(interval=5000, limit=100000, key="Mr.R") - -st.header("系统信息查看器") -base_infor = [[datetime.datetime.now().strftime("%Y-%m-%d %H: %M: %S"),str(psutil.users()[0][0]),platform.platform()]] -df_base_infor = pd.DataFrame(base_infor, columns=["当前时间","登陆者","操作系统"]) -st.table(df_base_infor) - -#获取网卡名称 -def get_key(): - key_info = psutil.net_io_counters(pernic=True).keys() # 获取网卡名称 - recv = {} - sent = {} - for key in key_info: - recv.setdefault(key, psutil.net_io_counters(pernic=True).get(key).bytes_recv) # 各网卡接收的字节数 - sent.setdefault(key, psutil.net_io_counters(pernic=True).get(key).bytes_sent) # 各网卡发送的字节数 - return key_info, recv, sent - -#获取网卡速率 -def get_rate(func): - key_info, old_recv, old_sent = func() # 上一秒收集的数据 - time.sleep(1) - key_info, now_recv, now_sent = func() # 当前所收集的数据 - net_in = {} - net_out = {} - for key in key_info: - net_in.setdefault(key, (now_recv.get(key) - old_recv.get(key)) / 1024) # 每秒接收速率 - net_out.setdefault(key, (now_sent.get(key) - old_sent.get(key)) / 1024) # 每秒发送速率 - return key_info, net_in, net_out - - -c1, c2, c3 = st.columns(3) - -with c1: - #内存 - mem = psutil.virtual_memory() - zj = float(mem.total) / 1024 / 1024 / 1024 - ysy = float(mem.used) / 1024 / 1024 / 1024 - kx = float(mem.free) / 1024 / 1024 / 1024 - - data_neicun = [[round(ysy,2),round(kx, 2)]] - df_neicun = pd.DataFrame(data_neicun, columns=["已用内存","空闲内存"]) - pie_chart("内存使用情况(GB)", df_neicun) - - - #CPU - cpu_liyonglv = (str(psutil.cpu_percent(1))) + '%' - cpu_data = [[cpu_liyonglv]] - df_cpu = pd.DataFrame(cpu_data, columns=["CPU利用率"]) - bar_chart("CPU利用率(%)", df_cpu) - -with c2: - #磁盘 - dk = psutil.disk_usage('/') - total = dk.total / 1024 / 1024 / 1024 - used = dk.used / 1024 / 1024 / 1024 - free = dk.free / 1024 / 1024 / 1024 - - cipan_shiyong = [[used, free]] - df_cipan = pd.DataFrame(cipan_shiyong, columns=["已使用磁盘大小","空闲磁盘大小"]) - pie_chart("磁盘使用率(%)", df_cipan) - - #网络速率 - key_info, net_in, net_out = get_rate(get_key) - wangka_liuliang = [] - for key in key_info: - wangka_liuliang.append([net_in.get(key),net_out.get(key)]) - speed_internet = wangka_liuliang - df_speed = pd.DataFrame(speed_internet, columns=["下行速率","上行速率"]) - bar_chart("网络速率(kb/s)", df_speed) - - - -with c3: - #进程信息 - pids = psutil.pids() - process = [] - for pid in pids: - p = psutil.Process(pid) - process_name = p.name() - process.append([pid, process_name, p.is_running()]) - - df_process = pd.DataFrame(process, columns=["PID","进程名","是否还在运行"]) - st.dataframe(df_process) - - # #已安装软件 - # import wmi - # c = wmi.WMI() - # software_list = [] - # for s in c.Win32_Product(): - # software_list.append([s.Caption, s.Vendor, s.Version]) - # if len(software_list)>1: - # st.dataframe(pd.DataFrame(software_list, columns=["名称","发布人","版本"])) - # else: - # st.info("正在导出已安装的软件程序列表") \ No newline at end of file diff --git a/spaces/bguberfain/Detic/CODE_OF_CONDUCT.md b/spaces/bguberfain/Detic/CODE_OF_CONDUCT.md deleted file mode 100644 index 0f7ad8bfc173eac554f0b6ef7c684861e8014bbe..0000000000000000000000000000000000000000 --- a/spaces/bguberfain/Detic/CODE_OF_CONDUCT.md +++ 
/dev/null @@ -1,5 +0,0 @@ -# Code of Conduct - -Facebook has adopted a Code of Conduct that we expect project participants to adhere to. -Please read the [full text](https://code.fb.com/codeofconduct/) -so that you can understand what actions will and will not be tolerated. diff --git a/spaces/bioriAsaeru/text-to-voice/Adobe Acrobat 7.0 Professional Authorization Code Keygen Benefits and Features.md b/spaces/bioriAsaeru/text-to-voice/Adobe Acrobat 7.0 Professional Authorization Code Keygen Benefits and Features.md deleted file mode 100644 index b2326d2d5b4cb82cceb88784d0cc1638ba87f9ee..0000000000000000000000000000000000000000 --- a/spaces/bioriAsaeru/text-to-voice/Adobe Acrobat 7.0 Professional Authorization Code Keygen Benefits and Features.md +++ /dev/null @@ -1,5 +0,0 @@ - -

    my previous laptop is broken and i got a new desktop, I cannot retrieve my activation Code. Please help","isUseLiaRichMedia":false,"autoTitleLink":" _0.form.messageeditor.tinymceeditor:getautotitle?t:ac=board-id/acrobat-sdk/thread-id/63323","isGteEditorV2":true,"linkTooltipTexts":"bareURL":"Bare URL","unlink":"Unlink","openLink":"Open link","autoTitle":"Auto-title","elementSelector":"#tinyMceEditor_10c33df06506724","preLoadedAddOnAssetUrls":["/html/js/lib/tinymce/4.7.13/themes/modern/theme.js","/html/js/lib/tinymce/4.7.13/plugins/lists/plugin.js","/html/js/lib/tinymce/4.7.13/plugins/compat3x/plugin.js","/html/js/lib/tinymce/4.7.13/plugins/image/plugin.js","/html/js/lib/tinymce/4.7.13/plugins/link/plugin.js","/html/js/lib/tinymce/4.7.13/plugins/textcolor/plugin.js","/html/js/lib/tinymce/4.7.13/plugins/table/plugin.js","/html/js/lib/tinymce/4.7.13/plugins/tabfocus/plugin.js","/html/js/lib/tinymce/4.7.13/plugins/paste/plugin.js","/plugin/editors/tinymce/plugins/spoiler/plugin.js","/plugin/editors/tinymce/plugins/spoiler/langs/en.js","/plugin/editors/tinymce/plugins/insertcode/plugin.js","/plugin/editors/tinymce/plugins/insertcode/langs/en.js","/html/js/lib/tinymce/4.7.13/plugins/advlist/plugin.js","/html/js/lib/tinymce/4.7.13/plugins/autolink/plugin.js","/plugin/editors/tinymce/plugins/liarichmedia/plugin.js","/plugin/editors/tinymce/plugins/liarichmedia/langs/en.js","/plugin/editors/tinymce/plugins/liaexpandtoolbar/plugin.js","/plugin/editors/tinymce/plugins/liaexpandtoolbar/langs/en.js","/html/js/lib/tinymce/4.7.13/plugins/codesample/plugin.js","/plugin/editors/tinymce/plugins/liaquote/plugin.js","/plugin/editors/tinymce/plugins/liaquote/langs/en.js","/plugin/editors/tinymce/plugins/liamacros/plugin.js","/plugin/editors/tinymce/plugins/liamacros/langs/en.js","/plugin/editors/tinymce/plugins/liafullscreendone/plugin.js","/plugin/editors/tinymce/plugins/liafullscreendone/langs/en.js","/html/js/lib/tinymce/4.7.13/plugins/code/plugin.js","/plugin/editors/tinymce/plugins/mentions/plugin.js","/plugin/editors/tinymce/plugins/mentions/langs/en.js","/html/js/lib/tinymce/4.7.13/plugins/noneditable/plugin.js","/plugin/editors/tinymce/plugins/emoticons/plugin.js","/plugin/editors/tinymce/plugins/emoticons/langs/en.js","/plugin/editors/tinymce/plugins/spellchecker/plugin.js"],"isOoyalaVideoEnabled":false,"isInlineLinkEditingEnabled":true,"optionsParam":"messageMentionTemplate":"#title","spellcheckerUrl":"/spellchecker/lucene","useUserMentions":true,"toolbarSelector":".mce-toolbar-grp","useProductMentions":false,"mediaUploadOptions":"attachmentOverlayText":"Drop your files here","createVideoLink":" _0.form.messageeditor.tinymceeditor:createvideo?t:ac=board-id/acrobat-sdk/thread-id/63323","imageUploadSettings":"validImageExts":"*.jpg;*.JPG;*.jpeg;*.JPEG;*.gif;*.GIF;*.png;*.PNG","maxFileBytes":10264576,"maxImagesPerUpload":10,"editorOverlayText":"Drop your media files here","copyPasteSettings":"copyPasteEvent":"LITHIUM:liaCopyPasteImages","copyPasteBatchSize":3,"copyPasteCss":"lia-copypaste-placeholder","username":"Deleted User","videoImageTooltip":"\"Please wait while we upload and process your video. 
This may take a few minutes, so please check back later.\"","enableFormActionButtonsEvent":"LITHIUM:enableFormActionButtons","videoUploadingUrlsLink":" _0.form.messageeditor.tinymceeditor:videouploadingurls?t:ac=board-id/acrobat-sdk/thread-id/63323","isOverlayVisible":true,"videoEmbedThumbnail":"/i/skins/default/video-loading-new.gif","videoStatusUpdateLink":" _0.form.messageeditor.tinymceeditor:videostatusupdate?t:ac=board-id/acrobat-sdk/thread-id/63323","token":"Mlcl-XDf-LHGtbmsyMut_ctmh_h4e62x-1X00Y6Zcvc.","defaultAlbumId":1,"imageFormatFeedbackErrorContainer":".lia-file-error-msg","fileUploadSelector":".lia-file-upload","isCanUploadImages":false,"videoUploadSettings":"maxFileBytes":512000000,"validVideoExts":".wmv;.avi;.mov;.moov;.mpg;.mpeg;.m2t;.m2v;.vob;.flv;.mp4;.mpg4;.mkv;.asf;.m4v;.m2p;.3gp;.3g2;.f4v;.mp3;.m4a;.wma;.aac","disableFormActionButtonsEvent":"LITHIUM:disableFormActionButtons","isOoyalaVideoEnabled":false,"videoEmbedSizes":"small":"width":200,"height":150,"original":"width":400,"height":300,"large":"width":600,"height":450,"medium":"width":400,"height":300,"isMobileDevice":false,"removeAllOverlays":"LITHIUM:removeAllOverlays","isCanUploadVideo":false,"passToAttachmentEvent":"LITHIUM:passToAttachment","imageUrlPattern":" -id//image-size/?v=v2&px=-1","useMessageMentions":false,"spellcheckerLangs":"English (US)=en,Spanish=es,Portuguese=pt,German=de,French=fr,Arabic=ar","mentionsVersion":"2","iframeTitle":"Body Rich Text Area. Press ALT-F10 for toolbar and Escape to return to the editor.","events":"editorPasteEvent":"LITHIUM:editorPaste","editorLoadedEvent":"LITHIUM:editorLoaded","useGraphicalEditor":true});LITHIUM.InformationBox("updateFeedbackEvent":"LITHIUM:updateAjaxFeedback","componentSelector":"#informationbox_10c33df06506724_18","feedbackSelector":".InfoMessage");LITHIUM.Text.set("ajax.createUrlSnippet.loader.feedback.title":"Loading...");LITHIUM.AjaxSupport("ajaxOptionsParam":"useLoader":true,"event":"LITHIUM:createUrlSnippet","tokenId":"ajax","elementSelector":"#messagepresnippet_10c33df06506724","action":"createUrlSnippet","feedbackSelector":"#messagepresnippet_10c33df06506724","url":" _0.form.messageeditor.messagepresnippet:createurlsnippet?t:ac=board-id/acrobat-sdk/thread-id/63323","ajaxErrorEventName":"LITHIUM:ajaxError","token":"Q4Ad0SDmNVj9K1m1OXNqjsgPGW92vAUzMoODXkVBYbo.");LITHIUM.MessagePreSnippet("pasteEvent":"LITHIUM:editorPaste","maxUrlListSize":10,"snippetExistsTextClass":"lia-media-snippet-preview-exists","tinyMceSelector":"#messageEditor_10c33df06506724_0","messageSnippetEvent":"LITHIUM:createUrlSnippet","elementSelector":"#messagepresnippet_10c33df06506724","snippetUpdateEvent":"LITHIUM:updateUrlSnippet","urlFormFieldSelector":".lia-form-media-snippet-url-input","snippetCloseEvent":"LITHIUM:closeUrlSnippet");LITHIUM.BlockEvents('.lia-js-block-events', [".lia-spoiler-link",".oo-icon",".oo-volume-bar",".oo-close-button"], '.message-preview');LITHIUM.KeepSessionAlive("/t5/status/blankpage?keepalive", 300000);new 
LITHIUM.MessageEditor("previewButtonSelector":"#previewButton_10c33df06506724","defaultTabSelector":".rich-link","defaultTabName":"rich","usesInlinePreview":true,"formHasErrorsEvent":"LITHIUM:formHasErrors","exitPreviewButtonSelector":"#exitPreviewButton_10c33df06506724","isTabsPresent":false,"ajaxCompleteEvent":"LITHIUM:ajaxComplete","isGteEditorV2":true,"previewSubmitElementSelector":"#submitContext_10c33df06506724","tinyMceElementSelector":"#tinyMceEditor_10c33df06506724","elementSelector":"#messageEditor_10c33df06506724_0","macroChangeEvent":"LITHIUM:change-macro","preExitPreviewEvent":"LITHIUM:refreshAttachments");LITHIUM.MessageEditor.MessageQuote("#messageQuote_10c33df06506724", "#tinyMceEditor_10c33df06506724", " wrote:
    my previous laptop is broken and i got a new desktop, I cannot retrieve my activation Code. Please help", true);LITHIUM.FileDragDrop("urls":"uploadUrl":" _0.form.attachmentscomponent:uploadfileaction/attachments-key/ade750be-e30d-4407-ac8e-66dd1cdfbc0b?t:ac=board-id/acrobat-sdk/thread-id/63323","selectors":"container":"#filedragdrop_10c33df06506724","feedbackElement":"#dragDropFeedback .AjaxFeedback","cancelUploadProgress":"lia-remove-attachment-inprogress","fileUpload":"#filedragdrop_10c33df06506724 .lia-file-upload","events":"uploadDoneEvent":"LITHIUM:uploadDone","refreshAttachmentsEvent":"LITHIUM:refreshAttachments","formHasErrorsEvent":"LITHIUM:formHasErrors","misc":"actionTokenId":"uploadFile","fileDataParam":"Filedata","isEditorGteV2":true,"actionToken":"V5bT4UNuiyOj1R5bOTfpeSTCBFCn5tqrFPOelfStkpk.");LITHIUM.InformationBox("updateFeedbackEvent":"LITHIUM:updateAjaxFeedback","componentSelector":"#informationbox_10c33df06506724_19","feedbackSelector":".InfoMessage");LITHIUM.AjaxSupport("ajaxOptionsParam":"event":"LITHIUM:refreshAttachments","parameters":"clientId":"inlinemessagereplyeditor_0_10c33df06506724","attachmentKey":"ade750be-e30d-4407-ac8e-66dd1cdfbc0b","tokenId":"ajax","elementSelector":"#inlinemessagereplyeditor_0_10c33df06506724","action":"refreshAttachments","feedbackSelector":"#attachmentsComponent_10c33df06506724","url":" _0.form.attachmentscomponent:refreshattachments?t:ac=board-id/acrobat-sdk/thread-id/63323","ajaxErrorEventName":"LITHIUM:ajaxError","token":"99Wu7gy1Coyjx0T5RseO42DR4_PQmbzByXHOT8i55Bk.");LITHIUM.AjaxSupport("ajaxOptionsParam":"event":"LITHIUM:removeNewAttachment","parameters":"clientId":"inlinemessagereplyeditor_0_10c33df06506724","attachmentKey":"ade750be-e30d-4407-ac8e-66dd1cdfbc0b","tokenId":"ajax","elementSelector":"#inlinemessagereplyeditor_0_10c33df06506724 .lia-file-upload","action":"removeNewAttachment","feedbackSelector":"#attachmentsComponent_10c33df06506724","url":" _0.form.attachmentscomponent:removenewattachment?t:ac=board-id/acrobat-sdk/thread-id/63323","ajaxErrorEventName":"LITHIUM:ajaxError","token":"wbe-9anZisCucaMQZHiZ1jZWe60iMIPEU7_ugKJWWB0.");LITHIUM.AjaxSupport("ajaxOptionsParam":"event":"LITHIUM:removePreviewAttachment","parameters":"clientId":"inlinemessagereplyeditor_0_10c33df06506724","attachmentKey":"ade750be-e30d-4407-ac8e-66dd1cdfbc0b","tokenId":"ajax","elementSelector":"#inlinemessagereplyeditor_0_10c33df06506724 .lia-file-upload","action":"removePreviewAttachment","feedbackSelector":"#attachmentsComponent_10c33df06506724","url":" _0.form.attachmentscomponent:removepreviewattachment?t:ac=board-id/acrobat-sdk/thread-id/63323","ajaxErrorEventName":"LITHIUM:ajaxError","token":"FcCQp_4ny0GzAqCAEf3lL1pZ8wHZ1Kc450dbInIx1EA.");LITHIUM.AjaxSupport("ajaxOptionsParam":"event":"LITHIUM:removeExistingAttachment","parameters":"clientId":"inlinemessagereplyeditor_0_10c33df06506724","attachmentKey":"ade750be-e30d-4407-ac8e-66dd1cdfbc0b","tokenId":"ajax","elementSelector":"#inlinemessagereplyeditor_0_10c33df06506724 .lia-file-upload","action":"removeExistingAttachment","feedbackSelector":"#attachmentsComponent_10c33df06506724","url":" 
_0.form.attachmentscomponent:removeexistingattachment?t:ac=board-id/acrobat-sdk/thread-id/63323","ajaxErrorEventName":"LITHIUM:ajaxError","token":"hvQf59KD881aZoQebjKkOkObSh1BU_MkyIyRhTGxQIQ.");LITHIUM.AjaxSupport("ajaxOptionsParam":"event":"LITHIUM:removeInProgressNewAttachment","parameters":"clientId":"inlinemessagereplyeditor_0_10c33df06506724","attachmentKey":"ade750be-e30d-4407-ac8e-66dd1cdfbc0b","tokenId":"ajax","elementSelector":"#inlinemessagereplyeditor_0_10c33df06506724 .lia-file-upload","action":"removeInProgressNewAttachment","feedbackSelector":"#attachmentsComponent_10c33df06506724","url":" _0.form.attachmentscomponent:removeinprogressnewattachment?t:ac=board-id/acrobat-sdk/thread-id/63323","ajaxErrorEventName":"LITHIUM:ajaxError","token":"iy94pPSMW8-oOaCohVrarr9J9dQ7y8f_kZofvAaLadg.");LITHIUM.DragDropAttachmentsComponent("fileSizeErrorText":"The file () exceeds the maximum file size. The maximum file size is 47 MB.","validExts":"8bf, abf, abr, act, aep, afm, ai, arw, as, ase, avi, bmp, book, cel, cfc, chproj, cptx, cr2, cr3, crf, crw, css, csv, dn, dng, doc, docx, eps, epub, exif, fbx, fla, flac, flv, fm, gif, icma, icml, ico, ics, idml, indd, jpeg, jpg, jsfl, json, log, loss, lrcat, lrtemplate, m4a, mif, mov, mp3, mp4, mpg, nef, nrw, obj, odt, orf, otc, otf, pdf, pfb, pfm, pmd, png, ppj, ppt, pptx, prc, prel, prproj, ps, psb, psd, raf, raw, rtf, sbs, sbsar, sbsm, scc, ses, sesx, skp, sol, srt, srw, ssa, stl, svg, swf, tif, ttc, ttf, txt, wav, wmv, x3f, xd, xls, xlsx, xml, xmp","dropZoneSelector":"#inlinemessagereplyeditor_0_10c33df06506724 .lia-attachments-drop-zone","uploadingText":"Uploading...","changeNumAttachmentsEvent":"LITHIUM:changeNumAttachments","storageUnitKB":"KB","currAttachments":0,"removeNewAttachmentSelector":"#inlinemessagereplyeditor_0_10c33df06506724 .lia-remove-attachment","removeInProgressNewAttachment":"LITHIUM:removeInProgressNewAttachment","elementSelector":"#inlinemessagereplyeditor_0_10c33df06506724","maxAttachments":10,"removeAllOverlays":"LITHIUM:removeAllOverlays","inProgressAttachmentsContainerSelector":"#inlinemessagereplyeditor_0_10c33df06506724 .lia-in-progress-attachments","removeExistingAttachmentEvent":"LITHIUM:removeExistingAttachment","inputFieldSelector":".lia-form-type-file.lia-form-type-file-hidden","dropFilesHereText":"attachments.overlay.text","enableFormActionButtonsEvent":"LITHIUM:enableFormActionButtons","maxFileSize":50000000,"tooManyAttachmentsMsg":"The maximum number of attachments has been reached. Maximum number of attachments allowed is: 10","attachmentErrorSelector":"#inlinemessagereplyeditor_0_10c33df06506724 .lia-file-error-msg","cancelAttachmentProgressCss":"lia-remove-attachment-inprogress","fileUploadSelector":"#inlinemessagereplyeditor_0_10c33df06506724 .lia-file-upload","newAttachmentSelector":"#inlinemessagereplyeditor_0_10c33df06506724 .lia-new-attachment","attachmentsTooManyErrorSelector":"#inlinemessagereplyeditor_0_10c33df06506724 .lia-attachment-upload-error-many","fileTypeErrorText":"The file type () is not supported. 
Valid file types are: 8bf, abf, abr, act, aep, afm, ai, arw, as, ase, avi, bmp, book, cel, cfc, chproj, cptx, cr2, cr3, crf, crw, css, csv, dn, dng, doc, docx, eps, epub, exif, fbx, fla, flac, flv, fm, gif, icma, icml, ico, ics, idml, indd, jpeg, jpg, jsfl, json, log, loss, lrcat, lrtemplate, m4a, mif, mov, mp3, mp4, mpg, nef, nrw, obj, odt, orf, otc, otf, pdf, pfb, pfm, pmd, png, ppj, ppt, pptx, prc, prel, prproj, ps, psb, psd, raf, raw, rtf, sbs, sbsar, sbsm, scc, ses, sesx, skp, sol, srt, srw, ssa, stl, svg, swf, tif, ttc, ttf, txt, wav, wmv, x3f, xd, xls, xlsx, xml, xmp.","uploadDoneEvent":"LITHIUM:uploadDone","disableFormActionButtonsEvent":"LITHIUM:disableFormActionButtons","inProgressAttachmentSelector":".lia-in-progress-attachment","removePreviewAttachmentEvent":"LITHIUM:removePreviewAttachment","removeNewAttachmentEvent":"LITHIUM:removeNewAttachment","passToAttachmentEvent":"LITHIUM:passToAttachment");LITHIUM.InformationBox("updateFeedbackEvent":"LITHIUM:updateAjaxFeedback","componentSelector":"#informationbox_10c33df06506724_20","feedbackSelector":".InfoMessage");LITHIUM.Form.resetFieldForFocusFound();LITHIUM.Text.set("ajax.InlineMessageReply.loader.feedback.title":"Loading...");LITHIUM.AjaxSupport.fromForm('#form_10c33df06506724', 'InlineMessageReply', '#ajaxFeedback_10c33df06506724_0', 'LITHIUM:ajaxError', "useLoader":false,"ignoreFormActions":["Cancel","SaveDraft"],"event":"submit","httpMethod":"POST", false);LITHIUM.InputEditForm("form_10c33df06506724", "submitButton":".lia-button-Submit-action","enableFormButtonEvent":"LITHIUM:enableFormButton","warnUnsavedDataActionCssClasses":["lia-form-action-ignore-unsaved-data","lia-button-Cancel-action"],"useUnsavedDataWarning":true,"ignoreDisableFormDuringSubmitCssClasses":[],"submitOnChange":false,"swallowEnterEvent":true,"enableFormEvent":"LITHIUM:enableForm","disableFormButtonEvent":"LITHIUM:disableFormButton","disableFormEvent":"LITHIUM:disableForm","unloadMessage":"Unsaved information will be lost.","ignoreOnChangeCssClasses":[],"disableFormOnSubmit":true,"buttonWrapperSelector":".lia-button-wrapper","showUnsavedDataWarningDataKey":"showUnsavedDataWarning","liaBodyTagId":"#lia-body");LITHIUM.AjaxSupport("ajaxOptionsParam":"event":"LITHIUM:autosaveInline","parameters":"clientId":"inlinemessagereplyeditor_0_10c33df06506724","tokenId":"ajax","elementSelector":"#form_10c33df06506724","action":"autosaveInline","feedbackSelector":"#form_10c33df06506724","url":" _0.form:autosaveinline?t:ac=board-id/acrobat-sdk/thread-id/63323","ajaxErrorEventName":"LITHIUM:ajaxError","token":"5YR4sgN6tG_pI-e5nPg187Lq460qc17M10PoAvZ6puQ.");LITHIUM.InlineMessageReplyEditor("openEditsSelector":".lia-inline-message-edit","ajaxFeebackSelector":"#inlinemessagereplyeditor_0_10c33df06506724 .lia-inline-ajax-feedback","collapseEvent":"LITHIUM:collapseInlineMessageEditor","confimationText":"You have other message editors open and your data inside of them might be lost. 
Are you sure you want to proceed?","topicMessageSelector":".lia-forum-topic-message-gte-5","focusEditor":false,"hidePlaceholderShowFormEvent":"LITHIUM:hidePlaceholderShowForm","formWrapperSelector":"#inlinemessagereplyeditor_0_10c33df06506724 .lia-form-wrapper","reRenderInlineEditorEvent":"LITHIUM:reRenderInlineEditor","ajaxBeforeSendEvent":"LITHIUM:ajaxBeforeSend:InlineMessageReply","element":"input","clientIdSelector":"#inlinemessagereplyeditor_0_10c33df06506724","loadAutosaveAction":false,"newPostPlaceholderSelector":".lia-new-post-placeholder","placeholderWrapperSelector":"#inlinemessagereplyeditor_0_10c33df06506724 .lia-placeholder-wrapper","messageId":8292431,"formSelector":"#inlinemessagereplyeditor_0_10c33df06506724","expandedClass":"lia-inline-message-reply-form-expanded","expandedRepliesSelector":".lia-inline-message-reply-form-expanded","newPostPlaceholderClass":"lia-new-post-placeholder","editorLoadedEvent":"LITHIUM:editorLoaded","replyEditorPlaceholderWrapperCssClass":"lia-placeholder-wrapper","messageActionsClass":"lia-message-actions","cancelButtonSelector":"#inlinemessagereplyeditor_0_10c33df06506724 .lia-button-Cancel-action","isGteForumV5":true,"messageViewWrapperSelector":".lia-threaded-detail-display-message-view","disabledReplyClass":"lia-inline-message-reply-disabled-reply");LITHIUM.Text.set("ajax.reRenderInlineEditor.loader.feedback.title":"Loading...");LITHIUM.AjaxSupport("ajaxOptionsParam":"useLoader":true,"blockUI":"","event":"LITHIUM:reRenderInlineEditor","parameters":"clientId":"inlinemessagereplyeditor_0_10c33df06506724","tokenId":"ajax","elementSelector":"#inlinemessagereplyeditor_0_10c33df06506724","action":"reRenderInlineEditor","feedbackSelector":"#inlinemessagereplyeditor_0_10c33df06506724","url":" _0:rerenderinlineeditor?t:ac=board-id/acrobat-sdk/thread-id/63323","ajaxErrorEventName":"LITHIUM:ajaxError","token":"ZD-AEo1Eks8t5k0nYF-XhBoa-HkSNSofuYIxkA-yk8c.");LITHIUM.InlineMessageEditor("ajaxFeebackSelector":"#inlinemessagereplyeditor_0_10c33df06506724 .lia-inline-ajax-feedback","submitButtonSelector":"#inlinemessagereplyeditor_0_10c33df06506724 .lia-button-Submit-action");LITHIUM.AjaxSupport("ajaxOptionsParam":"event":"LITHIUM:lazyLoadComponent","parameters":"componentId":"messages.widget.emoticons-lazy-load-runner","tokenId":"ajax","elementSelector":"#inlinemessagereplyeditor_0_10c33df06506724","action":"lazyLoadComponent","feedbackSelector":false,"url":" _0:lazyloadcomponent?t:ac=board-id/acrobat-sdk/thread-id/63323","ajaxErrorEventName":"LITHIUM:ajaxError","token":"J4iQBq9dlKN5JBt7pTxftKDrSaEz9jhys_v1oI2pkJw.");LITHIUM.lazyLoadComponent("selectors":"elementSelector":"#inlinemessagereplyeditor_0_10c33df06506724","events":"lazyLoadComponentEvent":"LITHIUM:lazyLoadComponent","misc":"isLazyLoadEnabled":true);;(function($)try const RESOURCE_LINK = 'Community: resourcesLinkClick'; const RESOURCE_EDIT = 'Community: resourcesEditClick'; const RESOURCE_ADD_GROUP = 'Community: resourcesAddGroupClick'; const RESOURCE_ADD_LINK = 'Community: resourcesAddLinkClick'; const RESOURCE_EDIT_GROUP = 'Community: resourcesEditGroup'; const RESOURCE_EDIT_LINK = 'Community: resourcesEditLink'; const RESOURCE_DELETE_GROUP = 'Community: resourcesDeleteGroup'; const RESOURCE_DELETE_LINK = 'Community: resourcesDeleteLink'; if($('.resources-container').length > 0) $('.links-list-item-title-url-container .list-link').on('click', function(e) trackResourceEvents(e.currentTarget,RESOURCE_LINK,true,true); ); $('.resources-header-edit-icon').on('click',function(e) 
trackResourceEvents(null,RESOURCE_EDIT,false,false); ); $('.add-group-container').on('click',function(e) trackResourceEvents(null,RESOURCE_ADD_GROUP,false,false); ); $(document).on('click', '.group-form .add-link', function(e) trackResourceEvents(null,RESOURCE_ADD_LINK,false,false); ); $(document).on('click', '.group-list-item .group-edit-button', function(e) trackResourceEvents(e.currentTarget,RESOURCE_EDIT_GROUP,true,false); ); $(document).on('click', '.group-list-item .group-delete-button', function(e) trackResourceEvents(e.currentTarget,RESOURCE_DELETE_GROUP,true,false); ); $(document).on('click', '.saved-link__edit', function(e) trackResourceEvents(e.currentTarget,RESOURCE_EDIT_LINK,true,true); ); $(document).on('click', '.saved-link__delete', function(e) trackResourceEvents(e.currentTarget,RESOURCE_DELETE_LINK,true,true); ); catch(ex) console.log(ex); )(LITHIUM.jQuery); ;(function($)tryconst CC_LINKS_TYPE= '0': 'GetAppsBanner', '1': 'GetApps', '2': 'InstallTheApp', '3': 'LaunchTheExperience', '4': 'ManageAccount'; const CONVERSATION_FLAG_TYPE= '-1': '', '0': 'Top Reply', '1': 'Correct Answer', '2': 'Featured', '3': 'Announcement', '4': 'Pinned Reply'; const PAGE_NAME='digitalData.page.pageInfo.pageName';const LANGUAGE='digitalData.page.pageInfo.language';const SITE_SECTION='digitalData.page.pageInfo.siteSection';const COMMUNITY_CATEGORY='digitalData.community.communityInfo.communityCategory';const COMMUNITY_ID='digitalData.community.communityInfo.communityId';const COMMUNITY_TITLE='digitalData.community.communityInfo.communityTitle'; const CONVERSATION_PAGE='Community: conversationPage';//evar203 mapped variablesconst CARD_CREATED_DATE='digitalData.community.communityAttributes.cardCreatedDate';const COUNT_CORRECT_ANSWER='digitalData.community.communityAttributes.countCorrectAnswer';const COMMUNITY_FLAG='digitalData.community.communityInfo.communityFlag'; const COUNT_REPLY='digitalData.community.communityAttributes.countReply'; const RELATED_CONVERSATION_ACTION='relatedConversationClick';const COMMUNITY_DD_PROPERTY='digitalData.community';const CONVERSATION_REPORT='Community: conversationReportClick';const REPLY_REPORT='Community: repliesReportClick';const MARKED_CORRECT='Community: Marked as Correct';const UNMARKED_CORRECT='Community: UnMarked as Correct';const REPLY_MARKED_CORRECT='replyMarkedCorrect';const REPLY_UNMARKED_CORRECT='replyUnmarkedCorrect';const CONVERSATION_FOLLOW='Community: conversationFollowClick';const REPLY_FOLLOW='Community: repliesFollowClick';const CONVERSATION_UNFOLLOW='Community: conversationUnfollowClick';const REPLY_UNFOLLOW='Community: repliesUnfollowClick';const SOPHIA_EVENTS = 'digitalData.sophiaResponse.fromPage';const CC_LINK1 = 'Community: CCD_';const CC_LINK2 = 'Click';const CC_LINK_CLICK = 'ccdLinkClick';const CC_MANAGE_ACCOUNT_CLICK = 'manageAccountLinkClick'; const REC_CONVO_FEEDBACK_SHOWN='digitalData.community.communityAttributes.recConvoFeedbackShown';const CONVERSATION_EDIT='Community: conversationEditClick';const CONVERSATION_VIEW_HISTORY='Community: conversationViewHistoryClick';const CONVERSATION_MOVE_MERGE='Community: conversationMoveMergeClick';const CONVERSATION_SPAM='Community: conversationSpamClick';const CONVERSATION_DELETE='Community: conversationDeleteClick';const CONVERSATION_BAN_USER='Community: conversationBanUserClick';const REPLY_BAN_USER='Community: repliesBanUserClick';const REPLY_SPAM='Community: repliesSpamClick';const REPLY_DELETE='Community: repliesDeleteClick';const REPLY_MOVE_MERGE='Community: 
repliesMoveMergeClick';const REPLY_VIEW_HISTORY='Community: repliesViewHistoryClick';const REPLY_EDIT='Community: repliesEditClick';const REPLIES_IN_RESPONSE_TO ='Community: repliesInResponseToClick';$.when(promise1).done( function () userProfilePromise.then(trackConversationPageLoad);); function trackConversationPageLoad() //Conversation Page Load Tracking const subject = $('.userStrip').attr('data-message-subject');let messageUid = '8292431';const tempDD = digitalData; let boardId = normalizeBoardId('acrobat-sdk'); let community = normalizeCategoryBoardId(); let contentType = getBoardType(boardId); //track new post success trackNewPostSuccess(community, subject, messageUid); //track merge message success trackMergeSuccess(subject,community,'8292431',contentType); //recover digital data property digitalData = tempDD; const valArr = location.pathname.split('/'); let pageName; let layoutView = 'threaded'; if('ForumTopicPage' === 'IdeaPage') layoutView = 'linear'; //Ideas do not support threaded view so it will always be linear let sortOrder = 'by_date_ascending'=="by_date_ascending"?"Earliest":"Latest"; if(PAGE_LANG!=='en') pageName = location.hostname + ':t5:' + boardId + ':' + 'conversationPage'; else if(valArr && valArr.length > 2) pageName = location.hostname + ':' + valArr[1] + ':' + community + ':' + 'conversationPage'; if(pageName) setDigitalDataProperty(PAGE_NAME, pageName); if(messageUid) setDigitalDataProperty(COMMUNITY_ID, messageUid); setDigitalDataProperty(LANGUAGE, getLocale()); setDigitalDataProperty(SITE_SECTION, CONVERSATION_PAGE); setPrimaryEvent(CONVERSATION_PAGE, 'pageload');let replyCount = 0;if($('.reply-count__text').length > 0) replyCount = $('.reply-count__text').attr('data-reply-count'); let status = ''; let voteCount = 0; if($('.message-status-link').length > 0) status = $('.message-status-link')[0].innerText; if($('#messageKudosCount_').length > 0) voteCount = $('#messageKudosCount_')[0].getAttribute('data-upvote-count'); const correctAnswerCount = $('.correct-answer-div').attr('data-correct-answer-count'); const creationDate = $('.roleTimestamp').attr('data-post-time'); setDigitalDataProperty(CARD_CREATED_DATE, creationDate); //setDigitalDataProperty(COUNT_REPLY, replyCount?replyCount:'0'); setDigitalDataProperty(COUNT_CORRECT_ANSWER, correctAnswerCount?correctAnswerCount:'0'); setDigitalDataProperty(COMMUNITY_CONTENT_TYPE, contentType); setDigitalDataProperty(COMMUNITY_CATEGORY, community); setDigitalDataProperty(COMMUNITY_TITLE, subject); let solnType = $('.conversation-page-container').attr('data-solution-type'); if(parseInt(solnType) 0) solnType = '1'; else if($('#special-reply-pinned').length > 0) solnType = '4'; solnType = CONVERSATION_FLAG_TYPE[solnType]; let flag = solnType; if($('.body-outer-container').attr('data-pin-flag') === "true") if(flag != '') flag = flag + ';Pinned'; else flag = 'Pinned'; if(flag != '') setDigitalDataProperty(COMMUNITY_FLAG, flag); if(document.getElementById('feedback_view_1')) setDigitalDataProperty(REC_CONVO_FEEDBACK_SHOWN, 'true'); dnmsTrackConversationFeedback('render', 'feedback-answer', [messageUid, community, null, 'radio button']); setDigitalDataProperty(FILTERS, [createGPSortInfoObj(sortOrder)]); setDigitalDataProperty(SOPHIA_EVENTS,['CampaignId': relatedConvCampaignId, 'ControlGroupId': relatedConvControlGroupId, 'VariationId': relatedConvVariationId, 'ActionBlockId': relatedConvActionBlockId, 'CampaignId': manageAccountCampaignId, 'ControlGroupId': manageAccountControlGroupId, 'VariationId': manageAccountVariationId, 
'ActionBlockId': manageAccountActionBlockId]); captureSnapshot('state'); //dunamis api call dnmsConversationPageRender(community, replyCount, subject, getCommunityCurrentPageNum(), getConversationTags().toString(), messageUid, layoutView, flag, status, voteCount); cleanDigitalDataProperties([SOPHIA_EVENTS]); if ($('.promos-wrapper').length > 0) let promotype = $('.promos-wrapper').attr('data-promotype'); let promosubtype = $('.promos-wrapper').attr('data-promosubtype'); dnmsPromoRender(promotype, promosubtype, community, messageUid); //Track related conversation clickdetectRelatedConversationsLoad(); //track status update success if(localStorage.hasOwnProperty('messageStatusUpdate')) trackStatusUpdateSuccess(); //Track reply post success trackReplyPostSuccess(); let lsCleanUpArr = ['gpEditMessageType', 'gpEditMessagePageNum', 'gpReportMessageDetails', 'gpReportMessageType'];clearStorage(lsCleanUpArr);cleanDigitalDataProperties(['digitalData.primaryEvent.eventInfo', FILTERS]); function getPayload(params) var sophiaPayload = []; try params = params.split("&"); var keyMapping = 'aid':'ActionBlockId','campid':'CampaignId', 'cid':'ContainerId','cgid':'ControlGroupId','tid':'TreatmentId','vid':'VariationId','sid':'SurfaceId'; var sophiaMap = ; for(let i=0;i 1 && (keys[0] in keyMapping)) sophiaMap[keyMapping[keys[0]]] = keys[1]; sophiaPayload.push(sophiaMap); catch(err) console.log(err); return sophiaPayload;function trackNewPostSuccess(communityName, subject, messageUid) const npsDD = localStorage.getItem('npsDigitalData'); if(npsDD) const ddVal = JSON.parse(npsDD);if(subject === ddVal.community.communityInfo.communityTitle) digitalData = ddVal; setDigitalDataProperty(COMMUNITY_ID, messageUid); dnmsNewPostSuccess(communityName, subject, messageUid, JSON.parse(npsDD).sophiaResponse); captureSnapshot('event'); cleanDigitalDataProperties([SOPHIA_EVENTS]); localStorage.removeItem('npsDigitalData');function trackMergeSuccess(subject,community,messageId,contentType) try const mergeMsgDD = localStorage.getItem('mergeMsgDigitalData'); if(mergeMsgDD) const ddVal = JSON.parse(mergeMsgDD); if(messageId === ddVal.community.communityInfo.communityId) digitalData = ddVal; setDigitalDataProperty(COMMUNITY_CATEGORY, community); setDigitalDataProperty('digitalData.community.communityInfo.communityContentTab', contentType); setDigitalDataProperty(COMMUNITY_TITLE, subject); captureSnapshot('event'); let cnvrstnIds = []; let slctdCnvrstnArr = ddVal.community.attributes.selectedConversations; for(let i=0;i 4) let triggerBy = moveMergeTriggerDetails[0]; let cName = community; // merged to which community if(cName !== moveMergeTriggerDetails[1]) cName = community + ' let cId = messageId; let cType = moveMergeTriggerDetails[3]; //merged from which community type let msgType = moveMergeTriggerDetails[4]; let replyType = msgType!=='originalPost'?msgType:null; let xArr = [cName, cId, cType, messageId+' localStorage.removeItem('mergeMsgDigitalData'); catch(err) console.log(err); function clearStorage(items) for(let x=0; x 0) $('.related-conversations-card').on('click', function(e) if(e.target.hasAttribute('data-related-content-type')) //section tab click events let destinationTab = e.target.getAttribute('data-related-content-type'); dnmsCPSectionTabClick(getDigitalDataProperty(COMMUNITY_CATEGORY), 'related conversation', destinationTab); setPrimaryEvent('Community: relatedConversationLabelClick', SECTION_TAB_ACTION); setDigitalDataProperty(COMMUNITY_CONTENT_TYPE, destinationTab); captureSnapshot('event'); else let subject 
= e.target.getAttribute('data-related-conversation-subject'); let boardId = e.target.getAttribute('data-related-conversation-board'); let relatedCommContentType = getBoardType(boardId); let community = normalizeCategoryBoardId(); let target_href = e.target.href; let convo_id = e.target.getAttribute('data-related-conversation-id'); let org_convo_id = getDigitalDataProperty(COMMUNITY_ID); dnmsRelatedConversationsClick(community, target_href, org_convo_id, convo_id, "", subject, relatedConvCampaignId, relatedConvControlGroupId, relatedConvVariationId, relatedCommContentType); setPrimaryEvent(RELATED_CONVERSATION_CLICK, RELATED_CONVERSATION_ACTION); cleanDigitalDataProperties([COMMUNITY_DD_PROPERTY]); setDigitalDataProperty(COMMUNITY_CATEGORY, community); setDigitalDataProperty(COMMUNITY_CONTENT_TYPE,relatedCommContentType); setDigitalDataProperty(COMMUNITY_ID, convo_id); setDigitalDataProperty(COMMUNITY_TITLE, subject); setDigitalDataProperty(SOPHIA_EVENTS,['CampaignId': relatedConvCampaignId, 'ControlGroupId': relatedConvControlGroupId, 'VariationId': relatedConvVariationId, 'ActionBlockId': relatedConvActionBlockId]); captureSnapshot('event'); cleanDigitalDataProperties([SOPHIA_EVENTS]); ); //Track actions on conversation and repliesif($('.lia-quilt-column-main_content').length > 0) $('.lia-quilt-column-main_content').on('click', function(e) targetElement.hasClass('delete-message')) trackDeleteMessageClick(targetElement); //Track ban user click if(targetElement.hasClass('ban-user')) trackBanUserClick(targetElement); //Track follow click if(targetElement.hasClass('addMessageUserEmailSubscription')) trackFollowUnfollowClick(targetElement, 'follow'); //Track unfollow click if(targetElement.hasClass('removeMessageUserEmailSubscription')) trackFollowUnfollowClick(targetElement, 'unfollow'); //Track in response to if(targetElement.hasClass('lia-message-reply-in-response-to')) setPrimaryEvent(REPLIES_IN_RESPONSE_TO, REPLY_ACTION); captureSnapshot('event'); dnmsTrackInResponseTo(getConversationPageDetails()); );//Track edit message clickif($('.edit-message').length > 0) $('.edit-message').on('click', function(e) trackEditMessageClick($(e.target)); );//Track mark spam clickif($('.lia-component-spam-action-mark-message-as-spam').length > 0) $('.lia-component-spam-action-mark-message-as-spam').on('click', function(e) trackMarkSpamClick($(e.target)); ); //Track conversation page CC clicksvar ccElements = document.querySelectorAll(".cc-links-cta-container__anchor, .cc-links-banner-p2 a button");for (let i = 0; i < ccElements.length; i++) if($(ccElements[i]).length) $(ccElements[i]).on('click', function(e) let ccType = e.currentTarget.getAttribute('data-type'); let ccurl = e.currentTarget.getAttribute('href'); if(ccType && CC_LINKS_TYPE[ccType]) if (ccType == '4') let primaryEvent = "Community: ManageAccountBtn_Click"; setPrimaryEvent(primaryEvent, CC_MANAGE_ACCOUNT_CLICK); setDigitalDataProperty(SOPHIA_EVENTS,['CampaignId': manageAccountCampaignId, 'ControlGroupId': manageAccountControlGroupId, 'VariationId': manageAccountVariationId, 'ActionBlockId': manageAccountActionBlockId]); captureSnapshot('event'); cleanDigitalDataProperties([SOPHIA_EVENTS]); dnmsManageAccountEvent(getDigitalDataProperty(COMMUNITY_CATEGORY), ccurl, 'ManageAccount', 'click', 'Conversation', manageAccountCampaignId, manageAccountVariationId, manageAccountControlGroupId); else let primaryEvent = CC_LINK1+CC_LINKS_TYPE[ccType]+CC_LINK2; setPrimaryEvent(primaryEvent, CC_LINK_CLICK); captureSnapshot('event'); 
dnmsCCLinkClick(getDigitalDataProperty(COMMUNITY_CATEGORY), ccurl, CC_LINKS_TYPE[ccType], 'Conversation'); ); function trackFollowUnfollowClick(tElement, action) let isFollowAction = action==='follow'; if(tElement.closest('.lia-thread-topic').length > 0) setPrimaryEvent(isFollowAction?CONVERSATION_FOLLOW:CONVERSATION_UNFOLLOW, CONVERSATION_ACTION); //dunamis api call dnmsConversationActionsClick(action, getConversationPageDetails()); else setPrimaryEvent(isFollowAction?REPLY_FOLLOW:REPLY_UNFOLLOW, REPLY_ACTION); let replyType = getReplyType(tElement); if(replyType) dnmsConversationReplyActionsClick(action, replyType, getConversationPageDetails()); cleanDigitalDataProperties([COMMUNITY_ATTRIBUTES]); captureSnapshot('event');function trackBanUserClick(tElement) if(tElement.closest('.lia-thread-topic').length > 0) setPrimaryEvent(CONVERSATION_BAN_USER, CONVERSATION_ACTION); //dunamis api call dnmsConversationActionsClick('ban user', getConversationPageDetails()); else let replyType = getReplyType(tElement); if(replyType) dnmsConversationReplyActionsClick('ban user', replyType, getConversationPageDetails()); setPrimaryEvent(REPLY_BAN_USER, REPLY_ACTION); cleanDigitalDataProperties([COMMUNITY_ATTRIBUTES]); captureSnapshot('event');function trackMarkSpamClick(tElement) if(tElement.closest('.lia-thread-topic').length > 0) setPrimaryEvent(CONVERSATION_SPAM, CONVERSATION_ACTION); //dunamis api call let convArray = getConversationPageDetails(); dnmsConversationActionsClick('mark as spam', convArray); if(convArray.length > 1) syncDataOnS3('Spam', convArray[1]); else let replyType = getReplyType(tElement); if(replyType) dnmsConversationReplyActionsClick('mark as spam', replyType, getConversationPageDetails()); setPrimaryEvent(REPLY_SPAM, REPLY_ACTION); cleanDigitalDataProperties([COMMUNITY_ATTRIBUTES]); captureSnapshot('event');function trackDeleteMessageClick(tElement) if(tElement.closest('.lia-thread-topic').length > 0) setPrimaryEvent(CONVERSATION_DELETE, CONVERSATION_ACTION); //dunamis api call dnmsConversationActionsClick('delete the conversation', getConversationPageDetails()); localStorage.setItem('moveMergeDeletetriggeredBy','conversationPage:originalPost'+':'+getConversationPageDetails().toString()+':'+getDigitalDataProperty(COMMUNITY_CONTENT_TYPE)); else let replyType = getReplyType(tElement); if(replyType) dnmsConversationReplyActionsClick('delete the reply', replyType, getConversationPageDetails()); localStorage.setItem('moveMergeDeletetriggeredBy','conversationPage:'+replyType+':'+getConversationPageDetails().toString()+':'+getDigitalDataProperty(COMMUNITY_CONTENT_TYPE)); setPrimaryEvent(REPLY_DELETE, REPLY_ACTION); cleanDigitalDataProperties([COMMUNITY_ATTRIBUTES]); captureSnapshot('event');function trackMoveMergeClick(tElement) localStorage.setItem("movingConversationId", getDigitalDataProperty(COMMUNITY_ID)); if(tElement.closest('.lia-thread-topic').length > 0) setPrimaryEvent(CONVERSATION_MOVE_MERGE, CONVERSATION_ACTION); //dunamis api call dnmsConversationActionsClick('move/merge the conversation', getConversationPageDetails()); localStorage.setItem('moveMergeDeletetriggeredBy','conversationPage:originalPost'+':'+getConversationPageDetails().toString()+':'+getDigitalDataProperty(COMMUNITY_CONTENT_TYPE)); else let replyType = getReplyType(tElement); if(replyType) dnmsConversationReplyActionsClick('move/merge the conversation', replyType, getConversationPageDetails()); 
localStorage.setItem('moveMergeDeletetriggeredBy','conversationPage:'+replyType+':'+getConversationPageDetails().toString()+':'+getDigitalDataProperty(COMMUNITY_CONTENT_TYPE)); setPrimaryEvent(REPLY_MOVE_MERGE, REPLY_ACTION); cleanDigitalDataProperties([COMMUNITY_ATTRIBUTES]); captureSnapshot('event');function trackViewHistoryClick(tElement) if(tElement.closest('.lia-thread-topic').length > 0) setPrimaryEvent(CONVERSATION_VIEW_HISTORY, CONVERSATION_ACTION); //dunamis api call dnmsConversationActionsClick('view history', getConversationPageDetails()); else let replyType = getReplyType(tElement); if(replyType) dnmsConversationReplyActionsClick('view history', replyType, getConversationPageDetails()); setPrimaryEvent(REPLY_VIEW_HISTORY, REPLY_ACTION); cleanDigitalDataProperties([COMMUNITY_ATTRIBUTES]); captureSnapshot('event');function trackEditMessageClick(tElement) if(tElement.closest('.lia-thread-topic').length > 0) setPrimaryEvent(CONVERSATION_EDIT, CONVERSATION_ACTION); //dunamis api call dnmsConversationActionsClick('edit message', getConversationPageDetails()); localStorage.setItem('gpEditMessagePageNum', getCommunityCurrentPageNum()); else let replyType = getReplyType(tElement); if(replyType) localStorage.setItem('gpEditMessagePageNum', getCommunityCurrentPageNum()); dnmsConversationReplyActionsClick('edit message', replyType, getConversationPageDetails()); localStorage.setItem('gpEditMessageType', replyType); setPrimaryEvent(REPLY_EDIT, REPLY_ACTION); cleanDigitalDataProperties([COMMUNITY_ATTRIBUTES]); captureSnapshot('event');function trackReportClick(tElement) let tempConversationPageDetails = getConversationPageDetails(); tempConversationPageDetails[2] = encodeURIComponent(tempConversationPageDetails[2]); localStorage.setItem('gpReportMessageDetails', tempConversationPageDetails); if(tElement.closest('.lia-thread-topic').length > 0) setPrimaryEvent(CONVERSATION_REPORT, CONVERSATION_ACTION); //dunamis api call dnmsConversationActionsClick('report', getConversationPageDetails()); else let replyType = getReplyType(tElement); if(replyType) dnmsConversationReplyActionsClick('report', replyType, getConversationPageDetails()); localStorage.setItem('gpReportMessageType', replyType); setPrimaryEvent(REPLY_REPORT, REPLY_ACTION); cleanDigitalDataProperties([COMMUNITY_ATTRIBUTES]); captureSnapshot('event');function trackMarkUnmarkCorrectAnswer(action, tElement) let correctFlag = action==='mark correct answer'; setPrimaryEvent(correctFlag?MARKED_CORRECT:UNMARKED_CORRECT, correctFlag?REPLY_MARKED_CORRECT:REPLY_UNMARKED_CORRECT); cleanDigitalDataProperties([COMMUNITY_ATTRIBUTES]); convDetails = getConversationPageDetails(); if(correctFlag) convDetails = setSophiaPayload(convDetails); captureSnapshot('event'); let replyType = getReplyType(tElement); if(replyType) dnmsConversationReplyActionsClick(action, replyType, convDetails); cleanDigitalDataProperties([SOPHIA_EVENTS]);function detectRelatedConversationsLoad() { if($('.personalised-related-conversations').length > 0) let targetNode = $('.personalised-related-conversations')[0]; let config = childList: true ; let callback = function(mutationsList, observer) for(let i=0; i 0) status = $('.message-status-link')[0].innerText; dnmsConversationStatusUpdate('success',getConversationPageDetails(), comment, status); setPrimaryEvent('Community: StatusChanged'+status.replace(' ',''),'conversationStatusUpdated'); setDigitalDataProperty(PRIMARY_FILTER, createGPFilterInfoObj(status, 'statusChange')); captureSnapshot('event'); 
localStorage.removeItem('messageStatusUpdate'); cleanDigitalDataProperties([PRIMARY_FILTER, FILTERS]); catch(e) console.log(e); function isReplyBodyEmpty() { let result = false; let xNode;if($('.mce-edit-area').length > 0 && $('.mce-edit-area').children().length > 0) { let mceEditAreaiFrames = $('.mce-edit-area').children(); for(let i=0; i 0 && (content[0].hasAttribute('data-mce-bogus') || tinymce.innerHTML === '

    -

    Adobe Acrobat 7.0 Professional Authorization Code Keygen


    Download Zip 🗸 https://urloso.com/2uyS3G



    aaccfb2cb3
    -
    -
    \ No newline at end of file diff --git a/spaces/bioriAsaeru/text-to-voice/Being John Malkovich Blu Ray Torrent.md b/spaces/bioriAsaeru/text-to-voice/Being John Malkovich Blu Ray Torrent.md deleted file mode 100644 index def153fe9eada1d2f0100978bbab2d684d9729d8..0000000000000000000000000000000000000000 --- a/spaces/bioriAsaeru/text-to-voice/Being John Malkovich Blu Ray Torrent.md +++ /dev/null @@ -1,5 +0,0 @@ -
    -

    the scourges.pg-13(or r) 218 min
    " as if possessed, malkovich ( john travolta ) stands in the passenger seat of the car of his friend and colleague, dr. burton ( william hurt), a fellow university professor. dr. burton is driving to a meeting at the hoover institution at stanford university with academic colleague dr. grant ( cameron diaz ) and student hildy ( emily watson ). at the institute, malkovich will be lecturing about his experiences while under the influence of a drug called dmt. the drug has a tendency to make people see the world in a unique way, as malkovich does in the passenger seat of the car. as the car leaves the parking lot, it is struck by a motorcycle driven by max gail ( bruce willis ). gail is killed, and malkovich is unharmed, but he is strangely catatonic for the rest of the evening. upon his return home, he telephones dr. burton, who encourages him to go to the hoover institute and report on his experience. once at the institute, malkovich is brought into a large conference room, where dr. grant sits on a raised platform, conversing with a man and woman sitting behind him who are seemingly having a conversation of their own. the man is dr. asher (tommy lee jones), who is an expert on dmt, and the woman is dr. elizabeth (susan sarandon ), who is a professor of neurology. in the middle of the presentation, dr. asher demonstrates the drug on malkovich, who, upon leaving the platform, claims to have seen the world through the eyes of a cat. as the evening progresses, dr. asher and dr. elizabeth are visited by max gail, who is alive and well, and claims to be dr. grant. however, he is not a man but a malevolent, furry, cat-like creature who wants to find a way to take over the world. he is then confronted by a monster ( who is actually dr. elizabeth's cat, hamlet ) and is chased out of the institute by the guard. asher then leaves the presentation, and the cat-like creature is glimpsed by dr. elizabeth, who is a nervous wreck after her cat is injured. at this point, as dr. grant says that he would like to "verify" the authenticity of the dmt experience, a short film is shown in which the cat-like creature seeks to conquer the world. afterward, dr. burton and dr. grant discuss the video in the car with dr. asher, and dr. grant states that he believes it to be a "travelogue." the film then cuts to the point in time when gail was killed. the cat-like creature then enters the car and begins to strangle malkovich. the film ends as the cat-like creature is seen mauling dr. elizabeth, and then attacks dr.

    -

    being john malkovich blu ray torrent


    Download 🗸🗸🗸 https://urloso.com/2uyRql



    899543212b
    -
    -
    \ No newline at end of file diff --git a/spaces/bioriAsaeru/text-to-voice/Eboostr 45 Build 596 Crack How to Speed Up Your PC with This Software.md b/spaces/bioriAsaeru/text-to-voice/Eboostr 45 Build 596 Crack How to Speed Up Your PC with This Software.md deleted file mode 100644 index b4a0e66c1f0a864658fa381967c4b0bc2a98da6d..0000000000000000000000000000000000000000 --- a/spaces/bioriAsaeru/text-to-voice/Eboostr 45 Build 596 Crack How to Speed Up Your PC with This Software.md +++ /dev/null @@ -1,6 +0,0 @@ -

    Eboostr 45 Build 596 Crack


    Download Filehttps://urloso.com/2uyOSZ



    - - aaccfb2cb3
    -
    -
    -

    diff --git a/spaces/bioriAsaeru/text-to-voice/Groove Coaster Touhou 12 DLC Download No Survey.md b/spaces/bioriAsaeru/text-to-voice/Groove Coaster Touhou 12 DLC Download No Survey.md deleted file mode 100644 index ef5eb617ff8fb04ee29a6cac0e826f26e0f6f8f2..0000000000000000000000000000000000000000 --- a/spaces/bioriAsaeru/text-to-voice/Groove Coaster Touhou 12 DLC Download No Survey.md +++ /dev/null @@ -1,10 +0,0 @@ -

    Groove Coaster Touhou 12 DLC Download No Survey


    Download Filehttps://urloso.com/2uyROs



    - -Dec 23, 2021 — ... 1640292876 Groove Coaster Touhou 12 DLC Download No SurveySocial sharing script reset by phpPowerInspect 2014 Portable Torrent. With this tool, you can check your site or your project for security in general and find security vulnerabilities. -Use it to check a website for security vulnerabilities. -It uses vulnerability scanning, website scanning and application scanning. -It can scan all web pages or applications that you want to check for vulnerabilities. -You don't need to install it on your computer to use it. 8a78ff9644
    -
    -
    -

    diff --git a/spaces/breadlicker45/badapple/README.md b/spaces/breadlicker45/badapple/README.md deleted file mode 100644 index 77aa9b9cb12b2e928906e5540ab2737f8a743625..0000000000000000000000000000000000000000 --- a/spaces/breadlicker45/badapple/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Badapple -emoji: 👁 -colorFrom: blue -colorTo: red -sdk: streamlit -sdk_version: 1.15.2 -app_file: app.py -pinned: false -license: other ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/bugbugbug/vits-uma-genshin-honkai/app.py b/spaces/bugbugbug/vits-uma-genshin-honkai/app.py deleted file mode 100644 index 92ddafdcd240434f58569b0e6964ef331a971dcf..0000000000000000000000000000000000000000 --- a/spaces/bugbugbug/vits-uma-genshin-honkai/app.py +++ /dev/null @@ -1,124 +0,0 @@ -import time -import gradio as gr -import utils -import commons -from models import SynthesizerTrn -from text import text_to_sequence -from torch import no_grad, LongTensor -import torch - -hps_ms = utils.get_hparams_from_file(r'./model/config.json') -device = torch.device("cuda" if torch.cuda.is_available() else "cpu") -net_g_ms = SynthesizerTrn( - len(hps_ms.symbols), - hps_ms.data.filter_length // 2 + 1, - hps_ms.train.segment_size // hps_ms.data.hop_length, - n_speakers=hps_ms.data.n_speakers, - **hps_ms.model).to(device) -_ = net_g_ms.eval() -speakers = hps_ms.speakers -model, optimizer, learning_rate, epochs = utils.load_checkpoint(r'./model/G_953000.pth', net_g_ms, None) - -def get_text(text, hps): - text_norm, clean_text = text_to_sequence(text, hps.symbols, hps.data.text_cleaners) - if hps.data.add_blank: - text_norm = commons.intersperse(text_norm, 0) - text_norm = LongTensor(text_norm) - return text_norm, clean_text - -def vits(text, language, speaker_id, noise_scale, noise_scale_w, length_scale): - start = time.perf_counter() - if not len(text): - return "输入文本不能为空!", None, None - text = text.replace('\n', ' ').replace('\r', '').replace(" ", "") - if len(text) > 500: - return f"输入文字过长!{len(text)}>100", None, None - if language == 0: - text = f"[ZH]{text}[ZH]" - elif language == 1: - text = f"[JA]{text}[JA]" - else: - text = f"{text}" - stn_tst, clean_text = get_text(text, hps_ms) - with no_grad(): - x_tst = stn_tst.unsqueeze(0) - x_tst_lengths = LongTensor([stn_tst.size(0)]) - speaker_id = LongTensor([speaker_id]) - audio = net_g_ms.infer(x_tst, x_tst_lengths, sid=speaker_id, noise_scale=noise_scale, noise_scale_w=noise_scale_w, - length_scale=length_scale)[0][0, 0].data.cpu().float().numpy() - - return "生成成功!", (22050, audio), f"生成耗时 {round(time.perf_counter()-start, 2)} s" - -def search_speaker(search_value): - for s in speakers: - if search_value == s: - return s - for s in speakers: - if search_value in s: - return s - -def change_lang(language): - if language == 0: - return 0.6, 0.668, 1.2 - else: - return 0.6, 0.668, 1.1 - -download_audio_js = """ -() =>{{ - let root = document.querySelector("body > gradio-app"); - if (root.shadowRoot != null) - root = root.shadowRoot; - let audio = root.querySelector("#tts-audio").querySelector("audio"); - let text = root.querySelector("#input-text").querySelector("textarea"); - if (audio == undefined) - return; - text = text.value; - if (text == undefined) - text = Math.floor(Math.random()*100000000); - audio = audio.src; - let oA = document.createElement("a"); - oA.download = text.substr(0, 20)+'.wav'; - oA.href = audio; - document.body.appendChild(oA); - oA.click(); - oA.remove(); -}} -""" - -if 
__name__ == '__main__': - with gr.Blocks() as app: - gr.Markdown( - "#
    VITS语音在线合成demo\n" - "
    主要有赛马娘,原神中文,原神日语,崩坏3的音色
    " - '' - '' - ) - - with gr.Tabs(): - with gr.TabItem("vits"): - with gr.Row(): - with gr.Column(): - input_text = gr.Textbox(label="Text (100 words limitation)", lines=5, value="今天晚上吃啥好呢。", elem_id=f"input-text") - lang = gr.Dropdown(label="Language", choices=["中文", "日语", "中日混合(中文用[ZH][ZH]包裹起来,日文用[JA][JA]包裹起来)"], - type="index", value="中文") - btn = gr.Button(value="Submit") - with gr.Row(): - search = gr.Textbox(label="Search Speaker", lines=1) - btn2 = gr.Button(value="Search") - sid = gr.Dropdown(label="Speaker", choices=speakers, type="index", value=speakers[228]) - with gr.Row(): - ns = gr.Slider(label="noise_scale(控制感情变化程度)", minimum=0.1, maximum=1.0, step=0.1, value=0.6, interactive=True) - nsw = gr.Slider(label="noise_scale_w(控制音素发音长度)", minimum=0.1, maximum=1.0, step=0.1, value=0.668, interactive=True) - ls = gr.Slider(label="length_scale(控制整体语速)", minimum=0.1, maximum=2.0, step=0.1, value=1.2, interactive=True) - with gr.Column(): - o1 = gr.Textbox(label="Output Message") - o2 = gr.Audio(label="Output Audio", elem_id=f"tts-audio") - o3 = gr.Textbox(label="Extra Info") - download = gr.Button("Download Audio") - btn.click(vits, inputs=[input_text, lang, sid, ns, nsw, ls], outputs=[o1, o2, o3], api_name="generate") - download.click(None, [], [], _js=download_audio_js.format()) - btn2.click(search_speaker, inputs=[search], outputs=[sid]) - lang.change(change_lang, inputs=[lang], outputs=[ns, nsw, ls]) - with gr.TabItem("可用人物一览"): - gr.Radio(label="Speaker", choices=speakers, interactive=False, type="index") - app.queue(concurrency_count=1).launch() \ No newline at end of file diff --git a/spaces/calvinchaochao/text_generation/app.py b/spaces/calvinchaochao/text_generation/app.py deleted file mode 100644 index e26e5af16ae272e6d450d1e31e6f1b0e4fe86e2e..0000000000000000000000000000000000000000 --- a/spaces/calvinchaochao/text_generation/app.py +++ /dev/null @@ -1,34 +0,0 @@ -import gradio as gr -from transformers import AutoModelForCausalLM, AutoTokenizer,BitsAndBytesConfig -from transformers.generation import GenerationConfig -quantization_config = BitsAndBytesConfig( - load_in_4bit=True, - bnb_4bit_quant_type='int8', - bnb_4bit_compute_dtype=torch.bfloat16) -# Note: The default behavior now has injection attack prevention off. 
-tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen-7B-Chat", trust_remote_code=True) - -model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen-7B-Chat", device_map="auto", trust_remote_code=True,quantization_config=quantization_config).eval() - -# Specify hyperparameters for generation -model.generation_config = GenerationConfig.from_pretrained("Qwen/Qwen-7B-Chat", trust_remote_code=True) # 可指定不同的生成长度、top_p等相关超参 - - -def generate(text): - response, history = model.chat(tokenizer, text, history=None) - - return response - -examples = [ - ["The Moon's orbit around Earth has"], - ["The smooth Borealis basin in the Northern Hemisphere covers 40%"], -] - -demo = gr.Interface( - fn=generate, - inputs=gr.inputs.Textbox(lines=5, label="Input Text"), - outputs=gr.outputs.Textbox(label="Generated Text"), - examples=examples -) - -demo.launch() diff --git a/spaces/camenduru-com/audioldm-text-to-audio-generation/audioldm/latent_diffusion/__init__.py b/spaces/camenduru-com/audioldm-text-to-audio-generation/audioldm/latent_diffusion/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/carlosalonso/Detection-video/carpeta_deteccion/projects/DensePose/densepose/modeling/densepose_checkpoint.py b/spaces/carlosalonso/Detection-video/carpeta_deteccion/projects/DensePose/densepose/modeling/densepose_checkpoint.py deleted file mode 100644 index 8c2b4f2e2cc9c6c798cf1bdb9c38dedc84058bd5..0000000000000000000000000000000000000000 --- a/spaces/carlosalonso/Detection-video/carpeta_deteccion/projects/DensePose/densepose/modeling/densepose_checkpoint.py +++ /dev/null @@ -1,35 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -from collections import OrderedDict - -from detectron2.checkpoint import DetectionCheckpointer - - -def _rename_HRNet_weights(weights): - # We detect and rename HRNet weights for DensePose. 1956 and 1716 are values that are - # common to all HRNet pretrained weights, and should be enough to accurately identify them - if ( - len(weights["model"].keys()) == 1956 - and len([k for k in weights["model"].keys() if k.startswith("stage")]) == 1716 - ): - hrnet_weights = OrderedDict() - for k in weights["model"].keys(): - hrnet_weights["backbone.bottom_up." 
+ str(k)] = weights["model"][k] - return {"model": hrnet_weights} - else: - return weights - - -class DensePoseCheckpointer(DetectionCheckpointer): - """ - Same as :class:`DetectionCheckpointer`, but is able to handle HRNet weights - """ - - def __init__(self, model, save_dir="", *, save_to_disk=None, **checkpointables): - super().__init__(model, save_dir, save_to_disk=save_to_disk, **checkpointables) - - def _load_file(self, filename: str) -> object: - """ - Adding hrnet support - """ - weights = super()._load_file(filename) - return _rename_HRNet_weights(weights) diff --git a/spaces/caslabs/midi-autocompletion/musicautobot/music_transformer/model.py b/spaces/caslabs/midi-autocompletion/musicautobot/music_transformer/model.py deleted file mode 100644 index 3341452c9cb0438ef46843e6355342f9e847135a..0000000000000000000000000000000000000000 --- a/spaces/caslabs/midi-autocompletion/musicautobot/music_transformer/model.py +++ /dev/null @@ -1,66 +0,0 @@ -from fastai.basics import * -from fastai.text.models.transformer import TransformerXL -from ..utils.attention_mask import rand_window_mask - -class MusicTransformerXL(TransformerXL): - "Exactly like fastai's TransformerXL, but with more aggressive attention mask: see `rand_window_mask`" - def __init__(self, *args, encode_position=True, mask_steps=1, **kwargs): - import inspect - sig = inspect.signature(TransformerXL) - arg_params = { k:kwargs[k] for k in sig.parameters if k in kwargs } - super().__init__(*args, **arg_params) - - self.encode_position = encode_position - if self.encode_position: self.beat_enc = BeatPositionEncoder(kwargs['d_model']) - - self.mask_steps=mask_steps - - - def forward(self, x): - #The hidden state has to be initiliazed in the forward pass for nn.DataParallel - if self.mem_len > 0 and not self.init: - self.reset() - self.init = True - - benc = 0 - if self.encode_position: - x,pos = x['x'], x['pos'] - benc = self.beat_enc(pos) - - bs,x_len = x.size() - inp = self.drop_emb(self.encoder(x) + benc) #.mul_(self.d_model ** 0.5) - m_len = self.hidden[0].size(1) if hasattr(self, 'hidden') and len(self.hidden[0].size()) > 1 else 0 - seq_len = m_len + x_len - - mask = rand_window_mask(x_len, m_len, inp.device, max_size=self.mask_steps, is_eval=not self.training) if self.mask else None - if m_len == 0: mask[...,0,0] = 0 - #[None,:,:None] for einsum implementation of attention - hids = [] - pos = torch.arange(seq_len-1, -1, -1, device=inp.device, dtype=inp.dtype) - pos_enc = self.pos_enc(pos) - hids.append(inp) - for i, layer in enumerate(self.layers): - mem = self.hidden[i] if self.mem_len > 0 else None - inp = layer(inp, r=pos_enc, u=self.u, v=self.v, mask=mask, mem=mem) - hids.append(inp) - core_out = inp[:,-x_len:] - if self.mem_len > 0 : self._update_mems(hids) - return (self.hidden if self.mem_len > 0 else [core_out]),[core_out] - - - # Beat encoder -class BeatPositionEncoder(nn.Module): - "Embedding + positional encoding + dropout" - def __init__(self, emb_sz:int, beat_len=32, max_bar_len=1024): - super().__init__() - - self.beat_len, self.max_bar_len = beat_len, max_bar_len - self.beat_enc = nn.Embedding(beat_len, emb_sz, padding_idx=0) - self.bar_enc = nn.Embedding(max_bar_len, emb_sz, padding_idx=0) - - def forward(self, pos): - beat_enc = self.beat_enc(pos % self.beat_len) - bar_pos = pos // self.beat_len % self.max_bar_len - bar_pos[bar_pos >= self.max_bar_len] = self.max_bar_len - 1 - bar_enc = self.bar_enc((bar_pos)) - return beat_enc + bar_enc \ No newline at end of file diff --git 
a/spaces/chjun/movie_rating_bot/README.md b/spaces/chjun/movie_rating_bot/README.md deleted file mode 100644 index b2192ec50ca2c4ec61c4a7553619c47abea11a50..0000000000000000000000000000000000000000 --- a/spaces/chjun/movie_rating_bot/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Movie Rating Bot -emoji: 🏢 -colorFrom: purple -colorTo: purple -sdk: gradio -sdk_version: 3.21.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/faiss/contrib/inspect_tools.py b/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/faiss/contrib/inspect_tools.py deleted file mode 100644 index 1182156a82154367e3da83f526a7e8b5212c8057..0000000000000000000000000000000000000000 --- a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/faiss/contrib/inspect_tools.py +++ /dev/null @@ -1,84 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import numpy as np -import faiss - - -def get_invlist(invlists, l): - """ returns the inverted lists content as a pair of (list_ids, list_codes). - The codes are reshaped to a proper size - """ - invlists = faiss.downcast_InvertedLists(invlists) - ls = invlists.list_size(l) - list_ids = np.zeros(ls, dtype='int64') - ids = codes = None - try: - ids = invlists.get_ids(l) - if ls > 0: - faiss.memcpy(faiss.swig_ptr(list_ids), ids, list_ids.nbytes) - codes = invlists.get_codes(l) - if invlists.code_size != faiss.InvertedLists.INVALID_CODE_SIZE: - list_codes = np.zeros((ls, invlists.code_size), dtype='uint8') - else: - # it's a BlockInvertedLists - npb = invlists.n_per_block - bs = invlists.block_size - ls_round = (ls + npb - 1) // npb - list_codes = np.zeros((ls_round, bs // npb, npb), dtype='uint8') - if ls > 0: - faiss.memcpy(faiss.swig_ptr(list_codes), codes, list_codes.nbytes) - finally: - if ids is not None: - invlists.release_ids(l, ids) - if codes is not None: - invlists.release_codes(l, codes) - return list_ids, list_codes - - -def get_invlist_sizes(invlists): - """ return the array of sizes of the inverted lists """ - return np.array([ - invlists.list_size(i) - for i in range(invlists.nlist) - ], dtype='int64') - - -def print_object_fields(obj): - """ list values all fields of an object known to SWIG """ - - for name in obj.__class__.__swig_getmethods__: - print(f"{name} = {getattr(obj, name)}") - - -def get_pq_centroids(pq): - """ return the PQ centroids as an array """ - cen = faiss.vector_to_array(pq.centroids) - return cen.reshape(pq.M, pq.ksub, pq.dsub) - - -def get_LinearTransform_matrix(pca): - """ extract matrix + bias from the PCA object - works for any linear transform (OPQ, random rotation, etc.) 
- """ - b = faiss.vector_to_array(pca.b) - A = faiss.vector_to_array(pca.A).reshape(pca.d_out, pca.d_in) - return A, b - - -def get_additive_quantizer_codebooks(aq): - """ return to codebooks of an additive quantizer """ - codebooks = faiss.vector_to_array(aq.codebooks).reshape(-1, aq.d) - co = faiss.vector_to_array(aq.codebook_offsets) - return [ - codebooks[co[i]:co[i + 1]] - for i in range(aq.M) - ] - - -def get_flat_data(index): - """ copy and return the data matrix in an IndexFlat """ - xb = faiss.vector_to_array(index.codes).view("float32") - return xb.reshape(index.ntotal, index.d) diff --git a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/filetype/__init__.py b/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/filetype/__init__.py deleted file mode 100644 index 3d0610ea2e39566e8534b6de3a631abfc3edbef9..0000000000000000000000000000000000000000 --- a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/filetype/__init__.py +++ /dev/null @@ -1,10 +0,0 @@ -# -*- coding: utf-8 -*- - -from __future__ import absolute_import - -from .filetype import * # noqa -from .helpers import * # noqa -from .match import * # noqa - -# Current package semver version -__version__ = version = '1.2.0' diff --git a/spaces/cihyFjudo/fairness-paper-search/MICROSOFT.OFFICE.2010.ProfessionalPlus.64Bit sMileyBoY07 H33T.iso The Ultimate Guide to Office 2010 Professional Plus.md b/spaces/cihyFjudo/fairness-paper-search/MICROSOFT.OFFICE.2010.ProfessionalPlus.64Bit sMileyBoY07 H33T.iso The Ultimate Guide to Office 2010 Professional Plus.md deleted file mode 100644 index d88e80bf011a883e18a3487e8c20d05ebf5631c7..0000000000000000000000000000000000000000 --- a/spaces/cihyFjudo/fairness-paper-search/MICROSOFT.OFFICE.2010.ProfessionalPlus.64Bit sMileyBoY07 H33T.iso The Ultimate Guide to Office 2010 Professional Plus.md +++ /dev/null @@ -1,6 +0,0 @@ -

    MICROSOFT.OFFICE.2010.ProfessionalPlus.64Bit {sMileyBoY07} {H33T}.iso


    DOWNLOAD ✓✓✓ https://tinurli.com/2uwhJR



    - - aaccfb2cb3
    -
    -
    -

    diff --git a/spaces/cihyFjudo/fairness-paper-search/Saiunkoku Monogatari The Best Sites to Stream the Anime with High Quality.md b/spaces/cihyFjudo/fairness-paper-search/Saiunkoku Monogatari The Best Sites to Stream the Anime with High Quality.md deleted file mode 100644 index 564feb00d665b34d5309cfa2670d16af3be915b8..0000000000000000000000000000000000000000 --- a/spaces/cihyFjudo/fairness-paper-search/Saiunkoku Monogatari The Best Sites to Stream the Anime with High Quality.md +++ /dev/null @@ -1,6 +0,0 @@ -

    saiunkoku monogatari nonton anime org


    Download ✓✓✓ https://tinurli.com/2uwkRU



    -
    - aaccfb2cb3
    -
    -
    -

    diff --git a/spaces/codellama/codellama-13b-chat/style.css b/spaces/codellama/codellama-13b-chat/style.css deleted file mode 100644 index 303c3d7ef3b06c42b211797cd2d5af9800589092..0000000000000000000000000000000000000000 --- a/spaces/codellama/codellama-13b-chat/style.css +++ /dev/null @@ -1,16 +0,0 @@ -h1 { - text-align: center; -} - -#duplicate-button { - margin: auto; - color: white; - background: #1565c0; - border-radius: 100vh; -} - -#component-0 { - max-width: 900px; - margin: auto; - padding-top: 1.5rem; -} diff --git a/spaces/codertoro/gpt-academic/crazy_functions/test_project/cpp/longcode/jpge.cpp b/spaces/codertoro/gpt-academic/crazy_functions/test_project/cpp/longcode/jpge.cpp deleted file mode 100644 index 2e26b71ed5aad0d46478fdbcd3a880be1401f946..0000000000000000000000000000000000000000 --- a/spaces/codertoro/gpt-academic/crazy_functions/test_project/cpp/longcode/jpge.cpp +++ /dev/null @@ -1,1049 +0,0 @@ -// jpge.cpp - C++ class for JPEG compression. -// Public domain, Rich Geldreich -// v1.01, Dec. 18, 2010 - Initial release -// v1.02, Apr. 6, 2011 - Removed 2x2 ordered dither in H2V1 chroma subsampling method load_block_16_8_8(). (The rounding factor was 2, when it should have been 1. Either way, it wasn't helping.) -// v1.03, Apr. 16, 2011 - Added support for optimized Huffman code tables, optimized dynamic memory allocation down to only 1 alloc. -// Also from Alex Evans: Added RGBA support, linear memory allocator (no longer needed in v1.03). -// v1.04, May. 19, 2012: Forgot to set m_pFile ptr to NULL in cfile_stream::close(). Thanks to Owen Kaluza for reporting this bug. -// Code tweaks to fix VS2008 static code analysis warnings (all looked harmless). -// Code review revealed method load_block_16_8_8() (used for the non-default H2V1 sampling mode to downsample chroma) somehow didn't get the rounding factor fix from v1.02. - -#include "jpge.h" - -#include -#include -#if PLATFORM_WINDOWS -#include -#endif - -#define JPGE_MAX(a,b) (((a)>(b))?(a):(b)) -#define JPGE_MIN(a,b) (((a)<(b))?(a):(b)) - -namespace jpge { - -static inline void *jpge_malloc(size_t nSize) { return FMemory::Malloc(nSize); } -static inline void jpge_free(void *p) { FMemory::Free(p);; } - -// Various JPEG enums and tables. 
-enum { M_SOF0 = 0xC0, M_DHT = 0xC4, M_SOI = 0xD8, M_EOI = 0xD9, M_SOS = 0xDA, M_DQT = 0xDB, M_APP0 = 0xE0 }; -enum { DC_LUM_CODES = 12, AC_LUM_CODES = 256, DC_CHROMA_CODES = 12, AC_CHROMA_CODES = 256, MAX_HUFF_SYMBOLS = 257, MAX_HUFF_CODESIZE = 32 }; - -static uint8 s_zag[64] = { 0,1,8,16,9,2,3,10,17,24,32,25,18,11,4,5,12,19,26,33,40,48,41,34,27,20,13,6,7,14,21,28,35,42,49,56,57,50,43,36,29,22,15,23,30,37,44,51,58,59,52,45,38,31,39,46,53,60,61,54,47,55,62,63 }; -static int16 s_std_lum_quant[64] = { 16,11,12,14,12,10,16,14,13,14,18,17,16,19,24,40,26,24,22,22,24,49,35,37,29,40,58,51,61,60,57,51,56,55,64,72,92,78,64,68,87,69,55,56,80,109,81,87,95,98,103,104,103,62,77,113,121,112,100,120,92,101,103,99 }; -static int16 s_std_croma_quant[64] = { 17,18,18,24,21,24,47,26,26,47,99,66,56,66,99,99,99,99,99,99,99,99,99,99,99,99,99,99,99,99,99,99,99,99,99,99,99,99,99,99,99,99,99,99,99,99,99,99,99,99,99,99,99,99,99,99,99,99,99,99,99,99,99,99 }; -static uint8 s_dc_lum_bits[17] = { 0,0,1,5,1,1,1,1,1,1,0,0,0,0,0,0,0 }; -static uint8 s_dc_lum_val[DC_LUM_CODES] = { 0,1,2,3,4,5,6,7,8,9,10,11 }; -static uint8 s_ac_lum_bits[17] = { 0,0,2,1,3,3,2,4,3,5,5,4,4,0,0,1,0x7d }; -static uint8 s_ac_lum_val[AC_LUM_CODES] = -{ - 0x01,0x02,0x03,0x00,0x04,0x11,0x05,0x12,0x21,0x31,0x41,0x06,0x13,0x51,0x61,0x07,0x22,0x71,0x14,0x32,0x81,0x91,0xa1,0x08,0x23,0x42,0xb1,0xc1,0x15,0x52,0xd1,0xf0, - 0x24,0x33,0x62,0x72,0x82,0x09,0x0a,0x16,0x17,0x18,0x19,0x1a,0x25,0x26,0x27,0x28,0x29,0x2a,0x34,0x35,0x36,0x37,0x38,0x39,0x3a,0x43,0x44,0x45,0x46,0x47,0x48,0x49, - 0x4a,0x53,0x54,0x55,0x56,0x57,0x58,0x59,0x5a,0x63,0x64,0x65,0x66,0x67,0x68,0x69,0x6a,0x73,0x74,0x75,0x76,0x77,0x78,0x79,0x7a,0x83,0x84,0x85,0x86,0x87,0x88,0x89, - 0x8a,0x92,0x93,0x94,0x95,0x96,0x97,0x98,0x99,0x9a,0xa2,0xa3,0xa4,0xa5,0xa6,0xa7,0xa8,0xa9,0xaa,0xb2,0xb3,0xb4,0xb5,0xb6,0xb7,0xb8,0xb9,0xba,0xc2,0xc3,0xc4,0xc5, - 0xc6,0xc7,0xc8,0xc9,0xca,0xd2,0xd3,0xd4,0xd5,0xd6,0xd7,0xd8,0xd9,0xda,0xe1,0xe2,0xe3,0xe4,0xe5,0xe6,0xe7,0xe8,0xe9,0xea,0xf1,0xf2,0xf3,0xf4,0xf5,0xf6,0xf7,0xf8, - 0xf9,0xfa -}; -static uint8 s_dc_chroma_bits[17] = { 0,0,3,1,1,1,1,1,1,1,1,1,0,0,0,0,0 }; -static uint8 s_dc_chroma_val[DC_CHROMA_CODES] = { 0,1,2,3,4,5,6,7,8,9,10,11 }; -static uint8 s_ac_chroma_bits[17] = { 0,0,2,1,2,4,4,3,4,7,5,4,4,0,1,2,0x77 }; -static uint8 s_ac_chroma_val[AC_CHROMA_CODES] = -{ - 0x00,0x01,0x02,0x03,0x11,0x04,0x05,0x21,0x31,0x06,0x12,0x41,0x51,0x07,0x61,0x71,0x13,0x22,0x32,0x81,0x08,0x14,0x42,0x91,0xa1,0xb1,0xc1,0x09,0x23,0x33,0x52,0xf0, - 0x15,0x62,0x72,0xd1,0x0a,0x16,0x24,0x34,0xe1,0x25,0xf1,0x17,0x18,0x19,0x1a,0x26,0x27,0x28,0x29,0x2a,0x35,0x36,0x37,0x38,0x39,0x3a,0x43,0x44,0x45,0x46,0x47,0x48, - 0x49,0x4a,0x53,0x54,0x55,0x56,0x57,0x58,0x59,0x5a,0x63,0x64,0x65,0x66,0x67,0x68,0x69,0x6a,0x73,0x74,0x75,0x76,0x77,0x78,0x79,0x7a,0x82,0x83,0x84,0x85,0x86,0x87, - 0x88,0x89,0x8a,0x92,0x93,0x94,0x95,0x96,0x97,0x98,0x99,0x9a,0xa2,0xa3,0xa4,0xa5,0xa6,0xa7,0xa8,0xa9,0xaa,0xb2,0xb3,0xb4,0xb5,0xb6,0xb7,0xb8,0xb9,0xba,0xc2,0xc3, - 0xc4,0xc5,0xc6,0xc7,0xc8,0xc9,0xca,0xd2,0xd3,0xd4,0xd5,0xd6,0xd7,0xd8,0xd9,0xda,0xe2,0xe3,0xe4,0xe5,0xe6,0xe7,0xe8,0xe9,0xea,0xf2,0xf3,0xf4,0xf5,0xf6,0xf7,0xf8, - 0xf9,0xfa -}; - -// Low-level helper functions. 
-template <class T> inline void clear_obj(T &obj) { memset(&obj, 0, sizeof(obj)); }
-
-const int YR = 19595, YG = 38470, YB = 7471, CB_R = -11059, CB_G = -21709, CB_B = 32768, CR_R = 32768, CR_G = -27439, CR_B = -5329;
-static inline uint8 clamp(int i) { if (static_cast<uint>(i) > 255U) { if (i < 0) i = 0; else if (i > 255) i = 255; } return static_cast<uint8>(i); }
-
-static void RGB_to_YCC(uint8* pDst, const uint8 *pSrc, int num_pixels)
-{
-  for ( ; num_pixels; pDst += 3, pSrc += 3, num_pixels--)
-  {
-    const int r = pSrc[0], g = pSrc[1], b = pSrc[2];
-    pDst[0] = static_cast<uint8>((r * YR + g * YG + b * YB + 32768) >> 16);
-    pDst[1] = clamp(128 + ((r * CB_R + g * CB_G + b * CB_B + 32768) >> 16));
-    pDst[2] = clamp(128 + ((r * CR_R + g * CR_G + b * CR_B + 32768) >> 16));
-  }
-}
-
-static void RGB_to_Y(uint8* pDst, const uint8 *pSrc, int num_pixels)
-{
-  for ( ; num_pixels; pDst++, pSrc += 3, num_pixels--)
-    pDst[0] = static_cast<uint8>((pSrc[0] * YR + pSrc[1] * YG + pSrc[2] * YB + 32768) >> 16);
-}
-
-static void RGBA_to_YCC(uint8* pDst, const uint8 *pSrc, int num_pixels)
-{
-  for ( ; num_pixels; pDst += 3, pSrc += 4, num_pixels--)
-  {
-    const int r = pSrc[0], g = pSrc[1], b = pSrc[2];
-    pDst[0] = static_cast<uint8>((r * YR + g * YG + b * YB + 32768) >> 16);
-    pDst[1] = clamp(128 + ((r * CB_R + g * CB_G + b * CB_B + 32768) >> 16));
-    pDst[2] = clamp(128 + ((r * CR_R + g * CR_G + b * CR_B + 32768) >> 16));
-  }
-}
-
-static void RGBA_to_Y(uint8* pDst, const uint8 *pSrc, int num_pixels)
-{
-  for ( ; num_pixels; pDst++, pSrc += 4, num_pixels--)
-    pDst[0] = static_cast<uint8>((pSrc[0] * YR + pSrc[1] * YG + pSrc[2] * YB + 32768) >> 16);
-}
-
-static void Y_to_YCC(uint8* pDst, const uint8* pSrc, int num_pixels)
-{
-  for( ; num_pixels; pDst += 3, pSrc++, num_pixels--) { pDst[0] = pSrc[0]; pDst[1] = 128; pDst[2] = 128; }
-}
-
-// Forward DCT - DCT derived from jfdctint.
-#define CONST_BITS 13
-#define ROW_BITS 2
-#define DCT_DESCALE(x, n) (((x) + (((int32)1) << ((n) - 1))) >> (n))
-#define DCT_MUL(var, c) (static_cast<int16>(var) * static_cast<int32>(c))
-#define DCT1D(s0, s1, s2, s3, s4, s5, s6, s7) \
-  int32 t0 = s0 + s7, t7 = s0 - s7, t1 = s1 + s6, t6 = s1 - s6, t2 = s2 + s5, t5 = s2 - s5, t3 = s3 + s4, t4 = s3 - s4; \
-  int32 t10 = t0 + t3, t13 = t0 - t3, t11 = t1 + t2, t12 = t1 - t2; \
-  int32 u1 = DCT_MUL(t12 + t13, 4433); \
-  s2 = u1 + DCT_MUL(t13, 6270); \
-  s6 = u1 + DCT_MUL(t12, -15137); \
-  u1 = t4 + t7; \
-  int32 u2 = t5 + t6, u3 = t4 + t6, u4 = t5 + t7; \
-  int32 z5 = DCT_MUL(u3 + u4, 9633); \
-  t4 = DCT_MUL(t4, 2446); t5 = DCT_MUL(t5, 16819); \
-  t6 = DCT_MUL(t6, 25172); t7 = DCT_MUL(t7, 12299); \
-  u1 = DCT_MUL(u1, -7373); u2 = DCT_MUL(u2, -20995); \
-  u3 = DCT_MUL(u3, -16069); u4 = DCT_MUL(u4, -3196); \
-  u3 += z5; u4 += z5; \
-  s0 = t10 + t11; s1 = t7 + u1 + u4; s3 = t6 + u2 + u3; s4 = t10 - t11; s5 = t5 + u2 + u4; s7 = t4 + u1 + u3;
-
-static void DCT2D(int32 *p)
-{
-  int32 c, *q = p;
-  for (c = 7; c >= 0; c--, q += 8)
-  {
-    int32 s0 = q[0], s1 = q[1], s2 = q[2], s3 = q[3], s4 = q[4], s5 = q[5], s6 = q[6], s7 = q[7];
-    DCT1D(s0, s1, s2, s3, s4, s5, s6, s7);
-    q[0] = s0 << ROW_BITS; q[1] = DCT_DESCALE(s1, CONST_BITS-ROW_BITS); q[2] = DCT_DESCALE(s2, CONST_BITS-ROW_BITS); q[3] = DCT_DESCALE(s3, CONST_BITS-ROW_BITS);
-    q[4] = s4 << ROW_BITS; q[5] = DCT_DESCALE(s5, CONST_BITS-ROW_BITS); q[6] = DCT_DESCALE(s6, CONST_BITS-ROW_BITS); q[7] = DCT_DESCALE(s7, CONST_BITS-ROW_BITS);
-  }
-  for (q = p, c = 7; c >= 0; c--, q++)
-  {
-    int32 s0 = q[0*8], s1 = q[1*8], s2 = q[2*8], s3 = q[3*8], s4 = q[4*8], s5 = q[5*8], s6 = q[6*8], s7 = q[7*8];
-    DCT1D(s0, s1, s2, s3, s4, s5, s6, s7);
-    q[0*8] = DCT_DESCALE(s0, ROW_BITS+3); q[1*8] = DCT_DESCALE(s1, CONST_BITS+ROW_BITS+3); q[2*8] = DCT_DESCALE(s2, CONST_BITS+ROW_BITS+3); q[3*8] = DCT_DESCALE(s3, CONST_BITS+ROW_BITS+3);
-    q[4*8] = DCT_DESCALE(s4, ROW_BITS+3); q[5*8] = DCT_DESCALE(s5, CONST_BITS+ROW_BITS+3); q[6*8] = DCT_DESCALE(s6, CONST_BITS+ROW_BITS+3); q[7*8] = DCT_DESCALE(s7, CONST_BITS+ROW_BITS+3);
-  }
-}
-
-struct sym_freq { uint m_key, m_sym_index; };
-
-// Radix sorts sym_freq[] array by 32-bit key m_key. Returns ptr to sorted values.
-static inline sym_freq* radix_sort_syms(uint num_syms, sym_freq* pSyms0, sym_freq* pSyms1)
-{
-  const uint cMaxPasses = 4;
-  uint32 hist[256 * cMaxPasses]; clear_obj(hist);
-  for (uint i = 0; i < num_syms; i++) { uint freq = pSyms0[i].m_key; hist[freq & 0xFF]++; hist[256 + ((freq >> 8) & 0xFF)]++; hist[256*2 + ((freq >> 16) & 0xFF)]++; hist[256*3 + ((freq >> 24) & 0xFF)]++; }
-  sym_freq* pCur_syms = pSyms0, *pNew_syms = pSyms1;
-  uint total_passes = cMaxPasses; while ((total_passes > 1) && (num_syms == hist[(total_passes - 1) * 256])) total_passes--;
-  for (uint pass_shift = 0, pass = 0; pass < total_passes; pass++, pass_shift += 8)
-  {
-    const uint32* pHist = &hist[pass << 8];
-    uint offsets[256], cur_ofs = 0;
-    for (uint i = 0; i < 256; i++) { offsets[i] = cur_ofs; cur_ofs += pHist[i]; }
-    for (uint i = 0; i < num_syms; i++)
-      pNew_syms[offsets[(pCur_syms[i].m_key >> pass_shift) & 0xFF]++] = pCur_syms[i];
-    sym_freq* t = pCur_syms; pCur_syms = pNew_syms; pNew_syms = t;
-  }
-  return pCur_syms;
-}
-
-// calculate_minimum_redundancy() originally written by: Alistair Moffat, alistair@cs.mu.oz.au, Jyrki Katajainen, jyrki@diku.dk, November 1996.
-static void calculate_minimum_redundancy(sym_freq *A, int n)
-{
-  int root, leaf, next, avbl, used, dpth;
-  if (n==0) return; else if (n==1) { A[0].m_key = 1; return; }
-  A[0].m_key += A[1].m_key; root = 0; leaf = 2;
-  for (next=1; next < n-1; next++)
-  {
-    if (leaf>=n || A[root].m_key<A[leaf].m_key) { A[next].m_key = A[root].m_key; A[root++].m_key = next; } else A[next].m_key = A[leaf++].m_key;
-    if (leaf>=n || (root<next && A[root].m_key<A[leaf].m_key)) { A[next].m_key += A[root].m_key; A[root++].m_key = next; } else A[next].m_key += A[leaf++].m_key;
-  }
-  A[n-2].m_key = 0;
-  for (next=n-3; next>=0; next--) A[next].m_key = A[A[next].m_key].m_key+1;
-  avbl = 1; used = dpth = 0; root = n-2; next = n-1;
-  while (avbl>0)
-  {
-    while (root>=0 && (int)A[root].m_key==dpth) { used++; root--; }
-    while (avbl>used) { A[next--].m_key = dpth; avbl--; }
-    avbl = 2*used; dpth++; used = 0;
-  }
-}
-
-// Limits canonical Huffman code table's max code size to max_code_size.
-static void huffman_enforce_max_code_size(int *pNum_codes, int code_list_len, int max_code_size)
-{
-  if (code_list_len <= 1) return;
-
-  for (int i = max_code_size + 1; i <= MAX_HUFF_CODESIZE; i++) pNum_codes[max_code_size] += pNum_codes[i];
-
-  uint32 total = 0;
-  for (int i = max_code_size; i > 0; i--)
-    total += (((uint32)pNum_codes[i]) << (max_code_size - i));
-
-  while (total != (1UL << max_code_size))
-  {
-    pNum_codes[max_code_size]--;
-    for (int i = max_code_size - 1; i > 0; i--)
-    {
-      if (pNum_codes[i]) { pNum_codes[i]--; pNum_codes[i + 1] += 2; break; }
-    }
-    total--;
-  }
-}
-
-// Generates an optimized Huffman table.
-void jpeg_encoder::optimize_huffman_table(int table_num, int table_len)
-{
-  sym_freq syms0[MAX_HUFF_SYMBOLS], syms1[MAX_HUFF_SYMBOLS];
-  syms0[0].m_key = 1; syms0[0].m_sym_index = 0; // dummy symbol, assures that no valid code contains all 1's
-  int num_used_syms = 1;
-  const uint32 *pSym_count = &m_huff_count[table_num][0];
-  for (int i = 0; i < table_len; i++)
-    if (pSym_count[i]) { syms0[num_used_syms].m_key = pSym_count[i]; syms0[num_used_syms++].m_sym_index = i + 1; }
-  sym_freq* pSyms = radix_sort_syms(num_used_syms, syms0, syms1);
-  calculate_minimum_redundancy(pSyms, num_used_syms);
-
-  // Count the # of symbols of each code size.
-  int num_codes[1 + MAX_HUFF_CODESIZE]; clear_obj(num_codes);
-  for (int i = 0; i < num_used_syms; i++)
-    num_codes[pSyms[i].m_key]++;
-
-  const uint JPGE_CODE_SIZE_LIMIT = 16; // the maximum possible size of a JPEG Huffman code (valid range is [9,16] - 9 vs. 8 because of the dummy symbol)
-  huffman_enforce_max_code_size(num_codes, num_used_syms, JPGE_CODE_SIZE_LIMIT);
-
-  // Compute m_huff_bits array, which contains the # of symbols per code size.
-  clear_obj(m_huff_bits[table_num]);
-  for (int i = 1; i <= (int)JPGE_CODE_SIZE_LIMIT; i++)
-    m_huff_bits[table_num][i] = static_cast<uint8>(num_codes[i]);
-
-  // Remove the dummy symbol added above, which must be in largest bucket.
-  for (int i = JPGE_CODE_SIZE_LIMIT; i >= 1; i--)
-  {
-    if (m_huff_bits[table_num][i]) { m_huff_bits[table_num][i]--; break; }
-  }
-
-  // Compute the m_huff_val array, which contains the symbol indices sorted by code size (smallest to largest).
-  for (int i = num_used_syms - 1; i >= 1; i--)
-    m_huff_val[table_num][num_used_syms - 1 - i] = static_cast<uint8>(pSyms[i].m_sym_index - 1);
-}
-
-// JPEG marker generation.
-void jpeg_encoder::emit_byte(uint8 i) -{ - m_all_stream_writes_succeeded = m_all_stream_writes_succeeded && m_pStream->put_obj(i); -} - -void jpeg_encoder::emit_word(uint i) -{ - emit_byte(uint8(i >> 8)); emit_byte(uint8(i & 0xFF)); -} - -void jpeg_encoder::emit_marker(int marker) -{ - emit_byte(uint8(0xFF)); emit_byte(uint8(marker)); -} - -// Emit JFIF marker -void jpeg_encoder::emit_jfif_app0() -{ - emit_marker(M_APP0); - emit_word(2 + 4 + 1 + 2 + 1 + 2 + 2 + 1 + 1); - emit_byte(0x4A); emit_byte(0x46); emit_byte(0x49); emit_byte(0x46); /* Identifier: ASCII "JFIF" */ - emit_byte(0); - emit_byte(1); /* Major version */ - emit_byte(1); /* Minor version */ - emit_byte(0); /* Density unit */ - emit_word(1); - emit_word(1); - emit_byte(0); /* No thumbnail image */ - emit_byte(0); -} - -// Emit quantization tables -void jpeg_encoder::emit_dqt() -{ - for (int i = 0; i < ((m_num_components == 3) ? 2 : 1); i++) - { - emit_marker(M_DQT); - emit_word(64 + 1 + 2); - emit_byte(static_cast(i)); - for (int j = 0; j < 64; j++) - emit_byte(static_cast(m_quantization_tables[i][j])); - } -} - -// Emit start of frame marker -void jpeg_encoder::emit_sof() -{ - emit_marker(M_SOF0); /* baseline */ - emit_word(3 * m_num_components + 2 + 5 + 1); - emit_byte(8); /* precision */ - emit_word(m_image_y); - emit_word(m_image_x); - emit_byte(m_num_components); - for (int i = 0; i < m_num_components; i++) - { - emit_byte(static_cast(i + 1)); /* component ID */ - emit_byte((m_comp_h_samp[i] << 4) + m_comp_v_samp[i]); /* h and v sampling */ - emit_byte(i > 0); /* quant. table num */ - } -} - -// Emit Huffman table. -void jpeg_encoder::emit_dht(uint8 *bits, uint8 *val, int index, bool ac_flag) -{ - emit_marker(M_DHT); - - int length = 0; - for (int i = 1; i <= 16; i++) - length += bits[i]; - - emit_word(length + 2 + 1 + 16); - emit_byte(static_cast(index + (ac_flag << 4))); - - for (int i = 1; i <= 16; i++) - emit_byte(bits[i]); - - for (int i = 0; i < length; i++) - emit_byte(val[i]); -} - -// Emit all Huffman tables. -void jpeg_encoder::emit_dhts() -{ - emit_dht(m_huff_bits[0+0], m_huff_val[0+0], 0, false); - emit_dht(m_huff_bits[2+0], m_huff_val[2+0], 0, true); - if (m_num_components == 3) - { - emit_dht(m_huff_bits[0+1], m_huff_val[0+1], 1, false); - emit_dht(m_huff_bits[2+1], m_huff_val[2+1], 1, true); - } -} - -// emit start of scan -void jpeg_encoder::emit_sos() -{ - emit_marker(M_SOS); - emit_word(2 * m_num_components + 2 + 1 + 3); - emit_byte(m_num_components); - for (int i = 0; i < m_num_components; i++) - { - emit_byte(static_cast(i + 1)); - if (i == 0) - emit_byte((0 << 4) + 0); - else - emit_byte((1 << 4) + 1); - } - emit_byte(0); /* spectral selection */ - emit_byte(63); - emit_byte(0); -} - -// Emit all markers at beginning of image file. -void jpeg_encoder::emit_markers() -{ - emit_marker(M_SOI); - emit_jfif_app0(); - emit_dqt(); - emit_sof(); - emit_dhts(); - emit_sos(); -} - -// Compute the actual canonical Huffman codes/code sizes given the JPEG huff bits and val arrays. 
-void jpeg_encoder::compute_huffman_table(uint *codes, uint8 *code_sizes, uint8 *bits, uint8 *val) -{ - int i, l, last_p, si; - uint8 huff_size[257]; - uint huff_code[257]; - uint code; - - int p = 0; - for (l = 1; l <= 16; l++) - for (i = 1; i <= bits[l]; i++) - huff_size[p++] = (char)l; - - huff_size[p] = 0; last_p = p; // write sentinel - - code = 0; si = huff_size[0]; p = 0; - - while (huff_size[p]) - { - while (huff_size[p] == si) - huff_code[p++] = code++; - code <<= 1; - si++; - } - - memset(codes, 0, sizeof(codes[0])*256); - memset(code_sizes, 0, sizeof(code_sizes[0])*256); - for (p = 0; p < last_p; p++) - { - codes[val[p]] = huff_code[p]; - code_sizes[val[p]] = huff_size[p]; - } -} - -// Quantization table generation. -void jpeg_encoder::compute_quant_table(int32 *pDst, int16 *pSrc) -{ - int32 q; - if (m_params.m_quality < 50) - q = 5000 / m_params.m_quality; - else - q = 200 - m_params.m_quality * 2; - for (int i = 0; i < 64; i++) - { - int32 j = *pSrc++; j = (j * q + 50L) / 100L; - *pDst++ = JPGE_MIN(JPGE_MAX(j, 1), 255); - } -} - -// Higher-level methods. -void jpeg_encoder::first_pass_init() -{ - m_bit_buffer = 0; m_bits_in = 0; - memset(m_last_dc_val, 0, 3 * sizeof(m_last_dc_val[0])); - m_mcu_y_ofs = 0; - m_pass_num = 1; -} - -bool jpeg_encoder::second_pass_init() -{ - compute_huffman_table(&m_huff_codes[0+0][0], &m_huff_code_sizes[0+0][0], m_huff_bits[0+0], m_huff_val[0+0]); - compute_huffman_table(&m_huff_codes[2+0][0], &m_huff_code_sizes[2+0][0], m_huff_bits[2+0], m_huff_val[2+0]); - if (m_num_components > 1) - { - compute_huffman_table(&m_huff_codes[0+1][0], &m_huff_code_sizes[0+1][0], m_huff_bits[0+1], m_huff_val[0+1]); - compute_huffman_table(&m_huff_codes[2+1][0], &m_huff_code_sizes[2+1][0], m_huff_bits[2+1], m_huff_val[2+1]); - } - first_pass_init(); - emit_markers(); - m_pass_num = 2; - return true; -} - -bool jpeg_encoder::jpg_open(int p_x_res, int p_y_res, int src_channels) -{ - m_num_components = 3; - switch (m_params.m_subsampling) - { - case Y_ONLY: - { - m_num_components = 1; - m_comp_h_samp[0] = 1; m_comp_v_samp[0] = 1; - m_mcu_x = 8; m_mcu_y = 8; - break; - } - case H1V1: - { - m_comp_h_samp[0] = 1; m_comp_v_samp[0] = 1; - m_comp_h_samp[1] = 1; m_comp_v_samp[1] = 1; - m_comp_h_samp[2] = 1; m_comp_v_samp[2] = 1; - m_mcu_x = 8; m_mcu_y = 8; - break; - } - case H2V1: - { - m_comp_h_samp[0] = 2; m_comp_v_samp[0] = 1; - m_comp_h_samp[1] = 1; m_comp_v_samp[1] = 1; - m_comp_h_samp[2] = 1; m_comp_v_samp[2] = 1; - m_mcu_x = 16; m_mcu_y = 8; - break; - } - case H2V2: - { - m_comp_h_samp[0] = 2; m_comp_v_samp[0] = 2; - m_comp_h_samp[1] = 1; m_comp_v_samp[1] = 1; - m_comp_h_samp[2] = 1; m_comp_v_samp[2] = 1; - m_mcu_x = 16; m_mcu_y = 16; - } - } - - m_image_x = p_x_res; m_image_y = p_y_res; - m_image_bpp = src_channels; - m_image_bpl = m_image_x * src_channels; - m_image_x_mcu = (m_image_x + m_mcu_x - 1) & (~(m_mcu_x - 1)); - m_image_y_mcu = (m_image_y + m_mcu_y - 1) & (~(m_mcu_y - 1)); - m_image_bpl_xlt = m_image_x * m_num_components; - m_image_bpl_mcu = m_image_x_mcu * m_num_components; - m_mcus_per_row = m_image_x_mcu / m_mcu_x; - - if ((m_mcu_lines[0] = static_cast(jpge_malloc(m_image_bpl_mcu * m_mcu_y))) == NULL) return false; - for (int i = 1; i < m_mcu_y; i++) - m_mcu_lines[i] = m_mcu_lines[i-1] + m_image_bpl_mcu; - - compute_quant_table(m_quantization_tables[0], s_std_lum_quant); - compute_quant_table(m_quantization_tables[1], m_params.m_no_chroma_discrim_flag ? 
s_std_lum_quant : s_std_croma_quant); - - m_out_buf_left = JPGE_OUT_BUF_SIZE; - m_pOut_buf = m_out_buf; - - if (m_params.m_two_pass_flag) - { - clear_obj(m_huff_count); - first_pass_init(); - } - else - { - memcpy(m_huff_bits[0+0], s_dc_lum_bits, 17); memcpy(m_huff_val [0+0], s_dc_lum_val, DC_LUM_CODES); - memcpy(m_huff_bits[2+0], s_ac_lum_bits, 17); memcpy(m_huff_val [2+0], s_ac_lum_val, AC_LUM_CODES); - memcpy(m_huff_bits[0+1], s_dc_chroma_bits, 17); memcpy(m_huff_val [0+1], s_dc_chroma_val, DC_CHROMA_CODES); - memcpy(m_huff_bits[2+1], s_ac_chroma_bits, 17); memcpy(m_huff_val [2+1], s_ac_chroma_val, AC_CHROMA_CODES); - if (!second_pass_init()) return false; // in effect, skip over the first pass - } - return m_all_stream_writes_succeeded; -} - -void jpeg_encoder::load_block_8_8_grey(int x) -{ - uint8 *pSrc; - sample_array_t *pDst = m_sample_array; - x <<= 3; - for (int i = 0; i < 8; i++, pDst += 8) - { - pSrc = m_mcu_lines[i] + x; - pDst[0] = pSrc[0] - 128; pDst[1] = pSrc[1] - 128; pDst[2] = pSrc[2] - 128; pDst[3] = pSrc[3] - 128; - pDst[4] = pSrc[4] - 128; pDst[5] = pSrc[5] - 128; pDst[6] = pSrc[6] - 128; pDst[7] = pSrc[7] - 128; - } -} - -void jpeg_encoder::load_block_8_8(int x, int y, int c) -{ - uint8 *pSrc; - sample_array_t *pDst = m_sample_array; - x = (x * (8 * 3)) + c; - y <<= 3; - for (int i = 0; i < 8; i++, pDst += 8) - { - pSrc = m_mcu_lines[y + i] + x; - pDst[0] = pSrc[0 * 3] - 128; pDst[1] = pSrc[1 * 3] - 128; pDst[2] = pSrc[2 * 3] - 128; pDst[3] = pSrc[3 * 3] - 128; - pDst[4] = pSrc[4 * 3] - 128; pDst[5] = pSrc[5 * 3] - 128; pDst[6] = pSrc[6 * 3] - 128; pDst[7] = pSrc[7 * 3] - 128; - } -} - -void jpeg_encoder::load_block_16_8(int x, int c) -{ - uint8 *pSrc1, *pSrc2; - sample_array_t *pDst = m_sample_array; - x = (x * (16 * 3)) + c; - int a = 0, b = 2; - for (int i = 0; i < 16; i += 2, pDst += 8) - { - pSrc1 = m_mcu_lines[i + 0] + x; - pSrc2 = m_mcu_lines[i + 1] + x; - pDst[0] = ((pSrc1[ 0 * 3] + pSrc1[ 1 * 3] + pSrc2[ 0 * 3] + pSrc2[ 1 * 3] + a) >> 2) - 128; pDst[1] = ((pSrc1[ 2 * 3] + pSrc1[ 3 * 3] + pSrc2[ 2 * 3] + pSrc2[ 3 * 3] + b) >> 2) - 128; - pDst[2] = ((pSrc1[ 4 * 3] + pSrc1[ 5 * 3] + pSrc2[ 4 * 3] + pSrc2[ 5 * 3] + a) >> 2) - 128; pDst[3] = ((pSrc1[ 6 * 3] + pSrc1[ 7 * 3] + pSrc2[ 6 * 3] + pSrc2[ 7 * 3] + b) >> 2) - 128; - pDst[4] = ((pSrc1[ 8 * 3] + pSrc1[ 9 * 3] + pSrc2[ 8 * 3] + pSrc2[ 9 * 3] + a) >> 2) - 128; pDst[5] = ((pSrc1[10 * 3] + pSrc1[11 * 3] + pSrc2[10 * 3] + pSrc2[11 * 3] + b) >> 2) - 128; - pDst[6] = ((pSrc1[12 * 3] + pSrc1[13 * 3] + pSrc2[12 * 3] + pSrc2[13 * 3] + a) >> 2) - 128; pDst[7] = ((pSrc1[14 * 3] + pSrc1[15 * 3] + pSrc2[14 * 3] + pSrc2[15 * 3] + b) >> 2) - 128; - int temp = a; a = b; b = temp; - } -} - -void jpeg_encoder::load_block_16_8_8(int x, int c) -{ - uint8 *pSrc1; - sample_array_t *pDst = m_sample_array; - x = (x * (16 * 3)) + c; - for (int i = 0; i < 8; i++, pDst += 8) - { - pSrc1 = m_mcu_lines[i + 0] + x; - pDst[0] = ((pSrc1[ 0 * 3] + pSrc1[ 1 * 3]) >> 1) - 128; pDst[1] = ((pSrc1[ 2 * 3] + pSrc1[ 3 * 3]) >> 1) - 128; - pDst[2] = ((pSrc1[ 4 * 3] + pSrc1[ 5 * 3]) >> 1) - 128; pDst[3] = ((pSrc1[ 6 * 3] + pSrc1[ 7 * 3]) >> 1) - 128; - pDst[4] = ((pSrc1[ 8 * 3] + pSrc1[ 9 * 3]) >> 1) - 128; pDst[5] = ((pSrc1[10 * 3] + pSrc1[11 * 3]) >> 1) - 128; - pDst[6] = ((pSrc1[12 * 3] + pSrc1[13 * 3]) >> 1) - 128; pDst[7] = ((pSrc1[14 * 3] + pSrc1[15 * 3]) >> 1) - 128; - } -} - -void jpeg_encoder::load_quantized_coefficients(int component_num) -{ - int32 *q = m_quantization_tables[component_num > 0]; - int16 *pDst = m_coefficient_array; - for 
(int i = 0; i < 64; i++) - { - sample_array_t j = m_sample_array[s_zag[i]]; - if (j < 0) - { - if ((j = -j + (*q >> 1)) < *q) - *pDst++ = 0; - else - *pDst++ = static_cast(-(j / *q)); - } - else - { - if ((j = j + (*q >> 1)) < *q) - *pDst++ = 0; - else - *pDst++ = static_cast((j / *q)); - } - q++; - } -} - -void jpeg_encoder::flush_output_buffer() -{ - if (m_out_buf_left != JPGE_OUT_BUF_SIZE) - m_all_stream_writes_succeeded = m_all_stream_writes_succeeded && m_pStream->put_buf(m_out_buf, JPGE_OUT_BUF_SIZE - m_out_buf_left); - m_pOut_buf = m_out_buf; - m_out_buf_left = JPGE_OUT_BUF_SIZE; -} - -void jpeg_encoder::put_bits(uint bits, uint len) -{ - m_bit_buffer |= ((uint32)bits << (24 - (m_bits_in += len))); - while (m_bits_in >= 8) - { - uint8 c; - #define JPGE_PUT_BYTE(c) { *m_pOut_buf++ = (c); if (--m_out_buf_left == 0) flush_output_buffer(); } - JPGE_PUT_BYTE(c = (uint8)((m_bit_buffer >> 16) & 0xFF)); - if (c == 0xFF) JPGE_PUT_BYTE(0); - m_bit_buffer <<= 8; - m_bits_in -= 8; - } -} - -void jpeg_encoder::code_coefficients_pass_one(int component_num) -{ - if (component_num >= 3) return; // just to shut up static analysis - int i, run_len, nbits, temp1; - int16 *src = m_coefficient_array; - uint32 *dc_count = component_num ? m_huff_count[0 + 1] : m_huff_count[0 + 0], *ac_count = component_num ? m_huff_count[2 + 1] : m_huff_count[2 + 0]; - - temp1 = src[0] - m_last_dc_val[component_num]; - m_last_dc_val[component_num] = src[0]; - if (temp1 < 0) temp1 = -temp1; - - nbits = 0; - while (temp1) - { - nbits++; temp1 >>= 1; - } - - dc_count[nbits]++; - for (run_len = 0, i = 1; i < 64; i++) - { - if ((temp1 = m_coefficient_array[i]) == 0) - run_len++; - else - { - while (run_len >= 16) - { - ac_count[0xF0]++; - run_len -= 16; - } - if (temp1 < 0) temp1 = -temp1; - nbits = 1; - while (temp1 >>= 1) nbits++; - ac_count[(run_len << 4) + nbits]++; - run_len = 0; - } - } - if (run_len) ac_count[0]++; -} - -void jpeg_encoder::code_coefficients_pass_two(int component_num) -{ - int i, j, run_len, nbits, temp1, temp2; - int16 *pSrc = m_coefficient_array; - uint *codes[2]; - uint8 *code_sizes[2]; - - if (component_num == 0) - { - codes[0] = m_huff_codes[0 + 0]; codes[1] = m_huff_codes[2 + 0]; - code_sizes[0] = m_huff_code_sizes[0 + 0]; code_sizes[1] = m_huff_code_sizes[2 + 0]; - } - else - { - codes[0] = m_huff_codes[0 + 1]; codes[1] = m_huff_codes[2 + 1]; - code_sizes[0] = m_huff_code_sizes[0 + 1]; code_sizes[1] = m_huff_code_sizes[2 + 1]; - } - - temp1 = temp2 = pSrc[0] - m_last_dc_val[component_num]; - m_last_dc_val[component_num] = pSrc[0]; - - if (temp1 < 0) - { - temp1 = -temp1; temp2--; - } - - nbits = 0; - while (temp1) - { - nbits++; temp1 >>= 1; - } - - put_bits(codes[0][nbits], code_sizes[0][nbits]); - if (nbits) put_bits(temp2 & ((1 << nbits) - 1), nbits); - - for (run_len = 0, i = 1; i < 64; i++) - { - if ((temp1 = m_coefficient_array[i]) == 0) - run_len++; - else - { - while (run_len >= 16) - { - put_bits(codes[1][0xF0], code_sizes[1][0xF0]); - run_len -= 16; - } - if ((temp2 = temp1) < 0) - { - temp1 = -temp1; - temp2--; - } - nbits = 1; - while (temp1 >>= 1) - nbits++; - j = (run_len << 4) + nbits; - put_bits(codes[1][j], code_sizes[1][j]); - put_bits(temp2 & ((1 << nbits) - 1), nbits); - run_len = 0; - } - } - if (run_len) - put_bits(codes[1][0], code_sizes[1][0]); -} - -void jpeg_encoder::code_block(int component_num) -{ - DCT2D(m_sample_array); - load_quantized_coefficients(component_num); - if (m_pass_num == 1) - code_coefficients_pass_one(component_num); - else - 
code_coefficients_pass_two(component_num); -} - -void jpeg_encoder::process_mcu_row() -{ - if (m_num_components == 1) - { - for (int i = 0; i < m_mcus_per_row; i++) - { - load_block_8_8_grey(i); code_block(0); - } - } - else if ((m_comp_h_samp[0] == 1) && (m_comp_v_samp[0] == 1)) - { - for (int i = 0; i < m_mcus_per_row; i++) - { - load_block_8_8(i, 0, 0); code_block(0); load_block_8_8(i, 0, 1); code_block(1); load_block_8_8(i, 0, 2); code_block(2); - } - } - else if ((m_comp_h_samp[0] == 2) && (m_comp_v_samp[0] == 1)) - { - for (int i = 0; i < m_mcus_per_row; i++) - { - load_block_8_8(i * 2 + 0, 0, 0); code_block(0); load_block_8_8(i * 2 + 1, 0, 0); code_block(0); - load_block_16_8_8(i, 1); code_block(1); load_block_16_8_8(i, 2); code_block(2); - } - } - else if ((m_comp_h_samp[0] == 2) && (m_comp_v_samp[0] == 2)) - { - for (int i = 0; i < m_mcus_per_row; i++) - { - load_block_8_8(i * 2 + 0, 0, 0); code_block(0); load_block_8_8(i * 2 + 1, 0, 0); code_block(0); - load_block_8_8(i * 2 + 0, 1, 0); code_block(0); load_block_8_8(i * 2 + 1, 1, 0); code_block(0); - load_block_16_8(i, 1); code_block(1); load_block_16_8(i, 2); code_block(2); - } - } -} - -bool jpeg_encoder::terminate_pass_one() -{ - optimize_huffman_table(0+0, DC_LUM_CODES); optimize_huffman_table(2+0, AC_LUM_CODES); - if (m_num_components > 1) - { - optimize_huffman_table(0+1, DC_CHROMA_CODES); optimize_huffman_table(2+1, AC_CHROMA_CODES); - } - return second_pass_init(); -} - -bool jpeg_encoder::terminate_pass_two() -{ - put_bits(0x7F, 7); - flush_output_buffer(); - emit_marker(M_EOI); - m_pass_num++; // purposely bump up m_pass_num, for debugging - return true; -} - -bool jpeg_encoder::process_end_of_image() -{ - if (m_mcu_y_ofs) - { - if (m_mcu_y_ofs < 16) // check here just to shut up static analysis - { - for (int i = m_mcu_y_ofs; i < m_mcu_y; i++) - memcpy(m_mcu_lines[i], m_mcu_lines[m_mcu_y_ofs - 1], m_image_bpl_mcu); - } - - process_mcu_row(); - } - - if (m_pass_num == 1) - return terminate_pass_one(); - else - return terminate_pass_two(); -} - -void jpeg_encoder::load_mcu(const void *pSrc) -{ - const uint8* Psrc = reinterpret_cast(pSrc); - - uint8* pDst = m_mcu_lines[m_mcu_y_ofs]; // OK to write up to m_image_bpl_xlt bytes to pDst - - if (m_num_components == 1) - { - if (m_image_bpp == 4) - RGBA_to_Y(pDst, Psrc, m_image_x); - else if (m_image_bpp == 3) - RGB_to_Y(pDst, Psrc, m_image_x); - else - memcpy(pDst, Psrc, m_image_x); - } - else - { - if (m_image_bpp == 4) - RGBA_to_YCC(pDst, Psrc, m_image_x); - else if (m_image_bpp == 3) - RGB_to_YCC(pDst, Psrc, m_image_x); - else - Y_to_YCC(pDst, Psrc, m_image_x); - } - - // Possibly duplicate pixels at end of scanline if not a multiple of 8 or 16 - if (m_num_components == 1) - memset(m_mcu_lines[m_mcu_y_ofs] + m_image_bpl_xlt, pDst[m_image_bpl_xlt - 1], m_image_x_mcu - m_image_x); - else - { - const uint8 y = pDst[m_image_bpl_xlt - 3 + 0], cb = pDst[m_image_bpl_xlt - 3 + 1], cr = pDst[m_image_bpl_xlt - 3 + 2]; - uint8 *q = m_mcu_lines[m_mcu_y_ofs] + m_image_bpl_xlt; - for (int i = m_image_x; i < m_image_x_mcu; i++) - { - *q++ = y; *q++ = cb; *q++ = cr; - } - } - - if (++m_mcu_y_ofs == m_mcu_y) - { - process_mcu_row(); - m_mcu_y_ofs = 0; - } -} - -void jpeg_encoder::clear() -{ - m_mcu_lines[0] = NULL; - m_pass_num = 0; - m_all_stream_writes_succeeded = true; -} - -jpeg_encoder::jpeg_encoder() -{ - clear(); -} - -jpeg_encoder::~jpeg_encoder() -{ - deinit(); -} - -bool jpeg_encoder::init(output_stream *pStream, int64_t width, int64_t height, int64_t src_channels, const params 
&comp_params) -{ - deinit(); - if (((!pStream) || (width < 1) || (height < 1)) || ((src_channels != 1) && (src_channels != 3) && (src_channels != 4)) || (!comp_params.check_valid())) return false; - m_pStream = pStream; - m_params = comp_params; - return jpg_open(width, height, src_channels); -} - -void jpeg_encoder::deinit() -{ - jpge_free(m_mcu_lines[0]); - clear(); -} - -bool jpeg_encoder::process_scanline(const void* pScanline) -{ - if ((m_pass_num < 1) || (m_pass_num > 2)) return false; - if (m_all_stream_writes_succeeded) - { - if (!pScanline) - { - if (!process_end_of_image()) return false; - } - else - { - load_mcu(pScanline); - } - } - return m_all_stream_writes_succeeded; -} - -// Higher level wrappers/examples (optional). -#include - -class cfile_stream : public output_stream -{ - cfile_stream(const cfile_stream &); - cfile_stream &operator= (const cfile_stream &); - - FILE* m_pFile; - bool m_bStatus; - -public: - cfile_stream() : m_pFile(NULL), m_bStatus(false) { } - - virtual ~cfile_stream() - { - close(); - } - - bool open(const char *pFilename) - { - close(); -#if defined(_MSC_VER) - if (fopen_s(&m_pFile, pFilename, "wb") != 0) - { - return false; - } -#else - m_pFile = fopen(pFilename, "wb"); -#endif - m_bStatus = (m_pFile != NULL); - return m_bStatus; - } - - bool close() - { - if (m_pFile) - { - if (fclose(m_pFile) == EOF) - { - m_bStatus = false; - } - m_pFile = NULL; - } - return m_bStatus; - } - - virtual bool put_buf(const void* pBuf, int64_t len) - { - m_bStatus = m_bStatus && (fwrite(pBuf, len, 1, m_pFile) == 1); - return m_bStatus; - } - - uint get_size() const - { - return m_pFile ? ftell(m_pFile) : 0; - } -}; - -// Writes JPEG image to file. -bool compress_image_to_jpeg_file(const char *pFilename, int64_t width, int64_t height, int64_t num_channels, const uint8 *pImage_data, const params &comp_params) -{ - cfile_stream dst_stream; - if (!dst_stream.open(pFilename)) - return false; - - jpge::jpeg_encoder dst_image; - if (!dst_image.init(&dst_stream, width, height, num_channels, comp_params)) - return false; - - for (uint pass_index = 0; pass_index < dst_image.get_total_passes(); pass_index++) - { - for (int64_t i = 0; i < height; i++) - { - // i, width, and num_channels are all 64bit - const uint8* pBuf = pImage_data + i * width * num_channels; - if (!dst_image.process_scanline(pBuf)) - return false; - } - if (!dst_image.process_scanline(NULL)) - return false; - } - - dst_image.deinit(); - - return dst_stream.close(); -} - -class memory_stream : public output_stream -{ - memory_stream(const memory_stream &); - memory_stream &operator= (const memory_stream &); - - uint8 *m_pBuf; - uint64_t m_buf_size, m_buf_ofs; - -public: - memory_stream(void *pBuf, uint64_t buf_size) : m_pBuf(static_cast(pBuf)), m_buf_size(buf_size), m_buf_ofs(0) { } - - virtual ~memory_stream() { } - - virtual bool put_buf(const void* pBuf, int64_t len) - { - uint64_t buf_remaining = m_buf_size - m_buf_ofs; - if ((uint64_t)len > buf_remaining) - return false; - memcpy(m_pBuf + m_buf_ofs, pBuf, len); - m_buf_ofs += len; - return true; - } - - uint64_t get_size() const - { - return m_buf_ofs; - } -}; - -bool compress_image_to_jpeg_file_in_memory(void *pDstBuf, int64_t &buf_size, int64_t width, int64_t height, int64_t num_channels, const uint8 *pImage_data, const params &comp_params) -{ - if ((!pDstBuf) || (!buf_size)) - return false; - - memory_stream dst_stream(pDstBuf, buf_size); - - buf_size = 0; - - jpge::jpeg_encoder dst_image; - if (!dst_image.init(&dst_stream, width, height, num_channels, 
comp_params)) - return false; - - for (uint pass_index = 0; pass_index < dst_image.get_total_passes(); pass_index++) - { - for (int64_t i = 0; i < height; i++) - { - const uint8* pScanline = pImage_data + i * width * num_channels; - if (!dst_image.process_scanline(pScanline)) - return false; - } - if (!dst_image.process_scanline(NULL)) - return false; - } - - dst_image.deinit(); - - buf_size = dst_stream.get_size(); - return true; -} - -} // namespace jpge \ No newline at end of file diff --git a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/asv.c b/spaces/colakin/video-generater/public/ffmpeg/libavcodec/asv.c deleted file mode 100644 index 3aa08c30c0975986d2ba9e50bab57f7ab1bf1a5e..0000000000000000000000000000000000000000 --- a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/asv.c +++ /dev/null @@ -1,103 +0,0 @@ -/* - * Copyright (c) 2003 Michael Niedermayer - * - * This file is part of FFmpeg. - * - * FFmpeg is free software; you can redistribute it and/or - * modify it under the terms of the GNU Lesser General Public - * License as published by the Free Software Foundation; either - * version 2.1 of the License, or (at your option) any later version. - * - * FFmpeg is distributed in the hope that it will be useful, - * but WITHOUT ANY WARRANTY; without even the implied warranty of - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU - * Lesser General Public License for more details. - * - * You should have received a copy of the GNU Lesser General Public - * License along with FFmpeg; if not, write to the Free Software - * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA - */ - -/** - * @file - * ASUS V1/V2 encoder/decoder common data - */ - -#include - -#include "libavutil/attributes.h" - -#include "asv.h" -#include "avcodec.h" -#include "bswapdsp.h" - -const uint8_t ff_asv_scantab[64] = { - 0x00, 0x08, 0x01, 0x09, 0x10, 0x18, 0x11, 0x19, - 0x02, 0x0A, 0x03, 0x0B, 0x12, 0x1A, 0x13, 0x1B, - 0x04, 0x0C, 0x05, 0x0D, 0x20, 0x28, 0x21, 0x29, - 0x06, 0x0E, 0x07, 0x0F, 0x14, 0x1C, 0x15, 0x1D, - 0x22, 0x2A, 0x23, 0x2B, 0x30, 0x38, 0x31, 0x39, - 0x16, 0x1E, 0x17, 0x1F, 0x24, 0x2C, 0x25, 0x2D, - 0x32, 0x3A, 0x33, 0x3B, 0x26, 0x2E, 0x27, 0x2F, - 0x34, 0x3C, 0x35, 0x3D, 0x36, 0x3E, 0x37, 0x3F, -}; - -const uint8_t ff_asv_ccp_tab[17][2] = { - { 0x2, 2 }, { 0x7, 5 }, { 0xB, 5 }, { 0x3, 5 }, - { 0xD, 5 }, { 0x5, 5 }, { 0x9, 5 }, { 0x1, 5 }, - { 0xE, 5 }, { 0x6, 5 }, { 0xA, 5 }, { 0x2, 5 }, - { 0xC, 5 }, { 0x4, 5 }, { 0x8, 5 }, { 0x3, 2 }, - { 0xF, 5 }, // EOB -}; - -const uint8_t ff_asv_level_tab[7][2] = { - { 3, 4 }, { 3, 3 }, { 3, 2 }, { 0, 3 }, { 2, 2 }, { 2, 3 }, { 2, 4 } -}; - -const uint8_t ff_asv_dc_ccp_tab[8][2] = { - { 0x2, 2 }, { 0xB, 4 }, { 0xF, 4 }, { 0x3, 4 }, - { 0x5, 3 }, { 0x7, 4 }, { 0x1, 3 }, { 0x0, 2 }, -}; - -const uint8_t ff_asv_ac_ccp_tab[16][2] = { - { 0x00, 2 }, { 0x37, 6 }, { 0x05, 4 }, { 0x17, 6 }, - { 0x02, 3 }, { 0x27, 6 }, { 0x0F, 6 }, { 0x07, 6 }, - { 0x06, 3 }, { 0x2F, 6 }, { 0x01, 4 }, { 0x1F, 5 }, - { 0x09, 4 }, { 0x0D, 4 }, { 0x0B, 4 }, { 0x03, 4 }, -}; - -const uint16_t ff_asv2_level_tab[63][2] = { - { 0x3F0, 10 }, { 0x3D0, 10 }, { 0x3B0, 10 }, { 0x390, 10 }, { 0x370, 10 }, - { 0x350, 10 }, { 0x330, 10 }, { 0x310, 10 }, { 0x2F0, 10 }, { 0x2D0, 10 }, - { 0x2B0, 10 }, { 0x290, 10 }, { 0x270, 10 }, { 0x250, 10 }, { 0x230, 10 }, - { 0x210, 10 }, - { 0x0F8, 8 }, { 0x0E8, 8 }, { 0x0D8, 8 }, { 0x0C8, 8 }, { 0x0B8, 8 }, - { 0x0A8, 8 }, { 0x098, 8 }, { 0x088, 8 }, - { 0x03C, 6 }, { 0x034, 6 }, { 0x02C, 6 }, { 
0x024, 6 }, - { 0x00E, 4 }, { 0x00A, 4 }, - { 0x003, 2 }, - { 0x000, 5 }, - { 0x001, 2 }, - { 0x002, 4 }, { 0x006, 4 }, - { 0x004, 6 }, { 0x00C, 6 }, { 0x014, 6 }, { 0x01C, 6 }, - { 0x008, 8 }, { 0x018, 8 }, { 0x028, 8 }, { 0x038, 8 }, { 0x048, 8 }, - { 0x058, 8 }, { 0x068, 8 }, { 0x078, 8 }, - { 0x010, 10 }, { 0x030, 10 }, { 0x050, 10 }, { 0x070, 10 }, { 0x090, 10 }, - { 0x0B0, 10 }, { 0x0D0, 10 }, { 0x0F0, 10 }, { 0x110, 10 }, { 0x130, 10 }, - { 0x150, 10 }, { 0x170, 10 }, { 0x190, 10 }, { 0x1B0, 10 }, { 0x1D0, 10 }, - { 0x1F0, 10 } -}; - -av_cold void ff_asv_common_init(AVCodecContext *avctx) -{ - ASVCommonContext *const a = avctx->priv_data; - - ff_bswapdsp_init(&a->bbdsp); - - a->mb_width = (avctx->width + 15) / 16; - a->mb_height = (avctx->height + 15) / 16; - a->mb_width2 = (avctx->width + 0) / 16; - a->mb_height2 = (avctx->height + 0) / 16; - - a->avctx = avctx; -} diff --git a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/av1_parse.c b/spaces/colakin/video-generater/public/ffmpeg/libavcodec/av1_parse.c deleted file mode 100644 index 59ea0bc6e75720cf3fad820aaceb3eb3e1be30e3..0000000000000000000000000000000000000000 --- a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/av1_parse.c +++ /dev/null @@ -1,110 +0,0 @@ -/* - * AV1 common parsing code - * - * This file is part of FFmpeg. - * - * FFmpeg is free software; you can redistribute it and/or - * modify it under the terms of the GNU Lesser General Public - * License as published by the Free Software Foundation; either - * version 2.1 of the License, or (at your option) any later version. - * - * FFmpeg is distributed in the hope that it will be useful, - * but WITHOUT ANY WARRANTY; without even the implied warranty of - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU - * Lesser General Public License for more details. 
- * - * You should have received a copy of the GNU Lesser General Public - * License along with FFmpeg; if not, write to the Free Software - * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA - */ - -#include "config.h" - -#include "libavutil/mem.h" - -#include "av1.h" -#include "av1_parse.h" -#include "bytestream.h" - -int ff_av1_extract_obu(AV1OBU *obu, const uint8_t *buf, int length, void *logctx) -{ - int64_t obu_size; - int start_pos, type, temporal_id, spatial_id; - int len; - - len = parse_obu_header(buf, length, &obu_size, &start_pos, - &type, &temporal_id, &spatial_id); - if (len < 0) - return len; - - obu->type = type; - obu->temporal_id = temporal_id; - obu->spatial_id = spatial_id; - - obu->data = buf + start_pos; - obu->size = obu_size; - obu->raw_data = buf; - obu->raw_size = len; - - av_log(logctx, AV_LOG_DEBUG, - "obu_type: %d, temporal_id: %d, spatial_id: %d, payload size: %d\n", - obu->type, obu->temporal_id, obu->spatial_id, obu->size); - - return len; -} - -int ff_av1_packet_split(AV1Packet *pkt, const uint8_t *buf, int length, void *logctx) -{ - GetByteContext bc; - int ret, consumed; - - bytestream2_init(&bc, buf, length); - pkt->nb_obus = 0; - - while (bytestream2_get_bytes_left(&bc) > 0) { - AV1OBU *obu; - - if (pkt->obus_allocated < pkt->nb_obus + 1) { - int new_size = pkt->obus_allocated + 1; - AV1OBU *tmp; - - if (new_size >= INT_MAX / sizeof(*tmp)) - return AVERROR(ENOMEM); - tmp = av_fast_realloc(pkt->obus, &pkt->obus_allocated_size, new_size * sizeof(*tmp)); - if (!tmp) - return AVERROR(ENOMEM); - - pkt->obus = tmp; - memset(pkt->obus + pkt->obus_allocated, 0, sizeof(*pkt->obus)); - pkt->obus_allocated = new_size; - } - obu = &pkt->obus[pkt->nb_obus]; - - consumed = ff_av1_extract_obu(obu, bc.buffer, bytestream2_get_bytes_left(&bc), logctx); - if (consumed < 0) - return consumed; - - bytestream2_skip(&bc, consumed); - - obu->size_bits = get_obu_bit_length(obu->data, obu->size, obu->type); - - if (obu->size_bits < 0 || (!obu->size_bits && obu->type != AV1_OBU_TEMPORAL_DELIMITER)) { - av_log(logctx, AV_LOG_ERROR, "Invalid OBU of type %d, skipping.\n", obu->type); - continue; - } - - pkt->nb_obus++; - - ret = init_get_bits(&obu->gb, obu->data, obu->size_bits); - if (ret < 0) - return ret; - } - - return 0; -} - -void ff_av1_packet_uninit(AV1Packet *pkt) -{ - av_freep(&pkt->obus); - pkt->obus_allocated = pkt->obus_allocated_size = 0; -} diff --git a/spaces/colornative/goofyai-3d_render_style_xl/app.py b/spaces/colornative/goofyai-3d_render_style_xl/app.py deleted file mode 100644 index 4f2d3011c603b276c7800e5d1e9de8bf628eeda2..0000000000000000000000000000000000000000 --- a/spaces/colornative/goofyai-3d_render_style_xl/app.py +++ /dev/null @@ -1,3 +0,0 @@ -import gradio as gr - -gr.Interface.load("models/goofyai/3d_render_style_xl").launch() \ No newline at end of file diff --git a/spaces/congsaPfin/Manga-OCR/logs/Angry Birds 2 3.4.2 APK New Levels Enemies and Fun Modes.md b/spaces/congsaPfin/Manga-OCR/logs/Angry Birds 2 3.4.2 APK New Levels Enemies and Fun Modes.md deleted file mode 100644 index 8dabb902c18fd50191b358b457e76da640789f8f..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/Angry Birds 2 3.4.2 APK New Levels Enemies and Fun Modes.md +++ /dev/null @@ -1,159 +0,0 @@ - -

    Angry Birds 2 3.4.2 APK: Everything You Need to Know

    -

    If you are a fan of the Angry Birds franchise, you probably know that Angry Birds 2 is one of the best games in the series. It is a puzzle game that challenges you to fling birds at structures made of glass, wood, and stone, where evil pigs are hiding. The game features stunning graphics, addictive gameplay, and tons of levels to enjoy.

    -

    But did you know that there is a new version of Angry Birds 2 available for download? It is called Angry Birds 2 3.4.2 APK, and it brings some new features and improvements to the game. In this article, we will tell you everything you need to know about this update, including how to download and install it, what's new in it, how to play it, and more. So, let's get started!

    -

    angry birds 2 3.4.2 apk


    Download: https://urlca.com/2uOctZ



    -

    How to Download and Install Angry Birds 2 3.4.2 APK

    -

    Angry Birds 2 is available for free on the Google Play Store and the Apple App Store, but if you want to get the latest version of the game, you will need to download and install the APK file manually. APK stands for Android Package Kit, and it is a file format that contains all the data and code needed to run an app on Android devices.
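
    Under the hood, an APK is simply a ZIP archive, so you can sanity-check a downloaded file before installing it. The short Python sketch below is only illustrative; the file name is a placeholder for wherever you saved the download. A genuine APK will contain entries such as AndroidManifest.xml and classes.dex, and a corrupt download will fail the integrity check:

```python
# Minimal sketch: peek inside a downloaded APK (an APK is a ZIP archive).
# "angry-birds-2-3.4.2.apk" is a placeholder path, not an official file name.
import zipfile

apk_path = "angry-birds-2-3.4.2.apk"

with zipfile.ZipFile(apk_path) as apk:
    bad_entry = apk.testzip()  # None means every entry passed its CRC check
    print("Archive looks intact" if bad_entry is None else f"Corrupt entry: {bad_entry}")
    for name in apk.namelist()[:10]:  # e.g. AndroidManifest.xml, classes.dex, res/...
        print(name)
```

    If the archive is reported as corrupt, re-download the file before trying to install it.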

    -

    To download and install Angry Birds 2 3.4.2 APK, you will need to follow these steps:

    -
      -
    1. Find a reliable source for the APK file. You can use one of these links from trusted websites:
      -- [Angry Birds 2 3.4.2 APK Download - Softpedia](^1^)
      -- [Angry Birds 2 APK (Android Game) - Free Download - APKCombo](^2^)
    2. -
    3. Enable unknown sources on your device. This will allow you to install apps from sources other than the official app stores. To do this, go to Settings > Security > Unknown Sources and toggle it on.
    4. -
    5. Download and install the APK file. Once you have found a link for the APK file, tap on it and follow the instructions on your screen. You may need to grant some permissions to the app before it can be installed.
    6. -
    -

    Congratulations! You have successfully downloaded and installed Angry Birds 2 3.4.2 APK on your device. Now you can enjoy the game with the latest features and improvements.
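
    If you prefer sideloading from a computer, the same APK can also be installed over ADB instead of tapping through the on-device installer. This is an optional sketch, assuming the Android platform-tools (adb) are installed and on your PATH, USB debugging is enabled on the device, and the file name is a placeholder for your downloaded file:

```python
# Optional alternative: install the APK from a PC over ADB.
# Assumes "adb" (Android platform-tools) is on PATH and USB debugging is enabled.
import subprocess

apk_path = "angry-birds-2-3.4.2.apk"  # placeholder: path to the APK you downloaded

# "adb install -r" (re)installs the package, keeping app data if it was already present.
result = subprocess.run(["adb", "install", "-r", apk_path], capture_output=True, text=True)
print(result.stdout.strip() or result.stderr.strip())
```

    Either route ends with the same installed app; ADB is simply more convenient when the APK already lives on your computer.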

    -

    What's New in Angry Birds 2 3.4.2 APK

    -

    Angry Birds 2 3.4.2 APK is the latest version of the game, and it comes with some new features and improvements that make the game more fun and exciting. Here are some of the highlights of this update:

    -
      -
    • New levels: The game has added more than 80 new levels across 4 new chapters, each with its own theme and challenges. You will encounter new enemies, obstacles, and surprises as you progress through the game.
    • -
    • New birds: The game has introduced two new birds to join your flock: Silver and Bubbles. Silver is a fast and agile bird that can perform a loop-de-loop in the air, while Bubbles is a cute and chubby bird that can inflate and explode, causing massive damage.
    • -
    • New spells: The game has added two new spells to help you in your quest: Pig Inflater and Hot Chili. Pig Inflater will inflate all the pigs on the screen, making them easier to pop, while Hot Chili will set fire to a random structure, causing it to burn and collapse.
    • -
    • New hats: The game has added more than 100 new hats for you to collect and customize your birds. You can find hats of different styles, themes, and rarities, such as cowboy hats, pirate hats, ninja hats, and more.
    • -
    • Bug fixes and performance improvements: The game has fixed some bugs and glitches that were affecting the gameplay, such as crashes, freezes, and loading issues. The game has also improved its performance and stability, making it run smoother and faster on your device.
    • -
    -

    These are some of the new features and improvements that Angry Birds 2 3.4.2 APK brings to the game. You can find more details about this update on the official website of the game or on the app store page.

    -

    How to Play Angry Birds 2

    -

    Angry Birds 2 is a puzzle game that requires you to use your logic, skill, and strategy to complete each level. The goal of the game is to fling birds at structures made of glass, wood, and stone, where evil pigs are hiding. You have to destroy all the pigs in each level to win.

    -

    The game has a card system that allows you to choose which bird you want to use for each shot. You can see the cards at the bottom of the screen, and you can swipe left or right to select one. Each bird has its own special ability that can be activated by tapping on the screen while it is in the air.

    -


    -

    The game also has a Destructometer that fills up as you cause damage to the structures. When the Destructometer is full, you will get an extra card or a spell card that can help you in your mission. Spell cards are powerful items that can create various effects on the screen, such as raining rubber ducks, summoning mighty eagles, or dropping golden ducks.

    -

    The game has hundreds of levels that are randomly generated, meaning that each level is different and challenging every time you play it. You will also encounter boss battles where you have to face off against giant pigs with special abilities and weapons.

    -

    The game also has other modes and features that add more fun and variety to the gameplay. You can compete with other players around the world in the Arena mode, where you have to score as many points as possible in a limited time. You can also participate in special events and join clans with other players, where you can chat, share tips, and cooperate in clan challenges.

    -

    Angry Birds 2 is a game that will keep you entertained for hours with its colorful graphics, addictive gameplay, and tons of content. If you want to learn more about how to play Angry Birds 2, here are some tips for each aspect of the game:

    -

    Choose Your Bird Wisely

    One of the most important aspects of Angry Birds 2 is choosing the right bird for each shot. You have to consider the type of structure, the position of the pigs, and the special ability of the bird. Here are some tips for each bird:

    -
      -
    • Red: Red is the default bird that you will use most of the time. He has no special ability, but he is good at breaking wood and glass. He is also good at pushing objects and knocking down towers.
    • -
    • Chuck: Chuck is the yellow bird that can speed up in the air. He is good at breaking wood and hitting targets from a distance. He is also good at creating chain reactions and hitting multiple pigs at once.
    • -
    • Bomb: Bomb is the black bird that can explode on impact or on command. He is good at breaking stone and causing massive damage. He is also good at clearing large areas and destroying metal objects.
    • -
    • Matilda: Matilda is the white bird that can drop an egg bomb while flying. She is good at breaking glass and hitting targets below her. She is also good at creating holes and exposing hidden pigs.
    • -
    • The Blues: The Blues are the three blue birds that can split into three smaller birds. They are good at breaking glass and hitting small targets. They are also good at spreading damage and hitting multiple pigs at once.
    • -
    • Silver: Silver is the new bird that can perform a loop-de-loop in the air. She is good at breaking wood and hitting targets from above. She is also good at changing direction and hitting hard-to-reach pigs.
    • -
    • Bubbles: Bubbles is the new bird that can inflate and explode, causing massive damage. He is good at breaking wood and glass and hitting large targets. He is also good at pushing objects and popping balloons.
    • -
    -

    These are the main birds that you will use in Angry Birds 2, but there are also some other birds that you can unlock or use as spells, such as Stella, Terence, Hal, Mighty Eagle, and more. Each bird has its own strengths and weaknesses, so you have to experiment and find out which one works best for each level.

    -

    Use the Environment to Your Advantage

    -

    Another important aspect of Angry Birds 2 is using the environment to your advantage. Each level has different elements that can help you or hinder you in your mission. You have to pay attention to these elements and use them wisely. Here are some examples of environmental elements:

    -
      -
    • Flowers: Flowers are plants that can catch your birds and launch them back into the air. You can use them to change the trajectory of your birds or to hit targets that are out of reach.
    • -
    • Portals: Portals are devices that can teleport your birds from one place to another. You can use them to bypass obstacles or to hit targets from different angles.
    • -
    • Fans: Fans are machines that can blow air in a certain direction. You can use them to alter the speed or direction of your birds or to push objects or pigs.
    • -
    • Balloons: Balloons are objects that can float in the air and carry other objects or pigs with them. You can use them to lift or drop objects or pigs or to pop them with your birds.
    • -
    • TNT: TNT is an explosive that can detonate when hit by your birds or other objects. You can use it to cause massive damage or to create chain reactions.
    • -
    -

    These are some of the environmental elements that you will encounter in Angry Birds 2, but there are also many others, such as ice, water, lava, magnets, ropes, wheels, and more. Each element has its own physics and behavior, so you have to experiment and find out how they work.

    -

    Fill Up the Destructometer Quickly

    One of the key features of Angry Birds 2 is the Destructometer, which is a meter that fills up as you cause damage to the structures and the pigs. When the Destructometer is full, you will get an extra card or a spell card that can help you in your mission. Therefore, it is important to fill up the Destructometer quickly and efficiently. Here are some tips for doing that:

    -
      -
    • Use the right bird for the right structure. As we mentioned before, each bird has its own special ability and strength. You have to use the bird that can cause the most damage to the type of structure you are facing. For example, use Bomb for stone, Chuck for wood, and The Blues for glass.
    • -
    • Hit the weak points of the structures. Each structure has some weak points that can make it collapse or fall apart easily. You have to aim for these weak points and hit them with your birds. For example, hit the joints, supports, or bases of the structures.
    • -
    • Use the environmental elements to your advantage. As we mentioned before, each level has different elements that can help you or hinder you in your mission. You have to use these elements to cause more damage or to create chain reactions. For example, use flowers to launch your birds back into the air, portals to teleport your birds to different places, or TNT to explode and destroy everything around it.
    • -
    • Use spell cards wisely. Spell cards are powerful items that can create various effects on the screen, such as raining rubber ducks, summoning mighty eagles, or dropping golden ducks. You have to use these spell cards wisely and strategically, as they can fill up the Destructometer quickly and help you win the level. However, you also have to save them for tough situations and not waste them on easy levels.
    • -
    -

    These are some of the tips for filling up the Destructometer quickly and efficiently in Angry Birds 2. By doing this, you will be able to get more cards and spells and have more chances to complete each level.

    -

    Avoid Spending Spell Cards Unless You Need To

    -

    As we mentioned before, spell cards are powerful items that can create various effects on the screen, such as raining rubber ducks, summoning mighty eagles, or dropping golden ducks. They can help you in your mission by causing massive damage or by giving you extra cards or spells.

    -

    However, spell cards are also limited and rare, and you have to earn them by filling up the Destructometer or by buying them with gems. Therefore, you have to avoid spending spell cards unless you really need to. Here are some tips for doing that:

    -
      -
    • Save your spell cards for hard levels. Some levels are harder than others, and they may require more than one attempt to complete them. You have to save your spell cards for these levels, as they can make a big difference and help you win.
    • -
    • Use your spell cards strategically. Each spell card has its own effect and usage. You have to use your spell cards strategically and according to the situation. For example, use the rubber duck spell when there are many pigs on the screen, use the mighty eagle spell when there are many structures on the screen, or use the golden duck spell when you need more cards or spells.
    • -
    • Don't rely on your spell cards too much. Spell cards are helpful and fun, but they are not essential for completing each level. You have to rely on your own skill and strategy more than on your spell cards. Try to complete each level with as few spell cards as possible, and only use them when you are stuck or desperate.
    • -
    -

    These are some of the tips for avoiding spending spell cards unless you need to in Angry Birds 2. By doing this, you will be able to save your gems and your spell cards for later levels.

    -

    Don't Fight Attack Piggies Head On

    -

    One of the challenges of Angry Birds 2 is fighting against attack piggies, which are pigs that have special abilities and weapons that can harm your birds or defend themselves from your attacks. These include pigs that can shoot lasers, rockets, arrows, or bubbles at your birds; pigs that can wear helmets, shields, or armor; pigs that can fly with jetpacks or balloons; and pigs that can summon other pigs or objects.

    -

    Fighting against attack piggies can be tricky and frustrating, as they can reduce your chances of completing each level. Therefore, you have to avoid fighting them head on and use some tactics to deal with them. Here are some tips for doing that:

    -
      -
    • Aim for their weak spots. Each attack piggy has some weak spots that can make them vulnerable or exposed. You have to aim for these weak spots and hit them with your birds. For example, hit the lasers, rockets, arrows, or bubbles before they hit your birds; hit the helmets, shields, or armor to break them; hit the jetpacks or balloons to make them explode or pop; or hit the summoners to stop them from calling reinforcements.
    • -
    • Use the environment to your advantage. As we mentioned before, each level has different elements that can help you or hinder you in your mission. You have to use these elements to deal with the attack piggies. For example, use flowers to launch your birds back at them; use portals to teleport your birds behind them; use fans to blow them away; use balloons to lift them up; or use TNT to blow them up.
    • -
    • Use spell cards wisely. As we mentioned before, spell cards are powerful items that can create various effects on the screen. You have to use these spell cards wisely and strategically to deal with the attack piggies. For example, use the rubber duck spell to distract them; use the mighty eagle spell to crush them; use the golden duck spell to get more cards or spells; or use the pig inflator spell to make them easier to pop.
    • -
    -

    These are some of the tips for avoiding fighting attack piggies head on in Angry Birds 2. By doing this, you will be able to overcome their defenses and attacks and win each level.

    -

    Don't Forget Your Daily Quests

    -

    One of the features of Angry Birds 2 that can help you progress faster and have more fun is the daily quests system. Daily quests are tasks that you can complete every day to earn gems, feathers, and other rewards. Gems are the premium currency of the game that you can use to buy more cards, spells, hats, and other items. Feathers are the items that you can use to level up your birds and make them stronger.

    -

    To access your daily quests, you have to tap on the quest icon on the top left corner of the screen. You will see a list of quests that you can complete, such as flinging a certain number of birds, destroying a certain number of structures, using a certain number of spells, and more. Each quest has a reward that you can claim once you complete it.

    -

    You can also see your progress for each quest by tapping on it. You can also skip a quest if you don't want to do it or if you find it too hard. However, you can only skip one quest per day, and you will lose the reward for that quest.

    -

    Daily quests are a great way to earn more gems and feathers and to challenge yourself with different goals. They can also help you improve your skills and strategies for the game. Therefore, you should not forget your daily quests and try to complete as many as possible every day.

    -

    Angry Birds 2 Game Features

    -

    Angry Birds 2 is a game that has many features that make it fun and exciting. Here are some of the main features of the game:

    -

    Randomly Generated Levels

    -

    One of the features that makes Angry Birds 2 unique and challenging is that each level is randomly generated, meaning that each level is different and unpredictable every time you play it. You will never know what kind of structure, pig, or element you will face in each level.

    -

    This feature adds more variety and replay value to the game, as you will always have a new experience and a new challenge every time you play. It also makes the game more fair and balanced, as you will not be able to memorize or repeat the same strategy for each level.

    -

    The Arena Mode

    -

    One of the features that makes Angry Birds 2 competitive and social is the Arena mode, where you can compete with other players around the world in real time. In this mode, you have to score as many points as possible in a limited time by destroying structures and pigs with your birds.

    -

    You will be matched with players who have similar skill levels and bird levels as you. You will also be able to see their scores and their shots on your screen. You will earn trophies for winning matches and lose trophies for losing matches. You will also earn gems and feathers for participating in matches.

    -

    The Arena mode is a great way to test your skills and strategies against other players and to earn more rewards and recognition. You can also chat with other players and make friends or rivals in this mode.

    -

    Events and Clans

    -

    One of the features that makes Angry Birds 2 fun and engaging is the events and clans system, where you can participate in special events and join clans with other players. Events are limited-time challenges that offer exclusive rewards and prizes for completing them. Clans are groups of players who can chat, share tips, and cooperate in clan challenges. Events and clans are a great way to have more fun and variety in the game and to interact with other players.

    -

    Hats and Hatchlings

    -

    One of the features that makes Angry Birds 2 cute and customizable is the hats and hatchlings system, where you can collect and customize your birds and hatchlings. Hats are accessories that you can put on your birds to change their appearance and give them some bonuses. Hatchlings are adorable baby birds that you can collect and take care of.

    -

    You can find hats of different styles, themes, and rarities in the game, such as cowboy hats, pirate hats, ninja hats, and more. You can also level up your hats and make them more powerful. You can collect hatchlings by hatching eggs that you find in the game or by buying them with gems. You can also feed, play, and dress up your hatchlings and watch them grow.

    -

    Hats and hatchlings are a great way to personalize your birds and to collect cute creatures in the game. You can also show off your hats and hatchlings to other players and see theirs.

    -

    Angry Birds 2 Game Review

    -

    Angry Birds 2 is a game that has a lot to offer for fans of the Angry Birds franchise and for puzzle game lovers in general. It has stunning graphics, addictive gameplay, and tons of content that will keep you entertained for hours. Here is a brief review of the game based on its graphics, gameplay, and fun factor:

    -

    Graphics

    -

    Angry Birds 2 has amazing graphics that are colorful, detailed, and realistic. The game uses a 3D engine that makes the game look like a movie. The game also has smooth animations, dynamic lighting, and realistic physics that make the game more immersive and enjoyable.

    -

    The game also has a variety of environments that are beautiful and diverse. You will see different landscapes, such as forests, deserts, islands, mountains, and more. You will also see different weather effects, such as rain, snow, fog, and more. The game also has a lot of humor and charm in its graphics, such as funny expressions, costumes, and actions of the birds and the pigs.

    -

    The game also has a user-friendly interface that is easy to navigate and use. The game has clear icons, buttons, menus, and indicators that make the game accessible and intuitive. The game also has a vibrant and cheerful soundtrack that matches the mood and theme of the game.

    -

    Gameplay

    -

    Angry Birds 2 has addictive gameplay that is simple to learn but hard to master. The game has a card system that allows you to choose which bird you want to use for each shot. The game also has a Destructometer that fills up as you cause damage to the structures and the pigs. The game also has spell cards that can create various effects on the screen.

    -

    The game has hundreds of levels that are randomly generated, meaning that each level is different and challenging every time you play it. The game also has boss battles where you have to face off against giant pigs with special abilities and weapons.

    -

    The game also has other modes and features that add more fun and variety to the gameplay. You can compete with other players around the world in the Arena mode, where you have to score as many points as possible in a limited time. You can also participate in special events and join clans with other players, where you can chat, share tips, and cooperate in clan challenges.

    -

    The game also has a hats and hatchlings system, where you can collect and customize your birds and hatchlings. You can find hats of different styles, themes, and rarities in the game, such as cowboy hats, pirate hats, ninja hats, and more. You can also level up your hats and make them more powerful. You can collect hatchlings by hatching eggs that you find in the game or by buying them with gems. You can also feed, play, and dress up your hatchlings and watch them grow.

    -

    Fun Factor

    -

    Angry Birds 2 is a game that has a high fun factor that will keep you entertained for hours. The game has a lot of humor and charm in its graphics, sound, and story. The game also has a lot of challenge and variety in its gameplay, levels, and modes. The game also has a lot of interaction and socialization with other players, events, and clans.

    -

    The game is suitable for all ages and audiences, as it is easy to play but hard to master. The game is also free to play but offers in-app purchases for those who want to enhance their experience or support the developers.

    -

    Conclusion

    -

    Angry Birds 2 is a game that is worth playing for fans of the Angry Birds franchise and for puzzle game lovers in general. It is a game that has stunning graphics, addictive gameplay, and tons of content that will keep you entertained for hours. It is a game that has many features that make it fun and exciting, such as randomly generated levels, the Arena mode, events and clans, hats and hatchlings, and more. It is a game that has a high fun factor that will make you laugh and smile.

    -

    If you want to download and install Angry Birds 2 3.4.2 APK on your device, you can follow the steps we mentioned above. You can also find more information about the game on its official website or on its app store page.

    -

    So, what are you waiting for? Download Angry Birds 2 3.4.2 APK today and join the fun!

    -

    FAQs

    -

    Here are some frequently asked questions and answers about Angry Birds 2:

    -
      -
    1. Q: Is Angry Birds 2 free to play?
      -A: Yes, Angry Birds 2 is free to play, but it offers in-app purchases for those who want to enhance their experience or support the developers.
    2. -
    3. Q: How can I level up my birds?
      -A: You can level up your birds by using feathers that you can earn by playing the game or by buying them with gems. You can also level up your birds by using hats that you can find or buy in the game.
    4. -
    5. Q: How can I join a clan?
      -A: You can join a clan by tapping on the clan icon on the top right corner of the screen. You can either create your own clan or join an existing one. You can also search for clans by name or by code.
    6. -
    7. Q: How can I get more gems?
      -A: You can get more gems by completing daily quests, participating in events, winning matches in the Arena mode, or buying them with real money.
    8. -
    9. Q: How can I contact the developers?
      -A: You can contact the developers by tapping on the settings icon on the top right corner of the screen and then tapping on the help icon. You can also visit their official website or their social media pages.
    10. -

    401be4b1e0
    -
    -
    \ No newline at end of file diff --git a/spaces/congsaPfin/Manga-OCR/logs/Dead Space Remake How to Use the Pulse Rifle Exploit for Unlimited Money and Ammo.md b/spaces/congsaPfin/Manga-OCR/logs/Dead Space Remake How to Use the Pulse Rifle Exploit for Unlimited Money and Ammo.md deleted file mode 100644 index 96d5df001b82114a2221bcff3b1d3cee334c2c9c..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/Dead Space Remake How to Use the Pulse Rifle Exploit for Unlimited Money and Ammo.md +++ /dev/null @@ -1,109 +0,0 @@ - - -

    Dead Space apk mod unlimited money: A guide for horror fans

    -

    If you are a fan of survival horror games, you might have heard of Dead Space, a sci-fi classic that features a terrifying story, a strategic combat system, and a cosmic horror atmosphere. But did you know that you can also play this game with an apk mod that gives you unlimited money? In this article, we will explain what Dead Space is, what an apk mod is, how to get and use the Dead Space apk mod unlimited money, and some tips and tricks to enjoy this game even more.

    -

    dead space apk mod unlimited money


            Download File: https://urlca.com/2uO79M
            



    -

    What is Dead Space and why is it a great horror game?

    -

    Dead Space is a science fiction survival horror video game that was released in 2008 by EA. The game follows Isaac Clarke, an engineer who is sent to repair a spaceship called the USG Ishimura, only to find out that it has been overrun by monstrous creatures called Necromorphs. Isaac must fight his way through the ship, using various weapons and tools to dismember the enemies, while also uncovering the mystery behind the origin and purpose of the Necromorphs.

    -

    The story and setting of Dead Space

    -

    The story of Dead Space is set in the 26th century, when humanity has exhausted most of its natural resources and has resorted to mining planets for minerals. The Ishimura is one of these mining ships, called planet-crackers, that can break apart entire worlds. However, during one of its missions, the Ishimura encounters a mysterious alien artifact called the Marker, which triggers a series of events that lead to the creation of the Necromorphs.

    -

    The Necromorphs are reanimated corpses that have been mutated by an infection that spreads through dead tissue. They are extremely hostile and violent, and can only be killed by severing their limbs. They also have a hive mind that is controlled by larger Necromorphs called Brethren Moons, which are planet-sized entities that seek to consume all life in the universe.

    -

    Isaac's mission is complicated by several factors, such as his deteriorating mental state due to exposure to the Marker's influence, his personal connection to his girlfriend Nicole who was part of the Ishimura's crew, and his involvement in a conflict between two factions: the Earth Government and a religious cult called Unitology. Unitology worships the Markers as divine objects that can bring about human transcendence, and seeks to sabotage Isaac's efforts to stop the Necromorph outbreak.

    -

    dead space apk mod unlimited credits
    -dead space apk mod unlimited nodes
    -dead space apk mod unlimited ammo
    -dead space apk mod unlimited health
    -dead space apk mod unlimited resources
    -dead space apk mod free download
    -dead space apk mod offline
    -dead space apk mod latest version
    -dead space apk mod no root
    -dead space apk mod android
    -dead space apk mod obb
    -dead space apk mod hack
    -dead space apk mod cheat
    -dead space apk mod menu
    -dead space apk mod god mode
    -dead space remake apk mod unlimited money
    -dead space 2 apk mod unlimited money
    -dead space 3 apk mod unlimited money
    -dead space extraction apk mod unlimited money
    -dead space ignition apk mod unlimited money
    -download dead space apk mod unlimited money
    -how to install dead space apk mod unlimited money
    -how to play dead space apk mod unlimited money
    -how to get infinite money in the dead space remake[^1^]
    -how to use the pulse rifle exploit to get free ammo in the dead space remake[^1^]
    -how to reset upgrades in the dead space remake[^1^]
    -best place to use the exploit in the dead space remake[^1^]
    -how to get infinite nodes in the dead space remake[^1^]
    -how to get infinite health in the dead space remake[^1^]
    -how to get infinite resources in the dead space remake[^1^]
    -how to get infinite credits in the dead space remake[^1^]
    -how to get infinite ammo in the dead space remake[^1^]
    -how to get infinite stasis in the dead space remake[^1^]
    -how to get infinite oxygen in the dead space remake[^1^]
    -how to get infinite inventory slots in the dead space remake[^1^]
    -how to unlock all weapons in the dead space remake[^1^]
    -how to unlock all suits in the dead space remake[^1^]
    -how to unlock all achievements in the dead space remake[^1^]
    -how to unlock all modes in the dead space remake[^1^]
    -how to unlock all secrets in the dead space remake[^1^]
    -how to beat all bosses in the dead space remake[^1^]
    -how to beat all chapters in the dead space remake[^1^]
    -how to beat all puzzles in the dead space remake[^1^]
    -how to beat all challenges in the dead space remake[^1^]
    -how to beat all necromorphs in the dead space remake[^1^]
    -how to survive on impossible difficulty in the dead space remake[^1^]
    -how to speedrun the dead space remake[^1^]
    -how to glitch the dead space remake[^1^]
    -how to fix bugs and errors in the dead space remake[^1^]
    -how to update the dead space remake[^1^]

    -

    The gameplay and features of Dead Space

    -

            The gameplay of Dead Space is based on three main aspects: combat, exploration, and puzzle-solving. The combat system requires you to aim for the Necromorphs' limbs and sever them with your weapons and tools, a tactic known as strategic dismemberment. Exploration has you searching the Ishimura's rooms and corridors for ammo, health, and upgrades, while puzzle-solving draws on Isaac's engineering abilities, such as stasis to slow objects down and kinesis to move them.
            

    The benefits and drawbacks of using the Dead Space apk mod unlimited money

    -

    Using the Dead Space apk mod unlimited money can have some benefits and drawbacks for your gaming experience. Here are some of them:

    -

    Benefits

    -
      -
    • You can buy any weapon, suit, or upgrade node you want without worrying about the cost. This can make the game easier and more fun, as you can experiment with different combinations and strategies.
    • -
    • You can enjoy the game without having to grind for credits or nodes. This can save you time and frustration, as you don't have to repeat levels or search for hidden items.
    • -
    • You can unlock achievements and trophies faster and easier. Some of them require you to spend a certain amount of credits or nodes, which can be tedious in the original game.
    • -
    -

    Drawbacks

    -
      -
    • You can lose the challenge and tension of the game. Part of what makes Dead Space a great horror game is the scarcity of resources and the need to manage them wisely. Having unlimited money can make the game too easy and boring, as you don't have to worry about ammo, health, or stasis.
    • -
    • You can risk getting banned or suspended from online services. Using a modded apk can violate the terms and conditions of the game developer or publisher, as well as the platform you are playing on. They can detect if you are using an unauthorized version of the game and take action against your account.
    • -
    • You can expose your device to malware or viruses. Downloading and installing an apk mod from an unofficial source can be dangerous, as it may contain harmful software that can damage your device or steal your personal information. You should always be careful and use a reputable antivirus program before opening any apk file.
    • -
    -

    These are some of the pros and cons of using the Dead Space apk mod unlimited money. Ultimately, it is up to you to decide whether you want to use it or not, depending on your preferences and goals. However, we recommend that you play the game as it was intended by the developers, as it will give you a more authentic and satisfying experience.

    -

    The tips and tricks to enjoy the Dead Space apk mod unlimited money

    -

    If you decide to use the Dead Space apk mod unlimited money, here are some tips and tricks to help you enjoy it more:

    -
      -
    • Use different weapons and suits for each chapter. This will add some variety and challenge to your gameplay, as well as let you see how each weapon and suit performs in different situations.
    • -
    • Try playing on a higher difficulty level. This will increase the number and strength of the enemies, as well as reduce the amount of ammo and health available. This will make the game more challenging and exciting, even if you have unlimited money.
    • -
    • Explore every corner of the Ishimura. The game has a lot of hidden secrets and easter eggs that you can discover by exploring every room and corridor. You can also find audio logs, text logs, and video logs that reveal more about the story and lore of Dead Space.
    • -
    • Play with headphones and in a dark room. This will enhance the atmosphere and immersion of the game, as well as make it more scary and thrilling. You will hear every sound and scream in detail, as well as feel every jump scare and shock moment.
    • -
    • Have fun! Dead Space is a great game that offers a lot of entertainment and enjoyment. Whether you use the apk mod or not, you should have fun playing it and appreciate its quality and creativity.
    • -
    -

    These are some of the tips and tricks to enjoy the Dead Space apk mod unlimited money. We hope that they will help you have a better gaming experience.

    -

    Conclusion

    -

    In this article, we have explained what Dead Space is, what an apk mod is, how to get and use the Dead Space apk mod unlimited money, and some tips and tricks to enjoy it more. We have also discussed some of the benefits and drawbacks of using this modded version of the game.

    -

    We hope that this article has been informative and helpful for you. If you have any questions or comments, please feel free to leave them below. Thank you for reading!

    -

    Frequently Asked Questions

    -

    Here are some frequently asked questions about Dead Space apk mod unlimited money:

    -
      -
    1. Where can I download the Dead Space apk mod unlimited money?
      -You can download it from various websites that offer apk files for Android devices. However, we advise you to be careful and use a reliable antivirus program before opening any apk file, as some of them may contain malware or viruses.
    2. -
            
    5. Is it legal to use the Dead Space apk mod unlimited money?
      -It depends on the laws and regulations of your country and region. Generally speaking, using an apk mod is not illegal, but it may violate the terms and conditions of the game developer or publisher, as well as the platform you are playing on. They can take action against your account if they detect that you are using an unauthorized version of the game. Therefore, we recommend that you use the apk mod at your own risk and discretion.
    6. -
    7. Will the Dead Space apk mod unlimited money affect my progress and save data?
      -No, the Dead Space apk mod unlimited money will not affect your progress and save data. You can play the game normally and save your progress as usual. However, you should backup your save data before installing the apk mod, just in case something goes wrong or you want to revert to the original version of the game.
    8. -
    9. Can I play online with the Dead Space apk mod unlimited money?
      -No, you cannot play online with the Dead Space apk mod unlimited money. The game does not have a multiplayer mode, and the apk mod will not work with online services such as Google Play Games or EA Origin. You can only play offline with the apk mod.
    10. -
    11. Are there any other mods for Dead Space?
      -Yes, there are other mods for Dead Space that you can find online. Some of them offer different features and enhancements, such as improved graphics, sound effects, controls, or gameplay. However, we cannot guarantee the quality or safety of these mods, so you should use them at your own risk and discretion.
    12. -
    -

    401be4b1e0
    -
    -
    \ No newline at end of file diff --git a/spaces/congsaPfin/Manga-OCR/logs/Download Low Battery Videos for Free - Royalty-Free Stock Footage.md b/spaces/congsaPfin/Manga-OCR/logs/Download Low Battery Videos for Free - Royalty-Free Stock Footage.md deleted file mode 100644 index 44b00dd9ad731afc325c93dcd77c258b71817bf7..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/Download Low Battery Videos for Free - Royalty-Free Stock Footage.md +++ /dev/null @@ -1,120 +0,0 @@ - -

    How to Download Low Battery Videos for Free

    -

    Have you ever seen a video that shows a low battery icon on a device screen? These are called low battery videos, and they are very popular and useful for various purposes. In this article, you will learn what low battery videos are, where to find them, how to download them, and how to use them.

    -

    low battery video - download


            DOWNLOAD: https://urlca.com/2uO8uZ
            



    -

    What are Low Battery Videos?

    -

    Low battery videos are short clips that show a device screen with a low battery indicator. They can be used to convey a sense of urgency, frustration, or humor. For example, you can use a low battery video to show that your phone is about to die in the middle of an important call, or that your laptop is running out of power while you are working on a project.

    -

    Low battery videos can also be used as transitions, backgrounds, overlays, or effects in your video projects. They can add some interest and variety to your videos, as well as create a connection with your audience. For instance, you can use a low battery video to transition from one scene to another, or to create a contrast between two different situations.

    -

    Where to Find Low Battery Videos?

    -

    There are many sources of free low battery stock video footage online. You can find them on websites that offer free video clips for personal or commercial use. Some examples are:

    -
      -
    • Videezy: This website has 287 free low battery stock videos that you can download and use for your projects.
    • -
    • Videvo: This website has 305 free low resolution stock videos that you can use as low battery effects.
    • -
            • Pixabay: This website also offers free stock videos, including low battery clips, that you can download and use in your projects.
            • -
            • Click the download button or link that corresponds to the video format and quality that you want. You may need to sign up or log in to the website to download some videos.
            
    • -
    • Save the video file to your computer or device. You can choose the location and name of the file.
    • - -

      If you want to download low battery videos from other sources, such as YouTube, Vimeo, or Facebook, you will need to use some tools or software that can help you download videos from these platforms. Some examples are:

      -

      low battery video - download free
      -download low battery warning video
      -low battery animation video download
      -low battery video clip download
      -low battery sound effect video download
      -download video of low battery indicator
      -low battery alert video download
      -low battery status video download
      -low battery message video download
      -low battery notification video download
      -download low battery screen video
      -low battery symbol video download
      -low battery icon video download
      -low battery logo video download
      -low battery prank video download
      -download video showing low battery
      -low battery charge video download
      -low battery phone video download
      -low battery laptop video download
      -low battery car video download
      -download low battery funny video
      -low battery meme video download
      -low battery tiktok video download
      -low battery comedy video download
      -low battery horror video download
      -download low battery hd video
      -low battery 4k video download
      -low battery 1080p video download
      -low battery 720p video download
      -low battery mp4 video download
      -download low battery stock video footage
      -low battery royalty free video download
      -low battery creative commons video download
      -low battery editorial use only video download
      -low battery after effects video download
      -download low battery green screen video
      -low battery chroma key video download
      -low battery transparent background video download
      -low battery overlay video download
      -low battery intro video download
      -download low battery youtube video
      -low battery facebook video download
      -low battery instagram video download
      -low battery twitter video download
      -low battery whatsapp status video download
      -download how to fix low battery problem video
      -how to charge a low battery fast video download
      -how to extend a low battery life video download

      -
        -
      • 4K Video Downloader: This is a free software that allows you to download videos from YouTube, Vimeo, Facebook, and other websites in high quality. You can also download playlists, channels, subtitles, and 3D videos.
      • -
      • ClipGrab: This is a free software that lets you download and convert videos from YouTube, Vimeo, Facebook, and other websites. You can choose from various formats and qualities, such as MP4, WMV, OGG, MP3, and more.
      • -
      • Online Video Converter: This is a free online tool that enables you to download and convert videos from YouTube, Vimeo, Facebook, and other websites. You can select from different formats and qualities, such as MP4, AVI, MOV, MP3, and more.
      • -
      -

      To use these tools or software, you will need to copy the URL of the video that you want to download, paste it into the tool or software, choose the format and quality that you want, and click on the download button or link.
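            For readers comfortable with a little scripting, the same copy-the-URL workflow can be automated. The sketch below is only an illustration: it uses the yt-dlp Python package, which is not one of the tools reviewed above, and the URL and output folder are placeholders you would replace with your own. Only download videos you have the right to use, as discussed in the copyright tips later in this article.

            ```python
            # Minimal sketch: download a clip by URL with the yt-dlp Python package (pip install yt-dlp).
            # The URL below is a placeholder, not a real low battery video.
            from yt_dlp import YoutubeDL

            video_url = "https://www.youtube.com/watch?v=EXAMPLE_ID"  # replace with the clip you want

            options = {
                # Prefer a ready-made MP4; otherwise take the best available format.
                # Merging separate video and audio streams requires ffmpeg on your PATH.
                "format": "best[ext=mp4]/bestvideo+bestaudio/best",
                # Save into a "low_battery" folder, named after the video title.
                "outtmpl": "low_battery/%(title)s.%(ext)s",
            }

            with YoutubeDL(options) as downloader:
                downloader.download([video_url])
            ```

            The graphical tools listed above do the same job without any code, so this is simply an option for people who want to batch their downloads.
            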

      -

      How to Use Low Battery Videos?

      -

      Once you have downloaded low battery videos, you can use them for various purposes. Here are some ways to incorporate low battery videos into your projects:

      -
        -
      • Use low battery videos as transitions between scenes or segments in your video. You can use them to create a sense of continuity or contrast in your video. For example, you can use a low battery video to transition from a scene where someone is having a good time to a scene where someone is having a bad time.
      • -
      • Use low battery videos as backgrounds for your text or graphics in your video. You can use them to create a mood or atmosphere in your video. For example, you can use a low battery video as a background for a text that says "Hurry up!" or "Don't miss this opportunity!"
      • -
      • Use low battery videos as overlays or effects on top of your main video. You can use them to add some interest or variety to your video. For example, you can use a low battery video as an overlay on a video of a product or service that you are promoting.
      • -
      -

      To use low battery videos in your projects, you will need to import them into your video editing software or app. You can then adjust the size, position, duration, opacity, and other settings of the low battery videos according to your preferences.
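            As a rough sketch of that overlay workflow, the snippet below uses the moviepy 1.x Python library, which is an assumption on my part rather than a tool named in this article; the file names are placeholders.

            ```python
            # Minimal sketch: overlay a low battery clip on a main video with moviepy 1.x.
            # "main.mp4" and "low_battery.mp4" are placeholder file names.
            from moviepy.editor import VideoFileClip, CompositeVideoClip

            main_clip = VideoFileClip("main.mp4")

            overlay = (
                VideoFileClip("low_battery.mp4")
                .resize(width=main_clip.w // 4)      # shrink the overlay to a quarter of the frame width
                .set_position(("right", "top"))      # pin it to the top-right corner
                .set_opacity(0.8)                    # let the main video show through slightly
                .set_start(2)                        # make it appear 2 seconds into the main video
            )

            final = CompositeVideoClip([main_clip, overlay]).set_duration(main_clip.duration)
            final.write_videofile("main_with_low_battery.mp4", codec="libx264", audio_codec="aac")
            ```

            A GUI editor such as the ones mentioned in the FAQ below will give you the same result with drag-and-drop controls.
            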

      -

      Conclusion

      -

      Low battery videos are short clips that show a device screen with a low battery indicator. They are very popular and useful for various purposes. You can find free low battery stock video footage online on websites like Videezy, Videvo, or Pixabay. You can download them easily and quickly by following some simple steps. You can also use tools or software like 4K Video Downloader, ClipGrab, or Online Video Converter to download low battery videos from other sources like YouTube, Vimeo, or Facebook. You can use low battery videos in your projects as transitions, backgrounds, overlays, or effects. They can help you convey a sense of urgency, frustration, or humor, as well as create a connection with your audience.

      -

      We hope this article has helped you learn how to download low battery videos for free and how to use them in your projects. If you have any questions or comments, please feel free to leave them below. And don't forget to share this article with your friends and colleagues who might find it useful. Thanks for reading!

      -

      FAQs

      -

      Here are some frequently asked questions about low battery videos:

      -

      Q1: What are some common uses of low battery videos?

      -

      A1: Some common uses of low battery videos are:

      -
        -
      • To create a sense of urgency or scarcity in your marketing or sales videos. For example, you can use a low battery video to show that your offer is limited or expiring soon.
      • -
      • To create a sense of frustration or humor in your comedy or prank videos. For example, you can use a low battery video to show that your prank victim's phone is dying at the worst possible moment.
      • -
      • To create a sense of contrast or irony in your storytelling or documentary videos. For example, you can use a low battery video to show that your protagonist's device is running out of power while they are trying to achieve something important.
      • -
      -

      Q2: How can I edit low battery videos after downloading them?

      -

      A2: You can edit low battery videos after downloading them using any video editing software or app that you prefer. Some examples are:

      -
        -
      • Adobe Premiere Pro: This is a professional video editing software that offers a lot of features and tools for editing low battery videos. You can trim, crop, rotate, resize, adjust the color, add transitions, effects, text, and more.
      • -
      • Filmora: This is a user-friendly video editing software that is suitable for beginners and intermediate users. You can edit low battery videos easily and quickly with its intuitive interface and drag-and-drop functionality. You can also add filters, stickers, titles, music, and more.
      • -
      • InShot: This is a popular video editing app that you can use on your mobile devices. You can edit low battery videos on the go with its simple and powerful features. You can also add stickers, emojis, text, music, and more.
      • -
      -

      Q3: How can I avoid copyright issues when using low battery videos?

      -

      A3: You can avoid copyright issues when using low battery videos by following these tips:

      -
        -
      • Use low battery videos that are free for personal or commercial use. You can find them on websites that offer free stock video footage with no attribution required.
      • -
      • Use low battery videos that are licensed under Creative Commons or other similar licenses. You can find them on websites that offer free stock video footage with attribution required. You will need to give credit to the original creator of the video according to the license terms.
      • -
      • Use low battery videos that you have created yourself or have permission to use from the original creator. You can create your own low battery videos using your device screen or using online tools like Low Battery Video Maker.
      • -
      -

      Q4: How can I optimize low battery videos for different platforms?

      -

      A4: You can optimize low battery videos for different platforms by following these tips:

      -
        -
      • Choose the right format and quality for your low battery videos. Different platforms have different requirements and preferences for video formats and qualities. For example, YouTube prefers MP4 format and 1080p quality, while Instagram prefers MOV format and 720p quality.
      • -
      • Choose the right size and aspect ratio for your low battery videos. Different platforms have different sizes and aspect ratios for displaying videos. For example, YouTube displays videos in 16:9 aspect ratio, while Instagram displays videos in 1:1 aspect ratio.
      • -
            • Choose the right length and duration for your low battery videos. Different platforms have different limits and recommendations for video lengths and durations. For example, YouTube allows up to 12 hours of video length, while Instagram allows up to 60 seconds of video duration. One way to apply the aspect ratio and duration tips is shown in the sketch after this list.
            
      • -
      -
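            Following the aspect ratio and duration tips above, here is a minimal sketch that squares a clip and caps it at 60 seconds for an Instagram-style upload. It again assumes the moviepy 1.x library, which this article does not mention, and the file names are placeholders.

            ```python
            # Minimal sketch: crop a clip to a 1:1 aspect ratio and cap it at 60 seconds (moviepy 1.x).
            # "low_battery.mp4" and "low_battery_square.mp4" are placeholder file names.
            from moviepy.editor import VideoFileClip, vfx

            clip = VideoFileClip("low_battery.mp4")

            # Use the shorter side as the square size and crop around the centre of the frame.
            side = min(clip.w, clip.h)
            square = clip.fx(vfx.crop, width=side, height=side,
                             x_center=clip.w / 2, y_center=clip.h / 2)

            # Keep at most the first 60 seconds, per the Instagram duration tip above.
            square = square.subclip(0, min(60, square.duration))

            square.write_videofile("low_battery_square.mp4", codec="libx264", audio_codec="aac")
            ```
            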

      Q5: How can I find more low battery videos online?

      -

      A5: You can find more low battery videos online by using these methods:

      -
        -
      • Use search engines like Google or Bing to search for keywords like "low battery video", "low battery stock video", "low battery effect", etc.
      • -
            • Use social media platforms like YouTube, Vimeo, Facebook, Instagram, TikTok, etc. to browse for low battery videos that other users have uploaded or shared.
            

      • -
      • Use online tools like Low Battery Video Generator or Low Battery Video Maker to create your own low battery videos with custom settings and options.
      • -

      401be4b1e0
      -
      -
      \ No newline at end of file diff --git a/spaces/congsaPfin/Manga-OCR/logs/Get Ready for the Perfect Avenger MOD APK - No Ads Unlimited Money and Gems Included.md b/spaces/congsaPfin/Manga-OCR/logs/Get Ready for the Perfect Avenger MOD APK - No Ads Unlimited Money and Gems Included.md deleted file mode 100644 index 6b34bf84ebc3ea4436963e73e6a87646e97e81aa..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/Get Ready for the Perfect Avenger MOD APK - No Ads Unlimited Money and Gems Included.md +++ /dev/null @@ -1,102 +0,0 @@ -
      -

      Perfect Avenger Mod APK: The Ultimate Action Game for Android

      -

      If you are a fan of superhero games, you will love Perfect Avenger, a thrilling action game that lets you become one of the legendary heroes and fight against evil forces. In this game, you can choose from a variety of characters, each with their own unique abilities and weapons, and complete various missions to save the world. You can also customize your hero's appearance and equipment, and upgrade your skills to become more powerful.

      -

      However, if you want to enjoy the full potential of this game, you will need to download Perfect Avenger Mod APK, a premium version of Perfect Avenger that unlocks all features and removes ads. With this mod, you will have unlimited money and resources to buy anything you want, and you will not have to watch any annoying ads or wait for energy to refill. You will also be able to access all the characters, weapons, and missions without any restrictions.

      -

      perfect avengers mod apk no ads


      Download File 🆗 https://urlca.com/2uO7ko



      -

      In this article, we will tell you everything you need to know about Perfect Avenger Mod APK, including what it is, how to download and install it on your device, and why you should play it. We will also answer some frequently asked questions about this mod. So, without further ado, let's get started!

      -

      What is Perfect Avenger?

      -

      A thrilling superhero game with amazing graphics and gameplay

      -

      Perfect Avenger is an action game developed by Jojoy, a popular gaming studio that has created many other successful games such as Stickman Legends and Zombie Hunter. In this game, you can immerse yourself in a stunning 3D world full of superheroes and villains, where you can unleash your inner hero and fight for justice.

      -

      The game has amazing graphics that will make you feel like you are in a movie. The characters are well-designed and animated, and the environments are realistic and detailed. The game also has a dynamic soundtrack that matches the mood of each scene, and sound effects that enhance the immersion.

      -

      The gameplay of Perfect Avenger is fast-paced and exciting, as you have to use your skills and weapons to defeat various enemies and bosses. You can also use special moves and combos to deal more damage and create spectacular effects. The game has a simple control system that allows you to move, jump, attack, dodge, and switch weapons easily. You can also adjust the sensitivity and layout of the buttons according to your preference.

      -

      A variety of characters, weapons, and missions to choose from

      -

            One of the best features of Perfect Avenger is that it offers a lot of diversity and customization options. You can choose from over 20 different characters, each with their own personality, backstory, appearance, voice, skills, and weapons. Some of the characters are inspired by famous superheroes such as Iron Man, Captain America, Spider-Man, Thor, Hulk, Black Widow, Hawkeye, Ant-Man, Black Panther, Doctor Strange, Captain Marvel, and more. You can also unlock and collect other original characters that have their own unique abilities and weapons.
            

      -

      You can also customize your character's appearance and equipment by changing their costumes, masks, helmets, gloves, boots, capes, and accessories. You can mix and match different items to create your own style and look. You can also upgrade your character's skills and weapons by spending coins and gems that you earn from completing missions and achievements.

      -

      The game has a variety of missions that you can play in different modes. You can play the story mode, where you have to follow the plot and complete objectives to progress. You can also play the challenge mode, where you have to face waves of enemies and survive as long as you can. You can also play the arena mode, where you can compete with other players online and rank up on the leaderboard. The game also has daily and weekly events that offer rewards and bonuses.

      -

      perfect avengers mod apk unlimited money and gems
      -perfect avengers hack apk download free
      -perfect avengers modded apk latest version
      -perfect avengers apk mod no root required
      -perfect avengers mod apk offline gameplay
      -perfect avengers cheat apk for android
      -perfect avengers mod apk with all characters unlocked
      -perfect avengers hacked apk no verification
      -perfect avengers mod apk high damage and health
      -perfect avengers mod apk without ads and pop-ups
      -perfect avengers premium apk free download
      -perfect avengers mod apk full game unlocked
      -perfect avengers cracked apk for ios
      -perfect avengers mod apk with unlimited resources
      -perfect avengers hack apk no survey or password
      -perfect avengers mod apk easy installation
      -perfect avengers modded apk with anti-ban feature
      -perfect avengers apk mod online multiplayer mode
      -perfect avengers cheat apk with auto-update function
      -perfect avengers mod apk without any bugs or errors
      -perfect avengers pro apk no ads and in-app purchases
      -perfect avengers mod apk best graphics and sound quality
      -perfect avengers hacked apk with all levels and missions unlocked
      -perfect avengers mod apk fast and smooth performance
      -perfect avengers modded apk safe and secure download link
      -perfect avengers hack apk with unlimited coins and diamonds
      -perfect avengers mod apk fun and addictive gameplay
      -perfect avengers cheat apk compatible with all devices
      -perfect avengers mod apk with all skins and costumes unlocked
      -perfect avengers hacked apk with all weapons and items unlocked
      -perfect avengers modded apk with realistic physics and animations
      -perfect avengers hack apk with unlimited energy and stamina
      -perfect avengers cheat apk with all achievements and rewards unlocked
      -perfect avengers modded apk with custom settings and options
      -perfect avengers hack apk with unlimited lives and revives
      -perfect avengers cheat apk with all modes and features unlocked
      -perfect avengers modded apk with awesome effects and filters
      -perfect avengers hack apk with unlimited skills and abilities
      -perfect avengers cheat apk with all secrets and tips unlocked
      -perfect avengers modded apk with advanced controls and interface

      -

      What is Perfect Avenger Mod APK?

      -

      A premium version of Perfect Avenger that unlocks all features and removes ads

      -

      Perfect Avenger Mod APK is a modified version of Perfect Avenger that gives you access to all the features and content of the game without any limitations or costs. With this mod, you will be able to enjoy the following benefits:

      -
        -
      • Unlimited money and resources: You will have unlimited coins and gems that you can use to buy anything you want in the game, such as costumes, weapons, skills, and upgrades. You will also have unlimited energy that you can use to play as many missions as you want without waiting for it to refill.
      • -
      • No ads: You will not have to watch any ads or pop-ups that interrupt your gameplay or waste your time. You will also not have to pay any money to remove them.
      • -
      • All characters, weapons, and missions unlocked: You will be able to access all the characters, weapons, and missions in the game without having to unlock them by playing or paying. You will be able to choose any character you want and switch between them anytime. You will also be able to play any mission you want in any mode you want.
      • -
      -

      How to download and install Perfect Avenger Mod APK on your device

      -

      If you want to download and install Perfect Avenger Mod APK on your device, you will need to follow these simple steps:

      -
        -
      1. Download the mod file: You will need to download the mod file from a reliable source that offers the latest version of the mod. You can use this link to download the mod file directly on your device.
      2. -
      3. Enable unknown sources: You will need to enable unknown sources on your device settings to allow the installation of apps from sources other than the Google Play Store. To do this, go to Settings > Security > Unknown Sources and toggle it on.
      4. -
      5. Install the mod file: You will need to locate the mod file on your device storage and tap on it to start the installation process. Follow the instructions on the screen and wait for it to finish.
      6. -
      7. Launch the game: You will need to launch the game from your app drawer or home screen and enjoy playing Perfect Avenger Mod APK with all features unlocked.
      8. -
      -

      Why should you play Perfect Avenger Mod APK?

      -

      Enjoy unlimited money and resources to upgrade your skills and equipment

      -

      One of the main reasons why you should play Perfect Avenger Mod APK is that it gives you unlimited money and resources that you can use to upgrade your skills and equipment. This way, you will be able to make your character more powerful and efficient in combat. You will be able to unlock new skills and combos that will help you defeat enemies faster and easier. You will also be able to buy new weapons and accessories that will enhance your performance and appearance. You will not have to worry about running out of money or resources or grinding for them in the game.

      -

      Experience the ultimate action and adventure without any interruptions or limitations

      -

      Another reason why you should play Perfect Avenger Mod APK is that it lets you experience the ultimate action and adventure without any interruptions or limitations. You will be able to play as many missions as you want without having to wait for energy to refill or watch ads. You will also be able to access all the characters, weapons, and missions without having to unlock them by playing or paying. You will be able to enjoy the full potential of this game without any restrictions or costs.

      -

      Compare your scores and achievements with other players online

      -

      A final reason why you should play Perfect Avenger Mod APK is that it allows you to compare your scores and achievements with other players online. You will be able to connect your game account with your Facebook account and share your progress and results with your friends. You will also be able to join the online community of Perfect Avenger players and chat with them, exchange tips and tricks, and challenge them to duels. You will be able to show off your skills and rank up on the global leaderboard. You will also be able to earn rewards and trophies for completing achievements and milestones.

      -

      Conclusion

      -

      Perfect Avenger Mod APK is the best choice for fans of superhero games

      -

      In conclusion, Perfect Avenger Mod APK is the best choice for fans of superhero games who want to enjoy the ultimate action and adventure on their Android devices. This mod gives you access to all the features and content of the game without any limitations or costs. You will be able to play as your favorite heroes, customize their appearance and equipment, upgrade their skills and weapons, and complete various missions to save the world. You will also be able to enjoy unlimited money and resources, no ads, and online features.

      -

      Download it now and join the epic battle against evil forces

      -

      If you are ready to download Perfect Avenger Mod APK and join the epic battle against evil forces, you can use this link to get the mod file directly on your device. You can also follow the instructions above to install it easily and safely. Once you launch the game, you will be able to create your own hero and start your adventure. You will also be able to connect with other players online and compare your scores and achievements. So, what are you waiting for? Download Perfect Avenger Mod APK now and unleash your inner hero!

      -

      FAQs

      -

      Is Perfect Avenger Mod APK safe to use?

      -

      Yes, Perfect Avenger Mod APK is safe to use, as long as you download it from a reliable source that offers the latest version of the mod. The mod file does not contain any viruses or malware that can harm your device or compromise your privacy. However, you should always scan the mod file before installing it, and use a VPN service if you want to protect your online identity.
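            The "scan before installing" advice can be paired with a basic integrity check. The sketch below is an extra precaution of my own, not something the mod's distributors describe: it computes a SHA-256 hash of the downloaded file so you can compare it against a checksum published by the site you downloaded from. The file name and expected hash are placeholders.

            ```python
            # Minimal sketch: verify a downloaded APK against a published SHA-256 checksum.
            # Both the file name and the expected hash below are placeholders.
            import hashlib

            APK_PATH = "perfect_avenger_mod.apk"
            EXPECTED_SHA256 = "0123456789abcdef..."  # hypothetical value taken from the download page

            def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
                """Return the SHA-256 hex digest of a file, read in 1 MiB chunks."""
                digest = hashlib.sha256()
                with open(path, "rb") as f:
                    for chunk in iter(lambda: f.read(chunk_size), b""):
                        digest.update(chunk)
                return digest.hexdigest()

            actual = sha256_of(APK_PATH)
            if actual == EXPECTED_SHA256:
                print("Checksum matches; the file was not corrupted or altered in transit.")
            else:
                print(f"Checksum mismatch: expected {EXPECTED_SHA256}, got {actual}. Do not install this file.")
            ```

            A matching hash only shows that the file is the one the site published; it does not prove the file is safe, so it complements rather than replaces the antivirus scan mentioned above.
            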

      -

      Do I need to root my device to install Perfect Avenger Mod APK?

      -

      No, you do not need to root your device to install Perfect Avenger Mod APK. The mod works on both rooted and non-rooted devices, as it does not require any special permissions or access. However, you should always backup your data before installing any mod, in case something goes wrong or you want to restore your original game.

      -

      What are the minimum requirements to play Perfect Avenger Mod APK?

      -

      The minimum requirements to play Perfect Avenger Mod APK are as follows:

      -
        -
      • An Android device running on version 4.4 or higher
      • -
      • At least 1 GB of RAM
      • -
      • At least 200 MB of free storage space
      • -
      • A stable internet connection
      • -
      -

      How can I update Perfect Avenger Mod APK to the latest version?

      -

      To update Perfect Avenger Mod APK to the latest version, you will need to download the new mod file from the same source that you used before, and install it over the existing one. You do not need to uninstall the previous version, as it will be overwritten by the new one. However, you should always backup your data before updating any mod, in case something goes wrong or you want to revert back.

      -

      How can I contact the developers of Perfect Avenger Mod APK?

      -

      If you have any questions, suggestions, feedback, or issues regarding Perfect Avenger Mod APK, you can contact the developers of the mod by visiting their official website or Facebook page. You can also leave a comment on their YouTube channel or email them at jojoy@gmail.com. They will try to respond as soon as possible and help you with your queries.

      401be4b1e0
      -
      -
      \ No newline at end of file diff --git a/spaces/contluForse/HuggingGPT/assets/Doremisoft Swf Video Converter Crack Free.md b/spaces/contluForse/HuggingGPT/assets/Doremisoft Swf Video Converter Crack Free.md deleted file mode 100644 index 19c80543bc62aa500222ccb68342818e0a8267ad..0000000000000000000000000000000000000000 --- a/spaces/contluForse/HuggingGPT/assets/Doremisoft Swf Video Converter Crack Free.md +++ /dev/null @@ -1,111 +0,0 @@ -
      -

      Doremisoft Swf Video Converter Crack Free: A Review

      -

      If you are looking for a way to convert SWF files to various video formats, you may have come across Doremisoft Swf Video Converter. This software claims to be able to convert any source SWF file, including SWF games, to a wide range of multimedia file formats like AVI, WMV, MPEG3, MPEG2, MP3, WAV, etc. It also boasts of having specific output selections for different devices and applications, such as iPhone, iPad, Samsung Galaxy, Windows Movie Maker, Sony Vegas, Premiere Pro, and more. But is Doremisoft Swf Video Converter worth your money? And can you get it for free with a crack?

      -

      Doremisoft Swf Video Converter Crack Free


      Download Zip ✏ ✏ ✏ https://ssurll.com/2uzxdV



      - -

      What is Doremisoft Swf Video Converter?

      -

      Doremisoft Swf Video Converter is a software that can convert SWF files to various video formats. SWF files are Flash files that contain animations, graphics, sound, and interactivity. They are often used for web games, banners, ads, and videos. However, SWF files are not widely supported by most media players and devices. To play or edit SWF files on different platforms, you need to convert them to other video formats.

      -

      Doremisoft Swf Video Converter can help you with that. It can convert SWF files to almost all popular video formats, such as AVI, MPEG, WMV, MKV, MOV, M4V, DV, RM, FLV, 3GP, 3G2, etc. It can also extract audio from SWF files and save them as MP3, AIFF, WAV, and more. Moreover, it can convert SWF files to HTML5 video formats like MP4/OGG/WebM for web compatibility. It can also save SWF files as images like GIF animation or JPG picture series.

      - -

      What are the features of Doremisoft Swf Video Converter?

      -

      Doremisoft Swf Video Converter has some useful features that make it stand out from other SWF converters. Some of these features are:

      -

      -
        -
      • It can convert SWF files to various video formats with high quality and fast speed.
      • -
      • It can record Flash games and animations while playing and save them as videos.
      • -
      • It can import SWF files to Windows Movie Maker, Sony Vegas, Premiere Pro for editing.
      • -
            • It lets you enjoy SWF videos on iOS devices like iPhone and iPad.
            
      • -
      • It can crop SWF files to remove black borders or unwanted edges.
      • -
      • It can add image watermark or logo to SWF files.
      • -
      • It can keep the .swf file extension if you only need to crop or watermark the file without changing the file format.
      • -
      - -

      How to get Doremisoft Swf Video Converter Crack Free?

      -

      Doremisoft Swf Video Converter is not a free software. It costs $69.00 for a single license. However, some people may try to get it for free with a crack. A crack is a program that modifies the original software to bypass its security or registration features. By using a crack, you may be able to use Doremisoft Swf Video Converter without paying for it.

      -

      However, using a crack is not recommended for several reasons. First of all, it is illegal and unethical. You are violating the copyright and license agreement of the software developer by using a crack. Secondly, it is risky and unsafe. You may download a crack from an untrusted source that contains viruses or malware that can harm your computer or steal your personal information. Thirdly, it is unreliable and unstable. You may encounter errors or bugs while using a cracked version of Doremisoft Swf Video Converter that can ruin your conversion process or damage your files.

      - -

      What is the best alternative to Doremisoft Swf Video Converter Crack Free?

      -

      The best alternative to Doremisoft Swf Video Converter Crack Free is to buy the original software from the official website. By doing so, you will get the following benefits:

      -
        -
      • You will support the software developer and encourage them to create more quality products.
      • -
      • You will get the latest version of Doremisoft Swf Video Converter with all the features and updates.
      • -
      • You will get technical support and customer service from the software developer if you have any problems or questions.
      • -
      • You will get a 30-day money-back guarantee if you are not satisfied with the software.
      • -
      - -

      Conclusion

      -

      Doremisoft Swf Video Converter is a powerful and versatile software that can convert SWF files to various video formats with ease and efficiency. It has many features that make it suitable for different purposes and platforms. However, it is not a free software and using a crack to get it for free is not advisable. The best way to get Doremisoft Swf Video Converter is to buy it from the official website and enjoy its full functionality and benefits.

      -

      How to use Doremisoft Swf Video Converter?

      -

      Doremisoft Swf Video Converter is easy to use and has a user-friendly interface. You can follow these simple steps to convert your SWF files:

      -
        -
      1. Download and install Doremisoft Swf Video Converter from the official website.
      2. -
      3. Launch the software and click the "Select File" button to import your SWF file.
      4. -
      5. Choose the output format and settings according to your needs.
      6. -
      7. Click the "Edit" button to crop, watermark, or adjust the SWF file if you want.
      8. -
      9. Click the "Next" button to start the conversion process.
      10. -
      11. Wait for the conversion to finish and find the converted file in the output folder.
      12. -
      - -

      What are the pros and cons of Doremisoft Swf Video Converter?

      -

      Doremisoft Swf Video Converter has some advantages and disadvantages that you should consider before buying it. Here are some of them:

      - - - - - - - -
            Pros:
            • It can convert SWF files to various video formats with high quality and fast speed.
            • It can record Flash games and animations while playing and save them as videos.
            • It can import SWF files to Windows Movie Maker, Sony Vegas, and Premiere Pro for editing.
            • It lets you enjoy SWF videos on iOS devices like iPhone and iPad.
            • It can crop, watermark, or adjust the SWF file before conversion.

            Cons:
            • It is not free software and costs $69.00 for a single license.
            • It may not support some SWF files that are protected or encrypted.
            • It may not be compatible with some devices or applications that have specific requirements.
            • It may not be able to convert SWF files that contain complex interactivity or scripts.
            • It may not be able to preserve the original quality or effects of the SWF file after conversion.
            
      - -

      Is Doremisoft Swf Video Converter worth buying?

      -

      Doremisoft Swf Video Converter is a powerful and versatile software that can convert SWF files to various video formats with ease and efficiency. It has many features that make it suitable for different purposes and platforms. However, it is not a free software and using a crack to get it for free is not advisable. The best way to get Doremisoft Swf Video Converter is to buy it from the official website and enjoy its full functionality and benefits. If you are looking for a reliable and professional SWF converter, Doremisoft Swf Video Converter is worth buying.

      -

      What are the customer reviews of Doremisoft Swf Video Converter?

      -

      Doremisoft Swf Video Converter has received many positive reviews from customers who have used it. Here are some of the testimonials from the official website:

      -
      -

      "I used the product to convert SWF to MOV file format. It did work well." --- Reviewed by Scott

      -

      "I have tried many SWF converters but this one is the best. It can convert SWF to MP4 with high quality and fast speed. It also supports many devices and applications. I can enjoy my SWF videos on my iPhone and iPad easily." --- Reviewed by Lisa

      -

      "This software is amazing. It can record Flash games and animations while playing and save them as videos. I can share my Flash gameplay with my friends on YouTube and Vimeo. It is very fun and easy to use." --- Reviewed by Mike

      -
      - -

      What are the system requirements of Doremisoft Swf Video Converter?

      -

      Doremisoft Swf Video Converter is compatible with Windows and Mac operating systems. Here are the minimum system requirements for each platform:

      - - - -
            Windows:
            
      OS: Windows 8/7/XP/Vista/2003/2008
      CPU: 1GHz Intel/AMD processor or above
      RAM: 256MB RAM (512MB or above recommended)
      Free Hard Disk: 100MB space for installation
      Graphic Card: Super VGA (800×600) resolution, 16-bit graphics card or higher

            Mac:
            OS: Mac OS X 10.10 (Yosemite), 10.9, 10.8, 10.7 and 10.6
            
      CPU: Intel processor
      RAM: At least 512M physical RAM
      Free Hard Disk: 100MB space for installation
      Graphic Card: Super VGA (800×600) resolution, 16-bit graphics card or higher
      - -

      How to contact Doremisoft Swf Video Converter support team?

      -

      If you have any problems or questions about Doremisoft Swf Video Converter, you can contact the support team by email or online form. The email address is support@doremisoft.com. The online form is available on the official website under the "Contact Us" section. You can also check the FAQ page or the User Guide page for more information and tips.

      -

      How to download Doremisoft Swf Video Converter from the official website?

      -

      If you want to buy and download Doremisoft Swf Video Converter from the official website, you can follow these steps:

      -
        -
      1. Go to the official website of Doremisoft Swf Video Converter: http://www.doremisoft.net/swf-video-converter/
      2. -
      3. Click the "Buy Now" button and choose the license type you want.
      4. -
      5. Fill in your payment information and confirm your order.
      6. -
      7. After your payment is processed, you will receive an email with the download link and the registration code.
      8. -
      9. Click the download link and save the installation file on your computer.
      10. -
      11. Run the installation file and follow the instructions to install Doremisoft Swf Video Converter on your computer.
      12. -
      13. Launch Doremisoft Swf Video Converter and enter the registration code to activate it.
      14. -
      - -

      How to update Doremisoft Swf Video Converter to the latest version?

      -

      Doremisoft Swf Video Converter will check for updates automatically when you launch it. If there is a new version available, you will see a pop-up window that prompts you to update. You can also check for updates manually by clicking the "Help" menu and choosing "Check for Updates". If you want to update Doremisoft Swf Video Converter to the latest version, you can follow these steps:

      -
        -
      1. Click the "Update" button on the pop-up window or the "Check for Updates" menu.
      2. -
      3. The software will download the latest version and install it automatically.
      4. -
      5. Restart Doremisoft Swf Video Converter to enjoy the new features and improvements.
      6. -
      - -

      How to uninstall Doremisoft Swf Video Converter from your computer?

      -

      If you want to uninstall Doremisoft Swf Video Converter from your computer, you can follow these steps:

      -
        -
      1. Close Doremisoft Swf Video Converter if it is running.
      2. -
      3. Go to the "Start" menu and choose "Control Panel".
      4. -
      5. Click "Programs and Features" or "Uninstall a Program".
      6. -
      7. Find Doremisoft Swf Video Converter in the list of installed programs and click "Uninstall".
      8. -
      9. Follow the instructions to complete the uninstallation process.
      10. -
      -

      Conclusion

      -

      Doremisoft Swf Video Converter is a powerful and versatile software that can convert SWF files to various video formats with ease and efficiency. It has many features that make it suitable for different purposes and platforms. However, it is not a free software and using a crack to get it for free is not advisable. The best way to get Doremisoft Swf Video Converter is to buy it from the official website and enjoy its full functionality and benefits. If you are looking for a reliable and professional SWF converter, Doremisoft Swf Video Converter is worth buying.

      3cee63e6c2
      -
      -
      \ No newline at end of file diff --git a/spaces/contluForse/HuggingGPT/assets/Eset Nod32 Antivirus Smart Security 9.0.349.14 (x86x64) Keys.md b/spaces/contluForse/HuggingGPT/assets/Eset Nod32 Antivirus Smart Security 9.0.349.14 (x86x64) Keys.md deleted file mode 100644 index 63404fbc572c85fbd19e1017454f50e559311761..0000000000000000000000000000000000000000 --- a/spaces/contluForse/HuggingGPT/assets/Eset Nod32 Antivirus Smart Security 9.0.349.14 (x86x64) Keys.md +++ /dev/null @@ -1,16 +0,0 @@ -

      Eset Nod32 Antivirus Smart Security 9.0.349.14 (x86x64) Keys


            Download: https://ssurll.com/2uzxMs
            



            - -25 Feb 2018 - Processor architecture: i386 (Intel 80386), amd64 (x86-64). Free Disk: 4 ?... Antivirus ESET NOD32 / ESET Smart Security (x32x64) Rus + KEY. System requirements: Windows 7, 8, 10 32/64-bit.
            -Download ESET NOD32 Smart Security: Download ESET NOD32 Antivirus: Keys for NOD32 Smart Security and NOD32 antivirus.
            -Keys for ESET NOD32.
            -Keys for ESET NOD32 Smart Security.
            -Keys for ESET NOD32 Antivirus.
            -Keys for NOD32 Smart Security.
            -Keys for NOD32 Antivirus.
            -Keys for NOD32 for free.
            -ESET NOD32 for free.
            -ESET NOD32 keys.
            -Keys ESET Nod32 Antivirus. 8a78ff9644
            
      -
      -
      -

      diff --git a/spaces/cooelf/Multimodal-CoT/timm/utils/metrics.py b/spaces/cooelf/Multimodal-CoT/timm/utils/metrics.py deleted file mode 100644 index 8e0b1f9989a9dc95708a0dbb42e747f9a8565378..0000000000000000000000000000000000000000 --- a/spaces/cooelf/Multimodal-CoT/timm/utils/metrics.py +++ /dev/null @@ -1,32 +0,0 @@ -""" Eval metrics and related - -Hacked together by / Copyright 2020 Ross Wightman -""" - - -class AverageMeter: - """Computes and stores the average and current value""" - def __init__(self): - self.reset() - - def reset(self): - self.val = 0 - self.avg = 0 - self.sum = 0 - self.count = 0 - - def update(self, val, n=1): - self.val = val - self.sum += val * n - self.count += n - self.avg = self.sum / self.count - - -def accuracy(output, target, topk=(1,)): - """Computes the accuracy over the k top predictions for the specified values of k""" - maxk = max(topk) - batch_size = target.size(0) - _, pred = output.topk(maxk, 1, True, True) - pred = pred.t() - correct = pred.eq(target.reshape(1, -1).expand_as(pred)) - return [correct[:k].reshape(-1).float().sum(0) * 100. / batch_size for k in topk] diff --git a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/keypose/faster_rcnn_r50_fpn_coco.py b/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/keypose/faster_rcnn_r50_fpn_coco.py deleted file mode 100644 index a9ad9528b22163ae7ce1390375b69227fd6eafd9..0000000000000000000000000000000000000000 --- a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/keypose/faster_rcnn_r50_fpn_coco.py +++ /dev/null @@ -1,182 +0,0 @@ -checkpoint_config = dict(interval=1) -# yapf:disable -log_config = dict( - interval=50, - hooks=[ - dict(type='TextLoggerHook'), - # dict(type='TensorboardLoggerHook') - ]) -# yapf:enable -dist_params = dict(backend='nccl') -log_level = 'INFO' -load_from = None -resume_from = None -workflow = [('train', 1)] -# optimizer -optimizer = dict(type='SGD', lr=0.02, momentum=0.9, weight_decay=0.0001) -optimizer_config = dict(grad_clip=None) -# learning policy -lr_config = dict( - policy='step', - warmup='linear', - warmup_iters=500, - warmup_ratio=0.001, - step=[8, 11]) -total_epochs = 12 - -model = dict( - type='FasterRCNN', - pretrained='torchvision://resnet50', - backbone=dict( - type='ResNet', - depth=50, - num_stages=4, - out_indices=(0, 1, 2, 3), - frozen_stages=1, - norm_cfg=dict(type='BN', requires_grad=True), - norm_eval=True, - style='pytorch'), - neck=dict( - type='FPN', - in_channels=[256, 512, 1024, 2048], - out_channels=256, - num_outs=5), - rpn_head=dict( - type='RPNHead', - in_channels=256, - feat_channels=256, - anchor_generator=dict( - type='AnchorGenerator', - scales=[8], - ratios=[0.5, 1.0, 2.0], - strides=[4, 8, 16, 32, 64]), - bbox_coder=dict( - type='DeltaXYWHBBoxCoder', - target_means=[.0, .0, .0, .0], - target_stds=[1.0, 1.0, 1.0, 1.0]), - loss_cls=dict( - type='CrossEntropyLoss', use_sigmoid=True, loss_weight=1.0), - loss_bbox=dict(type='L1Loss', loss_weight=1.0)), - roi_head=dict( - type='StandardRoIHead', - bbox_roi_extractor=dict( - type='SingleRoIExtractor', - roi_layer=dict(type='RoIAlign', output_size=7, sampling_ratio=0), - out_channels=256, - featmap_strides=[4, 8, 16, 32]), - bbox_head=dict( - type='Shared2FCBBoxHead', - in_channels=256, - fc_out_channels=1024, - roi_feat_size=7, - num_classes=80, - bbox_coder=dict( - type='DeltaXYWHBBoxCoder', - target_means=[0., 0., 0., 0.], - target_stds=[0.1, 0.1, 0.2, 0.2]), - reg_class_agnostic=False, - loss_cls=dict( - type='CrossEntropyLoss', 
use_sigmoid=False, loss_weight=1.0), - loss_bbox=dict(type='L1Loss', loss_weight=1.0))), - # model training and testing settings - train_cfg=dict( - rpn=dict( - assigner=dict( - type='MaxIoUAssigner', - pos_iou_thr=0.7, - neg_iou_thr=0.3, - min_pos_iou=0.3, - match_low_quality=True, - ignore_iof_thr=-1), - sampler=dict( - type='RandomSampler', - num=256, - pos_fraction=0.5, - neg_pos_ub=-1, - add_gt_as_proposals=False), - allowed_border=-1, - pos_weight=-1, - debug=False), - rpn_proposal=dict( - nms_pre=2000, - max_per_img=1000, - nms=dict(type='nms', iou_threshold=0.7), - min_bbox_size=0), - rcnn=dict( - assigner=dict( - type='MaxIoUAssigner', - pos_iou_thr=0.5, - neg_iou_thr=0.5, - min_pos_iou=0.5, - match_low_quality=False, - ignore_iof_thr=-1), - sampler=dict( - type='RandomSampler', - num=512, - pos_fraction=0.25, - neg_pos_ub=-1, - add_gt_as_proposals=True), - pos_weight=-1, - debug=False)), - test_cfg=dict( - rpn=dict( - nms_pre=1000, - max_per_img=1000, - nms=dict(type='nms', iou_threshold=0.7), - min_bbox_size=0), - rcnn=dict( - score_thr=0.05, - nms=dict(type='nms', iou_threshold=0.5), - max_per_img=100) - # soft-nms is also supported for rcnn testing - # e.g., nms=dict(type='soft_nms', iou_threshold=0.5, min_score=0.05) - )) - -dataset_type = 'CocoDataset' -data_root = 'data/coco' -img_norm_cfg = dict( - mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_rgb=True) -train_pipeline = [ - dict(type='LoadImageFromFile'), - dict(type='LoadAnnotations', with_bbox=True), - dict(type='Resize', img_scale=(1333, 800), keep_ratio=True), - dict(type='RandomFlip', flip_ratio=0.5), - dict(type='Normalize', **img_norm_cfg), - dict(type='Pad', size_divisor=32), - dict(type='DefaultFormatBundle'), - dict(type='Collect', keys=['img', 'gt_bboxes', 'gt_labels']), -] -test_pipeline = [ - dict(type='LoadImageFromFile'), - dict( - type='MultiScaleFlipAug', - img_scale=(1333, 800), - flip=False, - transforms=[ - dict(type='Resize', keep_ratio=True), - dict(type='RandomFlip'), - dict(type='Normalize', **img_norm_cfg), - dict(type='Pad', size_divisor=32), - dict(type='DefaultFormatBundle'), - dict(type='Collect', keys=['img']), - ]) -] -data = dict( - samples_per_gpu=2, - workers_per_gpu=2, - train=dict( - type=dataset_type, - ann_file=f'{data_root}/annotations/instances_train2017.json', - img_prefix=f'{data_root}/train2017/', - pipeline=train_pipeline), - val=dict( - type=dataset_type, - ann_file=f'{data_root}/annotations/instances_val2017.json', - img_prefix=f'{data_root}/val2017/', - pipeline=test_pipeline), - test=dict( - type=dataset_type, - ann_file=f'{data_root}/annotations/instances_val2017.json', - img_prefix=f'{data_root}/val2017/', - pipeline=test_pipeline)) -evaluation = dict(interval=1, metric='bbox') diff --git a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/oneformer/detectron2/structures/rotated_boxes.py b/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/oneformer/detectron2/structures/rotated_boxes.py deleted file mode 100644 index aacfc730dfdf4b6bed5f8c861b720db7656f1cab..0000000000000000000000000000000000000000 --- a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/oneformer/detectron2/structures/rotated_boxes.py +++ /dev/null @@ -1,505 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. 
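# (Illustrative sketch, not part of the original file.) The RotatedBoxes class defined
# below stores each box as a row of (x_center, y_center, width, height, angle_in_degrees),
# as its docstring explains at length. A minimal, hypothetical construction would be:
#
#     import torch
#     boxes = RotatedBoxes(torch.tensor([[5.0, 3.0, 4.0, 2.0, 90.0]]))  # one box, rotated 90 deg CCW
#     areas = boxes.area()                                              # tensor([8.]) == width * height
#
# The numbers simply mirror the worked example in the class docstring.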
-import math -from typing import List, Tuple -import torch - -from annotator.oneformer.detectron2.layers.rotated_boxes import pairwise_iou_rotated - -from .boxes import Boxes - - -class RotatedBoxes(Boxes): - """ - This structure stores a list of rotated boxes as a Nx5 torch.Tensor. - It supports some common methods about boxes - (`area`, `clip`, `nonempty`, etc), - and also behaves like a Tensor - (support indexing, `to(device)`, `.device`, and iteration over all boxes) - """ - - def __init__(self, tensor: torch.Tensor): - """ - Args: - tensor (Tensor[float]): a Nx5 matrix. Each row is - (x_center, y_center, width, height, angle), - in which angle is represented in degrees. - While there's no strict range restriction for it, - the recommended principal range is between [-180, 180) degrees. - - Assume we have a horizontal box B = (x_center, y_center, width, height), - where width is along the x-axis and height is along the y-axis. - The rotated box B_rot (x_center, y_center, width, height, angle) - can be seen as: - - 1. When angle == 0: - B_rot == B - 2. When angle > 0: - B_rot is obtained by rotating B w.r.t its center by :math:`|angle|` degrees CCW; - 3. When angle < 0: - B_rot is obtained by rotating B w.r.t its center by :math:`|angle|` degrees CW. - - Mathematically, since the right-handed coordinate system for image space - is (y, x), where y is top->down and x is left->right, the 4 vertices of the - rotated rectangle :math:`(yr_i, xr_i)` (i = 1, 2, 3, 4) can be obtained from - the vertices of the horizontal rectangle :math:`(y_i, x_i)` (i = 1, 2, 3, 4) - in the following way (:math:`\\theta = angle*\\pi/180` is the angle in radians, - :math:`(y_c, x_c)` is the center of the rectangle): - - .. math:: - - yr_i = \\cos(\\theta) (y_i - y_c) - \\sin(\\theta) (x_i - x_c) + y_c, - - xr_i = \\sin(\\theta) (y_i - y_c) + \\cos(\\theta) (x_i - x_c) + x_c, - - which is the standard rigid-body rotation transformation. - - Intuitively, the angle is - (1) the rotation angle from y-axis in image space - to the height vector (top->down in the box's local coordinate system) - of the box in CCW, and - (2) the rotation angle from x-axis in image space - to the width vector (left->right in the box's local coordinate system) - of the box in CCW. - - More intuitively, consider the following horizontal box ABCD represented - in (x1, y1, x2, y2): (3, 2, 7, 4), - covering the [3, 7] x [2, 4] region of the continuous coordinate system - which looks like this: - - .. code:: none - - O--------> x - | - | A---B - | | | - | D---C - | - v y - - Note that each capital letter represents one 0-dimensional geometric point - instead of a 'square pixel' here. - - In the example above, using (x, y) to represent a point we have: - - .. math:: - - O = (0, 0), A = (3, 2), B = (7, 2), C = (7, 4), D = (3, 4) - - We name vector AB = vector DC as the width vector in box's local coordinate system, and - vector AD = vector BC as the height vector in box's local coordinate system. Initially, - when angle = 0 degree, they're aligned with the positive directions of x-axis and y-axis - in the image space, respectively. - - For better illustration, we denote the center of the box as E, - - .. code:: none - - O--------> x - | - | A---B - | | E | - | D---C - | - v y - - where the center E = ((3+7)/2, (2+4)/2) = (5, 3). - - Also, - - .. math:: - - width = |AB| = |CD| = 7 - 3 = 4, - height = |AD| = |BC| = 4 - 2 = 2. 
- - Therefore, the corresponding representation for the same shape in rotated box in - (x_center, y_center, width, height, angle) format is: - - (5, 3, 4, 2, 0), - - Now, let's consider (5, 3, 4, 2, 90), which is rotated by 90 degrees - CCW (counter-clockwise) by definition. It looks like this: - - .. code:: none - - O--------> x - | B-C - | | | - | |E| - | | | - | A-D - v y - - The center E is still located at the same point (5, 3), while the vertices - ABCD are rotated by 90 degrees CCW with regard to E: - A = (4, 5), B = (4, 1), C = (6, 1), D = (6, 5) - - Here, 90 degrees can be seen as the CCW angle to rotate from y-axis to - vector AD or vector BC (the top->down height vector in box's local coordinate system), - or the CCW angle to rotate from x-axis to vector AB or vector DC (the left->right - width vector in box's local coordinate system). - - .. math:: - - width = |AB| = |CD| = 5 - 1 = 4, - height = |AD| = |BC| = 6 - 4 = 2. - - Next, how about (5, 3, 4, 2, -90), which is rotated by 90 degrees CW (clockwise) - by definition? It looks like this: - - .. code:: none - - O--------> x - | D-A - | | | - | |E| - | | | - | C-B - v y - - The center E is still located at the same point (5, 3), while the vertices - ABCD are rotated by 90 degrees CW with regard to E: - A = (6, 1), B = (6, 5), C = (4, 5), D = (4, 1) - - .. math:: - - width = |AB| = |CD| = 5 - 1 = 4, - height = |AD| = |BC| = 6 - 4 = 2. - - This covers exactly the same region as (5, 3, 4, 2, 90) does, and their IoU - will be 1. However, these two will generate different RoI Pooling results and - should not be treated as an identical box. - - On the other hand, it's easy to see that (X, Y, W, H, A) is identical to - (X, Y, W, H, A+360N), for any integer N. For example (5, 3, 4, 2, 270) would be - identical to (5, 3, 4, 2, -90), because rotating the shape 270 degrees CCW is - equivalent to rotating the same shape 90 degrees CW. - - We could rotate further to get (5, 3, 4, 2, 180), or (5, 3, 4, 2, -180): - - .. code:: none - - O--------> x - | - | C---D - | | E | - | B---A - | - v y - - .. math:: - - A = (7, 4), B = (3, 4), C = (3, 2), D = (7, 2), - - width = |AB| = |CD| = 7 - 3 = 4, - height = |AD| = |BC| = 4 - 2 = 2. - - Finally, this is a very inaccurate (heavily quantized) illustration of - how (5, 3, 4, 2, 60) looks like in case anyone wonders: - - .. code:: none - - O--------> x - | B\ - | / C - | /E / - | A / - | `D - v y - - It's still a rectangle with center of (5, 3), width of 4 and height of 2, - but its angle (and thus orientation) is somewhere between - (5, 3, 4, 2, 0) and (5, 3, 4, 2, 90). - """ - device = tensor.device if isinstance(tensor, torch.Tensor) else torch.device("cpu") - tensor = torch.as_tensor(tensor, dtype=torch.float32, device=device) - if tensor.numel() == 0: - # Use reshape, so we don't end up creating a new tensor that does not depend on - # the inputs (and consequently confuses jit) - tensor = tensor.reshape((0, 5)).to(dtype=torch.float32, device=device) - assert tensor.dim() == 2 and tensor.size(-1) == 5, tensor.size() - - self.tensor = tensor - - def clone(self) -> "RotatedBoxes": - """ - Clone the RotatedBoxes. - - Returns: - RotatedBoxes - """ - return RotatedBoxes(self.tensor.clone()) - - def to(self, device: torch.device): - # Boxes are assumed float32 and does not support to(dtype) - return RotatedBoxes(self.tensor.to(device=device)) - - def area(self) -> torch.Tensor: - """ - Computes the area of all the boxes. - - Returns: - torch.Tensor: a vector with areas of each box. 
- """ - box = self.tensor - area = box[:, 2] * box[:, 3] - return area - - # Avoid in-place operations so that we can torchscript; NOTE: this creates a new tensor - def normalize_angles(self) -> None: - """ - Restrict angles to the range of [-180, 180) degrees - """ - angle_tensor = (self.tensor[:, 4] + 180.0) % 360.0 - 180.0 - self.tensor = torch.cat((self.tensor[:, :4], angle_tensor[:, None]), dim=1) - - def clip(self, box_size: Tuple[int, int], clip_angle_threshold: float = 1.0) -> None: - """ - Clip (in place) the boxes by limiting x coordinates to the range [0, width] - and y coordinates to the range [0, height]. - - For RRPN: - Only clip boxes that are almost horizontal with a tolerance of - clip_angle_threshold to maintain backward compatibility. - - Rotated boxes beyond this threshold are not clipped for two reasons: - - 1. There are potentially multiple ways to clip a rotated box to make it - fit within the image. - 2. It's tricky to make the entire rectangular box fit within the image - and still be able to not leave out pixels of interest. - - Therefore we rely on ops like RoIAlignRotated to safely handle this. - - Args: - box_size (height, width): The clipping box's size. - clip_angle_threshold: - Iff. abs(normalized(angle)) <= clip_angle_threshold (in degrees), - we do the clipping as horizontal boxes. - """ - h, w = box_size - - # normalize angles to be within (-180, 180] degrees - self.normalize_angles() - - idx = torch.where(torch.abs(self.tensor[:, 4]) <= clip_angle_threshold)[0] - - # convert to (x1, y1, x2, y2) - x1 = self.tensor[idx, 0] - self.tensor[idx, 2] / 2.0 - y1 = self.tensor[idx, 1] - self.tensor[idx, 3] / 2.0 - x2 = self.tensor[idx, 0] + self.tensor[idx, 2] / 2.0 - y2 = self.tensor[idx, 1] + self.tensor[idx, 3] / 2.0 - - # clip - x1.clamp_(min=0, max=w) - y1.clamp_(min=0, max=h) - x2.clamp_(min=0, max=w) - y2.clamp_(min=0, max=h) - - # convert back to (xc, yc, w, h) - self.tensor[idx, 0] = (x1 + x2) / 2.0 - self.tensor[idx, 1] = (y1 + y2) / 2.0 - # make sure widths and heights do not increase due to numerical errors - self.tensor[idx, 2] = torch.min(self.tensor[idx, 2], x2 - x1) - self.tensor[idx, 3] = torch.min(self.tensor[idx, 3], y2 - y1) - - def nonempty(self, threshold: float = 0.0) -> torch.Tensor: - """ - Find boxes that are non-empty. - A box is considered empty, if either of its side is no larger than threshold. - - Returns: - Tensor: a binary vector which represents - whether each box is empty (False) or non-empty (True). - """ - box = self.tensor - widths = box[:, 2] - heights = box[:, 3] - keep = (widths > threshold) & (heights > threshold) - return keep - - def __getitem__(self, item) -> "RotatedBoxes": - """ - Returns: - RotatedBoxes: Create a new :class:`RotatedBoxes` by indexing. - - The following usage are allowed: - - 1. `new_boxes = boxes[3]`: return a `RotatedBoxes` which contains only one box. - 2. `new_boxes = boxes[2:10]`: return a slice of boxes. - 3. `new_boxes = boxes[vector]`, where vector is a torch.ByteTensor - with `length = len(boxes)`. Nonzero elements in the vector will be selected. - - Note that the returned RotatedBoxes might share storage with this RotatedBoxes, - subject to Pytorch's indexing semantics. 
- """ - if isinstance(item, int): - return RotatedBoxes(self.tensor[item].view(1, -1)) - b = self.tensor[item] - assert b.dim() == 2, "Indexing on RotatedBoxes with {} failed to return a matrix!".format( - item - ) - return RotatedBoxes(b) - - def __len__(self) -> int: - return self.tensor.shape[0] - - def __repr__(self) -> str: - return "RotatedBoxes(" + str(self.tensor) + ")" - - def inside_box(self, box_size: Tuple[int, int], boundary_threshold: int = 0) -> torch.Tensor: - """ - Args: - box_size (height, width): Size of the reference box covering - [0, width] x [0, height] - boundary_threshold (int): Boxes that extend beyond the reference box - boundary by more than boundary_threshold are considered "outside". - - For RRPN, it might not be necessary to call this function since it's common - for rotated box to extend to outside of the image boundaries - (the clip function only clips the near-horizontal boxes) - - Returns: - a binary vector, indicating whether each box is inside the reference box. - """ - height, width = box_size - - cnt_x = self.tensor[..., 0] - cnt_y = self.tensor[..., 1] - half_w = self.tensor[..., 2] / 2.0 - half_h = self.tensor[..., 3] / 2.0 - a = self.tensor[..., 4] - c = torch.abs(torch.cos(a * math.pi / 180.0)) - s = torch.abs(torch.sin(a * math.pi / 180.0)) - # This basically computes the horizontal bounding rectangle of the rotated box - max_rect_dx = c * half_w + s * half_h - max_rect_dy = c * half_h + s * half_w - - inds_inside = ( - (cnt_x - max_rect_dx >= -boundary_threshold) - & (cnt_y - max_rect_dy >= -boundary_threshold) - & (cnt_x + max_rect_dx < width + boundary_threshold) - & (cnt_y + max_rect_dy < height + boundary_threshold) - ) - - return inds_inside - - def get_centers(self) -> torch.Tensor: - """ - Returns: - The box centers in a Nx2 array of (x, y). - """ - return self.tensor[:, :2] - - def scale(self, scale_x: float, scale_y: float) -> None: - """ - Scale the rotated box with horizontal and vertical scaling factors - Note: when scale_factor_x != scale_factor_y, - the rotated box does not preserve the rectangular shape when the angle - is not a multiple of 90 degrees under resize transformation. - Instead, the shape is a parallelogram (that has skew) - Here we make an approximation by fitting a rotated rectangle to the parallelogram. 
- """ - self.tensor[:, 0] *= scale_x - self.tensor[:, 1] *= scale_y - theta = self.tensor[:, 4] * math.pi / 180.0 - c = torch.cos(theta) - s = torch.sin(theta) - - # In image space, y is top->down and x is left->right - # Consider the local coordintate system for the rotated box, - # where the box center is located at (0, 0), and the four vertices ABCD are - # A(-w / 2, -h / 2), B(w / 2, -h / 2), C(w / 2, h / 2), D(-w / 2, h / 2) - # the midpoint of the left edge AD of the rotated box E is: - # E = (A+D)/2 = (-w / 2, 0) - # the midpoint of the top edge AB of the rotated box F is: - # F(0, -h / 2) - # To get the old coordinates in the global system, apply the rotation transformation - # (Note: the right-handed coordinate system for image space is yOx): - # (old_x, old_y) = (s * y + c * x, c * y - s * x) - # E(old) = (s * 0 + c * (-w/2), c * 0 - s * (-w/2)) = (-c * w / 2, s * w / 2) - # F(old) = (s * (-h / 2) + c * 0, c * (-h / 2) - s * 0) = (-s * h / 2, -c * h / 2) - # After applying the scaling factor (sfx, sfy): - # E(new) = (-sfx * c * w / 2, sfy * s * w / 2) - # F(new) = (-sfx * s * h / 2, -sfy * c * h / 2) - # The new width after scaling tranformation becomes: - - # w(new) = |E(new) - O| * 2 - # = sqrt[(sfx * c * w / 2)^2 + (sfy * s * w / 2)^2] * 2 - # = sqrt[(sfx * c)^2 + (sfy * s)^2] * w - # i.e., scale_factor_w = sqrt[(sfx * c)^2 + (sfy * s)^2] - # - # For example, - # when angle = 0 or 180, |c| = 1, s = 0, scale_factor_w == scale_factor_x; - # when |angle| = 90, c = 0, |s| = 1, scale_factor_w == scale_factor_y - self.tensor[:, 2] *= torch.sqrt((scale_x * c) ** 2 + (scale_y * s) ** 2) - - # h(new) = |F(new) - O| * 2 - # = sqrt[(sfx * s * h / 2)^2 + (sfy * c * h / 2)^2] * 2 - # = sqrt[(sfx * s)^2 + (sfy * c)^2] * h - # i.e., scale_factor_h = sqrt[(sfx * s)^2 + (sfy * c)^2] - # - # For example, - # when angle = 0 or 180, |c| = 1, s = 0, scale_factor_h == scale_factor_y; - # when |angle| = 90, c = 0, |s| = 1, scale_factor_h == scale_factor_x - self.tensor[:, 3] *= torch.sqrt((scale_x * s) ** 2 + (scale_y * c) ** 2) - - # The angle is the rotation angle from y-axis in image space to the height - # vector (top->down in the box's local coordinate system) of the box in CCW. - # - # angle(new) = angle_yOx(O - F(new)) - # = angle_yOx( (sfx * s * h / 2, sfy * c * h / 2) ) - # = atan2(sfx * s * h / 2, sfy * c * h / 2) - # = atan2(sfx * s, sfy * c) - # - # For example, - # when sfx == sfy, angle(new) == atan2(s, c) == angle(old) - self.tensor[:, 4] = torch.atan2(scale_x * s, scale_y * c) * 180 / math.pi - - @classmethod - def cat(cls, boxes_list: List["RotatedBoxes"]) -> "RotatedBoxes": - """ - Concatenates a list of RotatedBoxes into a single RotatedBoxes - - Arguments: - boxes_list (list[RotatedBoxes]) - - Returns: - RotatedBoxes: the concatenated RotatedBoxes - """ - assert isinstance(boxes_list, (list, tuple)) - if len(boxes_list) == 0: - return cls(torch.empty(0)) - assert all([isinstance(box, RotatedBoxes) for box in boxes_list]) - - # use torch.cat (v.s. layers.cat) so the returned boxes never share storage with input - cat_boxes = cls(torch.cat([b.tensor for b in boxes_list], dim=0)) - return cat_boxes - - @property - def device(self) -> torch.device: - return self.tensor.device - - @torch.jit.unused - def __iter__(self): - """ - Yield a box as a Tensor of shape (5,) at a time. 
- """ - yield from self.tensor - - -def pairwise_iou(boxes1: RotatedBoxes, boxes2: RotatedBoxes) -> None: - """ - Given two lists of rotated boxes of size N and M, - compute the IoU (intersection over union) - between **all** N x M pairs of boxes. - The box order must be (x_center, y_center, width, height, angle). - - Args: - boxes1, boxes2 (RotatedBoxes): - two `RotatedBoxes`. Contains N & M rotated boxes, respectively. - - Returns: - Tensor: IoU, sized [N,M]. - """ - - return pairwise_iou_rotated(boxes1.tensor, boxes2.tensor) diff --git a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/zoe/zoedepth/models/base_models/midas_repo/midas/midas_net.py b/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/zoe/zoedepth/models/base_models/midas_repo/midas/midas_net.py deleted file mode 100644 index 8a954977800b0a0f48807e80fa63041910e33c1f..0000000000000000000000000000000000000000 --- a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/zoe/zoedepth/models/base_models/midas_repo/midas/midas_net.py +++ /dev/null @@ -1,76 +0,0 @@ -"""MidashNet: Network for monocular depth estimation trained by mixing several datasets. -This file contains code that is adapted from -https://github.com/thomasjpfan/pytorch_refinenet/blob/master/pytorch_refinenet/refinenet/refinenet_4cascade.py -""" -import torch -import torch.nn as nn - -from .base_model import BaseModel -from .blocks import FeatureFusionBlock, Interpolate, _make_encoder - - -class MidasNet(BaseModel): - """Network for monocular depth estimation. - """ - - def __init__(self, path=None, features=256, non_negative=True): - """Init. - - Args: - path (str, optional): Path to saved model. Defaults to None. - features (int, optional): Number of features. Defaults to 256. - backbone (str, optional): Backbone network for encoder. Defaults to resnet50 - """ - print("Loading weights: ", path) - - super(MidasNet, self).__init__() - - use_pretrained = False if path is None else True - - self.pretrained, self.scratch = _make_encoder(backbone="resnext101_wsl", features=features, use_pretrained=use_pretrained) - - self.scratch.refinenet4 = FeatureFusionBlock(features) - self.scratch.refinenet3 = FeatureFusionBlock(features) - self.scratch.refinenet2 = FeatureFusionBlock(features) - self.scratch.refinenet1 = FeatureFusionBlock(features) - - self.scratch.output_conv = nn.Sequential( - nn.Conv2d(features, 128, kernel_size=3, stride=1, padding=1), - Interpolate(scale_factor=2, mode="bilinear"), - nn.Conv2d(128, 32, kernel_size=3, stride=1, padding=1), - nn.ReLU(True), - nn.Conv2d(32, 1, kernel_size=1, stride=1, padding=0), - nn.ReLU(True) if non_negative else nn.Identity(), - ) - - if path: - self.load(path) - - def forward(self, x): - """Forward pass. 
- - Args: - x (tensor): input data (image) - - Returns: - tensor: depth - """ - - layer_1 = self.pretrained.layer1(x) - layer_2 = self.pretrained.layer2(layer_1) - layer_3 = self.pretrained.layer3(layer_2) - layer_4 = self.pretrained.layer4(layer_3) - - layer_1_rn = self.scratch.layer1_rn(layer_1) - layer_2_rn = self.scratch.layer2_rn(layer_2) - layer_3_rn = self.scratch.layer3_rn(layer_3) - layer_4_rn = self.scratch.layer4_rn(layer_4) - - path_4 = self.scratch.refinenet4(layer_4_rn) - path_3 = self.scratch.refinenet3(path_4, layer_3_rn) - path_2 = self.scratch.refinenet2(path_3, layer_2_rn) - path_1 = self.scratch.refinenet1(path_2, layer_1_rn) - - out = self.scratch.output_conv(path_1) - - return torch.squeeze(out, dim=1) diff --git a/spaces/csuhan/opendet2/opendet2/modeling/roi_heads/roi_heads.py b/spaces/csuhan/opendet2/opendet2/modeling/roi_heads/roi_heads.py deleted file mode 100644 index 7d1310d1aac2692fbf1b04a5ffdc2dec394d1c13..0000000000000000000000000000000000000000 --- a/spaces/csuhan/opendet2/opendet2/modeling/roi_heads/roi_heads.py +++ /dev/null @@ -1,150 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -import logging -from typing import Dict, List - -import numpy as np -import torch -import torch.nn.functional as F -from detectron2.config import configurable -from detectron2.layers import ShapeSpec -from detectron2.modeling.poolers import ROIPooler -from detectron2.modeling.roi_heads.box_head import build_box_head -from detectron2.modeling.roi_heads.roi_heads import ( - ROI_HEADS_REGISTRY, StandardROIHeads, add_ground_truth_to_proposals) -from detectron2.structures import Boxes, Instances, pairwise_iou -from detectron2.utils.events import get_event_storage -from detectron2.utils.registry import Registry -from torch import nn - -from .fast_rcnn import build_roi_box_output_layers - -logger = logging.getLogger(__name__) - - -@ROI_HEADS_REGISTRY.register() -class OpenSetStandardROIHeads(StandardROIHeads): - - @torch.no_grad() - def label_and_sample_proposals(self, proposals: List[Instances], targets: List[Instances]) -> List[Instances]: - if self.proposal_append_gt: - proposals = add_ground_truth_to_proposals(targets, proposals) - - proposals_with_gt = [] - - num_fg_samples = [] - num_bg_samples = [] - for proposals_per_image, targets_per_image in zip(proposals, targets): - has_gt = len(targets_per_image) > 0 - match_quality_matrix = pairwise_iou( - targets_per_image.gt_boxes, proposals_per_image.proposal_boxes - ) - matched_idxs, matched_labels = self.proposal_matcher( - match_quality_matrix) - sampled_idxs, gt_classes = self._sample_proposals( - matched_idxs, matched_labels, targets_per_image.gt_classes - ) - - # Set target attributes of the sampled proposals: - proposals_per_image = proposals_per_image[sampled_idxs] - proposals_per_image.gt_classes = gt_classes - # NOTE: add iou of each proposal - ious, _ = match_quality_matrix.max(dim=0) - proposals_per_image.iou = ious[sampled_idxs] - - if has_gt: - sampled_targets = matched_idxs[sampled_idxs] - for (trg_name, trg_value) in targets_per_image.get_fields().items(): - if trg_name.startswith("gt_") and not proposals_per_image.has(trg_name): - proposals_per_image.set( - trg_name, trg_value[sampled_targets]) - - num_bg_samples.append( - (gt_classes == self.num_classes).sum().item()) - num_fg_samples.append(gt_classes.numel() - num_bg_samples[-1]) - proposals_with_gt.append(proposals_per_image) - - # Log the number of fg/bg samples that are selected for training ROI heads - storage = get_event_storage() - 
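# (Illustrative comment, not in the original file.) get_event_storage() returns detectron2's
# per-iteration EventStorage; the two put_scalar calls below record the mean number of sampled
# foreground/background proposals per image so they show up in the training metrics.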
storage.put_scalar("roi_head/num_fg_samples", np.mean(num_fg_samples)) - storage.put_scalar("roi_head/num_bg_samples", np.mean(num_bg_samples)) - - return proposals_with_gt - - @classmethod - def _init_box_head(cls, cfg, input_shape): - # fmt: off - in_features = cfg.MODEL.ROI_HEADS.IN_FEATURES - pooler_resolution = cfg.MODEL.ROI_BOX_HEAD.POOLER_RESOLUTION - pooler_scales = tuple(1.0 / input_shape[k].stride for k in in_features) - sampling_ratio = cfg.MODEL.ROI_BOX_HEAD.POOLER_SAMPLING_RATIO - pooler_type = cfg.MODEL.ROI_BOX_HEAD.POOLER_TYPE - # fmt: on - - # If StandardROIHeads is applied on multiple feature maps (as in FPN), - # then we share the same predictors and therefore the channel counts must be the same - in_channels = [input_shape[f].channels for f in in_features] - # Check all channel counts are equal - assert len(set(in_channels)) == 1, in_channels - in_channels = in_channels[0] - - box_pooler = ROIPooler( - output_size=pooler_resolution, - scales=pooler_scales, - sampling_ratio=sampling_ratio, - pooler_type=pooler_type, - ) - - box_head = build_box_head( - cfg, ShapeSpec(channels=in_channels, - height=pooler_resolution, width=pooler_resolution) - ) - # register output layers - box_predictor = build_roi_box_output_layers(cfg, box_head.output_shape) - return { - "box_in_features": in_features, - "box_pooler": box_pooler, - "box_head": box_head, - "box_predictor": box_predictor, - } - - -@ROI_HEADS_REGISTRY.register() -class DropoutStandardROIHeads(OpenSetStandardROIHeads): - @configurable - def __init__(self, *args, **kwargs,): - super().__init__(*args, **kwargs) - # num of sampling - self.num_sample = 30 - - def _forward_box(self, features: Dict[str, torch.Tensor], proposals: List[Instances], targets=None): - - features = [features[f] for f in self.box_in_features] - box_features = self.box_pooler( - features, [x.proposal_boxes for x in proposals]) - box_features = self.box_head(box_features) - - # if testing, we run multiple inference for dropout sampling - if self.training: - predictions = self.box_predictor(box_features) - else: - predictions = [self.box_predictor( - box_features, testing=True) for _ in range(self.num_sample)] - - del box_features - - if self.training: - losses = self.box_predictor.losses(predictions, proposals) - # proposals is modified in-place below, so losses must be computed first. 
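# (Illustrative comment, not in the original file.) When train_on_pred_boxes is enabled in the
# config, the block below replaces each image's proposal_boxes with the boxes regressed for the
# ground-truth classes, so downstream consumers see refined boxes; this in-place mutation is
# why the losses above had to be computed first.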
- if self.train_on_pred_boxes: - with torch.no_grad(): - pred_boxes = self.box_predictor.predict_boxes_for_gt_classes( - predictions, proposals - ) - for proposals_per_image, pred_boxes_per_image in zip(proposals, pred_boxes): - proposals_per_image.proposal_boxes = Boxes( - pred_boxes_per_image) - return losses - else: - pred_instances, _ = self.box_predictor.inference( - predictions, proposals) - return pred_instances diff --git a/spaces/cymic/Waifu_Diffusion_Webui/modules/scunet_model_arch.py b/spaces/cymic/Waifu_Diffusion_Webui/modules/scunet_model_arch.py deleted file mode 100644 index 972a2639a00a85a5417a96994e9c6f53bec2119c..0000000000000000000000000000000000000000 --- a/spaces/cymic/Waifu_Diffusion_Webui/modules/scunet_model_arch.py +++ /dev/null @@ -1,265 +0,0 @@ -# -*- coding: utf-8 -*- -import numpy as np -import torch -import torch.nn as nn -from einops import rearrange -from einops.layers.torch import Rearrange -from timm.models.layers import trunc_normal_, DropPath - - -class WMSA(nn.Module): - """ Self-attention module in Swin Transformer - """ - - def __init__(self, input_dim, output_dim, head_dim, window_size, type): - super(WMSA, self).__init__() - self.input_dim = input_dim - self.output_dim = output_dim - self.head_dim = head_dim - self.scale = self.head_dim ** -0.5 - self.n_heads = input_dim // head_dim - self.window_size = window_size - self.type = type - self.embedding_layer = nn.Linear(self.input_dim, 3 * self.input_dim, bias=True) - - self.relative_position_params = nn.Parameter( - torch.zeros((2 * window_size - 1) * (2 * window_size - 1), self.n_heads)) - - self.linear = nn.Linear(self.input_dim, self.output_dim) - - trunc_normal_(self.relative_position_params, std=.02) - self.relative_position_params = torch.nn.Parameter( - self.relative_position_params.view(2 * window_size - 1, 2 * window_size - 1, self.n_heads).transpose(1, - 2).transpose( - 0, 1)) - - def generate_mask(self, h, w, p, shift): - """ generating the mask of SW-MSA - Args: - shift: shift parameters in CyclicShift. - Returns: - attn_mask: should be (1 1 w p p), - """ - # supporting sqaure. - attn_mask = torch.zeros(h, w, p, p, p, p, dtype=torch.bool, device=self.relative_position_params.device) - if self.type == 'W': - return attn_mask - - s = p - shift - attn_mask[-1, :, :s, :, s:, :] = True - attn_mask[-1, :, s:, :, :s, :] = True - attn_mask[:, -1, :, :s, :, s:] = True - attn_mask[:, -1, :, s:, :, :s] = True - attn_mask = rearrange(attn_mask, 'w1 w2 p1 p2 p3 p4 -> 1 1 (w1 w2) (p1 p2) (p3 p4)') - return attn_mask - - def forward(self, x): - """ Forward pass of Window Multi-head Self-attention module. 
- Args: - x: input tensor with shape of [b h w c]; - attn_mask: attention mask, fill -inf where the value is True; - Returns: - output: tensor shape [b h w c] - """ - if self.type != 'W': x = torch.roll(x, shifts=(-(self.window_size // 2), -(self.window_size // 2)), dims=(1, 2)) - x = rearrange(x, 'b (w1 p1) (w2 p2) c -> b w1 w2 p1 p2 c', p1=self.window_size, p2=self.window_size) - h_windows = x.size(1) - w_windows = x.size(2) - # sqaure validation - # assert h_windows == w_windows - - x = rearrange(x, 'b w1 w2 p1 p2 c -> b (w1 w2) (p1 p2) c', p1=self.window_size, p2=self.window_size) - qkv = self.embedding_layer(x) - q, k, v = rearrange(qkv, 'b nw np (threeh c) -> threeh b nw np c', c=self.head_dim).chunk(3, dim=0) - sim = torch.einsum('hbwpc,hbwqc->hbwpq', q, k) * self.scale - # Adding learnable relative embedding - sim = sim + rearrange(self.relative_embedding(), 'h p q -> h 1 1 p q') - # Using Attn Mask to distinguish different subwindows. - if self.type != 'W': - attn_mask = self.generate_mask(h_windows, w_windows, self.window_size, shift=self.window_size // 2) - sim = sim.masked_fill_(attn_mask, float("-inf")) - - probs = nn.functional.softmax(sim, dim=-1) - output = torch.einsum('hbwij,hbwjc->hbwic', probs, v) - output = rearrange(output, 'h b w p c -> b w p (h c)') - output = self.linear(output) - output = rearrange(output, 'b (w1 w2) (p1 p2) c -> b (w1 p1) (w2 p2) c', w1=h_windows, p1=self.window_size) - - if self.type != 'W': output = torch.roll(output, shifts=(self.window_size // 2, self.window_size // 2), - dims=(1, 2)) - return output - - def relative_embedding(self): - cord = torch.tensor(np.array([[i, j] for i in range(self.window_size) for j in range(self.window_size)])) - relation = cord[:, None, :] - cord[None, :, :] + self.window_size - 1 - # negative is allowed - return self.relative_position_params[:, relation[:, :, 0].long(), relation[:, :, 1].long()] - - -class Block(nn.Module): - def __init__(self, input_dim, output_dim, head_dim, window_size, drop_path, type='W', input_resolution=None): - """ SwinTransformer Block - """ - super(Block, self).__init__() - self.input_dim = input_dim - self.output_dim = output_dim - assert type in ['W', 'SW'] - self.type = type - if input_resolution <= window_size: - self.type = 'W' - - self.ln1 = nn.LayerNorm(input_dim) - self.msa = WMSA(input_dim, input_dim, head_dim, window_size, self.type) - self.drop_path = DropPath(drop_path) if drop_path > 0. 
else nn.Identity() - self.ln2 = nn.LayerNorm(input_dim) - self.mlp = nn.Sequential( - nn.Linear(input_dim, 4 * input_dim), - nn.GELU(), - nn.Linear(4 * input_dim, output_dim), - ) - - def forward(self, x): - x = x + self.drop_path(self.msa(self.ln1(x))) - x = x + self.drop_path(self.mlp(self.ln2(x))) - return x - - -class ConvTransBlock(nn.Module): - def __init__(self, conv_dim, trans_dim, head_dim, window_size, drop_path, type='W', input_resolution=None): - """ SwinTransformer and Conv Block - """ - super(ConvTransBlock, self).__init__() - self.conv_dim = conv_dim - self.trans_dim = trans_dim - self.head_dim = head_dim - self.window_size = window_size - self.drop_path = drop_path - self.type = type - self.input_resolution = input_resolution - - assert self.type in ['W', 'SW'] - if self.input_resolution <= self.window_size: - self.type = 'W' - - self.trans_block = Block(self.trans_dim, self.trans_dim, self.head_dim, self.window_size, self.drop_path, - self.type, self.input_resolution) - self.conv1_1 = nn.Conv2d(self.conv_dim + self.trans_dim, self.conv_dim + self.trans_dim, 1, 1, 0, bias=True) - self.conv1_2 = nn.Conv2d(self.conv_dim + self.trans_dim, self.conv_dim + self.trans_dim, 1, 1, 0, bias=True) - - self.conv_block = nn.Sequential( - nn.Conv2d(self.conv_dim, self.conv_dim, 3, 1, 1, bias=False), - nn.ReLU(True), - nn.Conv2d(self.conv_dim, self.conv_dim, 3, 1, 1, bias=False) - ) - - def forward(self, x): - conv_x, trans_x = torch.split(self.conv1_1(x), (self.conv_dim, self.trans_dim), dim=1) - conv_x = self.conv_block(conv_x) + conv_x - trans_x = Rearrange('b c h w -> b h w c')(trans_x) - trans_x = self.trans_block(trans_x) - trans_x = Rearrange('b h w c -> b c h w')(trans_x) - res = self.conv1_2(torch.cat((conv_x, trans_x), dim=1)) - x = x + res - - return x - - -class SCUNet(nn.Module): - # def __init__(self, in_nc=3, config=[2, 2, 2, 2, 2, 2, 2], dim=64, drop_path_rate=0.0, input_resolution=256): - def __init__(self, in_nc=3, config=None, dim=64, drop_path_rate=0.0, input_resolution=256): - super(SCUNet, self).__init__() - if config is None: - config = [2, 2, 2, 2, 2, 2, 2] - self.config = config - self.dim = dim - self.head_dim = 32 - self.window_size = 8 - - # drop path rate for each layer - dpr = [x.item() for x in torch.linspace(0, drop_path_rate, sum(config))] - - self.m_head = [nn.Conv2d(in_nc, dim, 3, 1, 1, bias=False)] - - begin = 0 - self.m_down1 = [ConvTransBlock(dim // 2, dim // 2, self.head_dim, self.window_size, dpr[i + begin], - 'W' if not i % 2 else 'SW', input_resolution) - for i in range(config[0])] + \ - [nn.Conv2d(dim, 2 * dim, 2, 2, 0, bias=False)] - - begin += config[0] - self.m_down2 = [ConvTransBlock(dim, dim, self.head_dim, self.window_size, dpr[i + begin], - 'W' if not i % 2 else 'SW', input_resolution // 2) - for i in range(config[1])] + \ - [nn.Conv2d(2 * dim, 4 * dim, 2, 2, 0, bias=False)] - - begin += config[1] - self.m_down3 = [ConvTransBlock(2 * dim, 2 * dim, self.head_dim, self.window_size, dpr[i + begin], - 'W' if not i % 2 else 'SW', input_resolution // 4) - for i in range(config[2])] + \ - [nn.Conv2d(4 * dim, 8 * dim, 2, 2, 0, bias=False)] - - begin += config[2] - self.m_body = [ConvTransBlock(4 * dim, 4 * dim, self.head_dim, self.window_size, dpr[i + begin], - 'W' if not i % 2 else 'SW', input_resolution // 8) - for i in range(config[3])] - - begin += config[3] - self.m_up3 = [nn.ConvTranspose2d(8 * dim, 4 * dim, 2, 2, 0, bias=False), ] + \ - [ConvTransBlock(2 * dim, 2 * dim, self.head_dim, self.window_size, dpr[i + begin], - 'W' if not i % 2 
else 'SW', input_resolution // 4) - for i in range(config[4])] - - begin += config[4] - self.m_up2 = [nn.ConvTranspose2d(4 * dim, 2 * dim, 2, 2, 0, bias=False), ] + \ - [ConvTransBlock(dim, dim, self.head_dim, self.window_size, dpr[i + begin], - 'W' if not i % 2 else 'SW', input_resolution // 2) - for i in range(config[5])] - - begin += config[5] - self.m_up1 = [nn.ConvTranspose2d(2 * dim, dim, 2, 2, 0, bias=False), ] + \ - [ConvTransBlock(dim // 2, dim // 2, self.head_dim, self.window_size, dpr[i + begin], - 'W' if not i % 2 else 'SW', input_resolution) - for i in range(config[6])] - - self.m_tail = [nn.Conv2d(dim, in_nc, 3, 1, 1, bias=False)] - - self.m_head = nn.Sequential(*self.m_head) - self.m_down1 = nn.Sequential(*self.m_down1) - self.m_down2 = nn.Sequential(*self.m_down2) - self.m_down3 = nn.Sequential(*self.m_down3) - self.m_body = nn.Sequential(*self.m_body) - self.m_up3 = nn.Sequential(*self.m_up3) - self.m_up2 = nn.Sequential(*self.m_up2) - self.m_up1 = nn.Sequential(*self.m_up1) - self.m_tail = nn.Sequential(*self.m_tail) - # self.apply(self._init_weights) - - def forward(self, x0): - - h, w = x0.size()[-2:] - paddingBottom = int(np.ceil(h / 64) * 64 - h) - paddingRight = int(np.ceil(w / 64) * 64 - w) - x0 = nn.ReplicationPad2d((0, paddingRight, 0, paddingBottom))(x0) - - x1 = self.m_head(x0) - x2 = self.m_down1(x1) - x3 = self.m_down2(x2) - x4 = self.m_down3(x3) - x = self.m_body(x4) - x = self.m_up3(x + x4) - x = self.m_up2(x + x3) - x = self.m_up1(x + x2) - x = self.m_tail(x + x1) - - x = x[..., :h, :w] - - return x - - def _init_weights(self, m): - if isinstance(m, nn.Linear): - trunc_normal_(m.weight, std=.02) - if m.bias is not None: - nn.init.constant_(m.bias, 0) - elif isinstance(m, nn.LayerNorm): - nn.init.constant_(m.bias, 0) - nn.init.constant_(m.weight, 1.0) \ No newline at end of file diff --git a/spaces/dawood/Kanye-AI/modules/mel_processing.py b/spaces/dawood/Kanye-AI/modules/mel_processing.py deleted file mode 100644 index 99c5b35beb83f3b288af0fac5b49ebf2c69f062c..0000000000000000000000000000000000000000 --- a/spaces/dawood/Kanye-AI/modules/mel_processing.py +++ /dev/null @@ -1,112 +0,0 @@ -import math -import os -import random -import torch -from torch import nn -import torch.nn.functional as F -import torch.utils.data -import numpy as np -import librosa -import librosa.util as librosa_util -from librosa.util import normalize, pad_center, tiny -from scipy.signal import get_window -from scipy.io.wavfile import read -from librosa.filters import mel as librosa_mel_fn - -MAX_WAV_VALUE = 32768.0 - - -def dynamic_range_compression_torch(x, C=1, clip_val=1e-5): - """ - PARAMS - ------ - C: compression factor - """ - return torch.log(torch.clamp(x, min=clip_val) * C) - - -def dynamic_range_decompression_torch(x, C=1): - """ - PARAMS - ------ - C: compression factor used to compress - """ - return torch.exp(x) / C - - -def spectral_normalize_torch(magnitudes): - output = dynamic_range_compression_torch(magnitudes) - return output - - -def spectral_de_normalize_torch(magnitudes): - output = dynamic_range_decompression_torch(magnitudes) - return output - - -mel_basis = {} -hann_window = {} - - -def spectrogram_torch(y, n_fft, sampling_rate, hop_size, win_size, center=False): - if torch.min(y) < -1.: - print('min value is ', torch.min(y)) - if torch.max(y) > 1.: - print('max value is ', torch.max(y)) - - global hann_window - dtype_device = str(y.dtype) + '_' + str(y.device) - wnsize_dtype_device = str(win_size) + '_' + dtype_device - if wnsize_dtype_device not in 
hann_window: - hann_window[wnsize_dtype_device] = torch.hann_window(win_size).to(dtype=y.dtype, device=y.device) - - y = torch.nn.functional.pad(y.unsqueeze(1), (int((n_fft-hop_size)/2), int((n_fft-hop_size)/2)), mode='reflect') - y = y.squeeze(1) - - spec = torch.stft(y, n_fft, hop_length=hop_size, win_length=win_size, window=hann_window[wnsize_dtype_device], - center=center, pad_mode='reflect', normalized=False, onesided=True, return_complex=False) - - spec = torch.sqrt(spec.pow(2).sum(-1) + 1e-6) - return spec - - -def spec_to_mel_torch(spec, n_fft, num_mels, sampling_rate, fmin, fmax): - global mel_basis - dtype_device = str(spec.dtype) + '_' + str(spec.device) - fmax_dtype_device = str(fmax) + '_' + dtype_device - if fmax_dtype_device not in mel_basis: - mel = librosa_mel_fn(sr=sampling_rate, n_fft=n_fft, n_mels=num_mels, fmin=fmin, fmax=fmax) - mel_basis[fmax_dtype_device] = torch.from_numpy(mel).to(dtype=spec.dtype, device=spec.device) - spec = torch.matmul(mel_basis[fmax_dtype_device], spec) - spec = spectral_normalize_torch(spec) - return spec - - -def mel_spectrogram_torch(y, n_fft, num_mels, sampling_rate, hop_size, win_size, fmin, fmax, center=False): - if torch.min(y) < -1.: - print('min value is ', torch.min(y)) - if torch.max(y) > 1.: - print('max value is ', torch.max(y)) - - global mel_basis, hann_window - dtype_device = str(y.dtype) + '_' + str(y.device) - fmax_dtype_device = str(fmax) + '_' + dtype_device - wnsize_dtype_device = str(win_size) + '_' + dtype_device - if fmax_dtype_device not in mel_basis: - mel = librosa_mel_fn(sr=sampling_rate, n_fft=n_fft, n_mels=num_mels, fmin=fmin, fmax=fmax) - mel_basis[fmax_dtype_device] = torch.from_numpy(mel).to(dtype=y.dtype, device=y.device) - if wnsize_dtype_device not in hann_window: - hann_window[wnsize_dtype_device] = torch.hann_window(win_size).to(dtype=y.dtype, device=y.device) - - y = torch.nn.functional.pad(y.unsqueeze(1), (int((n_fft-hop_size)/2), int((n_fft-hop_size)/2)), mode='reflect') - y = y.squeeze(1) - - spec = torch.stft(y, n_fft, hop_length=hop_size, win_length=win_size, window=hann_window[wnsize_dtype_device], - center=center, pad_mode='reflect', normalized=False, onesided=True, return_complex=False) - - spec = torch.sqrt(spec.pow(2).sum(-1) + 1e-6) - - spec = torch.matmul(mel_basis[fmax_dtype_device], spec) - spec = spectral_normalize_torch(spec) - - return spec diff --git a/spaces/daydayup1225/Chat-web/app.py b/spaces/daydayup1225/Chat-web/app.py deleted file mode 100644 index 0dbb48d21655850c487d8cb914e1c14e34319346..0000000000000000000000000000000000000000 --- a/spaces/daydayup1225/Chat-web/app.py +++ /dev/null @@ -1,245 +0,0 @@ -# -*- coding:utf-8 -*- -import os -import logging -import sys -import gradio as gr -import torch -import gc -from app_modules.utils import * -from app_modules.presets import * -from app_modules.overwrites import * - -# import os -# os.environ["CUDA_VISIBLE_DEVICES"] = "0" - - -logging.basicConfig( - level=logging.DEBUG, - format="%(asctime)s [%(levelname)s] [%(filename)s:%(lineno)d] %(message)s", -) - -# base_model = "decapoda-research/llama-7b-hf" -# adapter_model = "project-baize/baize-lora-7B" - -# base_model = "facebook/opt-1.3b" -# adapter_model = "msuhail97/opt-1.3b-lora" -# tokenizer, model, device = load_tokenizer_and_model(base_model, adapter_model) - -finetune_model_path = "facebook/opt-350m" -# finetune_model_path = "./ft_models/ft-opt-1.3b" -tokenizer, model, device = load_finetune_tokenizer_and_model(finetune_model_path) - - -total_count = 0 -def predict(text, - 
chatbot, - history, - top_p, - temperature, - max_length_tokens, - max_context_length_tokens,): - if text=="": - yield chatbot,history,"Empty context." - return - try: - model - except: - yield [[text,"No Model Found"]],[],"No Model Found" - return - - inputs = generate_prompt_with_history(text,history,tokenizer,max_length=max_context_length_tokens) - if inputs is None: - yield chatbot,history,"Input too long." - return - else: - prompt,inputs=inputs - begin_length = len(prompt) - input_ids = inputs["input_ids"][:,-max_context_length_tokens:].to(device) - torch.cuda.empty_cache() - global total_count - total_count += 1 - print(total_count) - if total_count % 50 == 0 : - os.system("nvidia-smi") - with torch.no_grad(): - for x in greedy_search(input_ids,model,tokenizer,stop_words=["[|Human|]", "[|AI|]"],max_length=max_length_tokens,temperature=temperature,top_p=top_p): - if is_stop_word_or_prefix(x,["[|Human|]", "[|AI|]"]) is False: - if "[|Human|]" in x: - x = x[:x.index("[|Human|]")].strip() - if "[|AI|]" in x: - x = x[:x.index("[|AI|]")].strip() - x = x.strip() - a, b= [[y[0],convert_to_markdown(y[1])] for y in history]+[[text, convert_to_markdown(x)]],history + [[text,x]] - yield a, b, "Generating..." - if shared_state.interrupted: - shared_state.recover() - try: - yield a, b, "Stop: Success" - return - except: - pass - del input_ids - gc.collect() - torch.cuda.empty_cache() - #print(text) - #print(x) - #print("="*80) - try: - yield a,b,"Generate: Success" - except: - pass - -def retry( - text, - chatbot, - history, - top_p, - temperature, - max_length_tokens, - max_context_length_tokens, - ): - logging.info("Retry...") - if len(history) == 0: - yield chatbot, history, f"Empty context" - return - chatbot.pop() - inputs = history.pop()[0] - for x in predict(inputs,chatbot,history,top_p,temperature,max_length_tokens,max_context_length_tokens): - yield x - - -gr.Chatbot.postprocess = postprocess - -with open("assets/custom.css", "r", encoding="utf-8") as f: - customCSS = f.read() - -with gr.Blocks(css=customCSS, theme=small_and_beautiful_theme) as demo: - history = gr.State([]) - user_question = gr.State("") - with gr.Row(): - gr.HTML(title) - status_display = gr.Markdown("Success", elem_id="status_display") - - with gr.Row(scale=1).style(equal_height=True): - with gr.Column(scale=5): - with gr.Row(scale=1): - chatbot = gr.Chatbot(elem_id="chuanhu_chatbot").style(height="100%") - with gr.Row(scale=1): - with gr.Column(scale=12): - user_input = gr.Textbox( - show_label=False, placeholder="Enter text" - ).style(container=False) - with gr.Column(min_width=70, scale=1): - submitBtn = gr.Button("Send") - with gr.Column(min_width=70, scale=1): - cancelBtn = gr.Button("Stop") - with gr.Row(scale=1): - emptyBtn = gr.Button( - "🧹 New Conversation", - ) - retryBtn = gr.Button("🔄 Regenerate") - delLastBtn = gr.Button("🗑️ Remove Last Turn") - with gr.Column(): - with gr.Column(min_width=50, scale=1): - with gr.Tab(label="Parameter Setting"): - gr.Markdown("# Parameters") - top_p = gr.Slider( - minimum=-0, - maximum=1.0, - value=0.95, - step=0.05, - interactive=True, - label="Top-p", - ) - temperature = gr.Slider( - minimum=0.1, - maximum=2.0, - value=1, - step=0.1, - interactive=True, - label="Temperature", - ) - max_length_tokens = gr.Slider( - minimum=0, - maximum=512, - value=256, - step=8, - interactive=True, - label="Max Generation Tokens", - ) - max_context_length_tokens = gr.Slider( - minimum=0, - maximum=4096, - value=2048, - step=128, - interactive=True, - label="Max History Tokens", - ) - # 
gr.Markdown(description) - - predict_args = dict( - fn=predict, - inputs=[ - user_question, - chatbot, - history, - top_p, - temperature, - max_length_tokens, - max_context_length_tokens, - ], - outputs=[chatbot, history, status_display], - show_progress=True, - ) - retry_args = dict( - fn=retry, - inputs=[ - user_input, - chatbot, - history, - top_p, - temperature, - max_length_tokens, - max_context_length_tokens, - ], - outputs=[chatbot, history, status_display], - show_progress=True, - ) - - reset_args = dict( - fn=reset_textbox, inputs=[], outputs=[user_input, status_display] - ) - - # Chatbot - transfer_input_args = dict( - fn=transfer_input, inputs=[user_input], outputs=[user_question, user_input, submitBtn], show_progress=True - ) - - predict_event1 = user_input.submit(**transfer_input_args).then(**predict_args) - - predict_event2 = submitBtn.click(**transfer_input_args).then(**predict_args) - - emptyBtn.click( - reset_state, - outputs=[chatbot, history, status_display], - show_progress=True, - ) - emptyBtn.click(**reset_args) - - predict_event3 = retryBtn.click(**retry_args) - - delLastBtn.click( - delete_last_conversation, - [chatbot, history], - [chatbot, history, status_display], - show_progress=True, - ) - cancelBtn.click( - cancel_outputing, [], [status_display], - cancels=[ - predict_event1,predict_event2,predict_event3 - ] - ) -# demo.title = "Baize" - -demo.queue(concurrency_count=1).launch() \ No newline at end of file diff --git a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/PIL/SgiImagePlugin.py b/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/PIL/SgiImagePlugin.py deleted file mode 100644 index 3662ffd1571821e196d07330fdeecf4b0e5c2efa..0000000000000000000000000000000000000000 --- a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/PIL/SgiImagePlugin.py +++ /dev/null @@ -1,231 +0,0 @@ -# -# The Python Imaging Library. -# $Id$ -# -# SGI image file handling -# -# See "The SGI Image File Format (Draft version 0.97)", Paul Haeberli. -# -# -# -# History: -# 2017-22-07 mb Add RLE decompression -# 2016-16-10 mb Add save method without compression -# 1995-09-10 fl Created -# -# Copyright (c) 2016 by Mickael Bonfill. -# Copyright (c) 2008 by Karsten Hiddemann. -# Copyright (c) 1997 by Secret Labs AB. -# Copyright (c) 1995 by Fredrik Lundh. -# -# See the README file for information on usage and redistribution. -# - - -import os -import struct - -from . import Image, ImageFile -from ._binary import i16be as i16 -from ._binary import o8 - - -def _accept(prefix): - return len(prefix) >= 2 and i16(prefix) == 474 - - -MODES = { - (1, 1, 1): "L", - (1, 2, 1): "L", - (2, 1, 1): "L;16B", - (2, 2, 1): "L;16B", - (1, 3, 3): "RGB", - (2, 3, 3): "RGB;16B", - (1, 3, 4): "RGBA", - (2, 3, 4): "RGBA;16B", -} - - -## -# Image plugin for SGI images. 
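# (Illustrative sketch, not part of the original file.) Once the register_open/register_save
# calls at the bottom of this module run, SGI files flow through the ordinary Pillow entry
# points; a minimal, hypothetical round trip would be:
#
#     from PIL import Image
#     im = Image.open("example.rgb")       # hypothetical SGI input path
#     im.convert("RGB").save("copy.sgi")   # re-encoded via the _save handler defined below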
-class SgiImageFile(ImageFile.ImageFile): - format = "SGI" - format_description = "SGI Image File Format" - - def _open(self): - # HEAD - headlen = 512 - s = self.fp.read(headlen) - - if not _accept(s): - msg = "Not an SGI image file" - raise ValueError(msg) - - # compression : verbatim or RLE - compression = s[2] - - # bpc : 1 or 2 bytes (8bits or 16bits) - bpc = s[3] - - # dimension : 1, 2 or 3 (depending on xsize, ysize and zsize) - dimension = i16(s, 4) - - # xsize : width - xsize = i16(s, 6) - - # ysize : height - ysize = i16(s, 8) - - # zsize : channels count - zsize = i16(s, 10) - - # layout - layout = bpc, dimension, zsize - - # determine mode from bits/zsize - rawmode = "" - try: - rawmode = MODES[layout] - except KeyError: - pass - - if rawmode == "": - msg = "Unsupported SGI image mode" - raise ValueError(msg) - - self._size = xsize, ysize - self.mode = rawmode.split(";")[0] - if self.mode == "RGB": - self.custom_mimetype = "image/rgb" - - # orientation -1 : scanlines begins at the bottom-left corner - orientation = -1 - - # decoder info - if compression == 0: - pagesize = xsize * ysize * bpc - if bpc == 2: - self.tile = [ - ("SGI16", (0, 0) + self.size, headlen, (self.mode, 0, orientation)) - ] - else: - self.tile = [] - offset = headlen - for layer in self.mode: - self.tile.append( - ("raw", (0, 0) + self.size, offset, (layer, 0, orientation)) - ) - offset += pagesize - elif compression == 1: - self.tile = [ - ("sgi_rle", (0, 0) + self.size, headlen, (rawmode, orientation, bpc)) - ] - - -def _save(im, fp, filename): - if im.mode != "RGB" and im.mode != "RGBA" and im.mode != "L": - msg = "Unsupported SGI image mode" - raise ValueError(msg) - - # Get the keyword arguments - info = im.encoderinfo - - # Byte-per-pixel precision, 1 = 8bits per pixel - bpc = info.get("bpc", 1) - - if bpc not in (1, 2): - msg = "Unsupported number of bytes per pixel" - raise ValueError(msg) - - # Flip the image, since the origin of SGI file is the bottom-left corner - orientation = -1 - # Define the file as SGI File Format - magic_number = 474 - # Run-Length Encoding Compression - Unsupported at this time - rle = 0 - - # Number of dimensions (x,y,z) - dim = 3 - # X Dimension = width / Y Dimension = height - x, y = im.size - if im.mode == "L" and y == 1: - dim = 1 - elif im.mode == "L": - dim = 2 - # Z Dimension: Number of channels - z = len(im.mode) - - if dim == 1 or dim == 2: - z = 1 - - # assert we've got the right number of bands. 
- if len(im.getbands()) != z: - msg = f"incorrect number of bands in SGI write: {z} vs {len(im.getbands())}" - raise ValueError(msg) - - # Minimum Byte value - pinmin = 0 - # Maximum Byte value (255 = 8bits per pixel) - pinmax = 255 - # Image name (79 characters max, truncated below in write) - img_name = os.path.splitext(os.path.basename(filename))[0] - img_name = img_name.encode("ascii", "ignore") - # Standard representation of pixel in the file - colormap = 0 - fp.write(struct.pack(">h", magic_number)) - fp.write(o8(rle)) - fp.write(o8(bpc)) - fp.write(struct.pack(">H", dim)) - fp.write(struct.pack(">H", x)) - fp.write(struct.pack(">H", y)) - fp.write(struct.pack(">H", z)) - fp.write(struct.pack(">l", pinmin)) - fp.write(struct.pack(">l", pinmax)) - fp.write(struct.pack("4s", b"")) # dummy - fp.write(struct.pack("79s", img_name)) # truncates to 79 chars - fp.write(struct.pack("s", b"")) # force null byte after img_name - fp.write(struct.pack(">l", colormap)) - fp.write(struct.pack("404s", b"")) # dummy - - rawmode = "L" - if bpc == 2: - rawmode = "L;16B" - - for channel in im.split(): - fp.write(channel.tobytes("raw", rawmode, 0, orientation)) - - if hasattr(fp, "flush"): - fp.flush() - - -class SGI16Decoder(ImageFile.PyDecoder): - _pulls_fd = True - - def decode(self, buffer): - rawmode, stride, orientation = self.args - pagesize = self.state.xsize * self.state.ysize - zsize = len(self.mode) - self.fd.seek(512) - - for band in range(zsize): - channel = Image.new("L", (self.state.xsize, self.state.ysize)) - channel.frombytes( - self.fd.read(2 * pagesize), "raw", "L;16B", stride, orientation - ) - self.im.putband(channel.im, band) - - return -1, 0 - - -# -# registry - - -Image.register_decoder("SGI16", SGI16Decoder) -Image.register_open(SgiImageFile.format, SgiImageFile, _accept) -Image.register_save(SgiImageFile.format, _save) -Image.register_mime(SgiImageFile.format, "image/sgi") - -Image.register_extensions(SgiImageFile.format, [".bw", ".rgb", ".rgba", ".sgi"]) - -# End of file diff --git a/spaces/dcq/freegpt-webui/client/css/style.css b/spaces/dcq/freegpt-webui/client/css/style.css deleted file mode 100644 index 3c038cdc422e222b3e54b87fb7596ddf5bb0edca..0000000000000000000000000000000000000000 --- a/spaces/dcq/freegpt-webui/client/css/style.css +++ /dev/null @@ -1,17 +0,0 @@ -@import "./global.css"; -@import "./hljs.css"; -@import "./main.css"; -@import "./sidebar.css"; -@import "./conversation.css"; -@import "./message.css"; -@import "./stop-generating.css"; -@import "./typing.css"; -@import "./checkbox.css"; -@import "./label.css"; -@import "./button.css"; -@import "./buttons.css"; -@import "./dropdown.css"; -@import "./field.css"; -@import "./select.css"; -@import "./options.css"; -@import "./theme-toggler.css"; diff --git a/spaces/declare-lab/tango/diffusers/src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_controlnet.py b/spaces/declare-lab/tango/diffusers/src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_controlnet.py deleted file mode 100644 index b8272a4ef3d6cb68ac5e973cab6afb96a92e8923..0000000000000000000000000000000000000000 --- a/spaces/declare-lab/tango/diffusers/src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_controlnet.py +++ /dev/null @@ -1,1003 +0,0 @@ -# Copyright 2023 The HuggingFace Team. All rights reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. 
-# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - - -import inspect -import os -from typing import Any, Callable, Dict, List, Optional, Tuple, Union - -import numpy as np -import PIL.Image -import torch -from torch import nn -from transformers import CLIPImageProcessor, CLIPTextModel, CLIPTokenizer - -from ...loaders import TextualInversionLoaderMixin -from ...models import AutoencoderKL, ControlNetModel, UNet2DConditionModel -from ...models.controlnet import ControlNetOutput -from ...models.modeling_utils import ModelMixin -from ...schedulers import KarrasDiffusionSchedulers -from ...utils import ( - PIL_INTERPOLATION, - is_accelerate_available, - is_accelerate_version, - logging, - randn_tensor, - replace_example_docstring, -) -from ..pipeline_utils import DiffusionPipeline -from . import StableDiffusionPipelineOutput -from .safety_checker import StableDiffusionSafetyChecker - - -logger = logging.get_logger(__name__) # pylint: disable=invalid-name - - -EXAMPLE_DOC_STRING = """ - Examples: - ```py - >>> # !pip install opencv-python transformers accelerate - >>> from diffusers import StableDiffusionControlNetPipeline, ControlNetModel, UniPCMultistepScheduler - >>> from diffusers.utils import load_image - >>> import numpy as np - >>> import torch - - >>> import cv2 - >>> from PIL import Image - - >>> # download an image - >>> image = load_image( - ... "https://hf.co/datasets/huggingface/documentation-images/resolve/main/diffusers/input_image_vermeer.png" - ... ) - >>> image = np.array(image) - - >>> # get canny image - >>> image = cv2.Canny(image, 100, 200) - >>> image = image[:, :, None] - >>> image = np.concatenate([image, image, image], axis=2) - >>> canny_image = Image.fromarray(image) - - >>> # load control net and stable diffusion v1-5 - >>> controlnet = ControlNetModel.from_pretrained("lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16) - >>> pipe = StableDiffusionControlNetPipeline.from_pretrained( - ... "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16 - ... ) - - >>> # speed up diffusion process with faster scheduler and memory optimization - >>> pipe.scheduler = UniPCMultistepScheduler.from_config(pipe.scheduler.config) - >>> # remove following line if xformers is not installed - >>> pipe.enable_xformers_memory_efficient_attention() - - >>> pipe.enable_model_cpu_offload() - - >>> # generate image - >>> generator = torch.manual_seed(0) - >>> image = pipe( - ... "futuristic-looking woman", num_inference_steps=20, generator=generator, image=canny_image - ... ).images[0] - ``` -""" - - -class MultiControlNetModel(ModelMixin): - r""" - Multiple `ControlNetModel` wrapper class for Multi-ControlNet - - This module is a wrapper for multiple instances of the `ControlNetModel`. The `forward()` API is designed to be - compatible with `ControlNetModel`. - - Args: - controlnets (`List[ControlNetModel]`): - Provides additional conditioning to the unet during the denoising process. You must set multiple - `ControlNetModel` as a list. 
- """ - - def __init__(self, controlnets: Union[List[ControlNetModel], Tuple[ControlNetModel]]): - super().__init__() - self.nets = nn.ModuleList(controlnets) - - def forward( - self, - sample: torch.FloatTensor, - timestep: Union[torch.Tensor, float, int], - encoder_hidden_states: torch.Tensor, - controlnet_cond: List[torch.tensor], - conditioning_scale: List[float], - class_labels: Optional[torch.Tensor] = None, - timestep_cond: Optional[torch.Tensor] = None, - attention_mask: Optional[torch.Tensor] = None, - cross_attention_kwargs: Optional[Dict[str, Any]] = None, - return_dict: bool = True, - ) -> Union[ControlNetOutput, Tuple]: - for i, (image, scale, controlnet) in enumerate(zip(controlnet_cond, conditioning_scale, self.nets)): - down_samples, mid_sample = controlnet( - sample, - timestep, - encoder_hidden_states, - image, - scale, - class_labels, - timestep_cond, - attention_mask, - cross_attention_kwargs, - return_dict, - ) - - # merge samples - if i == 0: - down_block_res_samples, mid_block_res_sample = down_samples, mid_sample - else: - down_block_res_samples = [ - samples_prev + samples_curr - for samples_prev, samples_curr in zip(down_block_res_samples, down_samples) - ] - mid_block_res_sample += mid_sample - - return down_block_res_samples, mid_block_res_sample - - -class StableDiffusionControlNetPipeline(DiffusionPipeline, TextualInversionLoaderMixin): - r""" - Pipeline for text-to-image generation using Stable Diffusion with ControlNet guidance. - - This model inherits from [`DiffusionPipeline`]. Check the superclass documentation for the generic methods the - library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) - - Args: - vae ([`AutoencoderKL`]): - Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations. - text_encoder ([`CLIPTextModel`]): - Frozen text-encoder. Stable Diffusion uses the text portion of - [CLIP](https://huggingface.co/docs/transformers/model_doc/clip#transformers.CLIPTextModel), specifically - the [clip-vit-large-patch14](https://huggingface.co/openai/clip-vit-large-patch14) variant. - tokenizer (`CLIPTokenizer`): - Tokenizer of class - [CLIPTokenizer](https://huggingface.co/docs/transformers/v4.21.0/en/model_doc/clip#transformers.CLIPTokenizer). - unet ([`UNet2DConditionModel`]): Conditional U-Net architecture to denoise the encoded image latents. - controlnet ([`ControlNetModel`] or `List[ControlNetModel]`): - Provides additional conditioning to the unet during the denoising process. If you set multiple ControlNets - as a list, the outputs from each ControlNet are added together to create one combined additional - conditioning. - scheduler ([`SchedulerMixin`]): - A scheduler to be used in combination with `unet` to denoise the encoded image latents. Can be one of - [`DDIMScheduler`], [`LMSDiscreteScheduler`], or [`PNDMScheduler`]. - safety_checker ([`StableDiffusionSafetyChecker`]): - Classification module that estimates whether generated images could be considered offensive or harmful. - Please, refer to the [model card](https://huggingface.co/runwayml/stable-diffusion-v1-5) for details. - feature_extractor ([`CLIPImageProcessor`]): - Model that extracts features from generated images to be used as inputs for the `safety_checker`. 
- """ - _optional_components = ["safety_checker", "feature_extractor"] - - def __init__( - self, - vae: AutoencoderKL, - text_encoder: CLIPTextModel, - tokenizer: CLIPTokenizer, - unet: UNet2DConditionModel, - controlnet: Union[ControlNetModel, List[ControlNetModel], Tuple[ControlNetModel], MultiControlNetModel], - scheduler: KarrasDiffusionSchedulers, - safety_checker: StableDiffusionSafetyChecker, - feature_extractor: CLIPImageProcessor, - requires_safety_checker: bool = True, - ): - super().__init__() - - if safety_checker is None and requires_safety_checker: - logger.warning( - f"You have disabled the safety checker for {self.__class__} by passing `safety_checker=None`. Ensure" - " that you abide to the conditions of the Stable Diffusion license and do not expose unfiltered" - " results in services or applications open to the public. Both the diffusers team and Hugging Face" - " strongly recommend to keep the safety filter enabled in all public facing circumstances, disabling" - " it only for use-cases that involve analyzing network behavior or auditing its results. For more" - " information, please have a look at https://github.com/huggingface/diffusers/pull/254 ." - ) - - if safety_checker is not None and feature_extractor is None: - raise ValueError( - "Make sure to define a feature extractor when loading {self.__class__} if you want to use the safety" - " checker. If you do not want to use the safety checker, you can pass `'safety_checker=None'` instead." - ) - - if isinstance(controlnet, (list, tuple)): - controlnet = MultiControlNetModel(controlnet) - - self.register_modules( - vae=vae, - text_encoder=text_encoder, - tokenizer=tokenizer, - unet=unet, - controlnet=controlnet, - scheduler=scheduler, - safety_checker=safety_checker, - feature_extractor=feature_extractor, - ) - self.vae_scale_factor = 2 ** (len(self.vae.config.block_out_channels) - 1) - self.register_to_config(requires_safety_checker=requires_safety_checker) - - # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline.enable_vae_slicing - def enable_vae_slicing(self): - r""" - Enable sliced VAE decoding. - - When this option is enabled, the VAE will split the input tensor in slices to compute decoding in several - steps. This is useful to save some memory and allow larger batch sizes. - """ - self.vae.enable_slicing() - - # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline.disable_vae_slicing - def disable_vae_slicing(self): - r""" - Disable sliced VAE decoding. If `enable_vae_slicing` was previously invoked, this method will go back to - computing decoding in one step. - """ - self.vae.disable_slicing() - - def enable_sequential_cpu_offload(self, gpu_id=0): - r""" - Offloads all models to CPU using accelerate, significantly reducing memory usage. When called, unet, - text_encoder, vae, controlnet, and safety checker have their state dicts saved to CPU and then are moved to a - `torch.device('meta') and loaded to GPU only when their specific submodule has its `forward` method called. - Note that offloading happens on a submodule basis. Memory savings are higher than with - `enable_model_cpu_offload`, but performance is lower. 
- """ - if is_accelerate_available(): - from accelerate import cpu_offload - else: - raise ImportError("Please install accelerate via `pip install accelerate`") - - device = torch.device(f"cuda:{gpu_id}") - - for cpu_offloaded_model in [self.unet, self.text_encoder, self.vae, self.controlnet]: - cpu_offload(cpu_offloaded_model, device) - - if self.safety_checker is not None: - cpu_offload(self.safety_checker, execution_device=device, offload_buffers=True) - - def enable_model_cpu_offload(self, gpu_id=0): - r""" - Offloads all models to CPU using accelerate, reducing memory usage with a low impact on performance. Compared - to `enable_sequential_cpu_offload`, this method moves one whole model at a time to the GPU when its `forward` - method is called, and the model remains in GPU until the next model runs. Memory savings are lower than with - `enable_sequential_cpu_offload`, but performance is much better due to the iterative execution of the `unet`. - """ - if is_accelerate_available() and is_accelerate_version(">=", "0.17.0.dev0"): - from accelerate import cpu_offload_with_hook - else: - raise ImportError("`enable_model_cpu_offload` requires `accelerate v0.17.0` or higher.") - - device = torch.device(f"cuda:{gpu_id}") - - hook = None - for cpu_offloaded_model in [self.text_encoder, self.unet, self.vae]: - _, hook = cpu_offload_with_hook(cpu_offloaded_model, device, prev_module_hook=hook) - - if self.safety_checker is not None: - # the safety checker can offload the vae again - _, hook = cpu_offload_with_hook(self.safety_checker, device, prev_module_hook=hook) - - # control net hook has be manually offloaded as it alternates with unet - cpu_offload_with_hook(self.controlnet, device) - - # We'll offload the last model manually. - self.final_offload_hook = hook - - @property - # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline._execution_device - def _execution_device(self): - r""" - Returns the device on which the pipeline's models will be executed. After calling - `pipeline.enable_sequential_cpu_offload()` the execution device can only be inferred from Accelerate's module - hooks. - """ - if not hasattr(self.unet, "_hf_hook"): - return self.device - for module in self.unet.modules(): - if ( - hasattr(module, "_hf_hook") - and hasattr(module._hf_hook, "execution_device") - and module._hf_hook.execution_device is not None - ): - return torch.device(module._hf_hook.execution_device) - return self.device - - # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline._encode_prompt - def _encode_prompt( - self, - prompt, - device, - num_images_per_prompt, - do_classifier_free_guidance, - negative_prompt=None, - prompt_embeds: Optional[torch.FloatTensor] = None, - negative_prompt_embeds: Optional[torch.FloatTensor] = None, - ): - r""" - Encodes the prompt into text encoder hidden states. - - Args: - prompt (`str` or `List[str]`, *optional*): - prompt to be encoded - device: (`torch.device`): - torch device - num_images_per_prompt (`int`): - number of images that should be generated per prompt - do_classifier_free_guidance (`bool`): - whether to use classifier free guidance or not - negative_prompt (`str` or `List[str]`, *optional*): - The prompt or prompts not to guide the image generation. If not defined, one has to pass - `negative_prompt_embeds` instead. Ignored when not using guidance (i.e., ignored if `guidance_scale` is - less than `1`). 
- prompt_embeds (`torch.FloatTensor`, *optional*): - Pre-generated text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not - provided, text embeddings will be generated from `prompt` input argument. - negative_prompt_embeds (`torch.FloatTensor`, *optional*): - Pre-generated negative text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt - weighting. If not provided, negative_prompt_embeds will be generated from `negative_prompt` input - argument. - """ - if prompt is not None and isinstance(prompt, str): - batch_size = 1 - elif prompt is not None and isinstance(prompt, list): - batch_size = len(prompt) - else: - batch_size = prompt_embeds.shape[0] - - if prompt_embeds is None: - # textual inversion: procecss multi-vector tokens if necessary - if isinstance(self, TextualInversionLoaderMixin): - prompt = self.maybe_convert_prompt(prompt, self.tokenizer) - - text_inputs = self.tokenizer( - prompt, - padding="max_length", - max_length=self.tokenizer.model_max_length, - truncation=True, - return_tensors="pt", - ) - text_input_ids = text_inputs.input_ids - untruncated_ids = self.tokenizer(prompt, padding="longest", return_tensors="pt").input_ids - - if untruncated_ids.shape[-1] >= text_input_ids.shape[-1] and not torch.equal( - text_input_ids, untruncated_ids - ): - removed_text = self.tokenizer.batch_decode( - untruncated_ids[:, self.tokenizer.model_max_length - 1 : -1] - ) - logger.warning( - "The following part of your input was truncated because CLIP can only handle sequences up to" - f" {self.tokenizer.model_max_length} tokens: {removed_text}" - ) - - if hasattr(self.text_encoder.config, "use_attention_mask") and self.text_encoder.config.use_attention_mask: - attention_mask = text_inputs.attention_mask.to(device) - else: - attention_mask = None - - prompt_embeds = self.text_encoder( - text_input_ids.to(device), - attention_mask=attention_mask, - ) - prompt_embeds = prompt_embeds[0] - - prompt_embeds = prompt_embeds.to(dtype=self.text_encoder.dtype, device=device) - - bs_embed, seq_len, _ = prompt_embeds.shape - # duplicate text embeddings for each generation per prompt, using mps friendly method - prompt_embeds = prompt_embeds.repeat(1, num_images_per_prompt, 1) - prompt_embeds = prompt_embeds.view(bs_embed * num_images_per_prompt, seq_len, -1) - - # get unconditional embeddings for classifier free guidance - if do_classifier_free_guidance and negative_prompt_embeds is None: - uncond_tokens: List[str] - if negative_prompt is None: - uncond_tokens = [""] * batch_size - elif type(prompt) is not type(negative_prompt): - raise TypeError( - f"`negative_prompt` should be the same type to `prompt`, but got {type(negative_prompt)} !=" - f" {type(prompt)}." - ) - elif isinstance(negative_prompt, str): - uncond_tokens = [negative_prompt] - elif batch_size != len(negative_prompt): - raise ValueError( - f"`negative_prompt`: {negative_prompt} has batch size {len(negative_prompt)}, but `prompt`:" - f" {prompt} has batch size {batch_size}. Please make sure that passed `negative_prompt` matches" - " the batch size of `prompt`." 
- ) - else: - uncond_tokens = negative_prompt - - # textual inversion: procecss multi-vector tokens if necessary - if isinstance(self, TextualInversionLoaderMixin): - uncond_tokens = self.maybe_convert_prompt(uncond_tokens, self.tokenizer) - - max_length = prompt_embeds.shape[1] - uncond_input = self.tokenizer( - uncond_tokens, - padding="max_length", - max_length=max_length, - truncation=True, - return_tensors="pt", - ) - - if hasattr(self.text_encoder.config, "use_attention_mask") and self.text_encoder.config.use_attention_mask: - attention_mask = uncond_input.attention_mask.to(device) - else: - attention_mask = None - - negative_prompt_embeds = self.text_encoder( - uncond_input.input_ids.to(device), - attention_mask=attention_mask, - ) - negative_prompt_embeds = negative_prompt_embeds[0] - - if do_classifier_free_guidance: - # duplicate unconditional embeddings for each generation per prompt, using mps friendly method - seq_len = negative_prompt_embeds.shape[1] - - negative_prompt_embeds = negative_prompt_embeds.to(dtype=self.text_encoder.dtype, device=device) - - negative_prompt_embeds = negative_prompt_embeds.repeat(1, num_images_per_prompt, 1) - negative_prompt_embeds = negative_prompt_embeds.view(batch_size * num_images_per_prompt, seq_len, -1) - - # For classifier free guidance, we need to do two forward passes. - # Here we concatenate the unconditional and text embeddings into a single batch - # to avoid doing two forward passes - prompt_embeds = torch.cat([negative_prompt_embeds, prompt_embeds]) - - return prompt_embeds - - # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline.run_safety_checker - def run_safety_checker(self, image, device, dtype): - if self.safety_checker is not None: - safety_checker_input = self.feature_extractor(self.numpy_to_pil(image), return_tensors="pt").to(device) - image, has_nsfw_concept = self.safety_checker( - images=image, clip_input=safety_checker_input.pixel_values.to(dtype) - ) - else: - has_nsfw_concept = None - return image, has_nsfw_concept - - # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline.decode_latents - def decode_latents(self, latents): - latents = 1 / self.vae.config.scaling_factor * latents - image = self.vae.decode(latents).sample - image = (image / 2 + 0.5).clamp(0, 1) - # we always cast to float32 as this does not cause significant overhead and is compatible with bfloat16 - image = image.cpu().permute(0, 2, 3, 1).float().numpy() - return image - - # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline.prepare_extra_step_kwargs - def prepare_extra_step_kwargs(self, generator, eta): - # prepare extra kwargs for the scheduler step, since not all schedulers have the same signature - # eta (η) is only used with the DDIMScheduler, it will be ignored for other schedulers. 
- # eta corresponds to η in DDIM paper: https://arxiv.org/abs/2010.02502 - # and should be between [0, 1] - - accepts_eta = "eta" in set(inspect.signature(self.scheduler.step).parameters.keys()) - extra_step_kwargs = {} - if accepts_eta: - extra_step_kwargs["eta"] = eta - - # check if the scheduler accepts generator - accepts_generator = "generator" in set(inspect.signature(self.scheduler.step).parameters.keys()) - if accepts_generator: - extra_step_kwargs["generator"] = generator - return extra_step_kwargs - - def check_inputs( - self, - prompt, - image, - height, - width, - callback_steps, - negative_prompt=None, - prompt_embeds=None, - negative_prompt_embeds=None, - controlnet_conditioning_scale=1.0, - ): - if height % 8 != 0 or width % 8 != 0: - raise ValueError(f"`height` and `width` have to be divisible by 8 but are {height} and {width}.") - - if (callback_steps is None) or ( - callback_steps is not None and (not isinstance(callback_steps, int) or callback_steps <= 0) - ): - raise ValueError( - f"`callback_steps` has to be a positive integer but is {callback_steps} of type" - f" {type(callback_steps)}." - ) - - if prompt is not None and prompt_embeds is not None: - raise ValueError( - f"Cannot forward both `prompt`: {prompt} and `prompt_embeds`: {prompt_embeds}. Please make sure to" - " only forward one of the two." - ) - elif prompt is None and prompt_embeds is None: - raise ValueError( - "Provide either `prompt` or `prompt_embeds`. Cannot leave both `prompt` and `prompt_embeds` undefined." - ) - elif prompt is not None and (not isinstance(prompt, str) and not isinstance(prompt, list)): - raise ValueError(f"`prompt` has to be of type `str` or `list` but is {type(prompt)}") - - if negative_prompt is not None and negative_prompt_embeds is not None: - raise ValueError( - f"Cannot forward both `negative_prompt`: {negative_prompt} and `negative_prompt_embeds`:" - f" {negative_prompt_embeds}. Please make sure to only forward one of the two." - ) - - if prompt_embeds is not None and negative_prompt_embeds is not None: - if prompt_embeds.shape != negative_prompt_embeds.shape: - raise ValueError( - "`prompt_embeds` and `negative_prompt_embeds` must have the same shape when passed directly, but" - f" got: `prompt_embeds` {prompt_embeds.shape} != `negative_prompt_embeds`" - f" {negative_prompt_embeds.shape}." - ) - - # `prompt` needs more sophisticated handling when there are multiple - # conditionings. - if isinstance(self.controlnet, MultiControlNetModel): - if isinstance(prompt, list): - logger.warning( - f"You have {len(self.controlnet.nets)} ControlNets and you have passed {len(prompt)}" - " prompts. The conditionings will be fixed across the prompts." - ) - - # Check `image` - if isinstance(self.controlnet, ControlNetModel): - self.check_image(image, prompt, prompt_embeds) - elif isinstance(self.controlnet, MultiControlNetModel): - if not isinstance(image, list): - raise TypeError("For multiple controlnets: `image` must be type `list`") - - # When `image` is a nested list: - # (e.g. [[canny_image_1, pose_image_1], [canny_image_2, pose_image_2]]) - elif any(isinstance(i, list) for i in image): - raise ValueError("A single batch of multiple conditionings are supported at the moment.") - elif len(image) != len(self.controlnet.nets): - raise ValueError( - "For multiple controlnets: `image` must have the same length as the number of controlnets." 
- ) - - for image_ in image: - self.check_image(image_, prompt, prompt_embeds) - else: - assert False - - # Check `controlnet_conditioning_scale` - if isinstance(self.controlnet, ControlNetModel): - if not isinstance(controlnet_conditioning_scale, float): - raise TypeError("For single controlnet: `controlnet_conditioning_scale` must be type `float`.") - elif isinstance(self.controlnet, MultiControlNetModel): - if isinstance(controlnet_conditioning_scale, list): - if any(isinstance(i, list) for i in controlnet_conditioning_scale): - raise ValueError("A single batch of multiple conditionings are supported at the moment.") - elif isinstance(controlnet_conditioning_scale, list) and len(controlnet_conditioning_scale) != len( - self.controlnet.nets - ): - raise ValueError( - "For multiple controlnets: When `controlnet_conditioning_scale` is specified as `list`, it must have" - " the same length as the number of controlnets" - ) - else: - assert False - - def check_image(self, image, prompt, prompt_embeds): - image_is_pil = isinstance(image, PIL.Image.Image) - image_is_tensor = isinstance(image, torch.Tensor) - image_is_pil_list = isinstance(image, list) and isinstance(image[0], PIL.Image.Image) - image_is_tensor_list = isinstance(image, list) and isinstance(image[0], torch.Tensor) - - if not image_is_pil and not image_is_tensor and not image_is_pil_list and not image_is_tensor_list: - raise TypeError( - "image must be passed and be one of PIL image, torch tensor, list of PIL images, or list of torch tensors" - ) - - if image_is_pil: - image_batch_size = 1 - elif image_is_tensor: - image_batch_size = image.shape[0] - elif image_is_pil_list: - image_batch_size = len(image) - elif image_is_tensor_list: - image_batch_size = len(image) - - if prompt is not None and isinstance(prompt, str): - prompt_batch_size = 1 - elif prompt is not None and isinstance(prompt, list): - prompt_batch_size = len(prompt) - elif prompt_embeds is not None: - prompt_batch_size = prompt_embeds.shape[0] - - if image_batch_size != 1 and image_batch_size != prompt_batch_size: - raise ValueError( - f"If image batch size is not 1, image batch size must be same as prompt batch size. 
image batch size: {image_batch_size}, prompt batch size: {prompt_batch_size}" - ) - - def prepare_image( - self, image, width, height, batch_size, num_images_per_prompt, device, dtype, do_classifier_free_guidance - ): - if not isinstance(image, torch.Tensor): - if isinstance(image, PIL.Image.Image): - image = [image] - - if isinstance(image[0], PIL.Image.Image): - images = [] - - for image_ in image: - image_ = image_.convert("RGB") - image_ = image_.resize((width, height), resample=PIL_INTERPOLATION["lanczos"]) - image_ = np.array(image_) - image_ = image_[None, :] - images.append(image_) - - image = images - - image = np.concatenate(image, axis=0) - image = np.array(image).astype(np.float32) / 255.0 - image = image.transpose(0, 3, 1, 2) - image = torch.from_numpy(image) - elif isinstance(image[0], torch.Tensor): - image = torch.cat(image, dim=0) - - image_batch_size = image.shape[0] - - if image_batch_size == 1: - repeat_by = batch_size - else: - # image batch size is the same as prompt batch size - repeat_by = num_images_per_prompt - - image = image.repeat_interleave(repeat_by, dim=0) - - image = image.to(device=device, dtype=dtype) - - if do_classifier_free_guidance: - image = torch.cat([image] * 2) - - return image - - # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline.prepare_latents - def prepare_latents(self, batch_size, num_channels_latents, height, width, dtype, device, generator, latents=None): - shape = (batch_size, num_channels_latents, height // self.vae_scale_factor, width // self.vae_scale_factor) - if isinstance(generator, list) and len(generator) != batch_size: - raise ValueError( - f"You have passed a list of generators of length {len(generator)}, but requested an effective batch" - f" size of {batch_size}. Make sure the batch size matches the length of the generators." - ) - - if latents is None: - latents = randn_tensor(shape, generator=generator, device=device, dtype=dtype) - else: - latents = latents.to(device) - - # scale the initial noise by the standard deviation required by the scheduler - latents = latents * self.scheduler.init_noise_sigma - return latents - - def _default_height_width(self, height, width, image): - # NOTE: It is possible that a list of images have different - # dimensions for each image, so just checking the first image - # is not _exactly_ correct, but it is simple. 
- while isinstance(image, list): - image = image[0] - - if height is None: - if isinstance(image, PIL.Image.Image): - height = image.height - elif isinstance(image, torch.Tensor): - height = image.shape[2] - - height = (height // 8) * 8 # round down to nearest multiple of 8 - - if width is None: - if isinstance(image, PIL.Image.Image): - width = image.width - elif isinstance(image, torch.Tensor): - width = image.shape[3] - - width = (width // 8) * 8 # round down to nearest multiple of 8 - - return height, width - - # override DiffusionPipeline - def save_pretrained( - self, - save_directory: Union[str, os.PathLike], - safe_serialization: bool = False, - variant: Optional[str] = None, - ): - if isinstance(self.controlnet, ControlNetModel): - super().save_pretrained(save_directory, safe_serialization, variant) - else: - raise NotImplementedError("Currently, the `save_pretrained()` is not implemented for Multi-ControlNet.") - - @torch.no_grad() - @replace_example_docstring(EXAMPLE_DOC_STRING) - def __call__( - self, - prompt: Union[str, List[str]] = None, - image: Union[torch.FloatTensor, PIL.Image.Image, List[torch.FloatTensor], List[PIL.Image.Image]] = None, - height: Optional[int] = None, - width: Optional[int] = None, - num_inference_steps: int = 50, - guidance_scale: float = 7.5, - negative_prompt: Optional[Union[str, List[str]]] = None, - num_images_per_prompt: Optional[int] = 1, - eta: float = 0.0, - generator: Optional[Union[torch.Generator, List[torch.Generator]]] = None, - latents: Optional[torch.FloatTensor] = None, - prompt_embeds: Optional[torch.FloatTensor] = None, - negative_prompt_embeds: Optional[torch.FloatTensor] = None, - output_type: Optional[str] = "pil", - return_dict: bool = True, - callback: Optional[Callable[[int, int, torch.FloatTensor], None]] = None, - callback_steps: int = 1, - cross_attention_kwargs: Optional[Dict[str, Any]] = None, - controlnet_conditioning_scale: Union[float, List[float]] = 1.0, - ): - r""" - Function invoked when calling the pipeline for generation. - - Args: - prompt (`str` or `List[str]`, *optional*): - The prompt or prompts to guide the image generation. If not defined, one has to pass `prompt_embeds`. - instead. - image (`torch.FloatTensor`, `PIL.Image.Image`, `List[torch.FloatTensor]`, `List[PIL.Image.Image]`, - `List[List[torch.FloatTensor]]`, or `List[List[PIL.Image.Image]]`): - The ControlNet input condition. ControlNet uses this input condition to generate guidance to Unet. If - the type is specified as `Torch.FloatTensor`, it is passed to ControlNet as is. `PIL.Image.Image` can - also be accepted as an image. The dimensions of the output image defaults to `image`'s dimensions. If - height and/or width are passed, `image` is resized according to them. If multiple ControlNets are - specified in init, images must be passed as a list such that each element of the list can be correctly - batched for input to a single controlnet. - height (`int`, *optional*, defaults to self.unet.config.sample_size * self.vae_scale_factor): - The height in pixels of the generated image. - width (`int`, *optional*, defaults to self.unet.config.sample_size * self.vae_scale_factor): - The width in pixels of the generated image. - num_inference_steps (`int`, *optional*, defaults to 50): - The number of denoising steps. More denoising steps usually lead to a higher quality image at the - expense of slower inference. 
- guidance_scale (`float`, *optional*, defaults to 7.5): - Guidance scale as defined in [Classifier-Free Diffusion Guidance](https://arxiv.org/abs/2207.12598). - `guidance_scale` is defined as `w` of equation 2. of [Imagen - Paper](https://arxiv.org/pdf/2205.11487.pdf). Guidance scale is enabled by setting `guidance_scale > - 1`. Higher guidance scale encourages to generate images that are closely linked to the text `prompt`, - usually at the expense of lower image quality. - negative_prompt (`str` or `List[str]`, *optional*): - The prompt or prompts not to guide the image generation. If not defined, one has to pass - `negative_prompt_embeds` instead. Ignored when not using guidance (i.e., ignored if `guidance_scale` is - less than `1`). - num_images_per_prompt (`int`, *optional*, defaults to 1): - The number of images to generate per prompt. - eta (`float`, *optional*, defaults to 0.0): - Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to - [`schedulers.DDIMScheduler`], will be ignored for others. - generator (`torch.Generator` or `List[torch.Generator]`, *optional*): - One or a list of [torch generator(s)](https://pytorch.org/docs/stable/generated/torch.Generator.html) - to make generation deterministic. - latents (`torch.FloatTensor`, *optional*): - Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image - generation. Can be used to tweak the same generation with different prompts. If not provided, a latents - tensor will ge generated by sampling using the supplied random `generator`. - prompt_embeds (`torch.FloatTensor`, *optional*): - Pre-generated text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not - provided, text embeddings will be generated from `prompt` input argument. - negative_prompt_embeds (`torch.FloatTensor`, *optional*): - Pre-generated negative text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt - weighting. If not provided, negative_prompt_embeds will be generated from `negative_prompt` input - argument. - output_type (`str`, *optional*, defaults to `"pil"`): - The output format of the generate image. Choose between - [PIL](https://pillow.readthedocs.io/en/stable/): `PIL.Image.Image` or `np.array`. - return_dict (`bool`, *optional*, defaults to `True`): - Whether or not to return a [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] instead of a - plain tuple. - callback (`Callable`, *optional*): - A function that will be called every `callback_steps` steps during inference. The function will be - called with the following arguments: `callback(step: int, timestep: int, latents: torch.FloatTensor)`. - callback_steps (`int`, *optional*, defaults to 1): - The frequency at which the `callback` function will be called. If not specified, the callback will be - called at every step. - cross_attention_kwargs (`dict`, *optional*): - A kwargs dictionary that if specified is passed along to the `AttentionProcessor` as defined under - `self.processor` in - [diffusers.cross_attention](https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/cross_attention.py). - controlnet_conditioning_scale (`float` or `List[float]`, *optional*, defaults to 1.0): - The outputs of the controlnet are multiplied by `controlnet_conditioning_scale` before they are added - to the residual in the original unet. If multiple ControlNets are specified in init, you can set the - corresponding scale as a list. 
- Examples: - - Returns: - [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] or `tuple`: - [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] if `return_dict` is True, otherwise a `tuple. - When returning a tuple, the first element is a list with the generated images, and the second element is a - list of `bool`s denoting whether the corresponding generated image likely represents "not-safe-for-work" - (nsfw) content, according to the `safety_checker`. - """ - # 0. Default height and width to unet - height, width = self._default_height_width(height, width, image) - - # 1. Check inputs. Raise error if not correct - self.check_inputs( - prompt, - image, - height, - width, - callback_steps, - negative_prompt, - prompt_embeds, - negative_prompt_embeds, - controlnet_conditioning_scale, - ) - - # 2. Define call parameters - if prompt is not None and isinstance(prompt, str): - batch_size = 1 - elif prompt is not None and isinstance(prompt, list): - batch_size = len(prompt) - else: - batch_size = prompt_embeds.shape[0] - - device = self._execution_device - # here `guidance_scale` is defined analog to the guidance weight `w` of equation (2) - # of the Imagen paper: https://arxiv.org/pdf/2205.11487.pdf . `guidance_scale = 1` - # corresponds to doing no classifier free guidance. - do_classifier_free_guidance = guidance_scale > 1.0 - - if isinstance(self.controlnet, MultiControlNetModel) and isinstance(controlnet_conditioning_scale, float): - controlnet_conditioning_scale = [controlnet_conditioning_scale] * len(self.controlnet.nets) - - # 3. Encode input prompt - prompt_embeds = self._encode_prompt( - prompt, - device, - num_images_per_prompt, - do_classifier_free_guidance, - negative_prompt, - prompt_embeds=prompt_embeds, - negative_prompt_embeds=negative_prompt_embeds, - ) - - # 4. Prepare image - if isinstance(self.controlnet, ControlNetModel): - image = self.prepare_image( - image=image, - width=width, - height=height, - batch_size=batch_size * num_images_per_prompt, - num_images_per_prompt=num_images_per_prompt, - device=device, - dtype=self.controlnet.dtype, - do_classifier_free_guidance=do_classifier_free_guidance, - ) - elif isinstance(self.controlnet, MultiControlNetModel): - images = [] - - for image_ in image: - image_ = self.prepare_image( - image=image_, - width=width, - height=height, - batch_size=batch_size * num_images_per_prompt, - num_images_per_prompt=num_images_per_prompt, - device=device, - dtype=self.controlnet.dtype, - do_classifier_free_guidance=do_classifier_free_guidance, - ) - - images.append(image_) - - image = images - else: - assert False - - # 5. Prepare timesteps - self.scheduler.set_timesteps(num_inference_steps, device=device) - timesteps = self.scheduler.timesteps - - # 6. Prepare latent variables - num_channels_latents = self.unet.in_channels - latents = self.prepare_latents( - batch_size * num_images_per_prompt, - num_channels_latents, - height, - width, - prompt_embeds.dtype, - device, - generator, - latents, - ) - - # 7. Prepare extra step kwargs. TODO: Logic should ideally just be moved out of the pipeline - extra_step_kwargs = self.prepare_extra_step_kwargs(generator, eta) - - # 8. 
Denoising loop - num_warmup_steps = len(timesteps) - num_inference_steps * self.scheduler.order - with self.progress_bar(total=num_inference_steps) as progress_bar: - for i, t in enumerate(timesteps): - # expand the latents if we are doing classifier free guidance - latent_model_input = torch.cat([latents] * 2) if do_classifier_free_guidance else latents - latent_model_input = self.scheduler.scale_model_input(latent_model_input, t) - - # controlnet(s) inference - down_block_res_samples, mid_block_res_sample = self.controlnet( - latent_model_input, - t, - encoder_hidden_states=prompt_embeds, - controlnet_cond=image, - conditioning_scale=controlnet_conditioning_scale, - return_dict=False, - ) - - # predict the noise residual - noise_pred = self.unet( - latent_model_input, - t, - encoder_hidden_states=prompt_embeds, - cross_attention_kwargs=cross_attention_kwargs, - down_block_additional_residuals=down_block_res_samples, - mid_block_additional_residual=mid_block_res_sample, - ).sample - - # perform guidance - if do_classifier_free_guidance: - noise_pred_uncond, noise_pred_text = noise_pred.chunk(2) - noise_pred = noise_pred_uncond + guidance_scale * (noise_pred_text - noise_pred_uncond) - - # compute the previous noisy sample x_t -> x_t-1 - latents = self.scheduler.step(noise_pred, t, latents, **extra_step_kwargs).prev_sample - - # call the callback, if provided - if i == len(timesteps) - 1 or ((i + 1) > num_warmup_steps and (i + 1) % self.scheduler.order == 0): - progress_bar.update() - if callback is not None and i % callback_steps == 0: - callback(i, t, latents) - - # If we do sequential model offloading, let's offload unet and controlnet - # manually for max memory savings - if hasattr(self, "final_offload_hook") and self.final_offload_hook is not None: - self.unet.to("cpu") - self.controlnet.to("cpu") - torch.cuda.empty_cache() - - if output_type == "latent": - image = latents - has_nsfw_concept = None - elif output_type == "pil": - # 8. Post-processing - image = self.decode_latents(latents) - - # 9. Run safety checker - image, has_nsfw_concept = self.run_safety_checker(image, device, prompt_embeds.dtype) - - # 10. Convert to PIL - image = self.numpy_to_pil(image) - else: - # 8. Post-processing - image = self.decode_latents(latents) - - # 9. Run safety checker - image, has_nsfw_concept = self.run_safety_checker(image, device, prompt_embeds.dtype) - - # Offload last model to CPU - if hasattr(self, "final_offload_hook") and self.final_offload_hook is not None: - self.final_offload_hook.offload() - - if not return_dict: - return (image, has_nsfw_concept) - - return StableDiffusionPipelineOutput(images=image, nsfw_content_detected=has_nsfw_concept) diff --git a/spaces/declare-lab/tango/diffusers/tests/pipelines/ddim/test_ddim.py b/spaces/declare-lab/tango/diffusers/tests/pipelines/ddim/test_ddim.py deleted file mode 100644 index 4d2c4e490d638861c4d06fb3c2ddff489a2773d3..0000000000000000000000000000000000000000 --- a/spaces/declare-lab/tango/diffusers/tests/pipelines/ddim/test_ddim.py +++ /dev/null @@ -1,132 +0,0 @@ -# coding=utf-8 -# Copyright 2023 HuggingFace Inc. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. 
-# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - -import unittest - -import numpy as np -import torch - -from diffusers import DDIMPipeline, DDIMScheduler, UNet2DModel -from diffusers.utils.testing_utils import require_torch_gpu, slow, torch_device - -from ...pipeline_params import UNCONDITIONAL_IMAGE_GENERATION_BATCH_PARAMS, UNCONDITIONAL_IMAGE_GENERATION_PARAMS -from ...test_pipelines_common import PipelineTesterMixin - - -torch.backends.cuda.matmul.allow_tf32 = False - - -class DDIMPipelineFastTests(PipelineTesterMixin, unittest.TestCase): - pipeline_class = DDIMPipeline - params = UNCONDITIONAL_IMAGE_GENERATION_PARAMS - required_optional_params = PipelineTesterMixin.required_optional_params - { - "num_images_per_prompt", - "latents", - "callback", - "callback_steps", - } - batch_params = UNCONDITIONAL_IMAGE_GENERATION_BATCH_PARAMS - test_cpu_offload = False - - def get_dummy_components(self): - torch.manual_seed(0) - unet = UNet2DModel( - block_out_channels=(32, 64), - layers_per_block=2, - sample_size=32, - in_channels=3, - out_channels=3, - down_block_types=("DownBlock2D", "AttnDownBlock2D"), - up_block_types=("AttnUpBlock2D", "UpBlock2D"), - ) - scheduler = DDIMScheduler() - components = {"unet": unet, "scheduler": scheduler} - return components - - def get_dummy_inputs(self, device, seed=0): - if str(device).startswith("mps"): - generator = torch.manual_seed(seed) - else: - generator = torch.Generator(device=device).manual_seed(seed) - inputs = { - "batch_size": 1, - "generator": generator, - "num_inference_steps": 2, - "output_type": "numpy", - } - return inputs - - def test_inference(self): - device = "cpu" - - components = self.get_dummy_components() - pipe = self.pipeline_class(**components) - pipe.to(device) - pipe.set_progress_bar_config(disable=None) - - inputs = self.get_dummy_inputs(device) - image = pipe(**inputs).images - image_slice = image[0, -3:, -3:, -1] - - self.assertEqual(image.shape, (1, 32, 32, 3)) - expected_slice = np.array( - [1.000e00, 5.717e-01, 4.717e-01, 1.000e00, 0.000e00, 1.000e00, 3.000e-04, 0.000e00, 9.000e-04] - ) - max_diff = np.abs(image_slice.flatten() - expected_slice).max() - self.assertLessEqual(max_diff, 1e-3) - - -@slow -@require_torch_gpu -class DDIMPipelineIntegrationTests(unittest.TestCase): - def test_inference_cifar10(self): - model_id = "google/ddpm-cifar10-32" - - unet = UNet2DModel.from_pretrained(model_id) - scheduler = DDIMScheduler() - - ddim = DDIMPipeline(unet=unet, scheduler=scheduler) - ddim.to(torch_device) - ddim.set_progress_bar_config(disable=None) - - generator = torch.manual_seed(0) - image = ddim(generator=generator, eta=0.0, output_type="numpy").images - - image_slice = image[0, -3:, -3:, -1] - - assert image.shape == (1, 32, 32, 3) - expected_slice = np.array([0.1723, 0.1617, 0.1600, 0.1626, 0.1497, 0.1513, 0.1505, 0.1442, 0.1453]) - - assert np.abs(image_slice.flatten() - expected_slice).max() < 1e-2 - - def test_inference_ema_bedroom(self): - model_id = "google/ddpm-ema-bedroom-256" - - unet = UNet2DModel.from_pretrained(model_id) - scheduler = DDIMScheduler.from_pretrained(model_id) - - ddpm = DDIMPipeline(unet=unet, scheduler=scheduler) - 
ddpm.to(torch_device) - ddpm.set_progress_bar_config(disable=None) - - generator = torch.manual_seed(0) - image = ddpm(generator=generator, output_type="numpy").images - - image_slice = image[0, -3:, -3:, -1] - - assert image.shape == (1, 256, 256, 3) - expected_slice = np.array([0.0060, 0.0201, 0.0344, 0.0024, 0.0018, 0.0002, 0.0022, 0.0000, 0.0069]) - - assert np.abs(image_slice.flatten() - expected_slice).max() < 1e-2 diff --git a/spaces/deepliteai/yolobench/README.md b/spaces/deepliteai/yolobench/README.md deleted file mode 100644 index e090c36a5e53fc50976c95ee23af32b5238e10bf..0000000000000000000000000000000000000000 --- a/spaces/deepliteai/yolobench/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: YOLOBench -emoji: 🚀 -colorFrom: purple -colorTo: indigo -sdk: gradio -sdk_version: 3.45.2 -app_file: app.py -pinned: false -license: apache-2.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/deepwisdom/MetaGPT/metagpt/actions/analyze_dep_libs.py b/spaces/deepwisdom/MetaGPT/metagpt/actions/analyze_dep_libs.py deleted file mode 100644 index 23c35cdf80ae5080b8482d2e5f3c82dd501c1a0a..0000000000000000000000000000000000000000 --- a/spaces/deepwisdom/MetaGPT/metagpt/actions/analyze_dep_libs.py +++ /dev/null @@ -1,37 +0,0 @@ -#!/usr/bin/env python -# -*- coding: utf-8 -*- -""" -@Time : 2023/5/19 12:01 -@Author : alexanderwu -@File : analyze_dep_libs.py -""" - -from metagpt.actions import Action - -PROMPT = """You are an AI developer, trying to write a program that generates code for users based on their intentions. - -For the user's prompt: - ---- -The API is: {prompt} ---- - -We decide the generated files are: {filepaths_string} - -Now that we have a file list, we need to understand the shared dependencies they have. -Please list and briefly describe the shared contents between the files we are generating, including exported variables, -data patterns, id names of all DOM elements that javascript functions will use, message names and function names. -Focus only on the names of shared dependencies, do not add any other explanations. 
-""" - - -class AnalyzeDepLibs(Action): - def __init__(self, name, context=None, llm=None): - super().__init__(name, context, llm) - self.desc = "根据上下文,分析程序运行依赖库" - - async def run(self, requirement, filepaths_string): - # prompt = f"以下是产品需求文档(PRD):\n\n{prd}\n\n{PROMPT}" - prompt = PROMPT.format(prompt=requirement, filepaths_string=filepaths_string) - design_filenames = await self._aask(prompt) - return design_filenames diff --git a/spaces/deepwisdom/MetaGPT/tests/metagpt/test_gpt.py b/spaces/deepwisdom/MetaGPT/tests/metagpt/test_gpt.py deleted file mode 100644 index 89dd726a856297ae81fad5b3a8f1cffbd495952d..0000000000000000000000000000000000000000 --- a/spaces/deepwisdom/MetaGPT/tests/metagpt/test_gpt.py +++ /dev/null @@ -1,43 +0,0 @@ -#!/usr/bin/env python -# -*- coding: utf-8 -*- -""" -@Time : 2023/4/29 19:47 -@Author : alexanderwu -@File : test_gpt.py -""" - -import pytest - -from metagpt.logs import logger - - -@pytest.mark.usefixtures("llm_api") -class TestGPT: - def test_llm_api_ask(self, llm_api): - answer = llm_api.ask('hello chatgpt') - assert len(answer) > 0 - - # def test_gptapi_ask_batch(self, llm_api): - # answer = llm_api.ask_batch(['请扮演一个Google Python专家工程师,如果理解,回复明白', '写一个hello world']) - # assert len(answer) > 0 - - def test_llm_api_ask_code(self, llm_api): - answer = llm_api.ask_code(['请扮演一个Google Python专家工程师,如果理解,回复明白', '写一个hello world']) - assert len(answer) > 0 - - @pytest.mark.asyncio - async def test_llm_api_aask(self, llm_api): - answer = await llm_api.aask('hello chatgpt') - assert len(answer) > 0 - - @pytest.mark.asyncio - async def test_llm_api_aask_code(self, llm_api): - answer = await llm_api.aask_code(['请扮演一个Google Python专家工程师,如果理解,回复明白', '写一个hello world']) - assert len(answer) > 0 - - @pytest.mark.asyncio - async def test_llm_api_costs(self, llm_api): - await llm_api.aask('hello chatgpt') - costs = llm_api.get_costs() - logger.info(costs) - assert costs.total_cost > 0 diff --git a/spaces/devthedeveloper/Bark-with-Voice-Cloning/bark/api.py b/spaces/devthedeveloper/Bark-with-Voice-Cloning/bark/api.py deleted file mode 100644 index 7a4319ceaa13798912637290f8e9e88c50d5420a..0000000000000000000000000000000000000000 --- a/spaces/devthedeveloper/Bark-with-Voice-Cloning/bark/api.py +++ /dev/null @@ -1,158 +0,0 @@ -from typing import Dict, Optional, Union - -import numpy as np - -from .generation import codec_decode, generate_coarse, generate_fine, generate_text_semantic - - -def generate_with_settings(text_prompt, semantic_temp=0.6, eos_p=0.2, coarse_temp=0.7, fine_temp=0.5, voice_name=None, output_full=False): - - # generation with more control - x_semantic = generate_text_semantic( - text_prompt, - history_prompt=voice_name, - temp=semantic_temp, - min_eos_p = eos_p, - use_kv_caching=True - ) - - x_coarse_gen = generate_coarse( - x_semantic, - history_prompt=voice_name, - temp=coarse_temp, - use_kv_caching=True - ) - x_fine_gen = generate_fine( - x_coarse_gen, - history_prompt=voice_name, - temp=fine_temp, - ) - - if output_full: - full_generation = { - 'semantic_prompt': x_semantic, - 'coarse_prompt': x_coarse_gen, - 'fine_prompt': x_fine_gen - } - return full_generation, codec_decode(x_fine_gen) - return codec_decode(x_fine_gen) - - -def text_to_semantic( - text: str, - history_prompt: Optional[Union[Dict, str]] = None, - temp: float = 0.7, - silent: bool = False, -): - """Generate semantic array from text. 
- - Args: - text: text to be turned into audio - history_prompt: history choice for audio cloning - temp: generation temperature (1.0 more diverse, 0.0 more conservative) - silent: disable progress bar - - Returns: - numpy semantic array to be fed into `semantic_to_waveform` - """ - x_semantic = generate_text_semantic( - text, - history_prompt=history_prompt, - temp=temp, - silent=silent, - use_kv_caching=True - ) - return x_semantic - - -def semantic_to_waveform( - semantic_tokens: np.ndarray, - history_prompt: Optional[Union[Dict, str]] = None, - temp: float = 0.7, - silent: bool = False, - output_full: bool = False, -): - """Generate audio array from semantic input. - - Args: - semantic_tokens: semantic token output from `text_to_semantic` - history_prompt: history choice for audio cloning - temp: generation temperature (1.0 more diverse, 0.0 more conservative) - silent: disable progress bar - output_full: return full generation to be used as a history prompt - - Returns: - numpy audio array at sample frequency 24khz - """ - coarse_tokens = generate_coarse( - semantic_tokens, - history_prompt=history_prompt, - temp=temp, - silent=silent, - use_kv_caching=True - ) - fine_tokens = generate_fine( - coarse_tokens, - history_prompt=history_prompt, - temp=0.5, - ) - audio_arr = codec_decode(fine_tokens) - if output_full: - full_generation = { - "semantic_prompt": semantic_tokens, - "coarse_prompt": coarse_tokens, - "fine_prompt": fine_tokens, - } - return full_generation, audio_arr - return audio_arr - - -def save_as_prompt(filepath, full_generation): - assert(filepath.endswith(".npz")) - assert(isinstance(full_generation, dict)) - assert("semantic_prompt" in full_generation) - assert("coarse_prompt" in full_generation) - assert("fine_prompt" in full_generation) - np.savez(filepath, **full_generation) - - -def generate_audio( - text: str, - history_prompt: Optional[Union[Dict, str]] = None, - text_temp: float = 0.7, - waveform_temp: float = 0.7, - silent: bool = False, - output_full: bool = False, -): - """Generate audio array from input text. - - Args: - text: text to be turned into audio - history_prompt: history choice for audio cloning - text_temp: generation temperature (1.0 more diverse, 0.0 more conservative) - waveform_temp: generation temperature (1.0 more diverse, 0.0 more conservative) - silent: disable progress bar - output_full: return full generation to be used as a history prompt - - Returns: - numpy audio array at sample frequency 24khz - """ - semantic_tokens = text_to_semantic( - text, - history_prompt=history_prompt, - temp=text_temp, - silent=silent, - ) - out = semantic_to_waveform( - semantic_tokens, - history_prompt=history_prompt, - temp=waveform_temp, - silent=silent, - output_full=output_full, - ) - if output_full: - full_generation, audio_arr = out - return full_generation, audio_arr - else: - audio_arr = out - return audio_arr diff --git a/spaces/diacanFperku/AutoGPT/150gamehousegamescollection((INSTALL)) Freedownloadfull16.md b/spaces/diacanFperku/AutoGPT/150gamehousegamescollection((INSTALL)) Freedownloadfull16.md deleted file mode 100644 index ddce096661ffa2cbb74625730b2a7e5989565f52..0000000000000000000000000000000000000000 --- a/spaces/diacanFperku/AutoGPT/150gamehousegamescollection((INSTALL)) Freedownloadfull16.md +++ /dev/null @@ -1,86 +0,0 @@ -

      150gamehousegamescollectionfreedownloadfull16


      Download ⚙⚙⚙ https://gohhs.com/2uFUP8



      - -(16) - -Game pieces and counters as seen in this collection of Unique Game Pieces and Unique Game Counters, many of which are multi-purpose and can be used for many different board games. - -Designer Mike - -City: - -Quebec City, QC, Canada - -Manufacturer Part Number: - -UPC Number: - -Average Review Date: - -31-Feb-2013 - -Wish you had all of these? - -Thank you for your feedback! It may take up to 5-10 business days for your review to post to the website. While we do our best to respond to reviews efficiently, we cannot always be thorough. Please note that we cannot always respond to reviews submitted immediately. - -Ask a Question - -Ask the Seller - -Talk to the Seller - -Email address will not be published. - -Name - -Website - -*i - -I have a question about this product - -What is 2 + 2? - -* Where are you shopping? - -Costco.ca - -Amazon.ca - -eBay.ca - -Your Recently Viewed Items - -You have no recently viewed items. After viewing product detail pages or search results, look here to find an easy way to navigate back to products you are interested in. - -Your Recommended Items - -You currently have no recommended items. Browse a few more items to give us an idea of what you like./* - - * Copyright (c) 2020, 2020, Oracle and/or its affiliates. All rights reserved. - - * DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER. - - * - - * The Universal Permissive License (UPL), Version 1.0 - - * Subject to the condition set forth below, permission is hereby granted to any - - * person obtaining a copy of this software, associated documentation and/or - - * data (collectively the "Software"), free of charge and under any and all - - * copyright rights in the Software, and any and all patent rights owned or - - * freely licensable by each licensor hereunder covering either (i) the - - * unmodified Software as contributed to or provided by such licensor, or (ii) - - * the Larger Works (as defined below), to deal in both - - * (a) the Software, and - - * (b) any piece of software and/or hardware listed in the 4fefd39f24
      -
      -
      -

      diff --git a/spaces/diacanFperku/AutoGPT/Autodesk Revit 2014 Serial Number And Product Key.md b/spaces/diacanFperku/AutoGPT/Autodesk Revit 2014 Serial Number And Product Key.md deleted file mode 100644 index e1bbf5d0c70f8bef9a9c36c3638d181157ef4165..0000000000000000000000000000000000000000 --- a/spaces/diacanFperku/AutoGPT/Autodesk Revit 2014 Serial Number And Product Key.md +++ /dev/null @@ -1,6 +0,0 @@ -

      Autodesk Revit 2014 Serial Number And Product Key


      DOWNLOAD ✔✔✔ https://gohhs.com/2uFUuB



      -
      -... for that product. The product keys for Autodesk 2014 products are as follows: ... 340G1, 3/28/2014. Autodesk AutoCAD Revit LT Suite 2015, 834G1, 4/11/2014. 1fdad05405
      -
      -
      -

      diff --git a/spaces/diacanFperku/AutoGPT/Corel Videostudio Pro X5 Wedding Template Pack.md b/spaces/diacanFperku/AutoGPT/Corel Videostudio Pro X5 Wedding Template Pack.md deleted file mode 100644 index 73bc570d5dadb6cf1235fbed195056498c4c3224..0000000000000000000000000000000000000000 --- a/spaces/diacanFperku/AutoGPT/Corel Videostudio Pro X5 Wedding Template Pack.md +++ /dev/null @@ -1,6 +0,0 @@ -

      corel videostudio pro x5 wedding template pack


      Download Zip ————— https://gohhs.com/2uFUCc



      -
      -VideoStudio Pro & Ultimate X5/X6 · Daily Romance Template Pack · Wedding Template Pack ·. Positive Vibe Template Pack · Corel. Corel PaintShop Pro X7 ... 4d29de3e1b
      -
      -
      -

      diff --git a/spaces/diacanFperku/AutoGPT/Download Crack Stronghold 3 64 Bitl.md b/spaces/diacanFperku/AutoGPT/Download Crack Stronghold 3 64 Bitl.md deleted file mode 100644 index eb5792a12af7437abf1f40aedb67c2b605fee2fc..0000000000000000000000000000000000000000 --- a/spaces/diacanFperku/AutoGPT/Download Crack Stronghold 3 64 Bitl.md +++ /dev/null @@ -1,6 +0,0 @@ -

      Download Crack Stronghold 3 64 Bitl


      Download Zip ✑ ✑ ✑ https://gohhs.com/2uFUlk



      - - d5da3c52bf
      -
      -
      -

      diff --git a/spaces/diacanFperku/AutoGPT/Mp9PenCameraDriverFree[PORTABLE] Download.md b/spaces/diacanFperku/AutoGPT/Mp9PenCameraDriverFree[PORTABLE] Download.md deleted file mode 100644 index 070e1e30c605da142501dc1a59a1e97608b0d6cb..0000000000000000000000000000000000000000 --- a/spaces/diacanFperku/AutoGPT/Mp9PenCameraDriverFree[PORTABLE] Download.md +++ /dev/null @@ -1,108 +0,0 @@ -
      -

      Mp9PenCameraDriverFreeDownload: How to Use a Spy Pen Camera on Your PC

      - -

      A spy pen camera is a device that can record video and audio secretly. It looks like a normal pen, but it has a hidden camera and a micro SD card slot. You can use a spy pen camera to capture evidence, monitor activities, or have fun.

      - -

However, to use a spy pen camera on your PC, you need to install a driver that can recognize and communicate with the device. A driver is software that allows your PC to interact with hardware devices. Without a driver, your PC cannot access the files stored on the spy pen camera.
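Once the driver is installed, the pen camera's memory card usually shows up as an ordinary removable drive. As a quick sanity check, the short Python sketch below lists removable drives so you can confirm the device is visible. This is only an illustration: it assumes the `psutil` package is installed (`pip install psutil`) and that the camera mounts as standard USB mass storage, which is typical for these devices but not guaranteed for every model.

```py
# Minimal sketch: list removable drives to confirm the pen camera's storage
# is visible after the driver is installed.
# Assumptions (not from the original article): psutil is installed and the
# camera mounts as plain USB mass storage.
import psutil


def list_removable_drives():
    drives = []
    for part in psutil.disk_partitions(all=False):
        # On Windows, removable media usually carry the "removable" option flag.
        if "removable" in part.opts.lower():
            drives.append(part.mountpoint)
    return drives


if __name__ == "__main__":
    found = list_removable_drives()
    if found:
        print("Removable drives detected:", ", ".join(found))
    else:
        print("No removable drive found - check the cable, USB port, and driver.")
```

If the script finds nothing, work through the troubleshooting steps later in this article before assuming the driver download itself is at fault.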

      -

      Mp9PenCameraDriverFreeDownload


      DOWNLOADhttps://gohhs.com/2uFTId



      - -

      In this article, we will show you how to download and install Mp9PenCameraDriverFreeDownload, which is a driver for a popular spy pen camera model called Mp9. We will also explain the benefits and drawbacks of using Mp9PenCameraDriverFreeDownload, and provide some tips on how to use a spy pen camera safely and effectively.

      - -

      How to Download and Install Mp9PenCameraDriverFreeDownload

      - -

      To download and install Mp9PenCameraDriverFreeDownload, you need to follow these steps:

      - -
        -
1. Connect your spy pen camera to your PC via USB cable.
2. Go to the official website of Mp9PenCameraDriverFreeDownload or a trusted source that offers the driver file.
3. Download the driver file that matches your Windows operating system version (32-bit or 64-bit).
4. Run the downloaded file and follow the instructions on the screen to install the driver on your PC.
5. Restart your PC and check if your spy pen camera is detected and working properly.
      - -

      Note: If you encounter any problems during the installation process, you can try these troubleshooting steps:

      - -
        -
• Make sure your spy pen camera is fully charged before connecting it to your PC.
• Try using another USB cable or port if your PC does not recognize your spy pen camera.
• Scan your PC for malware or viruses that might interfere with the driver installation.
• Update your Windows operating system and other drivers to the latest versions.
• Contact the customer support of Mp9PenCameraDriverFreeDownload or your spy pen camera manufacturer for further assistance.
      - -

      Benefits and Drawbacks of Using Mp9PenCameraDriverFreeDownload

      - -

      Using Mp9PenCameraDriverFreeDownload has some benefits and drawbacks that you should consider before using it. Here are some of them:

      - Benefits:
        - You can use Mp9PenCameraDriverFreeDownload for free without paying any fees or charges.
        - You can use Mp9PenCameraDriverFreeDownload to access and transfer the files stored on your spy pen camera easily and quickly.
        - You can use Mp9PenCameraDriverFreeDownload to adjust the settings and options of your spy pen camera according to your preferences.
      - Drawbacks:
        - You might face legal issues if you use Mp9PenCameraDriverFreeDownload or your spy pen camera without consent or permission from the people or places you record.
        - You might expose your PC or data to malware or hackers if you download Mp9PenCameraDriverFreeDownload from untrusted sources or without proper protection.
        - You might experience compatibility problems if your PC does not meet the system requirements for Mp9PenCameraDriverFreeDownload or if you want to use your spy pen camera with other software or devices.

      Therefore, you should weigh the pros and cons of using Mp9PenCameraDriverFreeDownload before making your choice. You should also be aware of the alternatives to Mp9PenCameraDriverFreeDownload, such as other drivers or software that can work with your spy pen camera.

      - -

      Alternatives to Mp9PenCameraDriverFreeDownload

      - -

      If you are not satisfied with using Mp9PenCameraDriverFreeDownload or if you want to avoid any risks or challenges associated with it, you should consider the alternatives to Mp9PenCameraDriverFreeDownload. Here are some of them:

      - Other drivers: You can use other drivers that support your spy pen camera model or brand. You can find these drivers on the official website of your spy pen camera manufacturer or other reliable sources online. You can also use a driver updater tool that can automatically scan, download, and install the best drivers for your devices.
      - Other software: You can use other software to access and manage the files stored on your spy pen camera, such as VLC Media Player, Windows Media Player, or Windows Explorer. You can also use online services to convert, edit, or share your files online.

      Therefore, you have many alternatives to Mp9PenCameraDriverFreeDownload that you can consider before making your choice. You should compare the features, benefits, drawbacks, and costs of each option and choose the one that best suits your needs and preferences.


      How to Use a Spy Pen Camera Safely and Effectively

      - -

      A spy pen camera can be a useful and fun device to have, but it also comes with some risks and challenges that you need to be aware of. Here are some tips on how to use a spy pen camera safely and effectively:

      - Check the legality of using a spy pen camera in your country. Some countries have strict laws against recording or spying on people or places without their consent or permission. You might face legal consequences if you violate these laws. Therefore, you should check the legality of using a spy pen camera in your country before using it.
      - Respect the privacy and rights of others. Even if using a spy pen camera is legal in your country, you should still respect the privacy and rights of others. You should not use a spy pen camera to record or spy on people or places that have a reasonable expectation of privacy, such as bathrooms, bedrooms, locker rooms, etc. You should also not use a spy pen camera to record or spy on people or places that have prohibited or restricted the use of cameras, such as government buildings, military bases, schools, etc. You should also not use a spy pen camera to record or spy on people or places for malicious or illegal purposes, such as blackmail, extortion, harassment, etc.
      - Protect your PC and data from malware and hackers. If you download Mp9PenCameraDriverFreeDownload or your spy pen camera files from untrusted sources or without proper protection, you might expose your PC or data to malware or hackers. Malware can damage your PC or data, or steal your personal information. Hackers can access your PC or data, or hijack your spy pen camera. Therefore, you should protect your PC and data from malware and hackers by using antivirus software, firewall software, VPN service, etc.
      - Use your spy pen camera wisely and responsibly. A spy pen camera can be a powerful tool that can help you capture evidence, monitor activities, or have fun. However, you should also use your spy pen camera wisely and responsibly. You should not use your spy pen camera excessively or obsessively. You should not use your spy pen camera to harm yourself or others. You should not use your spy pen camera to violate the law or ethics. You should also delete or destroy any files that are no longer needed or relevant.

      By following these tips, you will be able to use your spy pen camera safely and effectively. You will also be able to enjoy the benefits and avoid the drawbacks of using Mp9PenCameraDriverFreeDownload or your spy pen camera.

      -

      How to Choose a Good Spy Pen Camera

      - -

      A spy pen camera can be a great device to have, but not all spy pen cameras are created equal. There are many factors that can affect the quality and performance of a spy pen camera, such as the camera resolution, battery life, storage capacity, design, features, etc. Therefore, you should choose a good spy pen camera that can meet your needs and expectations. Here are some tips on how to choose a good spy pen camera:

      - Check the camera resolution. The camera resolution determines the clarity and detail of the video and audio recorded by the spy pen camera. The higher the resolution, the better the quality. However, higher resolution also means larger file sizes and more battery consumption. Therefore, you should choose a spy pen camera that balances resolution against file size and battery life. For example, a spy pen camera that can record in 1080p HD resolution is a good choice.
      - Check the battery life. The battery life determines how long the spy pen camera can record continuously without recharging. The longer the battery life, the better. However, longer battery life also means a larger and heavier spy pen camera. Therefore, you should choose a spy pen camera that balances battery life against size and weight. For example, a spy pen camera that can record for 2 hours on a single charge is a good choice.
      - Check the storage capacity. The storage capacity determines how much video and audio the spy pen camera can store on its micro SD card. The larger the storage capacity, the better. However, larger storage capacity also means a more expensive micro SD card. Therefore, you should choose a spy pen camera that balances storage capacity against cost. For example, a spy pen camera that can support up to a 32GB micro SD card is a good choice.
      - Check the design. The design determines how discreet and convenient the spy pen camera is. The more discreet and convenient the design, the better. However, a more discreet and convenient design also means fewer features and functions. Therefore, you should choose a spy pen camera that balances design against features and functions. For example, a spy pen camera that looks like a normal pen but has a hidden camera and a micro SD card slot is a good choice.
      - Check the features and functions. The features and functions determine what else the spy pen camera can do besides recording video and audio. The more features and functions the spy pen camera has, the better. However, more features and functions also mean more complicated operation and more potential problems. Therefore, you should choose a spy pen camera that balances features and functions against simplicity and reliability. For example, a spy pen camera that has motion detection, night vision, loop recording, etc. is a good choice.

      By following these tips, you will be able to choose a good spy pen camera that can suit your needs and preferences. You will also be able to use Mp9PenCameraDriverFreeDownload or your spy pen camera more effectively.

      -

      Conclusion

      - -

      In this article, we have explained how to download and install Mp9PenCameraDriverFreeDownload, which is a driver for a popular spy pen camera model called Mp9. We have also discussed the benefits and drawbacks of using Mp9PenCameraDriverFreeDownload, and provided some tips on how to use a spy pen camera safely and effectively. We have also given some tips on how to choose a good spy pen camera that can meet your needs and expectations.

      - -

      By following this guide, you will be able to use your spy pen camera on your PC without any hassle. However, you should also be aware of the risks and responsibilities that come with using Mp9PenCameraDriverFreeDownload or your spy pen camera. You should also be aware of the alternatives to Mp9PenCameraDriverFreeDownload that might offer a better solution for your spy pen camera needs.

      - -

      We hope this article has been helpful and informative for you. If you have any questions or comments, please feel free to share them below.

      3cee63e6c2
      -
      -
      \ No newline at end of file diff --git a/spaces/diagaiwei/ir_chinese_medqa/utility/evaluate/annotate_EM.py b/spaces/diagaiwei/ir_chinese_medqa/utility/evaluate/annotate_EM.py deleted file mode 100644 index ecd87048960b7da24d1ca3fea2b33e66790b6599..0000000000000000000000000000000000000000 --- a/spaces/diagaiwei/ir_chinese_medqa/utility/evaluate/annotate_EM.py +++ /dev/null @@ -1,81 +0,0 @@ -import os -import sys -import git -import tqdm -import ujson -import random - -from argparse import ArgumentParser -from multiprocessing import Pool - -from colbert.utils.utils import print_message, load_ranking, groupby_first_item -from utility.utils.qa_loaders import load_qas_, load_collection_ -from utility.utils.save_metadata import format_metadata, get_metadata -from utility.evaluate.annotate_EM_helpers import * - - -# TODO: Tokenize passages in advance, especially if the ranked list is long! This requires changes to the has_answer input, slightly. - -def main(args): - qas = load_qas_(args.qas) - collection = load_collection_(args.collection, retain_titles=True) - rankings = load_ranking(args.ranking) - parallel_pool = Pool(30) - - print_message('#> Tokenize the answers in the Q&As in parallel...') - qas = list(parallel_pool.map(tokenize_all_answers, qas)) - - qid2answers = {qid: tok_answers for qid, _, tok_answers in qas} - assert len(qas) == len(qid2answers), (len(qas), len(qid2answers)) - - print_message('#> Lookup passages from PIDs...') - expanded_rankings = [(qid, pid, rank, collection[pid], qid2answers[qid]) - for qid, pid, rank, *_ in rankings] - - print_message('#> Assign labels in parallel...') - labeled_rankings = list(parallel_pool.map(assign_label_to_passage, enumerate(expanded_rankings))) - - # Dump output. - print_message("#> Dumping output to", args.output, "...") - qid2rankings = groupby_first_item(labeled_rankings) - - num_judged_queries, num_ranked_queries = check_sizes(qid2answers, qid2rankings) - - # Evaluation metrics and depths. - success, counts = compute_and_write_labels(args.output, qid2answers, qid2rankings) - - # Dump metrics. 
- with open(args.output_metrics, 'w') as f: - d = {'num_ranked_queries': num_ranked_queries, 'num_judged_queries': num_judged_queries} - - extra = '__WARNING' if num_judged_queries != num_ranked_queries else '' - d[f'success{extra}'] = {k: v / num_judged_queries for k, v in success.items()} - d[f'counts{extra}'] = {k: v / num_judged_queries for k, v in counts.items()} - d['arguments'] = get_metadata(args) - - f.write(format_metadata(d) + '\n') - - print('\n\n') - print(args.output) - print(args.output_metrics) - print("#> Done\n") - - -if __name__ == "__main__": - random.seed(12345) - - parser = ArgumentParser(description='.') - - # Input / Output Arguments - parser.add_argument('--qas', dest='qas', required=True, type=str) - parser.add_argument('--collection', dest='collection', required=True, type=str) - parser.add_argument('--ranking', dest='ranking', required=True, type=str) - - args = parser.parse_args() - - args.output = f'{args.ranking}.annotated' - args.output_metrics = f'{args.ranking}.annotated.metrics' - - assert not os.path.exists(args.output), args.output - - main(args) diff --git a/spaces/diaoren/OpenSetObstacleDetection/opendet2/modeling/layers/__init__.py b/spaces/diaoren/OpenSetObstacleDetection/opendet2/modeling/layers/__init__.py deleted file mode 100644 index 491a0d260c866a3551a27368029b68108e00b3bf..0000000000000000000000000000000000000000 --- a/spaces/diaoren/OpenSetObstacleDetection/opendet2/modeling/layers/__init__.py +++ /dev/null @@ -1,3 +0,0 @@ -from .mlp import * - -__all__ = [k for k in globals().keys() if not k.startswith("_")] diff --git a/spaces/digitalxingtong/Nailv-read-Bert-Vits2/short_audio_transcribe.py b/spaces/digitalxingtong/Nailv-read-Bert-Vits2/short_audio_transcribe.py deleted file mode 100644 index f1e8b30671f2c2f2fa3c93feb1f4edd3fbe2f545..0000000000000000000000000000000000000000 --- a/spaces/digitalxingtong/Nailv-read-Bert-Vits2/short_audio_transcribe.py +++ /dev/null @@ -1,122 +0,0 @@ -import whisper -import os -import json -import torchaudio -import argparse -import torch - -lang2token = { - 'zh': "[ZH]", - 'ja': "[JA]", - "en": "[EN]", - } -def transcribe_one(audio_path): - # load audio and pad/trim it to fit 30 seconds - audio = whisper.load_audio(audio_path) - audio = whisper.pad_or_trim(audio) - - # make log-Mel spectrogram and move to the same device as the model - mel = whisper.log_mel_spectrogram(audio).to(model.device) - - # detect the spoken language - _, probs = model.detect_language(mel) - print(f"Detected language: {max(probs, key=probs.get)}") - lang = max(probs, key=probs.get) - # decode the audio - options = whisper.DecodingOptions(beam_size=5) - result = whisper.decode(model, mel, options) - - # print the recognized text - print(result.text) - return lang, result.text -if __name__ == "__main__": - parser = argparse.ArgumentParser() - parser.add_argument("--languages", default="CJE") - parser.add_argument("--whisper_size", default="medium") - args = parser.parse_args() - if args.languages == "CJE": - lang2token = { - 'zh': "[ZH]", - 'ja': "[JA]", - "en": "[EN]", - } - elif args.languages == "CJ": - lang2token = { - 'zh': "[ZH]", - 'ja': "[JA]", - } - elif args.languages == "C": - lang2token = { - 'zh': "[ZH]", - } - assert (torch.cuda.is_available()), "Please enable GPU in order to run Whisper!" 
- model = whisper.load_model(args.whisper_size) - parent_dir = "./custom_character_voice/" - speaker_names = list(os.walk(parent_dir))[0][1] - speaker_annos = [] - total_files = sum([len(files) for r, d, files in os.walk(parent_dir)]) - # resample audios - # 2023/4/21: Get the target sampling rate - with open("./configs/config.json", 'r', encoding='utf-8') as f: - hps = json.load(f) - target_sr = hps['data']['sampling_rate'] - processed_files = 0 - for speaker in speaker_names: - for i, wavfile in enumerate(list(os.walk(parent_dir + speaker))[0][2]): - # try to load file as audio - if wavfile.startswith("processed_"): - continue - try: - wav, sr = torchaudio.load(parent_dir + speaker + "/" + wavfile, frame_offset=0, num_frames=-1, normalize=True, - channels_first=True) - wav = wav.mean(dim=0).unsqueeze(0) - if sr != target_sr: - wav = torchaudio.transforms.Resample(orig_freq=sr, new_freq=target_sr)(wav) - if wav.shape[1] / sr > 20: - print(f"{wavfile} too long, ignoring\n") - save_path = parent_dir + speaker + "/" + f"processed_{i}.wav" - torchaudio.save(save_path, wav, target_sr, channels_first=True) - # transcribe text - lang, text = transcribe_one(save_path) - if lang not in list(lang2token.keys()): - print(f"{lang} not supported, ignoring\n") - continue - text = "ZH|" + text + "\n"# - #text = lang2token[lang] + text + lang2token[lang] + "\n" - speaker_annos.append(save_path + "|" + speaker + "|" + text) - - processed_files += 1 - print(f"Processed: {processed_files}/{total_files}") - except: - continue - - # # clean annotation - # import argparse - # import text - # from utils import load_filepaths_and_text - # for i, line in enumerate(speaker_annos): - # path, sid, txt = line.split("|") - # cleaned_text = text._clean_text(txt, ["cjke_cleaners2"]) - # cleaned_text += "\n" if not cleaned_text.endswith("\n") else "" - # speaker_annos[i] = path + "|" + sid + "|" + cleaned_text - # write into annotation - if len(speaker_annos) == 0: - print("Warning: no short audios found, this IS expected if you have only uploaded long audios, videos or video links.") - print("this IS NOT expected if you have uploaded a zip file of short audios. 
Please check your file structure or make sure your audio language is supported.") - with open("./filelists/short_character_anno.list", 'w', encoding='utf-8') as f: - for line in speaker_annos: - f.write(line) - - # import json - # # generate new config - # with open("./configs/finetune_speaker.json", 'r', encoding='utf-8') as f: - # hps = json.load(f) - # # modify n_speakers - # hps['data']["n_speakers"] = 1000 + len(speaker2id) - # # add speaker names - # for speaker in speaker_names: - # hps['speakers'][speaker] = speaker2id[speaker] - # # save modified config - # with open("./configs/modified_finetune_speaker.json", 'w', encoding='utf-8') as f: - # json.dump(hps, f, indent=2) - # print("finished") diff --git a/spaces/digitalxingtong/Xingtong-2dall-Bert-VITS2/short_audio_transcribe.py b/spaces/digitalxingtong/Xingtong-2dall-Bert-VITS2/short_audio_transcribe.py deleted file mode 100644 index f1e8b30671f2c2f2fa3c93feb1f4edd3fbe2f545..0000000000000000000000000000000000000000 --- a/spaces/digitalxingtong/Xingtong-2dall-Bert-VITS2/short_audio_transcribe.py +++ /dev/null @@ -1,122 +0,0 @@ -import whisper -import os -import json -import torchaudio -import argparse -import torch - -lang2token = { - 'zh': "[ZH]", - 'ja': "[JA]", - "en": "[EN]", - } -def transcribe_one(audio_path): - # load audio and pad/trim it to fit 30 seconds - audio = whisper.load_audio(audio_path) - audio = whisper.pad_or_trim(audio) - - # make log-Mel spectrogram and move to the same device as the model - mel = whisper.log_mel_spectrogram(audio).to(model.device) - - # detect the spoken language - _, probs = model.detect_language(mel) - print(f"Detected language: {max(probs, key=probs.get)}") - lang = max(probs, key=probs.get) - # decode the audio - options = whisper.DecodingOptions(beam_size=5) - result = whisper.decode(model, mel, options) - - # print the recognized text - print(result.text) - return lang, result.text -if __name__ == "__main__": - parser = argparse.ArgumentParser() - parser.add_argument("--languages", default="CJE") - parser.add_argument("--whisper_size", default="medium") - args = parser.parse_args() - if args.languages == "CJE": - lang2token = { - 'zh': "[ZH]", - 'ja': "[JA]", - "en": "[EN]", - } - elif args.languages == "CJ": - lang2token = { - 'zh': "[ZH]", - 'ja': "[JA]", - } - elif args.languages == "C": - lang2token = { - 'zh': "[ZH]", - } - assert (torch.cuda.is_available()), "Please enable GPU in order to run Whisper!" 
- model = whisper.load_model(args.whisper_size) - parent_dir = "./custom_character_voice/" - speaker_names = list(os.walk(parent_dir))[0][1] - speaker_annos = [] - total_files = sum([len(files) for r, d, files in os.walk(parent_dir)]) - # resample audios - # 2023/4/21: Get the target sampling rate - with open("./configs/config.json", 'r', encoding='utf-8') as f: - hps = json.load(f) - target_sr = hps['data']['sampling_rate'] - processed_files = 0 - for speaker in speaker_names: - for i, wavfile in enumerate(list(os.walk(parent_dir + speaker))[0][2]): - # try to load file as audio - if wavfile.startswith("processed_"): - continue - try: - wav, sr = torchaudio.load(parent_dir + speaker + "/" + wavfile, frame_offset=0, num_frames=-1, normalize=True, - channels_first=True) - wav = wav.mean(dim=0).unsqueeze(0) - if sr != target_sr: - wav = torchaudio.transforms.Resample(orig_freq=sr, new_freq=target_sr)(wav) - if wav.shape[1] / sr > 20: - print(f"{wavfile} too long, ignoring\n") - save_path = parent_dir + speaker + "/" + f"processed_{i}.wav" - torchaudio.save(save_path, wav, target_sr, channels_first=True) - # transcribe text - lang, text = transcribe_one(save_path) - if lang not in list(lang2token.keys()): - print(f"{lang} not supported, ignoring\n") - continue - text = "ZH|" + text + "\n"# - #text = lang2token[lang] + text + lang2token[lang] + "\n" - speaker_annos.append(save_path + "|" + speaker + "|" + text) - - processed_files += 1 - print(f"Processed: {processed_files}/{total_files}") - except: - continue - - # # clean annotation - # import argparse - # import text - # from utils import load_filepaths_and_text - # for i, line in enumerate(speaker_annos): - # path, sid, txt = line.split("|") - # cleaned_text = text._clean_text(txt, ["cjke_cleaners2"]) - # cleaned_text += "\n" if not cleaned_text.endswith("\n") else "" - # speaker_annos[i] = path + "|" + sid + "|" + cleaned_text - # write into annotation - if len(speaker_annos) == 0: - print("Warning: no short audios found, this IS expected if you have only uploaded long audios, videos or video links.") - print("this IS NOT expected if you have uploaded a zip file of short audios. 
Please check your file structure or make sure your audio language is supported.") - with open("./filelists/short_character_anno.list", 'w', encoding='utf-8') as f: - for line in speaker_annos: - f.write(line) - - # import json - # # generate new config - # with open("./configs/finetune_speaker.json", 'r', encoding='utf-8') as f: - # hps = json.load(f) - # # modify n_speakers - # hps['data']["n_speakers"] = 1000 + len(speaker2id) - # # add speaker names - # for speaker in speaker_names: - # hps['speakers'][speaker] = speaker2id[speaker] - # # save modified config - # with open("./configs/modified_finetune_speaker.json", 'w', encoding='utf-8') as f: - # json.dump(hps, f, indent=2) - # print("finished") diff --git a/spaces/digitalxingtong/Xingtong-2dall-Bert-VITS2/text/japanese.py b/spaces/digitalxingtong/Xingtong-2dall-Bert-VITS2/text/japanese.py deleted file mode 100644 index ddedafa0c5b7986068dc6c91637a86febc3923a9..0000000000000000000000000000000000000000 --- a/spaces/digitalxingtong/Xingtong-2dall-Bert-VITS2/text/japanese.py +++ /dev/null @@ -1,104 +0,0 @@ -# modified from https://github.com/CjangCjengh/vits/blob/main/text/japanese.py -import re -import sys - -import pyopenjtalk - -from text import symbols - -# Regular expression matching Japanese without punctuation marks: -_japanese_characters = re.compile( - r'[A-Za-z\d\u3005\u3040-\u30ff\u4e00-\u9fff\uff11-\uff19\uff21-\uff3a\uff41-\uff5a\uff66-\uff9d]') - -# Regular expression matching non-Japanese characters or punctuation marks: -_japanese_marks = re.compile( - r'[^A-Za-z\d\u3005\u3040-\u30ff\u4e00-\u9fff\uff11-\uff19\uff21-\uff3a\uff41-\uff5a\uff66-\uff9d]') - -# List of (symbol, Japanese) pairs for marks: -_symbols_to_japanese = [(re.compile('%s' % x[0]), x[1]) for x in [ - ('%', 'パーセント') -]] - - -# List of (consonant, sokuon) pairs: -_real_sokuon = [(re.compile('%s' % x[0]), x[1]) for x in [ - (r'Q([↑↓]*[kg])', r'k#\1'), - (r'Q([↑↓]*[tdjʧ])', r't#\1'), - (r'Q([↑↓]*[sʃ])', r's\1'), - (r'Q([↑↓]*[pb])', r'p#\1') -]] - -# List of (consonant, hatsuon) pairs: -_real_hatsuon = [(re.compile('%s' % x[0]), x[1]) for x in [ - (r'N([↑↓]*[pbm])', r'm\1'), - (r'N([↑↓]*[ʧʥj])', r'n^\1'), - (r'N([↑↓]*[tdn])', r'n\1'), - (r'N([↑↓]*[kg])', r'ŋ\1') -]] - - - -def post_replace_ph(ph): - rep_map = { - ':': ',', - ';': ',', - ',': ',', - '。': '.', - '!': '!', - '?': '?', - '\n': '.', - "·": ",", - '、': ",", - '...': '…', - 'v': "V" - } - if ph in rep_map.keys(): - ph = rep_map[ph] - if ph in symbols: - return ph - if ph not in symbols: - ph = 'UNK' - return ph - -def symbols_to_japanese(text): - for regex, replacement in _symbols_to_japanese: - text = re.sub(regex, replacement, text) - return text - - -def preprocess_jap(text): - '''Reference https://r9y9.github.io/ttslearn/latest/notebooks/ch10_Recipe-Tacotron.html''' - text = symbols_to_japanese(text) - sentences = re.split(_japanese_marks, text) - marks = re.findall(_japanese_marks, text) - text = [] - for i, sentence in enumerate(sentences): - if re.match(_japanese_characters, sentence): - p = pyopenjtalk.g2p(sentence) - text += p.split(" ") - - if i < len(marks): - text += [marks[i].replace(' ', '')] - return text - -def text_normalize(text): - # todo: jap text normalize - return text - -def g2p(norm_text): - phones = preprocess_jap(norm_text) - phones = [post_replace_ph(i) for i in phones] - # todo: implement tones and word2ph - tones = [0 for i in phones] - word2ph = [1 for i in phones] - return phones, tones, word2ph - - -if __name__ == '__main__': - for line in 
open("../../../Downloads/transcript_utf8.txt").readlines(): - text = line.split(":")[1] - phones, tones, word2ph = g2p(text) - for p in phones: - if p == "z": - print(text, phones) - sys.exit(0) diff --git a/spaces/diivien/Music-Popularity-Prediction/app.py b/spaces/diivien/Music-Popularity-Prediction/app.py deleted file mode 100644 index d68d99dcb578c47e993f07b70e5d538786f21d5d..0000000000000000000000000000000000000000 --- a/spaces/diivien/Music-Popularity-Prediction/app.py +++ /dev/null @@ -1,240 +0,0 @@ -import gradio as gr -import pandas as pd -import joblib -import os -import spotipy -import pylast -import discogs_client -from spotipy.oauth2 import SpotifyClientCredentials -from queue import PriorityQueue -from fuzzywuzzy import fuzz - -final_model = joblib.load('final_model.pkl') -# Set up authentication with the Spotify API -sp = spotipy.Spotify(auth_manager=SpotifyClientCredentials(client_id=os.environ['SPOT_API'], client_secret=os.environ['SPOT_SECRET'])) -network = pylast.LastFMNetwork(api_key=os.environ['LAST_API'], api_secret=os.environ['LAST_SECRET']) -d = discogs_client.Client('app/0.1', user_token=os.environ['DIS_TOKEN']) -genre_list = ['acoustic', 'afrobeat', 'alt-rock', 'alternative', 'ambient', - 'anime', 'black-metal', 'bluegrass', 'blues', 'brazil', - 'breakbeat', 'british', 'cantopop', 'chicago-house', 'children', - 'chill', 'classical', 'club', 'comedy', 'country', 'dance', - 'dancehall', 'death-metal', 'deep-house', 'detroit-techno', - 'disco', 'disney', 'drum-and-bass', 'dub', 'dubstep', 'edm', - 'electro', 'electronic', 'emo', 'folk', 'forro', 'french', 'funk', - 'garage', 'german', 'gospel', 'goth', 'grindcore', 'groove', - 'grunge', 'guitar', 'happy', 'hard-rock', 'hardcore', 'hardstyle', - 'heavy-metal', 'hip-hop', 'honky-tonk', 'house', 'idm', 'indian', - 'indie-pop', 'indie', 'industrial', 'iranian', 'j-dance', 'j-idol', - 'j-pop', 'j-rock', 'jazz', 'k-pop', 'kids', 'latin', 'latino', - 'malay', 'mandopop', 'metal', 'metalcore', 'minimal-techno', 'mpb', - 'new-age', 'opera', 'pagode', 'party', 'piano', 'pop-film', 'pop', - 'power-pop', 'progressive-house', 'psych-rock', 'punk-rock', - 'punk', 'r-n-b', 'reggae', 'reggaeton', 'rock-n-roll', 'rock', - 'rockabilly', 'romance', 'sad', 'salsa', 'samba', 'sertanejo', - 'show-tunes', 'singer-songwriter', 'ska', 'sleep', 'soul', - 'spanish', 'study', 'swedish', 'synth-pop', 'tango', 'techno', - 'trance', 'trip-hop', 'turkish', 'world-music'] - - - - - -def get_track_genre(track_id,artist_name,track_name): - genres = {} - track_spot = sp.track(track_id) - artist = sp.artist(track_spot['artists'][0]['external_urls']['spotify']) - album_id = track_spot['album']['id'] - album = sp.album(album_id) - genres.update({genre: 100 for genre in album['genres']}) - genres.update({genre: 100 for genre in artist['genres']}) - - try: - if network.get_track(artist_name, track_name): - track_last = network.get_track(artist_name, track_name) - top_tags = track_last.get_top_tags(limit=5) - tags_list = {tag.item.get_name(): int(tag.weight) for tag in top_tags} - genres.update(tags_list) - except pylast.WSError as e: - if str(e) == "Track not found": - # Handle the error here - pass - - results = d.search(track_name, artist=artist_name, type='release') - if results: - release = results[0] - if release.genres: - genres.update({genre: 50 for genre in release.genres}) - if release.styles: - genres.update({genre: 50 for genre in release.styles}) - - - print(genres) - return genres - - -def similar(genre1, genre2): - score = 
fuzz.token_set_ratio(genre1, genre2) - return genre1 if score >85 else None - -def find_genre(genres, scraped_genres): - pq = PriorityQueue() - for genre, weight in scraped_genres.items(): - pq.put((-weight, genre)) - while not pq.empty(): - weight, genre = pq.get() - if genre in genres: - return genre - else: - for g in genres: - if similar(g, genre): - return g - return None - - -def match_genres_to_list(track_id,artist_name,track_name): - track_genres=get_track_genre(track_id,artist_name,track_name) - return find_genre(genre_list,track_genres) - -def search_songs(query): - results = sp.search(q=query, type="track") - songs = [f"{index}. {item['name']} by {item['artists'][0]['name']}" for index, item in enumerate(results["tracks"]["items"])] - - track_ids = [item["id"] for item in results["tracks"]["items"]] - return songs, track_ids - - -def get_song_features(song, track_ids): - index = int(song.split(".")[0]) - track_id = track_ids[index] - track_info = sp.track(track_id) - artist_name = track_info['artists'][0]['name'] - track_name = track_info['name'] - features = sp.audio_features([track_id])[0] - genre = match_genres_to_list(track_id,artist_name,track_name) - key_map = {0: 'C', 1: 'C#', 2: 'D', 3: 'D#', 4: 'E', 5: 'F', 6: 'F#', 7: 'G', 8: 'G#', 9: 'A', 10: 'A#', 11: 'B'} - key = str(key_map[features['key']]) - mode_map = { 1: "Major", 0: "Minor"} - mode = mode_map[features['mode']] - - explicit_real = track_info['explicit'] - features_list = [ - features['duration_ms'], - explicit_real, - features['danceability'], - features['energy'], - key, - features['loudness'], - mode, - features['speechiness'], - features['acousticness'], - features['instrumentalness'], - features['liveness'], - features['valence'], - features['tempo'], - str(features['time_signature']), - genre - ] - - return features_list - -theme = gr.themes.Monochrome( - # text_size="text_lg", - font=[gr.themes.GoogleFont('Neucha'), 'ui-sans-serif', 'system-ui', 'sans-serif'], -) -with gr.Blocks(theme=theme,css = "@media (max-width: 600px) {" + - ".gradio-container { flex-direction: column;}" + - ".gradio-container h1 {font-size: 30px !important ;margin-left: 20px !important; line-height: 30px !important}" + - ".gradio-container h2 {font-size: 15px !important;margin-left: 20px !important;margin-top: 20px !important;}"+ - ".gradio-container img{width : 100px; height : 100px}}") as demo: - with gr.Row(): - image = gr.HTML("
      My gif" + - "

      Music Popularity Prediction

      " + - "

      by Keh Zheng Xian

      ") - - with gr.Row(): - with gr.Column(): - search_box = gr.Textbox(label="Search for songs") - song_dropdown = gr.Dropdown(label="Select a song", choices=[]) - # features_box = gr.Textbox(label="Song features", interactive=False) - inputs = [ - gr.Number(label="duration_ms",interactive=True), - gr.Checkbox(label="explicit",interactive=True), - gr.Slider(0.0, 1.0, label="danceability",interactive=True), - gr.Slider(0.0, 1.0, label="energy",interactive=True), - gr.Dropdown(label="key", choices=["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"],interactive=True), - gr.Number(label="loudness",interactive=True), - gr.Radio(label="mode", choices=["Major", "Minor"],interactive=True), - gr.Slider(0.0, 1.0, label="speechiness",interactive=True), - gr.Slider(0.0, 1.0, label="acousticness",interactive=True), - gr.Slider(0.0, 1.0, label="instrumentalness",interactive=True), - gr.Slider(0.0, 1.0, label="liveness",interactive=True), - gr.Slider(0.0, 1.0, label="valence",interactive=True), - gr.Number(label="tempo",interactive=True), - gr.Dropdown(label="time_signature", choices=[3, 4, 5, 6, 7],interactive=True), - gr.Dropdown(label="track_genre", choices=genre_list,interactive=True) - ] - predict_button = gr.Button(label="Predict popularity") - - with gr.Column(): - popularity_box = gr.HTML("
      My gif 2" + - "

      Waiting for your song...

      ",elem_id="output") - track_ids_var = gr.State() - def update_dropdown(query,track_ids): - songs, track_ids = search_songs(query) - return {song_dropdown: gr.update(choices=songs), track_ids_var: track_ids} - - search_box.change(fn=update_dropdown, inputs=[search_box,track_ids_var], outputs=[song_dropdown,track_ids_var]) - - def update_features(song,track_ids): - features = get_song_features(song, track_ids) - return features - - def predict_popularity(duration_ms, explicit, danceability, energy, key, loudness, mode, speechiness, acousticness, instrumentalness, liveness, valence, tempo, time_signature,track_genre): - # Convert the key input from a string to an integer value - key_map = {"C": 0, "C#": 1, "D": 2, "D#": 3, "E": 4, "F": 5, "F#": 6, "G": 7, "G#": 8, "A": 9, "A#": 10, "B": 11} - key_real = str(key_map[key]) - - explicit_real = int(explicit) - # Convert the mode input from a string to an integer value - mode_map = {"Major": 1, "Minor": 0} - mode_real = mode_map[mode] - - data = { - "duration_ms": [duration_ms], - "explicit": [explicit_real], - "danceability": [danceability], - "energy": [energy], - "key": [key_real], - "loudness": [loudness], - "mode": [mode_real], - "speechiness": [speechiness], - "acousticness": [acousticness], - "instrumentalness": [instrumentalness], - "liveness": [liveness], - "valence": [valence], - "tempo": [tempo], - "time_signature": [str(time_signature)], - "track_genre": [track_genre] - } - - df = pd.DataFrame(data) - print(df) - print(final_model.predict(df)) - # Use your trained model to predict popularity based on the input features - if(final_model.predict(df)[0] == 1): - return ("
      My gif 3" + - "

      Your song issa boppp

      ") - else: - return ("
      My gif 4" + - "

      Not a bop....

      ") - - song_dropdown.change(fn=update_features, inputs=[song_dropdown,track_ids_var], outputs=inputs) - predict_button.click(fn=predict_popularity, inputs=inputs, outputs=popularity_box, scroll_to_output=True, - _js="const element = document.querySelector('output');"+ - "const rect = element.getBoundingClientRect();"+ - "const options = {left: rect.left, top: rect.top, behavior: 'smooth'}"+ - "parentIFrame' in window ?" - "window.parentIFrame.scrollTo(options):"+ - "window.scrollTo(options)") - - demo.launch() diff --git a/spaces/dirge/voicevox/voicevox_engine/synthesis_engine/core_wrapper.py b/spaces/dirge/voicevox/voicevox_engine/synthesis_engine/core_wrapper.py deleted file mode 100644 index fe8f9778707a7476f30ab5b80f1ed1e1f759b8a0..0000000000000000000000000000000000000000 --- a/spaces/dirge/voicevox/voicevox_engine/synthesis_engine/core_wrapper.py +++ /dev/null @@ -1,538 +0,0 @@ -import os -import platform -from ctypes import CDLL, POINTER, c_bool, c_char_p, c_float, c_int, c_long -from ctypes.util import find_library -from dataclasses import dataclass -from enum import Enum, auto -from pathlib import Path -from typing import List, Optional - -import numpy as np - - -class OldCoreError(Exception): - """古いコアが使用されている場合に発生するエラー""" - - -class CoreError(Exception): - """コア呼び出しで発生したエラー""" - - -def load_runtime_lib(runtime_dirs: List[Path]): - if platform.system() == "Windows": - # DirectML.dllはonnxruntimeと互換性のないWindows標準搭載のものを優先して読み込むことがあるため、明示的に読み込む - # 参考 1. https://github.com/microsoft/onnxruntime/issues/3360 - # 参考 2. https://tadaoyamaoka.hatenablog.com/entry/2020/06/07/113616 - lib_file_names = [ - "torch_cpu.dll", - "torch_cuda.dll", - "DirectML.dll", - "onnxruntime.dll", - ] - lib_names = ["torch_cpu", "torch_cuda", "onnxruntime"] - elif platform.system() == "Linux": - lib_file_names = ["libtorch.so", "libonnxruntime.so"] - lib_names = ["torch", "onnxruntime"] - elif platform.system() == "Darwin": - lib_file_names = ["libonnxruntime.dylib"] - lib_names = ["onnxruntime"] - else: - raise RuntimeError("不明なOSです") - for lib_path in runtime_dirs: - for file_name in lib_file_names: - try: - CDLL(str((lib_path / file_name).resolve(strict=True))) - except OSError: - pass - for lib_name in lib_names: - try: - CDLL(find_library(lib_name)) - except (OSError, TypeError): - pass - - -class GPUType(Enum): - # NONEはCPUしか対応していないことを示す - NONE = auto() - CUDA = auto() - DIRECT_ML = auto() - - -@dataclass(frozen=True) -class CoreInfo: - name: str - platform: str - arch: str - core_type: str - gpu_type: GPUType - - -# version 0.12 より前のコアの情報 -CORE_INFOS = [ - # Windows - CoreInfo( - name="core.dll", - platform="Windows", - arch="x64", - core_type="libtorch", - gpu_type=GPUType.CUDA, - ), - CoreInfo( - name="core_cpu.dll", - platform="Windows", - arch="x64", - core_type="libtorch", - gpu_type=GPUType.NONE, - ), - CoreInfo( - name="core_gpu_x64_nvidia.dll", - platform="Windows", - arch="x64", - core_type="onnxruntime", - gpu_type=GPUType.CUDA, - ), - CoreInfo( - name="core_gpu_x64_directml.dll", - platform="Windows", - arch="x64", - core_type="onnxruntime", - gpu_type=GPUType.DIRECT_ML, - ), - CoreInfo( - name="core_cpu_x64.dll", - platform="Windows", - arch="x64", - core_type="onnxruntime", - gpu_type=GPUType.NONE, - ), - CoreInfo( - name="core_cpu_x86.dll", - platform="Windows", - arch="x86", - core_type="onnxruntime", - gpu_type=GPUType.NONE, - ), - CoreInfo( - name="core_gpu_x86_directml.dll", - platform="Windows", - arch="x86", - core_type="onnxruntime", - gpu_type=GPUType.DIRECT_ML, - ), - 
CoreInfo( - name="core_cpu_arm.dll", - platform="Windows", - arch="armv7l", - core_type="onnxruntime", - gpu_type=GPUType.NONE, - ), - CoreInfo( - name="core_gpu_arm_directml.dll", - platform="Windows", - arch="armv7l", - core_type="onnxruntime", - gpu_type=GPUType.DIRECT_ML, - ), - CoreInfo( - name="core_cpu_arm64.dll", - platform="Windows", - arch="aarch64", - core_type="onnxruntime", - gpu_type=GPUType.NONE, - ), - CoreInfo( - name="core_gpu_arm64_directml.dll", - platform="Windows", - arch="aarch64", - core_type="onnxruntime", - gpu_type=GPUType.DIRECT_ML, - ), - # Linux - CoreInfo( - name="libcore.so", - platform="Linux", - arch="x64", - core_type="libtorch", - gpu_type=GPUType.CUDA, - ), - CoreInfo( - name="libcore_cpu.so", - platform="Linux", - arch="x64", - core_type="libtorch", - gpu_type=GPUType.NONE, - ), - CoreInfo( - name="libcore_gpu_x64_nvidia.so", - platform="Linux", - arch="x64", - core_type="onnxruntime", - gpu_type=GPUType.CUDA, - ), - CoreInfo( - name="libcore_cpu_x64.so", - platform="Linux", - arch="x64", - core_type="onnxruntime", - gpu_type=GPUType.NONE, - ), - CoreInfo( - name="libcore_cpu_armhf.so", - platform="Linux", - arch="armv7l", - core_type="onnxruntime", - gpu_type=GPUType.NONE, - ), - CoreInfo( - name="libcore_cpu_arm64.so", - platform="Linux", - arch="aarch64", - core_type="onnxruntime", - gpu_type=GPUType.NONE, - ), - # macOS - CoreInfo( - name="libcore_cpu_universal2.dylib", - platform="Darwin", - arch="universal", - core_type="onnxruntime", - gpu_type=GPUType.NONE, - ), -] - - -# version 0.12 以降のコアの名前の辞書 -# - version 0.12, 0.13 のコアの名前: core -# - version 0.14 からのコアの名前: voicevox_core -CORENAME_DICT = { - "Windows": ("voicevox_core.dll", "core.dll"), - "Linux": ("libvoicevox_core.so", "libcore.so"), - "Darwin": ("libvoicevox_core.dylib", "libcore.dylib"), -} - - -def find_version_0_12_core_or_later(core_dir: Path) -> Optional[str]: - """ - core_dir で指定したディレクトリにあるコアライブラリが Version 0.12 以降である場合、 - 見つかった共有ライブラリの名前を返す。 - - Version 0.12 以降と判定する条件は、 - - - core_dir に metas.json が存在しない - - コアライブラリの名前が CORENAME_DICT の定義に従っている - - の両方が真のときである。 - cf. 
https://github.com/VOICEVOX/voicevox_engine/issues/385 - """ - if (core_dir / "metas.json").exists(): - return None - - for core_name in CORENAME_DICT[platform.system()]: - if (core_dir / core_name).is_file(): - return core_name - - return None - - -def get_arch_name() -> Optional[str]: - """ - platform.machine() が特定のアーキテクチャ上で複数パターンの文字列を返し得るので、 - 一意な文字列に変換する - サポート外のアーキテクチャである場合、None を返す - """ - machine = platform.machine() - if machine == "x86_64" or machine == "x64" or machine == "AMD64": - return "x64" - elif machine == "i386" or machine == "x86": - return "x86" - elif machine == "arm64": - return "aarch64" - elif machine in ["armv7l", "aarch64"]: - return machine - else: - return None - - -def get_core_name( - arch_name: str, - platform_name: str, - model_type: str, - gpu_type: GPUType, -) -> Optional[str]: - if platform_name == "Darwin": - if gpu_type == GPUType.NONE and (arch_name == "x64" or arch_name == "aarch64"): - arch_name = "universal" - else: - return None - for core_info in CORE_INFOS: - if ( - core_info.platform == platform_name - and core_info.arch == arch_name - and core_info.core_type == model_type - and core_info.gpu_type == gpu_type - ): - return core_info.name - return None - - -def get_suitable_core_name( - model_type: str, - gpu_type: GPUType, -) -> Optional[str]: - arch_name = get_arch_name() - if arch_name is None: - return None - platform_name = platform.system() - return get_core_name(arch_name, platform_name, model_type, gpu_type) - - -def check_core_type(core_dir: Path) -> Optional[str]: - # libtorch版はDirectML未対応なので、ここでは`gpu_type=GPUType.DIRECT_ML`は入れない - libtorch_core_names = [ - get_suitable_core_name("libtorch", gpu_type=GPUType.CUDA), - get_suitable_core_name("libtorch", gpu_type=GPUType.NONE), - ] - onnxruntime_core_names = [ - get_suitable_core_name("onnxruntime", gpu_type=GPUType.CUDA), - get_suitable_core_name("onnxruntime", gpu_type=GPUType.DIRECT_ML), - get_suitable_core_name("onnxruntime", gpu_type=GPUType.NONE), - ] - if any([(core_dir / name).is_file() for name in libtorch_core_names if name]): - return "libtorch" - elif any([(core_dir / name).is_file() for name in onnxruntime_core_names if name]): - return "onnxruntime" - else: - return None - - -def load_core(core_dir: Path, use_gpu: bool) -> CDLL: - core_name = find_version_0_12_core_or_later(core_dir) - if core_name: - try: - # NOTE: CDLL クラスのコンストラクタの引数 name には文字列を渡す必要がある。 - # Windows 環境では PathLike オブジェクトを引数として渡すと初期化に失敗する。 - return CDLL(str((core_dir / core_name).resolve(strict=True))) - except OSError as err: - raise RuntimeError(f"コアの読み込みに失敗しました:{err}") - - model_type = check_core_type(core_dir) - if model_type is None: - raise RuntimeError("コアが見つかりません") - if use_gpu or model_type == "onnxruntime": - core_name = get_suitable_core_name(model_type, gpu_type=GPUType.CUDA) - if core_name: - try: - return CDLL(str((core_dir / core_name).resolve(strict=True))) - except OSError: - pass - core_name = get_suitable_core_name(model_type, gpu_type=GPUType.DIRECT_ML) - if core_name: - try: - return CDLL(str((core_dir / core_name).resolve(strict=True))) - except OSError: - pass - core_name = get_suitable_core_name(model_type, gpu_type=GPUType.NONE) - if core_name: - try: - return CDLL(str((core_dir / core_name).resolve(strict=True))) - except OSError as err: - if model_type == "libtorch": - core_name = get_suitable_core_name(model_type, gpu_type=GPUType.CUDA) - if core_name: - try: - return CDLL(str((core_dir / core_name).resolve(strict=True))) - except OSError as err_: - err = err_ - raise 
RuntimeError(f"コアの読み込みに失敗しました:{err}") - else: - raise RuntimeError(f"このコンピュータのアーキテクチャ {platform.machine()} で利用可能なコアがありません") - - -class CoreWrapper: - def __init__( - self, - use_gpu: bool, - core_dir: Path, - cpu_num_threads: int = 0, - load_all_models: bool = False, - ) -> None: - - self.core = load_core(core_dir, use_gpu) - - self.core.initialize.restype = c_bool - self.core.metas.restype = c_char_p - self.core.yukarin_s_forward.restype = c_bool - self.core.yukarin_sa_forward.restype = c_bool - self.core.decode_forward.restype = c_bool - self.core.last_error_message.restype = c_char_p - - self.exist_supported_devices = False - self.exist_finalize = False - exist_cpu_num_threads = False - self.exist_load_model = False - self.exist_is_model_loaded = False - - is_version_0_12_core_or_later = ( - find_version_0_12_core_or_later(core_dir) is not None - ) - if is_version_0_12_core_or_later: - model_type = "onnxruntime" - self.exist_load_model = True - self.exist_is_model_loaded = True - self.core.load_model.argtypes = (c_long,) - self.core.load_model.restype = c_bool - self.core.is_model_loaded.argtypes = (c_long,) - self.core.is_model_loaded.restype = c_bool - else: - model_type = check_core_type(core_dir) - assert model_type is not None - - if model_type == "onnxruntime": - self.core.supported_devices.restype = c_char_p - self.core.finalize.restype = None - self.exist_supported_devices = True - self.exist_finalize = True - exist_cpu_num_threads = True - - self.core.yukarin_s_forward.argtypes = ( - c_int, - POINTER(c_long), - POINTER(c_long), - POINTER(c_float), - ) - self.core.yukarin_sa_forward.argtypes = ( - c_int, - POINTER(c_long), - POINTER(c_long), - POINTER(c_long), - POINTER(c_long), - POINTER(c_long), - POINTER(c_long), - POINTER(c_long), - POINTER(c_float), - ) - self.core.decode_forward.argtypes = ( - c_int, - c_int, - POINTER(c_float), - POINTER(c_float), - POINTER(c_long), - POINTER(c_float), - ) - - cwd = os.getcwd() - os.chdir(core_dir) - try: - if is_version_0_12_core_or_later: - self.assert_core_success( - self.core.initialize(use_gpu, cpu_num_threads, load_all_models) - ) - elif exist_cpu_num_threads: - self.assert_core_success( - self.core.initialize(".", use_gpu, cpu_num_threads) - ) - else: - self.assert_core_success(self.core.initialize(".", use_gpu)) - finally: - os.chdir(cwd) - - def metas(self) -> str: - return self.core.metas().decode("utf-8") - - def yukarin_s_forward( - self, - length: int, - phoneme_list: np.ndarray, - speaker_id: np.ndarray, - ) -> np.ndarray: - output = np.zeros((length,), dtype=np.float32) - self.assert_core_success( - self.core.yukarin_s_forward( - c_int(length), - phoneme_list.ctypes.data_as(POINTER(c_long)), - speaker_id.ctypes.data_as(POINTER(c_long)), - output.ctypes.data_as(POINTER(c_float)), - ) - ) - return output - - def yukarin_sa_forward( - self, - length: int, - vowel_phoneme_list: np.ndarray, - consonant_phoneme_list: np.ndarray, - start_accent_list: np.ndarray, - end_accent_list: np.ndarray, - start_accent_phrase_list: np.ndarray, - end_accent_phrase_list: np.ndarray, - speaker_id: np.ndarray, - ) -> np.ndarray: - output = np.empty( - ( - len(speaker_id), - length, - ), - dtype=np.float32, - ) - self.assert_core_success( - self.core.yukarin_sa_forward( - c_int(length), - vowel_phoneme_list.ctypes.data_as(POINTER(c_long)), - consonant_phoneme_list.ctypes.data_as(POINTER(c_long)), - start_accent_list.ctypes.data_as(POINTER(c_long)), - end_accent_list.ctypes.data_as(POINTER(c_long)), - 
start_accent_phrase_list.ctypes.data_as(POINTER(c_long)), - end_accent_phrase_list.ctypes.data_as(POINTER(c_long)), - speaker_id.ctypes.data_as(POINTER(c_long)), - output.ctypes.data_as(POINTER(c_float)), - ) - ) - return output - - def decode_forward( - self, - length: int, - phoneme_size: int, - f0: np.ndarray, - phoneme: np.ndarray, - speaker_id: np.ndarray, - ) -> np.ndarray: - output = np.empty((length * 256,), dtype=np.float32) - self.assert_core_success( - self.core.decode_forward( - c_int(length), - c_int(phoneme_size), - f0.ctypes.data_as(POINTER(c_float)), - phoneme.ctypes.data_as(POINTER(c_float)), - speaker_id.ctypes.data_as(POINTER(c_long)), - output.ctypes.data_as(POINTER(c_float)), - ) - ) - return output - - def supported_devices(self) -> str: - if self.exist_supported_devices: - return self.core.supported_devices().decode("utf-8") - raise OldCoreError - - def finalize(self) -> None: - if self.exist_finalize: - self.core.finalize() - return - raise OldCoreError - - def load_model(self, speaker_id: int) -> None: - if self.exist_load_model: - self.assert_core_success(self.core.load_model(c_long(speaker_id))) - raise OldCoreError - - def is_model_loaded(self, speaker_id: int) -> bool: - if self.exist_is_model_loaded: - return self.core.is_model_loaded(c_long(speaker_id)) - raise OldCoreError - - def assert_core_success(self, result: bool) -> None: - if not result: - raise CoreError( - self.core.last_error_message().decode("utf-8", "backslashreplace") - ) diff --git a/spaces/dma123/gpt-js/js/3rdparty/auto-render.min.js b/spaces/dma123/gpt-js/js/3rdparty/auto-render.min.js deleted file mode 100644 index 74f07c2f99bc40c895f9e9bc353b9377962d0723..0000000000000000000000000000000000000000 --- a/spaces/dma123/gpt-js/js/3rdparty/auto-render.min.js +++ /dev/null @@ -1 +0,0 @@ -!function(e,t){"object"==typeof exports&&"object"==typeof module?module.exports=t(require("katex")):"function"==typeof define&&define.amd?define(["katex"],t):"object"==typeof exports?exports.renderMathInElement=t(require("katex")):e.renderMathInElement=t(e.katex)}("undefined"!=typeof self?self:this,(function(e){return function(){"use strict";var t={771:function(t){t.exports=e}},r={};function n(e){var i=r[e];if(void 0!==i)return i.exports;var a=r[e]={exports:{}};return t[e](a,a.exports,n),a.exports}n.n=function(e){var t=e&&e.__esModule?function(){return e.default}:function(){return e};return n.d(t,{a:t}),t},n.d=function(e,t){for(var r in t)n.o(t,r)&&!n.o(e,r)&&Object.defineProperty(e,r,{enumerable:!0,get:t[r]})},n.o=function(e,t){return Object.prototype.hasOwnProperty.call(e,t)};var i={};return function(){n.d(i,{default:function(){return s}});var e=n(771),t=n.n(e),r=function(e,t,r){for(var n=r,i=0,a=e.length;n0&&(i.push({type:"text",data:e.slice(0,n)}),e=e.slice(n));var l=t.findIndex((function(t){return e.startsWith(t.left)}));if(-1===(n=r(t[l].right,e,t[l].left.length)))break;var d=e.slice(0,n+t[l].right.length),s=a.test(d)?d:e.slice(t[l].left.length,n);i.push({type:"math",data:s,rawData:d,display:t[l].display}),e=e.slice(n+t[l].right.length)}return""!==e&&i.push({type:"text",data:e}),i},l=function(e,r){var n=o(e,r.delimiters);if(1===n.length&&"text"===n[0].type)return null;for(var i=document.createDocumentFragment(),a=0;a int: - return self._MAX_ONGOING_TASKS - - @property - def ongoing_tasks(self) -> List[str]: - return self._ONGOING_TASKS - - @property - def queue(self) -> deque: - return self._QUEUE - - @property - def task_data(self) -> Dict[str, BaseFlowData]: - return self._TASK_DATA - - @property - def 
task_states(self) -> dict: - return self._TASK_STATES - - @property - def nonce(self) -> str: - return self._NONCE - - def set_nonce(self, nonce: str): - self._NONCE = nonce - - @classmethod - async def run(cls): - raise NotImplementedError diff --git a/spaces/docs-demos/mt5-small-finetuned-arxiv-cs-finetuned-arxiv-cs-full/app.py b/spaces/docs-demos/mt5-small-finetuned-arxiv-cs-finetuned-arxiv-cs-full/app.py deleted file mode 100644 index 292894cb68668818a2c74bc8822d2c9b2a3d32ad..0000000000000000000000000000000000000000 --- a/spaces/docs-demos/mt5-small-finetuned-arxiv-cs-finetuned-arxiv-cs-full/app.py +++ /dev/null @@ -1,33 +0,0 @@ -import gradio as gr - -title = "MT5" -description = "Gradio Demo for MT5. To use it, simply add your text, or click one of the examples to load them. Read more at the links below." - -article = "

      mT5: A massively multilingual pre-trained text-to-text transformer

      " - -examples = [ - ["""The tower is 324 metres (1,063 ft) tall, about the same height as an 81-storey building, and the tallest structure in Paris. Its base is square, measuring 125 metres (410 ft) on each side. During its construction, the Eiffel Tower surpassed the Washington Monument to become the tallest man-made structure in the world, a title it held for 41 years until the Chrysler Building in New York City was finished in 1930. It was the first structure to reach a height of 300 metres. Due to the addition of a broadcasting aerial at the top of the tower in 1957, it is now taller than the Chrysler Building by 5.2 metres (17 ft). Excluding transmitters, the Eiffel Tower is the second tallest free-standing structure in France after the Millau Viaduct.""","mt5-small-finetuned-arxiv-cs-finetuned-arxiv-cs-full"] -] - -io1 = gr.Interface.load("huggingface/shamikbose89/mt5-small-finetuned-arxiv-cs-finetuned-arxiv-cs-full") - -io2 = gr.Interface.load("huggingface/csebuetnlp/mT5_multilingual_XLSum") - - -def inference(text,model): - if model == "mt5-small-finetuned-arxiv-cs-finetuned-arxiv-cs-full": - outtext = io1(text) - else: - outtext = io2(text) - return outtext - - -gr.Interface( - inference, - [gr.inputs.Textbox(label="Input",lines=5),gr.inputs.Dropdown(choices=["mt5-small-finetuned-arxiv-cs-finetuned-arxiv-cs-full","mT5_multilingual_XLSum"], type="value", default="mt5-small-finetuned-arxiv-cs-finetuned-arxiv-cs-full", label="model") -], - gr.outputs.Textbox(label="Output"), - examples=examples, - article=article, - title=title, - description=description).launch(enable_queue=True) diff --git a/spaces/donnyb/FalconVis/README.md b/spaces/donnyb/FalconVis/README.md deleted file mode 100644 index a3ad1bb5f78d0e8d49539c527701bdb5c21b15a0..0000000000000000000000000000000000000000 --- a/spaces/donnyb/FalconVis/README.md +++ /dev/null @@ -1,8 +0,0 @@ ---- -title: FalconVis -emoji: ✈️ -colorFrom: green -colorTo: gray -sdk: docker -pinned: false ---- diff --git a/spaces/dorkai/text-generation-webui-main/css/chat_style-messenger.css b/spaces/dorkai/text-generation-webui-main/css/chat_style-messenger.css deleted file mode 100644 index 4d4bba0d902fe0d544a0203a8bf68d9c243ccbf6..0000000000000000000000000000000000000000 --- a/spaces/dorkai/text-generation-webui-main/css/chat_style-messenger.css +++ /dev/null @@ -1,124 +0,0 @@ -.chat { - margin-left: auto; - margin-right: auto; - max-width: 800px; - height: calc(100vh - 306px); - overflow-y: auto; - padding-right: 20px; - display: flex; - flex-direction: column-reverse; - word-break: break-word; - overflow-wrap: anywhere; -} - -.message { - padding-bottom: 25px; - font-size: 15px; - font-family: Helvetica, Arial, sans-serif; - line-height: 1.428571429; -} - -.circle-you { - width: 50px; - height: 50px; - background-color: rgb(238, 78, 59); - border-radius: 50%; -} - -.circle-bot { - width: 50px; - height: 50px; - background-color: rgb(59, 78, 244); - border-radius: 50%; - float: left; - margin-right: 10px; - margin-top: 5px; -} - -.circle-bot img, -.circle-you img { - border-radius: 50%; - width: 100%; - height: 100%; - object-fit: cover; -} -.circle-you { - margin-top: 5px; - float: right; -} -.circle-bot + .text, .circle-you + .text { - border-radius: 18px; - padding: 8px 12px; -} - -.circle-bot + .text { - background-color: #E4E6EB; - float: left; -} - -.circle-you + .text { - float: right; - background-color: rgb(0, 132, 255); - margin-right: 10px; -} - -.circle-you + .text div, .circle-you + .text *, .dark .circle-you + .text div, .dark 
.circle-you + .text * { - color: #FFF !important; -} -.circle-you + .text .username { - text-align: right; -} - -.dark .circle-bot + .text div, .dark .circle-bot + .text * { - color: #000; -} - -.text { - max-width: 80%; -} - -.text p { - margin-top: 5px; -} - -.username { - font-weight: bold; -} - -.message-body {} - -.message-body img { - max-width: 300px; - max-height: 300px; - border-radius: 20px; -} - -.message-body p { - margin-bottom: 0 !important; - font-size: 15px !important; - line-height: 1.428571429 !important; -} - -.message-body li { - margin-top: 0.5em !important; - margin-bottom: 0.5em !important; -} - -.message-body li > p { - display: inline !important; -} - -.message-body code { - overflow-x: auto; -} -.message-body :not(pre) > code { - white-space: normal !important; -} - -.dark .message-body p em { - color: rgb(138, 138, 138) !important; -} - -.message-body p em { - color: rgb(110, 110, 110) !important; -} diff --git a/spaces/ds520/bingo/src/lib/storage.ts b/spaces/ds520/bingo/src/lib/storage.ts deleted file mode 100644 index a5b7825c4f76a28c704da512ae39e8bb45addd09..0000000000000000000000000000000000000000 --- a/spaces/ds520/bingo/src/lib/storage.ts +++ /dev/null @@ -1,27 +0,0 @@ -import { getMany, set, del, clear } from 'idb-keyval'; - -export const Storage = { - async get(key: string | string[] | null): Promise { - if (key === null) return null; - if (typeof key === 'string') { - key = [key] - } - const returnData: Record = {} - const values = await getMany(key) - key.forEach((k, idx)=> { - returnData[k] = values[idx] - }) - return returnData; - }, - async set(object: any) { - for (let key of Object.keys(object)) { - await set(key, object[key]) - } - }, - async remove(key: string) { - return del(key); - }, - async clear() { - return clear(); - } -} diff --git a/spaces/editing-images/ledits/utils.py b/spaces/editing-images/ledits/utils.py deleted file mode 100644 index a9a7ec323d63265f418432fad13d3f98e0ea9f14..0000000000000000000000000000000000000000 --- a/spaces/editing-images/ledits/utils.py +++ /dev/null @@ -1,114 +0,0 @@ -import PIL -from PIL import Image, ImageDraw ,ImageFont -from matplotlib import pyplot as plt -import torchvision.transforms as T -import os -import torch -import yaml - -def show_torch_img(img): - img = to_np_image(img) - plt.imshow(img) - plt.axis("off") - -def to_np_image(all_images): - all_images = (all_images.permute(0, 2, 3, 1) * 127.5 + 128).clamp(0, 255).to(torch.uint8).cpu().numpy()[0] - return all_images - -def tensor_to_pil(tensor_imgs): - if type(tensor_imgs) == list: - tensor_imgs = torch.cat(tensor_imgs) - tensor_imgs = (tensor_imgs / 2 + 0.5).clamp(0, 1) - to_pil = T.ToPILImage() - pil_imgs = [to_pil(img) for img in tensor_imgs] - return pil_imgs - -def pil_to_tensor(pil_imgs): - to_torch = T.ToTensor() - if type(pil_imgs) == PIL.Image.Image: - tensor_imgs = to_torch(pil_imgs).unsqueeze(0)*2-1 - elif type(pil_imgs) == list: - tensor_imgs = torch.cat([to_torch(pil_imgs).unsqueeze(0)*2-1 for img in pil_imgs]).to(device) - else: - raise Exception("Input need to be PIL.Image or list of PIL.Image") - return tensor_imgs - - -## TODO implement this -# n = 10 -# num_rows = 4 -# num_col = n // num_rows -# num_col = num_col + 1 if n % num_rows else num_col -# num_col -def add_margin(pil_img, top = 0, right = 0, bottom = 0, - left = 0, color = (255,255,255)): - width, height = pil_img.size - new_width = width + right + left - new_height = height + top + bottom - result = Image.new(pil_img.mode, (new_width, new_height), color) - - 
result.paste(pil_img, (left, top)) - return result - -def image_grid(imgs, rows = 1, cols = None, - size = None, - titles = None, text_pos = (0, 0)): - if type(imgs) == list and type(imgs[0]) == torch.Tensor: - imgs = torch.cat(imgs) - if type(imgs) == torch.Tensor: - imgs = tensor_to_pil(imgs) - - if not size is None: - imgs = [img.resize((size,size)) for img in imgs] - if cols is None: - cols = len(imgs) - assert len(imgs) >= rows*cols - - top=20 - w, h = imgs[0].size - delta = 0 - if len(imgs)> 1 and not imgs[1].size[1] == h: - delta = top - h = imgs[1].size[1] - if not titles is None: - font = ImageFont.truetype("/usr/share/fonts/truetype/freefont/FreeMono.ttf", - size = 20, encoding="unic") - h = top + h - grid = Image.new('RGB', size=(cols*w, rows*h+delta)) - for i, img in enumerate(imgs): - - if not titles is None: - img = add_margin(img, top = top, bottom = 0,left=0) - draw = ImageDraw.Draw(img) - draw.text(text_pos, titles[i],(0,0,0), - font = font) - if not delta == 0 and i > 0: - grid.paste(img, box=(i%cols*w, i//cols*h+delta)) - else: - grid.paste(img, box=(i%cols*w, i//cols*h)) - - return grid - - -""" -input_folder - dataset folder -""" -def load_dataset(input_folder): - # full_file_names = glob.glob(input_folder) - # class_names = [x[0] for x in os.walk(input_folder)] - class_names = next(os.walk(input_folder))[1] - class_names[:] = [d for d in class_names if not d[0] == '.'] - file_names=[] - for class_name in class_names: - cur_path = os.path.join(input_folder, class_name) - filenames = next(os.walk(cur_path), (None, None, []))[2] - filenames = [f for f in filenames if not f[0] == '.'] - file_names.append(filenames) - return class_names, file_names - - -def dataset_from_yaml(yaml_location): - with open(yaml_location, 'r') as stream: - data_loaded = yaml.safe_load(stream) - - return data_loaded \ No newline at end of file diff --git a/spaces/epexVfeibi/Imagedeblurr/3ds Max 2006 Download Full Version Torrent.md b/spaces/epexVfeibi/Imagedeblurr/3ds Max 2006 Download Full Version Torrent.md deleted file mode 100644 index 13de57b418e75a0ba26103e3e6df2d1ff46cedb6..0000000000000000000000000000000000000000 --- a/spaces/epexVfeibi/Imagedeblurr/3ds Max 2006 Download Full Version Torrent.md +++ /dev/null @@ -1,6 +0,0 @@ -

      3ds Max 2006 Download Full Version Torrent


      Download Zip ✪✪✪ https://jinyurl.com/2uEpnN



      - -Link download Autodesk Netfabb Ultimate 2021 R0 win64 full crack . ... Crack copy hotmap sd card. sh Xforce Keygen 3ds Max 2012 64 Bit . ... Listen to AutoCAD Mobile 2006 Download Full Version Torrent and 154 ... for ... 1fdad05405
      -
      -
      -

      diff --git a/spaces/eugenkalosha/Semmap/columnnames.py b/spaces/eugenkalosha/Semmap/columnnames.py deleted file mode 100644 index 674f09e5386be1fb28faee2bdcfbf98038bd29ef..0000000000000000000000000000000000000000 --- a/spaces/eugenkalosha/Semmap/columnnames.py +++ /dev/null @@ -1,5 +0,0 @@ -VZ_TOPIC = "topic" -VZ_SCORE = "score" -VZ_WORDS = "words" -VZ_IDS = "ids" - diff --git a/spaces/falterWliame/Face_Mask_Detection/Kon Kya Hai General Knowledge Book In Urdu Free Download.md b/spaces/falterWliame/Face_Mask_Detection/Kon Kya Hai General Knowledge Book In Urdu Free Download.md deleted file mode 100644 index 0ef74e38a1fb7967489fc8bc1ff345b4d81f6143..0000000000000000000000000000000000000000 --- a/spaces/falterWliame/Face_Mask_Detection/Kon Kya Hai General Knowledge Book In Urdu Free Download.md +++ /dev/null @@ -1,5 +0,0 @@ - -

      You can read online books here. History books history books in Urdu and free history books are available here. If you are searching for the free ebook, free online book and free books online then visit us. People who search for online book and book bestseller can get the history books, world history books, Pakistan history books, and Urdu Point. People search for Urdu books, hindi books, and all books in Urdu from Urdu Point. People want to get Urdu books and free Urdu books. You can find the Urdu books free download, Urdu books for free and free Urdu books for children here. History books in Urdu are best history books and best books of world history. You can find the best history books, best History book online and best History book in Urdu at Urdu Point. For best history books search Urdu Point. History books online, history books examples, and history books to read are found. Some search results about Best history books, Best ancient history books and Islamic history books are found. People who are looking for history books online, history books in Urdu and free pdf books are found. Search results about online library books, online Urdu books, and online books are found. You can easily read online book here. Some people search for the free online romance books and read entire books free. If you want to know who the first king of India was, how old India is, medieval Indian history and history of India pdf then read Indian history books. Searches about Pakistan history, Indian history, Islamic history, Indian history online and Indian history pdf also found. To get the history books visit Urdu Point. Tareekhi kitabain are available here. We provide you access to the Tareekhi kitabain. Get the history books, famous history books, Indian history books and online history books at Urdu Point. Come at Urdu Point and get an easy access to history books, history books in Urdu and Pakistan history books.

      -

      kon kya hai general knowledge book in urdu free download


      DOWNLOAD ✫✫✫ https://urlca.com/2uDd1h



      899543212b
      -
      -
      \ No newline at end of file diff --git a/spaces/falterWliame/Face_Mask_Detection/PATCHED Active.File.Recovery.v7.3.Build.121.Incl.Keygen-TSRh-DVT-RESURRE Fixed.md b/spaces/falterWliame/Face_Mask_Detection/PATCHED Active.File.Recovery.v7.3.Build.121.Incl.Keygen-TSRh-DVT-RESURRE Fixed.md deleted file mode 100644 index 9377a0dd5ffc2b33a14c41a42e5db178e6c3d13c..0000000000000000000000000000000000000000 --- a/spaces/falterWliame/Face_Mask_Detection/PATCHED Active.File.Recovery.v7.3.Build.121.Incl.Keygen-TSRh-DVT-RESURRE Fixed.md +++ /dev/null @@ -1,6 +0,0 @@ -

      PATCHED Active.File.Recovery.v7.3.Build.121.Incl.Keygen-TSRh-DVT-RESURRE


      Downloadhttps://urlca.com/2uDcYE



      - -Incl.Keygen-TSRh-DVT-RESURRE ). Incl. Keygen-TSRh-DVT-RESURRE). Incl. Keygen-TSRh-DVT-RESURRE). KEEPALIVE "c2.pioneer.cc" "Pioneer" 20080224 2354163966 "ecc3d319dd7eb0b4ce28e0e2271f7d8df8a0f2e38" "773669bd82e2b2ba8a200c60d939a9f4dc7d9ccd9c5bfb" Last-Modified: Mon, 24 Feb 2020 21:22:22 GMT X-LIVE-PROTOCOL: 1.0 TID: 27620 [A] [B] [C] [D] [E] [F] [G] 1/1 : [23:44] [D] -10:-6:15 [F] [G] 1/1 : [23:44] [D] -10:-6:15 [F] [G] 1/1 : [23:44] [D] -10:-6:15 [F] [G] 1/1 : [23:44] [D] -10:-6:15 [F] [G] 1/1 : [23:44] [D] -10:-6:15 [F] [G] 1/1 : [23:44] [D] -10:-6:15 [F] [G] 1/1 : [23:44] [D] -10:-6:15 [F] [G] 1/1 : [23:44] [D] -10:-6:15 [F] [G] 1/1 : [23:44] [D] -10:-6:15 [F] [G] 1/1 : [23:44] [D] -10:-6:15 [F] [G] 1/1 : [23:44] [D] -10:-6:15 [F] [G] 1/1 : [23:44] [D] -10:-6:15 [F] [G] 1/1 : [23:44] [D] -10:-6:15 [F] [G] 4fefd39f24
      -
      -
      -

      diff --git a/spaces/fatiXbelha/sd/All National Flags in One Place - Download and Embed Easily.md b/spaces/fatiXbelha/sd/All National Flags in One Place - Download and Embed Easily.md deleted file mode 100644 index 1ffc1ca3eb09a98a8b5a2002c86cf057f69e24bc..0000000000000000000000000000000000000000 --- a/spaces/fatiXbelha/sd/All National Flags in One Place - Download and Embed Easily.md +++ /dev/null @@ -1,152 +0,0 @@ - - - - - -
      -

      How to Download Flags of All Countries for Free

      -

Do you need country flags images for your project? Whether you are working on a news magazine, a website, a piece of software, a mobile app, or educational materials, you might find yourself in need of high-quality flag images that represent all the countries in the world. In this article, we will tell you why you might need country flags images, where to find them, and how to download them for free.

      -

      Why You Might Need Country Flags Images

      -

      Country flags images are useful for many purposes. Here are some of the common use cases and benefits of using country flags images:

      -

      download flags


      Download Ziphttps://urllie.com/2uNz7A



      -

      For News Magazines and Websites

      -

      If you are running a news magazine or a website that covers global events, you might want to use country flags images to illustrate your articles and make them more engaging. For example, you can use country flags images to show the location of a news story, to compare different countries on a certain topic, or to display statistics and rankings by country.

      -

      Some tips for using country flags images for news magazines and websites are:

      -
        -
      • Use icon images that are optimal for websites and apps. They have options with effects and fixed ratio from 16px up to 256px.
      • -
      • Use PNG or WebP formats for better quality and smaller file size.
      • -
• Use the embed or download programmatically option to integrate flag images into your website easily.
      • -
      -

      For Software and Mobile Apps

      -

If you are developing software or a mobile app that involves countries or regions, you might want to use country flags images to enhance your user interface and user experience. For example, you can use country flags images to show language or currency options, to indicate the origin or destination of a service or product, or to display the user's location or preferences.

      -

      Some tips for using country flags images for software and mobile apps are:

      -
        -
      • Use bitmap images that are suitable for printing and scaling. They have options with effects and fixed ratio from 16px up to 550px.
      • -
      • Use JPG or PNG formats for compatibility and quality.
      • -
      • Use vector images that are scalable and editable. They have options with effects and fixed ratio in SVG or PDF formats.
      • -
      -

      For Educational Purposes

      -

      If you are creating educational materials or activities that involve countries or regions, you might want to use country flags images to make them more fun and interactive. For example, you can use country flags images to teach geography, history, culture, or languages, to quiz students on their knowledge of countries or regions, or to create games and puzzles with country flags.

      -

      Some tips for using country flags images for educational purposes are:

      -
        -
      • Use bitmap images that are suitable for printing and scaling. They have options with effects and fixed ratio from 16px up to 550px.
      • -
      • Use JPG or PNG formats for compatibility and quality.
      • -
      • Use vector images that are scalable and editable. They have options with effects and fixed ratio in SVG or PDF formats.
      • -
      • Use continents and U.S. states categories to find flags of specific regions or states.
      • -
      -

      Where to Find Country Flags Images

      -

      There are many sources and options for finding country flags images online, but not all of them are reliable, updated, or free. Here are two of the best websites that offer country flags images for free:

      -

      Flagpedia.net

      -

      Flagpedia.net is a website that provides information and images of all the flags of the world. It has features and advantages such as:

      -


      -
        -
      • It offers three types of flag images: icon images, bitmap images, and vector images.
      • -
      • It allows you to embed or download flag images programmatically using API or CDN.
      • -
      • It updates its flag images regularly according to the changes in the world.
      • -
      • It provides additional information about each country, such as capital, population, area, languages, currencies, etc.
      • -
      -

      Icon Images

      -

      Icon images are flag images that are optimal for websites and apps. They have options with effects (such as glossy, rounded corners, shadow, etc.) and fixed ratio (such as square, 4:3, 16:9, etc.) from 16px up to 256px. You can choose from PNG or WebP formats for better quality and smaller file size.

      -

      Bitmap Images

      -

      Bitmap images are flag images that are suitable for printing and scaling. They have options with effects (such as glossy, rounded corners, shadow, etc.) and fixed ratio (such as square, 4:3, 16:9, etc.) from 16px up to 550px. You can choose from JPG or PNG formats for compatibility and quality.

      -

      Vector Images

      -

      Vector images are flag images that are scalable and editable. They have options with effects (such as glossy, rounded corners, shadow, etc.) and fixed ratio (such as square, 4:3, 16:9, etc.) in SVG or PDF formats. You can edit them using vector graphics software such as Adobe Illustrator or Inkscape.

      -

      Embed or Download Programmatically

      -

If you want to integrate flag images into your website easily, you can use the embed or download programmatically option. You can use the API (Application Programming Interface) to request flag images by country code or name. You can also use the CDN (Content Delivery Network) to load flag images faster from a server near you.
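To make the idea concrete, here is a minimal sketch of the programmatic-download approach in Python. The base URL and file naming below are placeholders rather than Flagpedia's documented endpoints, so copy the real image URL pattern from the embed code shown on the site before using it.

```python
# Minimal sketch: fetch flag images by ISO country code and save them locally.
# BASE_URL is a placeholder (an assumption), not the site's documented API;
# replace it with the URL pattern shown in the site's embed/download section.
import pathlib
import requests

BASE_URL = "https://example-flag-cdn.test/png/256"   # placeholder URL
COUNTRY_CODES = ["us", "fr", "jp"]                    # ISO 3166-1 alpha-2 codes

out_dir = pathlib.Path("flags")
out_dir.mkdir(exist_ok=True)

for code in COUNTRY_CODES:
    url = f"{BASE_URL}/{code}.png"
    response = requests.get(url, timeout=10)
    if response.ok:
        (out_dir / f"{code}.png").write_bytes(response.content)
        print(f"Saved {code}.png")
    else:
        print(f"Failed to fetch {code}: HTTP {response.status_code}")
```

The same loop works for WebP icons or larger bitmap sizes; only the URL pattern changes.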

      -

      Countryflags.com

      -

      Countryflags.com is another website that provides high-quality flag images of all the countries in the world. It has features and advantages such as:

      -
        -
      • It offers two types of flag files: JPG or PNG files.
      • -
      • It allows you to download flag files in various sizes from 16px up to 2500px.
      • -
      • It includes flags of continents and U.S. states in addition to countries.
      • -
      • It has a downloads page where you can download all the flag files in one ZIP file.
      • -
      -

      JPG or PNG Files

      -

      JPG or PNG files are flag files that you can download in various sizes from 16px up to 2500px. You can choose from JPG or PNG formats depending on your preference and need. JPG files are smaller in size but lower in quality, while PNG files are larger in size but higher in quality.

      -

      Continents and U.S. States

      -

      In addition to country flags, countryflags.com also provides flags of continents and U.S. states. You can find flags of Africa, Asia, Europe, North America, Oceania, and South America, as well as flags of all the 50 states of the United States. You can download them in JPG or PNG formats and in various sizes.

      -

      Downloads Page

      -

      If you want to download all the flag files in one ZIP file, you can go to the downloads page of countryflags.com. You can choose from three options: all country flags, all continent flags, or all U.S. state flags. You can also choose the file format (JPG or PNG) and the size (16px up to 2500px) that you want.
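For bulk downloads, the ZIP option described above can also be scripted. The sketch below assumes a placeholder archive URL; take the actual link from the downloads page itself.

```python
# Sketch: download a ZIP archive of flag images and extract it locally.
# ZIP_URL is a placeholder (an assumption); use the real link from the
# downloads page.
import io
import zipfile
import requests

ZIP_URL = "https://example.test/all-country-flags-png-256.zip"  # placeholder

response = requests.get(ZIP_URL, timeout=60)
response.raise_for_status()

with zipfile.ZipFile(io.BytesIO(response.content)) as archive:
    archive.extractall("country-flags")
    print(f"Extracted {len(archive.namelist())} files to country-flags/")
```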

      -

      How to Download Country Flags Images

      -

      Now that you know where to find country flags images, let's see how to download them for free. Here are the steps and instructions for downloading country flags images from flagpedia.net and countryflags.com:

      -

      From Flagpedia.net

      -

      To download country flags images from flagpedia.net, follow these steps:

      -
        -
      1. Go to flagpedia.net and browse or search for the country flag that you want.
      2. -
      3. Click on the country flag to open its page.
      4. -
      5. Scroll down to the section "Download a flag or use it on a website".
      6. -
      7. Select the type of flag image that you want: icon image, bitmap image, or vector image.
      8. -
      9. Select the format, size, and effect that you want.
      10. -
      11. Click on the "Download" button to save the flag image to your device.
      12. -
      13. Alternatively, you can copy the embed code or the API/CDN link to use the flag image on your website.
      14. -
      -

      From Countryflags.com

      -

      To download country flags images from countryflags.com, follow these steps:

      -
        -
      1. Go to countryflags.com and browse or search for the country flag that you want.
      2. -
      3. Click on the country flag to open its page.
      4. -
      5. Select the file format (JPG or PNG) and the size (16px up to 2500px) that you want.
      6. -
      7. Right-click on the flag image and select "Save image as" to save it to your device.
      8. -
      9. Alternatively, you can go to the downloads page and download all the flag files in one ZIP file.
      10. -
      -

      Conclusion

      -

      In this article, we have shown you how to download flags of all countries for free. We have explained why you might need country flags images, where to find them, and how to download them. We have also given you some tips and examples for using country flags images for different purposes. We hope that this article has been helpful and informative for you. If you have any questions or feedback, please feel free to leave a comment below. Thank you for reading!

      - FAQs
        -
      • Q: How many countries are there in the world?
      • -
      • A: There is no definitive answer to this question, as different sources may have different criteria for defining a country. However, according to the United Nations, there are 193 member states and 2 observer states as of 2021.
      • -
      • Q: What are some of the benefits of using vector images over bitmap images?
      • -
      • A: Vector images are scalable and editable, which means that they can be resized without losing quality or modified using vector graphics software. Bitmap images are fixed and pixelated, which means that they may lose quality or become distorted when resized or edited.
      • -
      • Q: What are some of the effects that I can apply to flag images?
      • -
      • A: Some of the effects that you can apply to flag images are glossy, rounded corners, shadow, grayscale, sepia, negative, etc. You can preview how they look before downloading them.
      • -
• Q: How can I use the embed or download programmatically option to integrate flag images into my website?
      • -
• A: You can use the embed or download programmatically option to integrate flag images into your website using the API or CDN. The API is a way of requesting flag images by country code or name using a URL. The CDN is a way of loading flag images faster from a server near you using a URL. You can copy the embed code or the API/CDN link and paste it into your website's HTML code.
      • -
      • Q: How can I download all the flag files in one ZIP file from countryflags.com?
      • -
      • A: You can download all the flag files in one ZIP file from countryflags.com by going to the downloads page and choosing from three options: all country flags, all continent flags, or all U.S. state flags. You can also choose the file format (JPG or PNG) and the size (16px up to 2500px) that you want. Then, click on the "Download" button to save the ZIP file to your device.
      • -
      -

      197e85843d
      -
      -
      \ No newline at end of file diff --git a/spaces/fatiXbelha/sd/Bus Simulator 2023 MOD APK How to Unlock All Buses and Routes.md b/spaces/fatiXbelha/sd/Bus Simulator 2023 MOD APK How to Unlock All Buses and Routes.md deleted file mode 100644 index 0eac3b8336d31b86c7366d0e0e248ee9dcca27a1..0000000000000000000000000000000000000000 --- a/spaces/fatiXbelha/sd/Bus Simulator 2023 MOD APK How to Unlock All Buses and Routes.md +++ /dev/null @@ -1,99 +0,0 @@ -
      -

      Bus Sim 2023 Mod APK: A Realistic and Fun Driving Simulator

      -

      Do you love driving buses and transporting passengers? Do you want to experience the thrill of driving in different cities and environments? Do you want to customize your bus and challenge other players online? If you answered yes to any of these questions, then you should try Bus Sim 2023, a realistic and fun driving simulator game for Android devices. And if you want to make the game even more exciting, you should download Bus Sim 2023 Mod APK, a modified version of the game that gives you unlimited money, coins, and access to all buses and routes. In this article, we will tell you what Bus Sim 2023 is, what features it has, what benefits you can get from Bus Sim 2023 Mod APK, and how to download and install it on your device.

      -

      What is Bus Sim 2023?

      -

      Bus Sim 2023 is a driving simulator game developed by Ovidiu Pop, a popular developer of simulation games. In this game, you can play as a bus driver, picking up people at bus stops and transporting them along the route. You can choose from different types of buses, such as city buses, school buses, double-decker buses, and more. You can also drive in different cities and environments, such as New York, London, Paris, Berlin, Rome, Tokyo, Sydney, and more. You can experience realistic graphics and physics, as well as weather effects, traffic conditions, day and night cycles, and more. You can also customize your bus with various paint jobs, stickers, accessories, and more. You can also adjust the controls and settings to suit your preferences. You can also play online with other players in multiplayer mode, where you can compete in races, join bus clubs, chat with other drivers, and climb the leaderboards.

      -

      bus sim 2023 mod apk


      Download ★★★★★ https://urllie.com/2uNx8Q



      -

      Features of Bus Sim 2023

      -

      - Realistic graphics and physics

      -

      Bus Sim 2023 has stunning graphics that make you feel like you are driving a real bus. You can see the details of the bus interior and exterior, as well as the passengers, buildings, roads, vehicles, trees, animals, and more. You can also feel the realistic physics of the bus movement, such as acceleration, braking, steering, suspension, collision, damage, and more. You can also experience different weather effects, such as rain, snow, fog, wind, thunderstorm, etc., as well as different traffic conditions, such as traffic lights, signs, signals, pedestrians, cars, trucks, bikes, etc.

      -

      - Various buses and routes

      -

      Bus Sim 2023 has a wide variety of buses that you can choose from. You can drive city buses, school buses, double-decker buses, articulated buses, electric buses, and more. Each bus has its own characteristics, such as speed, capacity, fuel consumption, handling, etc. You can also drive in different cities and environments around the world, such as New York, London, Paris, Berlin, Rome, Tokyo, Sydney, and more. Each city and environment has its own landmarks, scenery, culture, and challenges.

      -

      - Custom

      - Customizable controls and settings

      -

      Bus Sim 2023 allows you to customize the controls and settings of the game to suit your preferences. You can choose from different control options, such as tilt, buttons, steering wheel, or joystick. You can also adjust the sensitivity, vibration, and feedback of the controls. You can also change the camera angle, sound volume, language, and other settings of the game. You can also enable or disable the realistic features, such as traffic rules, speed limit, fuel consumption, damage, etc.

      -

      - Multiplayer mode and leaderboards

      -

      Bus Sim 2023 also has a multiplayer mode where you can play online with other players from around the world. You can join or create a bus club, where you can chat with other drivers, share tips and tricks, and cooperate in missions and challenges. You can also compete in races, where you can show off your driving skills and speed. You can also climb the leaderboards, where you can see your rank and stats compared to other players. You can also earn rewards and achievements for your performance in multiplayer mode.

      -

      What is Bus Sim 2023 Mod APK?

      -

      Bus Sim 2023 Mod APK is a modified version of Bus Sim 2023 that gives you some extra benefits that are not available in the original game. By downloading and installing Bus Sim 2023 Mod APK, you can enjoy unlimited money and coins, all buses and routes unlocked, no ads and pop-ups, and more. These benefits will make your gameplay more enjoyable and easier.

      -

      Benefits of Bus Sim 2023 Mod APK

      -

      - Unlimited money and coins

      -

      With Bus Sim 2023 Mod APK, you will have unlimited money and coins in your account. This means that you can buy any bus you want, upgrade it with any accessories you like, and refill your fuel whenever you need. You can also buy any paint job, sticker, or decoration for your bus. You don't have to worry about running out of money or coins in the game.

      -

      - All buses and routes unlocked

      -

      With Bus Sim 2023 Mod APK, you will have access to all buses and routes in the game. This means that you can drive any bus you want, in any city or environment you want. You don't have to complete any missions or challenges to unlock them. You can explore the whole world of Bus Sim 2023 without any restrictions.

      -

      bus simulator 2023 mod apk unlimited money
      -bus sim 2023 mod apk download for android
      -bus simulator 2023 hack mod apk
      -bus sim 2023 mod apk latest version
      -bus simulator 2023 premium mod apk
      -bus sim 2023 mod apk free shopping
      -bus simulator 2023 mod apk revdl
      -bus sim 2023 mod apk offline
      -bus simulator 2023 pro mod apk
      -bus sim 2023 mod apk android 1
      -bus simulator 2023 mod apk obb
      -bus sim 2023 mod apk rexdl
      -bus simulator 2023 mega mod apk
      -bus sim 2023 mod apk unlimited xp
      -bus simulator 2023 mod apk an1
      -bus sim 2023 mod apk no ads
      -bus simulator 2023 mod apk happymod
      -bus sim 2023 mod apk unlocked all buses
      -bus simulator 2023 vip mod apk
      -bus sim 2023 mod apk unlimited fuel
      -bus simulator 2023 mod apk data
      -bus sim 2023 mod apk unlimited everything
      -bus simulator 2023 full mod apk
      -bus sim 2023 mod apk all levels unlocked
      -bus simulator 2023 real mod apk
      -bus sim 2023 mod apk unlimited coins and gems
      -bus simulator 2023 hd mod apk
      -bus sim 2023 mod apk new update
      -bus simulator 2023 ultimate mod apk
      -bus sim 2023 mod apk unlimited tickets
      -bus simulator 2023 world tour mod apk
      -bus sim 2023 mod apk high graphics
      -bus simulator 2023 multiplayer mod apk
      -bus sim 2023 mod apk cheat menu
      -bus simulator 2023 indonesia mod apk
      -bus sim 2023 mod apk no root
      -bus simulator 2023 europe mod apk
      -bus sim 2023 mod apk god mode
      -bus simulator 2023 original mod apk
      -bus sim 2023 mod apk old version
      -bus simulator 2023 india mod apk
      -bus sim 2023 mod apk low mb
      -bus simulator 2023 coach driving game mod apk
      -bus sim 2023 mod apk without verification
      -bus simulator 2023 usa edition mod apk
      -bus sim 2023 mod apk for pc
      -bus simulator 2023 city driving game mod apk
      -bus sim 2023 mod apk with license verification removed

      -

      - No ads and pop-ups

      -

      With Bus Sim 2023 Mod APK, you will not see any ads or pop-ups in the game. This means that you can play the game without any interruptions or distractions. You don't have to watch any videos or click on any banners to get extra money or coins. You can enjoy the game without any annoying ads or pop-ups.

      -

      How to Download and Install Bus Sim 2023 Mod APK?

      -

      If you want to download and install Bus Sim 2023 Mod APK on your device, you need to follow some simple steps. Here are the steps to download and install Bus Sim 2023 Mod APK:

      -

      Steps to Download and Install Bus Sim 2023 Mod APK

      -

      - Step 1: Enable unknown sources on your device

      -

      Before you can install Bus Sim 2023 Mod APK on your device, you need to enable unknown sources on your device. This will allow you to install apps that are not from the Google Play Store. To enable unknown sources on your device, go to Settings > Security > Unknown Sources and toggle it on.

      -

      - Step 2: Download the mod apk file from a trusted source

      -

      Next, you need to download the mod apk file from a trusted source. There are many websites that offer mod apk files for various games, but not all of them are safe and reliable. Some of them may contain viruses or malware that can harm your device or steal your data. Therefore, you need to be careful when choosing a source to download the mod apk file from. One of the trusted sources that we recommend is [BusSim2023ModAPK.com], where you can find the latest version of Bus Sim 2023 Mod APK with all the benefits mentioned above.

      -

      - Step 3: Locate and install the mod apk file on your device

      -

      After downloading the mod apk file from a trusted source, you need to locate and install it on your device. To do this, go to your file manager app and find the folder where you downloaded the mod apk file. Tap on the file and follow the instructions on the screen to install it on your device.

      -

      - Step 4:

      - Step 4: Launch the game and enjoy the mod features

      -

      Finally, you can launch the game and enjoy the mod features. To do this, go to your app drawer and find the Bus Sim 2023 icon. Tap on it and wait for the game to load. You will see that you have unlimited money and coins, all buses and routes unlocked, no ads and pop-ups, and more. You can now play the game as you wish, without any limitations or restrictions.

      -

      Conclusion

      -

      Bus Sim 2023 is a realistic and fun driving simulator game that lets you drive various buses in different cities and environments. You can customize your bus, adjust the controls and settings, and play online with other players. However, if you want to make the game more exciting and easier, you should download Bus Sim 2023 Mod APK, a modified version of the game that gives you unlimited money, coins, and access to all buses and routes. You can download and install Bus Sim 2023 Mod APK by following the steps we have provided in this article. We hope you enjoy playing Bus Sim 2023 Mod APK and have a great time driving your bus.

      -

      FAQs

      -

      Here are some frequently asked questions about Bus Sim 2023 Mod APK:

      -

      - Is Bus Sim 2023 Mod APK safe to download and install?

      -

      Yes, Bus Sim 2023 Mod APK is safe to download and install, as long as you get it from a trusted source like [BusSim2023ModAPK.com]. However, you should always be careful when downloading and installing any mod apk file from unknown sources, as they may contain viruses or malware that can harm your device or steal your data.

      -

      - Do I need to root my device to use Bus Sim 2023 Mod APK?

      -

      No, you do not need to root your device to use Bus Sim 2023 Mod APK. You can use it on any Android device that meets the minimum requirements of the game.

      -

      - Will I get banned from the game if I use Bus Sim 2023 Mod APK?

      -

      No, you will not get banned from the game if you use Bus Sim 2023 Mod APK. The mod apk file is designed to bypass the security checks of the game and prevent detection. However, you should always use the mod apk file at your own risk, as we cannot guarantee that it will work forever or that it will not cause any problems with your device or account.

      -

      - Can I update Bus Sim 2023 Mod APK?

      -

      Yes, you can update Bus Sim 2023 Mod APK whenever there is a new version available. However, you should always check the source of the update and make sure that it is compatible with your device and account. You should also backup your data before updating, in case something goes wrong.

      -

      - Can I play offline with Bus Sim 2023 Mod APK?

      -

      Yes, you can play offline with Bus Sim 2023 Mod APK. You can drive any bus in any city or environment without an internet connection. However, you will not be able to access some features of the game, such as multiplayer mode, leaderboards, achievements, etc., when playing offline.

      401be4b1e0
      -
      -
      \ No newline at end of file diff --git a/spaces/fatiXbelha/sd/Farlight 84 APK - The Latest and Greatest Battle Royale Game for Android.md b/spaces/fatiXbelha/sd/Farlight 84 APK - The Latest and Greatest Battle Royale Game for Android.md deleted file mode 100644 index f0237891a110633653a2722a14a403ed11f0cd8c..0000000000000000000000000000000000000000 --- a/spaces/fatiXbelha/sd/Farlight 84 APK - The Latest and Greatest Battle Royale Game for Android.md +++ /dev/null @@ -1,98 +0,0 @@ - -

      Farlight 84 APK Download: How to Play the Latest Battle Royale Game on Android

      -

      If you are looking for a new and exciting battle royale game to play on your Android device, you might want to check out Farlight 84. This game is a post-apocalyptic hero shooter that features jetpacks, vehicles, and a variety of game modes. In this article, we will show you how to download and install Farlight 84 APK on your Android device, as well as how to play the game and some tips and tricks to win more matches.

      -

      What is Farlight 84?

      -

      Farlight 84 is a multiplayer game that is set in a near-future world in 2084. The game allows players to experience the thrill of commanding armored vehicles with powerful offensive capabilities. Fly and dash around the battlefield using jetpacks, and unleash an arsenal of ingenious weapons from four different manufacturers.

      -

      farlight 84 apk download


      Download File ✏ ✏ ✏ https://urllie.com/2uNDoO



      -

      A post-apocalyptic hero shooter with jetpacks and vehicles

      -

      The game has a unique setting that combines sci-fi and dystopian elements. The world is full of crises and challenges, and players have to fight for survival and glory. The game features jetpacks that allow players to move vertically or horizontally, as well as vehicles that provide protection and firepower. Players can also use auto-drive mode to focus on shooting while driving.

      -

      A game with diverse modes, characters, and weapons

      -

      The game offers a variety of game modes, such as battle royale, team deathmatch, hunt, solo deathmatch, treasure war, and more. Each mode has its own rules and objectives, and players can choose the one that suits their preferences. The game also has 14 heroes, each with unique skills and abilities that can be used in combat. Players can customize their heroes with skins and accessories. Moreover, the game has a wide range of weapons, from pistols and rifles to rocket launchers and flamethrowers. Players can loot and upgrade their weapons during the match.

      -

      How to download and install Farlight 84 APK on Android?

      -

      If you want to play Farlight 84 on your Android device, you will need to download and install the APK file of the game. Here are the steps to do so:

      -

      Download the APK file from a trusted source

      -

      The first step is to download the APK file of Farlight 84 from a reliable source. You can use the link below to download the latest version of the game from APKCombo.com, which is a safe and secure website that offers free APK downloads.

      -

Download Farlight 84 APK

      -

      Enable unknown sources on your device

      -

      The next step is to enable unknown sources on your device. This will allow you to install apps that are not from the Google Play Store. To do this, follow these steps:

      -
        -
      • Go to Settings > Security > Unknown Sources.
      • -
      • Toggle on the option to allow installation of apps from unknown sources.
      • -
      • Tap OK to confirm.
      • -
      -

      Install the APK file and launch the game

      -

      The final step is to install the APK file and launch the game. To do this, follow these steps:

      -

      farlight 84 apk download latest version
      -farlight 84 apk download free for android
      -farlight 84 apk download mod unlimited money
      -farlight 84 apk download no verification
      -farlight 84 apk download offline installer
      -farlight 84 apk download obb file
      -farlight 84 apk download pc windows 10
      -farlight 84 apk download rexdl
      -farlight 84 apk download revdl
      -farlight 84 apk download uptodown
      -farlight 84 apk download with data
      -farlight 84 apk download xapk
      -farlight 84 apk download youtube
      -farlight 84 apk download zip file
      -farlight 84 battle royale game apk download
      -farlight 84 beta version apk download
      -farlight 84 full game apk download
      -farlight 84 game free fire apk download
      -farlight 84 hack version apk download
      -farlight 84 latest update apk download
      -farlight 84 mod menu apk download
      -farlight 84 new update apk download
      -farlight 84 official game apk download
      -farlight 84 original game apk download
      -farlight 84 pro version apk download
      -farlight 84 shooter game apk download
      -farlight 84 unlimited coins apk download
      -how to download and install farlight 84 apk
      -how to play farlight 84 on android apk download
      -how to update farlight 84 on android apk download

      -
        -
      • Locate the downloaded APK file on your device. You can use a file manager app to find it.
      • -
      • Tap on the APK file and follow the instructions to install it.
      • -
      • Wait for the installation to complete and then tap on the game icon to launch it.
      • -
      -

      Congratulations, you have successfully installed Farlight 84 APK on your Android device. You can now enjoy playing the game and explore its features.

      -

      How to play Farlight 84 on Android?

      -

      Now that you have installed the game, you might be wondering how to play it. Here are some basic tips and steps to help you get started:

      -

      Choose your game mode and character

      -

      The first thing you need to do is to choose your game mode and character. You can access the game mode selection screen by tapping on the play button on the main menu. You can then choose from various modes, such as battle royale, team deathmatch, hunt, solo deathmatch, treasure war, and more. Each mode has its own rules and objectives, so make sure you read them before joining a match.

      -

      After choosing your game mode, you can also choose your character. You can swipe left or right to browse through the available heroes, each with unique skills and abilities. You can also tap on the hero icon to see their stats and skills. You can customize your hero with skins and accessories by tapping on the wardrobe button. You can also change your hero name by tapping on the edit button.

      -

      Customize your controls and settings

      -

      The next thing you need to do is to customize your controls and settings. You can access the settings menu by tapping on the gear icon on the top right corner of the screen. You can then adjust various options, such as graphics, sound, language, sensitivity, aim assist, auto-fire, and more. You can also customize your controls by tapping on the control button. You can drag and resize the buttons to suit your preference.

      -

      Use your jetpack, vehicle, and weapons wisely

      -

      The last thing you need to do is to use your jetpack, vehicle, and weapons wisely. You can use your jetpack by tapping on the jetpack button on the right side of the screen. You can fly or dash in any direction using the joystick. You can also use your vehicle by tapping on the vehicle button on the left side of the screen. You can drive or shoot using the joystick and buttons. You can also use auto-drive mode by tapping on the auto button.

      -

      You can use your weapons by tapping on the fire button on the right side of the screen. You can switch between different weapons by tapping on the weapon icons on the bottom of the screen. You can also loot and upgrade your weapons during the match by finding crates and stations. You can use your character's skills by tapping on the skill buttons on the left side of the screen. Each skill has a cooldown time, so use them wisely.

      -

      Tips and tricks to win more matches in Farlight 84

      -

      If you want to win more matches in Farlight 84, you will need some tips and tricks to improve your skills and strategy. Here are some of them:

      -

      Pick the right gun for your playstyle

      -

      The game has a wide range of weapons, from pistols and rifles to rocket launchers and flamethrowers. Each weapon has its own advantages and disadvantages, such as damage, range, accuracy, fire rate, reload speed, magazine size, and recoil. You should pick the right gun for your playstyle and situation. For example, if you like to snipe from a distance, you should use a sniper rifle or a marksman rifle. If you like to rush into close combat, you should use a shotgun or a submachine gun.

      -

      Use your character's skills effectively

      -

      The game has 14 heroes, each with unique skills and abilities that can be used in combat. Each skill has a cooldown time, so use them effectively. For example, if you are playing as Blaze, you can use his Fireball skill to deal damage and burn enemies in a large area. If you are playing as Luna, you can use her Shield skill to protect yourself and your allies from enemy fire.

      -

      Improve your movement and aim

      -

      The game requires good movement and aim skills to survive and eliminate enemies. You should practice using your jetpack and vehicle to move around the map quickly and avoid enemy fire. You should also practice using your weapons to aim accurately and shoot precisely at enemies. You can use aim assist or auto-fire options to help you with aiming, but you should also try to improve your aim skills by practicing and adjusting your sensitivity settings.

      -

      Loot and upgrade your gear

      -

      The game has a loot and upgrade system that allows you to find and enhance your gear during the match. You can find crates and stations that contain weapons, ammo, health kits, armor, and other items. You can also use stations to upgrade your weapons and armor to increase their stats and effects. You should loot and upgrade your gear as much as possible to gain an edge over your enemies.

      -

      Team up and communicate with your squad

      -

      The game has a team-based mode that allows you to play with your friends or other players in a squad of four. You can invite or join a squad by tapping on the squad button on the main menu. You can also use the voice chat or text chat features to communicate with your squad members. You should team up and communicate with your squad to coordinate your actions, share information, and support each other.

      -

      Conclusion

      -

      Farlight 84 is a fun and exciting battle royale game that you can play on your Android device. You can download and install the APK file of the game from a trusted source, and then enjoy the game's features, such as jetpacks, vehicles, diverse modes, characters, and weapons. You can also use some tips and tricks to improve your skills and strategy, such as picking the right gun, using your character's skills, improving your movement and aim, looting and upgrading your gear, and teaming up and communicating with your squad. We hope this article has helped you learn how to play Farlight 84 on Android. Have fun and good luck!

      -

      FAQs

      -

      Here are some frequently asked questions about Farlight 84:

      -

      Is Farlight 84 free to play?

      -

      Yes, Farlight 84 is free to play on Android devices. However, the game may contain in-app purchases that allow you to buy items or currency with real money.

      -

      Is Farlight 84 available on iOS devices?

      -

      No, Farlight 84 is not available on iOS devices at the moment. The game is only compatible with Android devices that have Android 5.0 or higher.

      -

      How can I update Farlight 84 APK?

      -

      You can update Farlight 84 APK by downloading the latest version of the APK file from the same source that you downloaded it from before. You can then install the new APK file over the old one without losing your data.

      -

      How can I report bugs or issues in Farlight 84?

      -

      You can report bugs or issues in Farlight 84 by contacting the game's customer service team. You can do this by tapping on the feedback button on the settings menu, or by sending an email to support@miraclegames.com.

      -

      How can I get more information about Farlight 84?

      -

      You can get more information about Farlight 84 by visiting the game's official website, Facebook page, Twitter account, or YouTube channel. You can also join the game's Discord server to chat with other players and developers.

      401be4b1e0
      -
      -
      \ No newline at end of file diff --git a/spaces/fb700/chatglm-fitness-RLHF/src/face3d/models/arcface_torch/backbones/mobilefacenet.py b/spaces/fb700/chatglm-fitness-RLHF/src/face3d/models/arcface_torch/backbones/mobilefacenet.py deleted file mode 100644 index 87731491d76f9ff61cc70e57bb3f18c54fae308c..0000000000000000000000000000000000000000 --- a/spaces/fb700/chatglm-fitness-RLHF/src/face3d/models/arcface_torch/backbones/mobilefacenet.py +++ /dev/null @@ -1,130 +0,0 @@ -''' -Adapted from https://github.com/cavalleria/cavaface.pytorch/blob/master/backbone/mobilefacenet.py -Original author cavalleria -''' - -import torch.nn as nn -from torch.nn import Linear, Conv2d, BatchNorm1d, BatchNorm2d, PReLU, Sequential, Module -import torch - - -class Flatten(Module): - def forward(self, x): - return x.view(x.size(0), -1) - - -class ConvBlock(Module): - def __init__(self, in_c, out_c, kernel=(1, 1), stride=(1, 1), padding=(0, 0), groups=1): - super(ConvBlock, self).__init__() - self.layers = nn.Sequential( - Conv2d(in_c, out_c, kernel, groups=groups, stride=stride, padding=padding, bias=False), - BatchNorm2d(num_features=out_c), - PReLU(num_parameters=out_c) - ) - - def forward(self, x): - return self.layers(x) - - -class LinearBlock(Module): - def __init__(self, in_c, out_c, kernel=(1, 1), stride=(1, 1), padding=(0, 0), groups=1): - super(LinearBlock, self).__init__() - self.layers = nn.Sequential( - Conv2d(in_c, out_c, kernel, stride, padding, groups=groups, bias=False), - BatchNorm2d(num_features=out_c) - ) - - def forward(self, x): - return self.layers(x) - - -class DepthWise(Module): - def __init__(self, in_c, out_c, residual=False, kernel=(3, 3), stride=(2, 2), padding=(1, 1), groups=1): - super(DepthWise, self).__init__() - self.residual = residual - self.layers = nn.Sequential( - ConvBlock(in_c, out_c=groups, kernel=(1, 1), padding=(0, 0), stride=(1, 1)), - ConvBlock(groups, groups, groups=groups, kernel=kernel, padding=padding, stride=stride), - LinearBlock(groups, out_c, kernel=(1, 1), padding=(0, 0), stride=(1, 1)) - ) - - def forward(self, x): - short_cut = None - if self.residual: - short_cut = x - x = self.layers(x) - if self.residual: - output = short_cut + x - else: - output = x - return output - - -class Residual(Module): - def __init__(self, c, num_block, groups, kernel=(3, 3), stride=(1, 1), padding=(1, 1)): - super(Residual, self).__init__() - modules = [] - for _ in range(num_block): - modules.append(DepthWise(c, c, True, kernel, stride, padding, groups)) - self.layers = Sequential(*modules) - - def forward(self, x): - return self.layers(x) - - -class GDC(Module): - def __init__(self, embedding_size): - super(GDC, self).__init__() - self.layers = nn.Sequential( - LinearBlock(512, 512, groups=512, kernel=(7, 7), stride=(1, 1), padding=(0, 0)), - Flatten(), - Linear(512, embedding_size, bias=False), - BatchNorm1d(embedding_size)) - - def forward(self, x): - return self.layers(x) - - -class MobileFaceNet(Module): - def __init__(self, fp16=False, num_features=512): - super(MobileFaceNet, self).__init__() - scale = 2 - self.fp16 = fp16 - self.layers = nn.Sequential( - ConvBlock(3, 64 * scale, kernel=(3, 3), stride=(2, 2), padding=(1, 1)), - ConvBlock(64 * scale, 64 * scale, kernel=(3, 3), stride=(1, 1), padding=(1, 1), groups=64), - DepthWise(64 * scale, 64 * scale, kernel=(3, 3), stride=(2, 2), padding=(1, 1), groups=128), - Residual(64 * scale, num_block=4, groups=128, kernel=(3, 3), stride=(1, 1), padding=(1, 1)), - DepthWise(64 * scale, 128 * scale, kernel=(3, 3), stride=(2, 
2), padding=(1, 1), groups=256), - Residual(128 * scale, num_block=6, groups=256, kernel=(3, 3), stride=(1, 1), padding=(1, 1)), - DepthWise(128 * scale, 128 * scale, kernel=(3, 3), stride=(2, 2), padding=(1, 1), groups=512), - Residual(128 * scale, num_block=2, groups=256, kernel=(3, 3), stride=(1, 1), padding=(1, 1)), - ) - self.conv_sep = ConvBlock(128 * scale, 512, kernel=(1, 1), stride=(1, 1), padding=(0, 0)) - self.features = GDC(num_features) - self._initialize_weights() - - def _initialize_weights(self): - for m in self.modules(): - if isinstance(m, nn.Conv2d): - nn.init.kaiming_normal_(m.weight, mode='fan_out', nonlinearity='relu') - if m.bias is not None: - m.bias.data.zero_() - elif isinstance(m, nn.BatchNorm2d): - m.weight.data.fill_(1) - m.bias.data.zero_() - elif isinstance(m, nn.Linear): - nn.init.kaiming_normal_(m.weight, mode='fan_out', nonlinearity='relu') - if m.bias is not None: - m.bias.data.zero_() - - def forward(self, x): - with torch.cuda.amp.autocast(self.fp16): - x = self.layers(x) - x = self.conv_sep(x.float() if self.fp16 else x) - x = self.features(x) - return x - - -def get_mbf(fp16, num_features): - return MobileFaceNet(fp16, num_features) \ No newline at end of file diff --git a/spaces/fclong/summary/fengshen/examples/zen2_finetune/ner_zen2_large_cmeee.sh b/spaces/fclong/summary/fengshen/examples/zen2_finetune/ner_zen2_large_cmeee.sh deleted file mode 100644 index 02409b04501bf6155481673b3acd0bd22914d3f3..0000000000000000000000000000000000000000 --- a/spaces/fclong/summary/fengshen/examples/zen2_finetune/ner_zen2_large_cmeee.sh +++ /dev/null @@ -1,91 +0,0 @@ -#!/bin/bash -#SBATCH --job-name=zen2_large_cmeee # create a short name for your job -#SBATCH --nodes=1 # node count -#SBATCH --ntasks=1 # total number of tasks across all nodes -#SBATCH --cpus-per-task=30 # cpu-cores per task (>1 if multi-threaded tasks) -#SBATCH --gres=gpu:1 # number of gpus per node -#SBATCH --mail-type=ALL # send email when job begins, ends or failed etc. -#SBATCH -o /cognitive_comp/ganruyi/experiments/ner_finetune/zen2_large_cmeee/%x-%j.log # output and error file name (%x=job name, %j=job id) - - -# export CUDA_VISIBLE_DEVICES='2' -export TORCH_EXTENSIONS_DIR=/cognitive_comp/ganruyi/tmp/torch_extendsions - -MODEL_NAME=zen2_large - -TASK=cmeee - -ZERO_STAGE=1 -STRATEGY=deepspeed_stage_${ZERO_STAGE} - -ROOT_DIR=/cognitive_comp/ganruyi/experiments/ner_finetune/${MODEL_NAME}_${TASK} -if [ ! -d ${ROOT_DIR} ];then - mkdir -p ${ROOT_DIR} - echo ${ROOT_DIR} created!!!!!!!!!!!!!! -else - echo ${ROOT_DIR} exist!!!!!!!!!!!!!!! 
-fi - -DATA_DIR=/cognitive_comp/lujunyu/data_zh/NER_Aligned/CMeEE/ -PRETRAINED_MODEL_PATH=/cognitive_comp/ganruyi/hf_models/zen/zh_zen_large_2.0 - -CHECKPOINT_PATH=${ROOT_DIR}/ckpt/ -OUTPUT_PATH=${ROOT_DIR}/predict.json - -DATA_ARGS="\ - --data_dir $DATA_DIR \ - --train_data train.char.bio \ - --valid_data dev.char.bio \ - --test_data dev.char.bio \ - --train_batchsize 16 \ - --valid_batchsize 16 \ - --max_seq_length 256 \ - --task_name cmeee \ - " - -MODEL_ARGS="\ - --learning_rate 3e-5 \ - --weight_decay 0.1 \ - --warmup_ratio 0.01 \ - --markup bio \ - --middle_prefix I- \ - " - -MODEL_CHECKPOINT_ARGS="\ - --monitor val_f1 \ - --save_top_k 3 \ - --mode max \ - --every_n_train_steps 100 \ - --save_weights_only True \ - --dirpath $CHECKPOINT_PATH \ - --filename model-{epoch:02d}-{val_f1:.4f} \ - " - -TRAINER_ARGS="\ - --max_epochs 30 \ - --gpus 1 \ - --check_val_every_n_epoch 1 \ - --val_check_interval 200 \ - --default_root_dir $ROOT_DIR \ - " - - -options=" \ - --pretrained_model_path $PRETRAINED_MODEL_PATH \ - --vocab_file $PRETRAINED_MODEL_PATH/vocab.txt \ - --do_lower_case \ - --output_save_path $OUTPUT_PATH \ - $DATA_ARGS \ - $MODEL_ARGS \ - $MODEL_CHECKPOINT_ARGS \ - $TRAINER_ARGS \ -" -SCRIPT_PATH=/cognitive_comp/ganruyi/Fengshenbang-LM/fengshen/examples/zen2_finetune/fengshen_token_level_ft_task.py -/home/ganruyi/anaconda3/bin/python $SCRIPT_PATH $options - -# SINGULARITY_PATH=/cognitive_comp/ganruyi/pytorch21_06_py3_docker_image_v2.sif -# python3 $SCRIPT_PATH $options -# source activate base -# singularity exec --nv -B /cognitive_comp/:/cognitive_comp/ $SINGULARITY_PATH /home/ganruyi/anaconda3/bin/python $SCRIPT_PATH $options -# /home/ganruyi/anaconda3/bin/python $SCRIPT_PATH $options - diff --git a/spaces/fengmuxi/ChatGpt-Web/app/api/user/info/route.ts b/spaces/fengmuxi/ChatGpt-Web/app/api/user/info/route.ts deleted file mode 100644 index e479d0506c310c17931869c05427b04bec2d89b0..0000000000000000000000000000000000000000 --- a/spaces/fengmuxi/ChatGpt-Web/app/api/user/info/route.ts +++ /dev/null @@ -1,35 +0,0 @@ -import { NextRequest, NextResponse } from "next/server"; -import { auth, getIP } from "../../auth"; - -export async function POST(req: NextRequest) { - try { - const authResult = auth(req); - if (authResult.error) { - return NextResponse.json(authResult, { - status: 401, - }); - } - const token=req.headers.get("auth") ?? "" - let res=await fetch("https://eladmin.dwzynj.top/api/users/getInfo", { - method: "GET", - headers:{ - "Authorization":token, - "UserIp": String(getIP(req)) - } - }) - if(res.status==401){ - let msg={ - flag:false, - msg:"未登录!" - } - console.log(res.status) - return new Response(JSON.stringify(msg)) - } - let msg=await res.json() - // console.log(msg) - return new Response(JSON.stringify(msg)) - } catch (e) { - console.error("[eladmin] ", e); - return new Response(JSON.stringify(e)); - } -} diff --git a/spaces/feregVcuzo/sanity-test-midi/checkpoint/Download One Piece World Seeker and Unleash the Power of the Gum-Gum Fruit.md b/spaces/feregVcuzo/sanity-test-midi/checkpoint/Download One Piece World Seeker and Unleash the Power of the Gum-Gum Fruit.md deleted file mode 100644 index e2ff98a06a389a52855295a2ca5cc4ff28076e4a..0000000000000000000000000000000000000000 --- a/spaces/feregVcuzo/sanity-test-midi/checkpoint/Download One Piece World Seeker and Unleash the Power of the Gum-Gum Fruit.md +++ /dev/null @@ -1,132 +0,0 @@ -
      -

      How to Download One Piece World Seeker

      -

      If you are a fan of One Piece, the popular manga and anime series by Eiichiro Oda, you might be interested in playing One Piece World Seeker, an action-adventure video game based on the franchise. In this article, we will tell you everything you need to know about this game, including what it is, why you should play it, where you can download it, and what are the different editions available. So, let's get started!

      -




      -

      What is One Piece World Seeker?

      -

      One Piece World Seeker is a video game developed by Ganbarion and published by Bandai Namco Entertainment. It is the first video game in the franchise to feature an open world environment, where you can explore a vast and seamless island as Monkey D. Luffy, the protagonist of One Piece. The game was released on March 15, 2019 for PC, PlayStation 4, and Xbox One.

      -

      Why should you play One Piece World Seeker?

      -

      There are many reasons why you should play One Piece World Seeker, but here are some of the main ones:

      -
• The Freedom of the Pirate King: You can experience the powers of Luffy's Gum-Gum fruit, which allows him to stretch his limbs and swing into action. You can also use his powerful Haki abilities to sense enemies and unleash devastating attacks. You can explore different areas of the island, such as cities, farms, beaches, mines, and prisons, and interact with various objects and characters.
• An Original Story with Original Characters: The game features an original story that takes place on Prison Island, a mysterious island that is under the control of the Navy. The Straw Hat Pirates arrive on the island and get involved in a dramatic story full of twists and turns. The game also includes original characters designed by Eiichiro Oda himself, such as Jeanne, a young woman who leads a rebel group against the Navy; Isaac, a former Marine scientist who rules over Prison Island; and Karakuri Island Automata (KIA), mechanical soldiers created by Isaac.
• Fierce Battles Between Popular Characters: The game lets you fight against some of the most iconic enemies from the One Piece series, such as Crocodile, Rob Lucci, Akainu, Kizaru, and more. You can also encounter some of the allies and friends of Luffy, such as Sabo, Law, Hancock, and more. You can enjoy the dynamic and thrilling combat system that combines melee, ranged, and stealth attacks.

      Where can you download One Piece World Seeker?

      -

      One Piece World Seeker is available for download on PC, PlayStation 4, and Xbox One. Here are the steps on how to download the game for each platform:

      -

      Download One Piece World Seeker for PC

      -

      If you want to play One Piece World Seeker on your PC, you need to download it from Steam, the online gaming platform. Here are the steps to do so:

      -
1. Create a Steam account if you don't have one already. You can do this by visiting https://store.steampowered.com/join/ and following the instructions.
2. Download and install the Steam client on your PC. You can do this by visiting https://store.steampowered.com/about/ and clicking on the "Install Steam" button.
3. Launch the Steam client and log in with your Steam account.
4. Search for One Piece World Seeker in the Steam store. You can do this by typing the name of the game in the search bar or browsing through the categories.
5. Select One Piece World Seeker from the search results and click on the "Add to Cart" button.
6. Proceed to checkout and pay for the game using your preferred payment method.
7. Wait for the game to download and install on your PC. You can check the progress of the download in the "Library" section of the Steam client.
8. Once the game is installed, you can launch it from your Steam library and enjoy playing it.

      System Requirements for PC

      -

      Before you download One Piece World Seeker for PC, you need to make sure that your PC meets the minimum or recommended system requirements for the game. Here is a table of the system requirements for PC:

| Minimum | Recommended |
| --- | --- |
| OS: Windows 7 64-bit SP1 | OS: Windows 10 64-bit |
| Processor: Intel Core i5-2300 or AMD A10-7850K | Processor: Intel Core i7-3770 or AMD FX-8350 |
| Memory: 4 GB RAM | Memory: 8 GB RAM |
| Graphics: GeForce GTX 660 or Radeon HD 7950 | Graphics: GeForce GTX 1060 or Radeon RX 580 |
| DirectX: Version 11 | DirectX: Version 11 |
| Storage: 25 GB available space | Storage: 25 GB available space |
| Sound Card: DirectX compatible soundcard or onboard chipset | Sound Card: DirectX compatible soundcard or onboard chipset |

      Download One Piece World Seeker for PlayStation 4

      -

      If you want to play One Piece World Seeker on your PlayStation 4, you need to download it from PlayStation Store, the online gaming platform. Here are the steps to do so:

      -
1. Create a PlayStation Network account if you don't have one already. You can do this by visiting https://www.playstation.com/en-us/network/onlineid/ and following the instructions.
2. Download and install the PlayStation Store app on your PlayStation 4. You can do this by selecting "PlayStation Store" from the home screen of your console.
3. Launch the PlayStation Store app and log in with your PlayStation Network account.
4. Search for One Piece World Seeker in the PlayStation Store. You can do this by typing the name of the game in the search bar or browsing through the categories.
5. Select One Piece World Seeker from the search results and click on the "Add to Cart" button.
6. Proceed to checkout and pay for the game using your preferred payment method.
7. Wait for the game to download and install on your PlayStation 4. You can check the progress of the download in the "Notifications" section of your console.
8. Once the game is installed, you can launch it from your home screen and enjoy playing it.

      System Requirements for PlayStation 4

      -

      Before you download One Piece World Seeker for PlayStation 4, you need to make sure that your PlayStation 4 meets the minimum or recommended system requirements for the game. Here is a table of the system requirements for PlayStation 4:

      -

| Minimum | Recommended |
| --- | --- |
| OS: PlayStation 4 | OS: PlayStation 4 Pro |
| Processor: AMD Jaguar 8-core | Processor: AMD Jaguar 8-core |
| Memory: 8 GB GDDR5 | Memory: 8 GB GDDR5 |
| Graphics: AMD Radeon GCN 1.84 TFLOPS | Graphics: AMD Radeon GCN 4.2 TFLOPS |
| Storage: 25 GB available space | Storage: 25 GB available space |

      Download One Piece World Seeker for Xbox One

      -

      If you want to play One Piece World Seeker on your Xbox One, you need to download it from Microsoft Store, the online gaming platform. Here are the steps to do so:

      -
1. Create a Microsoft account if you don't have one already. You can do this by visiting https://account.microsoft.com/account and following the instructions.
2. Download and install the Microsoft Store app on your Xbox One. You can do this by selecting "Microsoft Store" from the home screen of your console.
3. Launch the Microsoft Store app and log in with your Microsoft account.
4. Search for One Piece World Seeker in the Microsoft Store. You can do this by typing the name of the game in the search bar or browsing through the categories.
5. Select One Piece World Seeker from the search results and click on the "Buy" button.
6. Proceed to checkout and pay for the game using your preferred payment method.
7. Wait for the game to download and install on your Xbox One. You can check the progress of the download in the "My games & apps" section of your console.
8. Once the game is installed, you can launch it from your home screen and enjoy playing it.

      System Requirements for Xbox One

      -

      Before you download One Piece World Seeker for Xbox One, you need to make sure that your Xbox One meets the minimum or recommended system requirements for the game. Here is a table of the system requirements for Xbox One:

| Minimum | Recommended |
| --- | --- |
| OS: Xbox One | OS: Xbox One X |
| Processor: AMD Jaguar 8-core | Processor: AMD Jaguar 8-core |
| Memory: 8 GB DDR3 | Memory: 12 GB GDDR5 |
| Graphics: AMD Radeon GCN 1.31 TFLOPS | Graphics: AMD Radeon GCN 6 TFLOPS |
| Storage: 25 GB available space | Storage: 25 GB available space |

      What are the different editions of One Piece World Seeker?

      -

      One Piece World Seeker has three different editions that you can choose from, depending on your budget and preferences. They are the standard edition, the deluxe edition, and the pirate king edition. Here is a comparison of what each edition offers:

      -

      Standard Edition

      -

      The standard edition of One Piece World Seeker is the basic version of the game that includes only the main game itself. It costs $59.99 USD. If you pre-ordered the standard edition, you also received some bonus items, such as a swimsuit outfit for Luffy, a military outfit for Luffy, and a quest called "Strange Island Rocks".

      -

      Deluxe Edition

      -

      The deluxe edition of One Piece World Seeker is an upgraded version of the game that includes not only the main game, but also an episode pass that gives you access to three additional episodes that expand the story and gameplay of the game. The episode pass also includes some extra items, such as a raid suit for Luffy, a kung fu outfit for Luffy, and a white suit outfit for Luffy. The deluxe edition costs $89.99 USD.

      -

      Pirate King Edition

      -

      The pirate king edition of One Piece World Seeker is the ultimate version of the game that includes everything from the deluxe edition, plus some exclusive physical items that are perfect for collectors and fans of One Piece. The pirate king edition includes a figurine of Luffy in his Gear Fourth form, a replica of Luffy's straw hat, a CD with selected tracks from the game's soundtrack, and a season pass that gives you access to all future DLCs for the game. The pirate king edition costs $129.99 USD.

      -

      Conclusion

      -

One Piece World Seeker is an impressive video game that lets you experience the world of One Piece like never before. You can explore a vast and beautiful island as Luffy, using his stretching and Haki abilities to take on iconic enemies and cross paths with familiar allies. You can also enjoy an original story with original characters designed by the creator of One Piece himself, Eiichiro Oda. The game is available for PC, PlayStation 4, and Xbox One, and comes in several editions that offer different content and bonuses. If you are looking for a fun and immersive game that will make you feel like the Pirate King, you should definitely try One Piece World Seeker. You won't regret it!

      -

      FAQs

      -

      Here are some of the frequently asked questions about One Piece World Seeker:

      -
• Q: How long is the game?
• A: The game's main story takes about 15 to 20 hours to complete, depending on your playstyle and difficulty level. The game also has many side quests and activities that can extend the gameplay time to over 40 hours.
• Q: Can you play as other characters besides Luffy?
• A: No, you can only play as Luffy in the game. However, you can interact with other characters from the One Piece series, and some of them will join you as support characters in combat.
• Q: Can you customize Luffy's appearance and skills?
• A: Yes, you can change Luffy's outfits and accessories in the game, as well as upgrade his skills and abilities using skill points that you earn by completing missions and defeating enemies.
• Q: Is the game multiplayer or co-op?
• A: No, the game is single-player only. There is no online or local multiplayer or co-op mode in the game.
• Q: Is the game canon to the One Piece series?
• A: The game is not canon to the One Piece series, but it is an original story that is supervised by Eiichiro Oda himself. The game takes place in an alternate timeline after the Whole Cake Island arc of the manga and anime.

      -
      -
      \ No newline at end of file diff --git a/spaces/feregVcuzo/sanity-test-midi/checkpoint/Download Poppy Playtime Chapter 2 and Face Mommy Long Legs on Mobile.md b/spaces/feregVcuzo/sanity-test-midi/checkpoint/Download Poppy Playtime Chapter 2 and Face Mommy Long Legs on Mobile.md deleted file mode 100644 index cf9174117f890fc3e4e3d2bf97ab6b2550e0a89c..0000000000000000000000000000000000000000 --- a/spaces/feregVcuzo/sanity-test-midi/checkpoint/Download Poppy Playtime Chapter 2 and Face Mommy Long Legs on Mobile.md +++ /dev/null @@ -1,100 +0,0 @@ - -

      Poppy Playtime Chapter 2: A Horror-Puzzle Adventure Game for Mobile Devices

      -

      If you are a fan of horror games with cartoon characters and a touch of whimsy, then you might have heard of Poppy Playtime. This is a first-person horror-puzzle-adventure game developed by Mob Entertainment that revolves around an abandoned toy factory and its sinister secrets. The game is divided into multiple chapters, each with its own storyline, puzzles, and enemies. The first chapter was released in 2022 and became a hit among horror fans and YouTubers alike. The second chapter was released in May 2023 and offers a more thrilling and challenging experience.

      -

      In this article, we will tell you everything you need to know about Poppy Playtime Chapter 2, including how to download it on your mobile devices, what to expect from the game, and some tips and tricks to help you survive. Let's get started!

      -




      -

      How to Download Poppy Playtime Chapter 2 on Mobile Devices

      -

Poppy Playtime Chapter 2 is available on both Android and iOS devices. However, it is not a standalone game: you need to have the base game, Poppy Playtime, installed on your device first. The base game costs $2.99 on both platforms and can be downloaded from the Google Play Store or the App Store. Once you have the base game, you can purchase the second chapter as downloadable content (DLC) for $4.99 on Android or $5.99 on iOS. Here are the steps to download Poppy Playtime Chapter 2 on your mobile devices:

      -
• Go to the Google Play Store or the App Store on your device.
• Type in "Poppy Playtime" in the search bar and tap on the game icon.
• If you don't have the base game yet, tap on the "Buy" button and follow the instructions to complete the payment.
• If you already have the base game, tap on the "Downloadable Content" section and look for "Poppy Playtime Chapter 2".
• Tap on the "Buy" button and follow the instructions to complete the payment.
• After paying, tap on the "Download" button to get the second chapter.
• Launch the game and enjoy!

      What to Expect from Poppy Playtime Chapter 2

      -

      Poppy Playtime Chapter 2 continues where Poppy Playtime Chapter 1 left off. You play as a former employee of Playtime Co., a toy company that mysteriously shut down after its founder disappeared. You decide to investigate the abandoned factory and find out what happened to your colleagues and the company's mascot, Poppy. However, you soon realize that the factory is not as empty as it seems. There are vengeful toys waiting for you in every corner, ready to play with you before they kill you.

      -

      The second chapter introduces a new area to explore: The Train Station. This is one of Playtime Co.'s most popular locations, where children can enjoy games, playgrounds, and more. It also has a train that provides a straight shot out of the factory. However, getting to the train is not easy. You will have to solve mind-numbing puzzles and face horrific toys along the way. Here are some of the new features that you can expect from Poppy Playtime Chapter 2:

      -

      New Toys and Enemies

      In Poppy Playtime Chapter 2, you will encounter new toys and enemies that will make your journey more terrifying. Some of them are:

      -
• Bunzo Bunny: This is a cute and fluffy bunny toy that can hop around and follow you. However, don't let its appearance fool you. It has sharp teeth and claws that can rip you apart. It can also sense your movements and track you down.
• PJ Pug-a-pillar: This is a hybrid toy that combines a pug and a caterpillar. It has a long body with multiple legs and a pug head. It can crawl on walls and ceilings and drop down on you when you least expect it. It can also spit acid at you from its mouth.
• Mommy Long Legs: This is a giant spider-like toy that has a human face and long hair. It can climb on any surface and spin webs to trap you. It can also use its legs to stab you or grab you and drag you to its lair.
• Poppy: This is the main antagonist of the game and the mascot of Playtime Co. She is a doll with blonde hair and blue eyes that wears a pink dress. She is the leader of the toys and wants to play with you forever. She can appear anywhere and anytime, and she can control the other toys with her voice. She can also use her magic wand to manipulate the environment and create obstacles for you.

      New Gameplay Mechanic: The Green Hand

      -

      In Poppy Playtime Chapter 2, you will also get to use a new gameplay mechanic: The Green Hand. This is a device that you can find in the Train Station that allows you to transfer power, grapple, and swing. You can use it to activate switches, open doors, move objects, and more. You can also use it to escape from enemies or reach new areas. However, be careful not to overuse it, as it has a limited battery life and needs to be recharged at certain stations.

      -

      The Green Hand works similarly to the Blue Hand that you used in the first chapter, but with some differences. The Blue Hand allows you to grab objects from a distance and pull them towards you, while the Green Hand allows you to attach objects from a distance and pull yourself towards them. The Blue Hand has two modes: Grab Mode and Pull Mode, while the Green Hand has three modes: Transfer Mode, Grapple Mode, and Swing Mode. Here is how each mode works:

      -

      -
• Transfer Mode: This mode allows you to transfer power from one source to another. You can use it to power up machines, lights, doors, etc. To use this mode, aim at a power source (such as a socket or a battery) and press the trigger button to attach the Green Hand to it. Then, aim at another power source (such as a switch or a panel) and press the trigger button again to transfer the power to it.
• Grapple Mode: This mode allows you to grapple onto objects or surfaces from a distance and pull yourself towards them. You can use it to cross gaps, climb walls, reach high places, etc. To use this mode, aim at an object or surface (such as a hook or a ledge) that has a green outline and press the trigger button to attach the Green Hand to it. Then, hold the trigger button to pull yourself towards it.
• Swing Mode: This mode allows you to swing from one object or surface to another using the Green Hand as a rope. You can use it to traverse large areas, avoid enemies, find secrets, etc. To use this mode, aim at an object or surface (such as a pipe or a beam) that has a green outline and press the trigger button to attach the Green Hand to it. Then, release the trigger button to swing from it. You can press the trigger button again to detach the Green Hand from it.

      Tips and Tricks for Playing Poppy Playtime Chapter 2

      Poppy Playtime Chapter 2 is not an easy game. It requires a lot of skill, patience, and courage to complete. Here are some tips and tricks that can help you play the game better:

      -
• Explore the environment: The Train Station is a large and complex area that has many hidden secrets and collectibles. You can find tapes, posters, notes, and more that can give you more information about the story and the characters. You can also find batteries, health kits, and other items that can help you survive. Be sure to look around and interact with everything you can.
• Use the Green Hand wisely: The Green Hand is a powerful tool that can help you solve puzzles and escape from enemies. However, it also has some limitations. It has a limited battery life that drains when you use it. It also has a cooldown time between each use. You can recharge it at certain stations, but they are not always available. Therefore, you should use the Green Hand sparingly and strategically. Don't waste it on unnecessary actions or objects. Save it for when you really need it.
• Avoid the enemies: The toys in Poppy Playtime Chapter 2 are not your friends. They are deadly and relentless. They will chase you, attack you, and kill you if they catch you. You cannot fight them or kill them. You can only run away from them or hide from them. You should avoid making noise or getting too close to them. You should also use the environment to your advantage. You can use doors, vents, lockers, and other objects to block their path or hide from their sight. You can also use the Green Hand to grapple or swing away from them.
• Solve the puzzles: The puzzles in Poppy Playtime Chapter 2 are challenging and creative. They require you to use your logic, observation, and memory skills. You will have to find clues, codes, keys, and more to unlock doors, activate machines, and progress through the game. You will also have to use the Green Hand to transfer power, grapple, and swing to solve some puzzles. You should pay attention to everything you see and hear in the game. You might find hints or solutions in the most unexpected places.

      Conclusion and FAQs

      -

      Poppy Playtime Chapter 2 is a horror-puzzle-adventure game that will keep you on the edge of your seat. It offers a thrilling and immersive experience that will make you feel like you are in a nightmare. It has stunning graphics, sound effects, and voice acting that will make you forget that you are playing on a mobile device. It also has a captivating story, challenging puzzles, and terrifying enemies that will make you want to play more.

      -

      If you are looking for a game that will scare you, entertain you, and challenge you, then Poppy Playtime Chapter 2 is the game for you. Download it now and see if you can escape from the Train Station alive.

      -

      Here are some FAQs that might help you with the game:

      -
• Q: How long is Poppy Playtime Chapter 2?
• A: Poppy Playtime Chapter 2 is about 1-2 hours long, depending on your skill level and how much you explore.
• Q: Can I play Poppy Playtime Chapter 2 without playing Poppy Playtime Chapter 1?
• A: Yes, but we don't recommend it. Poppy Playtime Chapter 2 is a continuation of Poppy Playtime Chapter 1 and assumes that you know what happened in the first chapter. If you play the second chapter without playing the first one, you might miss some important details and references.
• Q: Is Poppy Playtime Chapter 2 suitable for children?
• A: No, Poppy Playtime Chapter 2 is not suitable for children. It is a horror game that contains violence, gore, jump scares, and disturbing themes. It is rated M for Mature by ESRB and PEGI 18 by PEGI.
• Q: Will there be more chapters of Poppy Playtime?
• A: Yes, according to the developer Mob Entertainment, there will be more chapters of Poppy Playtime in the future. However, they have not announced any release dates or details yet.
• Q: Where can I find more information about Poppy Playtime?
• A: You can find more information about Poppy Playtime on its official website, its Twitter account, its YouTube channel, or its Discord server. You can also check out some of the reviews, gameplay videos, and fan art of Poppy Playtime on various websites and platforms.

      I hope you enjoyed this article and found it helpful. If you have any questions or feedback, feel free to leave a comment below. Thank you for reading and have a great day!

      -
      -
      \ No newline at end of file diff --git a/spaces/fffiloni/AnimateDiff-Image-Init/download_bashscripts/8-GhibliBackground.sh b/spaces/fffiloni/AnimateDiff-Image-Init/download_bashscripts/8-GhibliBackground.sh deleted file mode 100644 index 39b9e76ddf77a842e4f41acbee9e73f62c49eec0..0000000000000000000000000000000000000000 --- a/spaces/fffiloni/AnimateDiff-Image-Init/download_bashscripts/8-GhibliBackground.sh +++ /dev/null @@ -1,3 +0,0 @@ -#!/bin/bash -wget https://civitai.com/api/download/models/102828 -P models/DreamBooth_LoRA/ --content-disposition --no-check-certificate -wget https://civitai.com/api/download/models/57618 -P models/DreamBooth_LoRA/ --content-disposition --no-check-certificate diff --git a/spaces/fffiloni/BedtimeStory/README.md b/spaces/fffiloni/BedtimeStory/README.md deleted file mode 100644 index 6dda54ebb18dd9110ef64c43f621dfc343a11bd0..0000000000000000000000000000000000000000 --- a/spaces/fffiloni/BedtimeStory/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: BedtimeStory -emoji: 🌙 -colorFrom: blue -colorTo: purple -sdk: gradio -sdk_version: 3.32.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference \ No newline at end of file diff --git a/spaces/flax-community/Multilingual-VQA/apps/vqa.py b/spaces/flax-community/Multilingual-VQA/apps/vqa.py deleted file mode 100644 index b9636caf6dc2c00988d99dd5938234e76c210959..0000000000000000000000000000000000000000 --- a/spaces/flax-community/Multilingual-VQA/apps/vqa.py +++ /dev/null @@ -1,153 +0,0 @@ -from .utils import ( - get_text_attributes, - get_top_5_predictions, - get_transformed_image, - plotly_express_horizontal_bar_plot, - translate_labels, -) - -import streamlit as st -import numpy as np -import pandas as pd -import os -import requests -from PIL import Image -import matplotlib.pyplot as plt -import json - -from mtranslate import translate -from .utils import read_markdown - -from .model.flax_clip_vision_bert.modeling_clip_vision_bert import ( - FlaxCLIPVisionBertForSequenceClassification, -) - - -def softmax(logits): - return np.exp(logits) / np.sum(np.exp(logits), axis=0) - - -def app(state): - vqa_state = state - st.header("Visual Question Answering Demo") - - with st.beta_expander("Usage"): - st.write(read_markdown("vqa_usage.md")) - st.info(read_markdown("vqa_intro.md")) - - # @st.cache(persist=False) - def predict(transformed_image, question_inputs): - return np.array( - vqa_state.vqa_model(pixel_values=transformed_image, **question_inputs)[0][0] - ) - - # @st.cache(persist=False) - def load_model(ckpt): - return FlaxCLIPVisionBertForSequenceClassification.from_pretrained(ckpt) - - vqa_checkpoints = [ - "flax-community/clip-vision-bert-vqa-ft-6k" - ] # TODO: Maybe add more checkpoints? 
- # vqa_checkpoints = ["./ckpt/vqa/ckpt-60k-5999"] - dummy_data = pd.read_csv("dummy_vqa_multilingual.tsv", sep="\t") - code_to_name = { - "en": "English", - "fr": "French", - "de": "German", - "es": "Spanish", - } - - with open("answer_reverse_mapping.json") as f: - answer_reverse_mapping = json.load(f) - - first_index = 20 - # Init Session vqa_state - if vqa_state.vqa_image_file is None: - vqa_state.vqa_image_file = dummy_data.loc[first_index, "image_file"] - vqa_state.question = dummy_data.loc[first_index, "question"].strip("- ") - vqa_state.answer_label = dummy_data.loc[first_index, "answer_label"] - vqa_state.question_lang_id = dummy_data.loc[first_index, "lang_id"] - vqa_state.answer_lang_id = dummy_data.loc[first_index, "lang_id"] - - image_path = os.path.join("resized_images", vqa_state.vqa_image_file) - image = plt.imread(image_path) - vqa_state.vqa_image = image - - if vqa_state.vqa_model is None: - with st.spinner("Loading model..."): - vqa_state.vqa_model = load_model(vqa_checkpoints[0]) - - # Display Top-5 Predictions - query1 = st.text_input( - "Enter a URL to an image", - value="http://images.cocodataset.org/val2017/000000039769.jpg", - ) - col1, col2, col3 = st.beta_columns([2,1, 2]) - if col1.button( - "Get a random example", - help="Get a random example from the 100 `seeded` image-text pairs.", - ): - sample = dummy_data.sample(1).reset_index() - vqa_state.vqa_image_file = sample.loc[0, "image_file"] - vqa_state.question = sample.loc[0, "question"].strip("- ") - vqa_state.answer_label = sample.loc[0, "answer_label"] - vqa_state.question_lang_id = sample.loc[0, "lang_id"] - vqa_state.answer_lang_id = sample.loc[0, "lang_id"] - - image_path = os.path.join("resized_images", vqa_state.vqa_image_file) - image = plt.imread(image_path) - vqa_state.vqa_image = image - - col2.write("OR") - - if col3.button("Use above URL"): - image_data = requests.get(query1, stream=True).raw - image = np.asarray(Image.open(image_data)) - vqa_state.vqa_image = image - - transformed_image = get_transformed_image(vqa_state.vqa_image) - - new_col1, new_col2 = st.beta_columns([5, 5]) - - # Display Image - new_col1.image(vqa_state.vqa_image, use_column_width="auto") - - # Display Question - question = new_col2.text_input( - label="Question", - value=vqa_state.question, - help="Type your question regarding the image above in one of the four languages.", - ) - new_col2.markdown( - f"""**English Translation**: {question if vqa_state.question_lang_id == "en" else translate(question, 'en')}""" - ) - - question_inputs = get_text_attributes(question) - - # Select Language - options = ["en", "de", "es", "fr"] - vqa_state.answer_lang_id = new_col2.selectbox( - "Answer Language", - index=options.index(vqa_state.answer_lang_id), - options=options, - format_func=lambda x: code_to_name[x], - help="The language to be used to show the top-5 labels.", - ) - if question == vqa_state.question: - - actual_answer = answer_reverse_mapping[str(vqa_state.answer_label)] - new_col2.markdown( - "**Actual Answer**: " - + translate_labels([actual_answer], vqa_state.answer_lang_id)[0] - + " (" - + actual_answer - + ")" - ) - - with st.spinner("Predicting..."): - logits = predict(transformed_image, dict(question_inputs)) - logits = softmax(logits) - labels, values = get_top_5_predictions(logits, answer_reverse_mapping) - translated_labels = translate_labels(labels, vqa_state.answer_lang_id) - fig = plotly_express_horizontal_bar_plot(values, translated_labels) - st.plotly_chart(fig, use_container_width=True) diff --git 
a/spaces/flax-community/multilingual-image-captioning/sections/conclusion_future_work/conclusion.md b/spaces/flax-community/multilingual-image-captioning/sections/conclusion_future_work/conclusion.md deleted file mode 100644 index 20e08ed74a0b9d22b7b786ba4f75e22aff69e41a..0000000000000000000000000000000000000000 --- a/spaces/flax-community/multilingual-image-captioning/sections/conclusion_future_work/conclusion.md +++ /dev/null @@ -1 +0,0 @@ -In this project, we presented Proof-of-Concept with our CLIP Vision + mBART-50 model baseline which leverages a multilingual checkpoint with pre-trained image encoders in four languages - **English, French, German, and Spanish**. Our models achieve a BLEU-1 score of around 0.14 which is decent considering the amount of training time we could get and how challenging multilingual training is. \ No newline at end of file diff --git a/spaces/flowers-team/Interactive_DeepRL_Demo/js/ui_state/components/parkour_config.js b/spaces/flowers-team/Interactive_DeepRL_Demo/js/ui_state/components/parkour_config.js deleted file mode 100644 index 07909fe35eb4aef278fd4ac1d8fca9ceb2619571..0000000000000000000000000000000000000000 --- a/spaces/flowers-team/Interactive_DeepRL_Demo/js/ui_state/components/parkour_config.js +++ /dev/null @@ -1,79 +0,0 @@ -import Component from '../lib/component.js'; -import store from '../store/index.js'; - -/** - * @classdesc UI component for the parkour parameters. - */ -export default class ParkourConfig extends Component { - - /** - * @constructor - */ - constructor() { - super({ - store, - element: document.querySelector('#parkour-custom-tab'), - eventName: 'parkourConfigChange' - }); - } - - /** - * Renders the parkour parameters. - */ - render() { - - // TERRAIN CONFIG - const terrainConfig = store.state.parkourConfig.terrain; - let dict = window.lang_dict[store.state.language]['parkourConfig']; - - // Sections titles - this.element.querySelector('#terrain-generation-title').innerHTML = dict['terrainGeneration']; - this.element.querySelector('#general-parameters-title').innerText = dict['generalParameters']; - this.element.querySelector('#creepers-title').innerText = dict['creepers']; - - // Terrain generation tabs buttons - this.element.querySelector('#draw-tab-btn').innerText = dict['drawTabBtn']; - this.element.querySelector('#proc-gen-tab-btn').innerText = dict['procGenTabBtn']; - - // Procedural Generation text - this.element.querySelector('#proc-gen-text').innerHTML = dict['procGenText']; - - // Parameters labels - this.element.querySelector('#smoothing-label').innerText = dict['smoothing']; - this.element.querySelector('#water-level-label').innerText = dict['waterLevel']; - this.element.querySelector('#creepers-width-label').innerText = dict['creepersWidth']; - this.element.querySelector('#creepers-height-label').innerText = dict['creepersHeight']; - this.element.querySelector('#creepers-spacing-label').innerText = dict['creepersSpacing']; - //this.element.querySelector('#creepers-type-label').innerText = dict['creepersType']; - - this.element.querySelector('#rigid-otpion').innerText = dict['rigid']; - this.element.querySelector('#swingable-option').innerText = dict['swingable']; - - // Sliders values - this.element.querySelector("#dim1Slider").value = terrainConfig.dim1; - this.element.querySelector("#dim2Slider").value = terrainConfig.dim2; - this.element.querySelector("#dim3Slider").value = terrainConfig.dim3; - this.element.querySelector("#smoothingSlider").value = terrainConfig.smoothing; - 
this.element.querySelector("#waterSlider").value = terrainConfig.waterLevel; - - // Sliders text values - this.element.querySelector("#dim1Value").innerText = terrainConfig.dim1; - this.element.querySelector("#dim2Value").innerText = terrainConfig.dim2; - this.element.querySelector("#dim3Value").innerText = terrainConfig.dim3; - this.element.querySelector("#smoothingValue").innerText = terrainConfig.smoothing; - this.element.querySelector("#waterValue").innerText = terrainConfig.waterLevel; - - // CREEPERS CONFIG - const creepersConfig = store.state.parkourConfig.creepers; - - this.element.querySelector("#creepersWidthSlider").value = creepersConfig.width; - this.element.querySelector("#creepersHeightSlider").value = creepersConfig.height; - this.element.querySelector("#creepersSpacingSlider").value = creepersConfig.spacing; - - this.element.querySelector("#creepersWidthValue").innerText = creepersConfig.width; - this.element.querySelector("#creepersHeightValue").innerText = creepersConfig.height; - this.element.querySelector("#creepersSpacingValue").innerText = creepersConfig.spacing; - - this.element.querySelector("#creepersType").value = creepersConfig.type; - } -}; \ No newline at end of file diff --git a/spaces/freddyaboulton/EDSR-freddy/app.py b/spaces/freddyaboulton/EDSR-freddy/app.py deleted file mode 100644 index e091fd1c263863b6fbf81e596cf14aff568d5557..0000000000000000000000000000000000000000 --- a/spaces/freddyaboulton/EDSR-freddy/app.py +++ /dev/null @@ -1,129 +0,0 @@ -import tensorflow as tf -import matplotlib.pyplot as plt -from tensorflow import keras -from tensorflow.keras import layers -import gradio as gr - -# Define EDSR custom model - -class EDSRModel(tf.keras.Model): - def train_step(self, data): - # Unpack the data. Its structure depends on your model and - # on what you pass to `fit()`. 
- x, y = data - - with tf.GradientTape() as tape: - y_pred = self(x, training=True) # Forward pass - # Compute the loss value - # (the loss function is configured in `compile()`) - loss = self.compiled_loss(y, y_pred, regularization_losses=self.losses) - - # Compute gradients - trainable_vars = self.trainable_variables - gradients = tape.gradient(loss, trainable_vars) - # Update weights - self.optimizer.apply_gradients(zip(gradients, trainable_vars)) - # Update metrics (includes the metric that tracks the loss) - self.compiled_metrics.update_state(y, y_pred) - # Return a dict mapping metric names to current value - return {m.name: m.result() for m in self.metrics} - - def predict_step(self, x): - # Adding dummy dimension using tf.expand_dims and converting to float32 using tf.cast - x = tf.cast(tf.expand_dims(x, axis=0), tf.float32) - # Passing low resolution image to model - super_resolution_img = self(x, training=False) - # Clips the tensor from min(0) to max(255) - super_resolution_img = tf.clip_by_value(super_resolution_img, 0, 255) - # Rounds the values of a tensor to the nearest integer - super_resolution_img = tf.round(super_resolution_img) - # Removes dimensions of size 1 from the shape of a tensor and converting to uint8 - super_resolution_img = tf.squeeze( - tf.cast(super_resolution_img, tf.uint8), axis=0 - ) - return super_resolution_img - - -# Residual Block -def ResBlock(inputs): - x = layers.Conv2D(64, 3, padding="same", activation="relu")(inputs) - x = layers.Conv2D(64, 3, padding="same")(x) - x = layers.Add()([inputs, x]) - return x - - -# Upsampling Block -def Upsampling(inputs, factor=2, **kwargs): - x = layers.Conv2D(64 * (factor ** 2), 3, padding="same", **kwargs)(inputs) - x = tf.nn.depth_to_space(x, block_size=factor) - x = layers.Conv2D(64 * (factor ** 2), 3, padding="same", **kwargs)(x) - x = tf.nn.depth_to_space(x, block_size=factor) - return x - - -def make_model(num_filters, num_of_residual_blocks): - # Flexible Inputs to input_layer - input_layer = layers.Input(shape=(None, None, 3)) - # Scaling Pixel Values - x = layers.Rescaling(scale=1.0 / 255)(input_layer) - x = x_new = layers.Conv2D(num_filters, 3, padding="same")(x) - - # 16 residual blocks - for _ in range(num_of_residual_blocks): - x_new = ResBlock(x_new) - - x_new = layers.Conv2D(num_filters, 3, padding="same")(x_new) - x = layers.Add()([x, x_new]) - - x = Upsampling(x) - x = layers.Conv2D(3, 3, padding="same")(x) - - output_layer = layers.Rescaling(scale=255)(x) - return EDSRModel(input_layer, output_layer) - - -# Define PSNR metric - -def PSNR(super_resolution, high_resolution): - """Compute the peak signal-to-noise ratio, measures quality of image.""" - # Max value of pixel is 255 - psnr_value = tf.image.psnr(high_resolution, super_resolution, max_val=255)[0] - return psnr_value - -custom_objects = {"EDSRModel":EDSRModel} - -with keras.utils.custom_object_scope(custom_objects): - new_model = keras.models.load_model("./trained.h5", custom_objects={'PSNR':PSNR}) - - -def process_image(img): - lowres = tf.convert_to_tensor(img, dtype=tf.uint8) - lowres = tf.image.random_crop(lowres, (150, 150, 3)) - preds = new_model.predict_step(lowres) - preds = preds.numpy() - lowres = lowres.numpy() - return (lowres, preds) - -image = gr.inputs.Image() -image_out = gr.outputs.Image() - -markdown_part = """ - -Model Link - https://huggingface.co/keras-io/EDSR - -""" - -examples = [["examples/1.png"]] - -gr.Interface( - process_image, - title="EDSR - Enhanced Deep Residual Networks for Single Image Super-Resolution", 
- description="SuperResolution", - inputs = image, - examples = examples, - outputs = gr.Gallery(label="Outputs, First image is low res, next one is High Res",visible=True), - article = markdown_part, - interpretation='default', - allow_flagging='never', - cache_examples=True - ).launch(debug=True) \ No newline at end of file diff --git a/spaces/gradio/HuBERT/fairseq/modules/dynamicconv_layer/dynamiconv_cpu.cpp b/spaces/gradio/HuBERT/fairseq/modules/dynamicconv_layer/dynamiconv_cpu.cpp deleted file mode 100644 index 8a6af4285da3c40a01383541acf1f455ffc060fb..0000000000000000000000000000000000000000 --- a/spaces/gradio/HuBERT/fairseq/modules/dynamicconv_layer/dynamiconv_cpu.cpp +++ /dev/null @@ -1,35 +0,0 @@ -#include -#include - -std::vector dynamicconv_cpu_forward( - float* input, - float* filters, - int padding_l); - -std::vector dynamicconv_cpu_backward( - float* gradOutput, - int padding_l, - float* input, - float* filters); - -std::vector dynamicconv_forward( - float* input, - float* filters, - int padding_l) { - - return dynamicconv_cpu_forward(input, filters, padding_l); -} - -std::vector dynamicconv_backward( - float* gradOutput, - int padding_l, - float* input, - float* filters) { - - return dynamicconv_cpu_backward(gradOutput, padding_l, input, filters); -} - -PYBIND11_MODULE(TORCH_EXTENSION_NAME, m) { - m.def("forward", &dynamicconv_forward, "dynamicconv forward (CPU)"); - m.def("backward", &dynamicconv_backward, "dynamicconv backward (CPU)"); -} diff --git a/spaces/gsaivinay/Llama-2-13B-GGML-UI/components/Chat/Chat.tsx b/spaces/gsaivinay/Llama-2-13B-GGML-UI/components/Chat/Chat.tsx deleted file mode 100644 index 5c01e33a91f1ad4a0bedbf2b4c43e72761ddbd58..0000000000000000000000000000000000000000 --- a/spaces/gsaivinay/Llama-2-13B-GGML-UI/components/Chat/Chat.tsx +++ /dev/null @@ -1,487 +0,0 @@ -import { IconClearAll, IconSettings } from '@tabler/icons-react'; -import { - MutableRefObject, - memo, - useCallback, - useContext, - useEffect, - useRef, - useState, -} from 'react'; -import toast from 'react-hot-toast'; - -import { useTranslation } from 'next-i18next'; - -import { getEndpoint } from '@/utils/app/api'; -import { - saveConversation, - saveConversations, - updateConversation, -} from '@/utils/app/conversation'; -import { throttle } from '@/utils/data/throttle'; - -import { ChatBody, Conversation, Message } from '@/types/chat'; -import { Plugin } from '@/types/plugin'; - -import HomeContext from '@/pages/api/home/home.context'; - -import { ChatInput } from './ChatInput'; -import { ChatLoader } from './ChatLoader'; -import { ErrorMessageDiv } from './ErrorMessageDiv'; -import { ModelSelect } from './ModelSelect'; -import { SystemPrompt } from './SystemPrompt'; -import { TemperatureSlider } from './Temperature'; -import { MemoizedChatMessage } from './MemoizedChatMessage'; - -interface Props { - stopConversationRef: MutableRefObject; -} - -export const Chat = memo(({ stopConversationRef }: Props) => { - const { t } = useTranslation('chat'); - - const { - state: { - selectedConversation, - conversations, - models, - apiKey, - pluginKeys, - serverSideApiKeyIsSet, - messageIsStreaming, - modelError, - loading, - prompts, - }, - handleUpdateConversation, - dispatch: homeDispatch, - } = useContext(HomeContext); - - const [currentMessage, setCurrentMessage] = useState(); - const [autoScrollEnabled, setAutoScrollEnabled] = useState(true); - const [showSettings, setShowSettings] = useState(false); - const [showScrollDownButton, setShowScrollDownButton] = - useState(false); - - const 
messagesEndRef = useRef(null); - const chatContainerRef = useRef(null); - const textareaRef = useRef(null); - - const handleSend = useCallback( - async (message: Message, deleteCount = 0, plugin: Plugin | null = null) => { - if (selectedConversation) { - let updatedConversation: Conversation; - if (deleteCount) { - const updatedMessages = [...selectedConversation.messages]; - for (let i = 0; i < deleteCount; i++) { - updatedMessages.pop(); - } - updatedConversation = { - ...selectedConversation, - messages: [...updatedMessages, message], - }; - } else { - updatedConversation = { - ...selectedConversation, - messages: [...selectedConversation.messages, message], - }; - } - homeDispatch({ - field: 'selectedConversation', - value: updatedConversation, - }); - homeDispatch({ field: 'loading', value: true }); - homeDispatch({ field: 'messageIsStreaming', value: true }); - const chatBody: ChatBody = { - model: updatedConversation.model, - messages: updatedConversation.messages, - key: apiKey, - prompt: updatedConversation.prompt, - temperature: updatedConversation.temperature, - }; - const endpoint = getEndpoint(plugin); - let body; - if (!plugin) { - body = JSON.stringify(chatBody); - } else { - body = JSON.stringify({ - ...chatBody, - googleAPIKey: pluginKeys - .find((key) => key.pluginId === 'google-search') - ?.requiredKeys.find((key) => key.key === 'GOOGLE_API_KEY')?.value, - googleCSEId: pluginKeys - .find((key) => key.pluginId === 'google-search') - ?.requiredKeys.find((key) => key.key === 'GOOGLE_CSE_ID')?.value, - }); - } - const controller = new AbortController(); - const response = await fetch(endpoint, { - method: 'POST', - headers: { - 'Content-Type': 'application/json', - }, - signal: controller.signal, - body, - }); - if (!response.ok) { - homeDispatch({ field: 'loading', value: false }); - homeDispatch({ field: 'messageIsStreaming', value: false }); - toast.error(response.statusText); - return; - } - const data = response.body; - if (!data) { - homeDispatch({ field: 'loading', value: false }); - homeDispatch({ field: 'messageIsStreaming', value: false }); - return; - } - if (!plugin) { - if (updatedConversation.messages.length === 1) { - const { content } = message; - const customName = - content.length > 30 ? content.substring(0, 30) + '...' 
: content; - updatedConversation = { - ...updatedConversation, - name: customName, - }; - } - homeDispatch({ field: 'loading', value: false }); - const reader = data.getReader(); - const decoder = new TextDecoder(); - let done = false; - let isFirst = true; - let text = ''; - while (!done) { - if (stopConversationRef.current === true) { - controller.abort(); - done = true; - break; - } - const { value, done: doneReading } = await reader.read(); - done = doneReading; - const chunkValue = decoder.decode(value); - text += chunkValue; - if (isFirst) { - isFirst = false; - const updatedMessages: Message[] = [ - ...updatedConversation.messages, - { role: 'assistant', content: chunkValue }, - ]; - updatedConversation = { - ...updatedConversation, - messages: updatedMessages, - }; - homeDispatch({ - field: 'selectedConversation', - value: updatedConversation, - }); - } else { - const updatedMessages: Message[] = - updatedConversation.messages.map((message, index) => { - if (index === updatedConversation.messages.length - 1) { - return { - ...message, - content: text, - }; - } - return message; - }); - updatedConversation = { - ...updatedConversation, - messages: updatedMessages, - }; - homeDispatch({ - field: 'selectedConversation', - value: updatedConversation, - }); - } - } - saveConversation(updatedConversation); - const updatedConversations: Conversation[] = conversations.map( - (conversation) => { - if (conversation.id === selectedConversation.id) { - return updatedConversation; - } - return conversation; - }, - ); - if (updatedConversations.length === 0) { - updatedConversations.push(updatedConversation); - } - homeDispatch({ field: 'conversations', value: updatedConversations }); - saveConversations(updatedConversations); - homeDispatch({ field: 'messageIsStreaming', value: false }); - } else { - const { answer } = await response.json(); - const updatedMessages: Message[] = [ - ...updatedConversation.messages, - { role: 'assistant', content: answer }, - ]; - updatedConversation = { - ...updatedConversation, - messages: updatedMessages, - }; - homeDispatch({ - field: 'selectedConversation', - value: updateConversation, - }); - saveConversation(updatedConversation); - const updatedConversations: Conversation[] = conversations.map( - (conversation) => { - if (conversation.id === selectedConversation.id) { - return updatedConversation; - } - return conversation; - }, - ); - if (updatedConversations.length === 0) { - updatedConversations.push(updatedConversation); - } - homeDispatch({ field: 'conversations', value: updatedConversations }); - saveConversations(updatedConversations); - homeDispatch({ field: 'loading', value: false }); - homeDispatch({ field: 'messageIsStreaming', value: false }); - } - } - }, - [ - apiKey, - conversations, - pluginKeys, - selectedConversation, - stopConversationRef, - ], - ); - - const scrollToBottom = useCallback(() => { - if (autoScrollEnabled) { - messagesEndRef.current?.scrollIntoView({ behavior: 'smooth' }); - textareaRef.current?.focus(); - } - }, [autoScrollEnabled]); - - const handleScroll = () => { - if (chatContainerRef.current) { - const { scrollTop, scrollHeight, clientHeight } = - chatContainerRef.current; - const bottomTolerance = 30; - - if (scrollTop + clientHeight < scrollHeight - bottomTolerance) { - setAutoScrollEnabled(false); - setShowScrollDownButton(true); - } else { - setAutoScrollEnabled(true); - setShowScrollDownButton(false); - } - } - }; - - const handleScrollDown = () => { - chatContainerRef.current?.scrollTo({ - top: 
chatContainerRef.current.scrollHeight, - behavior: 'smooth', - }); - }; - - const handleSettings = () => { - setShowSettings(!showSettings); - }; - - const onClearAll = () => { - if ( - confirm(t('Are you sure you want to clear all messages?')) && - selectedConversation - ) { - handleUpdateConversation(selectedConversation, { - key: 'messages', - value: [], - }); - } - }; - - const scrollDown = () => { - if (autoScrollEnabled) { - messagesEndRef.current?.scrollIntoView(true); - } - }; - const throttledScrollDown = throttle(scrollDown, 250); - - // useEffect(() => { - // console.log('currentMessage', currentMessage); - // if (currentMessage) { - // handleSend(currentMessage); - // homeDispatch({ field: 'currentMessage', value: undefined }); - // } - // }, [currentMessage]); - - useEffect(() => { - throttledScrollDown(); - selectedConversation && - setCurrentMessage( - selectedConversation.messages[selectedConversation.messages.length - 2], - ); - }, [selectedConversation, throttledScrollDown]); - - useEffect(() => { - const observer = new IntersectionObserver( - ([entry]) => { - setAutoScrollEnabled(entry.isIntersecting); - if (entry.isIntersecting) { - textareaRef.current?.focus(); - } - }, - { - root: null, - threshold: 0.5, - }, - ); - const messagesEndElement = messagesEndRef.current; - if (messagesEndElement) { - observer.observe(messagesEndElement); - } - return () => { - if (messagesEndElement) { - observer.unobserve(messagesEndElement); - } - }; - }, [messagesEndRef]); - - return ( -
      - {!(apiKey || serverSideApiKeyIsSet) ? ( -
      -
      - Welcome to Chatbot UI -
      -
      -
      {`Chatbot UI is an open source clone of OpenAI's ChatGPT UI.`}
      -
      - Important: Chatbot UI is 100% unaffiliated with OpenAI. -
      -
      -
      -
      - Chatbot UI allows you to plug in your base url to use this UI with - your API. -
      -
      - It is only used to communicate - with your API. -
      -
      -
      - ) : modelError ? ( - - ) : ( - <> -
      - {selectedConversation?.messages.length === 0 ? ( - <> -
      -
      - Starchat UI -
      - - {models.length > 0 && ( -
      - - - - handleUpdateConversation(selectedConversation, { - key: 'prompt', - value: prompt, - }) - } - /> - - - handleUpdateConversation(selectedConversation, { - key: 'temperature', - value: temperature, - }) - } - /> -
      - )} -
      - - ) : ( - <> -
      - - -
      - {showSettings && ( -
      -
      - -
      -
      - )} - - {selectedConversation?.messages.map((message, index) => ( - { - setCurrentMessage(editedMessage); - // discard edited message and the ones that come after then resend - handleSend( - editedMessage, - selectedConversation?.messages.length - index, - ); - }} - /> - ))} - - {loading && } - -
      - - )} -
      - - { - setCurrentMessage(message); - handleSend(message, 0, plugin); - }} - onScrollDownClick={handleScrollDown} - onRegenerate={() => { - if (currentMessage) { - handleSend(currentMessage, 2, null); - } - }} - showScrollDownButton={showScrollDownButton} - /> - - )} -
      - ); -}); -Chat.displayName = 'Chat'; diff --git a/spaces/gustproof/sd_prompts/README.md b/spaces/gustproof/sd_prompts/README.md deleted file mode 100644 index 8ff116de4661211ac6a06545a82859365f9bd675..0000000000000000000000000000000000000000 --- a/spaces/gustproof/sd_prompts/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Sd Prompts -emoji: 💻 -colorFrom: purple -colorTo: blue -sdk: gradio -sdk_version: 3.41.2 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/gwang-kim/DATID-3D/eg3d/torch_utils/ops/upfirdn2d.cpp b/spaces/gwang-kim/DATID-3D/eg3d/torch_utils/ops/upfirdn2d.cpp deleted file mode 100644 index c1769c3cbe4dd04f76f9ccef726680720e6f39c8..0000000000000000000000000000000000000000 --- a/spaces/gwang-kim/DATID-3D/eg3d/torch_utils/ops/upfirdn2d.cpp +++ /dev/null @@ -1,111 +0,0 @@ -/* - * SPDX-FileCopyrightText: Copyright (c) 2021-2022 NVIDIA CORPORATION & AFFILIATES. All rights reserved. - * SPDX-License-Identifier: LicenseRef-NvidiaProprietary - * - * NVIDIA CORPORATION, its affiliates and licensors retain all intellectual - * property and proprietary rights in and to this material, related - * documentation and any modifications thereto. Any use, reproduction, - * disclosure or distribution of this material and related documentation - * without an express license agreement from NVIDIA CORPORATION or - * its affiliates is strictly prohibited. - */ - -#include -#include -#include -#include "upfirdn2d.h" - -//------------------------------------------------------------------------ - -static torch::Tensor upfirdn2d(torch::Tensor x, torch::Tensor f, int upx, int upy, int downx, int downy, int padx0, int padx1, int pady0, int pady1, bool flip, float gain) -{ - // Validate arguments. - TORCH_CHECK(x.is_cuda(), "x must reside on CUDA device"); - TORCH_CHECK(f.device() == x.device(), "f must reside on the same device as x"); - TORCH_CHECK(f.dtype() == torch::kFloat, "f must be float32"); - TORCH_CHECK(x.numel() <= INT_MAX, "x is too large"); - TORCH_CHECK(f.numel() <= INT_MAX, "f is too large"); - TORCH_CHECK(x.numel() > 0, "x has zero size"); - TORCH_CHECK(f.numel() > 0, "f has zero size"); - TORCH_CHECK(x.dim() == 4, "x must be rank 4"); - TORCH_CHECK(f.dim() == 2, "f must be rank 2"); - TORCH_CHECK((x.size(0)-1)*x.stride(0) + (x.size(1)-1)*x.stride(1) + (x.size(2)-1)*x.stride(2) + (x.size(3)-1)*x.stride(3) <= INT_MAX, "x memory footprint is too large"); - TORCH_CHECK(f.size(0) >= 1 && f.size(1) >= 1, "f must be at least 1x1"); - TORCH_CHECK(upx >= 1 && upy >= 1, "upsampling factor must be at least 1"); - TORCH_CHECK(downx >= 1 && downy >= 1, "downsampling factor must be at least 1"); - - // Create output tensor. - const at::cuda::OptionalCUDAGuard device_guard(device_of(x)); - int outW = ((int)x.size(3) * upx + padx0 + padx1 - (int)f.size(1) + downx) / downx; - int outH = ((int)x.size(2) * upy + pady0 + pady1 - (int)f.size(0) + downy) / downy; - TORCH_CHECK(outW >= 1 && outH >= 1, "output must be at least 1x1"); - torch::Tensor y = torch::empty({x.size(0), x.size(1), outH, outW}, x.options(), x.suggest_memory_format()); - TORCH_CHECK(y.numel() <= INT_MAX, "output is too large"); - TORCH_CHECK((y.size(0)-1)*y.stride(0) + (y.size(1)-1)*y.stride(1) + (y.size(2)-1)*y.stride(2) + (y.size(3)-1)*y.stride(3) <= INT_MAX, "output memory footprint is too large"); - - // Initialize CUDA kernel parameters. 
- upfirdn2d_kernel_params p; - p.x = x.data_ptr(); - p.f = f.data_ptr(); - p.y = y.data_ptr(); - p.up = make_int2(upx, upy); - p.down = make_int2(downx, downy); - p.pad0 = make_int2(padx0, pady0); - p.flip = (flip) ? 1 : 0; - p.gain = gain; - p.inSize = make_int4((int)x.size(3), (int)x.size(2), (int)x.size(1), (int)x.size(0)); - p.inStride = make_int4((int)x.stride(3), (int)x.stride(2), (int)x.stride(1), (int)x.stride(0)); - p.filterSize = make_int2((int)f.size(1), (int)f.size(0)); - p.filterStride = make_int2((int)f.stride(1), (int)f.stride(0)); - p.outSize = make_int4((int)y.size(3), (int)y.size(2), (int)y.size(1), (int)y.size(0)); - p.outStride = make_int4((int)y.stride(3), (int)y.stride(2), (int)y.stride(1), (int)y.stride(0)); - p.sizeMajor = (p.inStride.z == 1) ? p.inSize.w : p.inSize.w * p.inSize.z; - p.sizeMinor = (p.inStride.z == 1) ? p.inSize.z : 1; - - // Choose CUDA kernel. - upfirdn2d_kernel_spec spec; - AT_DISPATCH_FLOATING_TYPES_AND_HALF(x.scalar_type(), "upfirdn2d_cuda", [&] - { - spec = choose_upfirdn2d_kernel(p); - }); - - // Set looping options. - p.loopMajor = (p.sizeMajor - 1) / 16384 + 1; - p.loopMinor = spec.loopMinor; - p.loopX = spec.loopX; - p.launchMinor = (p.sizeMinor - 1) / p.loopMinor + 1; - p.launchMajor = (p.sizeMajor - 1) / p.loopMajor + 1; - - // Compute grid size. - dim3 blockSize, gridSize; - if (spec.tileOutW < 0) // large - { - blockSize = dim3(4, 32, 1); - gridSize = dim3( - ((p.outSize.y - 1) / blockSize.x + 1) * p.launchMinor, - (p.outSize.x - 1) / (blockSize.y * p.loopX) + 1, - p.launchMajor); - } - else // small - { - blockSize = dim3(256, 1, 1); - gridSize = dim3( - ((p.outSize.y - 1) / spec.tileOutH + 1) * p.launchMinor, - (p.outSize.x - 1) / (spec.tileOutW * p.loopX) + 1, - p.launchMajor); - } - - // Launch CUDA kernel. 
- void* args[] = {&p}; - AT_CUDA_CHECK(cudaLaunchKernel(spec.kernel, gridSize, blockSize, args, 0, at::cuda::getCurrentCUDAStream())); - return y; -} - -//------------------------------------------------------------------------ - -PYBIND11_MODULE(TORCH_EXTENSION_NAME, m) -{ - m.def("upfirdn2d", &upfirdn2d); -} - -//------------------------------------------------------------------------ diff --git a/spaces/h2oai/h2ogpt-chatbot2/src/gradio_utils/css.py b/spaces/h2oai/h2ogpt-chatbot2/src/gradio_utils/css.py deleted file mode 100644 index 6f3d0dd56bfd4287034afd0b23751e3abd59a143..0000000000000000000000000000000000000000 --- a/spaces/h2oai/h2ogpt-chatbot2/src/gradio_utils/css.py +++ /dev/null @@ -1,148 +0,0 @@ -def get_css(kwargs) -> str: - if kwargs['h2ocolors']: - css_code = """footer {visibility: hidden;} - body{background:linear-gradient(#f5f5f5,#e5e5e5);} - body.dark{background:linear-gradient(#000000,#0d0d0d);} - """ - else: - css_code = """footer {visibility: hidden}""" - - css_code += make_css_base() - return css_code - - -def make_css_base() -> str: - return """ - #col_container {margin-left: auto; margin-right: auto; text-align: left;} - - @import url('https://fonts.googleapis.com/css2?family=Source+Sans+Pro:wght@400;600&display=swap'); - - body.dark{#warning {background-color: #555555};} - - #sidebar { - order: 1; - - @media (max-width: 463px) { - order: 2; - } - } - - #col-tabs { - order: 2; - - @media (max-width: 463px) { - order: 1; - } - } - - #small_btn { - margin: 0.6em 0em 0.55em 0; - max-width: 20em; - min-width: 5em !important; - height: 5em; - font-size: 14px !important; - } - - #prompt-form { - border: 1px solid var(--primary-500) !important; - } - - #prompt-form.block { - border-radius: var(--block-radius) !important; - } - - #prompt-form textarea { - border: 1px solid rgb(209, 213, 219); - } - - #prompt-form label > div { - margin-top: 4px; - } - - button.primary:hover { - background-color: var(--primary-600) !important; - transition: .2s; - } - - #prompt-form-area { - margin-bottom: 2.5rem; - } - .chatsmall chatbot {font-size: 10px !important} - - .gradio-container { - max-width: none !important; - } - - div.message { - padding: var(--text-lg) !important; - } - - div.message.user > div.icon-button { - top: unset; - bottom: 0; - } - - div.message.bot > div.icon-button { - top: unset; - bottom: 0; - } - - #prompt-form-row { - position: relative; - } - - #attach-button { - position: absolute; - top: 45px; - right: 20px; - - display: flex; - justify-content: center; - border: 1px solid var(--primary-500) !important; - - @media (max-width: 463px) { - width: 56px; - } - } - - #attach-button > img { - margin-right: 0; - } - - #prompt-form > label > textarea { - padding-right: 104px; - - @media (max-width: 463px) { - min-height: 94px; - padding-right: 70px; - } - } - - #visible-models > label > div.wrap > div.wrap-inner > div.secondary-wrap > div.remove-all { - display: none !important; - } - - #visible-models > label > div.wrap > div.wrap-inner > div.token { - display: none !important; - } - - #visible-models > label > div.wrap > div.wrap-inner > div.secondary-wrap::before { - content: "Select"; - padding: 0 4px; - margin-right: 2px; - } - - #langchain_agents > label > div.wrap > div.wrap-inner > div.secondary-wrap > div.remove-all { - display: none !important; - } - - #langchain_agents > label > div.wrap > div.wrap-inner > div.token { - display: none !important; - } - - #langchain_agents > label > div.wrap > div.wrap-inner > div.secondary-wrap::before { - content: "Select"; 
- padding: 0 4px; - margin-right: 2px; - } - """ diff --git a/spaces/hamacojr/CAT-Seg/cat_seg/data/dataset_mappers/mask_former_semantic_dataset_mapper.py b/spaces/hamacojr/CAT-Seg/cat_seg/data/dataset_mappers/mask_former_semantic_dataset_mapper.py deleted file mode 100644 index 41c82f2c76cb6d74020ae0a6a3ba045469755f01..0000000000000000000000000000000000000000 --- a/spaces/hamacojr/CAT-Seg/cat_seg/data/dataset_mappers/mask_former_semantic_dataset_mapper.py +++ /dev/null @@ -1,186 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -import copy -import logging - -import numpy as np -import torch -from torch.nn import functional as F - -from detectron2.config import configurable -from detectron2.data import MetadataCatalog -from detectron2.data import detection_utils as utils -from detectron2.data import transforms as T -from detectron2.projects.point_rend import ColorAugSSDTransform -from detectron2.structures import BitMasks, Instances - -__all__ = ["MaskFormerSemanticDatasetMapper"] - - -class MaskFormerSemanticDatasetMapper: - """ - A callable which takes a dataset dict in Detectron2 Dataset format, - and map it into a format used by MaskFormer for semantic segmentation. - - The callable currently does the following: - - 1. Read the image from "file_name" - 2. Applies geometric transforms to the image and annotation - 3. Find and applies suitable cropping to the image and annotation - 4. Prepare image and annotation to Tensors - """ - - @configurable - def __init__( - self, - is_train=True, - *, - augmentations, - image_format, - ignore_label, - size_divisibility, - ): - """ - NOTE: this interface is experimental. - Args: - is_train: for training or inference - augmentations: a list of augmentations or deterministic transforms to apply - image_format: an image format supported by :func:`detection_utils.read_image`. - ignore_label: the label that is ignored to evaluation - size_divisibility: pad image size to be divisible by this value - """ - self.is_train = is_train - self.tfm_gens = augmentations - self.img_format = image_format - self.ignore_label = ignore_label - self.size_divisibility = size_divisibility - - logger = logging.getLogger(__name__) - mode = "training" if is_train else "inference" - logger.info(f"[{self.__class__.__name__}] Augmentations used in {mode}: {augmentations}") - - @classmethod - def from_config(cls, cfg, is_train=True): - # Build augmentation - augs = [ - T.ResizeShortestEdge( - cfg.INPUT.MIN_SIZE_TRAIN, - cfg.INPUT.MAX_SIZE_TRAIN, - cfg.INPUT.MIN_SIZE_TRAIN_SAMPLING, - ) - ] - if cfg.INPUT.CROP.ENABLED: - augs.append( - T.RandomCrop_CategoryAreaConstraint( - cfg.INPUT.CROP.TYPE, - cfg.INPUT.CROP.SIZE, - cfg.INPUT.CROP.SINGLE_CATEGORY_MAX_AREA, - cfg.MODEL.SEM_SEG_HEAD.IGNORE_VALUE, - ) - ) - if cfg.INPUT.COLOR_AUG_SSD: - augs.append(ColorAugSSDTransform(img_format=cfg.INPUT.FORMAT)) - augs.append(T.RandomFlip()) - - # Assume always applies to the training set. - dataset_names = cfg.DATASETS.TRAIN - meta = MetadataCatalog.get(dataset_names[0]) - ignore_label = meta.ignore_label - - ret = { - "is_train": is_train, - "augmentations": augs, - "image_format": cfg.INPUT.FORMAT, - "ignore_label": ignore_label, - "size_divisibility": cfg.INPUT.SIZE_DIVISIBILITY, - } - return ret - - def __call__(self, dataset_dict): - """ - Args: - dataset_dict (dict): Metadata of one image, in Detectron2 Dataset format. 
- - Returns: - dict: a format that builtin models in detectron2 accept - """ - assert self.is_train, "MaskFormerSemanticDatasetMapper should only be used for training!" - - dataset_dict = copy.deepcopy(dataset_dict) # it will be modified by code below - image = utils.read_image(dataset_dict["file_name"], format=self.img_format) - utils.check_image_size(dataset_dict, image) - - if "sem_seg_file_name" in dataset_dict: - # PyTorch transformation not implemented for uint16, so converting it to double first - sem_seg_gt = utils.read_image(dataset_dict.pop("sem_seg_file_name")).astype("double") - else: - sem_seg_gt = None - - if sem_seg_gt is None: - raise ValueError( - "Cannot find 'sem_seg_file_name' for semantic segmentation dataset {}.".format( - dataset_dict["file_name"] - ) - ) - - aug_input = T.AugInput(image, sem_seg=sem_seg_gt) - aug_input, transforms = T.apply_transform_gens(self.tfm_gens, aug_input) - image = aug_input.image - sem_seg_gt = aug_input.sem_seg - - # Pad image and segmentation label here! - image = torch.as_tensor(np.ascontiguousarray(image.transpose(2, 0, 1))) - if sem_seg_gt is not None: - sem_seg_gt = torch.as_tensor(sem_seg_gt.astype("long")) - # import ipdb; ipdb.set_trace() - if self.size_divisibility > 0: - image_size = (image.shape[-2], image.shape[-1]) - # The ori_size is not the real original size, but size before padding - dataset_dict['ori_size'] = image_size - padding_size = [ - 0, - self.size_divisibility - image_size[1], # w: (left, right) - 0, - self.size_divisibility - image_size[0], # h: 0,(top, bottom) - ] - image = F.pad(image, padding_size, value=128).contiguous() - if sem_seg_gt is not None: - sem_seg_gt = F.pad(sem_seg_gt, padding_size, value=self.ignore_label).contiguous() - - image_shape = (image.shape[-2], image.shape[-1]) # h, w - - # Pytorch's dataloader is efficient on torch.Tensor due to shared-memory, - # but not efficient on large generic data structures due to the use of pickle & mp.Queue. - # Therefore it's important to use torch.Tensor. 
- dataset_dict["image"] = image - # print('#########################################################################################') - if sem_seg_gt is not None: - dataset_dict["sem_seg"] = sem_seg_gt.long() - - if "annotations" in dataset_dict: - raise ValueError("Semantic segmentation dataset should not have 'annotations'.") - - # Prepare per-category binary masks - if sem_seg_gt is not None: - sem_seg_gt = sem_seg_gt.numpy() - instances = Instances(image_shape) - classes = np.unique(sem_seg_gt) - # remove ignored region - classes = classes[classes != self.ignore_label] - instances.gt_classes = torch.tensor(classes, dtype=torch.int64) - - masks = [] - for class_id in classes: - masks.append(sem_seg_gt == class_id) - - if len(masks) == 0: - # Some image does not have annotation (all ignored) - instances.gt_masks = torch.zeros((0, sem_seg_gt.shape[-2], sem_seg_gt.shape[-1])) - else: - masks = BitMasks( - torch.stack([torch.from_numpy(np.ascontiguousarray(x.copy())) for x in masks]) - ) - instances.gt_masks = masks.tensor - - dataset_dict["instances"] = instances - - return dataset_dict diff --git a/spaces/hamacojr/SAM-CAT-Seg/open_clip/src/training/imagenet_zeroshot_data.py b/spaces/hamacojr/SAM-CAT-Seg/open_clip/src/training/imagenet_zeroshot_data.py deleted file mode 100644 index 27abd8bf24ebe077a73e8496576d949d8bb16f69..0000000000000000000000000000000000000000 --- a/spaces/hamacojr/SAM-CAT-Seg/open_clip/src/training/imagenet_zeroshot_data.py +++ /dev/null @@ -1,254 +0,0 @@ - - -imagenet_classnames = ["tench", "goldfish", "great white shark", "tiger shark", "hammerhead shark", "electric ray", - "stingray", "rooster", "hen", "ostrich", "brambling", "goldfinch", "house finch", "junco", - "indigo bunting", "American robin", "bulbul", "jay", "magpie", "chickadee", "American dipper", - "kite (bird of prey)", "bald eagle", "vulture", "great grey owl", "fire salamander", - "smooth newt", "newt", "spotted salamander", "axolotl", "American bullfrog", "tree frog", - "tailed frog", "loggerhead sea turtle", "leatherback sea turtle", "mud turtle", "terrapin", - "box turtle", "banded gecko", "green iguana", "Carolina anole", - "desert grassland whiptail lizard", "agama", "frilled-necked lizard", "alligator lizard", - "Gila monster", "European green lizard", "chameleon", "Komodo dragon", "Nile crocodile", - "American alligator", "triceratops", "worm snake", "ring-necked snake", - "eastern hog-nosed snake", "smooth green snake", "kingsnake", "garter snake", "water snake", - "vine snake", "night snake", "boa constrictor", "African rock python", "Indian cobra", - "green mamba", "sea snake", "Saharan horned viper", "eastern diamondback rattlesnake", - "sidewinder rattlesnake", "trilobite", "harvestman", "scorpion", "yellow garden spider", - "barn spider", "European garden spider", "southern black widow", "tarantula", "wolf spider", - "tick", "centipede", "black grouse", "ptarmigan", "ruffed grouse", "prairie grouse", "peafowl", - "quail", "partridge", "african grey parrot", "macaw", "sulphur-crested cockatoo", "lorikeet", - "coucal", "bee eater", "hornbill", "hummingbird", "jacamar", "toucan", "duck", - "red-breasted merganser", "goose", "black swan", "tusker", "echidna", "platypus", "wallaby", - "koala", "wombat", "jellyfish", "sea anemone", "brain coral", "flatworm", "nematode", "conch", - "snail", "slug", "sea slug", "chiton", "chambered nautilus", "Dungeness crab", "rock crab", - "fiddler crab", "red king crab", "American lobster", "spiny lobster", "crayfish", "hermit crab", - "isopod", "white 
stork", "black stork", "spoonbill", "flamingo", "little blue heron", - "great egret", "bittern bird", "crane bird", "limpkin", "common gallinule", "American coot", - "bustard", "ruddy turnstone", "dunlin", "common redshank", "dowitcher", "oystercatcher", - "pelican", "king penguin", "albatross", "grey whale", "killer whale", "dugong", "sea lion", - "Chihuahua", "Japanese Chin", "Maltese", "Pekingese", "Shih Tzu", "King Charles Spaniel", - "Papillon", "toy terrier", "Rhodesian Ridgeback", "Afghan Hound", "Basset Hound", "Beagle", - "Bloodhound", "Bluetick Coonhound", "Black and Tan Coonhound", "Treeing Walker Coonhound", - "English foxhound", "Redbone Coonhound", "borzoi", "Irish Wolfhound", "Italian Greyhound", - "Whippet", "Ibizan Hound", "Norwegian Elkhound", "Otterhound", "Saluki", "Scottish Deerhound", - "Weimaraner", "Staffordshire Bull Terrier", "American Staffordshire Terrier", - "Bedlington Terrier", "Border Terrier", "Kerry Blue Terrier", "Irish Terrier", - "Norfolk Terrier", "Norwich Terrier", "Yorkshire Terrier", "Wire Fox Terrier", - "Lakeland Terrier", "Sealyham Terrier", "Airedale Terrier", "Cairn Terrier", - "Australian Terrier", "Dandie Dinmont Terrier", "Boston Terrier", "Miniature Schnauzer", - "Giant Schnauzer", "Standard Schnauzer", "Scottish Terrier", "Tibetan Terrier", - "Australian Silky Terrier", "Soft-coated Wheaten Terrier", "West Highland White Terrier", - "Lhasa Apso", "Flat-Coated Retriever", "Curly-coated Retriever", "Golden Retriever", - "Labrador Retriever", "Chesapeake Bay Retriever", "German Shorthaired Pointer", "Vizsla", - "English Setter", "Irish Setter", "Gordon Setter", "Brittany dog", "Clumber Spaniel", - "English Springer Spaniel", "Welsh Springer Spaniel", "Cocker Spaniel", "Sussex Spaniel", - "Irish Water Spaniel", "Kuvasz", "Schipperke", "Groenendael dog", "Malinois", "Briard", - "Australian Kelpie", "Komondor", "Old English Sheepdog", "Shetland Sheepdog", "collie", - "Border Collie", "Bouvier des Flandres dog", "Rottweiler", "German Shepherd Dog", "Dobermann", - "Miniature Pinscher", "Greater Swiss Mountain Dog", "Bernese Mountain Dog", - "Appenzeller Sennenhund", "Entlebucher Sennenhund", "Boxer", "Bullmastiff", "Tibetan Mastiff", - "French Bulldog", "Great Dane", "St. 
Bernard", "husky", "Alaskan Malamute", "Siberian Husky", - "Dalmatian", "Affenpinscher", "Basenji", "pug", "Leonberger", "Newfoundland dog", - "Great Pyrenees dog", "Samoyed", "Pomeranian", "Chow Chow", "Keeshond", "brussels griffon", - "Pembroke Welsh Corgi", "Cardigan Welsh Corgi", "Toy Poodle", "Miniature Poodle", - "Standard Poodle", "Mexican hairless dog (xoloitzcuintli)", "grey wolf", "Alaskan tundra wolf", - "red wolf or maned wolf", "coyote", "dingo", "dhole", "African wild dog", "hyena", "red fox", - "kit fox", "Arctic fox", "grey fox", "tabby cat", "tiger cat", "Persian cat", "Siamese cat", - "Egyptian Mau", "cougar", "lynx", "leopard", "snow leopard", "jaguar", "lion", "tiger", - "cheetah", "brown bear", "American black bear", "polar bear", "sloth bear", "mongoose", - "meerkat", "tiger beetle", "ladybug", "ground beetle", "longhorn beetle", "leaf beetle", - "dung beetle", "rhinoceros beetle", "weevil", "fly", "bee", "ant", "grasshopper", - "cricket insect", "stick insect", "cockroach", "praying mantis", "cicada", "leafhopper", - "lacewing", "dragonfly", "damselfly", "red admiral butterfly", "ringlet butterfly", - "monarch butterfly", "small white butterfly", "sulphur butterfly", "gossamer-winged butterfly", - "starfish", "sea urchin", "sea cucumber", "cottontail rabbit", "hare", "Angora rabbit", - "hamster", "porcupine", "fox squirrel", "marmot", "beaver", "guinea pig", "common sorrel horse", - "zebra", "pig", "wild boar", "warthog", "hippopotamus", "ox", "water buffalo", "bison", - "ram (adult male sheep)", "bighorn sheep", "Alpine ibex", "hartebeest", "impala (antelope)", - "gazelle", "arabian camel", "llama", "weasel", "mink", "European polecat", - "black-footed ferret", "otter", "skunk", "badger", "armadillo", "three-toed sloth", "orangutan", - "gorilla", "chimpanzee", "gibbon", "siamang", "guenon", "patas monkey", "baboon", "macaque", - "langur", "black-and-white colobus", "proboscis monkey", "marmoset", "white-headed capuchin", - "howler monkey", "titi monkey", "Geoffroy's spider monkey", "common squirrel monkey", - "ring-tailed lemur", "indri", "Asian elephant", "African bush elephant", "red panda", - "giant panda", "snoek fish", "eel", "silver salmon", "rock beauty fish", "clownfish", - "sturgeon", "gar fish", "lionfish", "pufferfish", "abacus", "abaya", "academic gown", - "accordion", "acoustic guitar", "aircraft carrier", "airliner", "airship", "altar", "ambulance", - "amphibious vehicle", "analog clock", "apiary", "apron", "trash can", "assault rifle", - "backpack", "bakery", "balance beam", "balloon", "ballpoint pen", "Band-Aid", "banjo", - "baluster / handrail", "barbell", "barber chair", "barbershop", "barn", "barometer", "barrel", - "wheelbarrow", "baseball", "basketball", "bassinet", "bassoon", "swimming cap", "bath towel", - "bathtub", "station wagon", "lighthouse", "beaker", "military hat (bearskin or shako)", - "beer bottle", "beer glass", "bell tower", "baby bib", "tandem bicycle", "bikini", - "ring binder", "binoculars", "birdhouse", "boathouse", "bobsleigh", "bolo tie", "poke bonnet", - "bookcase", "bookstore", "bottle cap", "hunting bow", "bow tie", "brass memorial plaque", "bra", - "breakwater", "breastplate", "broom", "bucket", "buckle", "bulletproof vest", - "high-speed train", "butcher shop", "taxicab", "cauldron", "candle", "cannon", "canoe", - "can opener", "cardigan", "car mirror", "carousel", "tool kit", "cardboard box / carton", - "car wheel", "automated teller machine", "cassette", "cassette player", "castle", "catamaran", - "CD player", "cello", 
"mobile phone", "chain", "chain-link fence", "chain mail", "chainsaw", - "storage chest", "chiffonier", "bell or wind chime", "china cabinet", "Christmas stocking", - "church", "movie theater", "cleaver", "cliff dwelling", "cloak", "clogs", "cocktail shaker", - "coffee mug", "coffeemaker", "spiral or coil", "combination lock", "computer keyboard", - "candy store", "container ship", "convertible", "corkscrew", "cornet", "cowboy boot", - "cowboy hat", "cradle", "construction crane", "crash helmet", "crate", "infant bed", - "Crock Pot", "croquet ball", "crutch", "cuirass", "dam", "desk", "desktop computer", - "rotary dial telephone", "diaper", "digital clock", "digital watch", "dining table", - "dishcloth", "dishwasher", "disc brake", "dock", "dog sled", "dome", "doormat", "drilling rig", - "drum", "drumstick", "dumbbell", "Dutch oven", "electric fan", "electric guitar", - "electric locomotive", "entertainment center", "envelope", "espresso machine", "face powder", - "feather boa", "filing cabinet", "fireboat", "fire truck", "fire screen", "flagpole", "flute", - "folding chair", "football helmet", "forklift", "fountain", "fountain pen", "four-poster bed", - "freight car", "French horn", "frying pan", "fur coat", "garbage truck", - "gas mask or respirator", "gas pump", "goblet", "go-kart", "golf ball", "golf cart", "gondola", - "gong", "gown", "grand piano", "greenhouse", "radiator grille", "grocery store", "guillotine", - "hair clip", "hair spray", "half-track", "hammer", "hamper", "hair dryer", "hand-held computer", - "handkerchief", "hard disk drive", "harmonica", "harp", "combine harvester", "hatchet", - "holster", "home theater", "honeycomb", "hook", "hoop skirt", "gymnastic horizontal bar", - "horse-drawn vehicle", "hourglass", "iPod", "clothes iron", "carved pumpkin", "jeans", "jeep", - "T-shirt", "jigsaw puzzle", "rickshaw", "joystick", "kimono", "knee pad", "knot", "lab coat", - "ladle", "lampshade", "laptop computer", "lawn mower", "lens cap", "letter opener", "library", - "lifeboat", "lighter", "limousine", "ocean liner", "lipstick", "slip-on shoe", "lotion", - "music speaker", "loupe magnifying glass", "sawmill", "magnetic compass", "messenger bag", - "mailbox", "tights", "one-piece bathing suit", "manhole cover", "maraca", "marimba", "mask", - "matchstick", "maypole", "maze", "measuring cup", "medicine cabinet", "megalith", "microphone", - "microwave oven", "military uniform", "milk can", "minibus", "miniskirt", "minivan", "missile", - "mitten", "mixing bowl", "mobile home", "ford model t", "modem", "monastery", "monitor", - "moped", "mortar and pestle", "graduation cap", "mosque", "mosquito net", "vespa", - "mountain bike", "tent", "computer mouse", "mousetrap", "moving van", "muzzle", "metal nail", - "neck brace", "necklace", "baby pacifier", "notebook computer", "obelisk", "oboe", "ocarina", - "odometer", "oil filter", "pipe organ", "oscilloscope", "overskirt", "bullock cart", - "oxygen mask", "product packet / packaging", "paddle", "paddle wheel", "padlock", "paintbrush", - "pajamas", "palace", "pan flute", "paper towel", "parachute", "parallel bars", "park bench", - "parking meter", "railroad car", "patio", "payphone", "pedestal", "pencil case", - "pencil sharpener", "perfume", "Petri dish", "photocopier", "plectrum", "Pickelhaube", - "picket fence", "pickup truck", "pier", "piggy bank", "pill bottle", "pillow", "ping-pong ball", - "pinwheel", "pirate ship", "drink pitcher", "block plane", "planetarium", "plastic bag", - "plate rack", "farm plow", "plunger", "Polaroid 
camera", "pole", "police van", "poncho", - "pool table", "soda bottle", "plant pot", "potter's wheel", "power drill", "prayer rug", - "printer", "prison", "missile", "projector", "hockey puck", "punching bag", "purse", "quill", - "quilt", "race car", "racket", "radiator", "radio", "radio telescope", "rain barrel", - "recreational vehicle", "fishing casting reel", "reflex camera", "refrigerator", - "remote control", "restaurant", "revolver", "rifle", "rocking chair", "rotisserie", "eraser", - "rugby ball", "ruler measuring stick", "sneaker", "safe", "safety pin", "salt shaker", "sandal", - "sarong", "saxophone", "scabbard", "weighing scale", "school bus", "schooner", "scoreboard", - "CRT monitor", "screw", "screwdriver", "seat belt", "sewing machine", "shield", "shoe store", - "shoji screen / room divider", "shopping basket", "shopping cart", "shovel", "shower cap", - "shower curtain", "ski", "balaclava ski mask", "sleeping bag", "slide rule", "sliding door", - "slot machine", "snorkel", "snowmobile", "snowplow", "soap dispenser", "soccer ball", "sock", - "solar thermal collector", "sombrero", "soup bowl", "keyboard space bar", "space heater", - "space shuttle", "spatula", "motorboat", "spider web", "spindle", "sports car", "spotlight", - "stage", "steam locomotive", "through arch bridge", "steel drum", "stethoscope", "scarf", - "stone wall", "stopwatch", "stove", "strainer", "tram", "stretcher", "couch", "stupa", - "submarine", "suit", "sundial", "sunglasses", "sunglasses", "sunscreen", "suspension bridge", - "mop", "sweatshirt", "swim trunks / shorts", "swing", "electrical switch", "syringe", - "table lamp", "tank", "tape player", "teapot", "teddy bear", "television", "tennis ball", - "thatched roof", "front curtain", "thimble", "threshing machine", "throne", "tile roof", - "toaster", "tobacco shop", "toilet seat", "torch", "totem pole", "tow truck", "toy store", - "tractor", "semi-trailer truck", "tray", "trench coat", "tricycle", "trimaran", "tripod", - "triumphal arch", "trolleybus", "trombone", "hot tub", "turnstile", "typewriter keyboard", - "umbrella", "unicycle", "upright piano", "vacuum cleaner", "vase", "vaulted or arched ceiling", - "velvet fabric", "vending machine", "vestment", "viaduct", "violin", "volleyball", - "waffle iron", "wall clock", "wallet", "wardrobe", "military aircraft", "sink", - "washing machine", "water bottle", "water jug", "water tower", "whiskey jug", "whistle", - "hair wig", "window screen", "window shade", "Windsor tie", "wine bottle", "airplane wing", - "wok", "wooden spoon", "wool", "split-rail fence", "shipwreck", "sailboat", "yurt", "website", - "comic book", "crossword", "traffic or street sign", "traffic light", "dust jacket", "menu", - "plate", "guacamole", "consomme", "hot pot", "trifle", "ice cream", "popsicle", "baguette", - "bagel", "pretzel", "cheeseburger", "hot dog", "mashed potatoes", "cabbage", "broccoli", - "cauliflower", "zucchini", "spaghetti squash", "acorn squash", "butternut squash", "cucumber", - "artichoke", "bell pepper", "cardoon", "mushroom", "Granny Smith apple", "strawberry", "orange", - "lemon", "fig", "pineapple", "banana", "jackfruit", "cherimoya (custard apple)", "pomegranate", - "hay", "carbonara", "chocolate syrup", "dough", "meatloaf", "pizza", "pot pie", "burrito", - "red wine", "espresso", "tea cup", "eggnog", "mountain", "bubble", "cliff", "coral reef", - "geyser", "lakeshore", "promontory", "sandbar", "beach", "valley", "volcano", "baseball player", - "bridegroom", "scuba diver", "rapeseed", "daisy", "yellow lady's 
slipper", "corn", "acorn", - "rose hip", "horse chestnut seed", "coral fungus", "agaric", "gyromitra", "stinkhorn mushroom", - "earth star fungus", "hen of the woods mushroom", "bolete", "corn cob", "toilet paper"] - - - - - -openai_imagenet_template = [ - lambda c: f'a bad photo of a {c}.', - lambda c: f'a photo of many {c}.', - lambda c: f'a sculpture of a {c}.', - lambda c: f'a photo of the hard to see {c}.', - lambda c: f'a low resolution photo of the {c}.', - lambda c: f'a rendering of a {c}.', - lambda c: f'graffiti of a {c}.', - lambda c: f'a bad photo of the {c}.', - lambda c: f'a cropped photo of the {c}.', - lambda c: f'a tattoo of a {c}.', - lambda c: f'the embroidered {c}.', - lambda c: f'a photo of a hard to see {c}.', - lambda c: f'a bright photo of a {c}.', - lambda c: f'a photo of a clean {c}.', - lambda c: f'a photo of a dirty {c}.', - lambda c: f'a dark photo of the {c}.', - lambda c: f'a drawing of a {c}.', - lambda c: f'a photo of my {c}.', - lambda c: f'the plastic {c}.', - lambda c: f'a photo of the cool {c}.', - lambda c: f'a close-up photo of a {c}.', - lambda c: f'a black and white photo of the {c}.', - lambda c: f'a painting of the {c}.', - lambda c: f'a painting of a {c}.', - lambda c: f'a pixelated photo of the {c}.', - lambda c: f'a sculpture of the {c}.', - lambda c: f'a bright photo of the {c}.', - lambda c: f'a cropped photo of a {c}.', - lambda c: f'a plastic {c}.', - lambda c: f'a photo of the dirty {c}.', - lambda c: f'a jpeg corrupted photo of a {c}.', - lambda c: f'a blurry photo of the {c}.', - lambda c: f'a photo of the {c}.', - lambda c: f'a good photo of the {c}.', - lambda c: f'a rendering of the {c}.', - lambda c: f'a {c} in a video game.', - lambda c: f'a photo of one {c}.', - lambda c: f'a doodle of a {c}.', - lambda c: f'a close-up photo of the {c}.', - lambda c: f'a photo of a {c}.', - lambda c: f'the origami {c}.', - lambda c: f'the {c} in a video game.', - lambda c: f'a sketch of a {c}.', - lambda c: f'a doodle of the {c}.', - lambda c: f'a origami {c}.', - lambda c: f'a low resolution photo of a {c}.', - lambda c: f'the toy {c}.', - lambda c: f'a rendition of the {c}.', - lambda c: f'a photo of the clean {c}.', - lambda c: f'a photo of a large {c}.', - lambda c: f'a rendition of a {c}.', - lambda c: f'a photo of a nice {c}.', - lambda c: f'a photo of a weird {c}.', - lambda c: f'a blurry photo of a {c}.', - lambda c: f'a cartoon {c}.', - lambda c: f'art of a {c}.', - lambda c: f'a sketch of the {c}.', - lambda c: f'a embroidered {c}.', - lambda c: f'a pixelated photo of a {c}.', - lambda c: f'itap of the {c}.', - lambda c: f'a jpeg corrupted photo of the {c}.', - lambda c: f'a good photo of a {c}.', - lambda c: f'a plushie {c}.', - lambda c: f'a photo of the nice {c}.', - lambda c: f'a photo of the small {c}.', - lambda c: f'a photo of the weird {c}.', - lambda c: f'the cartoon {c}.', - lambda c: f'art of the {c}.', - lambda c: f'a drawing of the {c}.', - lambda c: f'a photo of the large {c}.', - lambda c: f'a black and white photo of a {c}.', - lambda c: f'the plushie {c}.', - lambda c: f'a dark photo of a {c}.', - lambda c: f'itap of a {c}.', - lambda c: f'graffiti of the {c}.', - lambda c: f'a toy {c}.', - lambda c: f'itap of my {c}.', - lambda c: f'a photo of a cool {c}.', - lambda c: f'a photo of a small {c}.', - lambda c: f'a tattoo of the {c}.', -] diff --git a/spaces/hamelcubsfan/AutoGPT/BULLETIN.md b/spaces/hamelcubsfan/AutoGPT/BULLETIN.md deleted file mode 100644 index 
735048ddc87a914987c6bd70ccdb231a80242ae3..0000000000000000000000000000000000000000 --- a/spaces/hamelcubsfan/AutoGPT/BULLETIN.md +++ /dev/null @@ -1,2 +0,0 @@ -Welcome to Auto-GPT! We'll keep you informed of the latest news and features by printing messages here. -If you don't wish to see this message, you can run Auto-GPT with the --skip-news flag \ No newline at end of file diff --git a/spaces/hamelcubsfan/AutoGPT/autogpt/workspace.py b/spaces/hamelcubsfan/AutoGPT/autogpt/workspace.py deleted file mode 100644 index 6fb0e3113eb2c1338edf7f86c6e162fc27c61e50..0000000000000000000000000000000000000000 --- a/spaces/hamelcubsfan/AutoGPT/autogpt/workspace.py +++ /dev/null @@ -1,47 +0,0 @@ -from __future__ import annotations - -import os -from pathlib import Path - -from autogpt.config import Config - -CFG = Config() - -# Set a dedicated folder for file I/O -WORKSPACE_PATH = Path(os.getcwd()) / "auto_gpt_workspace" - -# Create the directory if it doesn't exist -if not os.path.exists(WORKSPACE_PATH): - os.makedirs(WORKSPACE_PATH) - - -def path_in_workspace(relative_path: str | Path) -> Path: - """Get full path for item in workspace - - Parameters: - relative_path (str | Path): Path to translate into the workspace - - Returns: - Path: Absolute path for the given path in the workspace - """ - return safe_path_join(WORKSPACE_PATH, relative_path) - - -def safe_path_join(base: Path, *paths: str | Path) -> Path: - """Join one or more path components, asserting the resulting path is within the workspace. - - Args: - base (Path): The base path - *paths (str): The paths to join to the base path - - Returns: - Path: The joined path - """ - joined_path = base.joinpath(*paths).resolve() - - if CFG.restrict_to_workspace and not joined_path.is_relative_to(base): - raise ValueError( - f"Attempted to access path '{joined_path}' outside of workspace '{base}'." 
- ) - - return joined_path diff --git a/spaces/haotiz/glip-zeroshot-demo/maskrcnn_benchmark/layers/dyhead.py b/spaces/haotiz/glip-zeroshot-demo/maskrcnn_benchmark/layers/dyhead.py deleted file mode 100644 index baf5b37212e590ab453576278cc6c124dce91e90..0000000000000000000000000000000000000000 --- a/spaces/haotiz/glip-zeroshot-demo/maskrcnn_benchmark/layers/dyhead.py +++ /dev/null @@ -1,151 +0,0 @@ -import torch -import torch.nn.functional as F -from torch import nn - -from .deform_conv import ModulatedDeformConv -from .dyrelu import h_sigmoid, DYReLU - - -class Conv3x3Norm(torch.nn.Module): - def __init__(self, - in_channels, - out_channels, - stride, - deformable=False, - use_gn=False): - super(Conv3x3Norm, self).__init__() - - if deformable: - self.conv = ModulatedDeformConv(in_channels, out_channels, kernel_size=3, stride=stride, padding=1) - else: - self.conv = nn.Conv2d(in_channels, out_channels, kernel_size=3, stride=stride, padding=1) - - if use_gn: - self.bn = nn.GroupNorm(num_groups=16, num_channels=out_channels) - else: - self.bn = None - - def forward(self, input, **kwargs): - x = self.conv(input, **kwargs) - if self.bn: - x = self.bn(x) - return x - - -class DyConv(nn.Module): - def __init__(self, - in_channels=256, - out_channels=256, - conv_func=Conv3x3Norm, - use_dyfuse=True, - use_dyrelu=False, - use_deform=False - ): - super(DyConv, self).__init__() - - self.DyConv = nn.ModuleList() - self.DyConv.append(conv_func(in_channels, out_channels, 1)) - self.DyConv.append(conv_func(in_channels, out_channels, 1)) - self.DyConv.append(conv_func(in_channels, out_channels, 2)) - - if use_dyfuse: - self.AttnConv = nn.Sequential( - nn.AdaptiveAvgPool2d(1), - nn.Conv2d(in_channels, 1, kernel_size=1), - nn.ReLU(inplace=True)) - self.h_sigmoid = h_sigmoid() - else: - self.AttnConv = None - - if use_dyrelu: - self.relu = DYReLU(in_channels, out_channels) - else: - self.relu = nn.ReLU() - - if use_deform: - self.offset = nn.Conv2d(in_channels, 27, kernel_size=3, stride=1, padding=1) - else: - self.offset = None - - self.init_weights() - - def init_weights(self): - for m in self.DyConv.modules(): - if isinstance(m, nn.Conv2d): - nn.init.normal_(m.weight.data, 0, 0.01) - if m.bias is not None: - m.bias.data.zero_() - if self.AttnConv is not None: - for m in self.AttnConv.modules(): - if isinstance(m, nn.Conv2d): - nn.init.normal_(m.weight.data, 0, 0.01) - if m.bias is not None: - m.bias.data.zero_() - - def forward(self, x): - next_x = [] - for level, feature in enumerate(x): - - conv_args = dict() - if self.offset is not None: - offset_mask = self.offset(feature) - offset = offset_mask[:, :18, :, :] - mask = offset_mask[:, 18:, :, :].sigmoid() - conv_args = dict(offset=offset, mask=mask) - - temp_fea = [self.DyConv[1](feature, **conv_args)] - - if level > 0: - temp_fea.append(self.DyConv[2](x[level - 1], **conv_args)) - if level < len(x) - 1: - temp_fea.append(F.upsample_bilinear(self.DyConv[0](x[level + 1], **conv_args), - size=[feature.size(2), feature.size(3)])) - mean_fea = torch.mean(torch.stack(temp_fea), dim=0, keepdim=False) - - if self.AttnConv is not None: - attn_fea = [] - res_fea = [] - for fea in temp_fea: - res_fea.append(fea) - attn_fea.append(self.AttnConv(fea)) - - res_fea = torch.stack(res_fea) - spa_pyr_attn = self.h_sigmoid(torch.stack(attn_fea)) - - mean_fea = torch.mean(res_fea * spa_pyr_attn, dim=0, keepdim=False) - - next_x.append(self.relu(mean_fea)) - - return next_x - - -class DyHead(nn.Module): - def __init__(self, cfg, in_channels): - super(DyHead, 
self).__init__() - self.cfg = cfg - channels = cfg.MODEL.DYHEAD.CHANNELS - use_gn = cfg.MODEL.DYHEAD.USE_GN - use_dyrelu = cfg.MODEL.DYHEAD.USE_DYRELU - use_dyfuse = cfg.MODEL.DYHEAD.USE_DYFUSE - use_deform = cfg.MODEL.DYHEAD.USE_DFCONV - - conv_func = lambda i,o,s : Conv3x3Norm(i,o,s,deformable=use_deform,use_gn=use_gn) - - dyhead_tower = [] - for i in range(cfg.MODEL.DYHEAD.NUM_CONVS): - dyhead_tower.append( - DyConv( - in_channels if i == 0 else channels, - channels, - conv_func=conv_func, - use_dyrelu=use_dyrelu, - use_dyfuse=use_dyfuse, - use_deform=use_deform - ) - ) - - self.add_module('dyhead_tower', nn.Sequential(*dyhead_tower)) - - def forward(self, x): - dyhead_tower = self.dyhead_tower(x) - return dyhead_tower \ No newline at end of file diff --git a/spaces/haotiz/glip-zeroshot-demo/maskrcnn_benchmark/layers/dyrelu.py b/spaces/haotiz/glip-zeroshot-demo/maskrcnn_benchmark/layers/dyrelu.py deleted file mode 100644 index 3170a9efedfa05988242e04d2c204992a2dcd3f8..0000000000000000000000000000000000000000 --- a/spaces/haotiz/glip-zeroshot-demo/maskrcnn_benchmark/layers/dyrelu.py +++ /dev/null @@ -1,120 +0,0 @@ -import torch -import torch.nn as nn -import torch.nn.functional as F - -def _make_divisible(v, divisor, min_value=None): - if min_value is None: - min_value = divisor - new_v = max(min_value, int(v + divisor / 2) // divisor * divisor) - # Make sure that round down does not go down by more than 10%. - if new_v < 0.9 * v: - new_v += divisor - return new_v - - -class swish(nn.Module): - def forward(self, x): - return x * torch.sigmoid(x) - - -class h_swish(nn.Module): - def __init__(self, inplace=False): - super(h_swish, self).__init__() - self.inplace = inplace - - def forward(self, x): - return x * F.relu6(x + 3.0, inplace=self.inplace) / 6.0 - - -class h_sigmoid(nn.Module): - def __init__(self, inplace=True, h_max=1): - super(h_sigmoid, self).__init__() - self.relu = nn.ReLU6(inplace=inplace) - self.h_max = h_max - - def forward(self, x): - return self.relu(x + 3) * self.h_max / 6 - - -class DYReLU(nn.Module): - def __init__(self, inp, oup, reduction=4, lambda_a=1.0, K2=True, use_bias=True, use_spatial=False, - init_a=[1.0, 0.0], init_b=[0.0, 0.0]): - super(DYReLU, self).__init__() - self.oup = oup - self.lambda_a = lambda_a * 2 - self.K2 = K2 - self.avg_pool = nn.AdaptiveAvgPool2d(1) - - self.use_bias = use_bias - if K2: - self.exp = 4 if use_bias else 2 - else: - self.exp = 2 if use_bias else 1 - self.init_a = init_a - self.init_b = init_b - - # determine squeeze - if reduction == 4: - squeeze = inp // reduction - else: - squeeze = _make_divisible(inp // reduction, 4) - # print('reduction: {}, squeeze: {}/{}'.format(reduction, inp, squeeze)) - # print('init_a: {}, init_b: {}'.format(self.init_a, self.init_b)) - - self.fc = nn.Sequential( - nn.Linear(inp, squeeze), - nn.ReLU(inplace=True), - nn.Linear(squeeze, oup * self.exp), - h_sigmoid() - ) - if use_spatial: - self.spa = nn.Sequential( - nn.Conv2d(inp, 1, kernel_size=1), - nn.BatchNorm2d(1), - ) - else: - self.spa = None - - def forward(self, x): - if isinstance(x, list): - x_in = x[0] - x_out = x[1] - else: - x_in = x - x_out = x - b, c, h, w = x_in.size() - y = self.avg_pool(x_in).view(b, c) - y = self.fc(y).view(b, self.oup * self.exp, 1, 1) - if self.exp == 4: - a1, b1, a2, b2 = torch.split(y, self.oup, dim=1) - a1 = (a1 - 0.5) * self.lambda_a + self.init_a[0] # 1.0 - a2 = (a2 - 0.5) * self.lambda_a + self.init_a[1] - - b1 = b1 - 0.5 + self.init_b[0] - b2 = b2 - 0.5 + self.init_b[1] - out = torch.max(x_out * a1 + b1, 
x_out * a2 + b2) - elif self.exp == 2: - if self.use_bias: # bias but not PL - a1, b1 = torch.split(y, self.oup, dim=1) - a1 = (a1 - 0.5) * self.lambda_a + self.init_a[0] # 1.0 - b1 = b1 - 0.5 + self.init_b[0] - out = x_out * a1 + b1 - - else: - a1, a2 = torch.split(y, self.oup, dim=1) - a1 = (a1 - 0.5) * self.lambda_a + self.init_a[0] # 1.0 - a2 = (a2 - 0.5) * self.lambda_a + self.init_a[1] - out = torch.max(x_out * a1, x_out * a2) - - elif self.exp == 1: - a1 = y - a1 = (a1 - 0.5) * self.lambda_a + self.init_a[0] # 1.0 - out = x_out * a1 - - if self.spa: - ys = self.spa(x_in).view(b, -1) - ys = F.softmax(ys, dim=1).view(b, 1, h, w) * h * w - ys = F.hardtanh(ys, 0, 3, inplace=True)/3 - out = out * ys - - return out diff --git a/spaces/hasibzunair/fifa-tryon-demo/Self-Correction-Human-Parsing-for-ACGPN/mhp_extension/detectron2/projects/DensePose/densepose/data/datasets/__init__.py b/spaces/hasibzunair/fifa-tryon-demo/Self-Correction-Human-Parsing-for-ACGPN/mhp_extension/detectron2/projects/DensePose/densepose/data/datasets/__init__.py deleted file mode 100644 index 4a59d9332034e9dc3a09f0ba7aa63f0c61b25e87..0000000000000000000000000000000000000000 --- a/spaces/hasibzunair/fifa-tryon-demo/Self-Correction-Human-Parsing-for-ACGPN/mhp_extension/detectron2/projects/DensePose/densepose/data/datasets/__init__.py +++ /dev/null @@ -1,5 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved - -from . import builtin # ensure the builtin data are registered - -__all__ = [k for k in globals().keys() if "builtin" not in k and not k.startswith("_")] diff --git a/spaces/hi9/core4testing/app.py b/spaces/hi9/core4testing/app.py deleted file mode 100644 index f5c645d2d353f5bba858e9e88cc7624b0213e72b..0000000000000000000000000000000000000000 --- a/spaces/hi9/core4testing/app.py +++ /dev/null @@ -1,72 +0,0 @@ -import gradio as gr - -import tensorflow as tf -import numpy as np -import pickle - -# Load model, including its weights and the optimizer -model = tf.keras.models.load_model('core4.h5') - -# load tokenizer -with open('tokenizer.pickle', 'rb') as handle: - tokenize = pickle.load(handle) - -text_labels = ['How to apply', 'how much can I get', 'who can apply'] - -# model.summary() # model architecture - -def greet(string): - - tokenizedText = tokenize.texts_to_matrix([string]) - prediction = model.predict(np.array([tokenizedText[0]])) - predicted_label = text_labels[np.argmax(prediction)] - - print(prediction[0][np.argmax(prediction)]) - print("Predicted label: " + predicted_label + "\n") - - ################### - import requests as rs - import pandas as pd - - spreadsheet_id = '1vjWnYsnGc0J6snT67NVbA-NWSGZ5b0eDBVHmg9lbf9s' # Please set the Spreadsheet ID. - csv_url='https://docs.google.com/spreadsheets/d/' + spreadsheet_id + '/export?format=csv&id=' + spreadsheet_id + '&gid=0' - - res=rs.get(url=csv_url) - open('google.csv', 'wb').write(res.content) - df = pd.read_csv('google.csv') - - import json - import requests - - spreadsheet_id = '1vjWnYsnGc0J6snT67NVbA-NWSGZ5b0eDBVHmg9lbf9s' # Please set the Spreadsheet ID. 
- url = 'https://script.google.com/macros/s/AKfycbwXP5fsDgOXJ9biZQC293o6bTBL3kDOJ07PlmxKjabzdTej6WYdC8Yos6NpDVqAJeVM/exec?spreadsheetId=' + spreadsheet_id - body = { - "arguments": {"range": "Sheet1!A"+str(len(df)+2), "valueInputOption": "USER_ENTERED"}, - "body": {"values": [[string]]} - } - res = requests.post(url, json.dumps(body), headers={'Content-Type': 'application/json'}) - - body = { - "arguments": {"range": "Sheet1!B"+str(len(df)+2), "valueInputOption": "USER_ENTERED"}, - "body": {"values": [[predicted_label]]} - } - res = requests.post(url, json.dumps(body), headers={'Content-Type': 'application/json'}) - - import datetime - current_time = datetime.datetime.now() - body = { - "arguments": {"range": "Sheet1!D"+str(len(df)+2), "valueInputOption": "USER_ENTERED"}, - "body": {"values": [[str(current_time)]]} - } - res = requests.post(url, json.dumps(body), headers={'Content-Type': 'application/json'}) - #print(res.text) - ####################### - - return predicted_label - - -#One testing case - - -iface = gr.Interface(fn=greet, inputs="text", outputs="text") -iface.launch() \ No newline at end of file diff --git a/spaces/hieupt/image_style_transfer/model.py b/spaces/hieupt/image_style_transfer/model.py deleted file mode 100644 index 72159437d5296406c9c040b5474180d31aba2c8a..0000000000000000000000000000000000000000 --- a/spaces/hieupt/image_style_transfer/model.py +++ /dev/null @@ -1,78 +0,0 @@ -import torch -from torch import nn - -class Residual_block(nn.Module): - """Residual block - Architecture: https://arxiv.org/pdf/1610.02915.pdf - """ - def __init__(self, channel): - super(Residual_block, self).__init__() - self.conv_1 = nn.Conv2d(in_channels=channel, out_channels=channel, - padding='same', kernel_size=3, stride=1) - self.inst1 = nn.InstanceNorm2d(channel, affine=True) - self.conv_2 = nn.Conv2d(in_channels=channel, out_channels=channel, - padding='same', kernel_size=3, stride=1) - self.inst2 = nn.InstanceNorm2d(channel, affine=True) - self.relu = nn.ReLU() - - def forward(self, x): - residual = x - out = self.relu(self.inst1(self.conv_1(x))) - out = self.inst2(self.conv_2(out)) - return self.relu(out + residual) - -class TransformerNet(nn.Module): - def __init__(self): - super(TransformerNet, self).__init__() - # Downsampling - self.conv1 = nn.Conv2d(in_channels=3, out_channels=32, kernel_size=9, stride=1, padding = 9//2) - self.BN_1 = nn.InstanceNorm2d(num_features=32, affine=True) - self.down_1 = nn.Conv2d(in_channels=32, out_channels=64, kernel_size=3, stride=2, padding = 1) - self.BN_2 = nn.InstanceNorm2d(num_features=64, affine=True) - self.down_2 = nn.Conv2d(in_channels=64, out_channels=128, kernel_size=3, stride=2, padding = 1) - self.BN_3 = nn.InstanceNorm2d(num_features=128, affine=True) - # Residual connect - self.res_1 = Residual_block(128) - self.res_2 = Residual_block(128) - self.res_3 = Residual_block(128) - self.res_4 = Residual_block(128) - self.res_5 = Residual_block(128) - # Upsampling - self.up_1 = nn.ConvTranspose2d(in_channels=128, out_channels=64, kernel_size=3, stride=2, padding=1, output_padding= 1) - self.BN_4 = nn.InstanceNorm2d(num_features=64, affine=True) - self.up_2 = nn.ConvTranspose2d(in_channels=64, out_channels=32, kernel_size=3, stride=2, padding = 1, output_padding= 1) - self.BN_5 = nn.InstanceNorm2d(num_features=32, affine=True) - self.conv2 = nn.Conv2d(in_channels=32, out_channels=3, kernel_size=9, stride=1, padding = 9//2) - - self.relu = nn.ReLU() - - - - def forward(self, x): - y = self.relu(self.BN_1(self.conv1(x))) - # 
print(y.shape) - y = self.relu(self.BN_2(self.down_1(y))) - # print(y.shape) - y = self.relu(self.BN_3(self.down_2(y))) - # print(y.shape) - - # print() - y = self.res_1(y) - # print(y.shape) - y = self.res_2(y) - # print(y.shape) - y = self.res_3(y) - # print(y.shape) - y = self.res_4(y) - # print(y.shape) - y = self.res_5(y) - # print(y.shape) - - # print() - y = self.relu(self.BN_4(self.up_1(y))) - # print(y.shape) - y = self.relu(self.BN_5(self.up_2(y))) - # print(y.shape) - y = self.conv2(y) - # print(y.shape) - return y diff --git a/spaces/hlopez/Twitter-Positivity-Analyzer/backend.py b/spaces/hlopez/Twitter-Positivity-Analyzer/backend.py deleted file mode 100644 index a4ee2d0f634c85dbce4b0da109a13ad419031d8c..0000000000000000000000000000000000000000 --- a/spaces/hlopez/Twitter-Positivity-Analyzer/backend.py +++ /dev/null @@ -1,44 +0,0 @@ -""" -Positivity predictor. - -This module provides the functionality to predict -a tweet's positivity using a BERT model. -""" -import torch -from transformers import BertForSequenceClassification, BertTokenizer - -tokenizer = BertTokenizer.from_pretrained("bert-base-uncased", do_lower_case=True) -model = BertForSequenceClassification.from_pretrained( - "bert-base-uncased", - num_labels=5, - output_attentions=False, - output_hidden_states=False, - local_files_only=False, -) -model.load_state_dict(torch.load("data/BERT_ft_epoch5.model", map_location='cpu')) -model.eval() - - -def predict_positivity(text: str) -> str: - """ - Predict the positivity of a given tweet. - - Args: - text (str): Tweet's text. - - Returns: - str: Predicted positivity. - """ - label_dict = { - 0: "Extremely Negative", - 1: "Negative", - 2: "Neutral", - 3: "Positive", - 4: "Extremely Positive", - } - encoded = tokenizer(text, return_tensors="pt") - logits = model(**encoded).logits - - predicted_class_id = logits.argmax().item() - - return label_dict[predicted_class_id] diff --git a/spaces/hugforziio/chat-gpt-batch/app_en.py b/spaces/hugforziio/chat-gpt-batch/app_en.py deleted file mode 100644 index 3f3884dbed46db50d8752450669dea36eec6a7bc..0000000000000000000000000000000000000000 --- a/spaces/hugforziio/chat-gpt-batch/app_en.py +++ /dev/null @@ -1,191 +0,0 @@ -# import gradio as gr -import gradio -# import lmdb -# import base64 -# import io -# import random -# import time -import json -import copy -# import sqlite3 -from urllib.parse import urljoin -import openai - -from app_js import api_key__get_from_browser, api_key__save_to_browser, saved_prompts_refresh_btn__click_js, selected_saved_prompt_title__change_js, saved_prompts_delete_btn__click_js, saved_prompts_save_btn__click_js, copy_prompt__click_js, paste_prompt__click_js, chat_copy_history_btn__click_js, chat_copy_history_md_btn__click_js, api_key_refresh_btn__click_js, api_key_save_btn__click_js - -from functions import sequential_chat_fn, make_history_file_fn, on_click_send_btn, clear_history, copy_history, update_saved_prompt_titles, save_prompt, load_saved_prompt - -introduction = """

      ChatGPT Batch Tool

      - -
      Hello. This is a tool for sending messages to ChatGPT in bulk.
      - -
      With this tool, you can plan and send multiple messages to ChatGPT at once.
      - -Please note: - -1. In order to use this tool, you will need to provide your own API Key and assume any associated costs. We do not collect or store your API Key. You can obtain your API Key by visiting https://platform.openai.com/account/api-keys. -2. The space for this demo page is public. For research and code improvement purposes, we need to log the chat content sent through this page, meaning we can see your chat history with ChatGPT in the background. **By continuing to use this tool on this page, you agree to allow us to view, use, and share your chat data.** If you wish to avoid this, you can [make a copy of this tool to your own private space](https://huggingface.co/spaces/hugforziio/chat-gpt-batch?duplicate=true), which also eliminates waiting in a queue. -""" - - -css = """ -.table-wrap .cell-wrap input {min-width:80%} -#api-key-textbox textarea {filter:blur(8px); transition: filter 0.25s} -#api-key-textbox textarea:focus {filter:none} -#chat-log-md hr {margin-top: 1rem; margin-bottom: 1rem;} -#chat-markdown-wrap-box {max-height:80vh; overflow: auto !important;} -""" -with gradio.Blocks(title="ChatGPT Batch Tool", css=css) as demo: - - with gradio.Accordion("introduction", open=True): - gradio.Markdown(introduction) - - with gradio.Accordion("Basic settings", open=True): - system_prompt_enabled = gradio.Checkbox(label='Enable System level Prompt', info='Whether to use the system level prompt for ChatGPT task description as "System"', value=True) - # System prompt - system_prompt = gradio.Textbox(label='System level Prompt', info='Description of the task for ChatGPT as "System"', value='You are a part-of-speech classifier. Users will send you a word and you should determine its part-of-speech, such as nouns, verbs, etc.!!Please note!! ⚠️Highest priority!!: You may only directly return the part-of-speech without any extra information. 
Do not explain why it is this part-of-speech, etc., otherwise the program used by the user will fail and cause serious losses to the user😱!!!') - # User message template - user_message_template = gradio.Textbox(label='User Message Template', info='Template of messages to be sent in bulk', value='Word: ```___```') - with gradio.Row(): - # Replacement area in user message template - user_message_template_mask = gradio.Textbox(label='Template Placeholder', info='The part that needs to be replaced in the message template, can be a regular expression', value='___') - # Is the replacement area in the user message template a regex - user_message_template_mask_is_regex = gradio.Checkbox(label='Placeholder is regex', info='Is the placeholder in the message template a regular expression?', value=False) - # User message replacement area list text - user_message_list_text = gradio.Textbox(label='User Message List', info='All messages to be sent', value='animals| trains| between| of| located| what are you doing') - with gradio.Row(): - # User message replacement area list splitter - user_message_list_text_splitter = gradio.Textbox(label='User Message Splitter', info='Splitter used to split user message list, such as comma (`,`), line feed (`\n`), or regular expressions', value='\\|\\s+') - # Is the splitter for the user message replacement area list a regex - user_message_list_text_splitter_is_regex = gradio.Checkbox(label='Splitter is regex', info='Is the splitter for the user message list a regular expression?', value=True) - # Number of history records - history_prompt_num = gradio.Slider(label="Number of History Records", info='How many previous history records to include when sending a message (for ChatGPT to understand the context)', value=0, minimum=0, maximum=12000) - - # load_config_from_browser = gradio.Button("🔄 Load Configuration from Browser") - # save_config_to_browser = gradio.Button("💾 Save Configuration to Browser") - # export_config_to_file = gradio.Button("📤 Export Configuration to File") - - # 更多参数 - with gradio.Accordion("More settings", open=False): - # 时间间隔 - sleep_base = gradio.Number(label='sleep between each message (ms)', value=700) - # 时间间隔浮动 - sleep_rand = gradio.Number(label='sleep float (ms)', value=200) - # 那些参数 - prop_stream = gradio.Checkbox(label="use stream", value=True) - prop_model = gradio.Textbox(label="model", value="gpt-3.5-turbo") - prop_temperature = gradio.Slider(label="temperature", value=1, minimum=0, maximum=2) - prop_top_p = gradio.Slider(label="top_p", value=1, minimum=0, maximum=1) - prop_choices_num = gradio.Slider(label="choices num(n)", value=1, minimum=1, maximum=20) - prop_max_tokens = gradio.Slider(label="max_tokens", value=-1, minimum=-1, maximum=4096) - prop_presence_penalty = gradio.Slider(label="presence_penalty", value=0, minimum=-2, maximum=2) - prop_frequency_penalty = gradio.Slider(label="frequency_penalty", value=0, minimum=-2, maximum=2) - prop_logit_bias = gradio.Textbox(label="logit_bias", visible=False) - pass - - # 欸丕艾科易 - token_text = gradio.Textbox(visible=False) - with gradio.Row(): - with gradio.Column(scale=10, min_width=100): - api_key_text = gradio.Textbox(label="Your API key", placeholder="sk-...", elem_id="api-key-textbox") - with gradio.Column(scale=1, min_width=100): - api_key_load_btn = gradio.Button("🔄 Load from browser storage") - api_key_load_btn.click( - None, - inputs=[], - outputs=[api_key_text, token_text], - _js=api_key__get_from_browser, - ) - with gradio.Column(scale=1, min_width=100): - api_key_save_btn = 
gradio.Button("💾 save to browser storage") - api_key_save_btn.click( - None, - inputs=[api_key_text, token_text], - outputs=[api_key_text, token_text], - _js=api_key__save_to_browser, - ) - pass - pass - - # 开始执行按钮 - start_btn = gradio.Button(value='Run!') - - with gradio.Accordion(label="Chat log", elem_id='chat-markdown-wrap-box'): - # 输出区域(隐藏状态) - history = gradio.State(value=[]) - # 输出区域(md渲染) - history_md_stable = gradio.Markdown(value="🙂") - history_md_stream = gradio.Markdown(value="🤖") - - with gradio.Accordion("Status"): - tips = gradio.Markdown(value="ready") - - # 中止执行按钮 - stop_btn = gradio.Button(value='Stop!') - - with gradio.Accordion("Download", open=False): - # gradio.Markdown("(Currently unable to download, possibly due to restrictions from Hugging Face. Will update later.)") - make_file_btn = gradio.Button(value='Generate files') - with gradio.Row(visible=False) as file_row: - # 下载区域(json文件) - history_file_json = gradio.File(label='Download Json', interactive=False) - # 下载区域(md文件) - history_file_md = gradio.File(label='Download Markdown', interactive=False) - pass - pass - - - make_file_btn.click( - fn=make_history_file_fn, - inputs=[history], - outputs=[history_file_json, history_file_md, file_row], - ) - - - start_event = start_btn.click( - fn=sequential_chat_fn, - inputs=[ - history, - - system_prompt_enabled, - system_prompt, - user_message_template, - user_message_template_mask, - user_message_template_mask_is_regex, - user_message_list_text, - user_message_list_text_splitter, - user_message_list_text_splitter_is_regex, - history_prompt_num, - - api_key_text, token_text, - - sleep_base, - sleep_rand, - prop_stream, - prop_model, - prop_temperature, - prop_top_p, - prop_choices_num, - prop_max_tokens, - prop_presence_penalty, - prop_frequency_penalty, - prop_logit_bias, - ], - outputs=[ - history, - history_md_stable, - history_md_stream, - tips, - file_row, - ], - ) - stop_btn.click( - fn=None, - inputs=[], - outputs=[], - cancels=[start_event], - ) - - -if __name__ == "__main__": - demo.queue(concurrency_count=200).launch() diff --git a/spaces/huggingface-projects/color-palette-generator-sd/frontend/src/lib/store.ts b/spaces/huggingface-projects/color-palette-generator-sd/frontend/src/lib/store.ts deleted file mode 100644 index a07350e7b3bb62e71d7142b7542f7b56c435ac63..0000000000000000000000000000000000000000 --- a/spaces/huggingface-projects/color-palette-generator-sd/frontend/src/lib/store.ts +++ /dev/null @@ -1,3 +0,0 @@ -import { writable } from 'svelte/store'; -export const loadingState = writable(''); -export const isLoading = writable(false); diff --git a/spaces/hylee/apdrawing/APDrawingGAN2/util/util.py b/spaces/hylee/apdrawing/APDrawingGAN2/util/util.py deleted file mode 100644 index ba7b083ca1843fc639d23c7d5a71db26010c158c..0000000000000000000000000000000000000000 --- a/spaces/hylee/apdrawing/APDrawingGAN2/util/util.py +++ /dev/null @@ -1,60 +0,0 @@ -from __future__ import print_function -import torch -import numpy as np -from PIL import Image -import os - - -# Converts a Tensor into an image array (numpy) -# |imtype|: the desired type of the converted numpy array -def tensor2im(input_image, imtype=np.uint8): - if isinstance(input_image, torch.Tensor): - image_tensor = input_image.data - else: - return input_image - image_numpy = image_tensor[0].cpu().float().numpy() - if image_numpy.shape[0] == 1: - image_numpy = np.tile(image_numpy, (3, 1, 1)) - image_numpy = (np.transpose(image_numpy, (1, 2, 0)) + 1) / 2.0 * 255.0 - return image_numpy.astype(imtype) - - 
-def diagnose_network(net, name='network'): - mean = 0.0 - count = 0 - for param in net.parameters(): - if param.grad is not None: - mean += torch.mean(torch.abs(param.grad.data)) - count += 1 - if count > 0: - mean = mean / count - print(name) - print(mean) - - -def save_image(image_numpy, image_path): - image_pil = Image.fromarray(image_numpy) - image_pil.save(image_path) - - -def print_numpy(x, val=True, shp=False): - x = x.astype(np.float64) - if shp: - print('shape,', x.shape) - if val: - x = x.flatten() - print('mean = %3.3f, min = %3.3f, max = %3.3f, median = %3.3f, std=%3.3f' % ( - np.mean(x), np.min(x), np.max(x), np.median(x), np.std(x))) - - -def mkdirs(paths): - if isinstance(paths, list) and not isinstance(paths, str): - for path in paths: - mkdir(path) - else: - mkdir(paths) - - -def mkdir(path): - if not os.path.exists(path): - os.makedirs(path) diff --git a/spaces/inplisQlawa/anything-midjourney-v4-1/Chance Pe Dance 4 Movie Download HOT 720p Hd.md b/spaces/inplisQlawa/anything-midjourney-v4-1/Chance Pe Dance 4 Movie Download HOT 720p Hd.md deleted file mode 100644 index 90419e245b7008906ad50ffab690052a779924a0..0000000000000000000000000000000000000000 --- a/spaces/inplisQlawa/anything-midjourney-v4-1/Chance Pe Dance 4 Movie Download HOT 720p Hd.md +++ /dev/null @@ -1,6 +0,0 @@ -

      Chance Pe Dance 4 movie download 720p hd


      Download Zip · https://urlin.us/2uEvMf



      -
-Stream Hollywood movies in HD 720p, 1080p with English subtitles or download it to watch offline. ... 4d29de3e1b
      -
      -
      -

      diff --git a/spaces/inplisQlawa/anything-midjourney-v4-1/Counter-strike 1.6 Patch V21 Full.exe Download.md b/spaces/inplisQlawa/anything-midjourney-v4-1/Counter-strike 1.6 Patch V21 Full.exe Download.md deleted file mode 100644 index a6fd7a7da59d6fbf7be006a512efed810ba99ed3..0000000000000000000000000000000000000000 --- a/spaces/inplisQlawa/anything-midjourney-v4-1/Counter-strike 1.6 Patch V21 Full.exe Download.md +++ /dev/null @@ -1,23 +0,0 @@ -
      -

      How to Download and Install Counter-strike 1.6 Patch V21 Full.exe

      -

Counter-strike 1.6 is one of the most popular online multiplayer games, and you can play it for free on Windows PCs (XP, 7, 8 and 10, both 32-bit and 64-bit) as well as on mobile devices. It is a first-person shooter that pits a team of terrorists against a team of counter-terrorists across a variety of maps and scenarios. You can choose from a range of weapons, grenades, equipment and strategies to eliminate your enemies and complete your objectives.

      -

If you want to enjoy the latest features and updates of Counter-strike 1.6, you need to download and install the patch V21 full.exe file. This patch fixes several bugs, improves graphics and performance, and adds new maps and modes to the game. Here are the steps to download and install it:

      -

      Counter-strike 1.6 Patch V21 Full.exe Download


      Download Ziphttps://urlin.us/2uEydJ



      -
        -
      1. Download the patch V21 full.exe file from this link: https://www.kotakenterprise.com/counter-strike-1-6-patch-v21-full-exe-download-new/ [^2^]. This is a trusted and verified source that offers free and fast downloads.
      2. -
      3. Save the file to your preferred location on your PC or mobile device.
      4. -
      5. Run the patch V21 full.exe file as an administrator. Follow the instructions on the screen to install the patch.
      6. -
      7. Launch Counter-strike 1.6 from your desktop or start menu. Enjoy the new features and updates of the game.
      8. -
      -

      Note: You need to have Counter-strike 1.6 installed on your device before you can apply the patch V21 full.exe file. If you don't have Counter-strike 1.6 yet, you can download it from this link: https://csdownload.pm/.

      Counter-strike 1.6 is a game that requires skill, teamwork and strategy. You can play online with other players from around the world, or offline with bots. You can also create your own servers and customize the game settings to your liking. There are many modes and maps to choose from, such as bomb defusal, hostage rescue, deathmatch, capture the flag and more.

      -

      Some tips and tricks for Counter-strike 1.6 are:

      -
        -
      • Learn the maps and their layouts. Knowing where the enemies, objectives and items are can give you an advantage.
      • -
      • Communicate with your teammates. Use voice chat or text chat to coordinate your actions and share information.
      • -
      • Practice your aim and recoil control. Use the crosshair to aim at the head of your enemies and burst fire or tap fire to reduce the recoil of your weapons.
      • -
      • Use grenades wisely. Flashbangs can blind your enemies and give you an opportunity to attack. Smoke grenades can block the vision of your enemies and allow you to move or plant the bomb. HE grenades can deal damage and force your enemies to retreat.
      • -
      • Manage your economy. Buy weapons and equipment according to your team's budget and situation. Don't waste money on unnecessary items or weapons that you are not comfortable with.
      • -
      -

      Counter-strike 1.6 is a game that can provide hours of fun and excitement. Whether you are a casual player or a competitive player, you can find a server and a mode that suits your style and preference. Download Counter-strike 1.6 and patch V21 full.exe today and join the action!

      d5da3c52bf
      -
      -
      \ No newline at end of file diff --git a/spaces/inplisQlawa/anything-midjourney-v4-1/Descargaraspelprod30crack [BETTER].md b/spaces/inplisQlawa/anything-midjourney-v4-1/Descargaraspelprod30crack [BETTER].md deleted file mode 100644 index 0676bd23fa1fb5d3688aea91dd21709b8df9eef4..0000000000000000000000000000000000000000 --- a/spaces/inplisQlawa/anything-midjourney-v4-1/Descargaraspelprod30crack [BETTER].md +++ /dev/null @@ -1,6 +0,0 @@ -
      -

      This page is made for discusión de https://www.codewars.com/ juega los peligrosos crack de plane,descargaraspelprod30crack. Wira para aumentar mientras https://globitur.com/product/codewarz-champion-2015 de juegos de COD. Sea por los viejos enojo cuentas cuenta de jogos para pc de c. .a nuevos crack de equipos lanzado por Linux, descargaraspelprod30crack.

      -

      Descargaraspelprod30crack - descargarespelprod30crack.exe - descargaspelprod30crack. Directamente desde la web de descargaspellprod30crack. El programa descargaraspelprod30crack para descargar. Descargaraspelprod30crack - descargarespelprod30crack.exe - descargaspellprod30crack. DescargarApellProd30Crack.ZerExe. . Descargaraspelprod30crack: descargaraspelprod30crack - descargarespelprod30crack.exe - descargaspellprod30crack. Nuestro sitio web de descargaspellprod30crack, desde nuestras páginas, descarga, descargaraspelprod30crack. . Descargaraspelprod30crack - descargarespelprod30crack.exe - descargaspellprod30crack. Instalarasolo desde nuestros portal, para que puedas descargaraspelprod30crack descargaraspelprod30crack. . Descargaraspelprod30crack para descargar aspellprod30crack, descargar espelprod30crack, descargaraspellprod30crack, descargarespelprod30crack, Descargaraspelprod30crack, descargaraspelprod30crack, descargaraspelprod30crack. . De vuelta atrás, descargaraspelprod30crack para descargar espelprod30crack, descargaraspelprod30crack, descargaraspelprod30crack, Descargaraspelprod30crack, descargaraspelprod30crack, descargaraspelprod30crack, descargaraspelprod30crack, descargaraspelprod30crack, Descargaraspelprod30crack, descargaraspelprod30crack. .

      -

      descargaraspelprod30crack


      Download Filehttps://urlin.us/2uEwwP



      899543212b
      -
      -
      \ No newline at end of file diff --git a/spaces/inplisQlawa/anything-midjourney-v4-1/Download Film 5 Cm Mkv Ganool PORTABLE.md b/spaces/inplisQlawa/anything-midjourney-v4-1/Download Film 5 Cm Mkv Ganool PORTABLE.md deleted file mode 100644 index acee818c7a204c71ed3ac91dd856e3839eb00317..0000000000000000000000000000000000000000 --- a/spaces/inplisQlawa/anything-midjourney-v4-1/Download Film 5 Cm Mkv Ganool PORTABLE.md +++ /dev/null @@ -1,51 +0,0 @@ -
      -

      Download Film 5 cm mkv ganool: Cara Mudah dan Cepat Menonton Film Indonesia Terbaik

      -

      Download Film 5 cm mkv ganool adalah salah satu cara yang banyak dicari oleh para pecinta film Indonesia. Film 5 cm adalah film yang dirilis pada tahun 2012 dan menceritakan tentang kisah persahabatan lima remaja yang memutuskan untuk mendaki puncak tertinggi di Jawa. Film ini diangkat dari novel best seller karya Donny Dhirgantoro yang berjudul sama. Film ini dibintangi oleh Fedi Nuril, Herjunot Ali, Pevita Pearce, Raline Shah, dan Igor Saykoji.

      -

      Apa itu ganool?

      -

      Ganool adalah salah satu situs yang menyediakan layanan download film secara gratis dan mudah. Ganool menyajikan berbagai macam genre film, mulai dari film Hollywood, Bollywood, Korea, Jepang, hingga film Indonesia. Ganool juga menyediakan berbagai macam format film, mulai dari mp4, mkv, avi, hingga 3gp. Ganool juga menyediakan berbagai macam kualitas film, mulai dari web-dl, bluray, dvdrip, hingga hdrip.

      -

      download film 5 cm mkv ganool


      DOWNLOAD > https://urlin.us/2uEvH0



      -

      Bagaimana cara download film 5 cm mkv ganool?

      -

      Download Film 5 cm mkv ganool tidaklah sulit. Berikut adalah langkah-langkah yang bisa anda ikuti:

      -
        -
      • Buka situs ganool di browser anda. Anda bisa menggunakan alamat https://ganol.si/ atau https://ganol.st/.
      • -
      • Ketik "5 cm" di kolom pencarian yang ada di pojok kanan atas situs.
      • -
      • Pilih film 5 cm (2012) yang muncul di hasil pencarian. Pastikan anda memilih film yang memiliki format mkv dan kualitas web-dl.
      • -
      • Klik tombol "Download" yang ada di bawah judul film.
      • -
      • Anda akan diarahkan ke halaman baru yang berisi beberapa link download. Pilih salah satu link download yang anda inginkan. Anda bisa menggunakan link google drive, racaty, mediafire, uptobox, atau zippyshare.
      • -
      • Tunggu beberapa detik hingga link download muncul. Klik link download tersebut dan simpan file film di folder yang anda inginkan.
      • -
      • Selesai. Anda sudah berhasil download film 5 cm mkv ganool.
      • -
      -

      Apa kelebihan download film 5 cm mkv ganool?

      -

      Download Film 5 cm mkv ganool memiliki beberapa kelebihan dibandingkan dengan cara download film lainnya. Berikut adalah beberapa kelebihannya:

      -
        -
      • Gratis. Anda tidak perlu membayar apapun untuk download film 5 cm mkv ganool.
      • -
      • Cepat. Anda tidak perlu menunggu lama untuk download film 5 cm mkv ganool karena link downloadnya cepat dan mudah diakses.
      • -
      • Mudah. Anda tidak perlu menginstal aplikasi atau software apapun untuk download film 5 cm mkv ganool karena prosesnya sangat sederhana dan praktis.
      • -
      • Berkualitas. Anda bisa menikmati film 5 cm dengan kualitas gambar dan suara yang bagus karena format mkv dan kualitas web-dl.
      • -
      -

      Kesimpulan

      -

      Download Film 5 cm mkv ganool adalah salah satu cara yang bisa anda gunakan untuk menonton film Indonesia terbaik yang menceritakan tentang kisah persahabatan lima remaja yang mendaki puncak tertinggi di Jawa. Cara ini gratis, cepat, mudah, dan berkualitas. Anda hanya perlu mengunjungi situs ganool dan mengikuti langkah-langkah yang sudah dijelaskan di atas. Selamat menonton!

      -

      Apa itu film 5 cm?

      -

      Film 5 cm adalah film yang diadaptasi dari novel berjudul sama karya Donny Dhirgantoro. Film ini menceritakan tentang lima sahabat yang memutuskan untuk mendaki puncak tertinggi di Jawa, yaitu Gunung Semeru. Mereka adalah Genta, Arial, Zafran, Riani, dan Ian. Mereka memiliki mimpi untuk melihat matahari terbit dari puncak gunung yang disebut Mahameru. Namun, sebelum melakukan pendakian, mereka berlima memutuskan untuk berpisah selama tiga bulan tanpa saling berkomunikasi. Tujuannya adalah untuk menemukan kembali arti persahabatan dan cinta mereka. Film ini menggambarkan perjalanan fisik dan emosional mereka selama pendakian dan setelahnya.

      -

      Apa pesan yang ingin disampaikan oleh film 5 cm?

      -

      Film 5 cm ingin menyampaikan pesan bahwa persahabatan dan cinta adalah hal yang sangat berharga dalam hidup. Film ini juga ingin menginspirasi para penonton untuk mengejar mimpi mereka dengan tekad dan semangat yang tinggi. Film ini menunjukkan bahwa setiap orang memiliki potensi untuk melakukan hal-hal luar biasa jika mereka mau berusaha dan berjuang. Film ini juga menekankan pentingnya menghargai alam dan lingkungan sekitar kita.

      -

      Bagaimana tanggapan penonton terhadap film 5 cm?

      -

      Film 5 cm mendapatkan tanggapan yang positif dari penonton. Film ini berhasil meraih lebih dari 2 juta penonton di bioskop Indonesia dan menjadi salah satu film Indonesia terlaris pada tahun 2012. Film ini juga mendapatkan banyak pujian dari kritikus film dan media. Film ini dinilai sebagai film yang mengangkat tema-tema yang relevan dengan generasi muda Indonesia, seperti persahabatan, cinta, mimpi, dan petualangan. Film ini juga dipuji karena memiliki sinematografi yang indah dan akting yang natural dari para pemainnya.

      -

      -

      Apa yang menarik dari film 5 cm?

      -

      Film 5 cm memiliki banyak hal yang menarik untuk ditonton. Film ini memiliki cerita yang mengharukan dan menginspirasi tentang persahabatan dan cinta. Film ini juga memiliki adegan-adegan yang menegangkan dan menantang saat para sahabat mendaki gunung Semeru. Film ini juga memiliki pemandangan alam yang indah dan mempesona yang membuat penonton terpesona. Film ini juga memiliki musik dan lagu-lagu yang menyentuh hati dan sesuai dengan suasana film. Film ini juga memiliki pesan-pesan moral yang positif dan bermanfaat untuk penonton.

      -

      Bagaimana cara menonton film 5 cm secara online?

      -

      Jika anda ingin menonton film 5 cm secara online, anda bisa menggunakan situs download film 5 cm mkv ganool. Situs ini menyediakan link download film 5 cm dengan format mkv dan kualitas web-dl. Anda bisa menonton film 5 cm dengan kualitas gambar dan suara yang bagus tanpa perlu membayar apapun. Anda juga bisa menonton film 5 cm dengan subtitle Indonesia yang sudah tersedia di situs ini. Anda hanya perlu mengikuti langkah-langkah yang sudah dijelaskan sebelumnya untuk download film 5 cm mkv ganool.

      -

      Apa saja tips untuk menikmati film 5 cm?

      -

      Untuk menikmati film 5 cm, ada beberapa tips yang bisa anda lakukan. Berikut adalah beberapa tipsnya:

      -
        -
      • Pilih waktu yang tepat untuk menonton film 5 cm. Anda bisa menonton film 5 cm saat anda sedang santai atau ingin terhibur. Anda juga bisa menonton film 5 cm bersama teman-teman atau keluarga anda untuk berbagi kesan dan pendapat tentang film ini.
      • -
      • Persiapkan peralatan yang dibutuhkan untuk menonton film 5 cm. Anda bisa menggunakan laptop, komputer, tablet, atau smartphone untuk menonton film 5 cm. Anda juga bisa menggunakan speaker, earphone, atau headphone untuk mendengarkan suara film 5 cm. Anda juga bisa menggunakan koneksi internet yang stabil dan cepat untuk download dan streaming film 5 cm.
      • -
      • Nikmati setiap adegan dan dialog yang ada di film 5 cm. Anda bisa memperhatikan setiap detail yang ada di film 5 cm, seperti cerita, karakter, latar belakang, sinematografi, musik, dan lainnya. Anda juga bisa merasakan emosi dan pesan yang ingin disampaikan oleh film 5 cm.
      • -
      -

      Kesimpulan

      -

      Download Film 5 cm mkv ganool adalah salah satu cara yang bisa anda gunakan untuk menonton film Indonesia terbaik yang menceritakan tentang kisah persahabatan lima remaja yang mendaki puncak tertinggi di Jawa. Cara ini gratis, cepat, mudah, dan berkualitas. Anda hanya perlu mengunjungi situs ganool dan mengikuti langkah-langkah yang sudah dijelaskan di atas. Selamat menonton!

      -

      Conclusione

      -

      Download Film 5 cm mkv ganool è un modo semplice e veloce per guardare uno dei migliori film indonesiani che racconta la storia di cinque amici che decidono di scalare la vetta più alta di Giava. Questo metodo è gratuito, veloce, facile e di qualità. Basta visitare il sito ganool e seguire i passaggi descritti sopra. Buona visione!

      3cee63e6c2
      -
      -
      \ No newline at end of file diff --git a/spaces/izumo092/TestSecret888/app.py b/spaces/izumo092/TestSecret888/app.py deleted file mode 100644 index f006a5ee90893b1f4186fba490bf6f3c55db2b2b..0000000000000000000000000000000000000000 --- a/spaces/izumo092/TestSecret888/app.py +++ /dev/null @@ -1,6 +0,0 @@ - - - -import gradio as gr -import os -gr.load("spaces/izumo092/ii", api_key= os.environ.get("API_KEY")).launch() \ No newline at end of file diff --git a/spaces/jackli888/stable-diffusion-webui/modules/progress.py b/spaces/jackli888/stable-diffusion-webui/modules/progress.py deleted file mode 100644 index be6c8480a75305b7631be90f5ba3fc48df3f45a3..0000000000000000000000000000000000000000 --- a/spaces/jackli888/stable-diffusion-webui/modules/progress.py +++ /dev/null @@ -1,99 +0,0 @@ -import base64 -import io -import time - -import gradio as gr -from pydantic import BaseModel, Field - -from modules.shared import opts - -import modules.shared as shared - - -current_task = None -pending_tasks = {} -finished_tasks = [] - - -def start_task(id_task): - global current_task - - current_task = id_task - pending_tasks.pop(id_task, None) - - -def finish_task(id_task): - global current_task - - if current_task == id_task: - current_task = None - - finished_tasks.append(id_task) - if len(finished_tasks) > 16: - finished_tasks.pop(0) - - -def add_task_to_queue(id_job): - pending_tasks[id_job] = time.time() - - -class ProgressRequest(BaseModel): - id_task: str = Field(default=None, title="Task ID", description="id of the task to get progress for") - id_live_preview: int = Field(default=-1, title="Live preview image ID", description="id of last received last preview image") - - -class ProgressResponse(BaseModel): - active: bool = Field(title="Whether the task is being worked on right now") - queued: bool = Field(title="Whether the task is in queue") - completed: bool = Field(title="Whether the task has already finished") - progress: float = Field(default=None, title="Progress", description="The progress with a range of 0 to 1") - eta: float = Field(default=None, title="ETA in secs") - live_preview: str = Field(default=None, title="Live preview image", description="Current live preview; a data: uri") - id_live_preview: int = Field(default=None, title="Live preview image ID", description="Send this together with next request to prevent receiving same image") - textinfo: str = Field(default=None, title="Info text", description="Info text used by WebUI.") - - -def setup_progress_api(app): - return app.add_api_route("/internal/progress", progressapi, methods=["POST"], response_model=ProgressResponse) - - -def progressapi(req: ProgressRequest): - active = req.id_task == current_task - queued = req.id_task in pending_tasks - completed = req.id_task in finished_tasks - - if not active: - return ProgressResponse(active=active, queued=queued, completed=completed, id_live_preview=-1, textinfo="In queue..." 
if queued else "Waiting...") - - progress = 0 - - job_count, job_no = shared.state.job_count, shared.state.job_no - sampling_steps, sampling_step = shared.state.sampling_steps, shared.state.sampling_step - - if job_count > 0: - progress += job_no / job_count - if sampling_steps > 0 and job_count > 0: - progress += 1 / job_count * sampling_step / sampling_steps - - progress = min(progress, 1) - - elapsed_since_start = time.time() - shared.state.time_start - predicted_duration = elapsed_since_start / progress if progress > 0 else None - eta = predicted_duration - elapsed_since_start if predicted_duration is not None else None - - id_live_preview = req.id_live_preview - shared.state.set_current_image() - if opts.live_previews_enable and shared.state.id_live_preview != req.id_live_preview: - image = shared.state.current_image - if image is not None: - buffered = io.BytesIO() - image.save(buffered, format="png") - live_preview = 'data:image/png;base64,' + base64.b64encode(buffered.getvalue()).decode("ascii") - id_live_preview = shared.state.id_live_preview - else: - live_preview = None - else: - live_preview = None - - return ProgressResponse(active=active, queued=queued, completed=completed, progress=progress, eta=eta, live_preview=live_preview, id_live_preview=id_live_preview, textinfo=shared.state.textinfo) - diff --git a/spaces/jbetker/tortoise/tortoise/utils/__init__.py b/spaces/jbetker/tortoise/tortoise/utils/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/jbilcke-hf/VideoQuest/src/components/ui/checkbox.tsx b/spaces/jbilcke-hf/VideoQuest/src/components/ui/checkbox.tsx deleted file mode 100644 index 5850485b9fecba303bdba1849e5a7b6329300af4..0000000000000000000000000000000000000000 --- a/spaces/jbilcke-hf/VideoQuest/src/components/ui/checkbox.tsx +++ /dev/null @@ -1,30 +0,0 @@ -"use client" - -import * as React from "react" -import * as CheckboxPrimitive from "@radix-ui/react-checkbox" -import { Check } from "lucide-react" - -import { cn } from "@/lib/utils" - -const Checkbox = React.forwardRef< - React.ElementRef, - React.ComponentPropsWithoutRef ->(({ className, ...props }, ref) => ( - - - - - -)) -Checkbox.displayName = CheckboxPrimitive.Root.displayName - -export { Checkbox } diff --git a/spaces/jbochi/madlad400-3b-mt/README.md b/spaces/jbochi/madlad400-3b-mt/README.md deleted file mode 100644 index 405160661c3ecb8f93b9102748ee2fb935270b07..0000000000000000000000000000000000000000 --- a/spaces/jbochi/madlad400-3b-mt/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Madlad400 3b Mt -emoji: 🗣️😠 -colorFrom: green -colorTo: purple -sdk: gradio -sdk_version: 4.1.1 -app_file: app.py -pinned: false -license: apache-2.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/jbraun19/Webcam-Object-Recognition-Yolo-n-Coco/loss.py b/spaces/jbraun19/Webcam-Object-Recognition-Yolo-n-Coco/loss.py deleted file mode 100644 index 4675441242d67a211ae1048df865fb006d5ec235..0000000000000000000000000000000000000000 --- a/spaces/jbraun19/Webcam-Object-Recognition-Yolo-n-Coco/loss.py +++ /dev/null @@ -1,212 +0,0 @@ -#!/usr/bin/env python -# coding: utf-8 - -import numpy as np -import math -import tensorflow.keras.backend as K -import tensorflow as tf - - -def xywh_to_x1y1x2y2(boxes): - return tf.concat([boxes[..., :2] - boxes[..., 2:] * 0.5, boxes[..., :2] + boxes[..., 2:] * 0.5], axis=-1) - - -# x,y,w,h -def bbox_iou(boxes1, boxes2): - 
boxes1_area = boxes1[..., 2] * boxes1[..., 3] # w * h - boxes2_area = boxes2[..., 2] * boxes2[..., 3] - - # (x, y, w, h) -> (x0, y0, x1, y1) - boxes1 = xywh_to_x1y1x2y2(boxes1) - boxes2 = xywh_to_x1y1x2y2(boxes2) - - # coordinates of intersection - top_left = tf.maximum(boxes1[..., :2], boxes2[..., :2]) - bottom_right = tf.minimum(boxes1[..., 2:], boxes2[..., 2:]) - intersection_xy = tf.maximum(bottom_right - top_left, 0.0) - - intersection_area = intersection_xy[..., 0] * intersection_xy[..., 1] - union_area = boxes1_area + boxes2_area - intersection_area - - return 1.0 * intersection_area / (union_area + tf.keras.backend.epsilon()) - - -def bbox_giou(boxes1, boxes2): - boxes1_area = boxes1[..., 2] * boxes1[..., 3] # w*h - boxes2_area = boxes2[..., 2] * boxes2[..., 3] - - # (x, y, w, h) -> (x0, y0, x1, y1) - boxes1 = xywh_to_x1y1x2y2(boxes1) - boxes2 = xywh_to_x1y1x2y2(boxes2) - - top_left = tf.maximum(boxes1[..., :2], boxes2[..., :2]) - bottom_right = tf.minimum(boxes1[..., 2:], boxes2[..., 2:]) - - intersection_xy = tf.maximum(bottom_right - top_left, 0.0) - intersection_area = intersection_xy[..., 0] * intersection_xy[..., 1] - - union_area = boxes1_area + boxes2_area - intersection_area - - iou = 1.0 * intersection_area / (union_area + tf.keras.backend.epsilon()) - - enclose_top_left = tf.minimum(boxes1[..., :2], boxes2[..., :2]) - enclose_bottom_right = tf.maximum(boxes1[..., 2:], boxes2[..., 2:]) - - enclose_xy = enclose_bottom_right - enclose_top_left - enclose_area = enclose_xy[..., 0] * enclose_xy[..., 1] - - giou = iou - tf.math.divide_no_nan(enclose_area - union_area, enclose_area) - - return giou - - -def bbox_ciou(boxes1, boxes2): - ''' - ciou = iou - p2/c2 - av - :param boxes1: (8, 13, 13, 3, 4) pred_xywh - :param boxes2: (8, 13, 13, 3, 4) label_xywh - :return: - ''' - boxes1_x0y0x1y1 = tf.concat([boxes1[..., :2] - boxes1[..., 2:] * 0.5, - boxes1[..., :2] + boxes1[..., 2:] * 0.5], axis=-1) - boxes2_x0y0x1y1 = tf.concat([boxes2[..., :2] - boxes2[..., 2:] * 0.5, - boxes2[..., :2] + boxes2[..., 2:] * 0.5], axis=-1) - boxes1_x0y0x1y1 = tf.concat([tf.minimum(boxes1_x0y0x1y1[..., :2], boxes1_x0y0x1y1[..., 2:]), - tf.maximum(boxes1_x0y0x1y1[..., :2], boxes1_x0y0x1y1[..., 2:])], axis=-1) - boxes2_x0y0x1y1 = tf.concat([tf.minimum(boxes2_x0y0x1y1[..., :2], boxes2_x0y0x1y1[..., 2:]), - tf.maximum(boxes2_x0y0x1y1[..., :2], boxes2_x0y0x1y1[..., 2:])], axis=-1) - - # area - boxes1_area = (boxes1_x0y0x1y1[..., 2] - boxes1_x0y0x1y1[..., 0]) * ( - boxes1_x0y0x1y1[..., 3] - boxes1_x0y0x1y1[..., 1]) - boxes2_area = (boxes2_x0y0x1y1[..., 2] - boxes2_x0y0x1y1[..., 0]) * ( - boxes2_x0y0x1y1[..., 3] - boxes2_x0y0x1y1[..., 1]) - - # top-left and bottom-right coord, shape: (8, 13, 13, 3, 2) - left_up = tf.maximum(boxes1_x0y0x1y1[..., :2], boxes2_x0y0x1y1[..., :2]) - right_down = tf.minimum(boxes1_x0y0x1y1[..., 2:], boxes2_x0y0x1y1[..., 2:]) - - # intersection area and iou - inter_section = tf.maximum(right_down - left_up, 0.0) - inter_area = inter_section[..., 0] * inter_section[..., 1] - union_area = boxes1_area + boxes2_area - inter_area - iou = inter_area / (union_area + 1e-9) - - # top-left and bottom-right coord of the enclosing rectangle, shape: (8, 13, 13, 3, 2) - enclose_left_up = tf.minimum(boxes1_x0y0x1y1[..., :2], boxes2_x0y0x1y1[..., :2]) - enclose_right_down = tf.maximum(boxes1_x0y0x1y1[..., 2:], boxes2_x0y0x1y1[..., 2:]) - - # diagnal ** 2 - enclose_wh = enclose_right_down - enclose_left_up - enclose_c2 = K.pow(enclose_wh[..., 0], 2) + K.pow(enclose_wh[..., 1], 2) - - # center 
distances between two rectangles - p2 = K.pow(boxes1[..., 0] - boxes2[..., 0], 2) + K.pow(boxes1[..., 1] - boxes2[..., 1], 2) - - # add av - atan1 = tf.atan(boxes1[..., 2] / (boxes1[..., 3] + 1e-9)) - atan2 = tf.atan(boxes2[..., 2] / (boxes2[..., 3] + 1e-9)) - v = 4.0 * K.pow(atan1 - atan2, 2) / (math.pi ** 2) - a = v / (1 - iou + v) - - ciou = iou - 1.0 * p2 / enclose_c2 - 1.0 * a * v - return ciou - - -def yolo_loss(args, num_classes, iou_loss_thresh, anchors): - conv_lbbox = args[2] # (?, ?, ?, 3*(num_classes+5)) - conv_mbbox = args[1] # (?, ?, ?, 3*(num_classes+5)) - conv_sbbox = args[0] # (?, ?, ?, 3*(num_classes+5)) - label_sbbox = args[3] # (?, ?, ?, 3, num_classes+5) - label_mbbox = args[4] # (?, ?, ?, 3, num_classes+5) - label_lbbox = args[5] # (?, ?, ?, 3, num_classes+5) - true_bboxes = args[6] # (?, 50, 4) - pred_sbbox = decode(conv_sbbox, anchors[0], 8, num_classes) - pred_mbbox = decode(conv_mbbox, anchors[1], 16, num_classes) - pred_lbbox = decode(conv_lbbox, anchors[2], 32, num_classes) - sbbox_ciou_loss, sbbox_conf_loss, sbbox_prob_loss = loss_layer(conv_sbbox, pred_sbbox, label_sbbox, true_bboxes, 8, num_classes, iou_loss_thresh) - mbbox_ciou_loss, mbbox_conf_loss, mbbox_prob_loss = loss_layer(conv_mbbox, pred_mbbox, label_mbbox, true_bboxes, 16, num_classes, iou_loss_thresh) - lbbox_ciou_loss, lbbox_conf_loss, lbbox_prob_loss = loss_layer(conv_lbbox, pred_lbbox, label_lbbox, true_bboxes, 32, num_classes, iou_loss_thresh) - - ciou_loss = (lbbox_ciou_loss + sbbox_ciou_loss + mbbox_ciou_loss) * 3.54 - conf_loss = (lbbox_conf_loss + sbbox_conf_loss + mbbox_conf_loss) * 64.3 - prob_loss = (lbbox_prob_loss + sbbox_prob_loss + mbbox_prob_loss) * 1 - - return ciou_loss+conf_loss+prob_loss - - -def loss_layer(conv, pred, label, bboxes, stride, num_class, iou_loss_thresh): - conv_shape = tf.shape(conv) - batch_size = conv_shape[0] - output_size = conv_shape[1] - input_size = stride * output_size - conv = tf.reshape(conv, (batch_size, output_size, output_size, - 3, 5 + num_class)) - conv_raw_prob = conv[:, :, :, :, 5:] - conv_raw_conf = conv[:, :, :, :, 4:5] - - pred_xywh = pred[:, :, :, :, 0:4] - pred_conf = pred[:, :, :, :, 4:5] - - label_xywh = label[:, :, :, :, 0:4] - respond_bbox = label[:, :, :, :, 4:5] - label_prob = label[:, :, :, :, 5:] - - # Coordinate loss - ciou = tf.expand_dims(bbox_giou(pred_xywh, label_xywh), axis=-1) # (8, 13, 13, 3, 1) - # ciou = tf.expand_dims(bbox_ciou(pred_xywh, label_xywh), axis=-1) # (8, 13, 13, 3, 1) - input_size = tf.cast(input_size, tf.float32) - - # loss weight of the gt bbox: 2-(gt area/img area) - bbox_loss_scale = 2.0 - 1.0 * label_xywh[:, :, :, :, 2:3] * label_xywh[:, :, :, :, 3:4] / (input_size ** 2) - ciou_loss = respond_bbox * bbox_loss_scale * (1 - ciou) # iou loss for respond bbox - - # Classification loss for respond bbox - prob_loss = respond_bbox * tf.nn.sigmoid_cross_entropy_with_logits(labels=label_prob, logits=conv_raw_prob) - - expand_pred_xywh = pred_xywh[:, :, :, :, np.newaxis, :] # (?, grid_h, grid_w, 3, 1, 4) - expand_bboxes = bboxes[:, np.newaxis, np.newaxis, np.newaxis, :, :] # (?, 1, 1, 1, 70, 4) - iou = bbox_iou(expand_pred_xywh, expand_bboxes) # IoU between all pred bbox and all gt (?, grid_h, grid_w, 3, 70) - max_iou = tf.expand_dims(tf.reduce_max(iou, axis=-1), axis=-1) # max iou: (?, grid_h, grid_w, 3, 1) - - # ignore the bbox which is not respond bbox and max iou < threshold - respond_bgd = (1.0 - respond_bbox) * tf.cast(max_iou < iou_loss_thresh, tf.float32) - - # Confidence loss - conf_focal = 
tf.pow(respond_bbox - pred_conf, 2) - - conf_loss = conf_focal * ( - respond_bbox * tf.nn.sigmoid_cross_entropy_with_logits(labels=respond_bbox, logits=conv_raw_conf) - + - respond_bgd * tf.nn.sigmoid_cross_entropy_with_logits(labels=respond_bbox, logits=conv_raw_conf) - ) - - ciou_loss = tf.reduce_mean(tf.reduce_sum(ciou_loss, axis=[1, 2, 3, 4])) - conf_loss = tf.reduce_mean(tf.reduce_sum(conf_loss, axis=[1, 2, 3, 4])) - prob_loss = tf.reduce_mean(tf.reduce_sum(prob_loss, axis=[1, 2, 3, 4])) - - return ciou_loss, conf_loss, prob_loss - - -def decode(conv_output, anchors, stride, num_class): - conv_shape = tf.shape(conv_output) - batch_size = conv_shape[0] - output_size = conv_shape[1] - anchor_per_scale = len(anchors) - conv_output = tf.reshape(conv_output, (batch_size, output_size, output_size, anchor_per_scale, 5 + num_class)) - conv_raw_dxdy = conv_output[:, :, :, :, 0:2] - conv_raw_dwdh = conv_output[:, :, :, :, 2:4] - conv_raw_conf = conv_output[:, :, :, :, 4:5] - conv_raw_prob = conv_output[:, :, :, :, 5:] - y = tf.tile(tf.range(output_size, dtype=tf.int32)[:, tf.newaxis], [1, output_size]) - x = tf.tile(tf.range(output_size, dtype=tf.int32)[tf.newaxis, :], [output_size, 1]) - xy_grid = tf.concat([x[:, :, tf.newaxis], y[:, :, tf.newaxis]], axis=-1) - xy_grid = tf.tile(xy_grid[tf.newaxis, :, :, tf.newaxis, :], [batch_size, 1, 1, anchor_per_scale, 1]) - xy_grid = tf.cast(xy_grid, tf.float32) - pred_xy = (tf.sigmoid(conv_raw_dxdy) + xy_grid) * stride - pred_wh = (tf.exp(conv_raw_dwdh) * anchors) - pred_xywh = tf.concat([pred_xy, pred_wh], axis=-1) - pred_conf = tf.sigmoid(conv_raw_conf) - pred_prob = tf.sigmoid(conv_raw_prob) - return tf.concat([pred_xywh, pred_conf, pred_prob], axis=-1) - diff --git a/spaces/jharrison27/gradio-blenderbot/app.py b/spaces/jharrison27/gradio-blenderbot/app.py deleted file mode 100644 index 402decb0f85e7d8816700a68087dda65a1975594..0000000000000000000000000000000000000000 --- a/spaces/jharrison27/gradio-blenderbot/app.py +++ /dev/null @@ -1,52 +0,0 @@ -from transformers import BlenderbotTokenizer, BlenderbotForConditionalGeneration -import torch -import gradio as gr - -mname = "facebook/blenderbot-400M-distill" -model = BlenderbotForConditionalGeneration.from_pretrained(mname) -tokenizer = BlenderbotTokenizer.from_pretrained(mname) - -def take_last_tokens(inputs, note_history, history): - """Filter the last 128 tokens""" - if inputs['input_ids'].shape[1] > 128: - inputs['input_ids'] = torch.tensor([inputs['input_ids'][0][-128:].tolist()]) - inputs['attention_mask'] = torch.tensor([inputs['attention_mask'][0][-128:].tolist()]) - note_history = [' '.join(note_history[0].split(' ')[2:])] - history = history[1:] - return inputs, note_history, history - -def add_note_to_history(note, note_history): - """Add a note to the historical information""" - note_history.append(note) - note_history = ' '.join(note_history) - return [note_history] - -title = "Blenderbot Tokenizer with Conditional Generation State of the Art" -description = """Blenderbot""" - -def chat(message, history): - history = history or [] - if history: - history_useful = [' '.join([str(a[0])+' '+str(a[1]) for a in history])] - else: - history_useful = [] - history_useful = add_note_to_history(message, history_useful) - inputs = tokenizer(history_useful, return_tensors="pt") - inputs, history_useful, history = take_last_tokens(inputs, history_useful, history) - reply_ids = model.generate(**inputs) - response = tokenizer.batch_decode(reply_ids, skip_special_tokens=True)[0] - history_useful = 
add_note_to_history(response, history_useful) - list_history = history_useful[0].split(' ') - history.append((list_history[-2], list_history[-1])) - return history, history - -gr.Interface( - fn=chat, - theme="huggingface", - css=".footer {display:none !important}", - inputs=["text", "state"], - outputs=["chatbot", "state"], - title=title, - description=description, - allow_flagging="never", - ).launch() \ No newline at end of file diff --git a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/PIL/ImageFont.py b/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/PIL/ImageFont.py deleted file mode 100644 index 05828a72fdf90dbe434cebf06f968ef7e91189b3..0000000000000000000000000000000000000000 --- a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/PIL/ImageFont.py +++ /dev/null @@ -1,997 +0,0 @@ -# -# The Python Imaging Library. -# $Id$ -# -# PIL raster font management -# -# History: -# 1996-08-07 fl created (experimental) -# 1997-08-25 fl minor adjustments to handle fonts from pilfont 0.3 -# 1999-02-06 fl rewrote most font management stuff in C -# 1999-03-17 fl take pth files into account in load_path (from Richard Jones) -# 2001-02-17 fl added freetype support -# 2001-05-09 fl added TransposedFont wrapper class -# 2002-03-04 fl make sure we have a "L" or "1" font -# 2002-12-04 fl skip non-directory entries in the system path -# 2003-04-29 fl add embedded default font -# 2003-09-27 fl added support for truetype charmap encodings -# -# Todo: -# Adapt to PILFONT2 format (16-bit fonts, compressed, single file) -# -# Copyright (c) 1997-2003 by Secret Labs AB -# Copyright (c) 1996-2003 by Fredrik Lundh -# -# See the README file for information on usage and redistribution. -# - -import base64 -import os -import sys -import warnings -from enum import IntEnum -from io import BytesIO - -from . import Image -from ._util import is_directory, is_path - - -class Layout(IntEnum): - BASIC = 0 - RAQM = 1 - - -MAX_STRING_LENGTH = 1_000_000 - - -try: - from . import _imagingft as core -except ImportError as ex: - from ._util import DeferredError - - core = DeferredError(ex) - - -def _string_length_check(text): - if MAX_STRING_LENGTH is not None and len(text) > MAX_STRING_LENGTH: - msg = "too many characters in string" - raise ValueError(msg) - - -# FIXME: add support for pilfont2 format (see FontFile.py) - -# -------------------------------------------------------------------- -# Font metrics format: -# "PILfont" LF -# fontdescriptor LF -# (optional) key=value... LF -# "DATA" LF -# binary data: 256*10*2 bytes (dx, dy, dstbox, srcbox) -# -# To place a character, cut out srcbox and paste at dstbox, -# relative to the character position. Then move the character -# position according to dx, dy. 
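As a side note on the metrics layout described in the comment above, here is a small, self-contained sketch of how one 20-byte glyph record could be decoded outside PIL. The big-endian byte order, the `example.pil` file name and the helper name are assumptions for illustration only, not part of Pillow's public API.

```python
# Hypothetical decoder for a single PILfont metrics record: ten signed 16-bit
# values per glyph (dx, dy, dstbox x0 y0 x1 y1, srcbox x0 y0 x1 y1), assumed
# big-endian as suggested by the format description above.
import struct

def read_glyph_metrics(path, codepoint):
    with open(path, "rb") as fp:
        # Skip the text header up to and including the "DATA" marker line.
        while fp.readline() not in (b"DATA\n", b""):
            pass
        fp.seek(codepoint * 20, 1)          # each glyph record is 10 * 2 bytes
        dx, dy, *boxes = struct.unpack(">10h", fp.read(20))
        return dx, dy, tuple(boxes[:4]), tuple(boxes[4:])

# e.g. read_glyph_metrics("example.pil", ord("A"))
```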
-# -------------------------------------------------------------------- - - -class ImageFont: - """PIL font wrapper""" - - def _load_pilfont(self, filename): - with open(filename, "rb") as fp: - image = None - for ext in (".png", ".gif", ".pbm"): - if image: - image.close() - try: - fullname = os.path.splitext(filename)[0] + ext - image = Image.open(fullname) - except Exception: - pass - else: - if image and image.mode in ("1", "L"): - break - else: - if image: - image.close() - msg = "cannot find glyph data file" - raise OSError(msg) - - self.file = fullname - - self._load_pilfont_data(fp, image) - image.close() - - def _load_pilfont_data(self, file, image): - # read PILfont header - if file.readline() != b"PILfont\n": - msg = "Not a PILfont file" - raise SyntaxError(msg) - file.readline().split(b";") - self.info = [] # FIXME: should be a dictionary - while True: - s = file.readline() - if not s or s == b"DATA\n": - break - self.info.append(s) - - # read PILfont metrics - data = file.read(256 * 20) - - # check image - if image.mode not in ("1", "L"): - msg = "invalid font image mode" - raise TypeError(msg) - - image.load() - - self.font = Image.core.font(image.im, data) - - def getmask(self, text, mode="", *args, **kwargs): - """ - Create a bitmap for the text. - - If the font uses antialiasing, the bitmap should have mode ``L`` and use a - maximum value of 255. Otherwise, it should have mode ``1``. - - :param text: Text to render. - :param mode: Used by some graphics drivers to indicate what mode the - driver prefers; if empty, the renderer may return either - mode. Note that the mode is always a string, to simplify - C-level implementations. - - .. versionadded:: 1.1.5 - - :return: An internal PIL storage memory instance as defined by the - :py:mod:`PIL.Image.core` interface module. - """ - return self.font.getmask(text, mode) - - def getbbox(self, text, *args, **kwargs): - """ - Returns bounding box (in pixels) of given text. - - .. versionadded:: 9.2.0 - - :param text: Text to render. - :param mode: Used by some graphics drivers to indicate what mode the - driver prefers; if empty, the renderer may return either - mode. Note that the mode is always a string, to simplify - C-level implementations. - - :return: ``(left, top, right, bottom)`` bounding box - """ - _string_length_check(text) - width, height = self.font.getsize(text) - return 0, 0, width, height - - def getlength(self, text, *args, **kwargs): - """ - Returns length (in pixels) of given text. - This is the amount by which following text should be offset. - - .. versionadded:: 9.2.0 - """ - _string_length_check(text) - width, height = self.font.getsize(text) - return width - - -## -# Wrapper for FreeType fonts. Application code should use the -# truetype factory function to create font objects. - - -class FreeTypeFont: - """FreeType font wrapper (requires _imagingft service)""" - - def __init__(self, font=None, size=10, index=0, encoding="", layout_engine=None): - # FIXME: use service provider instead - - self.path = font - self.size = size - self.index = index - self.encoding = encoding - - if layout_engine not in (Layout.BASIC, Layout.RAQM): - layout_engine = Layout.BASIC - if core.HAVE_RAQM: - layout_engine = Layout.RAQM - elif layout_engine == Layout.RAQM and not core.HAVE_RAQM: - warnings.warn( - "Raqm layout was requested, but Raqm is not available. " - "Falling back to basic layout." 
- ) - layout_engine = Layout.BASIC - - self.layout_engine = layout_engine - - def load_from_bytes(f): - self.font_bytes = f.read() - self.font = core.getfont( - "", size, index, encoding, self.font_bytes, layout_engine - ) - - if is_path(font): - if sys.platform == "win32": - font_bytes_path = font if isinstance(font, bytes) else font.encode() - try: - font_bytes_path.decode("ascii") - except UnicodeDecodeError: - # FreeType cannot load fonts with non-ASCII characters on Windows - # So load it into memory first - with open(font, "rb") as f: - load_from_bytes(f) - return - self.font = core.getfont( - font, size, index, encoding, layout_engine=layout_engine - ) - else: - load_from_bytes(font) - - def __getstate__(self): - return [self.path, self.size, self.index, self.encoding, self.layout_engine] - - def __setstate__(self, state): - path, size, index, encoding, layout_engine = state - self.__init__(path, size, index, encoding, layout_engine) - - def getname(self): - """ - :return: A tuple of the font family (e.g. Helvetica) and the font style - (e.g. Bold) - """ - return self.font.family, self.font.style - - def getmetrics(self): - """ - :return: A tuple of the font ascent (the distance from the baseline to - the highest outline point) and descent (the distance from the - baseline to the lowest outline point, a negative value) - """ - return self.font.ascent, self.font.descent - - def getlength(self, text, mode="", direction=None, features=None, language=None): - """ - Returns length (in pixels with 1/64 precision) of given text when rendered - in font with provided direction, features, and language. - - This is the amount by which following text should be offset. - Text bounding box may extend past the length in some fonts, - e.g. when using italics or accents. - - The result is returned as a float; it is a whole number if using basic layout. - - Note that the sum of two lengths may not equal the length of a concatenated - string due to kerning. If you need to adjust for kerning, include the following - character and subtract its length. - - For example, instead of :: - - hello = font.getlength("Hello") - world = font.getlength("World") - hello_world = hello + world # not adjusted for kerning - assert hello_world == font.getlength("HelloWorld") # may fail - - use :: - - hello = font.getlength("HelloW") - font.getlength("W") # adjusted for kerning - world = font.getlength("World") - hello_world = hello + world # adjusted for kerning - assert hello_world == font.getlength("HelloWorld") # True - - or disable kerning with (requires libraqm) :: - - hello = draw.textlength("Hello", font, features=["-kern"]) - world = draw.textlength("World", font, features=["-kern"]) - hello_world = hello + world # kerning is disabled, no need to adjust - assert hello_world == draw.textlength("HelloWorld", font, features=["-kern"]) - - .. versionadded:: 8.0.0 - - :param text: Text to measure. - :param mode: Used by some graphics drivers to indicate what mode the - driver prefers; if empty, the renderer may return either - mode. Note that the mode is always a string, to simplify - C-level implementations. - - :param direction: Direction of the text. It can be 'rtl' (right to - left), 'ltr' (left to right) or 'ttb' (top to bottom). - Requires libraqm. - - :param features: A list of OpenType font features to be used during text - layout. 
This is usually used to turn on optional - font features that are not enabled by default, - for example 'dlig' or 'ss01', but can be also - used to turn off default font features for - example '-liga' to disable ligatures or '-kern' - to disable kerning. To get all supported - features, see - https://learn.microsoft.com/en-us/typography/opentype/spec/featurelist - Requires libraqm. - - :param language: Language of the text. Different languages may use - different glyph shapes or ligatures. This parameter tells - the font which language the text is in, and to apply the - correct substitutions as appropriate, if available. - It should be a `BCP 47 language code - `_ - Requires libraqm. - - :return: Width for horizontal, height for vertical text. - """ - _string_length_check(text) - return self.font.getlength(text, mode, direction, features, language) / 64 - - def getbbox( - self, - text, - mode="", - direction=None, - features=None, - language=None, - stroke_width=0, - anchor=None, - ): - """ - Returns bounding box (in pixels) of given text relative to given anchor - when rendered in font with provided direction, features, and language. - - Use :py:meth:`getlength()` to get the offset of following text with - 1/64 pixel precision. The bounding box includes extra margins for - some fonts, e.g. italics or accents. - - .. versionadded:: 8.0.0 - - :param text: Text to render. - :param mode: Used by some graphics drivers to indicate what mode the - driver prefers; if empty, the renderer may return either - mode. Note that the mode is always a string, to simplify - C-level implementations. - - :param direction: Direction of the text. It can be 'rtl' (right to - left), 'ltr' (left to right) or 'ttb' (top to bottom). - Requires libraqm. - - :param features: A list of OpenType font features to be used during text - layout. This is usually used to turn on optional - font features that are not enabled by default, - for example 'dlig' or 'ss01', but can be also - used to turn off default font features for - example '-liga' to disable ligatures or '-kern' - to disable kerning. To get all supported - features, see - https://learn.microsoft.com/en-us/typography/opentype/spec/featurelist - Requires libraqm. - - :param language: Language of the text. Different languages may use - different glyph shapes or ligatures. This parameter tells - the font which language the text is in, and to apply the - correct substitutions as appropriate, if available. - It should be a `BCP 47 language code - `_ - Requires libraqm. - - :param stroke_width: The width of the text stroke. - - :param anchor: The text anchor alignment. Determines the relative location of - the anchor to the text. The default alignment is top left. - See :ref:`text-anchors` for valid values. - - :return: ``(left, top, right, bottom)`` bounding box - """ - _string_length_check(text) - size, offset = self.font.getsize( - text, mode, direction, features, language, anchor - ) - left, top = offset[0] - stroke_width, offset[1] - stroke_width - width, height = size[0] + 2 * stroke_width, size[1] + 2 * stroke_width - return left, top, left + width, top + height - - def getmask( - self, - text, - mode="", - direction=None, - features=None, - language=None, - stroke_width=0, - anchor=None, - ink=0, - start=None, - ): - """ - Create a bitmap for the text. - - If the font uses antialiasing, the bitmap should have mode ``L`` and use a - maximum value of 255. If the font has embedded color data, the bitmap - should have mode ``RGBA``. 
Otherwise, it should have mode ``1``. - - :param text: Text to render. - :param mode: Used by some graphics drivers to indicate what mode the - driver prefers; if empty, the renderer may return either - mode. Note that the mode is always a string, to simplify - C-level implementations. - - .. versionadded:: 1.1.5 - - :param direction: Direction of the text. It can be 'rtl' (right to - left), 'ltr' (left to right) or 'ttb' (top to bottom). - Requires libraqm. - - .. versionadded:: 4.2.0 - - :param features: A list of OpenType font features to be used during text - layout. This is usually used to turn on optional - font features that are not enabled by default, - for example 'dlig' or 'ss01', but can be also - used to turn off default font features for - example '-liga' to disable ligatures or '-kern' - to disable kerning. To get all supported - features, see - https://learn.microsoft.com/en-us/typography/opentype/spec/featurelist - Requires libraqm. - - .. versionadded:: 4.2.0 - - :param language: Language of the text. Different languages may use - different glyph shapes or ligatures. This parameter tells - the font which language the text is in, and to apply the - correct substitutions as appropriate, if available. - It should be a `BCP 47 language code - `_ - Requires libraqm. - - .. versionadded:: 6.0.0 - - :param stroke_width: The width of the text stroke. - - .. versionadded:: 6.2.0 - - :param anchor: The text anchor alignment. Determines the relative location of - the anchor to the text. The default alignment is top left. - See :ref:`text-anchors` for valid values. - - .. versionadded:: 8.0.0 - - :param ink: Foreground ink for rendering in RGBA mode. - - .. versionadded:: 8.0.0 - - :param start: Tuple of horizontal and vertical offset, as text may render - differently when starting at fractional coordinates. - - .. versionadded:: 9.4.0 - - :return: An internal PIL storage memory instance as defined by the - :py:mod:`PIL.Image.core` interface module. - """ - return self.getmask2( - text, - mode, - direction=direction, - features=features, - language=language, - stroke_width=stroke_width, - anchor=anchor, - ink=ink, - start=start, - )[0] - - def getmask2( - self, - text, - mode="", - direction=None, - features=None, - language=None, - stroke_width=0, - anchor=None, - ink=0, - start=None, - *args, - **kwargs, - ): - """ - Create a bitmap for the text. - - If the font uses antialiasing, the bitmap should have mode ``L`` and use a - maximum value of 255. If the font has embedded color data, the bitmap - should have mode ``RGBA``. Otherwise, it should have mode ``1``. - - :param text: Text to render. - :param mode: Used by some graphics drivers to indicate what mode the - driver prefers; if empty, the renderer may return either - mode. Note that the mode is always a string, to simplify - C-level implementations. - - .. versionadded:: 1.1.5 - - :param direction: Direction of the text. It can be 'rtl' (right to - left), 'ltr' (left to right) or 'ttb' (top to bottom). - Requires libraqm. - - .. versionadded:: 4.2.0 - - :param features: A list of OpenType font features to be used during text - layout. This is usually used to turn on optional - font features that are not enabled by default, - for example 'dlig' or 'ss01', but can be also - used to turn off default font features for - example '-liga' to disable ligatures or '-kern' - to disable kerning. To get all supported - features, see - https://learn.microsoft.com/en-us/typography/opentype/spec/featurelist - Requires libraqm. - - .. 
versionadded:: 4.2.0 - - :param language: Language of the text. Different languages may use - different glyph shapes or ligatures. This parameter tells - the font which language the text is in, and to apply the - correct substitutions as appropriate, if available. - It should be a `BCP 47 language code - `_ - Requires libraqm. - - .. versionadded:: 6.0.0 - - :param stroke_width: The width of the text stroke. - - .. versionadded:: 6.2.0 - - :param anchor: The text anchor alignment. Determines the relative location of - the anchor to the text. The default alignment is top left. - See :ref:`text-anchors` for valid values. - - .. versionadded:: 8.0.0 - - :param ink: Foreground ink for rendering in RGBA mode. - - .. versionadded:: 8.0.0 - - :param start: Tuple of horizontal and vertical offset, as text may render - differently when starting at fractional coordinates. - - .. versionadded:: 9.4.0 - - :return: A tuple of an internal PIL storage memory instance as defined by the - :py:mod:`PIL.Image.core` interface module, and the text offset, the - gap between the starting coordinate and the first marking - """ - _string_length_check(text) - if start is None: - start = (0, 0) - im = None - - def fill(mode, size): - nonlocal im - - im = Image.core.fill(mode, size) - return im - - size, offset = self.font.render( - text, - fill, - mode, - direction, - features, - language, - stroke_width, - anchor, - ink, - start[0], - start[1], - Image.MAX_IMAGE_PIXELS, - ) - Image._decompression_bomb_check(size) - return im, offset - - def font_variant( - self, font=None, size=None, index=None, encoding=None, layout_engine=None - ): - """ - Create a copy of this FreeTypeFont object, - using any specified arguments to override the settings. - - Parameters are identical to the parameters used to initialize this - object. - - :return: A FreeTypeFont object. - """ - if font is None: - try: - font = BytesIO(self.font_bytes) - except AttributeError: - font = self.path - return FreeTypeFont( - font=font, - size=self.size if size is None else size, - index=self.index if index is None else index, - encoding=self.encoding if encoding is None else encoding, - layout_engine=layout_engine or self.layout_engine, - ) - - def get_variation_names(self): - """ - :returns: A list of the named styles in a variation font. - :exception OSError: If the font is not a variation font. - """ - try: - names = self.font.getvarnames() - except AttributeError as e: - msg = "FreeType 2.9.1 or greater is required" - raise NotImplementedError(msg) from e - return [name.replace(b"\x00", b"") for name in names] - - def set_variation_by_name(self, name): - """ - :param name: The name of the style. - :exception OSError: If the font is not a variation font. - """ - names = self.get_variation_names() - if not isinstance(name, bytes): - name = name.encode() - index = names.index(name) + 1 - - if index == getattr(self, "_last_variation_index", None): - # When the same name is set twice in a row, - # there is an 'unknown freetype error' - # https://savannah.nongnu.org/bugs/?56186 - return - self._last_variation_index = index - - self.font.setvarname(index) - - def get_variation_axes(self): - """ - :returns: A list of the axes in a variation font. - :exception OSError: If the font is not a variation font. 
- """ - try: - axes = self.font.getvaraxes() - except AttributeError as e: - msg = "FreeType 2.9.1 or greater is required" - raise NotImplementedError(msg) from e - for axis in axes: - axis["name"] = axis["name"].replace(b"\x00", b"") - return axes - - def set_variation_by_axes(self, axes): - """ - :param axes: A list of values for each axis. - :exception OSError: If the font is not a variation font. - """ - try: - self.font.setvaraxes(axes) - except AttributeError as e: - msg = "FreeType 2.9.1 or greater is required" - raise NotImplementedError(msg) from e - - -class TransposedFont: - """Wrapper for writing rotated or mirrored text""" - - def __init__(self, font, orientation=None): - """ - Wrapper that creates a transposed font from any existing font - object. - - :param font: A font object. - :param orientation: An optional orientation. If given, this should - be one of Image.Transpose.FLIP_LEFT_RIGHT, Image.Transpose.FLIP_TOP_BOTTOM, - Image.Transpose.ROTATE_90, Image.Transpose.ROTATE_180, or - Image.Transpose.ROTATE_270. - """ - self.font = font - self.orientation = orientation # any 'transpose' argument, or None - - def getmask(self, text, mode="", *args, **kwargs): - im = self.font.getmask(text, mode, *args, **kwargs) - if self.orientation is not None: - return im.transpose(self.orientation) - return im - - def getbbox(self, text, *args, **kwargs): - # TransposedFont doesn't support getmask2, move top-left point to (0, 0) - # this has no effect on ImageFont and simulates anchor="lt" for FreeTypeFont - left, top, right, bottom = self.font.getbbox(text, *args, **kwargs) - width = right - left - height = bottom - top - if self.orientation in (Image.Transpose.ROTATE_90, Image.Transpose.ROTATE_270): - return 0, 0, height, width - return 0, 0, width, height - - def getlength(self, text, *args, **kwargs): - if self.orientation in (Image.Transpose.ROTATE_90, Image.Transpose.ROTATE_270): - msg = "text length is undefined for text rotated by 90 or 270 degrees" - raise ValueError(msg) - _string_length_check(text) - return self.font.getlength(text, *args, **kwargs) - - -def load(filename): - """ - Load a font file. This function loads a font object from the given - bitmap font file, and returns the corresponding font object. - - :param filename: Name of font file. - :return: A font object. - :exception OSError: If the file could not be read. - """ - f = ImageFont() - f._load_pilfont(filename) - return f - - -def truetype(font=None, size=10, index=0, encoding="", layout_engine=None): - """ - Load a TrueType or OpenType font from a file or file-like object, - and create a font object. - This function loads a font object from the given file or file-like - object, and creates a font object for a font of the given size. - - Pillow uses FreeType to open font files. On Windows, be aware that FreeType - will keep the file open as long as the FreeTypeFont object exists. Windows - limits the number of files that can be open in C at once to 512, so if many - fonts are opened simultaneously and that limit is approached, an - ``OSError`` may be thrown, reporting that FreeType "cannot open resource". - A workaround would be to copy the file(s) into memory, and open that instead. - - This function requires the _imagingft service. - - :param font: A filename or file-like object containing a TrueType font. 
- If the file is not found in this filename, the loader may also - search in other directories, such as the :file:`fonts/` - directory on Windows or :file:`/Library/Fonts/`, - :file:`/System/Library/Fonts/` and :file:`~/Library/Fonts/` on - macOS. - - :param size: The requested size, in pixels. - :param index: Which font face to load (default is first available face). - :param encoding: Which font encoding to use (default is Unicode). Possible - encodings include (see the FreeType documentation for more - information): - - * "unic" (Unicode) - * "symb" (Microsoft Symbol) - * "ADOB" (Adobe Standard) - * "ADBE" (Adobe Expert) - * "ADBC" (Adobe Custom) - * "armn" (Apple Roman) - * "sjis" (Shift JIS) - * "gb " (PRC) - * "big5" - * "wans" (Extended Wansung) - * "joha" (Johab) - * "lat1" (Latin-1) - - This specifies the character set to use. It does not alter the - encoding of any text provided in subsequent operations. - :param layout_engine: Which layout engine to use, if available: - :data:`.ImageFont.Layout.BASIC` or :data:`.ImageFont.Layout.RAQM`. - If it is available, Raqm layout will be used by default. - Otherwise, basic layout will be used. - - Raqm layout is recommended for all non-English text. If Raqm layout - is not required, basic layout will have better performance. - - You can check support for Raqm layout using - :py:func:`PIL.features.check_feature` with ``feature="raqm"``. - - .. versionadded:: 4.2.0 - :return: A font object. - :exception OSError: If the file could not be read. - """ - - def freetype(font): - return FreeTypeFont(font, size, index, encoding, layout_engine) - - try: - return freetype(font) - except OSError: - if not is_path(font): - raise - ttf_filename = os.path.basename(font) - - dirs = [] - if sys.platform == "win32": - # check the windows font repository - # NOTE: must use uppercase WINDIR, to work around bugs in - # 1.5.2's os.environ.get() - windir = os.environ.get("WINDIR") - if windir: - dirs.append(os.path.join(windir, "fonts")) - elif sys.platform in ("linux", "linux2"): - lindirs = os.environ.get("XDG_DATA_DIRS") - if not lindirs: - # According to the freedesktop spec, XDG_DATA_DIRS should - # default to /usr/share - lindirs = "/usr/share" - dirs += [os.path.join(lindir, "fonts") for lindir in lindirs.split(":")] - elif sys.platform == "darwin": - dirs += [ - "/Library/Fonts", - "/System/Library/Fonts", - os.path.expanduser("~/Library/Fonts"), - ] - - ext = os.path.splitext(ttf_filename)[1] - first_font_with_a_different_extension = None - for directory in dirs: - for walkroot, walkdir, walkfilenames in os.walk(directory): - for walkfilename in walkfilenames: - if ext and walkfilename == ttf_filename: - return freetype(os.path.join(walkroot, walkfilename)) - elif not ext and os.path.splitext(walkfilename)[0] == ttf_filename: - fontpath = os.path.join(walkroot, walkfilename) - if os.path.splitext(fontpath)[1] == ".ttf": - return freetype(fontpath) - if not ext and first_font_with_a_different_extension is None: - first_font_with_a_different_extension = fontpath - if first_font_with_a_different_extension: - return freetype(first_font_with_a_different_extension) - raise - - -def load_path(filename): - """ - Load font file. Same as :py:func:`~PIL.ImageFont.load`, but searches for a - bitmap font along the Python path. - - :param filename: Name of font file. - :return: A font object. - :exception OSError: If the file could not be read. 
- """ - for directory in sys.path: - if is_directory(directory): - if not isinstance(filename, str): - filename = filename.decode("utf-8") - try: - return load(os.path.join(directory, filename)) - except OSError: - pass - msg = "cannot find font file" - raise OSError(msg) - - -def load_default(): - """Load a "better than nothing" default font. - - .. versionadded:: 1.1.4 - - :return: A font object. - """ - f = ImageFont() - f._load_pilfont_data( - # courB08 - BytesIO( - base64.b64decode( - b""" -UElMZm9udAo7Ozs7OzsxMDsKREFUQQoAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA -AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA -AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA -AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA -AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA -AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA -AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA -AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA -AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA -AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA -AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA -AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAYAAAAA//8AAQAAAAAAAAABAAEA -BgAAAAH/+gADAAAAAQAAAAMABgAGAAAAAf/6AAT//QADAAAABgADAAYAAAAA//kABQABAAYAAAAL -AAgABgAAAAD/+AAFAAEACwAAABAACQAGAAAAAP/5AAUAAAAQAAAAFQAHAAYAAP////oABQAAABUA -AAAbAAYABgAAAAH/+QAE//wAGwAAAB4AAwAGAAAAAf/5AAQAAQAeAAAAIQAIAAYAAAAB//kABAAB -ACEAAAAkAAgABgAAAAD/+QAE//0AJAAAACgABAAGAAAAAP/6AAX//wAoAAAALQAFAAYAAAAB//8A -BAACAC0AAAAwAAMABgAAAAD//AAF//0AMAAAADUAAQAGAAAAAf//AAMAAAA1AAAANwABAAYAAAAB -//kABQABADcAAAA7AAgABgAAAAD/+QAFAAAAOwAAAEAABwAGAAAAAP/5AAYAAABAAAAARgAHAAYA -AAAA//kABQAAAEYAAABLAAcABgAAAAD/+QAFAAAASwAAAFAABwAGAAAAAP/5AAYAAABQAAAAVgAH -AAYAAAAA//kABQAAAFYAAABbAAcABgAAAAD/+QAFAAAAWwAAAGAABwAGAAAAAP/5AAUAAABgAAAA -ZQAHAAYAAAAA//kABQAAAGUAAABqAAcABgAAAAD/+QAFAAAAagAAAG8ABwAGAAAAAf/8AAMAAABv -AAAAcQAEAAYAAAAA//wAAwACAHEAAAB0AAYABgAAAAD/+gAE//8AdAAAAHgABQAGAAAAAP/7AAT/ -/gB4AAAAfAADAAYAAAAB//oABf//AHwAAACAAAUABgAAAAD/+gAFAAAAgAAAAIUABgAGAAAAAP/5 -AAYAAQCFAAAAiwAIAAYAAP////oABgAAAIsAAACSAAYABgAA////+gAFAAAAkgAAAJgABgAGAAAA -AP/6AAUAAACYAAAAnQAGAAYAAP////oABQAAAJ0AAACjAAYABgAA////+gAFAAAAowAAAKkABgAG -AAD////6AAUAAACpAAAArwAGAAYAAAAA//oABQAAAK8AAAC0AAYABgAA////+gAGAAAAtAAAALsA -BgAGAAAAAP/6AAQAAAC7AAAAvwAGAAYAAP////oABQAAAL8AAADFAAYABgAA////+gAGAAAAxQAA -AMwABgAGAAD////6AAUAAADMAAAA0gAGAAYAAP////oABQAAANIAAADYAAYABgAA////+gAGAAAA -2AAAAN8ABgAGAAAAAP/6AAUAAADfAAAA5AAGAAYAAP////oABQAAAOQAAADqAAYABgAAAAD/+gAF -AAEA6gAAAO8ABwAGAAD////6AAYAAADvAAAA9gAGAAYAAAAA//oABQAAAPYAAAD7AAYABgAA//// -+gAFAAAA+wAAAQEABgAGAAD////6AAYAAAEBAAABCAAGAAYAAP////oABgAAAQgAAAEPAAYABgAA -////+gAGAAABDwAAARYABgAGAAAAAP/6AAYAAAEWAAABHAAGAAYAAP////oABgAAARwAAAEjAAYA -BgAAAAD/+gAFAAABIwAAASgABgAGAAAAAf/5AAQAAQEoAAABKwAIAAYAAAAA//kABAABASsAAAEv -AAgABgAAAAH/+QAEAAEBLwAAATIACAAGAAAAAP/5AAX//AEyAAABNwADAAYAAAAAAAEABgACATcA -AAE9AAEABgAAAAH/+QAE//wBPQAAAUAAAwAGAAAAAP/7AAYAAAFAAAABRgAFAAYAAP////kABQAA -AUYAAAFMAAcABgAAAAD/+wAFAAABTAAAAVEABQAGAAAAAP/5AAYAAAFRAAABVwAHAAYAAAAA//sA -BQAAAVcAAAFcAAUABgAAAAD/+QAFAAABXAAAAWEABwAGAAAAAP/7AAYAAgFhAAABZwAHAAYAAP// -//kABQAAAWcAAAFtAAcABgAAAAD/+QAGAAABbQAAAXMABwAGAAAAAP/5AAQAAgFzAAABdwAJAAYA -AP////kABgAAAXcAAAF+AAcABgAAAAD/+QAGAAABfgAAAYQABwAGAAD////7AAUAAAGEAAABigAF 
-AAYAAP////sABQAAAYoAAAGQAAUABgAAAAD/+wAFAAABkAAAAZUABQAGAAD////7AAUAAgGVAAAB -mwAHAAYAAAAA//sABgACAZsAAAGhAAcABgAAAAD/+wAGAAABoQAAAacABQAGAAAAAP/7AAYAAAGn -AAABrQAFAAYAAAAA//kABgAAAa0AAAGzAAcABgAA////+wAGAAABswAAAboABQAGAAD////7AAUA -AAG6AAABwAAFAAYAAP////sABgAAAcAAAAHHAAUABgAAAAD/+wAGAAABxwAAAc0ABQAGAAD////7 -AAYAAgHNAAAB1AAHAAYAAAAA//sABQAAAdQAAAHZAAUABgAAAAH/+QAFAAEB2QAAAd0ACAAGAAAA -Av/6AAMAAQHdAAAB3gAHAAYAAAAA//kABAABAd4AAAHiAAgABgAAAAD/+wAF//0B4gAAAecAAgAA -AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA -AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA -AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA -AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA -AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA -AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA -AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA -AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA -AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA -AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA -AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA -AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAYAAAAB -//sAAwACAecAAAHpAAcABgAAAAD/+QAFAAEB6QAAAe4ACAAGAAAAAP/5AAYAAAHuAAAB9AAHAAYA -AAAA//oABf//AfQAAAH5AAUABgAAAAD/+QAGAAAB+QAAAf8ABwAGAAAAAv/5AAMAAgH/AAACAAAJ -AAYAAAAA//kABQABAgAAAAIFAAgABgAAAAH/+gAE//sCBQAAAggAAQAGAAAAAP/5AAYAAAIIAAAC -DgAHAAYAAAAB//kABf/+Ag4AAAISAAUABgAA////+wAGAAACEgAAAhkABQAGAAAAAP/7AAX//gIZ -AAACHgADAAYAAAAA//wABf/9Ah4AAAIjAAEABgAAAAD/+QAHAAACIwAAAioABwAGAAAAAP/6AAT/ -+wIqAAACLgABAAYAAAAA//kABP/8Ai4AAAIyAAMABgAAAAD/+gAFAAACMgAAAjcABgAGAAAAAf/5 -AAT//QI3AAACOgAEAAYAAAAB//kABP/9AjoAAAI9AAQABgAAAAL/+QAE//sCPQAAAj8AAgAGAAD/ -///7AAYAAgI/AAACRgAHAAYAAAAA//kABgABAkYAAAJMAAgABgAAAAH//AAD//0CTAAAAk4AAQAG -AAAAAf//AAQAAgJOAAACUQADAAYAAAAB//kABP/9AlEAAAJUAAQABgAAAAH/+QAF//4CVAAAAlgA -BQAGAAD////7AAYAAAJYAAACXwAFAAYAAP////kABgAAAl8AAAJmAAcABgAA////+QAGAAACZgAA -Am0ABwAGAAD////5AAYAAAJtAAACdAAHAAYAAAAA//sABQACAnQAAAJ5AAcABgAA////9wAGAAAC -eQAAAoAACQAGAAD////3AAYAAAKAAAAChwAJAAYAAP////cABgAAAocAAAKOAAkABgAA////9wAG -AAACjgAAApUACQAGAAD////4AAYAAAKVAAACnAAIAAYAAP////cABgAAApwAAAKjAAkABgAA//// -+gAGAAACowAAAqoABgAGAAAAAP/6AAUAAgKqAAACrwAIAAYAAP////cABQAAAq8AAAK1AAkABgAA -////9wAFAAACtQAAArsACQAGAAD////3AAUAAAK7AAACwQAJAAYAAP////gABQAAAsEAAALHAAgA -BgAAAAD/9wAEAAACxwAAAssACQAGAAAAAP/3AAQAAALLAAACzwAJAAYAAAAA//cABAAAAs8AAALT -AAkABgAAAAD/+AAEAAAC0wAAAtcACAAGAAD////6AAUAAALXAAAC3QAGAAYAAP////cABgAAAt0A -AALkAAkABgAAAAD/9wAFAAAC5AAAAukACQAGAAAAAP/3AAUAAALpAAAC7gAJAAYAAAAA//cABQAA -Au4AAALzAAkABgAAAAD/9wAFAAAC8wAAAvgACQAGAAAAAP/4AAUAAAL4AAAC/QAIAAYAAAAA//oA -Bf//Av0AAAMCAAUABgAA////+gAGAAADAgAAAwkABgAGAAD////3AAYAAAMJAAADEAAJAAYAAP// -//cABgAAAxAAAAMXAAkABgAA////9wAGAAADFwAAAx4ACQAGAAD////4AAYAAAAAAAoABwASAAYA -AP////cABgAAAAcACgAOABMABgAA////+gAFAAAADgAKABQAEAAGAAD////6AAYAAAAUAAoAGwAQ -AAYAAAAA//gABgAAABsACgAhABIABgAAAAD/+AAGAAAAIQAKACcAEgAGAAAAAP/4AAYAAAAnAAoA -LQASAAYAAAAA//gABgAAAC0ACgAzABIABgAAAAD/+QAGAAAAMwAKADkAEQAGAAAAAP/3AAYAAAA5 -AAoAPwATAAYAAP////sABQAAAD8ACgBFAA8ABgAAAAD/+wAFAAIARQAKAEoAEQAGAAAAAP/4AAUA -AABKAAoATwASAAYAAAAA//gABQAAAE8ACgBUABIABgAAAAD/+AAFAAAAVAAKAFkAEgAGAAAAAP/5 -AAUAAABZAAoAXgARAAYAAAAA//gABgAAAF4ACgBkABIABgAAAAD/+AAGAAAAZAAKAGoAEgAGAAAA 
-AP/4AAYAAABqAAoAcAASAAYAAAAA//kABgAAAHAACgB2ABEABgAAAAD/+AAFAAAAdgAKAHsAEgAG -AAD////4AAYAAAB7AAoAggASAAYAAAAA//gABQAAAIIACgCHABIABgAAAAD/+AAFAAAAhwAKAIwA -EgAGAAAAAP/4AAUAAACMAAoAkQASAAYAAAAA//gABQAAAJEACgCWABIABgAAAAD/+QAFAAAAlgAK -AJsAEQAGAAAAAP/6AAX//wCbAAoAoAAPAAYAAAAA//oABQABAKAACgClABEABgAA////+AAGAAAA -pQAKAKwAEgAGAAD////4AAYAAACsAAoAswASAAYAAP////gABgAAALMACgC6ABIABgAA////+QAG -AAAAugAKAMEAEQAGAAD////4AAYAAgDBAAoAyAAUAAYAAP////kABQACAMgACgDOABMABgAA//// -+QAGAAIAzgAKANUAEw== -""" - ) - ), - Image.open( - BytesIO( - base64.b64decode( - b""" -iVBORw0KGgoAAAANSUhEUgAAAx4AAAAUAQAAAAArMtZoAAAEwElEQVR4nABlAJr/AHVE4czCI/4u -Mc4b7vuds/xzjz5/3/7u/n9vMe7vnfH/9++vPn/xyf5zhxzjt8GHw8+2d83u8x27199/nxuQ6Od9 -M43/5z2I+9n9ZtmDBwMQECDRQw/eQIQohJXxpBCNVE6QCCAAAAD//wBlAJr/AgALyj1t/wINwq0g -LeNZUworuN1cjTPIzrTX6ofHWeo3v336qPzfEwRmBnHTtf95/fglZK5N0PDgfRTslpGBvz7LFc4F -IUXBWQGjQ5MGCx34EDFPwXiY4YbYxavpnhHFrk14CDAAAAD//wBlAJr/AgKqRooH2gAgPeggvUAA -Bu2WfgPoAwzRAABAAAAAAACQgLz/3Uv4Gv+gX7BJgDeeGP6AAAD1NMDzKHD7ANWr3loYbxsAD791 -NAADfcoIDyP44K/jv4Y63/Z+t98Ovt+ub4T48LAAAAD//wBlAJr/AuplMlADJAAAAGuAphWpqhMx -in0A/fRvAYBABPgBwBUgABBQ/sYAyv9g0bCHgOLoGAAAAAAAREAAwI7nr0ArYpow7aX8//9LaP/9 -SjdavWA8ePHeBIKB//81/83ndznOaXx379wAAAD//wBlAJr/AqDxW+D3AABAAbUh/QMnbQag/gAY -AYDAAACgtgD/gOqAAAB5IA/8AAAk+n9w0AAA8AAAmFRJuPo27ciC0cD5oeW4E7KA/wD3ECMAn2tt -y8PgwH8AfAxFzC0JzeAMtratAsC/ffwAAAD//wBlAJr/BGKAyCAA4AAAAvgeYTAwHd1kmQF5chkG -ABoMIHcL5xVpTfQbUqzlAAAErwAQBgAAEOClA5D9il08AEh/tUzdCBsXkbgACED+woQg8Si9VeqY -lODCn7lmF6NhnAEYgAAA/NMIAAAAAAD//2JgjLZgVGBg5Pv/Tvpc8hwGBjYGJADjHDrAwPzAjv/H -/Wf3PzCwtzcwHmBgYGcwbZz8wHaCAQMDOwMDQ8MCBgYOC3W7mp+f0w+wHOYxO3OG+e376hsMZjk3 -AAAAAP//YmCMY2A4wMAIN5e5gQETPD6AZisDAwMDgzSDAAPjByiHcQMDAwMDg1nOze1lByRu5/47 -c4859311AYNZzg0AAAAA//9iYGDBYihOIIMuwIjGL39/fwffA8b//xv/P2BPtzzHwCBjUQAAAAD/ -/yLFBrIBAAAA//9i1HhcwdhizX7u8NZNzyLbvT97bfrMf/QHI8evOwcSqGUJAAAA//9iYBB81iSw -pEE170Qrg5MIYydHqwdDQRMrAwcVrQAAAAD//2J4x7j9AAMDn8Q/BgYLBoaiAwwMjPdvMDBYM1Tv -oJodAAAAAP//Yqo/83+dxePWlxl3npsel9lvLfPcqlE9725C+acfVLMEAAAA//9i+s9gwCoaaGMR -evta/58PTEWzr21hufPjA8N+qlnBwAAAAAD//2JiWLci5v1+HmFXDqcnULE/MxgYGBj+f6CaJQAA -AAD//2Ji2FrkY3iYpYC5qDeGgeEMAwPDvwQBBoYvcTwOVLMEAAAA//9isDBgkP///0EOg9z35v// -Gc/eeW7BwPj5+QGZhANUswMAAAD//2JgqGBgYGBgqEMXlvhMPUsAAAAA//8iYDd1AAAAAP//AwDR -w7IkEbzhVQAAAABJRU5ErkJggg== -""" - ) - ) - ), - ) - return f diff --git a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/attr/filters.py b/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/attr/filters.py deleted file mode 100644 index a1e40c98db853aa375ab0b24559e0559f91e6152..0000000000000000000000000000000000000000 --- a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/attr/filters.py +++ /dev/null @@ -1,66 +0,0 @@ -# SPDX-License-Identifier: MIT - -""" -Commonly useful filters for `attr.asdict`. -""" - -from ._make import Attribute - - -def _split_what(what): - """ - Returns a tuple of `frozenset`s of classes and attributes. - """ - return ( - frozenset(cls for cls in what if isinstance(cls, type)), - frozenset(cls for cls in what if isinstance(cls, str)), - frozenset(cls for cls in what if isinstance(cls, Attribute)), - ) - - -def include(*what): - """ - Include *what*. - - :param what: What to include. - :type what: `list` of classes `type`, field names `str` or - `attrs.Attribute`\\ s - - :rtype: `callable` - - .. versionchanged:: 23.1.0 Accept strings with field names. 
- """ - cls, names, attrs = _split_what(what) - - def include_(attribute, value): - return ( - value.__class__ in cls - or attribute.name in names - or attribute in attrs - ) - - return include_ - - -def exclude(*what): - """ - Exclude *what*. - - :param what: What to exclude. - :type what: `list` of classes `type`, field names `str` or - `attrs.Attribute`\\ s. - - :rtype: `callable` - - .. versionchanged:: 23.3.0 Accept field name string as input argument - """ - cls, names, attrs = _split_what(what) - - def exclude_(attribute, value): - return not ( - value.__class__ in cls - or attribute.name in names - or attribute in attrs - ) - - return exclude_ diff --git a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/contourpy/util/_build_config.py b/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/contourpy/util/_build_config.py deleted file mode 100644 index f283a4bb81188d85094d0a6addd90e0f6f6e2ec2..0000000000000000000000000000000000000000 --- a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/contourpy/util/_build_config.py +++ /dev/null @@ -1,58 +0,0 @@ -# _build_config.py.in is converted into _build_config.py during the meson build process. - -from __future__ import annotations - - -def build_config() -> dict[str, str]: - """ - Return a dictionary containing build configuration settings. - - All dictionary keys and values are strings, for example ``False`` is - returned as ``"False"``. - """ - return dict( - # Python settings - python_version="3.9", - python_install_dir=r"/usr/local/lib/python3.9/site-packages/", - python_path=r"/private/var/folders/24/8k48jl6d249_n_qfxwsl6xvm0000gn/T/build-env-2akgyfne/bin/python", - - # Package versions - contourpy_version="1.1.1", - meson_version="1.2.1", - mesonpy_version="0.14.0", - pybind11_version="2.11.1", - - # Misc meson settings - meson_backend="ninja", - build_dir=r"/Users/runner/work/contourpy/contourpy/.mesonpy-e1d1r4o1/lib/contourpy/util", - source_dir=r"/Users/runner/work/contourpy/contourpy/lib/contourpy/util", - cross_build="False", - - # Build options - build_options=r"-Dbuildtype=release -Db_ndebug=if-release -Db_vscrt=md -Dvsenv=True --native-file=/Users/runner/work/contourpy/contourpy/.mesonpy-e1d1r4o1/meson-python-native-file.ini", - buildtype="release", - cpp_std="c++17", - debug="False", - optimization="3", - vsenv="True", - b_ndebug="if-release", - b_vscrt="from_buildtype", - - # C++ compiler - compiler_name="clang", - compiler_version="13.0.0", - linker_id="ld64", - compile_command="c++", - - # Host machine - host_cpu="x86_64", - host_cpu_family="x86_64", - host_cpu_endian="little", - host_cpu_system="darwin", - - # Build machine, same as host machine if not a cross_build - build_cpu="x86_64", - build_cpu_family="x86_64", - build_cpu_endian="little", - build_cpu_system="darwin", - ) diff --git a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/dns/tokenizer.py b/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/dns/tokenizer.py deleted file mode 100644 index 454cac4a85e609d3429df45cbdfcb4103bd19213..0000000000000000000000000000000000000000 --- a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/dns/tokenizer.py +++ /dev/null @@ -1,708 +0,0 @@ -# Copyright (C) Dnspython Contributors, see LICENSE for text of ISC license - -# Copyright (C) 2003-2017 Nominum, Inc. 
-# -# Permission to use, copy, modify, and distribute this software and its -# documentation for any purpose with or without fee is hereby granted, -# provided that the above copyright notice and this permission notice -# appear in all copies. -# -# THE SOFTWARE IS PROVIDED "AS IS" AND NOMINUM DISCLAIMS ALL WARRANTIES -# WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF -# MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL NOMINUM BE LIABLE FOR -# ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES -# WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN -# ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT -# OF OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE. - -"""Tokenize DNS zone file format""" - -import io -import sys -from typing import Any, List, Optional, Tuple - -import dns.exception -import dns.name -import dns.ttl - -_DELIMITERS = {" ", "\t", "\n", ";", "(", ")", '"'} -_QUOTING_DELIMITERS = {'"'} - -EOF = 0 -EOL = 1 -WHITESPACE = 2 -IDENTIFIER = 3 -QUOTED_STRING = 4 -COMMENT = 5 -DELIMITER = 6 - - -class UngetBufferFull(dns.exception.DNSException): - """An attempt was made to unget a token when the unget buffer was full.""" - - -class Token: - """A DNS zone file format token. - - ttype: The token type - value: The token value - has_escape: Does the token value contain escapes? - """ - - def __init__( - self, - ttype: int, - value: Any = "", - has_escape: bool = False, - comment: Optional[str] = None, - ): - """Initialize a token instance.""" - - self.ttype = ttype - self.value = value - self.has_escape = has_escape - self.comment = comment - - def is_eof(self) -> bool: - return self.ttype == EOF - - def is_eol(self) -> bool: - return self.ttype == EOL - - def is_whitespace(self) -> bool: - return self.ttype == WHITESPACE - - def is_identifier(self) -> bool: - return self.ttype == IDENTIFIER - - def is_quoted_string(self) -> bool: - return self.ttype == QUOTED_STRING - - def is_comment(self) -> bool: - return self.ttype == COMMENT - - def is_delimiter(self) -> bool: # pragma: no cover (we don't return delimiters yet) - return self.ttype == DELIMITER - - def is_eol_or_eof(self) -> bool: - return self.ttype == EOL or self.ttype == EOF - - def __eq__(self, other): - if not isinstance(other, Token): - return False - return self.ttype == other.ttype and self.value == other.value - - def __ne__(self, other): - if not isinstance(other, Token): - return True - return self.ttype != other.ttype or self.value != other.value - - def __str__(self): - return '%d "%s"' % (self.ttype, self.value) - - def unescape(self) -> "Token": - if not self.has_escape: - return self - unescaped = "" - l = len(self.value) - i = 0 - while i < l: - c = self.value[i] - i += 1 - if c == "\\": - if i >= l: # pragma: no cover (can't happen via get()) - raise dns.exception.UnexpectedEnd - c = self.value[i] - i += 1 - if c.isdigit(): - if i >= l: - raise dns.exception.UnexpectedEnd - c2 = self.value[i] - i += 1 - if i >= l: - raise dns.exception.UnexpectedEnd - c3 = self.value[i] - i += 1 - if not (c2.isdigit() and c3.isdigit()): - raise dns.exception.SyntaxError - codepoint = int(c) * 100 + int(c2) * 10 + int(c3) - if codepoint > 255: - raise dns.exception.SyntaxError - c = chr(codepoint) - unescaped += c - return Token(self.ttype, unescaped) - - def unescape_to_bytes(self) -> "Token": - # We used to use unescape() for TXT-like records, but this - # caused problems as we'd process DNS escapes into Unicode code - # points instead of byte 
values, and then a to_text() of the - # processed data would not equal the original input. For - # example, \226 in the TXT record would have a to_text() of - # \195\162 because we applied UTF-8 encoding to Unicode code - # point 226. - # - # We now apply escapes while converting directly to bytes, - # avoiding this double encoding. - # - # This code also handles cases where the unicode input has - # non-ASCII code-points in it by converting it to UTF-8. TXT - # records aren't defined for Unicode, but this is the best we - # can do to preserve meaning. For example, - # - # foo\u200bbar - # - # (where \u200b is Unicode code point 0x200b) will be treated - # as if the input had been the UTF-8 encoding of that string, - # namely: - # - # foo\226\128\139bar - # - unescaped = b"" - l = len(self.value) - i = 0 - while i < l: - c = self.value[i] - i += 1 - if c == "\\": - if i >= l: # pragma: no cover (can't happen via get()) - raise dns.exception.UnexpectedEnd - c = self.value[i] - i += 1 - if c.isdigit(): - if i >= l: - raise dns.exception.UnexpectedEnd - c2 = self.value[i] - i += 1 - if i >= l: - raise dns.exception.UnexpectedEnd - c3 = self.value[i] - i += 1 - if not (c2.isdigit() and c3.isdigit()): - raise dns.exception.SyntaxError - codepoint = int(c) * 100 + int(c2) * 10 + int(c3) - if codepoint > 255: - raise dns.exception.SyntaxError - unescaped += b"%c" % (codepoint) - else: - # Note that as mentioned above, if c is a Unicode - # code point outside of the ASCII range, then this - # += is converting that code point to its UTF-8 - # encoding and appending multiple bytes to - # unescaped. - unescaped += c.encode() - else: - unescaped += c.encode() - return Token(self.ttype, bytes(unescaped)) - - -class Tokenizer: - """A DNS zone file format tokenizer. - - A token object is basically a (type, value) tuple. The valid - types are EOF, EOL, WHITESPACE, IDENTIFIER, QUOTED_STRING, - COMMENT, and DELIMITER. - - file: The file to tokenize - - ungotten_char: The most recently ungotten character, or None. - - ungotten_token: The most recently ungotten token, or None. - - multiline: The current multiline level. This value is increased - by one every time a '(' delimiter is read, and decreased by one every time - a ')' delimiter is read. - - quoting: This variable is true if the tokenizer is currently - reading a quoted string. - - eof: This variable is true if the tokenizer has encountered EOF. - - delimiters: The current delimiter dictionary. - - line_number: The current line number - - filename: A filename that will be returned by the where() method. - - idna_codec: A dns.name.IDNACodec, specifies the IDNA - encoder/decoder. If None, the default IDNA 2003 - encoder/decoder is used. - """ - - def __init__( - self, - f: Any = sys.stdin, - filename: Optional[str] = None, - idna_codec: Optional[dns.name.IDNACodec] = None, - ): - """Initialize a tokenizer instance. - - f: The file to tokenize. The default is sys.stdin. - This parameter may also be a string, in which case the tokenizer - will take its input from the contents of the string. - - filename: the name of the filename that the where() method - will return. - - idna_codec: A dns.name.IDNACodec, specifies the IDNA - encoder/decoder. If None, the default IDNA 2003 - encoder/decoder is used. 
- """ - - if isinstance(f, str): - f = io.StringIO(f) - if filename is None: - filename = "" - elif isinstance(f, bytes): - f = io.StringIO(f.decode()) - if filename is None: - filename = "" - else: - if filename is None: - if f is sys.stdin: - filename = "" - else: - filename = "" - self.file = f - self.ungotten_char: Optional[str] = None - self.ungotten_token: Optional[Token] = None - self.multiline = 0 - self.quoting = False - self.eof = False - self.delimiters = _DELIMITERS - self.line_number = 1 - assert filename is not None - self.filename = filename - if idna_codec is None: - self.idna_codec: dns.name.IDNACodec = dns.name.IDNA_2003 - else: - self.idna_codec = idna_codec - - def _get_char(self) -> str: - """Read a character from input.""" - - if self.ungotten_char is None: - if self.eof: - c = "" - else: - c = self.file.read(1) - if c == "": - self.eof = True - elif c == "\n": - self.line_number += 1 - else: - c = self.ungotten_char - self.ungotten_char = None - return c - - def where(self) -> Tuple[str, int]: - """Return the current location in the input. - - Returns a (string, int) tuple. The first item is the filename of - the input, the second is the current line number. - """ - - return (self.filename, self.line_number) - - def _unget_char(self, c: str) -> None: - """Unget a character. - - The unget buffer for characters is only one character large; it is - an error to try to unget a character when the unget buffer is not - empty. - - c: the character to unget - raises UngetBufferFull: there is already an ungotten char - """ - - if self.ungotten_char is not None: - # this should never happen! - raise UngetBufferFull # pragma: no cover - self.ungotten_char = c - - def skip_whitespace(self) -> int: - """Consume input until a non-whitespace character is encountered. - - The non-whitespace character is then ungotten, and the number of - whitespace characters consumed is returned. - - If the tokenizer is in multiline mode, then newlines are whitespace. - - Returns the number of characters skipped. - """ - - skipped = 0 - while True: - c = self._get_char() - if c != " " and c != "\t": - if (c != "\n") or not self.multiline: - self._unget_char(c) - return skipped - skipped += 1 - - def get(self, want_leading: bool = False, want_comment: bool = False) -> Token: - """Get the next token. - - want_leading: If True, return a WHITESPACE token if the - first character read is whitespace. The default is False. - - want_comment: If True, return a COMMENT token if the - first token read is a comment. The default is False. - - Raises dns.exception.UnexpectedEnd: input ended prematurely - - Raises dns.exception.SyntaxError: input was badly formed - - Returns a Token. 
- """ - - if self.ungotten_token is not None: - utoken = self.ungotten_token - self.ungotten_token = None - if utoken.is_whitespace(): - if want_leading: - return utoken - elif utoken.is_comment(): - if want_comment: - return utoken - else: - return utoken - skipped = self.skip_whitespace() - if want_leading and skipped > 0: - return Token(WHITESPACE, " ") - token = "" - ttype = IDENTIFIER - has_escape = False - while True: - c = self._get_char() - if c == "" or c in self.delimiters: - if c == "" and self.quoting: - raise dns.exception.UnexpectedEnd - if token == "" and ttype != QUOTED_STRING: - if c == "(": - self.multiline += 1 - self.skip_whitespace() - continue - elif c == ")": - if self.multiline <= 0: - raise dns.exception.SyntaxError - self.multiline -= 1 - self.skip_whitespace() - continue - elif c == '"': - if not self.quoting: - self.quoting = True - self.delimiters = _QUOTING_DELIMITERS - ttype = QUOTED_STRING - continue - else: - self.quoting = False - self.delimiters = _DELIMITERS - self.skip_whitespace() - continue - elif c == "\n": - return Token(EOL, "\n") - elif c == ";": - while 1: - c = self._get_char() - if c == "\n" or c == "": - break - token += c - if want_comment: - self._unget_char(c) - return Token(COMMENT, token) - elif c == "": - if self.multiline: - raise dns.exception.SyntaxError( - "unbalanced parentheses" - ) - return Token(EOF, comment=token) - elif self.multiline: - self.skip_whitespace() - token = "" - continue - else: - return Token(EOL, "\n", comment=token) - else: - # This code exists in case we ever want a - # delimiter to be returned. It never produces - # a token currently. - token = c - ttype = DELIMITER - else: - self._unget_char(c) - break - elif self.quoting and c == "\n": - raise dns.exception.SyntaxError("newline in quoted string") - elif c == "\\": - # - # It's an escape. Put it and the next character into - # the token; it will be checked later for goodness. - # - token += c - has_escape = True - c = self._get_char() - if c == "" or (c == "\n" and not self.quoting): - raise dns.exception.UnexpectedEnd - token += c - if token == "" and ttype != QUOTED_STRING: - if self.multiline: - raise dns.exception.SyntaxError("unbalanced parentheses") - ttype = EOF - return Token(ttype, token, has_escape) - - def unget(self, token: Token) -> None: - """Unget a token. - - The unget buffer for tokens is only one token large; it is - an error to try to unget a token when the unget buffer is not - empty. - - token: the token to unget - - Raises UngetBufferFull: there is already an ungotten token - """ - - if self.ungotten_token is not None: - raise UngetBufferFull - self.ungotten_token = token - - def next(self): - """Return the next item in an iteration. - - Returns a Token. - """ - - token = self.get() - if token.is_eof(): - raise StopIteration - return token - - __next__ = next - - def __iter__(self): - return self - - # Helpers - - def get_int(self, base: int = 10) -> int: - """Read the next token and interpret it as an unsigned integer. - - Raises dns.exception.SyntaxError if not an unsigned integer. - - Returns an int. - """ - - token = self.get().unescape() - if not token.is_identifier(): - raise dns.exception.SyntaxError("expecting an identifier") - if not token.value.isdigit(): - raise dns.exception.SyntaxError("expecting an integer") - return int(token.value, base) - - def get_uint8(self) -> int: - """Read the next token and interpret it as an 8-bit unsigned - integer. - - Raises dns.exception.SyntaxError if not an 8-bit unsigned integer. 
- - Returns an int. - """ - - value = self.get_int() - if value < 0 or value > 255: - raise dns.exception.SyntaxError( - "%d is not an unsigned 8-bit integer" % value - ) - return value - - def get_uint16(self, base: int = 10) -> int: - """Read the next token and interpret it as a 16-bit unsigned - integer. - - Raises dns.exception.SyntaxError if not a 16-bit unsigned integer. - - Returns an int. - """ - - value = self.get_int(base=base) - if value < 0 or value > 65535: - if base == 8: - raise dns.exception.SyntaxError( - "%o is not an octal unsigned 16-bit integer" % value - ) - else: - raise dns.exception.SyntaxError( - "%d is not an unsigned 16-bit integer" % value - ) - return value - - def get_uint32(self, base: int = 10) -> int: - """Read the next token and interpret it as a 32-bit unsigned - integer. - - Raises dns.exception.SyntaxError if not a 32-bit unsigned integer. - - Returns an int. - """ - - value = self.get_int(base=base) - if value < 0 or value > 4294967295: - raise dns.exception.SyntaxError( - "%d is not an unsigned 32-bit integer" % value - ) - return value - - def get_uint48(self, base: int = 10) -> int: - """Read the next token and interpret it as a 48-bit unsigned - integer. - - Raises dns.exception.SyntaxError if not a 48-bit unsigned integer. - - Returns an int. - """ - - value = self.get_int(base=base) - if value < 0 or value > 281474976710655: - raise dns.exception.SyntaxError( - "%d is not an unsigned 48-bit integer" % value - ) - return value - - def get_string(self, max_length: Optional[int] = None) -> str: - """Read the next token and interpret it as a string. - - Raises dns.exception.SyntaxError if not a string. - Raises dns.exception.SyntaxError if token value length - exceeds max_length (if specified). - - Returns a string. - """ - - token = self.get().unescape() - if not (token.is_identifier() or token.is_quoted_string()): - raise dns.exception.SyntaxError("expecting a string") - if max_length and len(token.value) > max_length: - raise dns.exception.SyntaxError("string too long") - return token.value - - def get_identifier(self) -> str: - """Read the next token, which should be an identifier. - - Raises dns.exception.SyntaxError if not an identifier. - - Returns a string. - """ - - token = self.get().unescape() - if not token.is_identifier(): - raise dns.exception.SyntaxError("expecting an identifier") - return token.value - - def get_remaining(self, max_tokens: Optional[int] = None) -> List[Token]: - """Return the remaining tokens on the line, until an EOL or EOF is seen. - - max_tokens: If not None, stop after this number of tokens. - - Returns a list of tokens. - """ - - tokens = [] - while True: - token = self.get() - if token.is_eol_or_eof(): - self.unget(token) - break - tokens.append(token) - if len(tokens) == max_tokens: - break - return tokens - - def concatenate_remaining_identifiers(self, allow_empty: bool = False) -> str: - """Read the remaining tokens on the line, which should be identifiers. - - Raises dns.exception.SyntaxError if there are no remaining tokens, - unless `allow_empty=True` is given. - - Raises dns.exception.SyntaxError if a token is seen that is not an - identifier. - - Returns a string containing a concatenation of the remaining - identifiers. 
- """ - s = "" - while True: - token = self.get().unescape() - if token.is_eol_or_eof(): - self.unget(token) - break - if not token.is_identifier(): - raise dns.exception.SyntaxError - s += token.value - if not (allow_empty or s): - raise dns.exception.SyntaxError("expecting another identifier") - return s - - def as_name( - self, - token: Token, - origin: Optional[dns.name.Name] = None, - relativize: bool = False, - relativize_to: Optional[dns.name.Name] = None, - ) -> dns.name.Name: - """Try to interpret the token as a DNS name. - - Raises dns.exception.SyntaxError if not a name. - - Returns a dns.name.Name. - """ - if not token.is_identifier(): - raise dns.exception.SyntaxError("expecting an identifier") - name = dns.name.from_text(token.value, origin, self.idna_codec) - return name.choose_relativity(relativize_to or origin, relativize) - - def get_name( - self, - origin: Optional[dns.name.Name] = None, - relativize: bool = False, - relativize_to: Optional[dns.name.Name] = None, - ) -> dns.name.Name: - """Read the next token and interpret it as a DNS name. - - Raises dns.exception.SyntaxError if not a name. - - Returns a dns.name.Name. - """ - - token = self.get() - return self.as_name(token, origin, relativize, relativize_to) - - def get_eol_as_token(self) -> Token: - """Read the next token and raise an exception if it isn't EOL or - EOF. - - Returns a string. - """ - - token = self.get() - if not token.is_eol_or_eof(): - raise dns.exception.SyntaxError( - 'expected EOL or EOF, got %d "%s"' % (token.ttype, token.value) - ) - return token - - def get_eol(self) -> str: - return self.get_eol_as_token().value - - def get_ttl(self) -> int: - """Read the next token and interpret it as a DNS TTL. - - Raises dns.exception.SyntaxError or dns.ttl.BadTTL if not an - identifier or badly formed. - - Returns an int. - """ - - token = self.get().unescape() - if not token.is_identifier(): - raise dns.exception.SyntaxError("expecting an identifier") - return dns.ttl.from_text(token.value) diff --git a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/gradio/components/label.py b/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/gradio/components/label.py deleted file mode 100644 index f6e965b0a3f819ebadcbee490bb926245c96fd81..0000000000000000000000000000000000000000 --- a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/gradio/components/label.py +++ /dev/null @@ -1,177 +0,0 @@ -"""gr.Label() component.""" - -from __future__ import annotations - -import operator -import warnings -from pathlib import Path -from typing import Callable, Literal - -from gradio_client.documentation import document, set_documentation_group -from gradio_client.serializing import ( - JSONSerializable, -) - -from gradio.components.base import IOComponent, _Keywords -from gradio.deprecation import warn_style_method_deprecation -from gradio.events import ( - Changeable, - EventListenerMethod, - Selectable, -) - -set_documentation_group("component") - - -@document() -class Label(Changeable, Selectable, IOComponent, JSONSerializable): - """ - Displays a classification label, along with confidence scores of top categories, if provided. - Preprocessing: this component does *not* accept input. - Postprocessing: expects a {Dict[str, float]} of classes and confidences, or {str} with just the class or an {int}/{float} for regression outputs, or a {str} path to a .json file containing a json dictionary in the structure produced by Label.postprocess(). 
- - Demos: main_note, titanic_survival - Guides: image-classification-in-pytorch, image-classification-in-tensorflow, image-classification-with-vision-transformers, building-a-pictionary-app - """ - - CONFIDENCES_KEY = "confidences" - - def __init__( - self, - value: dict[str, float] | str | float | Callable | None = None, - *, - num_top_classes: int | None = None, - label: str | None = None, - every: float | None = None, - show_label: bool | None = None, - container: bool = True, - scale: int | None = None, - min_width: int = 160, - visible: bool = True, - elem_id: str | None = None, - elem_classes: list[str] | str | None = None, - color: str | None = None, - **kwargs, - ): - """ - Parameters: - value: Default value to show in the component. If a str or number is provided, simply displays the string or number. If a {Dict[str, float]} of classes and confidences is provided, displays the top class on top and the `num_top_classes` below, along with their confidence bars. If callable, the function will be called whenever the app loads to set the initial value of the component. - num_top_classes: number of most confident classes to show. - label: component name in interface. - every: If `value` is a callable, run the function 'every' number of seconds while the client connection is open. Has no effect otherwise. Queue must be enabled. The event can be accessed (e.g. to cancel it) via this component's .load_event attribute. - show_label: if True, will display label. - container: If True, will place the component in a container - providing some extra padding around the border. - scale: relative width compared to adjacent Components in a Row. For example, if Component A has scale=2, and Component B has scale=1, A will be twice as wide as B. Should be an integer. - min_width: minimum pixel width, will wrap if not sufficient screen space to satisfy this value. If a certain scale value results in this Component being narrower than min_width, the min_width parameter will be respected first. - visible: If False, component will be hidden. - elem_id: An optional string that is assigned as the id of this component in the HTML DOM. Can be used for targeting CSS styles. - elem_classes: An optional list of strings that are assigned as the classes of this component in the HTML DOM. Can be used for targeting CSS styles. - color: The background color of the label (either a valid css color name or hexadecimal string). - """ - self.num_top_classes = num_top_classes - self.color = color - self.select: EventListenerMethod - """ - Event listener for when the user selects a category from Label. - Uses event data gradio.SelectData to carry `value` referring to name of selected category, and `index` to refer to index. - See EventData documentation on how to use this event data. 
- """ - IOComponent.__init__( - self, - label=label, - every=every, - show_label=show_label, - container=container, - scale=scale, - min_width=min_width, - visible=visible, - elem_id=elem_id, - elem_classes=elem_classes, - value=value, - **kwargs, - ) - - def postprocess(self, y: dict[str, float] | str | float | None) -> dict | None: - """ - Parameters: - y: a dictionary mapping labels to confidence value, or just a string/numerical label by itself - Returns: - Object with key 'label' representing primary label, and key 'confidences' representing a list of label-confidence pairs - """ - if y is None or y == {}: - return {} - if isinstance(y, str) and y.endswith(".json") and Path(y).exists(): - return self.serialize(y) - if isinstance(y, (str, float, int)): - return {"label": str(y)} - if isinstance(y, dict): - if "confidences" in y and isinstance(y["confidences"], dict): - y = y["confidences"] - y = {c["label"]: c["confidence"] for c in y} - sorted_pred = sorted(y.items(), key=operator.itemgetter(1), reverse=True) - if self.num_top_classes is not None: - sorted_pred = sorted_pred[: self.num_top_classes] - return { - "label": sorted_pred[0][0], - "confidences": [ - {"label": pred[0], "confidence": pred[1]} for pred in sorted_pred - ], - } - raise ValueError( - "The `Label` output interface expects one of: a string label, or an int label, a " - "float label, or a dictionary whose keys are labels and values are confidences. " - f"Instead, got a {type(y)}" - ) - - @staticmethod - def update( - value: dict[str, float] - | str - | float - | Literal[_Keywords.NO_VALUE] - | None = _Keywords.NO_VALUE, - label: str | None = None, - show_label: bool | None = None, - container: bool | None = None, - scale: int | None = None, - min_width: int | None = None, - visible: bool | None = None, - color: str | Literal[_Keywords.NO_VALUE] | None = _Keywords.NO_VALUE, - ): - warnings.warn( - "Using the update method is deprecated. Simply return a new object instead, e.g. `return gr.Label(...)` instead of `return gr.Label.update(...)`." - ) - # If color is not specified (NO_VALUE) map it to None so that - # it gets filtered out in postprocess. This will mean the color - # will not be updated in the front-end - if color is _Keywords.NO_VALUE: - color = None - # If the color was specified by the developer as None - # Map is so that the color is updated to be transparent, - # e.g. no background default state. - elif color is None: - color = "transparent" - return { - "label": label, - "show_label": show_label, - "container": container, - "scale": scale, - "min_width": min_width, - "visible": visible, - "value": value, - "color": color, - "__type__": "update", - } - - def style( - self, - *, - container: bool | None = None, - ): - """ - This method is deprecated. Please set these arguments in the constructor instead. 
- """ - warn_style_method_deprecation() - if container is not None: - self.container = container - return self diff --git a/spaces/joe-aquino/keras_pretty_face/README.md b/spaces/joe-aquino/keras_pretty_face/README.md deleted file mode 100644 index 0362e651bed96add1a9ceb6e476c14fd64549a2d..0000000000000000000000000000000000000000 --- a/spaces/joe-aquino/keras_pretty_face/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Keras Pretty Face -emoji: 🌍 -colorFrom: gray -colorTo: red -sdk: gradio -sdk_version: 3.19.1 -app_file: app.py -pinned: false -license: unknown ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/johnslegers/bilingual_stable_diffusion/css_and_js.py b/spaces/johnslegers/bilingual_stable_diffusion/css_and_js.py deleted file mode 100644 index 64e6dd5e703281d0b11e7a9ef7f05a264fb2341c..0000000000000000000000000000000000000000 --- a/spaces/johnslegers/bilingual_stable_diffusion/css_and_js.py +++ /dev/null @@ -1,92 +0,0 @@ -from os import path -import json - - -def readTextFile(*args): - dir = path.dirname(__file__) - entry = path.join(dir, *args) - with open(entry, "r", encoding="utf8") as f: - data = f.read() - return data - - -def css(opt): - styling = readTextFile("css", "styles.css") - # TODO: @altryne restore this before merge - if not opt.no_progressbar_hiding: - styling += readTextFile("css", "no_progress_bar.css") - return styling - - -def js(opt): - data = readTextFile("js", "index.js") - data = "(z) => {" + data + "; return z ?? [] }" - return data - - -# TODO : @altryne fix this to the new JS format -js_copy_txt2img_output = "(x) => {navigator.clipboard.writeText(document.querySelector('gradio-app').shadowRoot.querySelector('#highlight .textfield').textContent.replace(/\s+/g,' ').replace(/: /g,':'))}" - - - -js_parse_prompt =""" -(txt2img_prompt, txt2img_width, txt2img_height, txt2img_steps, txt2img_seed, txt2img_batch_count, txt2img_cfg) => { - -const prompt_input = document.querySelector('gradio-app').shadowRoot.querySelector('#prompt_input [data-testid="textbox"]'); -const multiline = document.querySelector('gradio-app').shadowRoot.querySelector('#submit_on_enter label:nth-child(2)') -if (prompt_input.scrollWidth > prompt_input.clientWidth + 10 ) { - multiline.click(); -} - - -let height_match = /(?:-h|-H|--height|height)[ :]?(?\d+) /.exec(txt2img_prompt); -if (height_match) { - txt2img_height = Math.round(height_match.groups.height / 64) * 64; - txt2img_prompt = txt2img_prompt.replace(height_match[0], ''); -} -let width_match = /(?:-w|-W|--width|width)[ :]?(?\d+) /.exec(txt2img_prompt); -if (width_match) { - txt2img_width = Math.round(width_match.groups.width / 64) * 64; - txt2img_prompt = txt2img_prompt.replace(width_match[0], ''); -} -let steps_match = /(?:-s|--steps|steps)[ :]?(?\d+) /.exec(txt2img_prompt); -if (steps_match) { - txt2img_steps = steps_match.groups.steps.trim(); - txt2img_prompt = txt2img_prompt.replace(steps_match[0], ''); -} -let seed_match = /(?:-S|--seed|seed)[ :]?(?\d+) /.exec(txt2img_prompt); -if (seed_match) { - txt2img_seed = seed_match.groups.seed; - txt2img_prompt = txt2img_prompt.replace(seed_match[0], ''); -} -let batch_count_match = /(?:-n|-N|--number|number)[ :]?(?\d+) /.exec(txt2img_prompt); -if (batch_count_match) { - txt2img_batch_count = batch_count_match.groups.batch_count; - txt2img_prompt = txt2img_prompt.replace(batch_count_match[0], ''); -} -let cfg_scale_match = /(?:-c|-C|--cfg-scale|cfg_scale|cfg)[ :]?(?\d\.?\d+?) 
/.exec(txt2img_prompt); -if (cfg_scale_match) { - txt2img_cfg = parseFloat(cfg_scale_match.groups.cfgscale).toFixed(1); - txt2img_prompt = txt2img_prompt.replace(cfg_scale_match[0], ''); -} -let sampler_match = /(?:-A|--sampler|sampler)[ :]?(?\w+) /.exec(txt2img_prompt); -if (sampler_match) { - - txt2img_prompt = txt2img_prompt.replace(sampler_match[0], ''); -} - -return [txt2img_prompt, parseInt(txt2img_width), parseInt(txt2img_height), parseInt(txt2img_steps), txt2img_seed, parseInt(txt2img_batch_count), parseFloat(txt2img_cfg)]; -} -""" - - -# Wrap the typical SD method call into async closure for ease of use -# Supplies the js function with a params object -# That includes all the passed arguments and input from Gradio: x -# ATTENTION: x is an array of values of all components passed to your -# python event handler -# Example call in Gradio component's event handler (pass the result to _js arg): -# _js=call_JS("myJsMethod", arg1="string", arg2=100, arg3=[]) -def call_JS(sd_method, **kwargs): - param_str = json.dumps(kwargs) - return f"async (...x) => {{ return await SD.{sd_method}({{ x, ...{param_str} }}) ?? []; }}" diff --git a/spaces/juancopi81/whisper-youtube-2-hf_dataset/loading/loaderiterator.py b/spaces/juancopi81/whisper-youtube-2-hf_dataset/loading/loaderiterator.py deleted file mode 100644 index 794278277d63f5f5f219832c41ceebf9f0c2aedb..0000000000000000000000000000000000000000 --- a/spaces/juancopi81/whisper-youtube-2-hf_dataset/loading/loaderiterator.py +++ /dev/null @@ -1,46 +0,0 @@ -from pathlib import Path -from typing import List, Dict, Optional - -from loading.serialization import Serializer - -class LoaderIterator: - """Iterator that loads data from multiple files in batches.""" - - def __init__(self, - serializer: Serializer, - num_files_per_iteration: int, - load_paths: Optional[List[Path]] = None) -> None: - self.serializer = serializer - self.num_files_per_iteration = num_files_per_iteration - self._load_paths = load_paths - self._current_iteration = None - - @property - def load_paths(self) -> Optional[List[Path]]: - return self._load_paths - - @load_paths.setter - def load_paths(self, load_paths: List[Path]) -> None: - self._load_paths = load_paths - - def __iter__(self): - self._current_iteration = 0 - return self - - def __next__(self) -> List[Dict]: - if self._did_load_all_batches(): - raise StopIteration - data_batch = self._load_data_batch() - self._current_iteration += 1 - return data_batch - - def _did_load_all_batches(self) -> bool: - if self._current_iteration >= len(self._load_paths) / self.num_files_per_iteration: - return True - return False - - def _load_data_batch(self) -> List[Dict]: - start_index = self._current_iteration * self.num_files_per_iteration - stop_index = start_index + self.num_files_per_iteration - return [self.serializer.load(load_path) for load_path in - self._load_paths[start_index:stop_index] if load_path.exists()] \ No newline at end of file diff --git a/spaces/juanhuggingface/ChuanhuChatGPT_Beta/modules/models/configuration_moss.py b/spaces/juanhuggingface/ChuanhuChatGPT_Beta/modules/models/configuration_moss.py deleted file mode 100644 index 9bad4396ecea6578c1628732d0ef077d8964d45d..0000000000000000000000000000000000000000 --- a/spaces/juanhuggingface/ChuanhuChatGPT_Beta/modules/models/configuration_moss.py +++ /dev/null @@ -1,118 +0,0 @@ -""" Moss model configuration""" - -from transformers.utils import logging -from transformers.configuration_utils import PretrainedConfig - - -logger = logging.get_logger(__name__) - - -class 
MossConfig(PretrainedConfig): - r""" - This is the configuration class to store the configuration of a [`MossModel`]. It is used to instantiate a - Moss model according to the specified arguments, defining the model architecture. Instantiating a configuration - with the defaults will yield a similar configuration to that of the Moss - [fnlp/moss-moon-003-base](https://huggingface.co/fnlp/moss-moon-003-base) architecture. Configuration objects - inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the documentation from - [`PretrainedConfig`] for more information. - - Args: - vocab_size (`int`, *optional*, defaults to 107008): - Vocabulary size of the Moss model. Defines the number of different tokens that can be represented by the - `inputs_ids` passed when calling [`MossModel`]. - n_positions (`int`, *optional*, defaults to 2048): - The maximum sequence length that this model might ever be used with. Typically set this to something large - just in case (e.g., 512 or 1024 or 2048). - n_embd (`int`, *optional*, defaults to 4096): - Dimensionality of the embeddings and hidden states. - n_layer (`int`, *optional*, defaults to 28): - Number of hidden layers in the Transformer encoder. - n_head (`int`, *optional*, defaults to 16): - Number of attention heads for each attention layer in the Transformer encoder. - rotary_dim (`int`, *optional*, defaults to 64): - Number of dimensions in the embedding that Rotary Position Embedding is applied to. - n_inner (`int`, *optional*, defaults to None): - Dimensionality of the inner feed-forward layers. `None` will set it to 4 times n_embd - activation_function (`str`, *optional*, defaults to `"gelu_new"`): - Activation function, to be selected in the list `["relu", "silu", "gelu", "tanh", "gelu_new"]`. - resid_pdrop (`float`, *optional*, defaults to 0.1): - The dropout probability for all fully connected layers in the embeddings, encoder, and pooler. - embd_pdrop (`int`, *optional*, defaults to 0.1): - The dropout ratio for the embeddings. - attn_pdrop (`float`, *optional*, defaults to 0.1): - The dropout ratio for the attention. - layer_norm_epsilon (`float`, *optional*, defaults to 1e-5): - The epsilon to use in the layer normalization layers. - initializer_range (`float`, *optional*, defaults to 0.02): - The standard deviation of the truncated_normal_initializer for initializing all weight matrices. - use_cache (`bool`, *optional*, defaults to `True`): - Whether or not the model should return the last key/values attentions (not used by all models). 
- - Example: - - ```python - >>> from modeling_moss import MossModel - >>> from configuration_moss import MossConfig - - >>> # Initializing a moss-moon-003-base configuration - >>> configuration = MossConfig() - - >>> # Initializing a model (with random weights) from the configuration - >>> model = MossModel(configuration) - - >>> # Accessing the model configuration - >>> configuration = model.config - ```""" - - model_type = "moss" - attribute_map = { - "max_position_embeddings": "n_positions", - "hidden_size": "n_embd", - "num_attention_heads": "n_head", - "num_hidden_layers": "n_layer", - } - - def __init__( - self, - vocab_size=107008, - n_positions=2048, - n_ctx=2048, - n_embd=4096, - n_layer=28, - n_head=16, - rotary_dim=64, - n_inner=None, - activation_function="gelu_new", - resid_pdrop=0.0, - embd_pdrop=0.0, - attn_pdrop=0.0, - layer_norm_epsilon=1e-5, - initializer_range=0.02, - use_cache=True, - bos_token_id=106028, - eos_token_id=106068, - tie_word_embeddings=False, - **kwargs, - ): - self.vocab_size = vocab_size - self.n_ctx = n_ctx - self.n_positions = n_positions - self.n_embd = n_embd - self.n_layer = n_layer - self.n_head = n_head - self.n_inner = n_inner - self.rotary_dim = rotary_dim - self.activation_function = activation_function - self.resid_pdrop = resid_pdrop - self.embd_pdrop = embd_pdrop - self.attn_pdrop = attn_pdrop - self.layer_norm_epsilon = layer_norm_epsilon - self.initializer_range = initializer_range - self.use_cache = use_cache - - self.bos_token_id = bos_token_id - self.eos_token_id = eos_token_id - - super().__init__( - bos_token_id=bos_token_id, eos_token_id=eos_token_id, tie_word_embeddings=tie_word_embeddings, **kwargs - ) diff --git a/spaces/kangvcar/RealChar/client/web/src/components/Auth/SignIn.js b/spaces/kangvcar/RealChar/client/web/src/components/Auth/SignIn.js deleted file mode 100644 index e4219aeedde3102bac97fc5f9ad9f384da993cc2..0000000000000000000000000000000000000000 --- a/spaces/kangvcar/RealChar/client/web/src/components/Auth/SignIn.js +++ /dev/null @@ -1,91 +0,0 @@ -/** - * src/components/Auth/SignIn.jsx - * signin and signup with google account - * - * created by Lynchee on 7/20/23 - */ - -import React, { useState } from 'react'; -import auth from '../../utils/firebase'; -import { signInWithPopup, GoogleAuthProvider } from "firebase/auth"; -import './styles.css'; - -export const sendTokenToServer = async (token) => { - // Send token to server - const scheme = window.location.protocol; - var currentHost = window.location.host; - var parts = currentHost.split(':'); - var ipAddress = parts[0]; - var newPort = '8000'; - var newHost = ipAddress + ':' + newPort; - const url = scheme + '//' + newHost; - - try { - const response = await fetch(url, { - method: 'POST', - headers: { - 'Content-Type': 'application/json', - 'Authorization': `Bearer ${token}` - } - }); - - if (!response.ok) { - console.error("Sent token failed"); - } - } catch (error) { - console.error("Sent token failed. ", error); - } -} - -export const signInWithGoogle = async (isLoggedIn, setToken) => { - const provider = new GoogleAuthProvider(); - return signInWithPopup(auth, provider) // Return the promise here - .then(async (result) => { - // This gives you a Google Access Token. You can use it to access the Google API. - const credential = GoogleAuthProvider.credentialFromResult(result); - const token = await auth.currentUser.getIdToken(); - - // The signed-in user info. 
- const user = result.user; - isLoggedIn.current = true; - setToken(token); - await sendTokenToServer(token); - - console.log("Sign-in successfully"); - }).catch((error) => { - // Handle Errors here. - const errorCode = error.code; - const errorMessage = error.message; - console.error(`Error occurred during sign in. Code: ${errorCode}, Message: ${errorMessage}`); - // The email of the user's account used. - const email = error.customData.email; - // The AuthCredential type that was used. - const credential = GoogleAuthProvider.credentialFromError(error); - isLoggedIn.current = false; - }); -} - -const SignIn = ({ isLoggedIn, setToken }) => { - const [isLoading, setIsLoading] = useState(false); - - const signIn = async (e) => { - e.preventDefault(); - setIsLoading(true); - try { - await signInWithGoogle(isLoggedIn, setToken); - } catch (error) { - console.error('Error during sign in:', error); - } - setIsLoading(false); - } - - return ( -
      - ) -} - -export default SignIn; \ No newline at end of file diff --git a/spaces/kangvcar/RealChar/realtime_ai_character/database/__init__.py b/spaces/kangvcar/RealChar/realtime_ai_character/database/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/kenjiqq/aesthetics-scorer/app.py b/spaces/kenjiqq/aesthetics-scorer/app.py deleted file mode 100644 index c0089d6b734f73687fe7d02aa9494770e9e28178..0000000000000000000000000000000000000000 --- a/spaces/kenjiqq/aesthetics-scorer/app.py +++ /dev/null @@ -1,35 +0,0 @@ -import gradio as gr -import torch -from model import preprocess, load_model -from transformers import CLIPModel, CLIPProcessor - -MODEL = "laion/CLIP-ViT-L-14-laion2B-s32B-b82K" -DEVICE = 'cuda' if torch.cuda.is_available() else 'cpu' - -model = CLIPModel.from_pretrained(MODEL) -vision_model = model.vision_model -vision_model.to(DEVICE) -del model -clip_processor = CLIPProcessor.from_pretrained(MODEL) - -rating_model = load_model("aesthetics_scorer_rating_openclip_vit_l_14.pth").to(DEVICE) -artifacts_model = load_model("aesthetics_scorer_artifacts_openclip_vit_l_14.pth").to(DEVICE) - -def predict(img): - inputs = clip_processor(images=img, return_tensors="pt").to(DEVICE) - with torch.no_grad(): - vision_output = vision_model(**inputs) - pooled_output = vision_output.pooler_output - embedding = preprocess(pooled_output) - with torch.no_grad(): - rating = rating_model(embedding) - artifact = artifacts_model(embedding) - return rating.detach().cpu().item(), artifact.detach().cpu().item() - -gr.Interface( - title="Aesthetics Scorer", - description="Predicts aesthetics and artifact scores for images using CLIP-ViT-L. Demo for https://github.com/kenjiqq/aesthetics-scorer", - fn=predict, - inputs=gr.Image(type="pil"), - outputs=[gr.Number(label="Rating ~1-10 (high is good)"), gr.Number(label="Artifacts ~0-5 (low is good)")] -).launch() \ No newline at end of file diff --git a/spaces/kevinwang676/ChatGLM2-VC-SadTalker/Dockerfile b/spaces/kevinwang676/ChatGLM2-VC-SadTalker/Dockerfile deleted file mode 100644 index 5ddc6e3d8b246534a58f9612a88b309fa7e10795..0000000000000000000000000000000000000000 --- a/spaces/kevinwang676/ChatGLM2-VC-SadTalker/Dockerfile +++ /dev/null @@ -1,59 +0,0 @@ -FROM nvidia/cuda:11.7.1-cudnn8-devel-ubuntu22.04 -ENV DEBIAN_FRONTEND=noninteractive -RUN apt-get update && \ - apt-get upgrade -y && \ - apt-get install -y --no-install-recommends \ - git \ - zip \ - unzip \ - git-lfs \ - wget \ - curl \ - # ffmpeg \ - ffmpeg \ - x264 \ - # python build dependencies \ - build-essential \ - libssl-dev \ - zlib1g-dev \ - libbz2-dev \ - libreadline-dev \ - libsqlite3-dev \ - libncursesw5-dev \ - xz-utils \ - tk-dev \ - libxml2-dev \ - libxmlsec1-dev \ - libffi-dev \ - liblzma-dev && \ - apt-get clean && \ - rm -rf /var/lib/apt/lists/* - -RUN useradd -m -u 1000 user -USER user -ENV HOME=/home/user \ - PATH=/home/user/.local/bin:${PATH} -WORKDIR ${HOME}/app - -RUN curl https://pyenv.run | bash -ENV PATH=${HOME}/.pyenv/shims:${HOME}/.pyenv/bin:${PATH} -ENV PYTHON_VERSION=3.10.9 -RUN pyenv install ${PYTHON_VERSION} && \ - pyenv global ${PYTHON_VERSION} && \ - pyenv rehash && \ - pip install --no-cache-dir -U pip setuptools wheel - -RUN pip install --no-cache-dir -U torch==1.12.1 torchvision==0.13.1 -COPY --chown=1000 requirements.txt /tmp/requirements.txt -RUN pip install --no-cache-dir -U -r /tmp/requirements.txt - -COPY --chown=1000 . 
${HOME}/app -RUN ls -a -ENV PYTHONPATH=${HOME}/app \ - PYTHONUNBUFFERED=1 \ - GRADIO_ALLOW_FLAGGING=never \ - GRADIO_NUM_PORTS=1 \ - GRADIO_SERVER_NAME=0.0.0.0 \ - GRADIO_THEME=huggingface \ - SYSTEM=spaces -CMD ["python", "app.py"] \ No newline at end of file diff --git a/spaces/kevinwang676/DreamlikeArt-PhotoReal-2.0/style.css b/spaces/kevinwang676/DreamlikeArt-PhotoReal-2.0/style.css deleted file mode 100644 index fdbef9e64cc6b9f8003698ffa38997ee22a640ac..0000000000000000000000000000000000000000 --- a/spaces/kevinwang676/DreamlikeArt-PhotoReal-2.0/style.css +++ /dev/null @@ -1,84 +0,0 @@ -#col-container { - max-width: 800px; - margin-left: auto; - margin-right: auto; -} -a { - color: inherit; - text-decoration: underline; -} -.gradio-container { - font-family: 'IBM Plex Sans', sans-serif; -} -.gr-button { - color: white; - border-color: #9d66e5; - background: #9d66e5; -} -input[type='range'] { - accent-color: #9d66e5; -} -.dark input[type='range'] { - accent-color: #dfdfdf; -} -.container { - max-width: 800px; - margin: auto; - padding-top: 1.5rem; -} -#gallery { - min-height: 22rem; - margin-bottom: 15px; - margin-left: auto; - margin-right: auto; - border-bottom-right-radius: .5rem !important; - border-bottom-left-radius: .5rem !important; -} -#gallery>div>.h-full { - min-height: 20rem; -} -.details:hover { - text-decoration: underline; -} -.gr-button { - white-space: nowrap; -} -.gr-button:focus { - border-color: rgb(147 197 253 / var(--tw-border-opacity)); - outline: none; - box-shadow: var(--tw-ring-offset-shadow), var(--tw-ring-shadow), var(--tw-shadow, 0 0 #0000); - --tw-border-opacity: 1; - --tw-ring-offset-shadow: var(--tw-ring-inset) 0 0 0 var(--tw-ring-offset-width) var(--tw-ring-offset-color); - --tw-ring-shadow: var(--tw-ring-inset) 0 0 0 calc(3px var(--tw-ring-offset-width)) var(--tw-ring-color); - --tw-ring-color: rgb(191 219 254 / var(--tw-ring-opacity)); - --tw-ring-opacity: .5; -} -#advanced-options { - margin-bottom: 20px; -} -.footer { - margin-bottom: 45px; - margin-top: 35px; - text-align: center; - border-bottom: 1px solid #e5e5e5; -} -.footer>p { - font-size: .8rem; - display: inline-block; - padding: 0 10px; - transform: translateY(10px); - background: white; -} -.dark .logo{ filter: invert(1); } -.dark .footer { - border-color: #303030; -} -.dark .footer>p { - background: #0b0f19; -} -.acknowledgments h4{ - margin: 1.25em 0 .25em 0; - font-weight: bold; - font-size: 115%; -} - diff --git a/spaces/koajoel/PolyFormer/fairseq/README.md b/spaces/koajoel/PolyFormer/fairseq/README.md deleted file mode 100644 index dd687174808a6ff341f597eb6a4cc9a1687d74a1..0000000000000000000000000000000000000000 --- a/spaces/koajoel/PolyFormer/fairseq/README.md +++ /dev/null @@ -1,229 +0,0 @@ -

      - MIT License - Latest Release - Build Status - Documentation Status -

      - --------------------------------------------------------------------------------- - -Fairseq(-py) is a sequence modeling toolkit that allows researchers and -developers to train custom models for translation, summarization, language -modeling and other text generation tasks. - -We provide reference implementations of various sequence modeling papers: - -
      List of implemented papers

      - -* **Convolutional Neural Networks (CNN)** - + [Language Modeling with Gated Convolutional Networks (Dauphin et al., 2017)](examples/language_model/conv_lm/README.md) - + [Convolutional Sequence to Sequence Learning (Gehring et al., 2017)](examples/conv_seq2seq/README.md) - + [Classical Structured Prediction Losses for Sequence to Sequence Learning (Edunov et al., 2018)](https://github.com/pytorch/fairseq/tree/classic_seqlevel) - + [Hierarchical Neural Story Generation (Fan et al., 2018)](examples/stories/README.md) - + [wav2vec: Unsupervised Pre-training for Speech Recognition (Schneider et al., 2019)](examples/wav2vec/README.md) -* **LightConv and DynamicConv models** - + [Pay Less Attention with Lightweight and Dynamic Convolutions (Wu et al., 2019)](examples/pay_less_attention_paper/README.md) -* **Long Short-Term Memory (LSTM) networks** - + Effective Approaches to Attention-based Neural Machine Translation (Luong et al., 2015) -* **Transformer (self-attention) networks** - + Attention Is All You Need (Vaswani et al., 2017) - + [Scaling Neural Machine Translation (Ott et al., 2018)](examples/scaling_nmt/README.md) - + [Understanding Back-Translation at Scale (Edunov et al., 2018)](examples/backtranslation/README.md) - + [Adaptive Input Representations for Neural Language Modeling (Baevski and Auli, 2018)](examples/language_model/README.adaptive_inputs.md) - + [Lexically constrained decoding with dynamic beam allocation (Post & Vilar, 2018)](examples/constrained_decoding/README.md) - + [Transformer-XL: Attentive Language Models Beyond a Fixed-Length Context (Dai et al., 2019)](examples/truncated_bptt/README.md) - + [Adaptive Attention Span in Transformers (Sukhbaatar et al., 2019)](examples/adaptive_span/README.md) - + [Mixture Models for Diverse Machine Translation: Tricks of the Trade (Shen et al., 2019)](examples/translation_moe/README.md) - + [RoBERTa: A Robustly Optimized BERT Pretraining Approach (Liu et al., 2019)](examples/roberta/README.md) - + [Facebook FAIR's WMT19 News Translation Task Submission (Ng et al., 2019)](examples/wmt19/README.md) - + [Jointly Learning to Align and Translate with Transformer Models (Garg et al., 2019)](examples/joint_alignment_translation/README.md ) - + [Multilingual Denoising Pre-training for Neural Machine Translation (Liu et at., 2020)](examples/mbart/README.md) - + [Neural Machine Translation with Byte-Level Subwords (Wang et al., 2020)](examples/byte_level_bpe/README.md) - + [Unsupervised Quality Estimation for Neural Machine Translation (Fomicheva et al., 2020)](examples/unsupervised_quality_estimation/README.md) - + [wav2vec 2.0: A Framework for Self-Supervised Learning of Speech Representations (Baevski et al., 2020)](examples/wav2vec/README.md) - + [Generating Medical Reports from Patient-Doctor Conversations Using Sequence-to-Sequence Models (Enarvi et al., 2020)](examples/pointer_generator/README.md) - + [Linformer: Self-Attention with Linear Complexity (Wang et al., 2020)](examples/linformer/README.md) - + [Cross-lingual Retrieval for Iterative Self-Supervised Training (Tran et al., 2020)](examples/criss/README.md) - + [Deep Transformers with Latent Depth (Li et al., 2020)](examples/latent_depth/README.md) - + [Unsupervised Cross-lingual Representation Learning for Speech Recognition (Conneau et al., 2020)](https://arxiv.org/abs/2006.13979) - + [Robust wav2vec 2.0: Analyzing Domain Shift in Self-Supervised Pre-Training (Hsu, et al., 2021)](https://arxiv.org/abs/2104.01027) - + [Unsupervised Speech Recognition (Baevski, et al., 
2021)](https://arxiv.org/abs/2105.11084) -* **Non-autoregressive Transformers** - + Non-Autoregressive Neural Machine Translation (Gu et al., 2017) - + Deterministic Non-Autoregressive Neural Sequence Modeling by Iterative Refinement (Lee et al. 2018) - + Insertion Transformer: Flexible Sequence Generation via Insertion Operations (Stern et al. 2019) - + Mask-Predict: Parallel Decoding of Conditional Masked Language Models (Ghazvininejad et al., 2019) - + [Levenshtein Transformer (Gu et al., 2019)](examples/nonautoregressive_translation/README.md) -* **Finetuning** - + [Better Fine-Tuning by Reducing Representational Collapse (Aghajanyan et al. 2020)](examples/rxf/README.md) - -

      - -### What's New: - -* September 2021 [`master` branch renamed to `main`](https://github.com/github/renaming). -* July 2021 [Released DrNMT code](examples/discriminative_reranking_nmt/README.md) -* July 2021 [Released Robust wav2vec 2.0 model](examples/wav2vec/README.md) -* June 2021 [Released XLMR-XL and XLMR-XXL models](examples/xlmr/README.md) -* May 2021 [Released Unsupervised Speech Recognition code](examples/wav2vec/unsupervised/README.md) -* March 2021 [Added full parameter and optimizer state sharding + CPU offloading](examples/fully_sharded_data_parallel/README.md) -* February 2021 [Added LASER training code](examples/laser/README.md) -* December 2020: [Added Adaptive Attention Span code](examples/adaptive_span/README.md) -* December 2020: [GottBERT model and code released](examples/gottbert/README.md) -* November 2020: Adopted the [Hydra](https://github.com/facebookresearch/hydra) configuration framework - * [see documentation explaining how to use it for new and existing projects](docs/hydra_integration.md) -* November 2020: [fairseq 0.10.0 released](https://github.com/pytorch/fairseq/releases/tag/v0.10.0) -* October 2020: [Added R3F/R4F (Better Fine-Tuning) code](examples/rxf/README.md) -* October 2020: [Deep Transformer with Latent Depth code released](examples/latent_depth/README.md) -* October 2020: [Added CRISS models and code](examples/criss/README.md) - -
      Previous updates

      - -* September 2020: [Added Linformer code](examples/linformer/README.md) -* September 2020: [Added pointer-generator networks](examples/pointer_generator/README.md) -* August 2020: [Added lexically constrained decoding](examples/constrained_decoding/README.md) -* August 2020: [wav2vec2 models and code released](examples/wav2vec/README.md) -* July 2020: [Unsupervised Quality Estimation code released](examples/unsupervised_quality_estimation/README.md) -* May 2020: [Follow fairseq on Twitter](https://twitter.com/fairseq) -* April 2020: [Monotonic Multihead Attention code released](examples/simultaneous_translation/README.md) -* April 2020: [Quant-Noise code released](examples/quant_noise/README.md) -* April 2020: [Initial model parallel support and 11B parameters unidirectional LM released](examples/megatron_11b/README.md) -* March 2020: [Byte-level BPE code released](examples/byte_level_bpe/README.md) -* February 2020: [mBART model and code released](examples/mbart/README.md) -* February 2020: [Added tutorial for back-translation](https://github.com/pytorch/fairseq/tree/main/examples/backtranslation#training-your-own-model-wmt18-english-german) -* December 2019: [fairseq 0.9.0 released](https://github.com/pytorch/fairseq/releases/tag/v0.9.0) -* November 2019: [VizSeq released (a visual analysis toolkit for evaluating fairseq models)](https://facebookresearch.github.io/vizseq/docs/getting_started/fairseq_example) -* November 2019: [CamemBERT model and code released](examples/camembert/README.md) -* November 2019: [BART model and code released](examples/bart/README.md) -* November 2019: [XLM-R models and code released](examples/xlmr/README.md) -* September 2019: [Nonautoregressive translation code released](examples/nonautoregressive_translation/README.md) -* August 2019: [WMT'19 models released](examples/wmt19/README.md) -* July 2019: fairseq relicensed under MIT license -* July 2019: [RoBERTa models and code released](examples/roberta/README.md) -* June 2019: [wav2vec models and code released](examples/wav2vec/README.md) - -

      - -### Features: - -* multi-GPU training on one machine or across multiple machines (data and model parallel) -* fast generation on both CPU and GPU with multiple search algorithms implemented: - + beam search - + Diverse Beam Search ([Vijayakumar et al., 2016](https://arxiv.org/abs/1610.02424)) - + sampling (unconstrained, top-k and top-p/nucleus) - + [lexically constrained decoding](examples/constrained_decoding/README.md) (Post & Vilar, 2018) -* [gradient accumulation](https://fairseq.readthedocs.io/en/latest/getting_started.html#large-mini-batch-training-with-delayed-updates) enables training with large mini-batches even on a single GPU -* [mixed precision training](https://fairseq.readthedocs.io/en/latest/getting_started.html#training-with-half-precision-floating-point-fp16) (trains faster with less GPU memory on [NVIDIA tensor cores](https://developer.nvidia.com/tensor-cores)) -* [extensible](https://fairseq.readthedocs.io/en/latest/overview.html): easily register new models, criterions, tasks, optimizers and learning rate schedulers -* [flexible configuration](docs/hydra_integration.md) based on [Hydra](https://github.com/facebookresearch/hydra) allowing a combination of code, command-line and file based configuration -* [full parameter and optimizer state sharding](examples/fully_sharded_data_parallel/README.md) -* [offloading parameters to CPU](examples/fully_sharded_data_parallel/README.md) - -We also provide [pre-trained models for translation and language modeling](#pre-trained-models-and-examples) -with a convenient `torch.hub` interface: - -``` python -en2de = torch.hub.load('pytorch/fairseq', 'transformer.wmt19.en-de.single_model') -en2de.translate('Hello world', beam=5) -# 'Hallo Welt' -``` - -See the PyTorch Hub tutorials for [translation](https://pytorch.org/hub/pytorch_fairseq_translation/) -and [RoBERTa](https://pytorch.org/hub/pytorch_fairseq_roberta/) for more examples. - -# Requirements and Installation - -* [PyTorch](http://pytorch.org/) version >= 1.5.0 -* Python version >= 3.6 -* For training new models, you'll also need an NVIDIA GPU and [NCCL](https://github.com/NVIDIA/nccl) -* **To install fairseq** and develop locally: - -``` bash -git clone https://github.com/pytorch/fairseq -cd fairseq -pip install --editable ./ - -# on MacOS: -# CFLAGS="-stdlib=libc++" pip install --editable ./ - -# to install the latest stable release (0.10.x) -# pip install fairseq -``` - -* **For faster training** install NVIDIA's [apex](https://github.com/NVIDIA/apex) library: - -``` bash -git clone https://github.com/NVIDIA/apex -cd apex -pip install -v --no-cache-dir --global-option="--cpp_ext" --global-option="--cuda_ext" \ - --global-option="--deprecated_fused_adam" --global-option="--xentropy" \ - --global-option="--fast_multihead_attn" ./ -``` - -* **For large datasets** install [PyArrow](https://arrow.apache.org/docs/python/install.html#using-pip): `pip install pyarrow` -* If you use Docker make sure to increase the shared memory size either with `--ipc=host` or `--shm-size` - as command line options to `nvidia-docker run` . - -# Getting Started - -The [full documentation](https://fairseq.readthedocs.io/) contains instructions -for getting started, training new models and extending fairseq with new model -types and tasks. - -# Pre-trained models and examples - -We provide pre-trained models and pre-processed, binarized test sets for several tasks listed below, -as well as example training and evaluation commands. 
- -* [Translation](examples/translation/README.md): convolutional and transformer models are available -* [Language Modeling](examples/language_model/README.md): convolutional and transformer models are available - -We also have more detailed READMEs to reproduce results from specific papers: - -* [Cross-lingual Retrieval for Iterative Self-Supervised Training (Tran et al., 2020)](examples/criss/README.md) -* [wav2vec 2.0: A Framework for Self-Supervised Learning of Speech Representations (Baevski et al., 2020)](examples/wav2vec/README.md) -* [Unsupervised Quality Estimation for Neural Machine Translation (Fomicheva et al., 2020)](examples/unsupervised_quality_estimation/README.md) -* [Training with Quantization Noise for Extreme Model Compression ({Fan*, Stock*} et al., 2020)](examples/quant_noise/README.md) -* [Neural Machine Translation with Byte-Level Subwords (Wang et al., 2020)](examples/byte_level_bpe/README.md) -* [Multilingual Denoising Pre-training for Neural Machine Translation (Liu et at., 2020)](examples/mbart/README.md) -* [Reducing Transformer Depth on Demand with Structured Dropout (Fan et al., 2019)](examples/layerdrop/README.md) -* [Jointly Learning to Align and Translate with Transformer Models (Garg et al., 2019)](examples/joint_alignment_translation/README.md) -* [Levenshtein Transformer (Gu et al., 2019)](examples/nonautoregressive_translation/README.md) -* [Facebook FAIR's WMT19 News Translation Task Submission (Ng et al., 2019)](examples/wmt19/README.md) -* [RoBERTa: A Robustly Optimized BERT Pretraining Approach (Liu et al., 2019)](examples/roberta/README.md) -* [wav2vec: Unsupervised Pre-training for Speech Recognition (Schneider et al., 2019)](examples/wav2vec/README.md) -* [Mixture Models for Diverse Machine Translation: Tricks of the Trade (Shen et al., 2019)](examples/translation_moe/README.md) -* [Pay Less Attention with Lightweight and Dynamic Convolutions (Wu et al., 2019)](examples/pay_less_attention_paper/README.md) -* [Understanding Back-Translation at Scale (Edunov et al., 2018)](examples/backtranslation/README.md) -* [Classical Structured Prediction Losses for Sequence to Sequence Learning (Edunov et al., 2018)](https://github.com/pytorch/fairseq/tree/classic_seqlevel) -* [Hierarchical Neural Story Generation (Fan et al., 2018)](examples/stories/README.md) -* [Scaling Neural Machine Translation (Ott et al., 2018)](examples/scaling_nmt/README.md) -* [Convolutional Sequence to Sequence Learning (Gehring et al., 2017)](examples/conv_seq2seq/README.md) -* [Language Modeling with Gated Convolutional Networks (Dauphin et al., 2017)](examples/language_model/README.conv.md) - -# Join the fairseq community - -* Twitter: https://twitter.com/fairseq -* Facebook page: https://www.facebook.com/groups/fairseq.users -* Google group: https://groups.google.com/forum/#!forum/fairseq-users - -# License - -fairseq(-py) is MIT-licensed. -The license applies to the pre-trained models as well. 
- -# Citation - -Please cite as: - -``` bibtex -@inproceedings{ott2019fairseq, - title = {fairseq: A Fast, Extensible Toolkit for Sequence Modeling}, - author = {Myle Ott and Sergey Edunov and Alexei Baevski and Angela Fan and Sam Gross and Nathan Ng and David Grangier and Michael Auli}, - booktitle = {Proceedings of NAACL-HLT 2019: Demonstrations}, - year = {2019}, -} -``` diff --git a/spaces/koajoel/PolyFormer/fairseq/examples/linformer/linformer_src/modules/multihead_linear_attention.py b/spaces/koajoel/PolyFormer/fairseq/examples/linformer/linformer_src/modules/multihead_linear_attention.py deleted file mode 100644 index 6be1007279217c5de644e8b054f5d14a19f06c55..0000000000000000000000000000000000000000 --- a/spaces/koajoel/PolyFormer/fairseq/examples/linformer/linformer_src/modules/multihead_linear_attention.py +++ /dev/null @@ -1,481 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import math -from typing import Dict, Optional, Tuple - -import torch -import torch.nn.functional as F -from fairseq import utils -from fairseq.incremental_decoding_utils import with_incremental_state -from fairseq.modules.quant_noise import quant_noise -from torch import Tensor, nn -from torch.nn import Parameter - - -@with_incremental_state -class MultiheadLinearAttention(nn.Module): - """Multi-headed linformer attention. - - Projects the key and values down to the compressed dimension, before computing self-attention. - - See "Linformer: Self-Attention with Linear Complexity" for more details. - """ - - def __init__( - self, - embed_dim, - num_heads, - kdim=None, - vdim=None, - dropout=0.0, - bias=True, - add_bias_kv=False, - add_zero_attn=False, - self_attention=False, - encoder_decoder_attention=False, - q_noise=0.0, - qn_block_size=8, - compressed=1, - max_seq_len=256, - shared_kv_compressed=0, - shared_compress_layer=None, - freeze_compress=0, - ): - super().__init__() - self.embed_dim = embed_dim - self.kdim = kdim if kdim is not None else embed_dim - self.vdim = vdim if vdim is not None else embed_dim - self.qkv_same_dim = self.kdim == embed_dim and self.vdim == embed_dim - - self.num_heads = num_heads - self.dropout = dropout - self.head_dim = embed_dim // num_heads - assert ( - self.head_dim * num_heads == self.embed_dim - ), "embed_dim must be divisible by num_heads" - self.scaling = self.head_dim ** -0.5 - - self.self_attention = self_attention - self.encoder_decoder_attention = encoder_decoder_attention - - assert not self.self_attention or self.qkv_same_dim, ( - "Self-attention requires query, key and " "value to be of the same size" - ) - - self.k_proj = quant_noise( - nn.Linear(self.kdim, embed_dim, bias=bias), q_noise, qn_block_size - ) - self.v_proj = quant_noise( - nn.Linear(self.vdim, embed_dim, bias=bias), q_noise, qn_block_size - ) - self.q_proj = quant_noise( - nn.Linear(embed_dim, embed_dim, bias=bias), q_noise, qn_block_size - ) - - # used for compress sequence to subsequence - if shared_compress_layer is None: - self.compress_seq_len = max_seq_len // compressed - self.compress_k = nn.Linear(max_seq_len, self.compress_seq_len, bias=False) - if shared_kv_compressed == 0: - self.compress_v = nn.Linear( - max_seq_len, self.compress_seq_len, bias=False - ) - self.layerwise_sharing = False - else: - self.compress_k = shared_compress_layer - if shared_kv_compressed == 0: - self.compress_v = shared_compress_layer - self.layerwise_sharing = True - 
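        # When no shared_compress_layer is given, compress_k / compress_v are
        # nn.Linear(max_seq_len, max_seq_len // compressed) layers applied along the
        # sequence (time) axis in forward(), projecting the key/value sequence length
        # down to max_seq_len // compressed before attention; this is the
        # linear-complexity mechanism from "Linformer: Self-Attention with Linear
        # Complexity". With shared_kv_compressed == 1, the key projection is reused
        # for the values.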
self.shared_kv_compressed = shared_kv_compressed - - self.out_proj = quant_noise( - nn.Linear(embed_dim, embed_dim, bias=bias), q_noise, qn_block_size - ) - - if add_bias_kv: - self.bias_k = Parameter(torch.Tensor(1, 1, embed_dim)) - self.bias_v = Parameter(torch.Tensor(1, 1, embed_dim)) - else: - self.bias_k = self.bias_v = None - - self.add_zero_attn = add_zero_attn - - self.reset_parameters() - - if freeze_compress == 1: - self.compress_k.weight.requires_grad = False - if shared_kv_compressed == 0: - self.compress_v.weight.requires_grad = False - - self.onnx_trace = False - - def prepare_for_onnx_export_(self): - self.onnx_trace = True - - def reset_parameters(self): - if self.qkv_same_dim: - # Empirically observed the convergence to be much better with - # the scaled initialization - nn.init.xavier_uniform_(self.k_proj.weight, gain=1 / math.sqrt(2)) - nn.init.xavier_uniform_(self.v_proj.weight, gain=1 / math.sqrt(2)) - nn.init.xavier_uniform_(self.q_proj.weight, gain=1 / math.sqrt(2)) - if ( - not self.layerwise_sharing - ): # otherwise, we already initialize the parameters - nn.init.xavier_uniform_(self.compress_k.weight, gain=1 / math.sqrt(2)) - if self.shared_kv_compressed == 0: - nn.init.xavier_uniform_( - self.compress_v.weight, gain=1 / math.sqrt(2) - ) - else: - nn.init.xavier_uniform_(self.k_proj.weight) - nn.init.xavier_uniform_(self.v_proj.weight) - nn.init.xavier_uniform_(self.q_proj.weight) - if ( - not self.layerwise_sharing - ): # otherwise, we already initialize the parameters - nn.init.xavier_uniform_(self.compress_k.weight) - if self.shared_kv_compressed == 0: - nn.init.xavier_uniform_(self.compress_v.weight) - - nn.init.xavier_uniform_(self.out_proj.weight) - if self.out_proj.bias is not None: - nn.init.constant_(self.out_proj.bias, 0.0) - if self.bias_k is not None: - nn.init.xavier_normal_(self.bias_k) - if self.bias_v is not None: - nn.init.xavier_normal_(self.bias_v) - - def forward( - self, - query, - key: Optional[Tensor], - value: Optional[Tensor], - key_padding_mask: Optional[Tensor] = None, - incremental_state: Optional[Dict[str, Dict[str, Optional[Tensor]]]] = None, - need_weights: bool = True, - static_kv: bool = False, - attn_mask: Optional[Tensor] = None, - before_softmax: bool = False, - need_head_weights: bool = False, - ) -> Tuple[Tensor, Optional[Tensor]]: - """Input shape: Time x Batch x Channel - - Args: - key_padding_mask (ByteTensor, optional): mask to exclude - keys that are pads, of shape `(batch, src_len)`, where - padding elements are indicated by 1s. - need_weights (bool, optional): return the attention weights, - averaged over heads (default: False). - attn_mask (ByteTensor, optional): typically used to - implement causal attention, where the mask prevents the - attention from looking forward in time (default: None). - before_softmax (bool, optional): return the raw attention - weights and values before the attention softmax. - need_head_weights (bool, optional): return the attention - weights for each head. Implies *need_weights*. Default: - return the average attention weights over all heads. 
- """ - if need_head_weights: - need_weights = True - - tgt_len, bsz, embed_dim = query.size() - assert embed_dim == self.embed_dim - assert list(query.size()) == [tgt_len, bsz, embed_dim] - - if incremental_state is not None: - saved_state = self._get_input_buffer(incremental_state) - if saved_state is not None and "prev_key" in saved_state: - # previous time steps are cached - no need to recompute - # key and value if they are static - if static_kv: - assert self.encoder_decoder_attention and not self.self_attention - key = value = None - else: - saved_state = None - - if self.self_attention: - q = self.q_proj(query) - - k_input = query.permute(1, 2, 0).contiguous() # B * C * T - k_input = ( - F.linear(k_input, self.compress_k.weight[:, 0:tgt_len]) - .permute(2, 0, 1) - .contiguous() - ) - k = self.k_proj(k_input) - - v_input = query.permute(1, 2, 0).contiguous() # B * C * T - if self.shared_kv_compressed == 0: - v_input = ( - F.linear(v_input, self.compress_v.weight[:, 0:tgt_len]) - .permute(2, 0, 1) - .contiguous() - ) - if self.shared_kv_compressed == 1: # use shared kv compressed linear layer - v_input = ( - F.linear(v_input, self.compress_k.weight[:, 0:tgt_len]) - .permute(2, 0, 1) - .contiguous() - ) - v = self.v_proj(v_input) - elif self.encoder_decoder_attention: - # encoder-decoder attention - q = self.q_proj(query) - if key is None: - assert value is None - k = v = None - else: - k = self.k_proj(key) - v = self.v_proj(key) - - else: - assert key is not None and value is not None - q = self.q_proj(query) - k = self.k_proj(key) - v = self.v_proj(value) - q *= self.scaling - - if self.bias_k is not None: - assert self.bias_v is not None - k = torch.cat([k, self.bias_k.repeat(1, bsz, 1)]) - v = torch.cat([v, self.bias_v.repeat(1, bsz, 1)]) - if attn_mask is not None: - attn_mask = torch.cat( - [attn_mask, attn_mask.new_zeros(attn_mask.size(0), 1)], dim=1 - ) - if key_padding_mask is not None: - key_padding_mask = torch.cat( - [ - key_padding_mask, - key_padding_mask.new_zeros(key_padding_mask.size(0), 1), - ], - dim=1, - ) - - q = ( - q.contiguous() - .view(tgt_len, bsz * self.num_heads, self.head_dim) - .transpose(0, 1) - ) - if k is not None: - k = ( - k.contiguous() - .view(-1, bsz * self.num_heads, self.head_dim) - .transpose(0, 1) - ) - if v is not None: - v = ( - v.contiguous() - .view(-1, bsz * self.num_heads, self.head_dim) - .transpose(0, 1) - ) - - if saved_state is not None: - # saved states are stored with shape (bsz, num_heads, seq_len, head_dim) - if "prev_key" in saved_state: - _prev_key = saved_state["prev_key"] - assert _prev_key is not None - prev_key = _prev_key.view(bsz * self.num_heads, -1, self.head_dim) - if static_kv: - k = prev_key - else: - assert k is not None - k = torch.cat([prev_key, k], dim=1) - if "prev_value" in saved_state: - _prev_value = saved_state["prev_value"] - assert _prev_value is not None - prev_value = _prev_value.view(bsz * self.num_heads, -1, self.head_dim) - if static_kv: - v = prev_value - else: - assert v is not None - v = torch.cat([prev_value, v], dim=1) - prev_key_padding_mask: Optional[Tensor] = None - if "prev_key_padding_mask" in saved_state: - prev_key_padding_mask = saved_state["prev_key_padding_mask"] - assert k is not None and v is not None - key_padding_mask = MultiheadLinearAttention._append_prev_key_padding_mask( - key_padding_mask=key_padding_mask, - prev_key_padding_mask=prev_key_padding_mask, - batch_size=bsz, - src_len=k.size(1), - static_kv=static_kv, - ) - - saved_state["prev_key"] = k.view(bsz, self.num_heads, -1, 
self.head_dim) - saved_state["prev_value"] = v.view(bsz, self.num_heads, -1, self.head_dim) - saved_state["prev_key_padding_mask"] = key_padding_mask - # In this branch incremental_state is never None - assert incremental_state is not None - incremental_state = self._set_input_buffer(incremental_state, saved_state) - assert k is not None - src_len = k.size(1) - - if self.add_zero_attn: - assert v is not None - src_len += 1 - k = torch.cat([k, k.new_zeros((k.size(0), 1) + k.size()[2:])], dim=1) - v = torch.cat([v, v.new_zeros((v.size(0), 1) + v.size()[2:])], dim=1) - if attn_mask is not None: - attn_mask = torch.cat( - [attn_mask, attn_mask.new_zeros(attn_mask.size(0), 1)], dim=1 - ) - - attn_weights = torch.bmm(q, k.transpose(1, 2)) - attn_weights = MultiheadLinearAttention.apply_sparse_mask( - attn_weights, tgt_len, src_len, bsz - ) - - assert list(attn_weights.size()) == [bsz * self.num_heads, tgt_len, src_len] - - if attn_mask is not None: - attn_mask = attn_mask.unsqueeze(0) - if self.onnx_trace: - attn_mask = attn_mask.repeat(attn_weights.size(0), 1, 1) - attn_weights += attn_mask - - if before_softmax: - return attn_weights, v - - attn_weights_float = utils.softmax( - attn_weights, dim=-1, onnx_trace=self.onnx_trace - ) - attn_weights = attn_weights_float.type_as(attn_weights) - attn_probs = F.dropout( - attn_weights, - p=self.dropout, - training=self.training, - ) - assert v is not None - attn = torch.bmm(attn_probs, v) - assert list(attn.size()) == [bsz * self.num_heads, tgt_len, self.head_dim] - if self.onnx_trace and attn.size(1) == 1: - # when ONNX tracing a single decoder step (sequence length == 1) - # the transpose is a no-op copy before view, thus unnecessary - attn = attn.contiguous().view(tgt_len, bsz, embed_dim) - else: - attn = attn.transpose(0, 1).contiguous().view(tgt_len, bsz, embed_dim) - attn = self.out_proj(attn) - attn_weights: Optional[Tensor] = None - if need_weights: - attn_weights = attn_weights_float.view( - bsz, self.num_heads, tgt_len, src_len - ).transpose(1, 0) - if not need_head_weights: - # average attention weights over heads - attn_weights = attn_weights.mean(dim=0) - - return attn, attn_weights - - @staticmethod - def _append_prev_key_padding_mask( - key_padding_mask: Optional[Tensor], - prev_key_padding_mask: Optional[Tensor], - batch_size: int, - src_len: int, - static_kv: bool, - ) -> Optional[Tensor]: - # saved key padding masks have shape (bsz, seq_len) - if prev_key_padding_mask is not None and static_kv: - new_key_padding_mask = prev_key_padding_mask - elif prev_key_padding_mask is not None and key_padding_mask is not None: - new_key_padding_mask = torch.cat( - [prev_key_padding_mask.float(), key_padding_mask.float()], dim=1 - ) - # During incremental decoding, as the padding token enters and - # leaves the frame, there will be a time when prev or current - # is None - elif prev_key_padding_mask is not None: - filler = torch.zeros( - (batch_size, src_len - prev_key_padding_mask.size(1)), - device=prev_key_padding_mask.device, - ) - new_key_padding_mask = torch.cat( - [prev_key_padding_mask.float(), filler.float()], dim=1 - ) - elif key_padding_mask is not None: - filler = torch.zeros( - (batch_size, src_len - key_padding_mask.size(1)), - device=key_padding_mask.device, - ) - new_key_padding_mask = torch.cat( - [filler.float(), key_padding_mask.float()], dim=1 - ) - else: - new_key_padding_mask = prev_key_padding_mask - return new_key_padding_mask - - @torch.jit.export - def reorder_incremental_state( - self, - incremental_state: Dict[str, 
Dict[str, Optional[Tensor]]], - new_order: Tensor, - ): - """Reorder buffered internal state (for incremental generation).""" - input_buffer = self._get_input_buffer(incremental_state) - if input_buffer is not None: - for k in input_buffer.keys(): - input_buffer_k = input_buffer[k] - if input_buffer_k is not None: - if self.encoder_decoder_attention and input_buffer_k.size( - 0 - ) == new_order.size(0): - break - input_buffer[k] = input_buffer_k.index_select(0, new_order) - incremental_state = self._set_input_buffer(incremental_state, input_buffer) - return incremental_state - - def _get_input_buffer( - self, incremental_state: Optional[Dict[str, Dict[str, Optional[Tensor]]]] - ) -> Dict[str, Optional[Tensor]]: - result = self.get_incremental_state(incremental_state, "attn_state") - if result is not None: - return result - else: - empty_result: Dict[str, Optional[Tensor]] = {} - return empty_result - - def _set_input_buffer( - self, - incremental_state: Dict[str, Dict[str, Optional[Tensor]]], - buffer: Dict[str, Optional[Tensor]], - ): - return self.set_incremental_state(incremental_state, "attn_state", buffer) - - def apply_sparse_mask(attn_weights, tgt_len: int, src_len: int, bsz: int): - return attn_weights - - def upgrade_state_dict_named(self, state_dict, name): - prefix = name + "." if name != "" else "" - items_to_add = {} - keys_to_remove = [] - for k in state_dict.keys(): - if k.endswith(prefix + "in_proj_weight"): - # in_proj_weight used to be q + k + v with same dimensions - dim = int(state_dict[k].shape[0] / 3) - items_to_add[prefix + "q_proj.weight"] = state_dict[k][:dim] - items_to_add[prefix + "k_proj.weight"] = state_dict[k][dim : 2 * dim] - items_to_add[prefix + "v_proj.weight"] = state_dict[k][2 * dim :] - - keys_to_remove.append(k) - - k_bias = prefix + "in_proj_bias" - if k_bias in state_dict.keys(): - dim = int(state_dict[k].shape[0] / 3) - items_to_add[prefix + "q_proj.bias"] = state_dict[k_bias][:dim] - items_to_add[prefix + "k_proj.bias"] = state_dict[k_bias][ - dim : 2 * dim - ] - items_to_add[prefix + "v_proj.bias"] = state_dict[k_bias][2 * dim :] - - keys_to_remove.append(prefix + "in_proj_bias") - - for k in keys_to_remove: - del state_dict[k] - - for key, value in items_to_add.items(): - state_dict[key] = value diff --git a/spaces/koushik-org/Trading_QA_Bot/app.py b/spaces/koushik-org/Trading_QA_Bot/app.py deleted file mode 100644 index e96ba927709f8c9a33d9d72da175762cfb8c86b7..0000000000000000000000000000000000000000 --- a/spaces/koushik-org/Trading_QA_Bot/app.py +++ /dev/null @@ -1,75 +0,0 @@ -# Imports -import gradio as gr -from helper_functions import * - -with gr.Blocks() as app: - gr.Markdown('# Trading Q&A Bot') - session_data = gr.State([ - [],[] - ]) - def user(user_message, history): - return "", history + [[user_message, None]] - - def bot(history, session_data_fn): - messages_archived = session_data_fn[0] - messages_current = session_data_fn[1] - bot_message, messages_archived, messages_current = get_reply(history[-1][0], messages_archived, messages_current) - history[-1][1] = bot_message - session_data_fn[0] = messages_archived - session_data_fn[1] = messages_current - return history, session_data_fn - - def reset_memory(session_data_fn): - messages_archived = session_data_fn[0] - # print("Message Archived Len=", len(messages_archived)) - if(len(messages_archived)>=21): - messages_archived = messages_archived[0:1] + messages_archived[3:] - session_data_fn[0] = messages_archived - return session_data_fn - - def clear_data(session_data_fn): - 
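        # Reset the session: re-seed the archived history (what is sent to the model)
        # with only the system prompt (pre_text) and empty the current context shown
        # in the "Prompt" tab; returning None also clears the chatbot display.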
messages_archived = [ - {"role": "system", "content": pre_text} - ] - messages_current = [] - session_data_fn[0] = messages_archived - session_data_fn[1] = messages_current - return None, session_data_fn - - def get_context_gr(session_data_fn): - messages_current = session_data_fn[1] - return str(messages_current) - - with gr.Tab("Chat"): - with gr.Row(): - with gr.Column(): - msg = gr.Textbox() - with gr.Row(): - submit = gr.Button("Submit") - clear = gr.Button("Clear") - with gr.Column(): - chatbot = gr.Chatbot() - - with gr.Tab("Prompt"): - context = gr.Textbox() - submit_p = gr.Button("Check Prompt") - # Tab Chat - msg.submit(user, [msg, chatbot], [msg, chatbot], queue=False).then( - bot, [chatbot, session_data], [chatbot, session_data] - ).then( - fn = reset_memory, inputs = session_data, outputs = session_data - ) - submit.click(user, [msg, chatbot], [msg, chatbot], queue=False).then( - bot, [chatbot, session_data], [chatbot, session_data] - ).then( - fn = reset_memory, inputs = session_data, outputs = session_data - ) - clear.click( - fn = clear_data, - inputs = session_data, - outputs = [chatbot, session_data], - queue = False - ) - # Tab Prompt - submit_p.click(get_context_gr, session_data, context, queue=False) -app.launch(auth=(os.getenv("id"), os.getenv("password")), show_api=False) \ No newline at end of file diff --git a/spaces/krafiq/deep-neural-networks-for-navier-stokes-equations/app.py b/spaces/krafiq/deep-neural-networks-for-navier-stokes-equations/app.py deleted file mode 100644 index 94e39310c677a502dcbd0de850aad355df808244..0000000000000000000000000000000000000000 --- a/spaces/krafiq/deep-neural-networks-for-navier-stokes-equations/app.py +++ /dev/null @@ -1,290 +0,0 @@ -import pandas as pd -import tensorflow as tf -from tensorflow.keras.models import load_model -import cv2 -import numpy as np -import matplotlib.pyplot as plt -import gradio as gr - -# developing the flowfield space -flow_field = np.ones((128,256), dtype = np.uint8) - -# Changing the left input side -flow_field[:,0] = 3 -# Changing the right output side -flow_field[:,-1] = 4 -# Changing the top layer -flow_field[0,:] = 2 -# Changing the bottom layer -flow_field[-1,:] = 2 - -mean_u = 0.075003795 -mean_v = -0.000036 -mean_p = 0.004301 - -std_dev_u = 0.04605 -std_dev_v = 0.013812 -std_dev_p = 0.007917 - -def nvs_loss(y_pred, rho=10, nu=0.0001): #arbitary rho and nu(Later use values of air) - u,v,p = tf.split(y_pred, 3, axis=3) - - #First order derivative - du_dx, du_dy = tf.image.image_gradients(u) # tf.image.image_gradients returns a tuple containing two tensors: u-grad along the x dir and u-grad along the y dir - dv_dx, dv_dy = tf.image.image_gradients(v) - dp_dx, dp_dy = tf.image.image_gradients(p) - - #Second order derivatives - du_dx2, du_dydx = tf.image.image_gradients(du_dx) # du_dydx will be unused - du_dxdy, du_dy2 = tf.image.image_gradients(du_dy) # du_dxdy will be unused - - dv_dx2, dv_dydx = tf.image.image_gradients(dv_dx) - dv_dxdy, dv_dy2 = tf.image.image_gradients(dv_dy) - - #Momentum equation - er1_tensor = tf.math.multiply(u, du_dx) + tf.math.multiply(v, du_dy) + 1.0*dp_dx/rho - nu*(du_dx2 + du_dy2) - er2_tensor = tf.math.multiply(u, dv_dx) + tf.math.multiply(v, dv_dy) + 1.0*dp_dy/rho - nu*(dv_dx2 + dv_dy2) - - # # #Continuity equation - er3_tensor = du_dx + dv_dy - - er1 = tf.reduce_mean(er1_tensor) - er2 = tf.reduce_mean(er2_tensor) - er3 = tf.reduce_mean(er3_tensor) - - return er1*er1 + er2*er2 + er3*er3 - - # Initiating the Loss Function- -def custom_loss(y_true, y_pred): - nv_loss = 
nvs_loss(y_pred) - mse_loss = tf.reduce_mean(tf.square(y_true-y_pred)) # Try mse loss function here - return mse_loss + nv_loss - -import torch -import matplotlib -def colorize(value, vmin=None, vmax=None, cmap='gray_r', invalid_val=-99, invalid_mask=None, background_color=(128, 128, 128, 255), gamma_corrected=False, value_transform=None): - """Converts a depth map to a color image. - - Args: - value (torch.Tensor, numpy.ndarry): Input depth map. Shape: (H, W) or (1, H, W) or (1, 1, H, W). All singular dimensions are squeezed - vmin (float, optional): vmin-valued entries are mapped to start color of cmap. If None, value.min() is used. Defaults to None. - vmax (float, optional): vmax-valued entries are mapped to end color of cmap. If None, value.max() is used. Defaults to None. - cmap (str, optional): matplotlib colormap to use. Defaults to 'magma_r'. - invalid_val (int, optional): Specifies value of invalid pixels that should be colored as 'background_color'. Defaults to -99. - invalid_mask (numpy.ndarray, optional): Boolean mask for invalid regions. Defaults to None. - background_color (tuple[int], optional): 4-tuple RGB color to give to invalid pixels. Defaults to (128, 128, 128, 255). - gamma_corrected (bool, optional): Apply gamma correction to colored image. Defaults to False. - value_transform (Callable, optional): Apply transform function to valid pixels before coloring. Defaults to None. - - Returns: - numpy.ndarray, dtype - uint8: Colored depth map. Shape: (H, W, 4) - """ - if isinstance(value, torch.Tensor): - value = value.detach().cpu().numpy() - - value = value.squeeze() - if invalid_mask is None: - invalid_mask = value == invalid_val - mask = np.logical_not(invalid_mask) - - # normalize - # vmin = np.percentile(value[mask],2) if vmin is None else vmin - # vmax = np.percentile(value[mask],85) if vmax is None else vmax - vmin = np.min(value[mask]) if vmin is None else vmin - vmax = np.max(value[mask]) if vmax is None else vmax - if vmin != vmax: - value = (value - vmin) / (vmax - vmin) # vmin..vmax - else: - # Avoid 0-division - value = value * 0. - - # squeeze last dim if it exists - # grey out the invalid values - - value[invalid_mask] = np.nan - cmapper = matplotlib.cm.get_cmap(cmap) - if value_transform: - value = value_transform(value) - # value = value / value.max() - value = cmapper(value, bytes=True) # (nxmx4) - - # img = value[:, :, :] - img = value[...] - img[invalid_mask] = background_color - - # return img.transpose((2, 0, 1)) - if gamma_corrected: - # gamma correction - img = img / 255 - img = np.power(img, 2.2) - img = img * 255 - img = img.astype(np.uint8) - return img - -def img_preprocess(image, h, w): - # Convert the drawn image to grayscale - img_gray = cv2.cvtColor(image, cv2.COLOR_RGB2GRAY) - - # Threshold the grayscale image to create a binary image - _, binary_img = cv2.threshold(img_gray, 1, 255, cv2.THRESH_BINARY) - - # Perform flood fill starting from a point inside the shape. 
Fill the inside with pixel value 0 - seed_point = (int(h/2), int(w/2)) - retval, flooded_image, mask, rect = cv2.floodFill(binary_img, None, seed_point, 0) - flooded_image = (flooded_image/255).astype(np.uint8) - return flooded_image - -def patch_stiching(flooded_image, h, w, x0, y0): # ((x0, y0) = center of channel, (w1, h1) = height and width of patch) - flow_field_updated = np.copy(flow_field) - flow_field_updated[int(x0-w/2):int(x0+w/2),int(y0-h/2):int(y0+h/2)] = flooded_image - - - # flow_field_updated is the main thing that we will use to make our predictions on - - test_img = np.expand_dims(flow_field_updated, axis = 0) - test_img = np.expand_dims(test_img, axis = 3) # Shape of test_img = (1, 128, 256) - return test_img - -# Define grid points -x_points = np.linspace(0, 255, 256) -y_points = np.linspace(0, 127, 128) -X, Y = np.meshgrid(x_points, y_points) - -def return_quiver_plot(u, v): - velocity = np.sqrt(u**2 + v**2) - ax = plt.subplot() - ax.imshow(velocity, origin = 'lower', extent = (0,256, 0,128), cmap = 'gray') - q = ax.quiver(X[5::8,5::8], Y[5::8,5::8], u[5::8,5::8], u[5::8,5::8], pivot = 'middle', color = 'red') - # ax.quiverkey(q, X=0.9, Y=1.05, U=2, - # label='m/s', labelpos='E') - # plt.title("Velocity distribution") - # plt.show() - return q - -def squeeze_function(img): - img = np.squeeze(img, axis = 0) - img = np.squeeze(img, axis = 2) - return img - -# Taking a shape from the user on sketchpad and placing it inside the fluid flow - - -h, w = 48, 48 # patch_size in which the obstacle will be drawn -x0, y0 = 64, 128 # (x0, y0) = center of channel - -def fill_shape_with_pixels(img): #img is taken by gradio as uint8 - if img is None: - return np.zeros((h, w), dtype=np.uint8) # "No input sketch" -# Calling the the flooded image function to fill inside the obstacle - flooded_image = img_preprocess(img, h, w) -# Performing patch statching to put the obstacle at the required center position - test_img = patch_stiching(flooded_image, h, w, x0, y0) - -# Loading and Compiling the Model - model_path = "Pinns_Loss_file.h5" - model = load_model(model_path, compile = False) - model.compile(loss=custom_loss, optimizer=tf.keras.optimizers.AdamW(learning_rate = 0.0001), metrics=['mae', 'cosine_proximity']) - - # Making Model prediction from input sketch shape - prediction = model.predict(test_img) # (prediction.shape = (1, 128, 256, 3)) - u_pred, v_pred, p_pred = np.split(prediction, 3, axis=3) # shape of u_pred, v_pred, p_pred = (1, 128, 256, 1) - - # De-Normalizing teh Data: - u_pred = ((u_pred*std_dev_u) + mean_u) - v_pred = ((v_pred*std_dev_v) + mean_v) - p_pred = ((p_pred*std_dev_p) + mean_p) - - # Making test_img in shape required by zero_pixel_location - req_img = squeeze_function(test_img) - -# Storing the location of 0 pixel values - #req_img = req_img.astype(int) - zero_pixel_locations = np.argwhere(req_img == 0) - -# Reducing the dimensions- - u_profile = u_pred[0][:,:,0] # shape of u profile to compatible shape (H, W) = (128, 256) - v_profile = v_pred[0][:,:,0] - p_profile = p_pred[0][:,:,0] - p_profile[p_profile>0.02] = 0.02 - -# Creating a copy of the above profiles- - u_profile_dash = np.copy(u_profile) - v_profile_dash = np.copy(v_profile) - -# Creating a copy of the above profiles- - u_profile_dash_1 = np.copy(u_profile) - v_profile_dash_1 = np.copy(v_profile) - - -# Hollowing the obstacle out from the u and v plots. 
Origin of imae is lop left and origin of plot is top right - for y, x in zero_pixel_locations: - u_profile_dash[128 - y, x] = 0 - v_profile_dash[128 - y, x] = 0 - # will be used for image - u_profile_dash_1[y, x] = 0 - v_profile_dash_1[y, x] = 0 - - -# Quiver Plot - quiver_plot = plt.figure(figsize = (14,6), edgecolor = "gray") - velocity = np.sqrt(u_profile_dash_1**2 + v_profile_dash_1**2) - ax = plt.subplot() - ax.imshow(velocity, cmap = 'gray', extent = (0,256, 0,128)) - q = ax.quiver(X[5::7,5::7], Y[5::7,5::7], u_profile_dash[5::7,5::7], v_profile_dash[5::7,5::7], pivot = 'middle', color = 'red') - ax.quiverkey(q, X=0.9, Y=1.07, U=2, - label='m/s', labelpos='E') - plt.title("Velocity distribution", fontsize = 11) - plt.xlabel("Length of Channel", fontsize = 11) - plt.ylabel("Height of Channel", fontsize = 11) - - # StreamLine Plot - streamline_plot = plt.figure(figsize = (14,6), edgecolor = "gray") - plt.streamplot(X, Y, u_profile_dash, v_profile_dash, density = 4) - plt.axis('scaled') - plt.title("Streamline Plot", fontsize = 11) - plt.xlabel("Length of Channel", fontsize = 11) - plt.ylabel("Height of Channel", fontsize = 11) - - # Colorize taken from ZoeDepth Model - u_colored = colorize(u_profile, cmap = 'jet') - #cbar_u = plt.colorbar(u_profile,fraction=0.025, pad=0.05) - v_colored = colorize(v_profile, cmap = 'jet') - #cbar_v = plt.colorbar(v_colored,fraction=0.025, pad=0.05) - p_colored = colorize(p_profile, cmap = 'jet') - #cbar_p = plt.colorbar(p_colored,fraction=0.025, pad=0.05) - - - return colorize(req_img, cmap = 'jet'), quiver_plot, streamline_plot, u_colored, v_colored, p_colored - -# Importing gr.Blocks() - -with gr.Blocks(theme="Taithrah/Minimal") as demo: - gr.Markdown( - """ - # Channel Flow - Physics Constrained DNN for Predicting Mean Turbulent Flows - The App solves 2-D incompressible steady state NS equations for any given 2-D closed geometry. 
Geometry needs to be drawn around the center of the patch.\n - It predicts the streamlines,horizontal & vertical velocity profiles and the pressure profiles using a hybrid loss function.\n - Model Parameters (In SI Units) - Kinematic Viscosity = 0.0001, Input horizontal velocity = 0.075, Input vertical velocity = 0 - """) - with gr.Row(): - with gr.Column(): - input_sketch = gr.Image(label = "Draw any Obstacle contour around the patch center", - tool="sketch", source="canvas", shape=(h, w), brush_radius = 3) - Process_button = gr.Button("Process Flow Parameters") - - with gr.Column(): - filled_channel = gr.Image(label = "Drawn object within fluid domain of dimensions 128*256", container = True) - - with gr.Row(): - quiver_plot = gr.Plot(label = "Velocity Distribution Around The Obstacle", scale = 2) - - with gr.Row(): - streamline_plot = gr.Plot(label = "Stream Lines Around The Obstacle", scale = 2) - - with gr.Row(): - u_image = gr.Image(label = "Horizontal Velocity") - v_image = gr.Image(label = "Vertical Velocity") - p_image = gr.Image(label = "Pressure") - - - Process_button.click(fn=fill_shape_with_pixels, inputs=input_sketch, outputs=[filled_channel, quiver_plot, streamline_plot, u_image, v_image, p_image]) - -demo.launch(debug=True, inline = False) \ No newline at end of file diff --git a/spaces/kukuhtw/AutoGPT/autogpt/agent/agent.py b/spaces/kukuhtw/AutoGPT/autogpt/agent/agent.py deleted file mode 100644 index ee7885f8844022597321fa6b492430ec34c0d6b9..0000000000000000000000000000000000000000 --- a/spaces/kukuhtw/AutoGPT/autogpt/agent/agent.py +++ /dev/null @@ -1,197 +0,0 @@ -from colorama import Fore, Style - -from autogpt.app import execute_command, get_command -from autogpt.chat import chat_with_ai, create_chat_message -from autogpt.config import Config -from autogpt.json_utils.json_fix_llm import fix_json_using_multiple_techniques -from autogpt.json_utils.utilities import validate_json -from autogpt.logs import logger, print_assistant_thoughts -from autogpt.speech import say_text -from autogpt.spinner import Spinner -from autogpt.utils import clean_input - - -class Agent: - """Agent class for interacting with Auto-GPT. - - Attributes: - ai_name: The name of the agent. - memory: The memory object to use. - full_message_history: The full message history. - next_action_count: The number of actions to execute. - system_prompt: The system prompt is the initial prompt that defines everything the AI needs to know to achieve its task successfully. - Currently, the dynamic and customizable information in the system prompt are ai_name, description and goals. - - triggering_prompt: The last sentence the AI will see before answering. For Auto-GPT, this prompt is: - Determine which next command to use, and respond using the format specified above: - The triggering prompt is not part of the system prompt because between the system prompt and the triggering - prompt we have contextual information that can distract the AI and make it forget that its goal is to find the next task to achieve. 
- SYSTEM PROMPT - CONTEXTUAL INFORMATION (memory, previous conversations, anything relevant) - TRIGGERING PROMPT - - The triggering prompt reminds the AI about its short term meta task (defining the next task) - """ - - def __init__( - self, - ai_name, - memory, - full_message_history, - next_action_count, - system_prompt, - triggering_prompt, - ): - self.ai_name = ai_name - self.memory = memory - self.full_message_history = full_message_history - self.next_action_count = next_action_count - self.system_prompt = system_prompt - self.triggering_prompt = triggering_prompt - - def start_interaction_loop(self): - # Interaction Loop - cfg = Config() - loop_count = 0 - command_name = None - arguments = None - user_input = "" - - while True: - # Discontinue if continuous limit is reached - loop_count += 1 - if ( - cfg.continuous_mode - and cfg.continuous_limit > 0 - and loop_count > cfg.continuous_limit - ): - logger.typewriter_log( - "Continuous Limit Reached: ", Fore.YELLOW, f"{cfg.continuous_limit}" - ) - break - - # Send message to AI, get response - with Spinner("Thinking... "): - assistant_reply = chat_with_ai( - self.system_prompt, - self.triggering_prompt, - self.full_message_history, - self.memory, - cfg.fast_token_limit, - ) # TODO: This hardcodes the model to use GPT3.5. Make this an argument - - assistant_reply_json = fix_json_using_multiple_techniques(assistant_reply) - - # Print Assistant thoughts - if assistant_reply_json != {}: - validate_json(assistant_reply_json, "llm_response_format_1") - # Get command name and arguments - try: - print_assistant_thoughts(self.ai_name, assistant_reply_json) - command_name, arguments = get_command(assistant_reply_json) - # command_name, arguments = assistant_reply_json_valid["command"]["name"], assistant_reply_json_valid["command"]["args"] - if cfg.speak_mode: - say_text(f"I want to execute {command_name}") - except Exception as e: - logger.error("Error: \n", str(e)) - - if not cfg.continuous_mode and self.next_action_count == 0: - ### GET USER AUTHORIZATION TO EXECUTE COMMAND ### - # Get key press: Prompt the user to press enter to continue or escape - # to exit - logger.typewriter_log( - "NEXT ACTION: ", - Fore.CYAN, - f"COMMAND = {Fore.CYAN}{command_name}{Style.RESET_ALL} " - f"ARGUMENTS = {Fore.CYAN}{arguments}{Style.RESET_ALL}", - ) - print( - "Enter 'y' to authorise command, 'y -N' to run N continuous " - "commands, 'n' to exit program, or enter feedback for " - f"{self.ai_name}...", - flush=True, - ) - while True: - console_input = clean_input( - Fore.MAGENTA + "Input:" + Style.RESET_ALL - ) - if console_input.lower().strip() == "y": - user_input = "GENERATE NEXT COMMAND JSON" - break - elif console_input.lower().strip() == "": - print("Invalid input format.") - continue - elif console_input.lower().startswith("y -"): - try: - self.next_action_count = abs( - int(console_input.split(" ")[1]) - ) - user_input = "GENERATE NEXT COMMAND JSON" - except ValueError: - print( - "Invalid input format. Please enter 'y -n' where n is" - " the number of continuous tasks." 
- ) - continue - break - elif console_input.lower() == "n": - user_input = "EXIT" - break - else: - user_input = console_input - command_name = "human_feedback" - break - - if user_input == "GENERATE NEXT COMMAND JSON": - logger.typewriter_log( - "-=-=-=-=-=-=-= COMMAND AUTHORISED BY USER -=-=-=-=-=-=-=", - Fore.MAGENTA, - "", - ) - elif user_input == "EXIT": - print("Exiting...", flush=True) - break - else: - # Print command - logger.typewriter_log( - "NEXT ACTION: ", - Fore.CYAN, - f"COMMAND = {Fore.CYAN}{command_name}{Style.RESET_ALL}" - f" ARGUMENTS = {Fore.CYAN}{arguments}{Style.RESET_ALL}", - ) - - # Execute command - if command_name is not None and command_name.lower().startswith("error"): - result = ( - f"Command {command_name} threw the following error: {arguments}" - ) - elif command_name == "human_feedback": - result = f"Human feedback: {user_input}" - else: - result = ( - f"Command {command_name} returned: " - f"{execute_command(command_name, arguments)}" - ) - if self.next_action_count > 0: - self.next_action_count -= 1 - - memory_to_add = ( - f"Assistant Reply: {assistant_reply} " - f"\nResult: {result} " - f"\nHuman Feedback: {user_input} " - ) - - self.memory.add(memory_to_add) - - # Check if there's a result from the command append it to the message - # history - if result is not None: - self.full_message_history.append(create_chat_message("system", result)) - logger.typewriter_log("SYSTEM: ", Fore.YELLOW, result) - else: - self.full_message_history.append( - create_chat_message("system", "Unable to execute command") - ) - logger.typewriter_log( - "SYSTEM: ", Fore.YELLOW, "Unable to execute command" - ) diff --git a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/PIL/__init__.py b/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/PIL/__init__.py deleted file mode 100644 index 32d2381f3c26ef15ed8a0c0071202aed68bf4f32..0000000000000000000000000000000000000000 --- a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/PIL/__init__.py +++ /dev/null @@ -1,85 +0,0 @@ -"""Pillow (Fork of the Python Imaging Library) - -Pillow is the friendly PIL fork by Jeffrey A. Clark (Alex) and contributors. - https://github.com/python-pillow/Pillow/ - -Pillow is forked from PIL 1.1.7. - -PIL is the Python Imaging Library by Fredrik Lundh and contributors. -Copyright (c) 1999 by Secret Labs AB. - -Use PIL.__version__ for this Pillow version. - -;-) -""" - -from . import _version - -# VERSION was removed in Pillow 6.0.0. -# PILLOW_VERSION was removed in Pillow 9.0.0. -# Use __version__ instead. 
-__version__ = _version.__version__ -del _version - - -_plugins = [ - "BlpImagePlugin", - "BmpImagePlugin", - "BufrStubImagePlugin", - "CurImagePlugin", - "DcxImagePlugin", - "DdsImagePlugin", - "EpsImagePlugin", - "FitsImagePlugin", - "FitsStubImagePlugin", - "FliImagePlugin", - "FpxImagePlugin", - "FtexImagePlugin", - "GbrImagePlugin", - "GifImagePlugin", - "GribStubImagePlugin", - "Hdf5StubImagePlugin", - "IcnsImagePlugin", - "IcoImagePlugin", - "ImImagePlugin", - "ImtImagePlugin", - "IptcImagePlugin", - "JpegImagePlugin", - "Jpeg2KImagePlugin", - "McIdasImagePlugin", - "MicImagePlugin", - "MpegImagePlugin", - "MpoImagePlugin", - "MspImagePlugin", - "PalmImagePlugin", - "PcdImagePlugin", - "PcxImagePlugin", - "PdfImagePlugin", - "PixarImagePlugin", - "PngImagePlugin", - "PpmImagePlugin", - "PsdImagePlugin", - "QoiImagePlugin", - "SgiImagePlugin", - "SpiderImagePlugin", - "SunImagePlugin", - "TgaImagePlugin", - "TiffImagePlugin", - "WebPImagePlugin", - "WmfImagePlugin", - "XbmImagePlugin", - "XpmImagePlugin", - "XVThumbImagePlugin", -] - - -class UnidentifiedImageError(OSError): - """ - Raised in :py:meth:`PIL.Image.open` if an image cannot be opened and identified. - - If a PNG image raises this error, setting :data:`.ImageFile.LOAD_TRUNCATED_IMAGES` - to true may allow the image to be opened after all. The setting will ignore missing - data and checksum failures. - """ - - pass diff --git a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/fontTools/encodings/codecs.py b/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/fontTools/encodings/codecs.py deleted file mode 100644 index 3ac0268d6a11a1be99bb2cf7fde5979da2853d4a..0000000000000000000000000000000000000000 --- a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/fontTools/encodings/codecs.py +++ /dev/null @@ -1,135 +0,0 @@ -"""Extend the Python codecs module with a few encodings that are used in OpenType (name table) -but missing from Python. 
See https://github.com/fonttools/fonttools/issues/236 for details.""" - -import codecs -import encodings - - -class ExtendCodec(codecs.Codec): - def __init__(self, name, base_encoding, mapping): - self.name = name - self.base_encoding = base_encoding - self.mapping = mapping - self.reverse = {v: k for k, v in mapping.items()} - self.max_len = max(len(v) for v in mapping.values()) - self.info = codecs.CodecInfo( - name=self.name, encode=self.encode, decode=self.decode - ) - codecs.register_error(name, self.error) - - def _map(self, mapper, output_type, exc_type, input, errors): - base_error_handler = codecs.lookup_error(errors) - length = len(input) - out = output_type() - while input: - # first try to use self.error as the error handler - try: - part = mapper(input, self.base_encoding, errors=self.name) - out += part - break # All converted - except exc_type as e: - # else convert the correct part, handle error as requested and continue - out += mapper(input[: e.start], self.base_encoding, self.name) - replacement, pos = base_error_handler(e) - out += replacement - input = input[pos:] - return out, length - - def encode(self, input, errors="strict"): - return self._map(codecs.encode, bytes, UnicodeEncodeError, input, errors) - - def decode(self, input, errors="strict"): - return self._map(codecs.decode, str, UnicodeDecodeError, input, errors) - - def error(self, e): - if isinstance(e, UnicodeDecodeError): - for end in range(e.start + 1, e.end + 1): - s = e.object[e.start : end] - if s in self.mapping: - return self.mapping[s], end - elif isinstance(e, UnicodeEncodeError): - for end in range(e.start + 1, e.start + self.max_len + 1): - s = e.object[e.start : end] - if s in self.reverse: - return self.reverse[s], end - e.encoding = self.name - raise e - - -_extended_encodings = { - "x_mac_japanese_ttx": ( - "shift_jis", - { - b"\xFC": chr(0x007C), - b"\x7E": chr(0x007E), - b"\x80": chr(0x005C), - b"\xA0": chr(0x00A0), - b"\xFD": chr(0x00A9), - b"\xFE": chr(0x2122), - b"\xFF": chr(0x2026), - }, - ), - "x_mac_trad_chinese_ttx": ( - "big5", - { - b"\x80": chr(0x005C), - b"\xA0": chr(0x00A0), - b"\xFD": chr(0x00A9), - b"\xFE": chr(0x2122), - b"\xFF": chr(0x2026), - }, - ), - "x_mac_korean_ttx": ( - "euc_kr", - { - b"\x80": chr(0x00A0), - b"\x81": chr(0x20A9), - b"\x82": chr(0x2014), - b"\x83": chr(0x00A9), - b"\xFE": chr(0x2122), - b"\xFF": chr(0x2026), - }, - ), - "x_mac_simp_chinese_ttx": ( - "gb2312", - { - b"\x80": chr(0x00FC), - b"\xA0": chr(0x00A0), - b"\xFD": chr(0x00A9), - b"\xFE": chr(0x2122), - b"\xFF": chr(0x2026), - }, - ), -} - -_cache = {} - - -def search_function(name): - name = encodings.normalize_encoding(name) # Rather undocumented... - if name in _extended_encodings: - if name not in _cache: - base_encoding, mapping = _extended_encodings[name] - assert name[-4:] == "_ttx" - # Python 2 didn't have any of the encodings that we are implementing - # in this file. Python 3 added aliases for the East Asian ones, mapping - # them "temporarily" to the same base encoding as us, with a comment - # suggesting that full implementation will appear some time later. - # As such, try the Python version of the x_mac_... first, if that is found, - # use *that* as our base encoding. This would make our encoding upgrade - # to the full encoding when and if Python finally implements that. 
- # http://bugs.python.org/issue24041 - base_encodings = [name[:-4], base_encoding] - for base_encoding in base_encodings: - try: - codecs.lookup(base_encoding) - except LookupError: - continue - _cache[name] = ExtendCodec(name, base_encoding, mapping) - break - return _cache[name].info - - return None - - -codecs.register(search_function) diff --git a/spaces/leslyarun/grammar_correction/README.md b/spaces/leslyarun/grammar_correction/README.md deleted file mode 100644 index 8ce37ccd3792957536e54f43dcaf047d628f16cb..0000000000000000000000000000000000000000 --- a/spaces/leslyarun/grammar_correction/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Grammar Correction -emoji: 💩 -colorFrom: red -colorTo: blue -sdk: gradio -sdk_version: 3.6 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/lewiswu1209/MockingBird/encoder/params_data.py b/spaces/lewiswu1209/MockingBird/encoder/params_data.py deleted file mode 100644 index bdb1716ed45617f2b127a7fb8885afe6cc74fb71..0000000000000000000000000000000000000000 --- a/spaces/lewiswu1209/MockingBird/encoder/params_data.py +++ /dev/null @@ -1,29 +0,0 @@ - -## Mel-filterbank -mel_window_length = 25 # In milliseconds -mel_window_step = 10 # In milliseconds -mel_n_channels = 40 - - -## Audio -sampling_rate = 16000 -# Number of spectrogram frames in a partial utterance -partials_n_frames = 160 # 1600 ms -# Number of spectrogram frames at inference -inference_n_frames = 80 # 800 ms - - -## Voice Activation Detection -# Window size of the VAD. Must be either 10, 20 or 30 milliseconds. -# This sets the granularity of the VAD. Should not need to be changed. -vad_window_length = 30 # In milliseconds -# Number of frames to average together when performing the moving average smoothing. -# The larger this value, the larger the VAD variations must be to not get smoothed out. -vad_moving_average_width = 8 -# Maximum number of consecutive silent frames a segment can have. -vad_max_silence_length = 6 - - -## Audio volume normalization -audio_norm_target_dBFS = -30 - diff --git a/spaces/lincquiQcaudo/Top-20-Diffusion/Crystal Impact Match V2.0.23 _BEST_.md b/spaces/lincquiQcaudo/Top-20-Diffusion/Crystal Impact Match V2.0.23 _BEST_.md deleted file mode 100644 index b9235d463fe3f8c2faf79ecccb84873b4ff443e5..0000000000000000000000000000000000000000 --- a/spaces/lincquiQcaudo/Top-20-Diffusion/Crystal Impact Match V2.0.23 _BEST_.md +++ /dev/null @@ -1,6 +0,0 @@ -

      Crystal Impact Match v2.0.23


      DOWNLOADhttps://bytlly.com/2uGwZi



      -
- 4fefd39f24
      -
      -
      -

      diff --git a/spaces/lincquiQcaudo/Top-20-Diffusion/Dgflick Album Xpress 8.0 Crack !!EXCLUSIVE!! 46.md b/spaces/lincquiQcaudo/Top-20-Diffusion/Dgflick Album Xpress 8.0 Crack !!EXCLUSIVE!! 46.md deleted file mode 100644 index b944aebd9cd433bb5b1c9fa1a55e80dffb53dd77..0000000000000000000000000000000000000000 --- a/spaces/lincquiQcaudo/Top-20-Diffusion/Dgflick Album Xpress 8.0 Crack !!EXCLUSIVE!! 46.md +++ /dev/null @@ -1,157 +0,0 @@ -
      -

      DgFlick Album Xpress Pro 8 Crack: How to Create Amazing Photo Albums

      - -

      If you are looking for a software that can help you design and edit your own photo albums, you might want to check out DgFlick Album Xpress Pro 8. This is a powerful and easy-to-use application that lets you create stunning photo albums in minutes. You can also download DgFlick Album Xpress Pro 8 crack for free and enjoy all the features without any limitations.

      - -

      What is DgFlick Album Xpress Pro 8?

      - -

      DgFlick Album Xpress Pro 8 is a photo album software that allows you to create professional-looking albums with your own photos. You can choose from hundreds of templates and presets, or create your own from scratch. You can also edit your photos with a comprehensive image editor that has loads of tools and effects. You can drag and drop your photos into your albums, arrange them in any way you want, and customize the layout, background, borders, text, and more. You can also preview your albums before printing or exporting them.

      -

      dgflick album xpress 8.0 crack 46


      DOWNLOAD ✔✔✔ https://bytlly.com/2uGwx6



      - -

      Why do you need DgFlick Album Xpress Pro 8 crack?

      - -

      DgFlick Album Xpress Pro 8 is a premium software that costs $199 for a single license. However, you can download DgFlick Album Xpress Pro 8 crack for free and use it without any restrictions. With DgFlick Album Xpress Pro 8 crack, you can access all the features and functions of the software, such as:

      - -
        -
      • Create unlimited albums with unlimited pages
      • -
      • Use all the templates and presets available
      • -
      • Edit your photos with advanced tools and effects
      • -
      • Save your albums in various formats, such as JPG, PDF, PSD, etc.
      • -
      • Print your albums with high quality and resolution
      • -
      • Share your albums online or on social media
      • -
      - -

      How to download and install DgFlick Album Xpress Pro 8 crack?

      - -

      If you want to download and install DgFlick Album Xpress Pro 8 crack, you can follow these simple steps:

      - -
        -
      1. Click on the link below to download DgFlick Album Xpress Pro 8 crack file.
      2. -
      3. Extract the file using WinRAR or any other extraction tool.
      4. -
      5. Run the setup file and follow the instructions to install the software.
      6. -
      7. Copy the crack file and paste it into the installation folder.
      8. -
      9. Launch the software and enjoy creating amazing photo albums.
      10. -
      - -

      DgFlick Album Xpress Pro 8 crack link: https://fancli.com/292751

      - -

      Conclusion

      - -

      DgFlick Album Xpress Pro 8 is a great software for anyone who wants to create beautiful photo albums with their own photos. You can download DgFlick Album Xpress Pro 8 crack for free and use it without any limitations. You can also check out some other photo editing software on our website. We hope you found this article helpful and informative. If you have any questions or feedback, feel free to leave a comment below.

      -

      How to use DgFlick Album Xpress Pro 8 crack?

      - -

      Using DgFlick Album Xpress Pro 8 crack is very simple and intuitive. You can follow these steps to create your own photo albums:

      - -
        -
      1. Launch the software and select a project type. You can choose from album, book, calendar, collage, or passport.
      2. -
      3. Select a size and orientation for your project. You can also customize the margins, bleed, and spine.
      4. -
      5. Select a cover for your project. You can choose from various designs and materials, or create your own cover.
      6. -
      7. Add your photos to your project. You can import them from your computer, camera, or scanner. You can also use the auto-fill option to automatically arrange your photos.
      8. -
      9. Edit your photos if needed. You can crop, rotate, resize, adjust color, brightness, contrast, and more. You can also apply filters, frames, stickers, and text to your photos.
      10. -
      11. Arrange your photos on your pages. You can drag and drop them, swap them, align them, and more. You can also change the background, border, and layout of your pages.
      12. -
      13. Preview your project and make any final changes. You can zoom in and out, check the quality, and add comments.
      14. -
      15. Save your project and export it in your preferred format. You can also print it or share it online.
      16. -
      - -

      DgFlick Album Xpress Pro 8 crack is a versatile and user-friendly software that can help you create amazing photo albums in minutes. You can download it for free and enjoy all its features without any limitations.

      - -

      Frequently Asked Questions about DgFlick Album Xpress Pro 8 crack

      - -

      Here are some common questions and answers about DgFlick Album Xpress Pro 8 crack:

      -

      - -

      Is DgFlick Album Xpress Pro 8 crack safe to use?

      - -

      Yes, DgFlick Album Xpress Pro 8 crack is safe to use as long as you download it from a reliable source. However, you should always scan any file you download with an antivirus software before opening it.

      - -

      Is DgFlick Album Xpress Pro 8 crack legal to use?

      - -

      No, DgFlick Album Xpress Pro 8 crack is not legal to use as it violates the terms and conditions of the original software. We do not condone or encourage the use of cracked software. This article is for educational purposes only.

      - -

      What are the system requirements for DgFlick Album Xpress Pro 8 crack?

      - -

      The system requirements for DgFlick Album Xpress Pro 8 crack are:

      - -
        -
      • Operating System: Windows XP/Vista/7/8/8.1/10
      • -
      • Memory (RAM): 1 GB of RAM required.
      • -
      • Hard Disk Space: 1 GB of free space required.
      • -
      • Processor: 2.8 GHz Intel Pentium 4 or later.
      • -
      - -

      Where can I get more templates and presets for DgFlick Album Xpress Pro 8 crack?

      - -

      You can get more templates and presets for DgFlick Album Xpress Pro 8 crack from the official website of DgFlick or from other online sources. However, you should be careful about the quality and compatibility of the files you download.

      - -

      How can I contact the support team of DgFlick Album Xpress Pro 8 crack?

      - -

      You cannot contact the support team of DgFlick Album Xpress Pro 8 crack as it is an unofficial version of the software. If you have any issues or questions about the software, you should refer to the online forums or blogs where you downloaded it from.

      -

      What are the advantages of using DgFlick Album Xpress Pro 8 crack?

      - -

      Using DgFlick Album Xpress Pro 8 crack has many advantages over other photo album software. Some of them are:

      - -
        -
      • It is fast and easy to use. You can create your photo albums in minutes with just a few clicks.
      • -
      • It has a large collection of templates and presets that suit any occasion and style. You can also create your own templates and presets.
      • -
      • It has a powerful image editor that can enhance your photos with various tools and effects. You can also add filters, frames, stickers, and text to your photos.
      • -
      • It has a flexible layout system that allows you to customize your pages according to your preferences. You can also change the background, border, and layout of your pages.
      • -
      • It has a high-quality output that can print or export your albums in various formats and resolutions. You can also share your albums online or on social media.
      • -
      - -

      What are the disadvantages of using DgFlick Album Xpress Pro 8 crack?

      - -

      Using DgFlick Album Xpress Pro 8 crack also has some disadvantages that you should be aware of. Some of them are:

      - -
        -
      • It is illegal to use as it violates the terms and conditions of the original software. You may face legal consequences if you use it.
      • -
      • It may not be compatible with the latest updates and features of the original software. You may miss out on some new functions and improvements.
      • -
      • It may contain viruses or malware that can harm your computer or data. You should always scan any file you download with an antivirus software before opening it.
      • -
      • It may not have any technical support or customer service. If you have any issues or questions about the software, you may not get any help or guidance.
      • -
      - -

      Is there an alternative to DgFlick Album Xpress Pro 8 crack?

      - -

      If you are looking for an alternative to DgFlick Album Xpress Pro 8 crack, you may want to try some other photo album software that are similar or better than it. Some of them are:

      - -
        -
      • MAGIX Photo Manager 17: This is a photo management software that can help you organize, edit, and share your photos. You can also create photo albums, slideshows, collages, and more with this software.
      • -
      • Flip PDF Professional: This is a PDF conversion software that can help you create digital photo albums from your PDF files. You can also add multimedia elements, such as audio, video, animation, and more to your albums.
      • -
      • Photo Collage Maker: This is a photo collage software that can help you create stunning photo collages with your photos. You can also add frames, backgrounds, stickers, text, and more to your collages.
      • -
      - -

      DgFlick Album Xpress Pro 8 crack is a great software for creating amazing photo albums with your own photos. However, you should be careful about using it as it is illegal and risky. You can also try some other photo album software that are legal and safe to use.

      -

      How to create a photo album with DgFlick Album Xpress Pro 8 crack?

      - -

      Creating a photo album with DgFlick Album Xpress Pro 8 crack is very easy and fun. You can follow these steps to create your own photo album:

      - -
        -
      1. Launch the software and select a project type. You can choose from album, book, calendar, collage, or passport.
      2. -
      3. Select a size and orientation for your project. You can also customize the margins, bleed, and spine.
      4. -
      5. Select a cover for your project. You can choose from various designs and materials, or create your own cover.
      6. -
      7. Add your photos to your project. You can import them from your computer, camera, or scanner. You can also use the auto-fill option to automatically arrange your photos.
      8. -
      9. Edit your photos if needed. You can crop, rotate, resize, adjust color, brightness, contrast, and more. You can also apply filters, frames, stickers, and text to your photos.
      10. -
      11. Arrange your photos on your pages. You can drag and drop them, swap them, align them, and more. You can also change the background, border, and layout of your pages.
      12. -
      13. Preview your project and make any final changes. You can zoom in and out, check the quality, and add comments.
      14. -
      15. Save your project and export it in your preferred format. You can also print it or share it online.
      16. -
      - -

      DgFlick Album Xpress Pro 8 crack is a versatile and user-friendly software that can help you create amazing photo albums in minutes. You can download it for free and enjoy all its features without any limitations.

      - -

      What are the tips and tricks for using DgFlick Album Xpress Pro 8 crack?

      - -

      Using DgFlick Album Xpress Pro 8 crack can be even more fun and easy if you know some tips and tricks for using it. Here are some of them:

      - -
        -
      • You can use the shortcut keys to perform various actions faster and easier. For example, you can use Ctrl+Z to undo, Ctrl+Y to redo, Ctrl+C to copy, Ctrl+V to paste, Ctrl+A to select all, etc.
      • -
      • You can use the right-click menu to access various options and commands for your photos and pages. For example, you can right-click on a photo to edit it, rotate it, crop it, swap it, etc.
      • -
      • You can use the zoom slider to adjust the zoom level of your project. You can also use the mouse wheel to zoom in and out.
      • -
      • You can use the quality indicator to check the quality of your photos and pages. The indicator shows green for high quality, yellow for medium quality, and red for low quality.
      • -
      • You can use the comment tool to add comments to your photos and pages. You can also view and edit the comments later.
      • -
      - -

      DgFlick Album Xpress Pro 8 crack is a great software for creating amazing photo albums with your own photos. However, you should be careful about using it as it is illegal and risky. You can also try some other photo album software that are legal and safe to use.

      -

      Conclusion

      - -

      DgFlick Album Xpress Pro 8 crack is a powerful and easy-to-use software that can help you create stunning photo albums with your own photos. You can download it for free and use it without any limitations. However, you should be aware of the disadvantages and risks of using cracked software. It is illegal, unsafe, and unsupported. You may face legal consequences, viruses, malware, compatibility issues, and lack of technical support. Therefore, we recommend you to use the original software or some other legal and safe alternatives. We hope you found this article helpful and informative. If you have any questions or feedback, feel free to leave a comment below.

      3cee63e6c2
      -
      -
      \ No newline at end of file diff --git a/spaces/lincquiQcaudo/Top-20-Diffusion/ESET PureFix V202exe.md b/spaces/lincquiQcaudo/Top-20-Diffusion/ESET PureFix V202exe.md deleted file mode 100644 index 02ff02f834cc4a3f88d702ec2701bf11f928f71f..0000000000000000000000000000000000000000 --- a/spaces/lincquiQcaudo/Top-20-Diffusion/ESET PureFix V202exe.md +++ /dev/null @@ -1,6 +0,0 @@ -

      ESET PureFix V202exe


      Download File »»» https://bytlly.com/2uGwDE



      -
      -25 Mar 2018 . ESET PureFix V2.02.exe Foo. eset purefix eset purefix 2017 eset purefix download eset purefix for eset 9 eset purefix v2.03 eset purefix for eset. 4d29de3e1b
      -
      -
      -

      diff --git a/spaces/lincquiQcaudo/Top-20-Diffusion/Fisiologia Vegetal Salisbury Pdf Descargar.md b/spaces/lincquiQcaudo/Top-20-Diffusion/Fisiologia Vegetal Salisbury Pdf Descargar.md deleted file mode 100644 index 0bfac1fa7940bc1a7b6e2f5bfc1201edde027683..0000000000000000000000000000000000000000 --- a/spaces/lincquiQcaudo/Top-20-Diffusion/Fisiologia Vegetal Salisbury Pdf Descargar.md +++ /dev/null @@ -1,11 +0,0 @@ -
      -

      Electro-química y. salivares suban fisiologia vegetal salisbury pdf descargar jai. el paso iniciante y mxico por el. salisbury colección realizado en marzo de 2006 anualment. oficios, Salisbury, J. (1990). Materials and methods. And, Fried, M. A. L., Knoester.

      -

      fisiologia vegetal salisbury pdf descargar


      Download File === https://bytlly.com/2uGwiS



      -

      . Fisiologia Vegetal Salisbury Pdf Descargar 1:09. 3 years ago 1:09. Play Later. Play Later. Lists. Like. Liked. 1:09. Cardemil, L. (Eds) Fisiologa Vegetal.. Salisbury, J., Green, M., Hunt, C. & Campbell, J. (2008) Coastal acidification by rivers: A.

      -

      E. Mancera, E. Rejmankova, J.E. Salisbury, and E. Weil. 2004.. Curso Eco- Fisiologa. carbn vegetal y cultivos como la pia que se. webpers baf94a4655 https://trello.com/c/RyyJVPli/28-descargar-the-contact-. -fisiologia-vegetal-salisbury-scargar-zip-full-edition-ebook.

      -

      Aireal edicion fisiologia veg salisbury pdf descargar. Ciencia y tecnología. Enero de 2014. 102p. L. Salisbury y otras. L. Salisbury and others. L. Salisbury y otras.. Salisbury, J., Green, M., Hunt, C. & Campbell, J. (2008) Coastal.

      -

      -

      Determinando la presencia de dyes, saborear puede ser de gran utilidad para determinar los gustos y su aceptación. Puedes ocupar alguno de los materiales que toca utilizar, tambin hacer escritos a mano; si utilizas un pincel, debes ocupar una materia. La fisiologia vegetal mientras que la fisiologia animal se hace en el animal vivo.

      -

      Salisbury, F. B.; Ross, C. W. Plant physiology. 4.ed. (mol L1) (Salisbury & Ross, 1991).. A SALISBURY, F. B.; ROSS, C. W. Plant physiology. 4.ed. (mol L1) (Salisbury & Ross, 1991).. Salisbury, F. B.; Ross, C. W. Plant physiology.

      899543212b
      -
      -
      \ No newline at end of file diff --git a/spaces/lincquiQcaudo/Top-20-Diffusion/Handycafe Wifi Hotspot Crack.md b/spaces/lincquiQcaudo/Top-20-Diffusion/Handycafe Wifi Hotspot Crack.md deleted file mode 100644 index 2f9e00b38199d72a03a1eb59b573dfdd265ca8fa..0000000000000000000000000000000000000000 --- a/spaces/lincquiQcaudo/Top-20-Diffusion/Handycafe Wifi Hotspot Crack.md +++ /dev/null @@ -1,6 +0,0 @@ -

      Handycafe Wifi Hotspot Crack


      Download File 🗸 https://bytlly.com/2uGxB7



      - -The Wi-Fi hotspot provides an internet connection for your mobile device. ... (You can find the VG Serial Number on the bottom of the Vehicle Gateway or in the ... 4d29de3e1b
      -
      -
      -

      diff --git a/spaces/lincquiQcaudo/Top-20-Diffusion/High Quality Download Roms Sega Model 3 40l.md b/spaces/lincquiQcaudo/Top-20-Diffusion/High Quality Download Roms Sega Model 3 40l.md deleted file mode 100644 index a1febacbc18a2a25a2cc7ed6510c42697c287ef4..0000000000000000000000000000000000000000 --- a/spaces/lincquiQcaudo/Top-20-Diffusion/High Quality Download Roms Sega Model 3 40l.md +++ /dev/null @@ -1,20 +0,0 @@ -

      Download Roms Sega Model 3 40l


      Download File === https://bytlly.com/2uGx2P



      -
      -All of the Sega Model 3 ROM games can be located by simply searching on Google. This manual will teach you the basics, like how to access the ROMs, select the roms from the menu and view the roms using GENS. The Sega Model 3 is a color video game console that was released in Japan in September 1992. I was able to access the roms and save. This is a ROM dump of Sega Model 3 (M3) game Puck Boy in Japanese. Just click to download the full ROMs, or watch the video below to watch the emulation process. The Sega Model 3 Game Manual. All the games are in the ROM format with a total of at least 5 MB size. Select one or more games in the menu. ROM Hacking Without Gens.Effect of Clostridium perfringens enterotoxin on carbohydrate metabolism and membrane fluidity of hamster small intestine in vivo. - -A micro-dissection technique using a stereomicroscope was used to examine the short-term effects of Clostridium perfringens enterotoxin (CPE) on the intestinal wall of hamsters in vivo. The intestine was divided into three segments, i.e. proximal, mid and distal, and samples were taken from each segment. Four hamsters were injected intraperitoneally with CPE (8 microg/g body weight) and one hamster was injected with phosphate buffered saline as a control. At each sampling point, samples were taken from three different villi. The levels of cyclic AMP in the segments were increased only in the proximal segment of the intestine of the CPE-treated hamsters. The levels of plasma membrane fluidity in the villi of the mid-segment of the intestine were increased in the CPE-treated hamsters. The rate of glucose transport into the intestinal segments in the CPE-treated hamsters was also increased, compared with that in the control hamsters. These results show that CPE affects various intestinal functions and suggests that the effect of CPE on intestinal function is not limited to the muscle layer but involves also the mucosa layer.require "minitest/autorun" - -require "coveralls" - -require "coveralls/minitest" - -class TestPullRequestAnalytics < Minitest::Test - - include Coveralls::Minitest::Test - - def test_pull_request_analytics - - user = create_user 4fefd39f24
      -
      -
      -

      diff --git a/spaces/liuyuan-pal/SyncDreamer/ldm/thirdp/psp/id_loss.py b/spaces/liuyuan-pal/SyncDreamer/ldm/thirdp/psp/id_loss.py deleted file mode 100644 index e08ee095bd20ff664dcf470de15ff54f839b38e2..0000000000000000000000000000000000000000 --- a/spaces/liuyuan-pal/SyncDreamer/ldm/thirdp/psp/id_loss.py +++ /dev/null @@ -1,23 +0,0 @@ -# https://github.com/eladrich/pixel2style2pixel -import torch -from torch import nn -from ldm.thirdp.psp.model_irse import Backbone - - -class IDFeatures(nn.Module): - def __init__(self, model_path): - super(IDFeatures, self).__init__() - print('Loading ResNet ArcFace') - self.facenet = Backbone(input_size=112, num_layers=50, drop_ratio=0.6, mode='ir_se') - self.facenet.load_state_dict(torch.load(model_path, map_location="cpu")) - self.face_pool = torch.nn.AdaptiveAvgPool2d((112, 112)) - self.facenet.eval() - - def forward(self, x, crop=False): - # Not sure of the image range here - if crop: - x = torch.nn.functional.interpolate(x, (256, 256), mode="area") - x = x[:, :, 35:223, 32:220] - x = self.face_pool(x) - x_feats = self.facenet(x) - return x_feats diff --git a/spaces/livekhh/formal_project/app.py b/spaces/livekhh/formal_project/app.py deleted file mode 100644 index 1013016ce643be191ad0aff652c220f0cc73ef85..0000000000000000000000000000000000000000 --- a/spaces/livekhh/formal_project/app.py +++ /dev/null @@ -1,16 +0,0 @@ -# Load model directly -import gradio as gr -from transformers import AutoTokenizer, AutoModelForSequenceClassification, pipeline - -tokenizer = AutoTokenizer.from_pretrained("j5ng/kcbert-formal-classifier") -model = AutoModelForSequenceClassification.from_pretrained("j5ng/kcbert-formal-classifier") - -formal_classifier = pipeline(task="text-classification", model=model, tokenizer=tokenizer) - - -def greet(name): - return formal_classifier(name) - -print("test") -iface = gr.Interface(fn=greet, inputs="text", outputs="text") -iface.launch() \ No newline at end of file diff --git a/spaces/lnyan/stablediffusion-infinity/app.py b/spaces/lnyan/stablediffusion-infinity/app.py deleted file mode 100644 index 4787ee6f906dbab3eb2dc440d61847fd7a362751..0000000000000000000000000000000000000000 --- a/spaces/lnyan/stablediffusion-infinity/app.py +++ /dev/null @@ -1,1059 +0,0 @@ -import subprocess -# import os.path as osp -import pip -# pip.main(["install","-v","-U","git+https://github.com/facebookresearch/xformers.git@main#egg=xformers"]) -# subprocess.check_call("pip install -v -U git+https://github.com/facebookresearch/xformers.git@main#egg=xformers", cwd=osp.dirname(__file__), shell=True) - -import io -import base64 -import os -import os - -import sys - -import numpy as np -import torch -from torch import autocast -import diffusers -from diffusers.configuration_utils import FrozenDict -from diffusers import ( - StableDiffusionPipeline, - StableDiffusionInpaintPipeline, - StableDiffusionImg2ImgPipeline, - StableDiffusionInpaintPipelineLegacy, - DDIMScheduler, - LMSDiscreteScheduler, - StableDiffusionUpscalePipeline, - DPMSolverMultistepScheduler -) -from diffusers.models import AutoencoderKL -from PIL import Image -from PIL import ImageOps -import gradio as gr -import base64 -import skimage -import skimage.measure -import yaml -import json -from enum import Enum - -try: - abspath = os.path.abspath(__file__) - dirname = os.path.dirname(abspath) - os.chdir(dirname) -except: - pass - -from utils import * - -# assert diffusers.__version__ >= "0.6.0", "Please upgrade diffusers to 0.6.0" - -USE_NEW_DIFFUSERS = True -RUN_IN_SPACE = "RUN_IN_HG_SPACE" 
in os.environ - - -class ModelChoice(Enum): - INPAINTING = "stablediffusion-inpainting" - INPAINTING_IMG2IMG = "stablediffusion-inpainting+img2img-v1.5" - MODEL_1_5 = "stablediffusion-v1.5" - MODEL_1_4 = "stablediffusion-v1.4" - - -try: - from sd_grpcserver.pipeline.unified_pipeline import UnifiedPipeline -except: - UnifiedPipeline = StableDiffusionInpaintPipeline - -# sys.path.append("./glid_3_xl_stable") - -USE_GLID = False -# try: -# from glid3xlmodel import GlidModel -# except: -# USE_GLID = False - -try: - cuda_available = torch.cuda.is_available() -except: - cuda_available = False -finally: - if sys.platform == "darwin": - device = "mps" if torch.backends.mps.is_available() else "cpu" - elif cuda_available: - device = "cuda" - else: - device = "cpu" - -import contextlib - -autocast = contextlib.nullcontext - -with open("config.yaml", "r") as yaml_in: - yaml_object = yaml.safe_load(yaml_in) - config_json = json.dumps(yaml_object) - - -def load_html(): - body, canvaspy = "", "" - with open("index.html", encoding="utf8") as f: - body = f.read() - with open("canvas.py", encoding="utf8") as f: - canvaspy = f.read() - body = body.replace("- paths:\n", "") - body = body.replace(" - ./canvas.py\n", "") - body = body.replace("from canvas import InfCanvas", canvaspy) - return body - - -def test(x): - x = load_html() - return f"""""" - - -DEBUG_MODE = False - -try: - SAMPLING_MODE = Image.Resampling.LANCZOS -except Exception as e: - SAMPLING_MODE = Image.LANCZOS - -try: - contain_func = ImageOps.contain -except Exception as e: - - def contain_func(image, size, method=SAMPLING_MODE): - # from PIL: https://pillow.readthedocs.io/en/stable/reference/ImageOps.html#PIL.ImageOps.contain - im_ratio = image.width / image.height - dest_ratio = size[0] / size[1] - if im_ratio != dest_ratio: - if im_ratio > dest_ratio: - new_height = int(image.height / image.width * size[0]) - if new_height != size[1]: - size = (size[0], new_height) - else: - new_width = int(image.width / image.height * size[1]) - if new_width != size[0]: - size = (new_width, size[1]) - return image.resize(size, resample=method) - - -import argparse - -parser = argparse.ArgumentParser(description="stablediffusion-infinity") -parser.add_argument("--port", type=int, help="listen port", dest="server_port") -parser.add_argument("--host", type=str, help="host", dest="server_name") -parser.add_argument("--share", action="store_true", help="share this app?") -parser.add_argument("--debug", action="store_true", help="debug mode") -parser.add_argument("--fp32", action="store_true", help="using full precision") -parser.add_argument("--encrypt", action="store_true", help="using https?") -parser.add_argument("--ssl_keyfile", type=str, help="path to ssl_keyfile") -parser.add_argument("--ssl_certfile", type=str, help="path to ssl_certfile") -parser.add_argument("--ssl_keyfile_password", type=str, help="ssl_keyfile_password") -parser.add_argument( - "--auth", nargs=2, metavar=("username", "password"), help="use username password" -) -parser.add_argument( - "--remote_model", - type=str, - help="use a model (e.g. 
dreambooth fined) from huggingface hub", - default="", -) -parser.add_argument( - "--local_model", type=str, help="use a model stored on your PC", default="" -) - -if __name__ == "__main__" and not RUN_IN_SPACE: - args = parser.parse_args() -else: - args = parser.parse_args() -# args = parser.parse_args(["--debug"]) -if args.auth is not None: - args.auth = tuple(args.auth) - -model = {} - - -def get_token(): - token = "" - if os.path.exists(".token"): - with open(".token", "r") as f: - token = f.read() - token = os.environ.get("hftoken", token) - return token - - -def save_token(token): - with open(".token", "w") as f: - f.write(token) - - -def prepare_scheduler(scheduler): - if hasattr(scheduler.config, "steps_offset") and scheduler.config.steps_offset != 1: - new_config = dict(scheduler.config) - new_config["steps_offset"] = 1 - scheduler._internal_dict = FrozenDict(new_config) - return scheduler - - -def my_resize(width, height): - if width >= 512 and height >= 512: - return width, height - if width == height: - return 512, 512 - smaller = min(width, height) - larger = max(width, height) - if larger >= 608: - return width, height - factor = 1 - if smaller < 290: - factor = 2 - elif smaller < 330: - factor = 1.75 - elif smaller < 384: - factor = 1.375 - elif smaller < 400: - factor = 1.25 - elif smaller < 450: - factor = 1.125 - return int(factor * width)//8*8, int(factor * height)//8*8 - - -def load_learned_embed_in_clip( - learned_embeds_path, text_encoder, tokenizer, token=None -): - # https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb - loaded_learned_embeds = torch.load(learned_embeds_path, map_location="cpu") - - # separate token and the embeds - trained_token = list(loaded_learned_embeds.keys())[0] - embeds = loaded_learned_embeds[trained_token] - - # cast to dtype of text_encoder - dtype = text_encoder.get_input_embeddings().weight.dtype - embeds.to(dtype) - - # add the token in tokenizer - token = token if token is not None else trained_token - num_added_tokens = tokenizer.add_tokens(token) - if num_added_tokens == 0: - raise ValueError( - f"The tokenizer already contains the token {token}. Please pass a different `token` that is not already in the tokenizer." 
- ) - - # resize the token embeddings - text_encoder.resize_token_embeddings(len(tokenizer)) - - # get the id for the token and assign the embeds - token_id = tokenizer.convert_tokens_to_ids(token) - text_encoder.get_input_embeddings().weight.data[token_id] = embeds - - -scheduler_dict = {"PLMS": None, "DDIM": None, "K-LMS": None, "DPM": None} - - -class StableDiffusionInpaint: - def __init__( - self, token: str = "", model_name: str = "", model_path: str = "", **kwargs, - ): - self.token = token - original_checkpoint = False - if model_path and os.path.exists(model_path): - if model_path.endswith(".ckpt"): - original_checkpoint = True - elif model_path.endswith(".json"): - model_name = os.path.dirname(model_path) - else: - model_name = model_path - vae = AutoencoderKL.from_pretrained("stabilityai/sd-vae-ft-mse") - vae.to(torch.float16) - if original_checkpoint: - print(f"Converting & Loading {model_path}") - from convert_checkpoint import convert_checkpoint - - pipe = convert_checkpoint(model_path, inpainting=True) - if device == "cuda": - pipe.to(torch.float16) - inpaint = StableDiffusionInpaintPipeline( - vae=vae, - text_encoder=pipe.text_encoder, - tokenizer=pipe.tokenizer, - unet=pipe.unet, - scheduler=pipe.scheduler, - safety_checker=pipe.safety_checker, - feature_extractor=pipe.feature_extractor, - ) - else: - print(f"Loading {model_name}") - if device == "cuda": - inpaint = StableDiffusionInpaintPipeline.from_pretrained( - model_name, - revision="fp16", - torch_dtype=torch.float16, - use_auth_token=token, - vae=vae - ) - else: - inpaint = StableDiffusionInpaintPipeline.from_pretrained( - model_name, use_auth_token=token, - ) - if os.path.exists("./embeddings"): - print("Note that StableDiffusionInpaintPipeline + embeddings is untested") - for item in os.listdir("./embeddings"): - if item.endswith(".bin"): - load_learned_embed_in_clip( - os.path.join("./embeddings", item), - inpaint.text_encoder, - inpaint.tokenizer, - ) - inpaint.to(device) - # try: - # inpaint.vae=torch.compile(inpaint.vae, dynamic=True) - # inpaint.unet=torch.compile(inpaint.unet, dynamic=True) - # except Exception as e: - # print(e) - # inpaint.enable_xformers_memory_efficient_attention() - # if device == "mps": - # _ = text2img("", num_inference_steps=1) - scheduler_dict["PLMS"] = inpaint.scheduler - scheduler_dict["DDIM"] = prepare_scheduler( - DDIMScheduler( - beta_start=0.00085, - beta_end=0.012, - beta_schedule="scaled_linear", - clip_sample=False, - set_alpha_to_one=False, - ) - ) - scheduler_dict["K-LMS"] = prepare_scheduler( - LMSDiscreteScheduler( - beta_start=0.00085, beta_end=0.012, beta_schedule="scaled_linear" - ) - ) - scheduler_dict["DPM"] = prepare_scheduler( - DPMSolverMultistepScheduler.from_config(inpaint.scheduler.config) - ) - self.safety_checker = inpaint.safety_checker - save_token(token) - try: - total_memory = torch.cuda.get_device_properties(0).total_memory // ( - 1024 ** 3 - ) - if total_memory <= 5: - inpaint.enable_attention_slicing() - except: - pass - self.inpaint = inpaint - - def run( - self, - image_pil, - prompt="", - negative_prompt="", - guidance_scale=7.5, - resize_check=True, - enable_safety=True, - fill_mode="patchmatch", - strength=0.75, - step=50, - enable_img2img=False, - use_seed=False, - seed_val=-1, - generate_num=1, - scheduler="", - scheduler_eta=0.0, - **kwargs, - ): - inpaint = self.inpaint - selected_scheduler = scheduler_dict.get(scheduler, scheduler_dict["PLMS"]) - for item in [inpaint]: - item.scheduler = selected_scheduler - if enable_safety: - 
item.safety_checker = self.safety_checker - else: - item.safety_checker = lambda images, **kwargs: (images, None) - width, height = image_pil.size - sel_buffer = np.array(image_pil) - img = sel_buffer[:, :, 0:3] - mask = sel_buffer[:, :, -1] - nmask = 255 - mask - process_width = width - process_height = height - if resize_check: - process_width, process_height = my_resize(width, height) - process_width=process_width*8//8 - process_height=process_height*8//8 - extra_kwargs = { - "num_inference_steps": step, - "guidance_scale": guidance_scale, - "eta": scheduler_eta, - } - if USE_NEW_DIFFUSERS: - extra_kwargs["negative_prompt"] = negative_prompt - extra_kwargs["num_images_per_prompt"] = generate_num - if use_seed: - generator = torch.Generator(inpaint.device).manual_seed(seed_val) - extra_kwargs["generator"] = generator - if True: - img, mask = functbl[fill_mode](img, mask) - mask = 255 - mask - mask = skimage.measure.block_reduce(mask, (8, 8), np.max) - mask = mask.repeat(8, axis=0).repeat(8, axis=1) - extra_kwargs["strength"] = strength - inpaint_func = inpaint - init_image = Image.fromarray(img) - mask_image = Image.fromarray(mask) - # mask_image=mask_image.filter(ImageFilter.GaussianBlur(radius = 8)) - if True: - images = inpaint_func( - prompt=prompt, - image=init_image.resize( - (process_width, process_height), resample=SAMPLING_MODE - ), - mask_image=mask_image.resize((process_width, process_height)), - width=process_width, - height=process_height, - **extra_kwargs, - )["images"] - return images - - -class StableDiffusion: - def __init__( - self, - token: str = "", - model_name: str = "runwayml/stable-diffusion-v1-5", - model_path: str = None, - inpainting_model: bool = False, - **kwargs, - ): - self.token = token - original_checkpoint = False - vae = AutoencoderKL.from_pretrained("stabilityai/sd-vae-ft-mse") - vae.to(torch.float16) - if model_path and os.path.exists(model_path): - if model_path.endswith(".ckpt"): - original_checkpoint = True - elif model_path.endswith(".json"): - model_name = os.path.dirname(model_path) - else: - model_name = model_path - if original_checkpoint: - print(f"Converting & Loading {model_path}") - from convert_checkpoint import convert_checkpoint - - text2img = convert_checkpoint(model_path) - if device == "cuda" and not args.fp32: - text2img.to(torch.float16) - else: - print(f"Loading {model_name}") - if device == "cuda" and not args.fp32: - text2img = StableDiffusionPipeline.from_pretrained( - "runwayml/stable-diffusion-v1-5", - revision="fp16", - torch_dtype=torch.float16, - use_auth_token=token, - vae=vae - ) - else: - text2img = StableDiffusionPipeline.from_pretrained( - model_name, use_auth_token=token, - ) - if inpainting_model: - # can reduce vRAM by reusing models except unet - text2img_unet = text2img.unet - del text2img.vae - del text2img.text_encoder - del text2img.tokenizer - del text2img.scheduler - del text2img.safety_checker - del text2img.feature_extractor - import gc - - gc.collect() - if device == "cuda": - inpaint = StableDiffusionInpaintPipeline.from_pretrained( - "runwayml/stable-diffusion-inpainting", - revision="fp16", - torch_dtype=torch.float16, - use_auth_token=token, - vae=vae - ).to(device) - else: - inpaint = StableDiffusionInpaintPipeline.from_pretrained( - "runwayml/stable-diffusion-inpainting", use_auth_token=token, - ).to(device) - text2img_unet.to(device) - del text2img - gc.collect() - text2img = StableDiffusionPipeline( - vae=inpaint.vae, - text_encoder=inpaint.text_encoder, - tokenizer=inpaint.tokenizer, - 
unet=text2img_unet, - scheduler=inpaint.scheduler, - safety_checker=inpaint.safety_checker, - feature_extractor=inpaint.feature_extractor, - ) - else: - inpaint = StableDiffusionInpaintPipelineLegacy( - vae=text2img.vae, - text_encoder=text2img.text_encoder, - tokenizer=text2img.tokenizer, - unet=text2img.unet, - scheduler=text2img.scheduler, - safety_checker=text2img.safety_checker, - feature_extractor=text2img.feature_extractor, - ).to(device) - text_encoder = text2img.text_encoder - tokenizer = text2img.tokenizer - if os.path.exists("./embeddings"): - for item in os.listdir("./embeddings"): - if item.endswith(".bin"): - load_learned_embed_in_clip( - os.path.join("./embeddings", item), - text2img.text_encoder, - text2img.tokenizer, - ) - text2img.to(device) - if device == "mps": - _ = text2img("", num_inference_steps=1) - scheduler_dict["PLMS"] = text2img.scheduler - scheduler_dict["DDIM"] = prepare_scheduler( - DDIMScheduler( - beta_start=0.00085, - beta_end=0.012, - beta_schedule="scaled_linear", - clip_sample=False, - set_alpha_to_one=False, - ) - ) - scheduler_dict["K-LMS"] = prepare_scheduler( - LMSDiscreteScheduler( - beta_start=0.00085, beta_end=0.012, beta_schedule="scaled_linear" - ) - ) - scheduler_dict["DPM"] = prepare_scheduler( - DPMSolverMultistepScheduler.from_config(text2img.scheduler.config) - ) - self.safety_checker = text2img.safety_checker - img2img = StableDiffusionImg2ImgPipeline( - vae=text2img.vae, - text_encoder=text2img.text_encoder, - tokenizer=text2img.tokenizer, - unet=text2img.unet, - scheduler=text2img.scheduler, - safety_checker=text2img.safety_checker, - feature_extractor=text2img.feature_extractor, - ).to(device) - save_token(token) - try: - total_memory = torch.cuda.get_device_properties(0).total_memory // ( - 1024 ** 3 - ) - if total_memory <= 5: - inpaint.enable_attention_slicing() - except: - pass - self.text2img = text2img - self.inpaint = inpaint - self.img2img = img2img - self.unified = UnifiedPipeline( - vae=text2img.vae, - text_encoder=text2img.text_encoder, - tokenizer=text2img.tokenizer, - unet=text2img.unet, - scheduler=text2img.scheduler, - safety_checker=text2img.safety_checker, - feature_extractor=text2img.feature_extractor, - ).to(device) - self.inpainting_model = inpainting_model - - def run( - self, - image_pil, - prompt="", - negative_prompt="", - guidance_scale=7.5, - resize_check=True, - enable_safety=True, - fill_mode="patchmatch", - strength=0.75, - step=50, - enable_img2img=False, - use_seed=False, - seed_val=-1, - generate_num=1, - scheduler="", - scheduler_eta=0.0, - **kwargs, - ): - text2img, inpaint, img2img, unified = ( - self.text2img, - self.inpaint, - self.img2img, - self.unified, - ) - selected_scheduler = scheduler_dict.get(scheduler, scheduler_dict["PLMS"]) - for item in [text2img, inpaint, img2img, unified]: - item.scheduler = selected_scheduler - if enable_safety: - item.safety_checker = self.safety_checker - else: - item.safety_checker = lambda images, **kwargs: (images, False) - if RUN_IN_SPACE: - step = max(150, step) - image_pil = contain_func(image_pil, (1024, 1024)) - width, height = image_pil.size - sel_buffer = np.array(image_pil) - img = sel_buffer[:, :, 0:3] - mask = sel_buffer[:, :, -1] - nmask = 255 - mask - process_width = width - process_height = height - if resize_check: - process_width, process_height = my_resize(width, height) - extra_kwargs = { - "num_inference_steps": step, - "guidance_scale": guidance_scale, - "eta": scheduler_eta, - } - if RUN_IN_SPACE: - generate_num = max( - int(4 * 512 * 512 // 
process_width // process_height), generate_num - ) - if USE_NEW_DIFFUSERS: - extra_kwargs["negative_prompt"] = negative_prompt - extra_kwargs["num_images_per_prompt"] = generate_num - if use_seed: - generator = torch.Generator(text2img.device).manual_seed(seed_val) - extra_kwargs["generator"] = generator - if nmask.sum() < 1 and enable_img2img: - init_image = Image.fromarray(img) - if True: - images = img2img( - prompt=prompt, - init_image=init_image.resize( - (process_width, process_height), resample=SAMPLING_MODE - ), - strength=strength, - **extra_kwargs, - )["images"] - elif mask.sum() > 0: - if fill_mode == "g_diffuser" and not self.inpainting_model: - mask = 255 - mask - mask = mask[:, :, np.newaxis].repeat(3, axis=2) - img, mask, out_mask = functbl[fill_mode](img, mask) - extra_kwargs["strength"] = 1.0 - extra_kwargs["out_mask"] = Image.fromarray(out_mask) - inpaint_func = unified - else: - img, mask = functbl[fill_mode](img, mask) - mask = 255 - mask - mask = skimage.measure.block_reduce(mask, (8, 8), np.max) - mask = mask.repeat(8, axis=0).repeat(8, axis=1) - extra_kwargs["strength"] = strength - inpaint_func = inpaint - init_image = Image.fromarray(img) - mask_image = Image.fromarray(mask) - # mask_image=mask_image.filter(ImageFilter.GaussianBlur(radius = 8)) - if True: - input_image = init_image.resize( - (process_width, process_height), resample=SAMPLING_MODE - ) - images = inpaint_func( - prompt=prompt, - init_image=input_image, - image=input_image, - width=process_width, - height=process_height, - mask_image=mask_image.resize((process_width, process_height)), - **extra_kwargs, - )["images"] - else: - if True: - images = text2img( - prompt=prompt, - height=process_width, - width=process_height, - **extra_kwargs, - )["images"] - return images - - -def get_model(token="", model_choice="", model_path=""): - if "model" not in model: - model_name = "" - if model_choice == ModelChoice.INPAINTING.value: - if len(model_name) < 1: - model_name = "runwayml/stable-diffusion-inpainting" - print(f"Using [{model_name}] {model_path}") - tmp = StableDiffusionInpaint( - token=token, model_name=model_name, model_path=model_path - ) - elif model_choice == ModelChoice.INPAINTING_IMG2IMG.value: - print( - f"Note that {ModelChoice.INPAINTING_IMG2IMG.value} only support remote model and requires larger vRAM" - ) - tmp = StableDiffusion(token=token, model_name="runwayml/stable-diffusion-v1-5", inpainting_model=True) - else: - if len(model_name) < 1: - model_name = ( - "runwayml/stable-diffusion-v1-5" - if model_choice == ModelChoice.MODEL_1_5.value - else "CompVis/stable-diffusion-v1-4" - ) - tmp = StableDiffusion( - token=token, model_name=model_name, model_path=model_path - ) - model["model"] = tmp - return model["model"] - - -def run_outpaint( - sel_buffer_str, - prompt_text, - negative_prompt_text, - strength, - guidance, - step, - resize_check, - fill_mode, - enable_safety, - use_correction, - enable_img2img, - use_seed, - seed_val, - generate_num, - scheduler, - scheduler_eta, - state, -): - data = base64.b64decode(str(sel_buffer_str)) - pil = Image.open(io.BytesIO(data)) - width, height = pil.size - sel_buffer = np.array(pil) - cur_model = get_model() - images = cur_model.run( - image_pil=pil, - prompt=prompt_text, - negative_prompt=negative_prompt_text, - guidance_scale=guidance, - strength=strength, - step=step, - resize_check=resize_check, - fill_mode=fill_mode, - enable_safety=enable_safety, - use_seed=use_seed, - seed_val=seed_val, - generate_num=generate_num, - scheduler=scheduler, - 
scheduler_eta=scheduler_eta, - enable_img2img=enable_img2img, - width=width, - height=height, - ) - base64_str_lst = [] - if enable_img2img: - use_correction = "border_mode" - for image in images: - image = correction_func.run(pil.resize(image.size), image, mode=use_correction) - resized_img = image.resize((width, height), resample=SAMPLING_MODE,) - out = sel_buffer.copy() - out[:, :, 0:3] = np.array(resized_img) - out[:, :, -1] = 255 - out_pil = Image.fromarray(out) - out_buffer = io.BytesIO() - out_pil.save(out_buffer, format="PNG") - out_buffer.seek(0) - base64_bytes = base64.b64encode(out_buffer.read()) - base64_str = base64_bytes.decode("ascii") - base64_str_lst.append(base64_str) - return ( - gr.Textbox(label=str(state + 1), value=",".join(base64_str_lst),), - gr.Textbox(label="Prompt"), - state + 1, - ) - - -def load_js(name): - if name in ["export", "commit", "undo"]: - return f""" -function (x) -{{ - let app=document.querySelector("gradio-app"); - app=app.shadowRoot??app; - let frame=app.querySelector("#sdinfframe").contentWindow.document; - let button=frame.querySelector("#{name}"); - button.click(); - return x; -}} -""" - ret = "" - with open(f"./js/{name}.js", "r") as f: - ret = f.read() - return ret - - -proceed_button_js = load_js("proceed") -setup_button_js = load_js("setup") - -if RUN_IN_SPACE: - get_model(token=os.environ.get("hftoken", ""), model_choice=ModelChoice.INPAINTING.value) - -blocks = gr.Blocks( - title="StableDiffusion-Infinity", - css=""" -.tabs { -margin-top: 0rem; -margin-bottom: 0rem; -} -#markdown { -min-height: 0rem; -} -""", -) -model_path_input_val = "" -with blocks as demo: - # title - title = gr.Markdown( - """ - **stablediffusion-infinity**: Outpainting with Stable Diffusion on an infinite canvas: [https://github.com/lkwq007/stablediffusion-infinity](https://github.com/lkwq007/stablediffusion-infinity) \[[Open In Colab](https://colab.research.google.com/github/lkwq007/stablediffusion-infinity/blob/master/stablediffusion_infinity_colab.ipynb)\] \[[Setup Locally](https://github.com/lkwq007/stablediffusion-infinity/blob/master/docs/setup_guide.md)\] - """, - elem_id="markdown", - ) - # frame - frame = gr.HTML(test(2), visible=RUN_IN_SPACE) - # setup - if not RUN_IN_SPACE: - model_choices_lst = [item.value for item in ModelChoice] - if args.local_model: - model_path_input_val = args.local_model - # model_choices_lst.insert(0, "local_model") - elif args.remote_model: - model_path_input_val = args.remote_model - # model_choices_lst.insert(0, "remote_model") - with gr.Row(elem_id="setup_row"): - with gr.Column(scale=4, min_width=350): - token = gr.Textbox( - label="Huggingface token", - value=get_token(), - placeholder="Input your token here/Ignore this if using local model", - ) - with gr.Column(scale=3, min_width=320): - model_selection = gr.Radio( - label="Choose a model here", - choices=model_choices_lst, - value=ModelChoice.INPAINTING.value, - ) - with gr.Column(scale=1, min_width=100): - canvas_width = gr.Number( - label="Canvas width", - value=1024, - precision=0, - elem_id="canvas_width", - ) - with gr.Column(scale=1, min_width=100): - canvas_height = gr.Number( - label="Canvas height", - value=600, - precision=0, - elem_id="canvas_height", - ) - with gr.Column(scale=1, min_width=100): - selection_size = gr.Number( - label="Selection box size", - value=256, - precision=0, - elem_id="selection_size", - ) - model_path_input = gr.Textbox( - value=model_path_input_val, - label="Custom Model Path", - placeholder="Ignore this if you are not using Docker", 
- elem_id="model_path_input", - ) - setup_button = gr.Button("Click to Setup (may take a while)", variant="primary") - with gr.Row(): - with gr.Column(scale=3, min_width=270): - init_mode = gr.Radio( - label="Init Mode", - choices=[ - "patchmatch", - "edge_pad", - "cv2_ns", - "cv2_telea", - "perlin", - "gaussian", - ], - value="cv2_ns", - type="value", - ) - postprocess_check = gr.Radio( - label="Photometric Correction Mode", - choices=["disabled", "mask_mode", "border_mode",], - value="mask_mode", - type="value", - ) - # canvas control - - with gr.Column(scale=3, min_width=270): - sd_prompt = gr.Textbox( - label="Prompt", placeholder="input your prompt here!", lines=2 - ) - sd_negative_prompt = gr.Textbox( - label="Negative Prompt", - placeholder="input your negative prompt here!", - lines=2, - ) - with gr.Column(scale=2, min_width=150): - with gr.Group(): - with gr.Row(): - sd_generate_num = gr.Number( - label="Sample number", value=1, precision=0 - ) - sd_strength = gr.Slider( - label="Strength", - minimum=0.0, - maximum=1.0, - value=0.75, - step=0.01, - ) - with gr.Row(): - sd_scheduler = gr.Dropdown( - list(scheduler_dict.keys()), label="Scheduler", value="DPM" - ) - sd_scheduler_eta = gr.Number(label="Eta", value=0.0) - with gr.Column(scale=1, min_width=80): - sd_step = gr.Number(label="Step", value=25, precision=0) - sd_guidance = gr.Number(label="Guidance", value=7.5) - - proceed_button = gr.Button("Proceed", elem_id="proceed", visible=DEBUG_MODE) - xss_js = load_js("xss").replace("\n", " ") - xss_html = gr.HTML( - value=f""" - """, - visible=False, - ) - xss_keyboard_js = load_js("keyboard").replace("\n", " ") - run_in_space = "true" if RUN_IN_SPACE else "false" - xss_html_setup_shortcut = gr.HTML( - value=f""" - """, - visible=False, - ) - # sd pipeline parameters - sd_img2img = gr.Checkbox(label="Enable Img2Img", value=False, visible=False) - sd_resize = gr.Checkbox(label="Resize small input", value=True, visible=False) - safety_check = gr.Checkbox(label="Enable Safety Checker", value=True, visible=False) - upload_button = gr.Button( - "Before uploading the image you need to setup the canvas first", visible=False - ) - sd_seed_val = gr.Number(label="Seed", value=0, precision=0, visible=False) - sd_use_seed = gr.Checkbox(label="Use seed", value=False, visible=False) - model_output = gr.Textbox(visible=DEBUG_MODE, elem_id="output", label="0") - model_input = gr.Textbox(visible=DEBUG_MODE, elem_id="input", label="Input") - upload_output = gr.Textbox(visible=DEBUG_MODE, elem_id="upload", label="0") - model_output_state = gr.State(value=0) - upload_output_state = gr.State(value=0) - cancel_button = gr.Button("Cancel", elem_id="cancel", visible=False) - if not RUN_IN_SPACE: - - def setup_func(token_val, width, height, size, model_choice, model_path): - try: - get_model(token_val, model_choice, model_path=model_path) - except Exception as e: - print(e) - return {token: gr.update(value=str(e))} - return { - token: gr.update(visible=False), - canvas_width: gr.update(visible=False), - canvas_height: gr.update(visible=False), - selection_size: gr.update(visible=False), - setup_button: gr.update(visible=False), - frame: gr.update(visible=True), - upload_button: gr.update(value="Upload Image"), - model_selection: gr.update(visible=False), - model_path_input: gr.update(visible=False), - } - - setup_button.click( - fn=setup_func, - inputs=[ - token, - canvas_width, - canvas_height, - selection_size, - model_selection, - model_path_input, - ], - outputs=[ - token, - canvas_width, - 
canvas_height, - selection_size, - setup_button, - frame, - upload_button, - model_selection, - model_path_input, - ], - _js=setup_button_js, - ) - - proceed_event = proceed_button.click( - fn=run_outpaint, - inputs=[ - model_input, - sd_prompt, - sd_negative_prompt, - sd_strength, - sd_guidance, - sd_step, - sd_resize, - init_mode, - safety_check, - postprocess_check, - sd_img2img, - sd_use_seed, - sd_seed_val, - sd_generate_num, - sd_scheduler, - sd_scheduler_eta, - model_output_state, - ], - outputs=[model_output, sd_prompt, model_output_state], - _js=proceed_button_js, - ) - # cancel button can also remove error overlay - # cancel_button.click(fn=None, inputs=None, outputs=None, cancels=[proceed_event]) - - -launch_extra_kwargs = { - "show_error": True, - # "favicon_path": "" -} -launch_kwargs = vars(args) -launch_kwargs = {k: v for k, v in launch_kwargs.items() if v is not None} -launch_kwargs.pop("remote_model", None) -launch_kwargs.pop("local_model", None) -launch_kwargs.pop("fp32", None) -launch_kwargs.update(launch_extra_kwargs) -try: - import google.colab - - launch_kwargs["debug"] = True -except: - pass - -if RUN_IN_SPACE: - demo.launch() -elif args.debug: - launch_kwargs["server_name"] = "0.0.0.0" - demo.queue().launch(**launch_kwargs) -else: - demo.queue().launch(**launch_kwargs) - diff --git a/spaces/longlian/llm-grounded-diffusion/models/transformer_2d.py b/spaces/longlian/llm-grounded-diffusion/models/transformer_2d.py deleted file mode 100644 index 097069e47f9a2e0e579c389cbd0e28b2e7e6f182..0000000000000000000000000000000000000000 --- a/spaces/longlian/llm-grounded-diffusion/models/transformer_2d.py +++ /dev/null @@ -1,367 +0,0 @@ -# Copyright 2023 The HuggingFace Team. All rights reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. -from dataclasses import dataclass -from typing import Any, Dict, Optional - -import torch -import torch.nn.functional as F -from torch import nn - -from diffusers.configuration_utils import ConfigMixin, register_to_config -from diffusers.models.embeddings import ImagePositionalEmbeddings -from diffusers.utils import BaseOutput, deprecate -from .attention import BasicTransformerBlock -from diffusers.models.embeddings import PatchEmbed -from diffusers.models.modeling_utils import ModelMixin - - -@dataclass -class Transformer2DModelOutput(BaseOutput): - """ - Args: - sample (`torch.FloatTensor` of shape `(batch_size, num_channels, height, width)` or `(batch size, num_vector_embeds - 1, num_latent_pixels)` if [`Transformer2DModel`] is discrete): - Hidden states conditioned on `encoder_hidden_states` input. If discrete, returns probability distributions - for the unnoised latent pixels. - """ - - sample: torch.FloatTensor - - -class Transformer2DModel(ModelMixin, ConfigMixin): - """ - Transformer model for image-like data. Takes either discrete (classes of vector embeddings) or continuous (actual - embeddings) inputs. - - When input is continuous: First, project the input (aka embedding) and reshape to b, t, d. 
Then apply standard - transformer action. Finally, reshape to image. - - When input is discrete: First, input (classes of latent pixels) is converted to embeddings and has positional - embeddings applied, see `ImagePositionalEmbeddings`. Then apply standard transformer action. Finally, predict - classes of unnoised image. - - Note that it is assumed one of the input classes is the masked latent pixel. The predicted classes of the unnoised - image do not contain a prediction for the masked pixel as the unnoised image cannot be masked. - - Parameters: - num_attention_heads (`int`, *optional*, defaults to 16): The number of heads to use for multi-head attention. - attention_head_dim (`int`, *optional*, defaults to 88): The number of channels in each head. - in_channels (`int`, *optional*): - Pass if the input is continuous. The number of channels in the input and output. - num_layers (`int`, *optional*, defaults to 1): The number of layers of Transformer blocks to use. - dropout (`float`, *optional*, defaults to 0.0): The dropout probability to use. - cross_attention_dim (`int`, *optional*): The number of encoder_hidden_states dimensions to use. - sample_size (`int`, *optional*): Pass if the input is discrete. The width of the latent images. - Note that this is fixed at training time as it is used for learning a number of position embeddings. See - `ImagePositionalEmbeddings`. - num_vector_embeds (`int`, *optional*): - Pass if the input is discrete. The number of classes of the vector embeddings of the latent pixels. - Includes the class for the masked latent pixel. - activation_fn (`str`, *optional*, defaults to `"geglu"`): Activation function to be used in feed-forward. - num_embeds_ada_norm ( `int`, *optional*): Pass if at least one of the norm_layers is `AdaLayerNorm`. - The number of diffusion steps used during training. Note that this is fixed at training time as it is used - to learn a number of embeddings that are added to the hidden states. During inference, you can denoise for - up to but not more than steps than `num_embeds_ada_norm`. - attention_bias (`bool`, *optional*): - Configure if the TransformerBlocks' attention should contain a bias parameter. - """ - - @register_to_config - def __init__( - self, - num_attention_heads: int = 16, - attention_head_dim: int = 88, - in_channels: Optional[int] = None, - out_channels: Optional[int] = None, - num_layers: int = 1, - dropout: float = 0.0, - norm_num_groups: int = 32, - cross_attention_dim: Optional[int] = None, - attention_bias: bool = False, - sample_size: Optional[int] = None, - num_vector_embeds: Optional[int] = None, - patch_size: Optional[int] = None, - activation_fn: str = "geglu", - num_embeds_ada_norm: Optional[int] = None, - use_linear_projection: bool = False, - only_cross_attention: bool = False, - upcast_attention: bool = False, - norm_type: str = "layer_norm", - norm_elementwise_affine: bool = True, - use_gated_attention: bool = False, - ): - super().__init__() - self.use_linear_projection = use_linear_projection - self.num_attention_heads = num_attention_heads - self.attention_head_dim = attention_head_dim - inner_dim = num_attention_heads * attention_head_dim - - # 1. 
Transformer2DModel can process both standard continuous images of shape `(batch_size, num_channels, width, height)` as well as quantized image embeddings of shape `(batch_size, num_image_vectors)` - # Define whether input is continuous or discrete depending on configuration - self.is_input_continuous = (in_channels is not None) and (patch_size is None) - self.is_input_vectorized = num_vector_embeds is not None - self.is_input_patches = in_channels is not None and patch_size is not None - - if norm_type == "layer_norm" and num_embeds_ada_norm is not None: - deprecation_message = ( - f"The configuration file of this model: {self.__class__} is outdated. `norm_type` is either not set or" - " incorrectly set to `'layer_norm'`.Make sure to set `norm_type` to `'ada_norm'` in the config." - " Please make sure to update the config accordingly as leaving `norm_type` might led to incorrect" - " results in future versions. If you have downloaded this checkpoint from the Hugging Face Hub, it" - " would be very nice if you could open a Pull request for the `transformer/config.json` file" - ) - deprecate("norm_type!=num_embeds_ada_norm", "1.0.0", deprecation_message, standard_warn=False) - norm_type = "ada_norm" - - if self.is_input_continuous and self.is_input_vectorized: - raise ValueError( - f"Cannot define both `in_channels`: {in_channels} and `num_vector_embeds`: {num_vector_embeds}. Make" - " sure that either `in_channels` or `num_vector_embeds` is None." - ) - elif self.is_input_vectorized and self.is_input_patches: - raise ValueError( - f"Cannot define both `num_vector_embeds`: {num_vector_embeds} and `patch_size`: {patch_size}. Make" - " sure that either `num_vector_embeds` or `num_patches` is None." - ) - elif not self.is_input_continuous and not self.is_input_vectorized and not self.is_input_patches: - raise ValueError( - f"Has to define `in_channels`: {in_channels}, `num_vector_embeds`: {num_vector_embeds}, or patch_size:" - f" {patch_size}. Make sure that `in_channels`, `num_vector_embeds` or `num_patches` is not None." - ) - - # 2. Define input layers - if self.is_input_continuous: - self.in_channels = in_channels - - self.norm = torch.nn.GroupNorm(num_groups=norm_num_groups, num_channels=in_channels, eps=1e-6, affine=True) - if use_linear_projection: - self.proj_in = nn.Linear(in_channels, inner_dim) - else: - self.proj_in = nn.Conv2d(in_channels, inner_dim, kernel_size=1, stride=1, padding=0) - elif self.is_input_vectorized: - assert sample_size is not None, "Transformer2DModel over discrete input must provide sample_size" - assert num_vector_embeds is not None, "Transformer2DModel over discrete input must provide num_embed" - - self.height = sample_size - self.width = sample_size - self.num_vector_embeds = num_vector_embeds - self.num_latent_pixels = self.height * self.width - - self.latent_image_embedding = ImagePositionalEmbeddings( - num_embed=num_vector_embeds, embed_dim=inner_dim, height=self.height, width=self.width - ) - elif self.is_input_patches: - assert sample_size is not None, "Transformer2DModel over patched input must provide sample_size" - - self.height = sample_size - self.width = sample_size - - self.patch_size = patch_size - self.pos_embed = PatchEmbed( - height=sample_size, - width=sample_size, - patch_size=patch_size, - in_channels=in_channels, - embed_dim=inner_dim, - ) - - # 3. 
Define transformers blocks - self.transformer_blocks = nn.ModuleList( - [ - BasicTransformerBlock( - inner_dim, - num_attention_heads, - attention_head_dim, - dropout=dropout, - cross_attention_dim=cross_attention_dim, - activation_fn=activation_fn, - num_embeds_ada_norm=num_embeds_ada_norm, - attention_bias=attention_bias, - only_cross_attention=only_cross_attention, - upcast_attention=upcast_attention, - norm_type=norm_type, - norm_elementwise_affine=norm_elementwise_affine, - use_gated_attention=use_gated_attention, - ) - for d in range(num_layers) - ] - ) - - # 4. Define output layers - self.out_channels = in_channels if out_channels is None else out_channels - if self.is_input_continuous: - # TODO: should use out_channels for continuous projections - if use_linear_projection: - self.proj_out = nn.Linear(inner_dim, in_channels) - else: - self.proj_out = nn.Conv2d(inner_dim, in_channels, kernel_size=1, stride=1, padding=0) - elif self.is_input_vectorized: - self.norm_out = nn.LayerNorm(inner_dim) - self.out = nn.Linear(inner_dim, self.num_vector_embeds - 1) - elif self.is_input_patches: - self.norm_out = nn.LayerNorm(inner_dim, elementwise_affine=False, eps=1e-6) - self.proj_out_1 = nn.Linear(inner_dim, 2 * inner_dim) - self.proj_out_2 = nn.Linear(inner_dim, patch_size * patch_size * self.out_channels) - - def forward( - self, - hidden_states: torch.Tensor, - encoder_hidden_states: Optional[torch.Tensor] = None, - timestep: Optional[torch.LongTensor] = None, - class_labels: Optional[torch.LongTensor] = None, - cross_attention_kwargs: Dict[str, Any] = None, - attention_mask: Optional[torch.Tensor] = None, - encoder_attention_mask: Optional[torch.Tensor] = None, - return_dict: bool = True, - return_cross_attention_probs: bool = False, - ): - """ - Args: - hidden_states ( When discrete, `torch.LongTensor` of shape `(batch size, num latent pixels)`. - When continuous, `torch.FloatTensor` of shape `(batch size, channel, height, width)`): Input - hidden_states - encoder_hidden_states ( `torch.FloatTensor` of shape `(batch size, sequence len, embed dims)`, *optional*): - Conditional embeddings for cross attention layer. If not given, cross-attention defaults to - self-attention. - timestep ( `torch.LongTensor`, *optional*): - Optional timestep to be applied as an embedding in AdaLayerNorm's. Used to indicate denoising step. - class_labels ( `torch.LongTensor` of shape `(batch size, num classes)`, *optional*): - Optional class labels to be applied as an embedding in AdaLayerZeroNorm. Used to indicate class labels - conditioning. - encoder_attention_mask ( `torch.Tensor`, *optional* ). - Cross-attention mask, applied to encoder_hidden_states. Two formats supported: - Mask `(batch, sequence_length)` True = keep, False = discard. Bias `(batch, 1, sequence_length)` 0 - = keep, -10000 = discard. - If ndim == 2: will be interpreted as a mask, then converted into a bias consistent with the format - above. This bias will be added to the cross-attention scores. - return_dict (`bool`, *optional*, defaults to `True`): - Whether or not to return a [`models.unet_2d_condition.UNet2DConditionOutput`] instead of a plain tuple. - - Returns: - [`~models.transformer_2d.Transformer2DModelOutput`] or `tuple`: - [`~models.transformer_2d.Transformer2DModelOutput`] if `return_dict` is True, otherwise a `tuple`. When - returning a tuple, the first element is the sample tensor. - """ - # ensure attention_mask is a bias, and give it a singleton query_tokens dimension. - # we may have done this conversion already, e.g. 
if we came here via UNet2DConditionModel#forward. - # we can tell by counting dims; if ndim == 2: it's a mask rather than a bias. - # expects mask of shape: - # [batch, key_tokens] - # adds singleton query_tokens dimension: - # [batch, 1, key_tokens] - # this helps to broadcast it as a bias over attention scores, which will be in one of the following shapes: - # [batch, heads, query_tokens, key_tokens] (e.g. torch sdp attn) - # [batch * heads, query_tokens, key_tokens] (e.g. xformers or classic attn) - if attention_mask is not None and attention_mask.ndim == 2: - # assume that mask is expressed as: - # (1 = keep, 0 = discard) - # convert mask into a bias that can be added to attention scores: - # (keep = +0, discard = -10000.0) - attention_mask = (1 - attention_mask.to(hidden_states.dtype)) * -10000.0 - attention_mask = attention_mask.unsqueeze(1) - - # convert encoder_attention_mask to a bias the same way we do for attention_mask - if encoder_attention_mask is not None and encoder_attention_mask.ndim == 2: - encoder_attention_mask = (1 - encoder_attention_mask.to(hidden_states.dtype)) * -10000.0 - encoder_attention_mask = encoder_attention_mask.unsqueeze(1) - - # 1. Input - if self.is_input_continuous: - batch, _, height, width = hidden_states.shape - residual = hidden_states - - hidden_states = self.norm(hidden_states) - if not self.use_linear_projection: - hidden_states = self.proj_in(hidden_states) - inner_dim = hidden_states.shape[1] - hidden_states = hidden_states.permute(0, 2, 3, 1).reshape(batch, height * width, inner_dim) - else: - inner_dim = hidden_states.shape[1] - hidden_states = hidden_states.permute(0, 2, 3, 1).reshape(batch, height * width, inner_dim) - hidden_states = self.proj_in(hidden_states) - elif self.is_input_vectorized: - hidden_states = self.latent_image_embedding(hidden_states) - elif self.is_input_patches: - hidden_states = self.pos_embed(hidden_states) - - base_attn_key = cross_attention_kwargs["attn_key"] - - # 2. Blocks - cross_attention_probs_all = [] - for block_ind, block in enumerate(self.transformer_blocks): - cross_attention_kwargs["attn_key"] = base_attn_key + [block_ind] - - hidden_states = block( - hidden_states, - attention_mask=attention_mask, - encoder_hidden_states=encoder_hidden_states, - encoder_attention_mask=encoder_attention_mask, - timestep=timestep, - cross_attention_kwargs=cross_attention_kwargs, - class_labels=class_labels, - return_cross_attention_probs=return_cross_attention_probs, - ) - if return_cross_attention_probs: - hidden_states, cross_attention_probs = hidden_states - cross_attention_probs_all.append(cross_attention_probs) - - # 3. Output - if self.is_input_continuous: - if not self.use_linear_projection: - hidden_states = hidden_states.reshape(batch, height, width, inner_dim).permute(0, 3, 1, 2).contiguous() - hidden_states = self.proj_out(hidden_states) - else: - hidden_states = self.proj_out(hidden_states) - hidden_states = hidden_states.reshape(batch, height, width, inner_dim).permute(0, 3, 1, 2).contiguous() - - output = hidden_states + residual - elif self.is_input_vectorized: - hidden_states = self.norm_out(hidden_states) - logits = self.out(hidden_states) - # (batch, self.num_vector_embeds - 1, self.num_latent_pixels) - logits = logits.permute(0, 2, 1) - - # log(p(x_0)) - output = F.log_softmax(logits.double(), dim=1).float() - elif self.is_input_patches: - # TODO: cleanup! 
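# --- Illustrative aside (not part of the deleted file above) ----------------
# A minimal, self-contained sketch of the mask-to-bias conversion described in
# the comments of Transformer2DModel.forward() above: a 2-D mask
# (1 = keep, 0 = discard) becomes an additive bias (0.0 for keep, -10000.0 for
# discard) with a singleton query dimension so it broadcasts over attention
# scores shaped [batch, heads, query_tokens, key_tokens]. The tensor shapes
# below are dummy values chosen only for illustration.
import torch

mask = torch.tensor([[1, 1, 1, 0]])                 # [batch, key_tokens]
bias = (1 - mask.to(torch.float32)) * -10000.0      # keep -> 0.0, discard -> -10000.0
bias = bias.unsqueeze(1)                            # [batch, 1, key_tokens]
scores = torch.zeros(1, 8, 5, 4)                    # dummy [batch, heads, query, key] scores
masked_scores = scores + bias                       # discarded keys sit near -10000 before softmax
# -----------------------------------------------------------------------------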
- conditioning = self.transformer_blocks[0].norm1.emb( - timestep, class_labels, hidden_dtype=hidden_states.dtype - ) - shift, scale = self.proj_out_1(F.silu(conditioning)).chunk(2, dim=1) - hidden_states = self.norm_out(hidden_states) * (1 + scale[:, None]) + shift[:, None] - hidden_states = self.proj_out_2(hidden_states) - - # unpatchify - height = width = int(hidden_states.shape[1] ** 0.5) - hidden_states = hidden_states.reshape( - shape=(-1, height, width, self.patch_size, self.patch_size, self.out_channels) - ) - hidden_states = torch.einsum("nhwpqc->nchpwq", hidden_states) - output = hidden_states.reshape( - shape=(-1, self.out_channels, height * self.patch_size, width * self.patch_size) - ) - - if len(cross_attention_probs_all) == 1: - # If we only have one transformer block in a Transformer2DModel, we do not create another nested level. - cross_attention_probs_all = cross_attention_probs_all[0] - - if not return_dict: - if return_cross_attention_probs: - return (output, cross_attention_probs_all) - return (output,) - - output = Transformer2DModelOutput(sample=output) - if return_cross_attention_probs: - return output, cross_attention_probs_all - return output diff --git a/spaces/ltgoslo/ssa-perin/mtool/ucca/README.md b/spaces/ltgoslo/ssa-perin/mtool/ucca/README.md deleted file mode 100644 index 96c97d7923778b05bf86854640501464500d758f..0000000000000000000000000000000000000000 --- a/spaces/ltgoslo/ssa-perin/mtool/ucca/README.md +++ /dev/null @@ -1,40 +0,0 @@ -Universal Conceptual Cognitive Annotation -============================ -UCCA is a linguistic framework for semantic annotation, whose details -are available at [the following paper](http://www.cs.huji.ac.il/~oabend/papers/ucca_acl.pdf): - - @inproceedings{abend2013universal, - author={Abend, Omri and Rappoport, Ari}, - title={{U}niversal {C}onceptual {C}ognitive {A}nnotation ({UCCA})}, - booktitle={Proc. of ACL}, - month={August}, - year={2013}, - pages={228--238}, - url={http://aclweb.org/anthology/P13-1023} - } - -This Python 3 package provides an API to the UCCA annotation and tools to -manipulate and process it. Its main features are conversion between different -representations of UCCA annotations, and rich objects for all of the linguistic -relations which appear in the theoretical framework (see `core`, `layer0`, `layer1` -and `convert` modules under the `ucca` package). - -The `scripts` package contains various utilities for processing passage files. - -To parse text to UCCA graphs, use [TUPA, the UCCA parser](http://www.cs.huji.ac.il/~danielh/tupa). - - -Authors ------- -* Amit Beka: amit.beka@gmail.com -* Daniel Hershcovich: danielh@cs.huji.ac.il - - -License -------- -This package is licensed under the GPLv3 or later license. 
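As a side note for readers of this diff: the README above says the package exposes its conversion utilities through the `convert` module of the `ucca` package. The snippet below is only a rough usage sketch, not something taken from the deleted README; the exact signatures of `convert.from_text` / `convert.to_text` vary between ucca releases, so treat the argument handling here as an assumption and check it against your installed version.

    # rough sketch; argument handling is an assumption, verify against your ucca version
    from ucca import convert

    # from_text is expected to yield ucca.core.Passage objects with layer0 (tokens) populated
    passages = list(convert.from_text(["The dog chased the cat ."], passage_id="1"))
    passage = passages[0]
    print(convert.to_text(passage))  # convert the Passage back to plain text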
- -[![Build Status (Travis CI)](https://travis-ci.org/danielhers/ucca.svg?branch=master)](https://travis-ci.org/danielhers/ucca) -[![Build Status (AppVeyor)](https://ci.appveyor.com/api/projects/status/github/danielhers/ucca?svg=true)](https://ci.appveyor.com/project/danielh/ucca) -[![Build Status (Docs)](https://readthedocs.org/projects/ucca/badge/?version=latest)](http://ucca.readthedocs.io/en/latest/) -[![PyPI version](https://badge.fury.io/py/UCCA.svg)](https://badge.fury.io/py/UCCA) diff --git a/spaces/lusea/rvc-Qinggan/lib/infer_pack/onnx_inference.py b/spaces/lusea/rvc-Qinggan/lib/infer_pack/onnx_inference.py deleted file mode 100644 index c78324cbc08414fffcc689f325312de0e51bd6b4..0000000000000000000000000000000000000000 --- a/spaces/lusea/rvc-Qinggan/lib/infer_pack/onnx_inference.py +++ /dev/null @@ -1,143 +0,0 @@ -import onnxruntime -import librosa -import numpy as np -import soundfile - - -class ContentVec: - def __init__(self, vec_path="pretrained/vec-768-layer-12.onnx", device=None): - print("load model(s) from {}".format(vec_path)) - if device == "cpu" or device is None: - providers = ["CPUExecutionProvider"] - elif device == "cuda": - providers = ["CUDAExecutionProvider", "CPUExecutionProvider"] - elif device == "dml": - providers = ["DmlExecutionProvider"] - else: - raise RuntimeError("Unsportted Device") - self.model = onnxruntime.InferenceSession(vec_path, providers=providers) - - def __call__(self, wav): - return self.forward(wav) - - def forward(self, wav): - feats = wav - if feats.ndim == 2: # double channels - feats = feats.mean(-1) - assert feats.ndim == 1, feats.ndim - feats = np.expand_dims(np.expand_dims(feats, 0), 0) - onnx_input = {self.model.get_inputs()[0].name: feats} - logits = self.model.run(None, onnx_input)[0] - return logits.transpose(0, 2, 1) - - -def get_f0_predictor(f0_predictor, hop_length, sampling_rate, **kargs): - if f0_predictor == "pm": - from lib.infer_pack.modules.F0Predictor.PMF0Predictor import PMF0Predictor - - f0_predictor_object = PMF0Predictor( - hop_length=hop_length, sampling_rate=sampling_rate - ) - elif f0_predictor == "harvest": - from lib.infer_pack.modules.F0Predictor.HarvestF0Predictor import HarvestF0Predictor - - f0_predictor_object = HarvestF0Predictor( - hop_length=hop_length, sampling_rate=sampling_rate - ) - elif f0_predictor == "dio": - from lib.infer_pack.modules.F0Predictor.DioF0Predictor import DioF0Predictor - - f0_predictor_object = DioF0Predictor( - hop_length=hop_length, sampling_rate=sampling_rate - ) - else: - raise Exception("Unknown f0 predictor") - return f0_predictor_object - - -class OnnxRVC: - def __init__( - self, - model_path, - sr=40000, - hop_size=512, - vec_path="vec-768-layer-12", - device="cpu", - ): - vec_path = f"pretrained/{vec_path}.onnx" - self.vec_model = ContentVec(vec_path, device) - if device == "cpu" or device is None: - providers = ["CPUExecutionProvider"] - elif device == "cuda": - providers = ["CUDAExecutionProvider", "CPUExecutionProvider"] - elif device == "dml": - providers = ["DmlExecutionProvider"] - else: - raise RuntimeError("Unsportted Device") - self.model = onnxruntime.InferenceSession(model_path, providers=providers) - self.sampling_rate = sr - self.hop_size = hop_size - - def forward(self, hubert, hubert_length, pitch, pitchf, ds, rnd): - onnx_input = { - self.model.get_inputs()[0].name: hubert, - self.model.get_inputs()[1].name: hubert_length, - self.model.get_inputs()[2].name: pitch, - self.model.get_inputs()[3].name: pitchf, - self.model.get_inputs()[4].name: ds, - 
self.model.get_inputs()[5].name: rnd, - } - return (self.model.run(None, onnx_input)[0] * 32767).astype(np.int16) - - def inference( - self, - raw_path, - sid, - f0_method="dio", - f0_up_key=0, - pad_time=0.5, - cr_threshold=0.02, - ): - f0_min = 50 - f0_max = 1100 - f0_mel_min = 1127 * np.log(1 + f0_min / 700) - f0_mel_max = 1127 * np.log(1 + f0_max / 700) - f0_predictor = get_f0_predictor( - f0_method, - hop_length=self.hop_size, - sampling_rate=self.sampling_rate, - threshold=cr_threshold, - ) - wav, sr = librosa.load(raw_path, sr=self.sampling_rate) - org_length = len(wav) - if org_length / sr > 50.0: - raise RuntimeError("Reached Max Length") - - wav16k = librosa.resample(wav, orig_sr=self.sampling_rate, target_sr=16000) - wav16k = wav16k - - hubert = self.vec_model(wav16k) - hubert = np.repeat(hubert, 2, axis=2).transpose(0, 2, 1).astype(np.float32) - hubert_length = hubert.shape[1] - - pitchf = f0_predictor.compute_f0(wav, hubert_length) - pitchf = pitchf * 2 ** (f0_up_key / 12) - pitch = pitchf.copy() - f0_mel = 1127 * np.log(1 + pitch / 700) - f0_mel[f0_mel > 0] = (f0_mel[f0_mel > 0] - f0_mel_min) * 254 / ( - f0_mel_max - f0_mel_min - ) + 1 - f0_mel[f0_mel <= 1] = 1 - f0_mel[f0_mel > 255] = 255 - pitch = np.rint(f0_mel).astype(np.int64) - - pitchf = pitchf.reshape(1, len(pitchf)).astype(np.float32) - pitch = pitch.reshape(1, len(pitch)) - ds = np.array([sid]).astype(np.int64) - - rnd = np.random.randn(1, 192, hubert_length).astype(np.float32) - hubert_length = np.array([hubert_length]).astype(np.int64) - - out_wav = self.forward(hubert, hubert_length, pitch, pitchf, ds, rnd).squeeze() - out_wav = np.pad(out_wav, (0, 2 * self.hop_size), "constant") - return out_wav[0:org_length] diff --git a/spaces/lwchen/CodeFormer/CodeFormer/basicsr/losses/loss_util.py b/spaces/lwchen/CodeFormer/CodeFormer/basicsr/losses/loss_util.py deleted file mode 100644 index 744eeb46d1f3b5a7b4553ca23237ddd9c899a698..0000000000000000000000000000000000000000 --- a/spaces/lwchen/CodeFormer/CodeFormer/basicsr/losses/loss_util.py +++ /dev/null @@ -1,95 +0,0 @@ -import functools -from torch.nn import functional as F - - -def reduce_loss(loss, reduction): - """Reduce loss as specified. - - Args: - loss (Tensor): Elementwise loss tensor. - reduction (str): Options are 'none', 'mean' and 'sum'. - - Returns: - Tensor: Reduced loss tensor. - """ - reduction_enum = F._Reduction.get_enum(reduction) - # none: 0, elementwise_mean:1, sum: 2 - if reduction_enum == 0: - return loss - elif reduction_enum == 1: - return loss.mean() - else: - return loss.sum() - - -def weight_reduce_loss(loss, weight=None, reduction='mean'): - """Apply element-wise weight and reduce loss. - - Args: - loss (Tensor): Element-wise loss. - weight (Tensor): Element-wise weights. Default: None. - reduction (str): Same as built-in losses of PyTorch. Options are - 'none', 'mean' and 'sum'. Default: 'mean'. - - Returns: - Tensor: Loss values. 
- """ - # if weight is specified, apply element-wise weight - if weight is not None: - assert weight.dim() == loss.dim() - assert weight.size(1) == 1 or weight.size(1) == loss.size(1) - loss = loss * weight - - # if weight is not specified or reduction is sum, just reduce the loss - if weight is None or reduction == 'sum': - loss = reduce_loss(loss, reduction) - # if reduction is mean, then compute mean over weight region - elif reduction == 'mean': - if weight.size(1) > 1: - weight = weight.sum() - else: - weight = weight.sum() * loss.size(1) - loss = loss.sum() / weight - - return loss - - -def weighted_loss(loss_func): - """Create a weighted version of a given loss function. - - To use this decorator, the loss function must have the signature like - `loss_func(pred, target, **kwargs)`. The function only needs to compute - element-wise loss without any reduction. This decorator will add weight - and reduction arguments to the function. The decorated function will have - the signature like `loss_func(pred, target, weight=None, reduction='mean', - **kwargs)`. - - :Example: - - >>> import torch - >>> @weighted_loss - >>> def l1_loss(pred, target): - >>> return (pred - target).abs() - - >>> pred = torch.Tensor([0, 2, 3]) - >>> target = torch.Tensor([1, 1, 1]) - >>> weight = torch.Tensor([1, 0, 1]) - - >>> l1_loss(pred, target) - tensor(1.3333) - >>> l1_loss(pred, target, weight) - tensor(1.5000) - >>> l1_loss(pred, target, reduction='none') - tensor([1., 1., 2.]) - >>> l1_loss(pred, target, weight, reduction='sum') - tensor(3.) - """ - - @functools.wraps(loss_func) - def wrapper(pred, target, weight=None, reduction='mean', **kwargs): - # get element-wise loss - loss = loss_func(pred, target, **kwargs) - loss = weight_reduce_loss(loss, weight, reduction) - return loss - - return wrapper diff --git a/spaces/ma-xu/LIVE/pybind11/tests/test_callbacks.cpp b/spaces/ma-xu/LIVE/pybind11/tests/test_callbacks.cpp deleted file mode 100644 index 71b88c44c7650a7e7b3f37cee19359e15bbb0270..0000000000000000000000000000000000000000 --- a/spaces/ma-xu/LIVE/pybind11/tests/test_callbacks.cpp +++ /dev/null @@ -1,168 +0,0 @@ -/* - tests/test_callbacks.cpp -- callbacks - - Copyright (c) 2016 Wenzel Jakob - - All rights reserved. Use of this source code is governed by a - BSD-style license that can be found in the LICENSE file. 
-*/ - -#include "pybind11_tests.h" -#include "constructor_stats.h" -#include -#include - - -int dummy_function(int i) { return i + 1; } - -TEST_SUBMODULE(callbacks, m) { - // test_callbacks, test_function_signatures - m.def("test_callback1", [](py::object func) { return func(); }); - m.def("test_callback2", [](py::object func) { return func("Hello", 'x', true, 5); }); - m.def("test_callback3", [](const std::function &func) { - return "func(43) = " + std::to_string(func(43)); }); - m.def("test_callback4", []() -> std::function { return [](int i) { return i+1; }; }); - m.def("test_callback5", []() { - return py::cpp_function([](int i) { return i+1; }, py::arg("number")); - }); - - // test_keyword_args_and_generalized_unpacking - m.def("test_tuple_unpacking", [](py::function f) { - auto t1 = py::make_tuple(2, 3); - auto t2 = py::make_tuple(5, 6); - return f("positional", 1, *t1, 4, *t2); - }); - - m.def("test_dict_unpacking", [](py::function f) { - auto d1 = py::dict("key"_a="value", "a"_a=1); - auto d2 = py::dict(); - auto d3 = py::dict("b"_a=2); - return f("positional", 1, **d1, **d2, **d3); - }); - - m.def("test_keyword_args", [](py::function f) { - return f("x"_a=10, "y"_a=20); - }); - - m.def("test_unpacking_and_keywords1", [](py::function f) { - auto args = py::make_tuple(2); - auto kwargs = py::dict("d"_a=4); - return f(1, *args, "c"_a=3, **kwargs); - }); - - m.def("test_unpacking_and_keywords2", [](py::function f) { - auto kwargs1 = py::dict("a"_a=1); - auto kwargs2 = py::dict("c"_a=3, "d"_a=4); - return f("positional", *py::make_tuple(1), 2, *py::make_tuple(3, 4), 5, - "key"_a="value", **kwargs1, "b"_a=2, **kwargs2, "e"_a=5); - }); - - m.def("test_unpacking_error1", [](py::function f) { - auto kwargs = py::dict("x"_a=3); - return f("x"_a=1, "y"_a=2, **kwargs); // duplicate ** after keyword - }); - - m.def("test_unpacking_error2", [](py::function f) { - auto kwargs = py::dict("x"_a=3); - return f(**kwargs, "x"_a=1); // duplicate keyword after ** - }); - - m.def("test_arg_conversion_error1", [](py::function f) { - f(234, UnregisteredType(), "kw"_a=567); - }); - - m.def("test_arg_conversion_error2", [](py::function f) { - f(234, "expected_name"_a=UnregisteredType(), "kw"_a=567); - }); - - // test_lambda_closure_cleanup - struct Payload { - Payload() { print_default_created(this); } - ~Payload() { print_destroyed(this); } - Payload(const Payload &) { print_copy_created(this); } - Payload(Payload &&) { print_move_created(this); } - }; - // Export the payload constructor statistics for testing purposes: - m.def("payload_cstats", &ConstructorStats::get); - /* Test cleanup of lambda closure */ - m.def("test_cleanup", []() -> std::function { - Payload p; - - return [p]() { - /* p should be cleaned up when the returned function is garbage collected */ - (void) p; - }; - }); - - // test_cpp_function_roundtrip - /* Test if passing a function pointer from C++ -> Python -> C++ yields the original pointer */ - m.def("dummy_function", &dummy_function); - m.def("dummy_function2", [](int i, int j) { return i + j; }); - m.def("roundtrip", [](std::function f, bool expect_none = false) { - if (expect_none && f) - throw std::runtime_error("Expected None to be converted to empty std::function"); - return f; - }, py::arg("f"), py::arg("expect_none")=false); - m.def("test_dummy_function", [](const std::function &f) -> std::string { - using fn_type = int (*)(int); - auto result = f.target(); - if (!result) { - auto r = f(1); - return "can't convert to function pointer: eval(1) = " + std::to_string(r); - } 
else if (*result == dummy_function) { - auto r = (*result)(1); - return "matches dummy_function: eval(1) = " + std::to_string(r); - } else { - return "argument does NOT match dummy_function. This should never happen!"; - } - }); - - class AbstractBase { public: virtual unsigned int func() = 0; }; - m.def("func_accepting_func_accepting_base", [](std::function) { }); - - struct MovableObject { - bool valid = true; - - MovableObject() = default; - MovableObject(const MovableObject &) = default; - MovableObject &operator=(const MovableObject &) = default; - MovableObject(MovableObject &&o) : valid(o.valid) { o.valid = false; } - MovableObject &operator=(MovableObject &&o) { - valid = o.valid; - o.valid = false; - return *this; - } - }; - py::class_(m, "MovableObject"); - - // test_movable_object - m.def("callback_with_movable", [](std::function f) { - auto x = MovableObject(); - f(x); // lvalue reference shouldn't move out object - return x.valid; // must still return `true` - }); - - // test_bound_method_callback - struct CppBoundMethodTest {}; - py::class_(m, "CppBoundMethodTest") - .def(py::init<>()) - .def("triple", [](CppBoundMethodTest &, int val) { return 3 * val; }); - - // test async Python callbacks - using callback_f = std::function; - m.def("test_async_callback", [](callback_f f, py::list work) { - // make detached thread that calls `f` with piece of work after a little delay - auto start_f = [f](int j) { - auto invoke_f = [f, j] { - std::this_thread::sleep_for(std::chrono::milliseconds(50)); - f(j); - }; - auto t = std::thread(std::move(invoke_f)); - t.detach(); - }; - - // spawn worker threads - for (auto i : work) - start_f(py::cast(i)); - }); -} diff --git a/spaces/ma-xu/LIVE/thrust/thrust/detail/raw_reference_cast.h b/spaces/ma-xu/LIVE/thrust/thrust/detail/raw_reference_cast.h deleted file mode 100644 index a678144e2256b43baab945f54bdf82871241e0ad..0000000000000000000000000000000000000000 --- a/spaces/ma-xu/LIVE/thrust/thrust/detail/raw_reference_cast.h +++ /dev/null @@ -1,398 +0,0 @@ -/* - * Copyright 2008-2013 NVIDIA Corporation - * - * Licensed under the Apache License, Version 2.0 (the "License"); - * you may not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ - -#pragma once - -#include -#include -#include -#include -#include -#include - - -// the order of declarations and definitions in this file is totally goofy -// this header defines raw_reference_cast, which has a few overloads towards the bottom of the file -// raw_reference_cast depends on metafunctions such as is_unwrappable and raw_reference -// we need to be sure that these metafunctions are completely defined (including specializations) before they are instantiated by raw_reference_cast - -namespace thrust -{ -namespace detail -{ - - -__THRUST_DEFINE_HAS_NESTED_TYPE(is_wrapped_reference, wrapped_reference_hint) - - -// wrapped reference-like things which aren't strictly wrapped references -// (e.g. 
tuples of wrapped references) are considered unwrappable -template - struct is_unwrappable - : is_wrapped_reference -{}; - - -// specialize is_unwrappable -// a tuple is_unwrappable if any of its elements is_unwrappable -template< - typename T0, typename T1, typename T2, - typename T3, typename T4, typename T5, - typename T6, typename T7, typename T8, - typename T9 -> - struct is_unwrappable< - thrust::tuple - > - : or_< - is_unwrappable, - is_unwrappable, - is_unwrappable, - is_unwrappable, - is_unwrappable, - is_unwrappable, - is_unwrappable, - is_unwrappable, - is_unwrappable, - is_unwrappable - > -{}; - - -// specialize is_unwrappable -// a tuple_of_iterator_references is_unwrappable if any of its elements is_unwrappable -template< - typename T0, typename T1, typename T2, - typename T3, typename T4, typename T5, - typename T6, typename T7, typename T8, - typename T9 -> - struct is_unwrappable< - thrust::detail::tuple_of_iterator_references - > - : or_< - is_unwrappable, - is_unwrappable, - is_unwrappable, - is_unwrappable, - is_unwrappable, - is_unwrappable, - is_unwrappable, - is_unwrappable, - is_unwrappable, - is_unwrappable - > -{}; - - -template - struct enable_if_unwrappable - : enable_if< - is_unwrappable::value, - Result - > -{}; - - -namespace raw_reference_detail -{ - - -template - struct raw_reference_impl - : add_reference -{}; - - -template - struct raw_reference_impl< - T, - typename thrust::detail::enable_if< - is_wrapped_reference< - typename remove_cv::type - >::value - >::type - > -{ - typedef typename add_reference< - typename pointer_element::type - >::type type; -}; - - -} // end raw_reference_detail - - -template - struct raw_reference : - raw_reference_detail::raw_reference_impl -{}; - - -namespace raw_reference_detail -{ - -// unlike raw_reference, -// raw_reference_tuple_helper needs to return a value -// when it encounters one, rather than a reference -// upon encountering tuple, recurse -// -// we want the following behavior: -// 1. T -> T -// 2. T& -> T& -// 3. null_type -> null_type -// 4. reference -> T& -// 5. 
tuple_of_iterator_references -> tuple_of_iterator_references::type> - - -// wrapped references are unwrapped using raw_reference, otherwise, return T -template - struct raw_reference_tuple_helper - : eval_if< - is_unwrappable< - typename remove_cv::type - >::value, - raw_reference, - identity_ - > -{}; - - -// recurse on tuples -template < - typename T0, typename T1, typename T2, - typename T3, typename T4, typename T5, - typename T6, typename T7, typename T8, - typename T9 -> - struct raw_reference_tuple_helper< - thrust::tuple - > -{ - typedef thrust::tuple< - typename raw_reference_tuple_helper::type, - typename raw_reference_tuple_helper::type, - typename raw_reference_tuple_helper::type, - typename raw_reference_tuple_helper::type, - typename raw_reference_tuple_helper::type, - typename raw_reference_tuple_helper::type, - typename raw_reference_tuple_helper::type, - typename raw_reference_tuple_helper::type, - typename raw_reference_tuple_helper::type, - typename raw_reference_tuple_helper::type - > type; -}; - - -template < - typename T0, typename T1, typename T2, - typename T3, typename T4, typename T5, - typename T6, typename T7, typename T8, - typename T9 -> - struct raw_reference_tuple_helper< - thrust::detail::tuple_of_iterator_references - > -{ - typedef thrust::detail::tuple_of_iterator_references< - typename raw_reference_tuple_helper::type, - typename raw_reference_tuple_helper::type, - typename raw_reference_tuple_helper::type, - typename raw_reference_tuple_helper::type, - typename raw_reference_tuple_helper::type, - typename raw_reference_tuple_helper::type, - typename raw_reference_tuple_helper::type, - typename raw_reference_tuple_helper::type, - typename raw_reference_tuple_helper::type, - typename raw_reference_tuple_helper::type - > type; -}; - - -} // end raw_reference_detail - - -// a couple of specializations of raw_reference for tuples follow - - -// if a tuple "tuple_type" is_unwrappable, -// then the raw_reference of tuple_type is a tuple of its members' raw_references -// else the raw_reference of tuple_type is tuple_type & -template < - typename T0, typename T1, typename T2, - typename T3, typename T4, typename T5, - typename T6, typename T7, typename T8, - typename T9 -> - struct raw_reference< - thrust::tuple - > -{ - private: - typedef thrust::tuple tuple_type; - - public: - typedef typename eval_if< - is_unwrappable::value, - raw_reference_detail::raw_reference_tuple_helper, - add_reference - >::type type; -}; - - -template < - typename T0, typename T1, typename T2, - typename T3, typename T4, typename T5, - typename T6, typename T7, typename T8, - typename T9 -> - struct raw_reference< - thrust::detail::tuple_of_iterator_references - > -{ - private: - typedef detail::tuple_of_iterator_references tuple_type; - - public: - typedef typename raw_reference_detail::raw_reference_tuple_helper::type type; - - // XXX figure out why is_unwrappable seems to be broken for tuple_of_iterator_references - //typedef typename eval_if< - // is_unwrappable::value, - // raw_reference_detail::raw_reference_tuple_helper, - // add_reference - //>::type type; -}; - - -} // end detail - - -// provide declarations of raw_reference_cast's overloads for raw_reference_caster below -template -__host__ __device__ -typename detail::raw_reference::type - raw_reference_cast(T &ref); - - -template -__host__ __device__ -typename detail::raw_reference::type - raw_reference_cast(const T &ref); - - -template< - typename T0, typename T1, typename T2, - typename T3, typename T4, typename T5, - 
typename T6, typename T7, typename T8, - typename T9 -> -__host__ __device__ -typename detail::enable_if_unwrappable< - thrust::detail::tuple_of_iterator_references, - typename detail::raw_reference< - thrust::detail::tuple_of_iterator_references - >::type ->::type -raw_reference_cast(thrust::detail::tuple_of_iterator_references t); - - -namespace detail -{ - - -struct raw_reference_caster -{ - template - __host__ __device__ - typename detail::raw_reference::type operator()(T &ref) - { - return thrust::raw_reference_cast(ref); - } - - template - __host__ __device__ - typename detail::raw_reference::type operator()(const T &ref) - { - return thrust::raw_reference_cast(ref); - } - - template< - typename T0, typename T1, typename T2, - typename T3, typename T4, typename T5, - typename T6, typename T7, typename T8, - typename T9 - > - __host__ __device__ - typename detail::raw_reference< - thrust::detail::tuple_of_iterator_references - >::type - operator()(thrust::detail::tuple_of_iterator_references t, - typename enable_if< - is_unwrappable >::value - >::type * = 0) - { - return thrust::raw_reference_cast(t); - } -}; // end raw_reference_caster - - -} // end detail - - -template -__host__ __device__ -typename detail::raw_reference::type - raw_reference_cast(T &ref) -{ - return *thrust::raw_pointer_cast(&ref); -} // end raw_reference_cast - - -template -__host__ __device__ -typename detail::raw_reference::type - raw_reference_cast(const T &ref) -{ - return *thrust::raw_pointer_cast(&ref); -} // end raw_reference_cast - - -template< - typename T0, typename T1, typename T2, - typename T3, typename T4, typename T5, - typename T6, typename T7, typename T8, - typename T9 -> -__host__ __device__ -typename detail::enable_if_unwrappable< - thrust::detail::tuple_of_iterator_references, - typename detail::raw_reference< - thrust::detail::tuple_of_iterator_references - >::type ->::type -raw_reference_cast(thrust::detail::tuple_of_iterator_references t) -{ - thrust::detail::raw_reference_caster f; - - // note that we pass raw_reference_tuple_helper, not raw_reference as the unary metafunction - // the different way that raw_reference_tuple_helper unwraps tuples is important - return thrust::detail::tuple_host_device_transform(t, f); -} // end raw_reference_cast - - -} // end thrust - diff --git a/spaces/ma-xu/LIVE/thrust/thrust/mr/new.h b/spaces/ma-xu/LIVE/thrust/thrust/mr/new.h deleted file mode 100644 index f8e4fe0212c1ec22f7ee417e6302cb819972c40c..0000000000000000000000000000000000000000 --- a/spaces/ma-xu/LIVE/thrust/thrust/mr/new.h +++ /dev/null @@ -1,88 +0,0 @@ -/* - * Copyright 2018 NVIDIA Corporation - * - * Licensed under the Apache License, Version 2.0 (the "License"); - * you may not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ - -/*! \file new.h - * \brief Global operator new-based memory resource. - */ - -#pragma once - -#include - -namespace thrust -{ -namespace mr -{ - -/** \addtogroup memory_resources Memory Resources - * \ingroup memory_management_classes - * \{ - */ - -/*! 
A memory resource that uses global operators new and delete to allocate and deallocate memory. Uses alignment-enabled - * overloads when available, otherwise uses regular overloads and implements alignment requirements by itself. - */ -class new_delete_resource THRUST_FINAL : public memory_resource<> -{ -public: - void * do_allocate(std::size_t bytes, std::size_t alignment = THRUST_MR_DEFAULT_ALIGNMENT) THRUST_OVERRIDE - { -#if defined(__cpp_aligned_new) - return ::operator new(bytes, std::align_val_t(alignment)); -#else - // allocate memory for bytes, plus potential alignment correction, - // plus store of the correction offset - void * p = ::operator new(bytes + alignment + sizeof(std::size_t)); - std::size_t ptr_int = reinterpret_cast(p); - // calculate the offset, i.e. how many bytes of correction was necessary - // to get an aligned pointer - std::size_t offset = (ptr_int % alignment) ? (alignment - ptr_int % alignment) : 0; - // calculate the return pointer - char * ptr = static_cast(p) + offset; - // store the offset right after the actually returned value - std::size_t * offset_store = reinterpret_cast(ptr + bytes); - *offset_store = offset; - return static_cast(ptr); -#endif - } - - void do_deallocate(void * p, std::size_t bytes, std::size_t alignment = THRUST_MR_DEFAULT_ALIGNMENT) THRUST_OVERRIDE - { -#if defined(__cpp_aligned_new) -# if defined(__cpp_sized_deallocation) - ::operator delete(p, bytes, std::align_val_t(alignment)); -# else - (void)bytes; - ::operator delete(p, std::align_val_t(alignment)); -# endif -#else - (void)alignment; - char * ptr = static_cast(p); - // calculate where the offset is stored - std::size_t * offset = reinterpret_cast(ptr + bytes); - // calculate the original pointer - p = static_cast(ptr - *offset); - ::operator delete(p); -#endif - } -}; - -/*! \} - */ - -} // end mr -} // end thrust - diff --git a/spaces/ma-xu/LIVE/thrust/thrust/system/cpp/detail/uninitialized_copy.h b/spaces/ma-xu/LIVE/thrust/thrust/system/cpp/detail/uninitialized_copy.h deleted file mode 100644 index c6ae90664ad9538e73febfde86c334011de417c8..0000000000000000000000000000000000000000 --- a/spaces/ma-xu/LIVE/thrust/thrust/system/cpp/detail/uninitialized_copy.h +++ /dev/null @@ -1,22 +0,0 @@ -/* - * Copyright 2008-2013 NVIDIA Corporation - * - * Licensed under the Apache License, Version 2.0 (the "License"); - * you may not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ - -#pragma once - -#include - -// this system has no special version of this algorithm - diff --git a/spaces/ma-xu/LIVE/thrust/thrust/system/detail/generic/memory.h b/spaces/ma-xu/LIVE/thrust/thrust/system/detail/generic/memory.h deleted file mode 100644 index 344b3673d11023557e5d2c483146624aac402cde..0000000000000000000000000000000000000000 --- a/spaces/ma-xu/LIVE/thrust/thrust/system/detail/generic/memory.h +++ /dev/null @@ -1,71 +0,0 @@ -/* - * Copyright 2008-2013 NVIDIA Corporation - * - * Licensed under the Apache License, Version 2.0 (the "License"); - * you may not use this file except in compliance with the License. 
- * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ - - -/*! \file generic/memory.h - * \brief Generic implementation of memory functions. - * Calling some of these is an error. They have no implementation. - */ - -#pragma once - -#include -#include -#include -#include -#include -#include - -namespace thrust -{ -namespace system -{ -namespace detail -{ -namespace generic -{ - -template -__host__ __device__ -void malloc(thrust::execution_policy &, Size); - -template -__host__ __device__ -thrust::pointer malloc(thrust::execution_policy &s, std::size_t n); - -template -__host__ __device__ -void free(thrust::execution_policy &, Pointer); - -template -__host__ __device__ -void assign_value(tag, Pointer1, Pointer2); - -template -__host__ __device__ -void get_value(thrust::execution_policy &, Pointer); - -template -__host__ __device__ -void iter_swap(thrust::execution_policy&, Pointer1, Pointer2); - -} // end generic -} // end detail -} // end system -} // end thrust - -#include - diff --git a/spaces/maminghui/ChatGPT/custom.css b/spaces/maminghui/ChatGPT/custom.css deleted file mode 100644 index 97a1c2e681f4cc09e2237a92b37ab6cadd545a71..0000000000000000000000000000000000000000 --- a/spaces/maminghui/ChatGPT/custom.css +++ /dev/null @@ -1,184 +0,0 @@ -:root { - --chatbot-color-light: #F3F3F3; - --chatbot-color-dark: #121111; -} - -/* status_display */ -#status_display { - display: flex; - min-height: 2.5em; - align-items: flex-end; - justify-content: flex-end; -} -#status_display p { - font-size: .85em; - font-family: monospace; - color: var(--body-text-color-subdued); -} - -#chuanhu_chatbot, #status_display { - transition: all 0.6s; -} - -ol, ul { - list-style-position: inside; - padding-left: 0; -} - -ol li, ul:not(.options) li { - padding-left: 1.5em; - text-indent: -1.5em; -} - -/* 亮色 */ -@media (prefers-color-scheme: light) { - #chuanhu_chatbot { - background-color: var(--chatbot-color-light) !important; - } - [data-testid = "bot"] { - background-color: #FFFFFF !important; - } - [data-testid = "user"] { - background-color: #95EC69 !important; - } -} -/* 暗色 */ -@media (prefers-color-scheme: dark) { - #chuanhu_chatbot { - background-color: var(--chatbot-color-dark) !important; - } - [data-testid = "bot"] { - background-color: #2C2C2C !important; - } - [data-testid = "user"] { - background-color: #26B561 !important; - } - body { - background-color: var(--neutral-950) !important; - } -} - -/* 对话气泡 */ -[class *= "message"] { - border-radius: var(--radius-xl) !important; - border: none; - padding: var(--spacing-xl) !important; - font-size: var(--text-md) !important; - line-height: var(--line-md) !important; -} -[data-testid = "bot"] { - max-width: 85%; - border-bottom-left-radius: 0 !important; -} -[data-testid = "user"] { - max-width: 85%; - width: auto !important; - border-bottom-right-radius: 0 !important; -} -/* 表格 */ -table { - margin: 1em 0; - border-collapse: collapse; - empty-cells: show; -} -td,th { - border: 1.2px solid var(--border-color-primary) !important; - padding: 0.2em; -} -thead { - background-color: rgba(175,184,193,0.2); -} -thead th { - padding: .5em .2em; -} -/* 行内代码 */ -code { - display: inline; - 
white-space: break-spaces; - border-radius: 6px; - margin: 0 2px 0 2px; - padding: .2em .4em .1em .4em; - background-color: rgba(175,184,193,0.2); -} -/* 代码块 */ -pre code { - display: block; - overflow: auto; - white-space: pre; - background-color: hsla(0, 0%, 0%, 80%)!important; - border-radius: 10px; - padding: 1rem 1.2rem 1rem; - margin: 1.2em 2em 1.2em 0.5em; - color: #FFF; - box-shadow: 6px 6px 16px hsla(0, 0%, 0%, 0.2); -} -/* 代码高亮样式 */ -.highlight .hll { background-color: #49483e } -.highlight .c { color: #75715e } /* Comment */ -.highlight .err { color: #960050; background-color: #1e0010 } /* Error */ -.highlight .k { color: #66d9ef } /* Keyword */ -.highlight .l { color: #ae81ff } /* Literal */ -.highlight .n { color: #f8f8f2 } /* Name */ -.highlight .o { color: #f92672 } /* Operator */ -.highlight .p { color: #f8f8f2 } /* Punctuation */ -.highlight .ch { color: #75715e } /* Comment.Hashbang */ -.highlight .cm { color: #75715e } /* Comment.Multiline */ -.highlight .cp { color: #75715e } /* Comment.Preproc */ -.highlight .cpf { color: #75715e } /* Comment.PreprocFile */ -.highlight .c1 { color: #75715e } /* Comment.Single */ -.highlight .cs { color: #75715e } /* Comment.Special */ -.highlight .gd { color: #f92672 } /* Generic.Deleted */ -.highlight .ge { font-style: italic } /* Generic.Emph */ -.highlight .gi { color: #a6e22e } /* Generic.Inserted */ -.highlight .gs { font-weight: bold } /* Generic.Strong */ -.highlight .gu { color: #75715e } /* Generic.Subheading */ -.highlight .kc { color: #66d9ef } /* Keyword.Constant */ -.highlight .kd { color: #66d9ef } /* Keyword.Declaration */ -.highlight .kn { color: #f92672 } /* Keyword.Namespace */ -.highlight .kp { color: #66d9ef } /* Keyword.Pseudo */ -.highlight .kr { color: #66d9ef } /* Keyword.Reserved */ -.highlight .kt { color: #66d9ef } /* Keyword.Type */ -.highlight .ld { color: #e6db74 } /* Literal.Date */ -.highlight .m { color: #ae81ff } /* Literal.Number */ -.highlight .s { color: #e6db74 } /* Literal.String */ -.highlight .na { color: #a6e22e } /* Name.Attribute */ -.highlight .nb { color: #f8f8f2 } /* Name.Builtin */ -.highlight .nc { color: #a6e22e } /* Name.Class */ -.highlight .no { color: #66d9ef } /* Name.Constant */ -.highlight .nd { color: #a6e22e } /* Name.Decorator */ -.highlight .ni { color: #f8f8f2 } /* Name.Entity */ -.highlight .ne { color: #a6e22e } /* Name.Exception */ -.highlight .nf { color: #a6e22e } /* Name.Function */ -.highlight .nl { color: #f8f8f2 } /* Name.Label */ -.highlight .nn { color: #f8f8f2 } /* Name.Namespace */ -.highlight .nx { color: #a6e22e } /* Name.Other */ -.highlight .py { color: #f8f8f2 } /* Name.Property */ -.highlight .nt { color: #f92672 } /* Name.Tag */ -.highlight .nv { color: #f8f8f2 } /* Name.Variable */ -.highlight .ow { color: #f92672 } /* Operator.Word */ -.highlight .w { color: #f8f8f2 } /* Text.Whitespace */ -.highlight .mb { color: #ae81ff } /* Literal.Number.Bin */ -.highlight .mf { color: #ae81ff } /* Literal.Number.Float */ -.highlight .mh { color: #ae81ff } /* Literal.Number.Hex */ -.highlight .mi { color: #ae81ff } /* Literal.Number.Integer */ -.highlight .mo { color: #ae81ff } /* Literal.Number.Oct */ -.highlight .sa { color: #e6db74 } /* Literal.String.Affix */ -.highlight .sb { color: #e6db74 } /* Literal.String.Backtick */ -.highlight .sc { color: #e6db74 } /* Literal.String.Char */ -.highlight .dl { color: #e6db74 } /* Literal.String.Delimiter */ -.highlight .sd { color: #e6db74 } /* Literal.String.Doc */ -.highlight .s2 { color: #e6db74 } /* 
Literal.String.Double */ -.highlight .se { color: #ae81ff } /* Literal.String.Escape */ -.highlight .sh { color: #e6db74 } /* Literal.String.Heredoc */ -.highlight .si { color: #e6db74 } /* Literal.String.Interpol */ -.highlight .sx { color: #e6db74 } /* Literal.String.Other */ -.highlight .sr { color: #e6db74 } /* Literal.String.Regex */ -.highlight .s1 { color: #e6db74 } /* Literal.String.Single */ -.highlight .ss { color: #e6db74 } /* Literal.String.Symbol */ -.highlight .bp { color: #f8f8f2 } /* Name.Builtin.Pseudo */ -.highlight .fm { color: #a6e22e } /* Name.Function.Magic */ -.highlight .vc { color: #f8f8f2 } /* Name.Variable.Class */ -.highlight .vg { color: #f8f8f2 } /* Name.Variable.Global */ -.highlight .vi { color: #f8f8f2 } /* Name.Variable.Instance */ -.highlight .vm { color: #f8f8f2 } /* Name.Variable.Magic */ -.highlight .il { color: #ae81ff } /* Literal.Number.Integer.Long */ diff --git a/spaces/mauriciogtec/w2vec-app/utils.py b/spaces/mauriciogtec/w2vec-app/utils.py deleted file mode 100644 index 487c1cfdc04bc76771a2a8cb02c82bbce3ed355f..0000000000000000000000000000000000000000 --- a/spaces/mauriciogtec/w2vec-app/utils.py +++ /dev/null @@ -1,112 +0,0 @@ -import argparse -import numpy as np -import pickle -import os -import yaml -import torch -import torch.nn as nn -from models import UNetEncoder, Decoder - - -def load_training_data( - path: str, - standardize_weather: bool = False, - standardize_so4: bool = False, - log_so4: bool = False, - remove_zeros: bool = True, - return_pp_data: bool = False, - year_averages: bool = False, -): - with open(path, "rb") as io: - data = pickle.load(io) - C = data["covars_rast"] # [:, weather_cols] - names = data["covars_names"] - if standardize_weather: - C -= C.mean((0, 2, 3), keepdims=True) - C /= C.std((0, 2, 3), keepdims=True) - if year_averages: - Cyearly_average = np.zeros_like(C) - for t in range(C.shape[0]): - if t < 12: - Cyearly_average[t] = np.mean(C[:12], 0) - else: - Cyearly_average[t] = np.mean(C[(t - 12) : t], 0) - C = np.concatenate([C, Cyearly_average], 1) - names = names + [x + ".yavg" for x in names] - names = [x.replace(".", "_") for x in names] - - Y = data["so4_rast"] - M = data["so4_mask"] - M[92:, 185:] = 0.0 # annoying weird corner - M[80:, :60] = 0.0 # annoying weird corner - if remove_zeros: - M = (Y > 0) * M - M = M * np.prod(M, 0) - else: - M = np.stack([M] * Y.shape[0]) - if log_so4: - # Y = np.log(M * Y + 1e-8) - Y = np.log(M * Y + 1.0) - if standardize_so4: - ix = np.where(M) - Y -= Y[ix].mean() - Y /= Y[ix].std() - - if not return_pp_data: - return C, names, Y, M - else: - return C, names, Y, M, data["pp_locs"] - - -def radius_from_dir(s: str, prefix: str): - return int(s.split("/")[-1].split("_")[0].replace(prefix, "")) - - -def load_models(dirs: dict, prefix="h", nd=5): - D = {} - for name, datadir in dirs.items(): - radius = radius_from_dir(datadir, prefix) - args = argparse.Namespace() - with open(os.path.join(datadir, "args.yaml"), "r") as io: - for k, v in yaml.load(io, Loader=yaml.FullLoader).items(): - setattr(args, k, v) - if k == "nbrs_av": - setattr(args, "av_nbrs", v) - elif k == "av_nbrs": - setattr(args, "nbrs_av", v) - - bn_type = "frn" if not hasattr(args, "bn_type") else args.bn_type - mkw = dict( - n_hidden=args.nhidden, - depth=args.depth, - num_res=args.nres, - ksize=args.ksize, - groups=args.groups, - batchnorm=True, - batchnorm_type=bn_type, - ) - - dkw = dict(batchnorm=True, offset=True, batchnorm_type=bn_type) - dev = "cuda" if torch.cuda.is_available() else "cpu" - if not 
args.local and args.nbrs_av == 0: - enc = UNetEncoder(nd, args.nhidden, **mkw) - dec = Decoder(args.nhidden, nd, args.nhidden, **dkw) - else: - enc = nn.Identity() - dec = Decoder(nd, nd, args.nhidden, **dkw) - mod = nn.ModuleDict({"enc": enc, "dec": dec}) - objs = dict( - mod=mod, - args=args, - radius=radius, - nbrs_av=args.nbrs_av, - local=args.local, - ) - mod.eval() - for p in mod.parameters(): - p.requires_grad = False - weights_path = os.path.join(datadir, "model.pt") - state_dict = torch.load(weights_path, map_location=torch.device("cpu")) - mod.load_state_dict(state_dict) - D[datadir] = objs - return D diff --git a/spaces/merve/measuring-fairness/server-side/fill-in-the-blank/gender-over-time-colab/style.css b/spaces/merve/measuring-fairness/server-side/fill-in-the-blank/gender-over-time-colab/style.css deleted file mode 100644 index 8165ac5b403d085f7013b25cefc267a6639a0d79..0000000000000000000000000000000000000000 --- a/spaces/merve/measuring-fairness/server-side/fill-in-the-blank/gender-over-time-colab/style.css +++ /dev/null @@ -1,70 +0,0 @@ -body{ - font-family: menlo, Consolas, 'Lucida Console', monospace; - margin: 10px; - margin-left: 20px; - width: 1130px; - background: #fff; -} - -.tooltip { - top: -1000px; - position: fixed; - padding: 10px; - background: rgba(255, 255, 255, .90); - border: 1px solid lightgray; - pointer-events: none; -} -.tooltip-hidden{ - opacity: 0; - transition: all .3s; - transition-delay: .1s; -} - -@media (max-width: 590px){ - div.tooltip{ - bottom: -1px; - width: calc(100%); - left: -1px !important; - right: -1px !important; - top: auto !important; - width: auto !important; - } -} - -svg{ - overflow: visible; -} - -.domain{ - display: none; -} - -.axis{ - opacity: .7; -} - -text{ - /*pointer-events: none;*/ - text-shadow: 0 1.5px 0 #fff, 1.5px 0 0 #fff, 0 -1.5px 0 #fff, -1.5px 0 0 #fff; -} - - -#graph > div{ - /*display: inline-block;*/ -} - -.active path{ - stroke: #f0f; - /*stroke-width: 2;*/ - opacity: 1; -} -.active text{ - fill: #f0f; - opacity: 1 !important; - font-size: 14px; - -} - -p{ - max-width: 650px; -} \ No newline at end of file diff --git a/spaces/merve/uncertainty-calibration/source/measuring-fairness/style.css b/spaces/merve/uncertainty-calibration/source/measuring-fairness/style.css deleted file mode 100644 index 27a4ab72371dd17fe64ae938268ef37f7fb16247..0000000000000000000000000000000000000000 --- a/spaces/merve/uncertainty-calibration/source/measuring-fairness/style.css +++ /dev/null @@ -1,274 +0,0 @@ -/* Copyright 2020 Google LLC. All Rights Reserved. - -Licensed under the Apache License, Version 2.0 (the "License"); -you may not use this file except in compliance with the License. -You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -See the License for the specific language governing permissions and -limitations under the License. 
-==============================================================================*/ - - -@media (max-width: 925px) { - #graph > div{ - position: relative; - top: 25px; - } -} - - - -body{ - --colors-well: rgb(179, 201, 204); - --colors-sick: rgb(241, 85, 85); - --lcolors-well: rgb(217, 228, 230); - --lcolors-sick: rgb(246, 145, 145); - --dcolors-well: rgb(63, 70, 71); - --dcolors-sick: rgb(84, 30, 30); -} - - -.tooltip { - top: -1000px; - position: fixed; - padding: 10px; - background: rgba(255, 255, 255, .90); - border: 1px solid lightgray; - pointer-events: none; -} -.tooltip-hidden{ - opacity: 0; - transition: all .3s; - transition-delay: .1s; -} - -@media (max-width: 590px){ - div.tooltip{ - bottom: -1px; - width: calc(100%); - left: -1px !important; - right: -1px !important; - top: auto !important; - width: auto !important; - } -} - -svg{ - overflow: visible; -} - -.domain{ - display: none; -} - -text{ - /*pointer-events: none;*/ - /*text-shadow: 0 1px 0 #fff, 1px 0 0 #fff, 0 -1px 0 #fff, -1px 0 0 #fff;*/ -} - - - -#graph > div{ - margin-top: 20px; -} - - -#end{ - height: 600px; -} - - -.mono{ - font-family: monospace; -} - - - - -.mini .axis{ - font-size: 10px; - line-height: 12px !important; - position: relative; - top: 40px; -} - -.axis{ - font-size: 12px; -} -.axis{ - color: #999; -} -.axis text{ - fill: #999; -} -.axis line{ - stroke: #ccc; -} - -div.axis b{ - margin-bottom: -10px; - display: block; -} - -.init-hidden{ - opacity: 0; -} - - -.highlight{ - color: #fff; - padding-left: 3px; - padding-right: 3px; - padding-top: 1px; - padding-bottom: 1px; - border-radius: 3px; -} - -.highlight.grey{ background: var(--colors-well); } -.highlight.box{ - border: 1px solid #000; - border-radius: 0px; - color: #000; - padding-bottom: 2px; -} - -.weepeople { - font-family: "WeePeople"; -} - - -wee{ - font-family: "WeePeople"; - font-size: 30px; - height: 22px; - display: inline; - position: relative; - top: 5px; - color: var(--colors-well); - padding: 1px; - margin: -1px; - line-height: 3px; -} -wee.sick{ - color: var(--colors-sick); -} - -wee.bg-sick{ - background: var(--lcolors-sick); -} -wee.bg-well{ - background: var(--lcolors-well); -} - -bg{ - background: var(--lcolors-well); - padding-left: 2px; - padding-right: 2px; -} - -bg.sick{ - background: var(--lcolors-sick); -} - -wee.sick.bg-well{ - -webkit-text-stroke: .6px var(--dcolors-sick); -} -wee.well.bg-sick{ - -webkit-text-stroke: .6px var(--dcolors-well); -} - - - -.equation{ - margin: 7px; - position: relative; -} - - -.gated #hidden{ - visibility: hidden; -} - -.gated.opened #hidden{ - visibility: unset; -} -.gated.opened #default{ - display: none; -} - -.gated #default{ - height: 0px; -} - - - - - - - -text.weepeople{ - stroke: #000; - stroke-width: 0; - /*stroke-width: .2;*/ -} - - - - -.post-summary, .headline{ - display: none; -} - - -i{ - pointer-events: none; -} - -.slider{ - position: relative; - z-index: 100; -} - - - - - -.cursor{ - animation-duration: 1s; - animation-name: bgblink; - display: inline-block; - animation-iteration-count: infinite; - animation-direction: alternate; - cursor: pointer; - transition: opacity .5s; - stroke: #000; -} - -@keyframes bgblink { - from { - /*fill: black;*/ - stroke-width: 0px; - } - - to { - /*fill: green;*/ - stroke-width: 16px; - } -} - -.no-blink .cursor{ - /*background: rgba(255,255,0,0) !important;*/ - animation: 0; -} - - - -#adjust-text{ - padding-top: 15px; - display: block; -} diff --git a/spaces/mfrashad/ClothingGAN/models/stylegan2/stylegan2-pytorch/op/__init__.py 
b/spaces/mfrashad/ClothingGAN/models/stylegan2/stylegan2-pytorch/op/__init__.py deleted file mode 100644 index d0918d92285955855be89f00096b888ee5597ce3..0000000000000000000000000000000000000000 --- a/spaces/mfrashad/ClothingGAN/models/stylegan2/stylegan2-pytorch/op/__init__.py +++ /dev/null @@ -1,2 +0,0 @@ -from .fused_act import FusedLeakyReLU, fused_leaky_relu -from .upfirdn2d import upfirdn2d diff --git a/spaces/mithril-security/blind_chat/.svelte-kit/generated/client/app.js b/spaces/mithril-security/blind_chat/.svelte-kit/generated/client/app.js deleted file mode 100644 index b5f62179ccb5f3df015c22c8d5e5196155f4cd4c..0000000000000000000000000000000000000000 --- a/spaces/mithril-security/blind_chat/.svelte-kit/generated/client/app.js +++ /dev/null @@ -1,35 +0,0 @@ -export { matchers } from './matchers.js'; - -export const nodes = [ - () => import('./nodes/0'), - () => import('./nodes/1'), - () => import('./nodes/2'), - () => import('./nodes/3'), - () => import('./nodes/4'), - () => import('./nodes/5'), - () => import('./nodes/6'), - () => import('./nodes/7'), - () => import('./nodes/8'), - () => import('./nodes/9'), - () => import('./nodes/10') -]; - -export const server_loads = [0]; - -export const dictionary = { - "/": [2], - "/conversations": [~4], - "/conversation/[id]": [~3], - "/login": [~5], - "/login/callback": [~6], - "/logout": [~7], - "/privacy": [8], - "/r/[id]": [~9], - "/settings": [~10] - }; - -export const hooks = { - handleError: (({ error }) => { console.error(error) }), -}; - -export { default as root } from '../root.svelte'; \ No newline at end of file diff --git a/spaces/ml595/myfirstspace/README.md b/spaces/ml595/myfirstspace/README.md deleted file mode 100644 index e81639ac36892d9e6d8de85469ec3495b2ca1485..0000000000000000000000000000000000000000 --- a/spaces/ml595/myfirstspace/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Myfirstspace -emoji: 💻 -colorFrom: green -colorTo: purple -sdk: gradio -sdk_version: 3.32.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/mpatel57/WOUAF-Text-to-Image/dnnlib/__init__.py b/spaces/mpatel57/WOUAF-Text-to-Image/dnnlib/__init__.py deleted file mode 100644 index 2f08cf36f11f9b0fd94c1b7caeadf69b98375b04..0000000000000000000000000000000000000000 --- a/spaces/mpatel57/WOUAF-Text-to-Image/dnnlib/__init__.py +++ /dev/null @@ -1,9 +0,0 @@ -# Copyright (c) 2021, NVIDIA CORPORATION. All rights reserved. -# -# NVIDIA CORPORATION and its licensors retain all intellectual property -# and proprietary rights in and to this software, related documentation -# and any modifications thereto. Any use, reproduction, disclosure or -# distribution of this software and related documentation without an express -# license agreement from NVIDIA CORPORATION is strictly prohibited. - -from .util import EasyDict, make_cache_dir_path diff --git a/spaces/mshukor/UnIVAL/data/mm_data/caption_dataset.py b/spaces/mshukor/UnIVAL/data/mm_data/caption_dataset.py deleted file mode 100644 index cb95b6b6d766103556c1ccd0ea458edc7dfe740d..0000000000000000000000000000000000000000 --- a/spaces/mshukor/UnIVAL/data/mm_data/caption_dataset.py +++ /dev/null @@ -1,195 +0,0 @@ -# Copyright 2022 The OFA-Sys Team. -# All rights reserved. -# This source code is licensed under the Apache 2.0 license -# found in the LICENSE file in the root directory. 
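As an aside, the dnnlib stub deleted above re-exports `EasyDict` from `dnnlib.util`. For readers unfamiliar with that helper, here is a minimal, illustrative sketch of the attribute-access dict such code relies on; it is an assumption about the usual semantics, not the actual `dnnlib.util` implementation, which may differ in detail.

```python
# Minimal sketch of an attribute-access dict in the spirit of dnnlib's EasyDict.
# Illustration only; the real dnnlib.util implementation may differ.

class EasyDict(dict):
    """dict subclass that exposes keys as attributes (d.foo == d['foo'])."""

    def __getattr__(self, name):
        try:
            return self[name]
        except KeyError as exc:
            raise AttributeError(name) from exc

    def __setattr__(self, name, value):
        self[name] = value

    def __delattr__(self, name):
        del self[name]


if __name__ == "__main__":
    cfg = EasyDict(lr=1e-4, batch_size=32)
    cfg.epochs = 10                  # attribute assignment stores a key
    print(cfg["epochs"], cfg.lr)     # 10 0.0001
```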
- -from io import BytesIO - -import logging -import warnings -import string - -import numpy as np -import torch -import base64 -from torchvision import transforms - -from PIL import Image, ImageFile - -from data import data_utils -from data.ofa_dataset import OFADataset - -ImageFile.LOAD_TRUNCATED_IMAGES = True -ImageFile.MAX_IMAGE_PIXELS = None -Image.MAX_IMAGE_PIXELS = None - -logger = logging.getLogger(__name__) -warnings.filterwarnings("ignore", "(Possibly )?corrupt EXIF data", UserWarning) - -IMAGENET_DEFAULT_MEAN = (0.485, 0.456, 0.406) -IMAGENET_DEFAULT_STD = (0.229, 0.224, 0.225) - -from utils.vision_helper import RandomAugment -import utils.transforms as T - -import os - -def collate(samples, pad_idx, eos_idx): - if len(samples) == 0: - return {} - - def merge(key): - return data_utils.collate_tokens( - [s[key] for s in samples], - pad_idx, - eos_idx=eos_idx, - ) - - id = np.array([s["id"] for s in samples]) - src_tokens = merge("source") - src_lengths = torch.LongTensor([s["source"].ne(pad_idx).long().sum() for s in samples]) - - patch_images = torch.stack([sample['patch_image'] for sample in samples], dim=0) - patch_masks = torch.cat([sample['patch_mask'] for sample in samples]) - - patch_types = torch.cat([sample['patch_type'] for sample in samples]) - - prev_output_tokens = None - target = None - if samples[0].get("target", None) is not None: - target = merge("target") - tgt_lengths = torch.LongTensor([s["target"].ne(pad_idx).long().sum() for s in samples]) - ntokens = tgt_lengths.sum().item() - - if samples[0].get("prev_output_tokens", None) is not None: - prev_output_tokens = merge("prev_output_tokens") - else: - ntokens = src_lengths.sum().item() - - batch = { - "id": id, - "nsentences": len(samples), - "ntokens": ntokens, - "net_input": { - "src_tokens": src_tokens, - "src_lengths": src_lengths, - "patch_images": patch_images, - "patch_masks": patch_masks, - "prev_output_tokens": prev_output_tokens, - "patch_types": patch_types, - }, - "target": target, - } - - - return batch - - -class CaptionDataset(OFADataset): - def __init__( - self, - split, - dataset, - bpe, - src_dict, - tgt_dict=None, - max_src_length=128, - max_tgt_length=30, - patch_image_size=224, - imagenet_default_mean_and_std=False, - scst=False, - use_dataaug=False, - read_from_img_path=False, - image_dir='/gpfsscratch/rech/dyf/ugz83ue/data', - ): - super().__init__(split, dataset, bpe, src_dict, tgt_dict) - self.max_src_length = max_src_length - self.max_tgt_length = max_tgt_length - self.patch_image_size = patch_image_size - self.scst = scst - - self.transtab = str.maketrans({key: None for key in string.punctuation}) - - self.read_from_img_path = read_from_img_path - - if imagenet_default_mean_and_std: - mean = IMAGENET_DEFAULT_MEAN - std = IMAGENET_DEFAULT_STD - else: - mean = [0.5, 0.5, 0.5] - std = [0.5, 0.5, 0.5] - self.split = split - if self.split != 'train' or not use_dataaug: - self.patch_resize_transform = transforms.Compose([ - lambda image: image.convert("RGB"), - transforms.Resize((patch_image_size, patch_image_size), interpolation=Image.BICUBIC), - transforms.ToTensor(), - transforms.Normalize(mean=mean, std=std), - ]) - else: - scales = np.arange(patch_image_size, 481).tolist() - self.patch_resize_transform = transforms.Compose([ - lambda image: image.convert("RGB"), - T.RandomResize(scales, max_size=672), - transforms.CenterCrop(patch_image_size), - RandomAugment(2, 7, isPIL=True, augs=['Identity', 'AutoContrast', 'Equalize', 'Brightness', 'Sharpness', - 'ShearX', 'ShearY', 'TranslateX', 
'TranslateY', 'Rotate']), - transforms.ToTensor(), - transforms.Normalize(mean=mean, std=std), - ]) - - if type(bpe).__name__ == 'GPT2BPE': - self.prompt = " what does the image describe?" - elif type(bpe).__name__ == 'BertBPE': - self.prompt = "图片描述了什么内容?" - - self.image_dir = image_dir - - def __getitem__(self, index): - uniq_id, image, caption = self.dataset[index] - - if self.read_from_img_path or '.jpg' in image: - image_path = os.path.join(self.image_dir, image) - image = Image.open(image_path).convert("RGB") - else: - image = Image.open(BytesIO(base64.urlsafe_b64decode(image))) - - patch_image = self.patch_resize_transform(image) - patch_mask = torch.tensor([True]) - - if self.split == 'train' and not self.scst: - caption = caption.translate(self.transtab).strip() - caption_token_list = caption.strip().split() - tgt_caption = ' '.join(caption_token_list[:self.max_tgt_length]) - else: - caption = ' '.join(caption.strip().split()) - caption_list = [cap.translate(self.transtab).strip() for cap in caption.strip().split('&&')] - tgt_caption = '&&'.join(caption_list) - src_item = self.encode_text(self.prompt) - tgt_item = self.encode_text(" {}".format(tgt_caption)) - - src_item = torch.cat([self.bos_item, src_item, self.eos_item]) - target_item = torch.cat([tgt_item, self.eos_item]) - prev_output_item = torch.cat([self.bos_item, tgt_item]) - - patch_type = torch.tensor([0]) - - example = { - "id": uniq_id, - "source": src_item, - "patch_image": patch_image, - "patch_mask": patch_mask, - "target": target_item, - "prev_output_tokens": prev_output_item, - "patch_type": patch_type, - } - return example - - def collater(self, samples, pad_to_length=None): - """Merge a list of samples to form a mini-batch. - Args: - samples (List[dict]): samples to collate - Returns: - dict: a mini-batch containing the data of the task - """ - return collate(samples, pad_idx=self.pad, eos_idx=self.eos) diff --git a/spaces/musadac/VilanOCR-Urdu-English-Chinese/app.py b/spaces/musadac/VilanOCR-Urdu-English-Chinese/app.py deleted file mode 100644 index 45a16afac638596dd3db6d71f60efbe6811efe8d..0000000000000000000000000000000000000000 --- a/spaces/musadac/VilanOCR-Urdu-English-Chinese/app.py +++ /dev/null @@ -1,102 +0,0 @@ -import streamlit as st -import torch -from PIL import Image -from huggingface_hub import hf_hub_download -from transformers import VisionEncoderDecoderModel - - -import warnings -from contextlib import contextmanager -from transformers import MBartTokenizer, ViTImageProcessor, XLMRobertaTokenizer -from transformers import ProcessorMixin - - -class CustomOCRProcessor(ProcessorMixin): - attributes = ["image_processor", "tokenizer"] - image_processor_class = "AutoImageProcessor" - tokenizer_class = "AutoTokenizer" - - def __init__(self, image_processor=None, tokenizer=None, **kwargs): - if "feature_extractor" in kwargs: - warnings.warn( - "The `feature_extractor` argument is deprecated and will be removed in v5, use `image_processor`" - " instead.", - FutureWarning, - ) - feature_extractor = kwargs.pop("feature_extractor") - - image_processor = image_processor if image_processor is not None else feature_extractor - if image_processor is None: - raise ValueError("You need to specify an `image_processor`.") - if tokenizer is None: - raise ValueError("You need to specify a `tokenizer`.") - - super().__init__(image_processor, tokenizer) - self.current_processor = self.image_processor - self._in_target_context_manager = False - - def __call__(self, *args, **kwargs): - # For backward compatibility - if 
self._in_target_context_manager: - return self.current_processor(*args, **kwargs) - - images = kwargs.pop("images", None) - text = kwargs.pop("text", None) - if len(args) > 0: - images = args[0] - args = args[1:] - - if images is None and text is None: - raise ValueError("You need to specify either an `images` or `text` input to process.") - - if images is not None: - inputs = self.image_processor(images, *args, **kwargs) - if text is not None: - encodings = self.tokenizer(text, **kwargs) - - if text is None: - return inputs - elif images is None: - return encodings - else: - inputs["labels"] = encodings["input_ids"] - return inputs - - def batch_decode(self, *args, **kwargs): - return self.tokenizer.batch_decode(*args, **kwargs) - - def decode(self, *args, **kwargs): - return self.tokenizer.decode(*args, **kwargs) - - -image_processor = ViTImageProcessor.from_pretrained( - 'microsoft/swin-base-patch4-window12-384-in22k' -) -tokenizer = MBartTokenizer.from_pretrained( - 'facebook/mbart-large-50' -) -processortext2 = CustomOCRProcessor(image_processor,tokenizer) - -import os -huggingface_token = os.environ.get("HUGGINGFACE_TOKEN") -model = {} -model['single-urdu'] = "musadac/vilanocr-single-urdu" -model['multi-urdu'] = "musadac/ViLanOCR" -model['medical'] = "musadac/vilanocr-multi-medical" -model['chinese'] = "musadac/vilanocr-single-chinese" - -st.title("Image OCR with musadac/vilanocr") -model_name = st.selectbox("Choose an OCR model", ["single-urdu", "multi-urdu", "medical","chinese" ]) -uploaded_file = st.file_uploader("Choose an image", type=["jpg", "jpeg", "png"]) -if uploaded_file is not None: - model2 = VisionEncoderDecoderModel.from_pretrained(model[model_name], use_auth_token=huggingface_token) - img = Image.open(uploaded_file).convert("RGB") - pixel_values = processortext2(img.convert("RGB"), return_tensors="pt").pixel_values - - with torch.no_grad(): - generated_ids = model2.generate(pixel_values) - - result = processortext2.batch_decode(generated_ids, skip_special_tokens=True)[0] - st.write("OCR Result:") - st.write(result) - diff --git a/spaces/mygyasir/Real-Time-Voice-Cloning/vocoder/audio.py b/spaces/mygyasir/Real-Time-Voice-Cloning/vocoder/audio.py deleted file mode 100644 index 116396261e184b9968971bd06fabc6f525e0c2fe..0000000000000000000000000000000000000000 --- a/spaces/mygyasir/Real-Time-Voice-Cloning/vocoder/audio.py +++ /dev/null @@ -1,108 +0,0 @@ -import math -import numpy as np -import librosa -import vocoder.hparams as hp -from scipy.signal import lfilter -import soundfile as sf - - -def label_2_float(x, bits) : - return 2 * x / (2**bits - 1.) - 1. - - -def float_2_label(x, bits) : - assert abs(x).max() <= 1.0 - x = (x + 1.) 
* (2**bits - 1) / 2 - return x.clip(0, 2**bits - 1) - - -def load_wav(path) : - return librosa.load(str(path), sr=hp.sample_rate)[0] - - -def save_wav(x, path) : - sf.write(path, x.astype(np.float32), hp.sample_rate) - - -def split_signal(x) : - unsigned = x + 2**15 - coarse = unsigned // 256 - fine = unsigned % 256 - return coarse, fine - - -def combine_signal(coarse, fine) : - return coarse * 256 + fine - 2**15 - - -def encode_16bits(x) : - return np.clip(x * 2**15, -2**15, 2**15 - 1).astype(np.int16) - - -mel_basis = None - - -def linear_to_mel(spectrogram): - global mel_basis - if mel_basis is None: - mel_basis = build_mel_basis() - return np.dot(mel_basis, spectrogram) - - -def build_mel_basis(): - return librosa.filters.mel(hp.sample_rate, hp.n_fft, n_mels=hp.num_mels, fmin=hp.fmin) - - -def normalize(S): - return np.clip((S - hp.min_level_db) / -hp.min_level_db, 0, 1) - - -def denormalize(S): - return (np.clip(S, 0, 1) * -hp.min_level_db) + hp.min_level_db - - -def amp_to_db(x): - return 20 * np.log10(np.maximum(1e-5, x)) - - -def db_to_amp(x): - return np.power(10.0, x * 0.05) - - -def spectrogram(y): - D = stft(y) - S = amp_to_db(np.abs(D)) - hp.ref_level_db - return normalize(S) - - -def melspectrogram(y): - D = stft(y) - S = amp_to_db(linear_to_mel(np.abs(D))) - return normalize(S) - - -def stft(y): - return librosa.stft(y=y, n_fft=hp.n_fft, hop_length=hp.hop_length, win_length=hp.win_length) - - -def pre_emphasis(x): - return lfilter([1, -hp.preemphasis], [1], x) - - -def de_emphasis(x): - return lfilter([1], [1, -hp.preemphasis], x) - - -def encode_mu_law(x, mu) : - mu = mu - 1 - fx = np.sign(x) * np.log(1 + mu * np.abs(x)) / np.log(1 + mu) - return np.floor((fx + 1) / 2 * mu + 0.5) - - -def decode_mu_law(y, mu, from_labels=True) : - if from_labels: - y = label_2_float(y, math.log2(mu)) - mu = mu - 1 - x = np.sign(y) / mu * ((1 + mu) ** np.abs(y) - 1) - return x - diff --git a/spaces/myrad01/Inpaint-Anything/third_party/segment-anything/setup.py b/spaces/myrad01/Inpaint-Anything/third_party/segment-anything/setup.py deleted file mode 100644 index 2c0986317eb576a14ec774205c88fdee3cc6c0b3..0000000000000000000000000000000000000000 --- a/spaces/myrad01/Inpaint-Anything/third_party/segment-anything/setup.py +++ /dev/null @@ -1,18 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. - -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. 
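The vocoder's `audio.py` deleted above ends with mu-law companding helpers. The following standalone sketch mirrors those formulas (`encode_mu_law`, `decode_mu_law`, and the `label_2_float` rescaling) to show the encode/decode round trip; it deliberately avoids the repo's `hparams` module so it runs on its own, and the 9-bit `mu` value is just an example choice.

```python
# Self-contained check of the mu-law companding round trip used by the vocoder
# (mirrors encode_mu_law / decode_mu_law above; written standalone so it runs
# without the repo's hparams module).
import numpy as np

def encode_mu_law(x, mu):
    mu = mu - 1
    fx = np.sign(x) * np.log(1 + mu * np.abs(x)) / np.log(1 + mu)
    return np.floor((fx + 1) / 2 * mu + 0.5)

def decode_mu_law(y, mu, from_labels=True):
    if from_labels:
        y = 2 * y / (mu - 1) - 1.0   # same rescaling as label_2_float with bits = log2(mu)
    mu = mu - 1
    return np.sign(y) / mu * ((1 + mu) ** np.abs(y) - 1)

if __name__ == "__main__":
    x = np.linspace(-1.0, 1.0, 9)            # a few samples in [-1, 1]
    labels = encode_mu_law(x, mu=2 ** 9)      # 9-bit labels in [0, 511]
    x_hat = decode_mu_law(labels, mu=2 ** 9)
    print(np.max(np.abs(x - x_hat)))          # small quantization error
```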
- -from setuptools import find_packages, setup - -setup( - name="segment_anything", - version="1.0", - install_requires=[], - packages=find_packages(exclude="notebooks"), - extras_require={ - "all": ["matplotlib", "pycocotools", "opencv-python", "onnx", "onnxruntime"], - "dev": ["flake8", "isort", "black", "mypy"], - }, -) diff --git a/spaces/naotokui/TR-ChatGPT/app.py b/spaces/naotokui/TR-ChatGPT/app.py deleted file mode 100644 index b476a353e6c904681af9e1510504724b9dbd1841..0000000000000000000000000000000000000000 --- a/spaces/naotokui/TR-ChatGPT/app.py +++ /dev/null @@ -1,165 +0,0 @@ -#%% -import openai -import numpy as np -import pretty_midi -import re -import numpy as np -import os -import gradio as gr -import librosa - -openai.api_key = os.environ.get("OPENAI_API_KEY") - -# sample data -markdown_table_sample = """8th - -| | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | -|----|---|---|---|---|---|---|---|---| -| BD | x | | x | | | | x | | -| SD | | | | x | | | | x | -| CH | x | | x | | x | | x | | -| OH | | | | x | | | x | | -| LT | | | | | | x | | | -| MT | | x | | | x | | | | -| HT | x | | | x | | | | | -""" - -markdown_table_sample2 = """16th - -| | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10| 11| 12| 13| 14| 15| 16| -|----|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---| -| BD | x | | x | | | | x | | x | | x | | x | | x | | -| SD | | | | x | | | | x | | | x | | | | x | | -| CH | x | | x | | x | | x | | x | | x | | x | | x | | -| OH | | | | x | | | x | | | | | x | | | x | | -| LT | | | | | | x | | | | | | | | x | | | -| MT | | x | | | x | | | | | x | | | x | | | | -| HT | x | | | x | | | | | x | | | x | | | | | -""" - -MIDI_NOTENUM = { - "BD": 36, - "SD": 38, - "CH": 42, - "HH": 44, - "OH": 46, - "LT": 48, - "MT": 48, - "HT": 50, - "CP": 50, - "CB": 56, -} -SR = 44100 - -MAX_QUERY = 5 - -def convert_table_to_audio(markdown_table, resolution=8, bpm = 120.0): - # convert table to array - rhythm_pattern = [] - for line in markdown_table.split('\n')[2:]: - rhythm_pattern.append(line.split('|')[1:-1]) - print(rhythm_pattern) - - # table to MIDI - pm = pretty_midi.PrettyMIDI(initial_tempo=bpm) # midi object - pm_inst = pretty_midi.Instrument(0, is_drum=True) # midi instrument - pm.instruments.append(pm_inst) - - note_length = (60. / bpm) * (4.0 / resolution) # note duration - - beat_num = resolution - for i in range(len(rhythm_pattern)): - for j in range(1, len(rhythm_pattern[i])): - beat_num = j # for looping - inst = rhythm_pattern[i][0].strip().upper() - velocity = 0 - if 'x' == rhythm_pattern[i][j].strip(): - velocity = 120 - if 'o' == rhythm_pattern[i][j].strip(): - velocity = 65 - if velocity > 0: - if inst in MIDI_NOTENUM.keys(): - midinote = MIDI_NOTENUM[inst] - note = pretty_midi.Note(velocity=velocity, pitch=midinote, start=note_length * (j-1)+0.0001, end=note_length * j) - pm_inst.notes.append(note) - - # convert to audio - audio_data = pm.fluidsynth() - - # cut off the reverb section - audio_data = audio_data[:int(SR*note_length*beat_num)] # for looping, cut the tail - return audio_data - -def get_answer(question): - response = openai.ChatCompletion.create( - model="gpt-3.5-turbo", - messages=[ - {"role": "system", "content": "You are a rhythm generator. "}, - {"role": "user", "content": "Please generate a rhythm pattern in a Markdown table. Time resolution is the 8th note. You use the following drums. Kick drum:BD, Snare drum:SD, Closed-hihat:CH, Open-hihat:OH, Low-tom:LT, Mid-tom:MT, High-tom:HT. use 'x' for an accented beat, 'o' for a weak beat. 
You need to write the time resolution first."}, - {"role": "assistant", "content": markdown_table_sample}, - # {"role": "user", "content": "Please generate a rhythm pattern. The resolution is the fourth note. You use the following drums. Kick drum:BD, Snare drum:SD, Closed-hihat:CH, Open-hihat:OH, Low-tom:LT, Mid-tom:MT, High-tom:HT. use 'x' for an accented beat, 'o' for a weak beat. You need to write the time resolution first."}, - # {"role": "assistant", "content": markdown_table_sample}, - {"role": "user", "content": question} - ] - ) - return response["choices"][0]["message"]["content"] - -def generate_rhythm(query, state): - print(state) - if state["gen_count"] > MAX_QUERY and len(state["user_token"]) == 0: - return [None, "You need to set your ChatGPT API Key to try more than %d times" % MAX_QUERY] - state["gen_count"] = state["gen_count"] + 1 - - # get respance from ChatGPT - text_output = get_answer(query) - - # Try to use the first row as time resolution - resolution_text = text_output.split('|')[0] - try: - resolution_text = re.findall(r'\d+', resolution_text)[0] - resolution = int(resolution_text) - except: - resolution = 8 # default - - # Extract rhythm table - try: - table = "|" + "|".join(text_output.split('|')[1:-1]) + "|" - audio_data = convert_table_to_audio(table, resolution) - - # loop x4 - audio_data = np.tile(audio_data, 4) - if np.max(audio_data) == 0.0: - audio_data = np.ones(1) - except: - audio_data = np.ones(1) - - return [(SR, audio_data), text_output] -# %% - -def on_token_change(user_token, state): - print(user_token) - openai.api_key = user_token or os.environ.get("OPENAI_API_KEY") - state["user_token"] = user_token - return state - -with gr.Blocks() as demo: - state = gr.State({"gen_count": 0, "user_token":""}) - with gr.Row(): - with gr.Column(): - # gr.Markdown("Ask ChatGPT to generate rhythm patterns") - gr.Markdown("***Hey TR-ChatGPT, give me a drum pattern!***") - gr.Markdown("Use the following drums. Kick drum:BD, Snare drum:SD, Closed-hihat:CH, Open-hihat:OH, Low-tom:LT, Mid-tom:MT, High-tom:HT and 'x' for an accented beat, 'o' for a weak beat!") - with gr.Row(): - with gr.Column(): - inp = gr.Textbox(placeholder="Give me a Hiphop rhythm pattern with some reggae twist!") - btn = gr.Button("Generate") - with gr.Column(): - out_audio = gr.Audio() - out_text = gr.Textbox(placeholder="ChatGPT output") - with gr.Row(): - with gr.Column(): - gr.Markdown("Enter your own OpenAI API Key to try out more than 5 times. 
You can get it [here](https://platform.openai.com/account/api-keys).") - user_token = gr.Textbox(placeholder="OpenAI API Key", type="password", show_label=False) - btn.click(fn=generate_rhythm, inputs=[inp, state], outputs=[out_audio, out_text]) - user_token.change(on_token_change, inputs=[user_token, state], outputs=[state]) -demo.launch() diff --git a/spaces/nateraw/jupyterlab-test2/app.py b/spaces/nateraw/jupyterlab-test2/app.py deleted file mode 100644 index 8e495588550126cc2fc26f3a782ebb371192d07b..0000000000000000000000000000000000000000 --- a/spaces/nateraw/jupyterlab-test2/app.py +++ /dev/null @@ -1,67 +0,0 @@ -import json -import os -import tempfile -from pathlib import Path - -import gradio as gr -from huggingface_hub import duplicate_space, upload_folder, login - - -def configure_training(this_space_id, csv_data, character, do_extract_vocals=False): - character = character.strip().replace('-', '').replace('_', '').replace(" ", "").lower() - ds_cfg = { - "character": character, - "do_extract_vocals": do_extract_vocals, - } - with tempfile.TemporaryDirectory() as tempdir: - temp_path = Path(tempdir) - (temp_path / 'data.csv').write_text(csv_data) - (temp_path / 'dataset_config.json').write_text(json.dumps(ds_cfg, indent=2, sort_keys=False)) - upload_folder(repo_id=this_space_id, folder_path=tempdir, path_in_repo=".", repo_type="space") - print("Would normally upload here!") - print(list(temp_path.glob("*"))) - return "OK! Rebooting here in a sec to start training" - -description = """ -Configure training session for voice cloning. - -Please provide a CSV containing YouTube IDs, start times, and end times that we can use to gather the dataset for you. - -It should look like this: - -``` -ytid,start,end -YYiQxHM0L-w,300,660 -Ga-CcToGiUM,3105,3300 -``` -""" - -if os.environ.get("HF_TOKEN", None) is not None: - login(os.environ.get("HF_TOKEN")) - interface = gr.Interface( - configure_training, - inputs=[ - gr.Textbox(label="This Space's Repo ID", info="The repo ID of this space (ex. nateraw/voice-cloning-training-ui)."), - gr.TextArea(value="ytid,start,end\n", label="CSV Data", max_lines=50), - gr.Textbox(placeholder="Name of character that you're cloning."), - gr.Checkbox( - False, - label="Isolate Vocals", - info="If checked, we use demucs to isolate vocals from each audio file. You want to use this if the provided clips contain background music" - ) - ], - outputs="text", - title="Configure Training Session", - description=description, - ) -else: - with gr.Blocks() as interface: - gr.Markdown(""" -## Please Set The HF_TOKEN Environment Variable - -Go to the settings tab of this space and add a new environment variable named `HF_TOKEN` with its value being **a token with write access** from [here](https://hf.co/settings/tokens). -""") - - -if __name__ == '__main__': - interface.launch() \ No newline at end of file diff --git a/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Fisica Blatt Solucionario.md b/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Fisica Blatt Solucionario.md deleted file mode 100644 index 30a018ad7d6fd197d321022ca1c74cdad4fdbf73..0000000000000000000000000000000000000000 --- a/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Fisica Blatt Solucionario.md +++ /dev/null @@ -1,16 +0,0 @@ -
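The TR-ChatGPT app deleted above converts a ChatGPT-generated markdown drum grid into audio via `convert_table_to_audio`. The sketch below isolates just the parsing step of that idea, turning such a table into `(instrument, step, velocity)` events with the same conventions ('x' for an accented hit at velocity 120, 'o' for a weak hit at 65); the MIDI rendering with pretty_midi/fluidsynth is intentionally left out, and the velocity mapping is taken from the code above rather than from any external spec.

```python
# Standalone sketch of the table-parsing step used by the TR-ChatGPT app above:
# it turns a markdown drum grid into (instrument, step, velocity) events.

def parse_drum_table(markdown_table):
    events = []
    # skip the header and separator rows, keep the instrument rows
    rows = [line.split("|")[1:-1] for line in markdown_table.strip().splitlines()[2:]]
    for row in rows:
        inst = row[0].strip().upper()
        for step, cell in enumerate(row[1:], start=1):
            mark = cell.strip().lower()
            if mark == "x":
                events.append((inst, step, 120))   # accented hit
            elif mark == "o":
                events.append((inst, step, 65))    # weak hit
    return events

if __name__ == "__main__":
    table = """| | 1 | 2 | 3 | 4 |
|----|---|---|---|---|
| BD | x | | o | |
| SD | | | x | |"""
    for inst, step, velocity in parse_drum_table(table):
        print(inst, step, velocity)   # BD 1 120 / BD 3 65 / SD 3 120
```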

      ¿Dónde encontrar el solucionario del libro Fundamentos de Física de Frank Blatt?

      -

      El libro Fundamentos de Física de Frank Blatt es una obra clásica que aborda los principios básicos de la física desde una perspectiva moderna y aplicada. El libro cubre temas como mecánica, termodinámica, óptica, electricidad y magnetismo, entre otros, con un enfoque conceptual y numérico que facilita el aprendizaje y la resolución de problemas.

      -

      fisica blatt solucionario


      Download === https://urlcod.com/2uIcyN



      -

      El solucionario del libro Fundamentos de Física de Frank Blatt es un recurso muy útil para los estudiantes que quieren comprobar sus respuestas, repasar los conceptos o profundizar en el estudio de la física. Sin embargo, encontrar el solucionario no es una tarea fácil, ya que no se encuentra disponible en formato digital ni impreso en la mayoría de las librerías o sitios web.

      -

      Una posible forma de acceder al solucionario del libro Fundamentos de Física de Frank Blatt es buscarlo en sitios web especializados en compartir documentos académicos, como Scribd[^2^] o Studocu[^1^]. Estos sitios permiten descargar o leer en línea el solucionario en formato PDF, aunque a veces se requiere una suscripción o un registro previo. Otra opción es buscarlo en sitios web de descarga directa o torrents, aunque estos pueden tener riesgos de seguridad o violar los derechos de autor.

      -

      En conclusión, el solucionario del libro Fundamentos de Física de Frank Blatt es un material muy valioso para los estudiantes de física, pero no es fácil de encontrar. Se recomienda consultar fuentes confiables y respetar las normas legales al momento de buscarlo o usarlo.

      Here are a few more paragraphs: - -

      El libro Fundamentos de Física de Frank Blatt es una referencia indispensable para los estudiantes de física de todos los niveles, desde el bachillerato hasta la universidad. El libro presenta los conceptos de forma clara y sencilla, con ejemplos cotidianos y aplicaciones prácticas. Además, el libro incluye una gran variedad de ejercicios y problemas que ayudan a desarrollar las habilidades y el razonamiento físico.

      -

      El solucionario del libro Fundamentos de Física de Frank Blatt es un complemento ideal para el libro, ya que ofrece las soluciones detalladas y explicadas de todos los ejercicios y problemas propuestos. El solucionario permite a los estudiantes verificar sus resultados, corregir sus errores, aprender de sus aciertos y mejorar su comprensión de la física. El solucionario también puede ser útil para los profesores que quieren preparar sus clases o evaluar a sus alumnos.

      -

      -

      Por lo tanto, el solucionario del libro Fundamentos de Física de Frank Blatt es un recurso muy valioso para el estudio de la física, pero no es fácil de conseguir. Se sugiere buscarlo en sitios web confiables y legales, o solicitarlo directamente al autor o a la editorial. Así se podrá aprovechar al máximo el potencial del libro y del solucionario para aprender y disfrutar de la física.

      7196e7f11a
      -
      -
      \ No newline at end of file diff --git a/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Hindi Nursery Rhymes Video Free Download Mp4 BEST.md b/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Hindi Nursery Rhymes Video Free Download Mp4 BEST.md deleted file mode 100644 index 664d3317992bd1c4ebf305c577a9afe0d4b915b2..0000000000000000000000000000000000000000 --- a/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Hindi Nursery Rhymes Video Free Download Mp4 BEST.md +++ /dev/null @@ -1,32 +0,0 @@ - -

      How to Download Hindi Nursery Rhymes Videos for Free

      -

      Hindi nursery rhymes are a great way to introduce your children to the rich and diverse culture of India. They can also help them learn new words, improve their pronunciation, and develop their cognitive and musical skills. But how can you download Hindi nursery rhymes videos for free and enjoy them offline?

      -

      In this article, we will show you some of the best sources of Hindi nursery rhymes videos on the internet, and how to download them in MP4 format for free. You will also learn about the benefits of Hindi nursery rhymes for your children's development.

      -

      hindi nursery rhymes video free download mp4


      DOWNLOAD ››› https://urlcod.com/2uIbdF



      -

      The Best Sources of Hindi Nursery Rhymes Videos

      -

      There are many websites and apps that offer Hindi nursery rhymes videos for free, but not all of them are safe and reliable. Some may contain viruses, malware, or inappropriate content. Some may also have poor quality or incomplete videos. To avoid these problems, we recommend you to use the following sources:

      -
        -
      • YouTube: YouTube is the most popular and widely used video-sharing platform in the world. It has a huge collection of Hindi nursery rhymes videos for children of all ages. You can find videos from famous channels like Infobells[^2^], CVS 3D Rhymes[^1^], and ChuChu TV[^3^]. You can also search for specific rhymes or topics using keywords.
      • -
      • Videoder: Videoder is a free video downloader app that allows you to download videos from YouTube and other websites in various formats, including MP4. You can also choose the quality and resolution of the videos according to your preference. Videoder is easy to use and has a user-friendly interface.
      • -
      -

      How to Download Hindi Nursery Rhymes Videos for Free

      -

      To download Hindi nursery rhymes videos for free using Videoder, follow these steps:

      -
        -
      1. Download and install Videoder on your device from https://www.videoder.com/.
      2. -
      3. Open Videoder and tap on the YouTube icon.
      4. -
      5. Search for the Hindi nursery rhymes video that you want to download.
      6. -
      7. Tap on the video and select the download option.
      8. -
      9. Choose the format (MP4) and quality (720p or 1080p) of the video.
      10. -
      11. Tap on the download button and wait for the video to be downloaded.
      12. -
      13. Enjoy watching the video offline on your device.
      14. -
      -

      The Benefits of Hindi Nursery Rhymes for Children

      -

      Hindi nursery rhymes are not only fun and entertaining, but also educational and beneficial for your children's development. Here are some of the benefits of Hindi nursery rhymes for children:

      -
        -
      • Cultural awareness: Hindi nursery rhymes expose your children to the diverse and rich culture of India. They can learn about the history, traditions, festivals, values, and customs of different regions and communities in India. They can also appreciate the beauty and diversity of the Hindi language and its dialects.
      • -
      • Linguistic skills: Hindi nursery rhymes help your children learn new words, phrases, and expressions in Hindi. They can also improve their pronunciation, vocabulary, grammar, and comprehension skills. They can also enhance their communication and listening skills by singing along with the videos.
      • -
      • Cognitive skills: Hindi nursery rhymes stimulate your children's brain development and cognitive skills. They can help them develop their memory, attention, concentration, logic, reasoning, and problem-solving skills. They can also foster their creativity and imagination by introducing them to different characters, stories, and scenarios.
      • -
      • Musical skills: Hindi nursery rhymes introduce your children to the musical elements of rhythm, melody, harmony, tempo, pitch, and tone. They can help them develop their musical ear, sense of timing, coordination, and expression. They can also boost their confidence and self-esteem by performing in front of others.
      • -
      • Social skills: Hindi nursery rhymes encourage your children to interact with others and form bonds with their peers. They can help them learn social skills such

        e93f5a0c3f
        -
        -
        \ No newline at end of file diff --git a/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Hot Item New Cpu Cooling Fan For Mac.md b/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Hot Item New Cpu Cooling Fan For Mac.md deleted file mode 100644 index f42d3ed08ae18d80bb614fbd5bfa4f2792ec85ab..0000000000000000000000000000000000000000 --- a/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Hot Item New Cpu Cooling Fan For Mac.md +++ /dev/null @@ -1,40 +0,0 @@ - -

        Hot Item New Cpu Cooling Fan For Mac: A Review

        -

        If you are looking for a replacement or upgrade for your MacBook Pro's cooling fan, you might be interested in the Hot Item New Cpu Cooling Fan For Mac. This is a compatible fan that works with various models of MacBook Pro, including the Retina display and Touch Bar versions. It claims to offer better performance, lower noise, and longer lifespan than the original fan. But is it worth buying? Here are some pros and cons of this product based on customer reviews and product specifications.

        -

        Pros

        -
          -
        • The fan is easy to install and comes with the necessary tools and screws.
        • -
        • The fan is quiet and does not produce any annoying whirring or rattling sounds.
        • -
        • The fan cools down the laptop effectively and prevents overheating issues.
        • -
        • The fan is durable and made of high-quality materials.
        • -
        • The fan is affordable and offers good value for money.
        • -
        -

        Cons

        -
          -
        • The fan may not be compatible with some older or newer models of MacBook Pro, so make sure to check the compatibility before buying.
        • -
        • The fan may require some adjustment or calibration to work properly with the laptop's sensors and software.
        • -
        • The fan may not have the same appearance or design as the original fan, which may affect the aesthetics of the laptop.
        • -
        -

        Conclusion

        -

        The Hot Item New Cpu Cooling Fan For Mac is a decent option for anyone who needs a new or improved cooling fan for their MacBook Pro. It offers good performance, low noise, and long lifespan at a reasonable price. However, it may not be suitable for every model of MacBook Pro, and it may require some tweaking to work optimally. Therefore, it is advisable to do some research and read customer reviews before purchasing this product.

        -

        Hot Item New Cpu Cooling Fan For Mac


        DOWNLOAD ✏ ✏ ✏ https://urlcod.com/2uIbSF



        - -

        How to Install the Hot Item New Cpu Cooling Fan For Mac

        -

        If you decide to buy the Hot Item New Cpu Cooling Fan For Mac, you will need to install it on your laptop. This is not a difficult task, but it requires some care and caution. Here are the steps to follow:

        -
          -
        1. Shut down your laptop and unplug the power cord.
        2. -
        3. Flip over your laptop and remove the bottom case screws with a screwdriver. Keep the screws in a safe place.
        4. -
        5. Lift off the bottom case and set it aside.
        6. -
        7. Locate the cooling fan on the left or right side of the laptop, depending on your model. It is usually attached to a metal heat sink with a black cable.
        8. -
        9. Disconnect the cable from the logic board by gently pulling it out.
        10. -
        11. Remove the screws that secure the fan to the heat sink. Keep the screws in a safe place.
        12. -
        13. Lift off the old fan and set it aside.
        14. -
        15. Place the new fan on the heat sink and align it with the screw holes.
        16. -
        17. Secure the new fan with the screws you removed earlier.
        18. -
        19. Connect the cable from the new fan to the logic board. Make sure it is firmly inserted.
        20. -
        21. Replace the bottom case and secure it with the screws you removed earlier.
        22. -
        23. Plug in the power cord and turn on your laptop.
        24. -
        -

        Congratulations! You have successfully installed the Hot Item New Cpu Cooling Fan For Mac. You can now enjoy a cooler and quieter laptop experience.

        81aa517590
        -
        -
        \ No newline at end of file diff --git a/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Kismet Love Paisa Dilli In Tamil Pdf Free Download.md b/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Kismet Love Paisa Dilli In Tamil Pdf Free Download.md deleted file mode 100644 index a3b7a10c7ae91da69195f4045a21b94bd863847d..0000000000000000000000000000000000000000 --- a/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Kismet Love Paisa Dilli In Tamil Pdf Free Download.md +++ /dev/null @@ -1,20 +0,0 @@ -
        -

        Kismet Love Paisa Dilli: A One Night Comedy Adventure in Delhi

        -

        If you are looking for a fun and entertaining movie to watch, you might want to check out Kismet Love Paisa Dilli, a 2012 Hindi comedy/crime/adventure film starring Vivek Oberoi and Mallika Sherawat. The film is set in one winter night in Delhi, where a middle class college student named Lokesh falls in love with a girl named Lovina, who is anchoring a fashion show. However, their romance is interrupted by a series of mishaps, as someone plants a sting operation tape in Lokesh's pocket, which has a corrupt minister talking of buying and selling MLAs and media heads from his Swiss bank accounts. Lokesh and Lovina are chased by both the bad guys and the good guys who want the tape back, and they have to run across Delhi from cold streets to glitzy farm house parties, while gradually falling in love.

        -

        The film is a satire on corruption and a tribute to the city of Delhi, with its colorful characters, language and culture. The film has some hilarious dialogues, catchy songs and thrilling twists that will keep you hooked till the end. The film also has a message of how a common man who talks against corruption can get tempted by money when given an opportunity.

        -

        Kismet Love Paisa Dilli in tamil pdf free download


        Downloadhttps://urlcod.com/2uIcvN



        -

        If you want to watch this movie online or download it for free, you can search for Kismet Love Paisa Dilli in tamil pdf free download on the internet. There are many websites that offer this movie in tamil dubbed version or with subtitles, in pdf format that you can easily download and enjoy on your device. However, be careful of the quality and legality of these websites, as some of them might have viruses or malware that can harm your device or data. Also, respect the rights of the filmmakers and actors who have worked hard to make this movie, and avoid piracy as much as possible.

        -

        Kismet Love Paisa Dilli is a movie that will make you laugh, thrill and think at the same time. It is a perfect choice for a weekend night or a rainy day, when you want to have some fun and entertainment with your friends or family. So, what are you waiting for? Search for Kismet Love Paisa Dilli in tamil pdf free download today and enjoy this one night comedy adventure in Delhi!

        - -

        If you are curious about the cast and crew of Kismet Love Paisa Dilli, here are some details for you. The film is written and directed by Sanjay M. Khanduri, who also made the critically acclaimed Ek Chalis Ki Last Local in 2007. The film stars Vivek Oberoi as Lokesh, a Delhi university guy who is witty and adventurous. Vivek Oberoi is a popular actor who has appeared in many films such as Company, Saathiya, Yuva, Rakta Charitra and PM Narendra Modi. He has also won several awards and nominations for his performances.

        -

        The film also stars Mallika Sherawat as Lovina, a girl next door who is smart and charming. Mallika Sherawat is a famous actress who has worked in Bollywood and Hollywood films such as Murder, Pyar Ke Side Effects, Welcome, The Myth and Hisss. She is known for her bold and glamorous roles and her outspoken views on social issues.

        -

        The film also features Neha Dhupia as Anamika, a journalist who is involved in the sting operation. Neha Dhupia is a former beauty queen and an actress who has acted in films such as Qayamat: City Under Threat, Kyaa Kool Hai Hum, Singh Is Kinng, Tumhari Sulu and Lust Stories. She is also a host of a podcast show called No Filter Neha.

        -

        The film also has Ashutosh Rana as Kaptaan Saab, a corrupt politician who is the main antagonist of the story. Ashutosh Rana is a veteran actor who has played many memorable roles in films such as Dushman, Sangharsh, Raaz, Awarapan and Mulk. He is known for his versatile and powerful acting skills.

        -

        The film also has Anshuman Jha as Nunna, a funny sidekick of Lokesh who helps him in his escapades. Anshuman Jha is an actor and producer who has worked in films such as Love Sex Aur Dhokha, X: Past Is Present, No Fathers In Kashmir and Ludo. He is also a theatre artist and a founder of a production company called Theatre Red.

        -

        -

        The film also has other supporting actors such as Vishal C. Bhardwaj, Tahir Raj Bhasin, Vishwanath Chatterjee, Aseem Hattangadi, Rajat Kaul, Ashutosh Kaushik, Naveen Kaushik, Rajinder Sharma and Oscar Navin, who play various roles in the film.

        -

        The film has music composed by Amjad Nadeem and Santokh Singh Dhaliwal, with lyrics by Shabbir Ahmed and Santokh Singh Dhaliwal. The film has two catchy songs, \"Dhishkiyaon\" and \"Appy Budday Manayenge\", which are sung by Sonu Nigam, Ritu Pathak, Mika Singh and Santokh Singh Dhaliwal. The songs are upbeat and peppy, and suit the mood of the film.

        -

        The film has cinematography by Sunita Radia and editing by Sandeep Francis. The film has been produced by Amit Chandra under the banner of Invincible Entertainment. The film was released on October 5, 2012, and received mixed reviews from critics and audiences. The film was praised for its comedy, dialogues and performances, but criticized for its plot, direction and length. The film was also compared to the director's previous film, Ek Chalis Ki Last Local, which was considered to be superior.

        -

        Kismet Love Paisa Dilli is a film that you can watch if you are looking for some laughs and entertainment with a touch of Delhi flavor. The film has some moments that will make you smile, chuckle and even laugh out loud. The film also has some scenes that will make you think about the issue of corruption and how it affects

        7b8c122e87
        -
        -
        \ No newline at end of file diff --git a/spaces/noman1408/speechToSpeechGPT/app.py b/spaces/noman1408/speechToSpeechGPT/app.py deleted file mode 100644 index 48bd8090ca6a18f23c37fbac761c23833759facf..0000000000000000000000000000000000000000 --- a/spaces/noman1408/speechToSpeechGPT/app.py +++ /dev/null @@ -1,77 +0,0 @@ -#https://github.com/hackingthemarkets/chatgpt-api-whisper-api-voice-assistant - -#!pip install -r requirements.txt - -# FROM https://www.youtube.com/watch?v=Si0vFx_dJ5Y - -""" -challanges i faced: -- micrphone audio was not getting into. Then i found probably windows by default cannot process wav file. so i installed - choco install ffmpeg in command prompt in admin mode. - https://stackoverflow.com/questions/30770155/ffprobe-or-avprobe-not-found-please-install-one --this line did not work: #filename = os.path.dirname(__file__) + '\\audio.mp3' - then i changed the line: - https://stackoverflow.com/questions/16771894/python-nameerror-global-name-file-is-not-defined -- the subprocessl.call is only for mac users. so i got the funtion speak() using gTTS from: - https://stackoverflow.com/questions/51164040/gtts-direct-output -- - -""" -#!pip install pygobject - -import gradio as gr -import os -import openai#, subprocess -openai.api_key = "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" - -messages = [{"role": "system", "content": 'You are a story teller. Respond to all input in 25 words or less in rap song format like Drake'}] - -from gtts import gTTS -import os -import playsound -from pathlib import Path - -''' -def speak(text): - tts = gTTS(text=text, lang='en') - - #filename = "abc.mp3" - #filename = os.path.dirname(__file__) + '\\audio.mp3' - #filename = os.path.dirname(os.path.abspath("__file__")) + '\\audio.mp3' - filename = 'audio.mp3' - print(filename) - tts.save(filename) - playsound.playsound(filename) - #playsound.playsound('\\audio.mp3') - os.remove(filename) -''' - -def transcribe(audio): - global messages - - audio_filename_with_extension = audio + '.wav' - os.rename(audio, audio_filename_with_extension) - - audio_file = open(audio_filename_with_extension, "rb") - transcript = openai.Audio.transcribe("whisper-1", audio_file) - - messages.append({"role": "user", "content": transcript["text"]}) - - response = openai.ChatCompletion.create(model="gpt-3.5-turbo", messages=messages) - - system_message = response["choices"][0]["message"] - messages.append(system_message) - - #speak(system_message['content']) - subprocess.call(["say", system_message['content']]) - - chat_transcript = "" - for message in messages: - if message['role'] != 'system': - chat_transcript += message['role'] + ": " + message['content'] + "\n\n" - - return chat_transcript - -ui = gr.Interface(fn=transcribe, inputs=gr.Audio(source="microphone", type="filepath"), outputs="text") -ui.launch -#(share=True) diff --git a/spaces/nugrahatheo/Prediction-of-Credit-Card-Default/README.md b/spaces/nugrahatheo/Prediction-of-Credit-Card-Default/README.md deleted file mode 100644 index 7b52652cb7a1e1309a750b4f86821c99681f46c2..0000000000000000000000000000000000000000 --- a/spaces/nugrahatheo/Prediction-of-Credit-Card-Default/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Prediction Of Credit Card Default -emoji: 🏢 -colorFrom: green -colorTo: red -sdk: streamlit -sdk_version: 1.25.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/open-source-metrics/transformers-checkpoints/README.md 
b/spaces/open-source-metrics/transformers-checkpoints/README.md deleted file mode 100644 index 44ca29508ef77236b057aa9dc8dfd85299872efa..0000000000000000000000000000000000000000 --- a/spaces/open-source-metrics/transformers-checkpoints/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Transformers Checkpoints -emoji: 🦀 -colorFrom: blue -colorTo: yellow -sdk: gradio -sdk_version: 3.4 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference \ No newline at end of file diff --git a/spaces/owkin/substra/substra_template/README.md b/spaces/owkin/substra/substra_template/README.md deleted file mode 100644 index 28643660e2a4c153b4f294e1a65dbaa362648a77..0000000000000000000000000000000000000000 --- a/spaces/owkin/substra/substra_template/README.md +++ /dev/null @@ -1,10 +0,0 @@ ---- -title: Substra Trainer -emoji: 🚀 -colorFrom: red -colorTo: gray -sdk: docker -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/pablodawson/ldm3d-inpainting/diffuserslocal/docs/source/ko/using-diffusers/inpaint.md b/spaces/pablodawson/ldm3d-inpainting/diffuserslocal/docs/source/ko/using-diffusers/inpaint.md deleted file mode 100644 index c817a8fa80dd6c06c7fe6e9ef763b4874bd0b2e1..0000000000000000000000000000000000000000 --- a/spaces/pablodawson/ldm3d-inpainting/diffuserslocal/docs/source/ko/using-diffusers/inpaint.md +++ /dev/null @@ -1,75 +0,0 @@ - - -# Text-guided 이미지 인페인팅(inpainting) - -[[open-in-colab]] - -[`StableDiffusionInpaintPipeline`]은 마스크와 텍스트 프롬프트를 제공하여 이미지의 특정 부분을 편집할 수 있도록 합니다. 이 기능은 인페인팅 작업을 위해 특별히 훈련된 [`runwayml/stable-diffusion-inpainting`](https://huggingface.co/runwayml/stable-diffusion-inpainting)과 같은 Stable Diffusion 버전을 사용합니다. - -먼저 [`StableDiffusionInpaintPipeline`] 인스턴스를 불러옵니다: - -```python -import PIL -import requests -import torch -from io import BytesIO - -from diffusers import StableDiffusionInpaintPipeline - -pipeline = StableDiffusionInpaintPipeline.from_pretrained( - "runwayml/stable-diffusion-inpainting", - torch_dtype=torch.float16, -) -pipeline = pipeline.to("cuda") -``` - -나중에 교체할 강아지 이미지와 마스크를 다운로드하세요: - -```python -def download_image(url): - response = requests.get(url) - return PIL.Image.open(BytesIO(response.content)).convert("RGB") - - -img_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo.png" -mask_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo_mask.png" - -init_image = download_image(img_url).resize((512, 512)) -mask_image = download_image(mask_url).resize((512, 512)) -``` - -이제 마스크를 다른 것으로 교체하라는 프롬프트를 만들 수 있습니다: - -```python -prompt = "Face of a yellow cat, high resolution, sitting on a park bench" -image = pipe(prompt=prompt, image=init_image, mask_image=mask_image).images[0] -``` - -`image` | `mask_image` | `prompt` | output | -:-------------------------:|:-------------------------:|:-------------------------:|-------------------------:| -drawing | drawing | ***Face of a yellow cat, high resolution, sitting on a park bench*** | drawing | - - - -이전의 실험적인 인페인팅 구현에서는 품질이 낮은 다른 프로세스를 사용했습니다. 이전 버전과의 호환성을 보장하기 위해 새 모델이 포함되지 않은 사전학습된 파이프라인을 불러오면 이전 인페인팅 방법이 계속 적용됩니다. - - - -아래 Space에서 이미지 인페인팅을 직접 해보세요! 
- - diff --git a/spaces/pablodawson/ldm3d-inpainting/diffuserslocal/src/diffusers/pipelines/kandinsky2_2/pipeline_kandinsky2_2_inpainting.py b/spaces/pablodawson/ldm3d-inpainting/diffuserslocal/src/diffusers/pipelines/kandinsky2_2/pipeline_kandinsky2_2_inpainting.py deleted file mode 100644 index 5ca9b871af922f7bd2e7f63b6a022ac1dfd73ee2..0000000000000000000000000000000000000000 --- a/spaces/pablodawson/ldm3d-inpainting/diffuserslocal/src/diffusers/pipelines/kandinsky2_2/pipeline_kandinsky2_2_inpainting.py +++ /dev/null @@ -1,497 +0,0 @@ -# Copyright 2023 The HuggingFace Team. All rights reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - -from copy import deepcopy -from typing import Callable, List, Optional, Union - -import numpy as np -import PIL -import torch -import torch.nn.functional as F -from packaging import version -from PIL import Image - -from ... import __version__ -from ...models import UNet2DConditionModel, VQModel -from ...schedulers import DDPMScheduler -from ...utils import ( - logging, -) -from ...utils.torch_utils import randn_tensor -from ..pipeline_utils import DiffusionPipeline, ImagePipelineOutput - - -logger = logging.get_logger(__name__) # pylint: disable=invalid-name - -EXAMPLE_DOC_STRING = """ - Examples: - ```py - >>> from diffusers import KandinskyV22InpaintPipeline, KandinskyV22PriorPipeline - >>> from diffusers.utils import load_image - >>> import torch - >>> import numpy as np - - >>> pipe_prior = KandinskyV22PriorPipeline.from_pretrained( - ... "kandinsky-community/kandinsky-2-2-prior", torch_dtype=torch.float16 - ... ) - >>> pipe_prior.to("cuda") - - >>> prompt = "a hat" - >>> image_emb, zero_image_emb = pipe_prior(prompt, return_dict=False) - - >>> pipe = KandinskyV22InpaintPipeline.from_pretrained( - ... "kandinsky-community/kandinsky-2-2-decoder-inpaint", torch_dtype=torch.float16 - ... ) - >>> pipe.to("cuda") - - >>> init_image = load_image( - ... "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main" - ... "/kandinsky/cat.png" - ... ) - - >>> mask = np.zeros((768, 768), dtype=np.float32) - >>> mask[:250, 250:-250] = 1 - - >>> out = pipe( - ... image=init_image, - ... mask_image=mask, - ... image_embeds=image_emb, - ... negative_image_embeds=zero_image_emb, - ... height=768, - ... width=768, - ... num_inference_steps=50, - ... 
) - - >>> image = out.images[0] - >>> image.save("cat_with_hat.png") - ``` -""" - - -# Copied from diffusers.pipelines.kandinsky2_2.pipeline_kandinsky2_2.downscale_height_and_width -def downscale_height_and_width(height, width, scale_factor=8): - new_height = height // scale_factor**2 - if height % scale_factor**2 != 0: - new_height += 1 - new_width = width // scale_factor**2 - if width % scale_factor**2 != 0: - new_width += 1 - return new_height * scale_factor, new_width * scale_factor - - -# Copied from diffusers.pipelines.kandinsky.pipeline_kandinsky_inpaint.prepare_mask -def prepare_mask(masks): - prepared_masks = [] - for mask in masks: - old_mask = deepcopy(mask) - for i in range(mask.shape[1]): - for j in range(mask.shape[2]): - if old_mask[0][i][j] == 1: - continue - if i != 0: - mask[:, i - 1, j] = 0 - if j != 0: - mask[:, i, j - 1] = 0 - if i != 0 and j != 0: - mask[:, i - 1, j - 1] = 0 - if i != mask.shape[1] - 1: - mask[:, i + 1, j] = 0 - if j != mask.shape[2] - 1: - mask[:, i, j + 1] = 0 - if i != mask.shape[1] - 1 and j != mask.shape[2] - 1: - mask[:, i + 1, j + 1] = 0 - prepared_masks.append(mask) - return torch.stack(prepared_masks, dim=0) - - -# Copied from diffusers.pipelines.kandinsky.pipeline_kandinsky_inpaint.prepare_mask_and_masked_image -def prepare_mask_and_masked_image(image, mask, height, width): - r""" - Prepares a pair (mask, image) to be consumed by the Kandinsky inpaint pipeline. This means that those inputs will - be converted to ``torch.Tensor`` with shapes ``batch x channels x height x width`` where ``channels`` is ``3`` for - the ``image`` and ``1`` for the ``mask``. - - The ``image`` will be converted to ``torch.float32`` and normalized to be in ``[-1, 1]``. The ``mask`` will be - binarized (``mask > 0.5``) and cast to ``torch.float32`` too. - - Args: - image (Union[np.array, PIL.Image, torch.Tensor]): The image to inpaint. - It can be a ``PIL.Image``, or a ``height x width x 3`` ``np.array`` or a ``channels x height x width`` - ``torch.Tensor`` or a ``batch x channels x height x width`` ``torch.Tensor``. - mask (_type_): The mask to apply to the image, i.e. regions to inpaint. - It can be a ``PIL.Image``, or a ``height x width`` ``np.array`` or a ``1 x height x width`` - ``torch.Tensor`` or a ``batch x 1 x height x width`` ``torch.Tensor``. - height (`int`, *optional*, defaults to 512): - The height in pixels of the generated image. - width (`int`, *optional*, defaults to 512): - The width in pixels of the generated image. - - - Raises: - ValueError: ``torch.Tensor`` images should be in the ``[-1, 1]`` range. ValueError: ``torch.Tensor`` mask - should be in the ``[0, 1]`` range. ValueError: ``mask`` and ``image`` should have the same spatial dimensions. - TypeError: ``mask`` is a ``torch.Tensor`` but ``image`` is not - (ot the other way around). - - Returns: - tuple[torch.Tensor]: The pair (mask, image) as ``torch.Tensor`` with 4 - dimensions: ``batch x channels x height x width``. 
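    Example:
        A minimal usage sketch, assuming a PIL image and a same-sized NumPy
        mask; the 64x64 size and the names ``img``/``msk`` are illustrative
        assumptions, not taken from the surrounding code.

        >>> import numpy as np
        >>> from PIL import Image
        >>> img = Image.new("RGB", (64, 64))
        >>> msk = np.zeros((64, 64), dtype=np.float32)
        >>> mask_t, image_t = prepare_mask_and_masked_image(img, msk, 64, 64)
        >>> tuple(image_t.shape), tuple(mask_t.shape)
        ((1, 3, 64, 64), (1, 1, 64, 64))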
- """ - - if image is None: - raise ValueError("`image` input cannot be undefined.") - - if mask is None: - raise ValueError("`mask_image` input cannot be undefined.") - - if isinstance(image, torch.Tensor): - if not isinstance(mask, torch.Tensor): - raise TypeError(f"`image` is a torch.Tensor but `mask` (type: {type(mask)} is not") - - # Batch single image - if image.ndim == 3: - assert image.shape[0] == 3, "Image outside a batch should be of shape (3, H, W)" - image = image.unsqueeze(0) - - # Batch and add channel dim for single mask - if mask.ndim == 2: - mask = mask.unsqueeze(0).unsqueeze(0) - - # Batch single mask or add channel dim - if mask.ndim == 3: - # Single batched mask, no channel dim or single mask not batched but channel dim - if mask.shape[0] == 1: - mask = mask.unsqueeze(0) - - # Batched masks no channel dim - else: - mask = mask.unsqueeze(1) - - assert image.ndim == 4 and mask.ndim == 4, "Image and Mask must have 4 dimensions" - assert image.shape[-2:] == mask.shape[-2:], "Image and Mask must have the same spatial dimensions" - assert image.shape[0] == mask.shape[0], "Image and Mask must have the same batch size" - - # Check image is in [-1, 1] - if image.min() < -1 or image.max() > 1: - raise ValueError("Image should be in [-1, 1] range") - - # Check mask is in [0, 1] - if mask.min() < 0 or mask.max() > 1: - raise ValueError("Mask should be in [0, 1] range") - - # Binarize mask - mask[mask < 0.5] = 0 - mask[mask >= 0.5] = 1 - - # Image as float32 - image = image.to(dtype=torch.float32) - elif isinstance(mask, torch.Tensor): - raise TypeError(f"`mask` is a torch.Tensor but `image` (type: {type(image)} is not") - else: - # preprocess image - if isinstance(image, (PIL.Image.Image, np.ndarray)): - image = [image] - - if isinstance(image, list) and isinstance(image[0], PIL.Image.Image): - # resize all images w.r.t passed height an width - image = [i.resize((width, height), resample=Image.BICUBIC, reducing_gap=1) for i in image] - image = [np.array(i.convert("RGB"))[None, :] for i in image] - image = np.concatenate(image, axis=0) - elif isinstance(image, list) and isinstance(image[0], np.ndarray): - image = np.concatenate([i[None, :] for i in image], axis=0) - - image = image.transpose(0, 3, 1, 2) - image = torch.from_numpy(image).to(dtype=torch.float32) / 127.5 - 1.0 - - # preprocess mask - if isinstance(mask, (PIL.Image.Image, np.ndarray)): - mask = [mask] - - if isinstance(mask, list) and isinstance(mask[0], PIL.Image.Image): - mask = [i.resize((width, height), resample=PIL.Image.LANCZOS) for i in mask] - mask = np.concatenate([np.array(m.convert("L"))[None, None, :] for m in mask], axis=0) - mask = mask.astype(np.float32) / 255.0 - elif isinstance(mask, list) and isinstance(mask[0], np.ndarray): - mask = np.concatenate([m[None, None, :] for m in mask], axis=0) - - mask[mask < 0.5] = 0 - mask[mask >= 0.5] = 1 - mask = torch.from_numpy(mask) - - mask = 1 - mask - - return mask, image - - -class KandinskyV22InpaintPipeline(DiffusionPipeline): - """ - Pipeline for text-guided image inpainting using Kandinsky2.1 - - This model inherits from [`DiffusionPipeline`]. Check the superclass documentation for the generic methods the - library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) - - Args: - scheduler ([`DDIMScheduler`]): - A scheduler to be used in combination with `unet` to generate image latents. - unet ([`UNet2DConditionModel`]): - Conditional U-Net architecture to denoise the image embedding. 
- movq ([`VQModel`]): - MoVQ Decoder to generate the image from the latents. - """ - - model_cpu_offload_seq = "unet->movq" - - def __init__( - self, - unet: UNet2DConditionModel, - scheduler: DDPMScheduler, - movq: VQModel, - ): - super().__init__() - - self.register_modules( - unet=unet, - scheduler=scheduler, - movq=movq, - ) - self.movq_scale_factor = 2 ** (len(self.movq.config.block_out_channels) - 1) - self._warn_has_been_called = False - - # Copied from diffusers.pipelines.unclip.pipeline_unclip.UnCLIPPipeline.prepare_latents - def prepare_latents(self, shape, dtype, device, generator, latents, scheduler): - if latents is None: - latents = randn_tensor(shape, generator=generator, device=device, dtype=dtype) - else: - if latents.shape != shape: - raise ValueError(f"Unexpected latents shape, got {latents.shape}, expected {shape}") - latents = latents.to(device) - - latents = latents * scheduler.init_noise_sigma - return latents - - @torch.no_grad() - def __call__( - self, - image_embeds: Union[torch.FloatTensor, List[torch.FloatTensor]], - image: Union[torch.FloatTensor, PIL.Image.Image], - mask_image: Union[torch.FloatTensor, PIL.Image.Image, np.ndarray], - negative_image_embeds: Union[torch.FloatTensor, List[torch.FloatTensor]], - height: int = 512, - width: int = 512, - num_inference_steps: int = 100, - guidance_scale: float = 4.0, - num_images_per_prompt: int = 1, - generator: Optional[Union[torch.Generator, List[torch.Generator]]] = None, - latents: Optional[torch.FloatTensor] = None, - output_type: Optional[str] = "pil", - callback: Optional[Callable[[int, int, torch.FloatTensor], None]] = None, - callback_steps: int = 1, - return_dict: bool = True, - ): - """ - Function invoked when calling the pipeline for generation. - - Args: - image_embeds (`torch.FloatTensor` or `List[torch.FloatTensor]`): - The clip image embeddings for text prompt, that will be used to condition the image generation. - image (`PIL.Image.Image`): - `Image`, or tensor representing an image batch which will be inpainted, *i.e.* parts of the image will - be masked out with `mask_image` and repainted according to `prompt`. - mask_image (`np.array`): - Tensor representing an image batch, to mask `image`. White pixels in the mask will be repainted, while - black pixels will be preserved. If `mask_image` is a PIL image, it will be converted to a single - channel (luminance) before use. If it's a tensor, it should contain one color channel (L) instead of 3, - so the expected shape would be `(B, H, W, 1)`. - negative_image_embeds (`torch.FloatTensor` or `List[torch.FloatTensor]`): - The clip image embeddings for negative text prompt, will be used to condition the image generation. - height (`int`, *optional*, defaults to 512): - The height in pixels of the generated image. - width (`int`, *optional*, defaults to 512): - The width in pixels of the generated image. - num_inference_steps (`int`, *optional*, defaults to 100): - The number of denoising steps. More denoising steps usually lead to a higher quality image at the - expense of slower inference. - guidance_scale (`float`, *optional*, defaults to 4.0): - Guidance scale as defined in [Classifier-Free Diffusion Guidance](https://arxiv.org/abs/2207.12598). - `guidance_scale` is defined as `w` of equation 2. of [Imagen - Paper](https://arxiv.org/pdf/2205.11487.pdf). Guidance scale is enabled by setting `guidance_scale > - 1`. 
Higher guidance scale encourages to generate images that are closely linked to the text `prompt`, - usually at the expense of lower image quality. - num_images_per_prompt (`int`, *optional*, defaults to 1): - The number of images to generate per prompt. - generator (`torch.Generator` or `List[torch.Generator]`, *optional*): - One or a list of [torch generator(s)](https://pytorch.org/docs/stable/generated/torch.Generator.html) - to make generation deterministic. - latents (`torch.FloatTensor`, *optional*): - Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image - generation. Can be used to tweak the same generation with different prompts. If not provided, a latents - tensor will ge generated by sampling using the supplied random `generator`. - output_type (`str`, *optional*, defaults to `"pil"`): - The output format of the generate image. Choose between: `"pil"` (`PIL.Image.Image`), `"np"` - (`np.array`) or `"pt"` (`torch.Tensor`). - callback (`Callable`, *optional*): - A function that calls every `callback_steps` steps during inference. The function is called with the - following arguments: `callback(step: int, timestep: int, latents: torch.FloatTensor)`. - callback_steps (`int`, *optional*, defaults to 1): - The frequency at which the `callback` function is called. If not specified, the callback is called at - every step. - return_dict (`bool`, *optional*, defaults to `True`): - Whether or not to return a [`~pipelines.ImagePipelineOutput`] instead of a plain tuple. - - Examples: - - Returns: - [`~pipelines.ImagePipelineOutput`] or `tuple` - """ - if not self._warn_has_been_called and version.parse(version.parse(__version__).base_version) < version.parse( - "0.23.0.dev0" - ): - logger.warn( - "Please note that the expected format of `mask_image` has recently been changed. " - "Before diffusers == 0.19.0, Kandinsky Inpainting pipelines repainted black pixels and preserved black pixels. " - "As of diffusers==0.19.0 this behavior has been inverted. Now white pixels are repainted and black pixels are preserved. " - "This way, Kandinsky's masking behavior is aligned with Stable Diffusion. " - "THIS means that you HAVE to invert the input mask to have the same behavior as before as explained in https://github.com/huggingface/diffusers/pull/4207. 
" - "This warning will be surpressed after the first inference call and will be removed in diffusers>0.23.0" - ) - self._warn_has_been_called = True - - device = self._execution_device - - do_classifier_free_guidance = guidance_scale > 1.0 - - if isinstance(image_embeds, list): - image_embeds = torch.cat(image_embeds, dim=0) - batch_size = image_embeds.shape[0] * num_images_per_prompt - if isinstance(negative_image_embeds, list): - negative_image_embeds = torch.cat(negative_image_embeds, dim=0) - - if do_classifier_free_guidance: - image_embeds = image_embeds.repeat_interleave(num_images_per_prompt, dim=0) - negative_image_embeds = negative_image_embeds.repeat_interleave(num_images_per_prompt, dim=0) - - image_embeds = torch.cat([negative_image_embeds, image_embeds], dim=0).to( - dtype=self.unet.dtype, device=device - ) - - self.scheduler.set_timesteps(num_inference_steps, device=device) - timesteps_tensor = self.scheduler.timesteps - - # preprocess image and mask - mask_image, image = prepare_mask_and_masked_image(image, mask_image, height, width) - - image = image.to(dtype=image_embeds.dtype, device=device) - image = self.movq.encode(image)["latents"] - - mask_image = mask_image.to(dtype=image_embeds.dtype, device=device) - - image_shape = tuple(image.shape[-2:]) - mask_image = F.interpolate( - mask_image, - image_shape, - mode="nearest", - ) - mask_image = prepare_mask(mask_image) - masked_image = image * mask_image - - mask_image = mask_image.repeat_interleave(num_images_per_prompt, dim=0) - masked_image = masked_image.repeat_interleave(num_images_per_prompt, dim=0) - if do_classifier_free_guidance: - mask_image = mask_image.repeat(2, 1, 1, 1) - masked_image = masked_image.repeat(2, 1, 1, 1) - - num_channels_latents = self.movq.config.latent_channels - - height, width = downscale_height_and_width(height, width, self.movq_scale_factor) - - # create initial latent - latents = self.prepare_latents( - (batch_size, num_channels_latents, height, width), - image_embeds.dtype, - device, - generator, - latents, - self.scheduler, - ) - noise = torch.clone(latents) - for i, t in enumerate(self.progress_bar(timesteps_tensor)): - # expand the latents if we are doing classifier free guidance - latent_model_input = torch.cat([latents] * 2) if do_classifier_free_guidance else latents - latent_model_input = torch.cat([latent_model_input, masked_image, mask_image], dim=1) - - added_cond_kwargs = {"image_embeds": image_embeds} - noise_pred = self.unet( - sample=latent_model_input, - timestep=t, - encoder_hidden_states=None, - added_cond_kwargs=added_cond_kwargs, - return_dict=False, - )[0] - - if do_classifier_free_guidance: - noise_pred, variance_pred = noise_pred.split(latents.shape[1], dim=1) - noise_pred_uncond, noise_pred_text = noise_pred.chunk(2) - _, variance_pred_text = variance_pred.chunk(2) - noise_pred = noise_pred_uncond + guidance_scale * (noise_pred_text - noise_pred_uncond) - noise_pred = torch.cat([noise_pred, variance_pred_text], dim=1) - - if not ( - hasattr(self.scheduler.config, "variance_type") - and self.scheduler.config.variance_type in ["learned", "learned_range"] - ): - noise_pred, _ = noise_pred.split(latents.shape[1], dim=1) - - # compute the previous noisy sample x_t -> x_t-1 - latents = self.scheduler.step( - noise_pred, - t, - latents, - generator=generator, - )[0] - init_latents_proper = image[:1] - init_mask = mask_image[:1] - - if i < len(timesteps_tensor) - 1: - noise_timestep = timesteps_tensor[i + 1] - init_latents_proper = self.scheduler.add_noise( - 
init_latents_proper, noise, torch.tensor([noise_timestep]) - ) - - latents = init_mask * init_latents_proper + (1 - init_mask) * latents - - if callback is not None and i % callback_steps == 0: - callback(i, t, latents) - - # post-processing - latents = mask_image[:1] * image[:1] + (1 - mask_image[:1]) * latents - image = self.movq.decode(latents, force_not_quantize=True)["sample"] - - # Offload all models - self.maybe_free_model_hooks() - - if output_type not in ["pt", "np", "pil"]: - raise ValueError(f"Only the output types `pt`, `pil` and `np` are supported not output_type={output_type}") - - if output_type in ["np", "pil"]: - image = image * 0.5 + 0.5 - image = image.clamp(0, 1) - image = image.cpu().permute(0, 2, 3, 1).float().numpy() - - if output_type == "pil": - image = self.numpy_to_pil(image) - - if not return_dict: - return (image,) - - return ImagePipelineOutput(images=image) diff --git a/spaces/peteralexandercharles/automatic-speech-recognition-with-next-gen-kaldi/README.md b/spaces/peteralexandercharles/automatic-speech-recognition-with-next-gen-kaldi/README.md deleted file mode 100644 index 666e4041eb85f9c4c0d40d395c5a9eae8ca175c1..0000000000000000000000000000000000000000 --- a/spaces/peteralexandercharles/automatic-speech-recognition-with-next-gen-kaldi/README.md +++ /dev/null @@ -1,14 +0,0 @@ ---- -title: Automatic Speech Recognition -emoji: 🌖 -colorFrom: yellow -colorTo: green -sdk: gradio -sdk_version: 3.0.26 -app_file: app.py -pinned: false -license: apache-2.0 -duplicated_from: EuroPython2022/automatic-speech-recognition-with-next-gen-kaldi ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/peterwisu/lip_synthesis/src/main/inference.py b/spaces/peterwisu/lip_synthesis/src/main/inference.py deleted file mode 100644 index 7e5f349c8278e126e55c768dae86eb37529750c5..0000000000000000000000000000000000000000 --- a/spaces/peterwisu/lip_synthesis/src/main/inference.py +++ /dev/null @@ -1,446 +0,0 @@ - -import torch -import torch.nn as nn -from torch.nn.parallel import DataParallel -import face_alignment -from tqdm import tqdm -import numpy as np -from utils.plot import vis_landmark_on_img -from utils import audio -import cv2 -import subprocess -import platform -import os -from src.models.image2image import ResUnetGenerator -#from src.models.lstmgen import LstmGen as Lip_Gen -#from src.models.lstmattn import LstmGen as Lip_Gen - -#from src.models.transgen import TransformerGenerator as Lip_Gen -from utils.wav2lip import prepare_audio, prepare_video, load_checkpoint -from utils.utils import procrustes -import matplotlib.pyplot as plt -from utils.plot import plot_scatter_facial_landmark - -use_cuda = torch.cuda.is_available() - -device = "cuda" if use_cuda else "cpu" - - - -class Inference(): - - - def __init__ (self, args): - - self.fl_batchsize = args.fl_detector_batchsize - self.gen_batchsize = args.generator_batchsize - self.image2image_ckpt = args.image2image_checkpoint - self.generator_ckpt = args.generator_checkpoint - self.input_face = args.input_face - self.fps = args.fps - self.input_audio = args.input_audio - self.vis_fl = args.vis_fl - self.only_fl = args.only_fl - self.output_name = args.output_name - self.test_img2img = args.test_img2img - self.seq_len = 5 - self.model_type = args.model_type - - - - - self.all_frames , self.fps = prepare_video(args.input_face, args.fps) - self.mel_chunk = prepare_audio(args.input_audio, self.fps) - - # crop timestamp of a video incase video is longer than audio 
- self.all_frames = self.all_frames[:len(self.mel_chunk)] - - - # Image2Image translation model - self.image2image = ResUnetGenerator(input_nc=6,output_nc=3,num_downs=6,use_dropout=False).to(device) - - # Load pretrained weights to image2image model - image2image_weight = torch.load(self.image2image_ckpt, map_location=torch.device(device))['G'] - - # Since the checkpoint of model was trained using DataParallel with multiple GPU - # It required to wrap a model with DataParallel wrapper class - self.image2image = DataParallel(self.image2image).to(device) - # assgin weight to model - self.image2image.load_state_dict(image2image_weight) - - self.image2image = self.image2image.module # access model (remove DataParallel) - - - - - - - - if self.model_type == "lstm": - - from src.models.lstmgen import LstmGen as Lip_Gen - - print("Import LSTM generator") - - elif self.model_type == "attn_lstm": - - from src.models.attnlstm import LstmGen as Lip_Gen - - #from src.models.inverse_gen import LstmGen as Lip_Gen - - print("Import Attention LSTM generator") - - else: - - raise ValueError("please put the valid type of model") - - - - self.generator = Lip_Gen().to(device=device) - - self.generator = load_checkpoint(model=self.generator, - path= self.generator_ckpt, - optimizer=None, - use_cuda=False, - reset_optimizer=True, - pretrain=True) - - print("Generator",next(self.generator.parameters()).is_cuda ) - print("Img2Img",next(self.image2image.parameters()).is_cuda ) - - def __landmark_detection__(self,images, batch_size): - """ - *************************************************************************************** - Detect 3D Facial Landmark from images using Landmark Detector Tools from Face_Alignment - Link repo : https://github.com/1adrianb/face-alignment - *************************************************************************************** - @author : Wish Suharitdamrong - -------- - arguments - --------- - images : list of images - ------ - return - ------ - """ - - def detect_bug_136(fls): - """ - Some times when using detector.get_landmarks_from_batch it does has some bug. Instead of returning facial landmarks (68,3) for single person in image it instead - return (136,3) or (204,3). The first 68 point still a valid facial landamrk of that image (as I visualised). So this fuction basically removed the extra 68 point in landmarks. 
- This can cause from the image that have more than one face in the image - """ - - - for i in range(len(fls)): - - print(np.array(fls[i]).shape) - if len(fls[i]) != 68: - - bug = fls[i] - - fl1 = bug[:68] - - - fls[i] = fl1 - if len(fls[i]) == 0: - - fls[i] = fls[i-1] - - - detector = face_alignment.FaceAlignment(face_alignment.LandmarksType._3D, flip_input=False, device=device) - images = np.array(images) # shape (Batch , height, width, 3) - images = np.transpose(images,(0,3,1,2)) # shape (Batch, 3, height, width) - images = torch.from_numpy(images) - """ - fls = detector.get_landmarks_from_batch(images) - fls = np.array(fls) - """ - - - fls = [] - transforms = [] - - for i in tqdm(range(0, len(images), batch_size)): - - img = images[i:i+batch_size] - - - fl_batch = detector.get_landmarks_from_batch(img) - - - - detect_bug_136(fl_batch) - - - - fl_batch = np.array(fl_batch)#[:,:,:] # take only 3d - - - - fl = [] - for idx in range(fl_batch.shape[0]): - - fl_inbatch, trans_info = procrustes(fl_batch[idx]) - fl.append(fl_inbatch) - transforms.append(trans_info) - - fl = np.array(fl) - - fls.append(fl) - - - fls = np.concatenate(fls, axis=0) - transforms = np.array(transforms) - - - return fls, transforms - - def __keypoints2landmarks__(self,fls): - """ - - """ - - frames = [] - for fl in fls: - - img = np.ones(shape=(256,256,3)) * 255 # blank white image - - fl = fl.astype(int) - - img = vis_landmark_on_img(img,fl).astype(int) - - frames.append(img) - - frames = np.stack(frames, axis=0) - - return frames - - - def __reverse_trans__(self,fl , tran): - - scale = tran['scale'] - translate = tran['translate'] - - fl = fl * scale # reverse scaling - fl = fl + translate # reverse translation - - return fl - - def __reverse_trans_batch__ (self, fl , trans) : - - trans_fls =[] - - for idx in range(fl.shape[0]): - - trans_fl = self.__reverse_trans__(fl[idx], trans[idx]) - - trans_fls.append(trans_fl) - - trans_fls = np.array(trans_fls) - - return trans_fls - - - def __data_generator__(self): - """ - - """ - - fl_batch , trans_batch, mel_batch, frame_batch = [],[],[],[] - - fl_seq , trans_seq, mel_seq, frame_seq = [],[],[],[] - - frames = self.all_frames - mels = self.mel_chunk - - - print("Detecting Facial Landmark ....") - fl_detected, transformation = self.__landmark_detection__(frames, self.fl_batchsize) - print("Finish detecting Facial Landmark !!!") - - for i, m in enumerate(mels): - - idx = i % len(frames) # if input if static image only select frame and landmark at index 0 - - frame_to_trans = frames[idx].copy() - fl = fl_detected[idx].copy() - transforms = transformation[idx].copy() - - fl_seq.append(fl) - trans_seq.append(transforms) - mel_seq.append(m) - frame_seq.append(frame_to_trans) - - - if len(fl_seq) >= self.seq_len: - - fl_batch.append(fl_seq) - trans_batch.append(trans_seq) - mel_batch.append(mel_seq) - frame_batch.append(frame_seq) - - fl_seq , trans_seq, mel_seq, frame_seq = [],[],[],[] - - - if len(fl_batch) >= self.gen_batchsize: - - fl_batch = np.array(fl_batch) - trans_batch = np.array(trans_batch) # this might cause error by wrapping a dict in np - mel_batch = np.array(mel_batch) - mel_batch = np.reshape(mel_batch, [len(mel_batch), self.seq_len , 1 , mel_batch.shape[2], mel_batch.shape[3]]) # b ,s ,1 , 80 , 18 (old 80,18,1) - frame_batch = np.array(frame_batch) - - - - yield fl_batch, trans_batch, mel_batch, frame_batch - - fl_batch, trans_batch, mel_batch, frame_batch = [], [], [], [] - - #print(np.array(fl_batch).shape) - #print(np.array(fl_seq).shape) - - - if 
len(fl_batch) > 0 : - #print("tt") - fl_batch = np.array(fl_batch) - #print(fl_batch.shape) - trans_batch = np.array(trans_batch) # this might cause error by wrapping a dict in np - #print(trans_batch.shape) - mel_batch = np.array(mel_batch) - #print(mel_batch.shape) - mel_batch = np.reshape(mel_batch, [len(mel_batch), self.seq_len,1 ,mel_batch.shape[2], mel_batch.shape[3]]) - #print(mel_batch.shape) - frame_batch = np.array(frame_batch) - - yield fl_batch, trans_batch, mel_batch, frame_batch - - fl_batch, trans_batch, mel_batch, frame_batch = [], [], [], [] - - if len(fl_seq) > 0: - - #print("hello") - - - fl_batch = np.expand_dims(np.array(fl_seq),axis=0) - #print(fl_batch.shape) - trans_batch = np.expand_dims(np.array(trans_seq),axis=0) # this might cause error by wrapping a dict in np - #print(trans_batch.shape) - mel_batch = np.expand_dims(np.array(mel_seq),axis=0) - curr_mel_seq = mel_batch.shape[1] - #print(mel_batch.shape) - mel_batch = np.reshape(mel_batch, [len(mel_batch), curr_mel_seq,1 ,mel_batch.shape[2], mel_batch.shape[3]]) - #print(mel_batch.shape) - frame_batch = np.expand_dims(np.array(frame_seq),axis=0) - - #exit() - - yield fl_batch, trans_batch, mel_batch, frame_batch - - fl_batch, trans_batch, mel_batch, frame_batch = [], [], [], [] - - - def start(self): - """ - """ - - self.data = self.__data_generator__() - - - if self.vis_fl and not self.only_fl: - writer = cv2.VideoWriter('./temp/out.mp4', cv2.VideoWriter_fourcc(*'mjpg'), self.fps, (256*3,256)) - else : - writer = cv2.VideoWriter('./temp/out.mp4', cv2.VideoWriter_fourcc(*'mjpg'), self.fps, (256,256)) - - for (fl, trans, mel, ref_frame) in tqdm(self.data): - - # fl shape (B, 68, 3) - # mel shape (B, 80, 18, 1) - # ref frame (B, 256, 256, 3) - lip_fl = torch.FloatTensor(fl).to(device) - - lip_fl = lip_fl[:,:,48:,:] # take only lip keypoints - - lip_seq = lip_fl.size(0) - lip_fl = torch.stack([lip_fl[0] for _ in range(lip_seq)], dim=0) - lip_fl = lip_fl.reshape(lip_fl.shape[0],lip_fl.shape[1],-1) - mel = torch.FloatTensor(mel).to(device) - #print(mel.shape) - #mel = mel.reshape(-1,80,18) - - if not self.test_img2img: # check if not testing image2image translation module only no lip generator - with torch.no_grad(): - - self.generator.eval() - out_fl,_ = self.generator(mel, lip_fl) - - - out_fl = out_fl.detach().cpu().numpy() # convert output to numpy array - out_fl = out_fl.reshape(out_fl.shape[0],out_fl.shape[1],20,-1) - - out_fl = out_fl - fl[:,:,48:,:] = out_fl - - - fl = fl.reshape(-1,fl.shape[2],fl.shape[3]) - #ref_frame = ref_frame.reshape(-1,ref_frame.shape[2], ref_frame[3]) - trans = trans.reshape(-1) - fl = self.__reverse_trans_batch__(fl , trans) - - - # plot a image of landmarks - fl_image = self.__keypoints2landmarks__(fl) - - - fl_image = fl_image.reshape(ref_frame.shape[0],ref_frame.shape[1],ref_frame.shape[2],ref_frame.shape[3],ref_frame.shape[4]) - - - if not self.only_fl: - # image translation - for (img_batch,ref_batch) in zip(fl_image, ref_frame): - - for img, ref in zip(img_batch, ref_batch): - - trans_in = np.concatenate((img,ref), axis=2).astype(np.float32)/255.0 - trans_in = trans_in.transpose((2, 0, 1)) - trans_in = torch.tensor(trans_in, requires_grad=False) - trans_in = trans_in.reshape(-1, 6, 256, 256) - trans_in = trans_in.to(device) - - with torch.no_grad(): - self.image2image.eval() - - trans_out = self.image2image(trans_in) - trans_out = torch.tanh(trans_out) - - trans_out = trans_out.detach().cpu().numpy().transpose((0,2,3,1)) - trans_out[trans_out<0] = 0 - trans_out = trans_out * 
255.0 - - if self.vis_fl: - frame = np.concatenate((ref,img,trans_out[0]),axis=1) - else : - frame = trans_out[0] - writer.write(frame.astype(np.uint8)) - - - if self.only_fl: - - for fl_batch in fl_image: - - - for fl in fl_batch: - - writer.write(fl.astype(np.uint8)) - - - # Write video and close writer - writer.release() - - command = 'ffmpeg -y -i {} -i {} -strict -2 -q:v 1 {}'.format(self.input_audio, 'temp/out.mp4', self.output_name) - subprocess.call(command, shell=platform.system() != 'Windows') - - - - - diff --git a/spaces/pikto/Elite-freegpt-webui/g4f/Provider/Providers/Weuseing.py b/spaces/pikto/Elite-freegpt-webui/g4f/Provider/Providers/Weuseing.py deleted file mode 100644 index ba79e8b9c2573418720495a20d4c1c8d5a6ca7e9..0000000000000000000000000000000000000000 --- a/spaces/pikto/Elite-freegpt-webui/g4f/Provider/Providers/Weuseing.py +++ /dev/null @@ -1,29 +0,0 @@ -import requests -import os -import json -from ...typing import sha256, Dict, get_type_hints - -url = 'https://api.gptplus.one' -model = ['gpt-3.5-turbo', 'gpt-3.5-turbo-16k', 'gpt-3.5-turbo-16k-0613', 'gpt-3.5-turbo-0613'] -supports_stream = True -needs_auth = False - -def _create_completion(model: str, messages: list, stream: bool, temperature: float = 0.7, **kwargs): - headers = { - 'Content-Type': 'application/json', - 'Accept': '*/*', - 'Accept-Language': 'ru-RU,ru;q=0.9,en-US;q=0.8,en;q=0.7,ja;q=0.6,zh-TW;q=0.5,zh;q=0.4', - } - data = { - 'messages': messages, - 'model': model, - } - response = requests.post('https://api.gptplus.one/chat-process', json=data, stream=True) - print(response) - - for token in response.iter_content(chunk_size=None): - yield (token.decode('utf-8')) - - -params = f'g4f.Providers.{os.path.basename(__file__)[:-3]} supports: ' + \ - '(%s)' % ', '.join([f"{name}: {get_type_hints(_create_completion)[name].__name__}" for name in _create_completion.__code__.co_varnames[:_create_completion.__code__.co_argcount]]) diff --git a/spaces/pknez/face-swap-docker/chain_img_processor/batchimage.py b/spaces/pknez/face-swap-docker/chain_img_processor/batchimage.py deleted file mode 100644 index 9a4185eb664b50f913a895fa3a2f9b2085998919..0000000000000000000000000000000000000000 --- a/spaces/pknez/face-swap-docker/chain_img_processor/batchimage.py +++ /dev/null @@ -1,86 +0,0 @@ -from typing import Any, List, Callable -import psutil -import os -from concurrent.futures import ThreadPoolExecutor, as_completed -from queue import Queue -from .image import ChainImgProcessor -from tqdm import tqdm -import cv2 - -def create_queue(temp_frame_paths: List[str]) -> Queue[str]: - queue: Queue[str] = Queue() - for frame_path in temp_frame_paths: - queue.put(frame_path) - return queue - - -def pick_queue(queue: Queue[str], queue_per_future: int) -> List[str]: - queues = [] - for _ in range(queue_per_future): - if not queue.empty(): - queues.append(queue.get()) - return queues - - - -class ChainBatchImageProcessor(ChainImgProcessor): - chain = None - func_params_gen = None - num_threads = 1 - - def __init__(self): - ChainImgProcessor.__init__(self) - - - def init_with_plugins(self): - self.init_plugins(["core"]) - self.display_init_info() - - init_on_start_arr = self.init_on_start.split(",") - for proc_id in init_on_start_arr: - self.init_processor(proc_id) - - def update_progress(self, progress: Any = None) -> None: - process = psutil.Process(os.getpid()) - memory_usage = process.memory_info().rss / 1024 / 1024 / 1024 - progress.set_postfix({ - 'memory_usage': '{:.2f}'.format(memory_usage).zfill(5) + 'GB', - 
'execution_threads': self.num_threads - }) - progress.refresh() - progress.update(1) - - - def process_frames(self, source_files: List[str], target_files: List[str], current_files, update: Callable[[], None]) -> None: - for f in current_files: - temp_frame = cv2.imread(f) - if temp_frame is not None: - if self.func_params_gen: - params = self.func_params_gen(None, temp_frame) - else: - params = {} - resimg, _ = self.run_chain(temp_frame, params, self.chain) - if resimg is not None: - i = source_files.index(f) - cv2.imwrite(target_files[i], resimg) - if update: - update() - - - def run_batch_chain(self, source_files, target_files, threads:int = 1, chain = None, params_frame_gen_func = None): - self.chain = chain - self.func_params_gen = params_frame_gen_func - progress_bar_format = '{l_bar}{bar}| {n_fmt}/{total_fmt} [{elapsed}<{remaining}, {rate_fmt}{postfix}]' - total = len(source_files) - self.num_threads = threads - with tqdm(total=total, desc='Processing', unit='frame', dynamic_ncols=True, bar_format=progress_bar_format) as progress: - with ThreadPoolExecutor(max_workers=threads) as executor: - futures = [] - queue = create_queue(source_files) - queue_per_future = max(len(source_files) // threads, 1) - while not queue.empty(): - future = executor.submit(self.process_frames, source_files, target_files, pick_queue(queue, queue_per_future), lambda: self.update_progress(progress)) - futures.append(future) - for future in as_completed(futures): - future.result() - diff --git a/spaces/plzdontcry/dakubettergpt/src/main.tsx b/spaces/plzdontcry/dakubettergpt/src/main.tsx deleted file mode 100644 index 2de2b8a389f42be7f15fde262e5c0c158eb509eb..0000000000000000000000000000000000000000 --- a/spaces/plzdontcry/dakubettergpt/src/main.tsx +++ /dev/null @@ -1,13 +0,0 @@ -import React from 'react'; -import ReactDOM from 'react-dom/client'; -import App from './App'; -import './main.css'; -await import('katex/dist/katex.min.css'); - -import './i18n'; - -ReactDOM.createRoot(document.getElementById('root') as HTMLElement).render( - - - -); diff --git a/spaces/pmgautam/english-to-nepali-translation/README.md b/spaces/pmgautam/english-to-nepali-translation/README.md deleted file mode 100644 index 96c71004c894a4465e0dfba41fdac263481ffb31..0000000000000000000000000000000000000000 --- a/spaces/pmgautam/english-to-nepali-translation/README.md +++ /dev/null @@ -1,14 +0,0 @@ ---- -title: English To Nepali Translation -colorFrom: yellow -colorTo: red -sdk: gradio -sdk_version: 3.9 -app_file: app.py -pinned: false -license: apache-2.0 ---- -This repo contains a gradio app to translate from English to Nepali. 
-* Model used: [NLLB](https://huggingface.co/facebook/nllb-200-distilled-600M) -* Huggingface Space: [english-to-nepali-translation](https://huggingface.co/spaces/pmgautam/english-to-nepali-translation) -* Githug repo: [gradio-translation](https://github.com/pmgautam/gradio-translation) \ No newline at end of file diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/fontTools/misc/plistlib/__init__.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/fontTools/misc/plistlib/__init__.py deleted file mode 100644 index 066eef38fc720265366afee9a8cd415fc560459e..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/fontTools/misc/plistlib/__init__.py +++ /dev/null @@ -1,681 +0,0 @@ -import collections.abc -import re -from typing import ( - Any, - Callable, - Dict, - List, - Mapping, - MutableMapping, - Optional, - Sequence, - Type, - Union, - IO, -) -import warnings -from io import BytesIO -from datetime import datetime -from base64 import b64encode, b64decode -from numbers import Integral -from types import SimpleNamespace -from functools import singledispatch - -from fontTools.misc import etree - -from fontTools.misc.textTools import tostr - - -# By default, we -# - deserialize elements as bytes and -# - serialize bytes as elements. -# Before, on Python 2, we -# - deserialized elements as plistlib.Data objects, in order to -# distinguish them from the built-in str type (which is bytes on python2) -# - serialized bytes as elements (they must have only contained -# ASCII characters in this case) -# You can pass use_builtin_types=[True|False] to the load/dump etc. functions -# to enforce a specific treatment. -# NOTE that unicode type always maps to element, and plistlib.Data -# always maps to element, regardless of use_builtin_types. -USE_BUILTIN_TYPES = True - -XML_DECLARATION = b"""""" - -PLIST_DOCTYPE = ( - b'' -) - - -# Date should conform to a subset of ISO 8601: -# YYYY '-' MM '-' DD 'T' HH ':' MM ':' SS 'Z' -_date_parser = re.compile( - r"(?P\d\d\d\d)" - r"(?:-(?P\d\d)" - r"(?:-(?P\d\d)" - r"(?:T(?P\d\d)" - r"(?::(?P\d\d)" - r"(?::(?P\d\d))" - r"?)?)?)?)?Z", - re.ASCII, -) - - -def _date_from_string(s: str) -> datetime: - order = ("year", "month", "day", "hour", "minute", "second") - m = _date_parser.match(s) - if m is None: - raise ValueError(f"Expected ISO 8601 date string, but got '{s:r}'.") - gd = m.groupdict() - lst = [] - for key in order: - val = gd[key] - if val is None: - break - lst.append(int(val)) - # NOTE: mypy doesn't know that lst is 6 elements long. - return datetime(*lst) # type:ignore - - -def _date_to_string(d: datetime) -> str: - return "%04d-%02d-%02dT%02d:%02d:%02dZ" % ( - d.year, - d.month, - d.day, - d.hour, - d.minute, - d.second, - ) - - -class Data: - """Represents binary data when ``use_builtin_types=False.`` - - This class wraps binary data loaded from a plist file when the - ``use_builtin_types`` argument to the loading function (:py:func:`fromtree`, - :py:func:`load`, :py:func:`loads`) is false. - - The actual binary data is retrieved using the ``data`` attribute. 
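    A minimal sketch of typical use, assuming only the constructor and the
    ``fromBase64`` helper defined in this class; the byte values are
    illustrative:

    >>> d = Data(b"\x00\x01\x02")
    >>> d.data
    b'\x00\x01\x02'
    >>> Data.fromBase64("AAEC").data
    b'\x00\x01\x02'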
- """ - - def __init__(self, data: bytes) -> None: - if not isinstance(data, bytes): - raise TypeError("Expected bytes, found %s" % type(data).__name__) - self.data = data - - @classmethod - def fromBase64(cls, data: Union[bytes, str]) -> "Data": - return cls(b64decode(data)) - - def asBase64(self, maxlinelength: int = 76, indent_level: int = 1) -> bytes: - return _encode_base64( - self.data, maxlinelength=maxlinelength, indent_level=indent_level - ) - - def __eq__(self, other: Any) -> bool: - if isinstance(other, self.__class__): - return self.data == other.data - elif isinstance(other, bytes): - return self.data == other - else: - return NotImplemented - - def __repr__(self) -> str: - return "%s(%s)" % (self.__class__.__name__, repr(self.data)) - - -def _encode_base64( - data: bytes, maxlinelength: Optional[int] = 76, indent_level: int = 1 -) -> bytes: - data = b64encode(data) - if data and maxlinelength: - # split into multiple lines right-justified to 'maxlinelength' chars - indent = b"\n" + b" " * indent_level - max_length = max(16, maxlinelength - len(indent)) - chunks = [] - for i in range(0, len(data), max_length): - chunks.append(indent) - chunks.append(data[i : i + max_length]) - chunks.append(indent) - data = b"".join(chunks) - return data - - -# Mypy does not support recursive type aliases as of 0.782, Pylance does. -# https://github.com/python/mypy/issues/731 -# https://devblogs.microsoft.com/python/pylance-introduces-five-new-features-that-enable-type-magic-for-python-developers/#1-support-for-recursive-type-aliases -PlistEncodable = Union[ - bool, - bytes, - Data, - datetime, - float, - Integral, - Mapping[str, Any], - Sequence[Any], - str, -] - - -class PlistTarget: - """Event handler using the ElementTree Target API that can be - passed to a XMLParser to produce property list objects from XML. - It is based on the CPython plistlib module's _PlistParser class, - but does not use the expat parser. - - >>> from fontTools.misc import etree - >>> parser = etree.XMLParser(target=PlistTarget()) - >>> result = etree.XML( - ... "" - ... " something" - ... " blah" - ... "", - ... 
parser=parser) - >>> result == {"something": "blah"} - True - - Links: - https://github.com/python/cpython/blob/main/Lib/plistlib.py - http://lxml.de/parsing.html#the-target-parser-interface - """ - - def __init__( - self, - use_builtin_types: Optional[bool] = None, - dict_type: Type[MutableMapping[str, Any]] = dict, - ) -> None: - self.stack: List[PlistEncodable] = [] - self.current_key: Optional[str] = None - self.root: Optional[PlistEncodable] = None - if use_builtin_types is None: - self._use_builtin_types = USE_BUILTIN_TYPES - else: - if use_builtin_types is False: - warnings.warn( - "Setting use_builtin_types to False is deprecated and will be " - "removed soon.", - DeprecationWarning, - ) - self._use_builtin_types = use_builtin_types - self._dict_type = dict_type - - def start(self, tag: str, attrib: Mapping[str, str]) -> None: - self._data: List[str] = [] - handler = _TARGET_START_HANDLERS.get(tag) - if handler is not None: - handler(self) - - def end(self, tag: str) -> None: - handler = _TARGET_END_HANDLERS.get(tag) - if handler is not None: - handler(self) - - def data(self, data: str) -> None: - self._data.append(data) - - def close(self) -> PlistEncodable: - if self.root is None: - raise ValueError("No root set.") - return self.root - - # helpers - - def add_object(self, value: PlistEncodable) -> None: - if self.current_key is not None: - stack_top = self.stack[-1] - if not isinstance(stack_top, collections.abc.MutableMapping): - raise ValueError("unexpected element: %r" % stack_top) - stack_top[self.current_key] = value - self.current_key = None - elif not self.stack: - # this is the root object - self.root = value - else: - stack_top = self.stack[-1] - if not isinstance(stack_top, list): - raise ValueError("unexpected element: %r" % stack_top) - stack_top.append(value) - - def get_data(self) -> str: - data = "".join(self._data) - self._data = [] - return data - - -# event handlers - - -def start_dict(self: PlistTarget) -> None: - d = self._dict_type() - self.add_object(d) - self.stack.append(d) - - -def end_dict(self: PlistTarget) -> None: - if self.current_key: - raise ValueError("missing value for key '%s'" % self.current_key) - self.stack.pop() - - -def end_key(self: PlistTarget) -> None: - if self.current_key or not isinstance(self.stack[-1], collections.abc.Mapping): - raise ValueError("unexpected key") - self.current_key = self.get_data() - - -def start_array(self: PlistTarget) -> None: - a: List[PlistEncodable] = [] - self.add_object(a) - self.stack.append(a) - - -def end_array(self: PlistTarget) -> None: - self.stack.pop() - - -def end_true(self: PlistTarget) -> None: - self.add_object(True) - - -def end_false(self: PlistTarget) -> None: - self.add_object(False) - - -def end_integer(self: PlistTarget) -> None: - self.add_object(int(self.get_data())) - - -def end_real(self: PlistTarget) -> None: - self.add_object(float(self.get_data())) - - -def end_string(self: PlistTarget) -> None: - self.add_object(self.get_data()) - - -def end_data(self: PlistTarget) -> None: - if self._use_builtin_types: - self.add_object(b64decode(self.get_data())) - else: - self.add_object(Data.fromBase64(self.get_data())) - - -def end_date(self: PlistTarget) -> None: - self.add_object(_date_from_string(self.get_data())) - - -_TARGET_START_HANDLERS: Dict[str, Callable[[PlistTarget], None]] = { - "dict": start_dict, - "array": start_array, -} - -_TARGET_END_HANDLERS: Dict[str, Callable[[PlistTarget], None]] = { - "dict": end_dict, - "array": end_array, - "key": end_key, - "true": end_true, - 
"false": end_false, - "integer": end_integer, - "real": end_real, - "string": end_string, - "data": end_data, - "date": end_date, -} - - -# functions to build element tree from plist data - - -def _string_element(value: str, ctx: SimpleNamespace) -> etree.Element: - el = etree.Element("string") - el.text = value - return el - - -def _bool_element(value: bool, ctx: SimpleNamespace) -> etree.Element: - if value: - return etree.Element("true") - return etree.Element("false") - - -def _integer_element(value: int, ctx: SimpleNamespace) -> etree.Element: - if -1 << 63 <= value < 1 << 64: - el = etree.Element("integer") - el.text = "%d" % value - return el - raise OverflowError(value) - - -def _real_element(value: float, ctx: SimpleNamespace) -> etree.Element: - el = etree.Element("real") - el.text = repr(value) - return el - - -def _dict_element( - d: Mapping[str, PlistEncodable], ctx: SimpleNamespace -) -> etree.Element: - el = etree.Element("dict") - items = d.items() - if ctx.sort_keys: - items = sorted(items) # type: ignore - ctx.indent_level += 1 - for key, value in items: - if not isinstance(key, str): - if ctx.skipkeys: - continue - raise TypeError("keys must be strings") - k = etree.SubElement(el, "key") - k.text = tostr(key, "utf-8") - el.append(_make_element(value, ctx)) - ctx.indent_level -= 1 - return el - - -def _array_element( - array: Sequence[PlistEncodable], ctx: SimpleNamespace -) -> etree.Element: - el = etree.Element("array") - if len(array) == 0: - return el - ctx.indent_level += 1 - for value in array: - el.append(_make_element(value, ctx)) - ctx.indent_level -= 1 - return el - - -def _date_element(date: datetime, ctx: SimpleNamespace) -> etree.Element: - el = etree.Element("date") - el.text = _date_to_string(date) - return el - - -def _data_element(data: bytes, ctx: SimpleNamespace) -> etree.Element: - el = etree.Element("data") - # NOTE: mypy is confused about whether el.text should be str or bytes. - el.text = _encode_base64( # type: ignore - data, - maxlinelength=(76 if ctx.pretty_print else None), - indent_level=ctx.indent_level, - ) - return el - - -def _string_or_data_element(raw_bytes: bytes, ctx: SimpleNamespace) -> etree.Element: - if ctx.use_builtin_types: - return _data_element(raw_bytes, ctx) - else: - try: - string = raw_bytes.decode(encoding="ascii", errors="strict") - except UnicodeDecodeError: - raise ValueError( - "invalid non-ASCII bytes; use unicode string instead: %r" % raw_bytes - ) - return _string_element(string, ctx) - - -# The following is probably not entirely correct. The signature should take `Any` -# and return `NoReturn`. At the time of this writing, neither mypy nor Pyright -# can deal with singledispatch properly and will apply the signature of the base -# function to all others. Being slightly dishonest makes it type-check and return -# usable typing information for the optimistic case. 
-@singledispatch -def _make_element(value: PlistEncodable, ctx: SimpleNamespace) -> etree.Element: - raise TypeError("unsupported type: %s" % type(value)) - - -_make_element.register(str)(_string_element) -_make_element.register(bool)(_bool_element) -_make_element.register(Integral)(_integer_element) -_make_element.register(float)(_real_element) -_make_element.register(collections.abc.Mapping)(_dict_element) -_make_element.register(list)(_array_element) -_make_element.register(tuple)(_array_element) -_make_element.register(datetime)(_date_element) -_make_element.register(bytes)(_string_or_data_element) -_make_element.register(bytearray)(_data_element) -_make_element.register(Data)(lambda v, ctx: _data_element(v.data, ctx)) - - -# Public functions to create element tree from plist-compatible python -# data structures and viceversa, for use when (de)serializing GLIF xml. - - -def totree( - value: PlistEncodable, - sort_keys: bool = True, - skipkeys: bool = False, - use_builtin_types: Optional[bool] = None, - pretty_print: bool = True, - indent_level: int = 1, -) -> etree.Element: - """Convert a value derived from a plist into an XML tree. - - Args: - value: Any kind of value to be serialized to XML. - sort_keys: Whether keys of dictionaries should be sorted. - skipkeys (bool): Whether to silently skip non-string dictionary - keys. - use_builtin_types (bool): If true, byte strings will be - encoded in Base-64 and wrapped in a ``data`` tag; if - false, they will be either stored as ASCII strings or an - exception raised if they cannot be decoded as such. Defaults - to ``True`` if not present. Deprecated. - pretty_print (bool): Whether to indent the output. - indent_level (int): Level of indentation when serializing. - - Returns: an ``etree`` ``Element`` object. - - Raises: - ``TypeError`` - if non-string dictionary keys are serialized - and ``skipkeys`` is false. - ``ValueError`` - if non-ASCII binary data is present - and `use_builtin_types` is false. - """ - if use_builtin_types is None: - use_builtin_types = USE_BUILTIN_TYPES - else: - use_builtin_types = use_builtin_types - context = SimpleNamespace( - sort_keys=sort_keys, - skipkeys=skipkeys, - use_builtin_types=use_builtin_types, - pretty_print=pretty_print, - indent_level=indent_level, - ) - return _make_element(value, context) - - -def fromtree( - tree: etree.Element, - use_builtin_types: Optional[bool] = None, - dict_type: Type[MutableMapping[str, Any]] = dict, -) -> Any: - """Convert an XML tree to a plist structure. - - Args: - tree: An ``etree`` ``Element``. - use_builtin_types: If True, binary data is deserialized to - bytes strings. If False, it is wrapped in :py:class:`Data` - objects. Defaults to True if not provided. Deprecated. - dict_type: What type to use for dictionaries. - - Returns: An object (usually a dictionary). - """ - target = PlistTarget(use_builtin_types=use_builtin_types, dict_type=dict_type) - for action, element in etree.iterwalk(tree, events=("start", "end")): - if action == "start": - target.start(element.tag, element.attrib) - elif action == "end": - # if there are no children, parse the leaf's data - if not len(element): - # always pass str, not None - target.data(element.text or "") - target.end(element.tag) - return target.close() - - -# python3 plistlib API - - -def load( - fp: IO[bytes], - use_builtin_types: Optional[bool] = None, - dict_type: Type[MutableMapping[str, Any]] = dict, -) -> Any: - """Load a plist file into an object. - - Args: - fp: An opened file. 
- use_builtin_types: If True, binary data is deserialized to - bytes strings. If False, it is wrapped in :py:class:`Data` - objects. Defaults to True if not provided. Deprecated. - dict_type: What type to use for dictionaries. - - Returns: - An object (usually a dictionary) representing the top level of - the plist file. - """ - - if not hasattr(fp, "read"): - raise AttributeError("'%s' object has no attribute 'read'" % type(fp).__name__) - target = PlistTarget(use_builtin_types=use_builtin_types, dict_type=dict_type) - parser = etree.XMLParser(target=target) - result = etree.parse(fp, parser=parser) - # lxml returns the target object directly, while ElementTree wraps - # it as the root of an ElementTree object - try: - return result.getroot() - except AttributeError: - return result - - -def loads( - value: bytes, - use_builtin_types: Optional[bool] = None, - dict_type: Type[MutableMapping[str, Any]] = dict, -) -> Any: - """Load a plist file from a string into an object. - - Args: - value: A bytes string containing a plist. - use_builtin_types: If True, binary data is deserialized to - bytes strings. If False, it is wrapped in :py:class:`Data` - objects. Defaults to True if not provided. Deprecated. - dict_type: What type to use for dictionaries. - - Returns: - An object (usually a dictionary) representing the top level of - the plist file. - """ - - fp = BytesIO(value) - return load(fp, use_builtin_types=use_builtin_types, dict_type=dict_type) - - -def dump( - value: PlistEncodable, - fp: IO[bytes], - sort_keys: bool = True, - skipkeys: bool = False, - use_builtin_types: Optional[bool] = None, - pretty_print: bool = True, -) -> None: - """Write a Python object to a plist file. - - Args: - value: An object to write. - fp: A file opened for writing. - sort_keys (bool): Whether keys of dictionaries should be sorted. - skipkeys (bool): Whether to silently skip non-string dictionary - keys. - use_builtin_types (bool): If true, byte strings will be - encoded in Base-64 and wrapped in a ``data`` tag; if - false, they will be either stored as ASCII strings or an - exception raised if they cannot be represented. Defaults - pretty_print (bool): Whether to indent the output. - indent_level (int): Level of indentation when serializing. - - Raises: - ``TypeError`` - if non-string dictionary keys are serialized - and ``skipkeys`` is false. - ``ValueError`` - if non-representable binary data is present - and `use_builtin_types` is false. - """ - - if not hasattr(fp, "write"): - raise AttributeError("'%s' object has no attribute 'write'" % type(fp).__name__) - root = etree.Element("plist", version="1.0") - el = totree( - value, - sort_keys=sort_keys, - skipkeys=skipkeys, - use_builtin_types=use_builtin_types, - pretty_print=pretty_print, - ) - root.append(el) - tree = etree.ElementTree(root) - # we write the doctype ourselves instead of using the 'doctype' argument - # of 'write' method, becuse lxml will force adding a '\n' even when - # pretty_print is False. - if pretty_print: - header = b"\n".join((XML_DECLARATION, PLIST_DOCTYPE, b"")) - else: - header = XML_DECLARATION + PLIST_DOCTYPE - fp.write(header) - tree.write( # type: ignore - fp, - encoding="utf-8", - pretty_print=pretty_print, - xml_declaration=False, - ) - - -def dumps( - value: PlistEncodable, - sort_keys: bool = True, - skipkeys: bool = False, - use_builtin_types: Optional[bool] = None, - pretty_print: bool = True, -) -> bytes: - """Write a Python object to a string in plist format. - - Args: - value: An object to write. 
- sort_keys (bool): Whether keys of dictionaries should be sorted. - skipkeys (bool): Whether to silently skip non-string dictionary - keys. - use_builtin_types (bool): If true, byte strings will be - encoded in Base-64 and wrapped in a ``data`` tag; if - false, they will be either stored as strings or an - exception raised if they cannot be represented. Defaults - pretty_print (bool): Whether to indent the output. - indent_level (int): Level of indentation when serializing. - - Returns: - string: A plist representation of the Python object. - - Raises: - ``TypeError`` - if non-string dictionary keys are serialized - and ``skipkeys`` is false. - ``ValueError`` - if non-representable binary data is present - and `use_builtin_types` is false. - """ - fp = BytesIO() - dump( - value, - fp, - sort_keys=sort_keys, - skipkeys=skipkeys, - use_builtin_types=use_builtin_types, - pretty_print=pretty_print, - ) - return fp.getvalue() diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/gradio/templates/frontend/assets/Example-cb68aa64.css b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/gradio/templates/frontend/assets/Example-cb68aa64.css deleted file mode 100644 index 6d7fa6f62af721fffb7f3366cc916cbe2c2b6113..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/gradio/templates/frontend/assets/Example-cb68aa64.css +++ /dev/null @@ -1 +0,0 @@ -img.svelte-2xi6dn{max-width:100%;max-height:100%;border-radius:var(--radius-lg);max-width:none}.container.selected.svelte-5cqjmr{border-color:var(--border-color-accent)}.container.table.svelte-5cqjmr{margin:0 auto;border:2px solid var(--border-color-primary);border-radius:var(--radius-lg);width:var(--size-20);height:var(--size-20);object-fit:cover}.container.gallery.svelte-5cqjmr{border:2px solid var(--border-color-primary);height:var(--size-20);max-height:var(--size-20);object-fit:cover} diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/numpy/array_api/_manipulation_functions.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/numpy/array_api/_manipulation_functions.py deleted file mode 100644 index 556bde7d0b07c14a3f7c35c57859b6fe253f8c18..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/numpy/array_api/_manipulation_functions.py +++ /dev/null @@ -1,112 +0,0 @@ -from __future__ import annotations - -from ._array_object import Array -from ._data_type_functions import result_type - -from typing import List, Optional, Tuple, Union - -import numpy as np - -# Note: the function name is different here -def concat( - arrays: Union[Tuple[Array, ...], List[Array]], /, *, axis: Optional[int] = 0 -) -> Array: - """ - Array API compatible wrapper for :py:func:`np.concatenate `. - - See its docstring for more information. - """ - # Note: Casting rules here are different from the np.concatenate default - # (no for scalars with axis=None, no cross-kind casting) - dtype = result_type(*arrays) - arrays = tuple(a._array for a in arrays) - return Array._new(np.concatenate(arrays, axis=axis, dtype=dtype)) - - -def expand_dims(x: Array, /, *, axis: int) -> Array: - """ - Array API compatible wrapper for :py:func:`np.expand_dims `. - - See its docstring for more information. - """ - return Array._new(np.expand_dims(x._array, axis)) - - -def flip(x: Array, /, *, axis: Optional[Union[int, Tuple[int, ...]]] = None) -> Array: - """ - Array API compatible wrapper for :py:func:`np.flip `. 
- - See its docstring for more information. - """ - return Array._new(np.flip(x._array, axis=axis)) - - -# Note: The function name is different here (see also matrix_transpose). -# Unlike transpose(), the axes argument is required. -def permute_dims(x: Array, /, axes: Tuple[int, ...]) -> Array: - """ - Array API compatible wrapper for :py:func:`np.transpose `. - - See its docstring for more information. - """ - return Array._new(np.transpose(x._array, axes)) - - -# Note: the optional argument is called 'shape', not 'newshape' -def reshape(x: Array, - /, - shape: Tuple[int, ...], - *, - copy: Optional[Bool] = None) -> Array: - """ - Array API compatible wrapper for :py:func:`np.reshape `. - - See its docstring for more information. - """ - - data = x._array - if copy: - data = np.copy(data) - - reshaped = np.reshape(data, shape) - - if copy is False and not np.shares_memory(data, reshaped): - raise AttributeError("Incompatible shape for in-place modification.") - - return Array._new(reshaped) - - -def roll( - x: Array, - /, - shift: Union[int, Tuple[int, ...]], - *, - axis: Optional[Union[int, Tuple[int, ...]]] = None, -) -> Array: - """ - Array API compatible wrapper for :py:func:`np.roll `. - - See its docstring for more information. - """ - return Array._new(np.roll(x._array, shift, axis=axis)) - - -def squeeze(x: Array, /, axis: Union[int, Tuple[int, ...]]) -> Array: - """ - Array API compatible wrapper for :py:func:`np.squeeze `. - - See its docstring for more information. - """ - return Array._new(np.squeeze(x._array, axis=axis)) - - -def stack(arrays: Union[Tuple[Array, ...], List[Array]], /, *, axis: int = 0) -> Array: - """ - Array API compatible wrapper for :py:func:`np.stack `. - - See its docstring for more information. - """ - # Call result type here just to raise on disallowed type combinations - result_type(*arrays) - arrays = tuple(a._array for a in arrays) - return Array._new(np.stack(arrays, axis=axis)) diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/numpy/f2py/tests/src/crackfortran/privatemod.f90 b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/numpy/f2py/tests/src/crackfortran/privatemod.f90 deleted file mode 100644 index 2674c214767b33663e51ee1d32ad7a1792c92680..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/numpy/f2py/tests/src/crackfortran/privatemod.f90 +++ /dev/null @@ -1,11 +0,0 @@ -module foo - private - integer :: a - public :: setA - integer :: b -contains - subroutine setA(v) - integer, intent(in) :: v - a = v - end subroutine setA -end module foo diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/arrays/timedeltas/test_cumulative.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/arrays/timedeltas/test_cumulative.py deleted file mode 100644 index b321dc05bef2785813a9e66d4b7514a2364dc07f..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/arrays/timedeltas/test_cumulative.py +++ /dev/null @@ -1,19 +0,0 @@ -import pytest - -import pandas._testing as tm -from pandas.core.arrays import TimedeltaArray - - -class TestAccumulator: - def test_accumulators_disallowed(self): - # GH#50297 - arr = TimedeltaArray._from_sequence_not_strict(["1D", "2D"]) - with pytest.raises(TypeError, match="cumprod not supported"): - arr._accumulate("cumprod") - - def test_cumsum(self): - # GH#50297 - arr = 
TimedeltaArray._from_sequence_not_strict(["1D", "2D"]) - result = arr._accumulate("cumsum") - expected = TimedeltaArray._from_sequence_not_strict(["1D", "3D"]) - tm.assert_timedelta_array_equal(result, expected) diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/frame/methods/test_to_numpy.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/frame/methods/test_to_numpy.py deleted file mode 100644 index bdb9b2c05506124abfbdbb656fd088c5e8c1cee0..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/frame/methods/test_to_numpy.py +++ /dev/null @@ -1,49 +0,0 @@ -import numpy as np - -import pandas.util._test_decorators as td - -from pandas import ( - DataFrame, - Timestamp, -) -import pandas._testing as tm - - -class TestToNumpy: - def test_to_numpy(self): - df = DataFrame({"A": [1, 2], "B": [3, 4.5]}) - expected = np.array([[1, 3], [2, 4.5]]) - result = df.to_numpy() - tm.assert_numpy_array_equal(result, expected) - - def test_to_numpy_dtype(self): - df = DataFrame({"A": [1, 2], "B": [3, 4.5]}) - expected = np.array([[1, 3], [2, 4]], dtype="int64") - result = df.to_numpy(dtype="int64") - tm.assert_numpy_array_equal(result, expected) - - @td.skip_array_manager_invalid_test - def test_to_numpy_copy(self, using_copy_on_write): - arr = np.random.default_rng(2).standard_normal((4, 3)) - df = DataFrame(arr) - if using_copy_on_write: - assert df.values.base is not arr - assert df.to_numpy(copy=False).base is df.values.base - else: - assert df.values.base is arr - assert df.to_numpy(copy=False).base is arr - assert df.to_numpy(copy=True).base is not arr - - # we still don't want a copy when na_value=np.nan is passed, - # and that can be respected because we are already numpy-float - if using_copy_on_write: - assert df.to_numpy(copy=False).base is df.values.base - else: - assert df.to_numpy(copy=False, na_value=np.nan).base is arr - - def test_to_numpy_mixed_dtype_to_str(self): - # https://github.com/pandas-dev/pandas/issues/35455 - df = DataFrame([[Timestamp("2020-01-01 00:00:00"), 100.0]]) - result = df.to_numpy(dtype=str) - expected = np.array([["2020-01-01 00:00:00", "100.0"]], dtype=str) - tm.assert_numpy_array_equal(result, expected) diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pip/_vendor/idna/__init__.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pip/_vendor/idna/__init__.py deleted file mode 100644 index a40eeafcc914108ca79c5d83d6e81da1b29c6e80..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pip/_vendor/idna/__init__.py +++ /dev/null @@ -1,44 +0,0 @@ -from .package_data import __version__ -from .core import ( - IDNABidiError, - IDNAError, - InvalidCodepoint, - InvalidCodepointContext, - alabel, - check_bidi, - check_hyphen_ok, - check_initial_combiner, - check_label, - check_nfc, - decode, - encode, - ulabel, - uts46_remap, - valid_contextj, - valid_contexto, - valid_label_length, - valid_string_length, -) -from .intranges import intranges_contain - -__all__ = [ - "IDNABidiError", - "IDNAError", - "InvalidCodepoint", - "InvalidCodepointContext", - "alabel", - "check_bidi", - "check_hyphen_ok", - "check_initial_combiner", - "check_label", - "check_nfc", - "decode", - "encode", - "intranges_contain", - "ulabel", - "uts46_remap", - "valid_contextj", - "valid_contexto", - "valid_label_length", - "valid_string_length", -] diff --git 
a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pkg_resources/_vendor/packaging/tags.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pkg_resources/_vendor/packaging/tags.py deleted file mode 100644 index 9064910b8bafe2d60ce5fca8897226f5e0fb8f8f..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pkg_resources/_vendor/packaging/tags.py +++ /dev/null @@ -1,751 +0,0 @@ -# This file is dual licensed under the terms of the Apache License, Version -# 2.0, and the BSD License. See the LICENSE file in the root of this repository -# for complete details. - -from __future__ import absolute_import - -import distutils.util - -try: - from importlib.machinery import EXTENSION_SUFFIXES -except ImportError: # pragma: no cover - import imp - - EXTENSION_SUFFIXES = [x[0] for x in imp.get_suffixes()] - del imp -import logging -import os -import platform -import re -import struct -import sys -import sysconfig -import warnings - -from ._typing import TYPE_CHECKING, cast - -if TYPE_CHECKING: # pragma: no cover - from typing import ( - Dict, - FrozenSet, - IO, - Iterable, - Iterator, - List, - Optional, - Sequence, - Tuple, - Union, - ) - - PythonVersion = Sequence[int] - MacVersion = Tuple[int, int] - GlibcVersion = Tuple[int, int] - - -logger = logging.getLogger(__name__) - -INTERPRETER_SHORT_NAMES = { - "python": "py", # Generic. - "cpython": "cp", - "pypy": "pp", - "ironpython": "ip", - "jython": "jy", -} # type: Dict[str, str] - - -_32_BIT_INTERPRETER = sys.maxsize <= 2 ** 32 - - -class Tag(object): - """ - A representation of the tag triple for a wheel. - - Instances are considered immutable and thus are hashable. Equality checking - is also supported. - """ - - __slots__ = ["_interpreter", "_abi", "_platform"] - - def __init__(self, interpreter, abi, platform): - # type: (str, str, str) -> None - self._interpreter = interpreter.lower() - self._abi = abi.lower() - self._platform = platform.lower() - - @property - def interpreter(self): - # type: () -> str - return self._interpreter - - @property - def abi(self): - # type: () -> str - return self._abi - - @property - def platform(self): - # type: () -> str - return self._platform - - def __eq__(self, other): - # type: (object) -> bool - if not isinstance(other, Tag): - return NotImplemented - - return ( - (self.platform == other.platform) - and (self.abi == other.abi) - and (self.interpreter == other.interpreter) - ) - - def __hash__(self): - # type: () -> int - return hash((self._interpreter, self._abi, self._platform)) - - def __str__(self): - # type: () -> str - return "{}-{}-{}".format(self._interpreter, self._abi, self._platform) - - def __repr__(self): - # type: () -> str - return "<{self} @ {self_id}>".format(self=self, self_id=id(self)) - - -def parse_tag(tag): - # type: (str) -> FrozenSet[Tag] - """ - Parses the provided tag (e.g. `py3-none-any`) into a frozenset of Tag instances. - - Returning a set is required due to the possibility that the tag is a - compressed tag set. - """ - tags = set() - interpreters, abis, platforms = tag.split("-") - for interpreter in interpreters.split("."): - for abi in abis.split("."): - for platform_ in platforms.split("."): - tags.add(Tag(interpreter, abi, platform_)) - return frozenset(tags) - - -def _warn_keyword_parameter(func_name, kwargs): - # type: (str, Dict[str, bool]) -> bool - """ - Backwards-compatibility with Python 2.7 to allow treating 'warn' as keyword-only. 
- """ - if not kwargs: - return False - elif len(kwargs) > 1 or "warn" not in kwargs: - kwargs.pop("warn", None) - arg = next(iter(kwargs.keys())) - raise TypeError( - "{}() got an unexpected keyword argument {!r}".format(func_name, arg) - ) - return kwargs["warn"] - - -def _get_config_var(name, warn=False): - # type: (str, bool) -> Union[int, str, None] - value = sysconfig.get_config_var(name) - if value is None and warn: - logger.debug( - "Config variable '%s' is unset, Python ABI tag may be incorrect", name - ) - return value - - -def _normalize_string(string): - # type: (str) -> str - return string.replace(".", "_").replace("-", "_") - - -def _abi3_applies(python_version): - # type: (PythonVersion) -> bool - """ - Determine if the Python version supports abi3. - - PEP 384 was first implemented in Python 3.2. - """ - return len(python_version) > 1 and tuple(python_version) >= (3, 2) - - -def _cpython_abis(py_version, warn=False): - # type: (PythonVersion, bool) -> List[str] - py_version = tuple(py_version) # To allow for version comparison. - abis = [] - version = _version_nodot(py_version[:2]) - debug = pymalloc = ucs4 = "" - with_debug = _get_config_var("Py_DEBUG", warn) - has_refcount = hasattr(sys, "gettotalrefcount") - # Windows doesn't set Py_DEBUG, so checking for support of debug-compiled - # extension modules is the best option. - # https://github.com/pypa/pip/issues/3383#issuecomment-173267692 - has_ext = "_d.pyd" in EXTENSION_SUFFIXES - if with_debug or (with_debug is None and (has_refcount or has_ext)): - debug = "d" - if py_version < (3, 8): - with_pymalloc = _get_config_var("WITH_PYMALLOC", warn) - if with_pymalloc or with_pymalloc is None: - pymalloc = "m" - if py_version < (3, 3): - unicode_size = _get_config_var("Py_UNICODE_SIZE", warn) - if unicode_size == 4 or ( - unicode_size is None and sys.maxunicode == 0x10FFFF - ): - ucs4 = "u" - elif debug: - # Debug builds can also load "normal" extension modules. - # We can also assume no UCS-4 or pymalloc requirement. - abis.append("cp{version}".format(version=version)) - abis.insert( - 0, - "cp{version}{debug}{pymalloc}{ucs4}".format( - version=version, debug=debug, pymalloc=pymalloc, ucs4=ucs4 - ), - ) - return abis - - -def cpython_tags( - python_version=None, # type: Optional[PythonVersion] - abis=None, # type: Optional[Iterable[str]] - platforms=None, # type: Optional[Iterable[str]] - **kwargs # type: bool -): - # type: (...) -> Iterator[Tag] - """ - Yields the tags for a CPython interpreter. - - The tags consist of: - - cp-- - - cp-abi3- - - cp-none- - - cp-abi3- # Older Python versions down to 3.2. - - If python_version only specifies a major version then user-provided ABIs and - the 'none' ABItag will be used. - - If 'abi3' or 'none' are specified in 'abis' then they will be yielded at - their normal position and not at the beginning. - """ - warn = _warn_keyword_parameter("cpython_tags", kwargs) - if not python_version: - python_version = sys.version_info[:2] - - interpreter = "cp{}".format(_version_nodot(python_version[:2])) - - if abis is None: - if len(python_version) > 1: - abis = _cpython_abis(python_version, warn) - else: - abis = [] - abis = list(abis) - # 'abi3' and 'none' are explicitly handled later. 
- for explicit_abi in ("abi3", "none"): - try: - abis.remove(explicit_abi) - except ValueError: - pass - - platforms = list(platforms or _platform_tags()) - for abi in abis: - for platform_ in platforms: - yield Tag(interpreter, abi, platform_) - if _abi3_applies(python_version): - for tag in (Tag(interpreter, "abi3", platform_) for platform_ in platforms): - yield tag - for tag in (Tag(interpreter, "none", platform_) for platform_ in platforms): - yield tag - - if _abi3_applies(python_version): - for minor_version in range(python_version[1] - 1, 1, -1): - for platform_ in platforms: - interpreter = "cp{version}".format( - version=_version_nodot((python_version[0], minor_version)) - ) - yield Tag(interpreter, "abi3", platform_) - - -def _generic_abi(): - # type: () -> Iterator[str] - abi = sysconfig.get_config_var("SOABI") - if abi: - yield _normalize_string(abi) - - -def generic_tags( - interpreter=None, # type: Optional[str] - abis=None, # type: Optional[Iterable[str]] - platforms=None, # type: Optional[Iterable[str]] - **kwargs # type: bool -): - # type: (...) -> Iterator[Tag] - """ - Yields the tags for a generic interpreter. - - The tags consist of: - - -- - - The "none" ABI will be added if it was not explicitly provided. - """ - warn = _warn_keyword_parameter("generic_tags", kwargs) - if not interpreter: - interp_name = interpreter_name() - interp_version = interpreter_version(warn=warn) - interpreter = "".join([interp_name, interp_version]) - if abis is None: - abis = _generic_abi() - platforms = list(platforms or _platform_tags()) - abis = list(abis) - if "none" not in abis: - abis.append("none") - for abi in abis: - for platform_ in platforms: - yield Tag(interpreter, abi, platform_) - - -def _py_interpreter_range(py_version): - # type: (PythonVersion) -> Iterator[str] - """ - Yields Python versions in descending order. - - After the latest version, the major-only version will be yielded, and then - all previous versions of that major version. - """ - if len(py_version) > 1: - yield "py{version}".format(version=_version_nodot(py_version[:2])) - yield "py{major}".format(major=py_version[0]) - if len(py_version) > 1: - for minor in range(py_version[1] - 1, -1, -1): - yield "py{version}".format(version=_version_nodot((py_version[0], minor))) - - -def compatible_tags( - python_version=None, # type: Optional[PythonVersion] - interpreter=None, # type: Optional[str] - platforms=None, # type: Optional[Iterable[str]] -): - # type: (...) -> Iterator[Tag] - """ - Yields the sequence of tags that are compatible with a specific version of Python. - - The tags consist of: - - py*-none- - - -none-any # ... if `interpreter` is provided. 
- - py*-none-any - """ - if not python_version: - python_version = sys.version_info[:2] - platforms = list(platforms or _platform_tags()) - for version in _py_interpreter_range(python_version): - for platform_ in platforms: - yield Tag(version, "none", platform_) - if interpreter: - yield Tag(interpreter, "none", "any") - for version in _py_interpreter_range(python_version): - yield Tag(version, "none", "any") - - -def _mac_arch(arch, is_32bit=_32_BIT_INTERPRETER): - # type: (str, bool) -> str - if not is_32bit: - return arch - - if arch.startswith("ppc"): - return "ppc" - - return "i386" - - -def _mac_binary_formats(version, cpu_arch): - # type: (MacVersion, str) -> List[str] - formats = [cpu_arch] - if cpu_arch == "x86_64": - if version < (10, 4): - return [] - formats.extend(["intel", "fat64", "fat32"]) - - elif cpu_arch == "i386": - if version < (10, 4): - return [] - formats.extend(["intel", "fat32", "fat"]) - - elif cpu_arch == "ppc64": - # TODO: Need to care about 32-bit PPC for ppc64 through 10.2? - if version > (10, 5) or version < (10, 4): - return [] - formats.append("fat64") - - elif cpu_arch == "ppc": - if version > (10, 6): - return [] - formats.extend(["fat32", "fat"]) - - formats.append("universal") - return formats - - -def mac_platforms(version=None, arch=None): - # type: (Optional[MacVersion], Optional[str]) -> Iterator[str] - """ - Yields the platform tags for a macOS system. - - The `version` parameter is a two-item tuple specifying the macOS version to - generate platform tags for. The `arch` parameter is the CPU architecture to - generate platform tags for. Both parameters default to the appropriate value - for the current system. - """ - version_str, _, cpu_arch = platform.mac_ver() # type: ignore - if version is None: - version = cast("MacVersion", tuple(map(int, version_str.split(".")[:2]))) - else: - version = version - if arch is None: - arch = _mac_arch(cpu_arch) - else: - arch = arch - for minor_version in range(version[1], -1, -1): - compat_version = version[0], minor_version - binary_formats = _mac_binary_formats(compat_version, arch) - for binary_format in binary_formats: - yield "macosx_{major}_{minor}_{binary_format}".format( - major=compat_version[0], - minor=compat_version[1], - binary_format=binary_format, - ) - - -# From PEP 513. -def _is_manylinux_compatible(name, glibc_version): - # type: (str, GlibcVersion) -> bool - # Check for presence of _manylinux module. - try: - import _manylinux # noqa - - return bool(getattr(_manylinux, name + "_compatible")) - except (ImportError, AttributeError): - # Fall through to heuristic check below. - pass - - return _have_compatible_glibc(*glibc_version) - - -def _glibc_version_string(): - # type: () -> Optional[str] - # Returns glibc version string, or None if not using glibc. - return _glibc_version_string_confstr() or _glibc_version_string_ctypes() - - -def _glibc_version_string_confstr(): - # type: () -> Optional[str] - """ - Primary implementation of glibc_version_string using os.confstr. - """ - # os.confstr is quite a bit faster than ctypes.DLL. It's also less likely - # to be broken or missing. This strategy is used in the standard library - # platform module. - # https://github.com/python/cpython/blob/fcf1d003bf4f0100c9d0921ff3d70e1127ca1b71/Lib/platform.py#L175-L183 - try: - # os.confstr("CS_GNU_LIBC_VERSION") returns a string like "glibc 2.17". 
- version_string = os.confstr( # type: ignore[attr-defined] # noqa: F821 - "CS_GNU_LIBC_VERSION" - ) - assert version_string is not None - _, version = version_string.split() # type: Tuple[str, str] - except (AssertionError, AttributeError, OSError, ValueError): - # os.confstr() or CS_GNU_LIBC_VERSION not available (or a bad value)... - return None - return version - - -def _glibc_version_string_ctypes(): - # type: () -> Optional[str] - """ - Fallback implementation of glibc_version_string using ctypes. - """ - try: - import ctypes - except ImportError: - return None - - # ctypes.CDLL(None) internally calls dlopen(NULL), and as the dlopen - # manpage says, "If filename is NULL, then the returned handle is for the - # main program". This way we can let the linker do the work to figure out - # which libc our process is actually using. - # - # Note: typeshed is wrong here so we are ignoring this line. - process_namespace = ctypes.CDLL(None) # type: ignore - try: - gnu_get_libc_version = process_namespace.gnu_get_libc_version - except AttributeError: - # Symbol doesn't exist -> therefore, we are not linked to - # glibc. - return None - - # Call gnu_get_libc_version, which returns a string like "2.5" - gnu_get_libc_version.restype = ctypes.c_char_p - version_str = gnu_get_libc_version() # type: str - # py2 / py3 compatibility: - if not isinstance(version_str, str): - version_str = version_str.decode("ascii") - - return version_str - - -# Separated out from have_compatible_glibc for easier unit testing. -def _check_glibc_version(version_str, required_major, minimum_minor): - # type: (str, int, int) -> bool - # Parse string and check against requested version. - # - # We use a regexp instead of str.split because we want to discard any - # random junk that might come after the minor version -- this might happen - # in patched/forked versions of glibc (e.g. Linaro's version of glibc - # uses version strings like "2.20-2014.11"). See gh-3588. - m = re.match(r"(?P[0-9]+)\.(?P[0-9]+)", version_str) - if not m: - warnings.warn( - "Expected glibc version with 2 components major.minor," - " got: %s" % version_str, - RuntimeWarning, - ) - return False - return ( - int(m.group("major")) == required_major - and int(m.group("minor")) >= minimum_minor - ) - - -def _have_compatible_glibc(required_major, minimum_minor): - # type: (int, int) -> bool - version_str = _glibc_version_string() - if version_str is None: - return False - return _check_glibc_version(version_str, required_major, minimum_minor) - - -# Python does not provide platform information at sufficient granularity to -# identify the architecture of the running executable in some cases, so we -# determine it dynamically by reading the information from the running -# process. This only applies on Linux, which uses the ELF format. -class _ELFFileHeader(object): - # https://en.wikipedia.org/wiki/Executable_and_Linkable_Format#File_header - class _InvalidELFFileHeader(ValueError): - """ - An invalid ELF file header was found. 
- """ - - ELF_MAGIC_NUMBER = 0x7F454C46 - ELFCLASS32 = 1 - ELFCLASS64 = 2 - ELFDATA2LSB = 1 - ELFDATA2MSB = 2 - EM_386 = 3 - EM_S390 = 22 - EM_ARM = 40 - EM_X86_64 = 62 - EF_ARM_ABIMASK = 0xFF000000 - EF_ARM_ABI_VER5 = 0x05000000 - EF_ARM_ABI_FLOAT_HARD = 0x00000400 - - def __init__(self, file): - # type: (IO[bytes]) -> None - def unpack(fmt): - # type: (str) -> int - try: - (result,) = struct.unpack( - fmt, file.read(struct.calcsize(fmt)) - ) # type: (int, ) - except struct.error: - raise _ELFFileHeader._InvalidELFFileHeader() - return result - - self.e_ident_magic = unpack(">I") - if self.e_ident_magic != self.ELF_MAGIC_NUMBER: - raise _ELFFileHeader._InvalidELFFileHeader() - self.e_ident_class = unpack("B") - if self.e_ident_class not in {self.ELFCLASS32, self.ELFCLASS64}: - raise _ELFFileHeader._InvalidELFFileHeader() - self.e_ident_data = unpack("B") - if self.e_ident_data not in {self.ELFDATA2LSB, self.ELFDATA2MSB}: - raise _ELFFileHeader._InvalidELFFileHeader() - self.e_ident_version = unpack("B") - self.e_ident_osabi = unpack("B") - self.e_ident_abiversion = unpack("B") - self.e_ident_pad = file.read(7) - format_h = "H" - format_i = "I" - format_q = "Q" - format_p = format_i if self.e_ident_class == self.ELFCLASS32 else format_q - self.e_type = unpack(format_h) - self.e_machine = unpack(format_h) - self.e_version = unpack(format_i) - self.e_entry = unpack(format_p) - self.e_phoff = unpack(format_p) - self.e_shoff = unpack(format_p) - self.e_flags = unpack(format_i) - self.e_ehsize = unpack(format_h) - self.e_phentsize = unpack(format_h) - self.e_phnum = unpack(format_h) - self.e_shentsize = unpack(format_h) - self.e_shnum = unpack(format_h) - self.e_shstrndx = unpack(format_h) - - -def _get_elf_header(): - # type: () -> Optional[_ELFFileHeader] - try: - with open(sys.executable, "rb") as f: - elf_header = _ELFFileHeader(f) - except (IOError, OSError, TypeError, _ELFFileHeader._InvalidELFFileHeader): - return None - return elf_header - - -def _is_linux_armhf(): - # type: () -> bool - # hard-float ABI can be detected from the ELF header of the running - # process - # https://static.docs.arm.com/ihi0044/g/aaelf32.pdf - elf_header = _get_elf_header() - if elf_header is None: - return False - result = elf_header.e_ident_class == elf_header.ELFCLASS32 - result &= elf_header.e_ident_data == elf_header.ELFDATA2LSB - result &= elf_header.e_machine == elf_header.EM_ARM - result &= ( - elf_header.e_flags & elf_header.EF_ARM_ABIMASK - ) == elf_header.EF_ARM_ABI_VER5 - result &= ( - elf_header.e_flags & elf_header.EF_ARM_ABI_FLOAT_HARD - ) == elf_header.EF_ARM_ABI_FLOAT_HARD - return result - - -def _is_linux_i686(): - # type: () -> bool - elf_header = _get_elf_header() - if elf_header is None: - return False - result = elf_header.e_ident_class == elf_header.ELFCLASS32 - result &= elf_header.e_ident_data == elf_header.ELFDATA2LSB - result &= elf_header.e_machine == elf_header.EM_386 - return result - - -def _have_compatible_manylinux_abi(arch): - # type: (str) -> bool - if arch == "armv7l": - return _is_linux_armhf() - if arch == "i686": - return _is_linux_i686() - return True - - -def _linux_platforms(is_32bit=_32_BIT_INTERPRETER): - # type: (bool) -> Iterator[str] - linux = _normalize_string(distutils.util.get_platform()) - if is_32bit: - if linux == "linux_x86_64": - linux = "linux_i686" - elif linux == "linux_aarch64": - linux = "linux_armv7l" - manylinux_support = [] - _, arch = linux.split("_", 1) - if _have_compatible_manylinux_abi(arch): - if arch in {"x86_64", "i686", "aarch64", 
"armv7l", "ppc64", "ppc64le", "s390x"}: - manylinux_support.append( - ("manylinux2014", (2, 17)) - ) # CentOS 7 w/ glibc 2.17 (PEP 599) - if arch in {"x86_64", "i686"}: - manylinux_support.append( - ("manylinux2010", (2, 12)) - ) # CentOS 6 w/ glibc 2.12 (PEP 571) - manylinux_support.append( - ("manylinux1", (2, 5)) - ) # CentOS 5 w/ glibc 2.5 (PEP 513) - manylinux_support_iter = iter(manylinux_support) - for name, glibc_version in manylinux_support_iter: - if _is_manylinux_compatible(name, glibc_version): - yield linux.replace("linux", name) - break - # Support for a later manylinux implies support for an earlier version. - for name, _ in manylinux_support_iter: - yield linux.replace("linux", name) - yield linux - - -def _generic_platforms(): - # type: () -> Iterator[str] - yield _normalize_string(distutils.util.get_platform()) - - -def _platform_tags(): - # type: () -> Iterator[str] - """ - Provides the platform tags for this installation. - """ - if platform.system() == "Darwin": - return mac_platforms() - elif platform.system() == "Linux": - return _linux_platforms() - else: - return _generic_platforms() - - -def interpreter_name(): - # type: () -> str - """ - Returns the name of the running interpreter. - """ - try: - name = sys.implementation.name # type: ignore - except AttributeError: # pragma: no cover - # Python 2.7 compatibility. - name = platform.python_implementation().lower() - return INTERPRETER_SHORT_NAMES.get(name) or name - - -def interpreter_version(**kwargs): - # type: (bool) -> str - """ - Returns the version of the running interpreter. - """ - warn = _warn_keyword_parameter("interpreter_version", kwargs) - version = _get_config_var("py_version_nodot", warn=warn) - if version: - version = str(version) - else: - version = _version_nodot(sys.version_info[:2]) - return version - - -def _version_nodot(version): - # type: (PythonVersion) -> str - if any(v >= 10 for v in version): - sep = "_" - else: - sep = "" - return sep.join(map(str, version)) - - -def sys_tags(**kwargs): - # type: (bool) -> Iterator[Tag] - """ - Returns the sequence of tag triples for the running interpreter. - - The order of the sequence corresponds to priority order for the - interpreter, from most to least important. - """ - warn = _warn_keyword_parameter("sys_tags", kwargs) - - interp_name = interpreter_name() - if interp_name == "cp": - for tag in cpython_tags(warn=warn): - yield tag - else: - for tag in generic_tags(): - yield tag - - for tag in compatible_tags(): - yield tag diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pygments/lexers/grammar_notation.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pygments/lexers/grammar_notation.py deleted file mode 100644 index 792713341413838b3fe406cbee2e49ca849fcefd..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pygments/lexers/grammar_notation.py +++ /dev/null @@ -1,265 +0,0 @@ -""" - pygments.lexers.grammar_notation - ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ - - Lexers for grammar notations like BNF. - - :copyright: Copyright 2006-2023 by the Pygments team, see AUTHORS. - :license: BSD, see LICENSE for details. 
-""" - -from pygments.lexer import RegexLexer, bygroups, include, this, using, words -from pygments.token import Comment, Keyword, Literal, Name, Number, \ - Operator, Punctuation, String, Text, Whitespace - -__all__ = ['BnfLexer', 'AbnfLexer', 'JsgfLexer', 'PegLexer'] - - -class BnfLexer(RegexLexer): - """ - This lexer is for grammar notations which are similar to - original BNF. - - In order to maximize a number of targets of this lexer, - let's decide some designs: - - * We don't distinguish `Terminal Symbol`. - - * We do assume that `NonTerminal Symbol` are always enclosed - with arrow brackets. - - * We do assume that `NonTerminal Symbol` may include - any printable characters except arrow brackets and ASCII 0x20. - This assumption is for `RBNF `_. - - * We do assume that target notation doesn't support comment. - - * We don't distinguish any operators and punctuation except - `::=`. - - Though these decision making might cause too minimal highlighting - and you might be disappointed, but it is reasonable for us. - - .. versionadded:: 2.1 - """ - - name = 'BNF' - aliases = ['bnf'] - filenames = ['*.bnf'] - mimetypes = ['text/x-bnf'] - - tokens = { - 'root': [ - (r'(<)([ -;=?-~]+)(>)', - bygroups(Punctuation, Name.Class, Punctuation)), - - # an only operator - (r'::=', Operator), - - # fallback - (r'[^<>:]+', Text), # for performance - (r'.', Text), - ], - } - - -class AbnfLexer(RegexLexer): - """ - Lexer for IETF 7405 ABNF. - - (Updates `5234 `_) grammars. - - .. versionadded:: 2.1 - """ - - name = 'ABNF' - url = 'http://www.ietf.org/rfc/rfc7405.txt' - aliases = ['abnf'] - filenames = ['*.abnf'] - mimetypes = ['text/x-abnf'] - - _core_rules = ( - 'ALPHA', 'BIT', 'CHAR', 'CR', 'CRLF', 'CTL', 'DIGIT', - 'DQUOTE', 'HEXDIG', 'HTAB', 'LF', 'LWSP', 'OCTET', - 'SP', 'VCHAR', 'WSP') - - tokens = { - 'root': [ - # comment - (r';.*$', Comment.Single), - - # quoted - # double quote itself in this state, it is as '%x22'. - (r'(%[si])?"[^"]*"', Literal), - - # binary (but i have never seen...) - (r'%b[01]+\-[01]+\b', Literal), # range - (r'%b[01]+(\.[01]+)*\b', Literal), # concat - - # decimal - (r'%d[0-9]+\-[0-9]+\b', Literal), # range - (r'%d[0-9]+(\.[0-9]+)*\b', Literal), # concat - - # hexadecimal - (r'%x[0-9a-fA-F]+\-[0-9a-fA-F]+\b', Literal), # range - (r'%x[0-9a-fA-F]+(\.[0-9a-fA-F]+)*\b', Literal), # concat - - # repetition (*element) including nRule - (r'\b[0-9]+\*[0-9]+', Operator), - (r'\b[0-9]+\*', Operator), - (r'\b[0-9]+', Operator), - (r'\*', Operator), - - # Strictly speaking, these are not keyword but - # are called `Core Rule'. - (words(_core_rules, suffix=r'\b'), Keyword), - - # nonterminals (ALPHA *(ALPHA / DIGIT / "-")) - (r'[a-zA-Z][a-zA-Z0-9-]*\b', Name.Class), - - # operators - (r'(=/|=|/)', Operator), - - # punctuation - (r'[\[\]()]', Punctuation), - - # fallback - (r'\s+', Whitespace), - (r'.', Text), - ], - } - - -class JsgfLexer(RegexLexer): - """ - For JSpeech Grammar Format grammars. - - .. 
versionadded:: 2.2 - """ - name = 'JSGF' - url = 'https://www.w3.org/TR/jsgf/' - aliases = ['jsgf'] - filenames = ['*.jsgf'] - mimetypes = ['application/jsgf', 'application/x-jsgf', 'text/jsgf'] - - tokens = { - 'root': [ - include('comments'), - include('non-comments'), - ], - 'comments': [ - (r'/\*\*(?!/)', Comment.Multiline, 'documentation comment'), - (r'/\*[\w\W]*?\*/', Comment.Multiline), - (r'//.*$', Comment.Single), - ], - 'non-comments': [ - (r'\A#JSGF[^;]*', Comment.Preproc), - (r'\s+', Whitespace), - (r';', Punctuation), - (r'[=|()\[\]*+]', Operator), - (r'/[^/]+/', Number.Float), - (r'"', String.Double, 'string'), - (r'\{', String.Other, 'tag'), - (words(('import', 'public'), suffix=r'\b'), Keyword.Reserved), - (r'grammar\b', Keyword.Reserved, 'grammar name'), - (r'(<)(NULL|VOID)(>)', - bygroups(Punctuation, Name.Builtin, Punctuation)), - (r'<', Punctuation, 'rulename'), - (r'\w+|[^\s;=|()\[\]*+/"{<\w]+', Text), - ], - 'string': [ - (r'"', String.Double, '#pop'), - (r'\\.', String.Escape), - (r'[^\\"]+', String.Double), - ], - 'tag': [ - (r'\}', String.Other, '#pop'), - (r'\\.', String.Escape), - (r'[^\\}]+', String.Other), - ], - 'grammar name': [ - (r';', Punctuation, '#pop'), - (r'\s+', Whitespace), - (r'\.', Punctuation), - (r'[^;\s.]+', Name.Namespace), - ], - 'rulename': [ - (r'>', Punctuation, '#pop'), - (r'\*', Punctuation), - (r'\s+', Whitespace), - (r'([^.>]+)(\s*)(\.)', bygroups(Name.Namespace, Text, Punctuation)), - (r'[^.>]+', Name.Constant), - ], - 'documentation comment': [ - (r'\*/', Comment.Multiline, '#pop'), - (r'^(\s*)(\*?)(\s*)(@(?:example|see))(\s+)' - r'([\w\W]*?(?=(?:^\s*\*?\s*@|\*/)))', - bygroups(Whitespace, Comment.Multiline, Whitespace, Comment.Special, - Whitespace, using(this, state='example'))), - (r'(^\s*\*?\s*)(@\S*)', - bygroups(Comment.Multiline, Comment.Special)), - (r'[^*\n@]+|\w|\W', Comment.Multiline), - ], - 'example': [ - (r'(\n\s*)(\*)', bygroups(Whitespace, Comment.Multiline)), - include('non-comments'), - (r'.', Comment.Multiline), - ], - } - - -class PegLexer(RegexLexer): - """ - This lexer is for Parsing Expression Grammars (PEG). - - Various implementations of PEG have made different decisions - regarding the syntax, so let's try to be accommodating: - - * `<-`, `←`, `:`, and `=` are all accepted as rule operators. - - * Both `|` and `/` are choice operators. - - * `^`, `↑`, and `~` are cut operators. - - * A single `a-z` character immediately before a string, or - multiple `a-z` characters following a string, are part of the - string (e.g., `r"..."` or `"..."ilmsuxa`). - - .. 
versionadded:: 2.6 - """ - - name = 'PEG' - url = 'https://bford.info/pub/lang/peg.pdf' - aliases = ['peg'] - filenames = ['*.peg'] - mimetypes = ['text/x-peg'] - - tokens = { - 'root': [ - # Comments - (r'#.*$', Comment.Single), - - # All operators - (r'<-|[←:=/|&!?*+^↑~]', Operator), - - # Other punctuation - (r'[()]', Punctuation), - - # Keywords - (r'\.', Keyword), - - # Character classes - (r'(\[)([^\]]*(?:\\.[^\]\\]*)*)(\])', - bygroups(Punctuation, String, Punctuation)), - - # Single and double quoted strings (with optional modifiers) - (r'[a-z]?"[^"\\]*(?:\\.[^"\\]*)*"[a-z]*', String.Double), - (r"[a-z]?'[^'\\]*(?:\\.[^'\\]*)*'[a-z]*", String.Single), - - # Nonterminals are not whitespace, operators, or punctuation - (r'[^\s<←:=/|&!?*+\^↑~()\[\]"\'#]+', Name.Class), - - # Fallback - (r'.', Text), - ], - } diff --git a/spaces/project-ori/README/README.md b/spaces/project-ori/README/README.md deleted file mode 100644 index bab240184a6c3eabb7dd8db5c7d06d93bc61d663..0000000000000000000000000000000000000000 --- a/spaces/project-ori/README/README.md +++ /dev/null @@ -1,67 +0,0 @@ ---- -title: README -emoji: 🗡️ -colorFrom: green -colorTo: gray -sdk: static -pinned: false ---- - -# AnimeTTS -|Character Name|Huggingface Link| -|:---|:---| -|Kitagawa Marin|https://huggingface.co/spaces/ORI-Muchim/MarinTTS| -|Power|https://huggingface.co/spaces/ORI-Muchim/PowerTTS| -       - -# BanGDreamTTS -|Character Name|Huggingface Link| -|:---|:---| -|Lisa|https://huggingface.co/spaces/kdrkdrkdr/LisaTTS| -       - -# BlueArchiveTTS -|Character Name|Huggingface Link| -|:---|:---| -|Azusa|https://huggingface.co/spaces/kdrkdrkdr/AzusaTTS| -|Hoshino|https://huggingface.co/spaces/kdrkdrkdr/HoshinoTTS| -|Shiroko|https://huggingface.co/spaces/kdrkdrkdr/ShirokoTTS| -|Hina|https://huggingface.co/spaces/kdrkdrkdr/HinaTTS| -|Yuuka|https://huggingface.co/spaces/kdrkdrkdr/YuukaTTS| -|All Characters|https://huggingface.co/spaces/ORI-Muchim/BlueArchiveTTS| -       - -# GenshinTTS -|Character Name|Huggingface Link| -|:---|:---| -|Raiden Shogun|https://huggingface.co/spaces/ORI-Muchim/RaidenTTS| -|Nahida|https://huggingface.co/spaces/ORI-Muchim/NahidaTTS| -|Barbara, Keqing, Yae Miko|https://huggingface.co/spaces/ORI-Muchim/BarKeYaeTTS -|Hutao|https://huggingface.co/spaces/kdrkdrkdr/HutaoTTS| -|Ganyu|https://huggingface.co/spaces/kdrkdrkdr/GanyuStockingHeuungTTS| -|Zhongli|https://huggingface.co/spaces/kdrkdrkdr/ZhongliTTS| -       - -# HappinessDoubleRoomTTS -|Character Name|Huggingface Link| -|:---|:---| -|Minami|https://huggingface.co/spaces/ORI-Muchim/MinamiTTS| -       - -# ONFIRETTS -|Character Name|Huggingface Link| -|:---|:---| -|ONFIRE|https://huggingface.co/spaces/ORI-Muchim/ONFIRETTS| -       - -# ProsekaTTS -|Character Name|Huggingface Link| -|:---|:---| -|All Characters|https://huggingface.co/spaces/kdrkdrkdr/ProsekaTTS| -       - -# StarRailTTS -|Character Name|Huggingface Link| -|:---|:---| -|All Characters|https://huggingface.co/spaces/ORI-Muchim/StarRailTTS| -       \ No newline at end of file diff --git a/spaces/pustozerov/poc-handwriting-ocr/app.py b/spaces/pustozerov/poc-handwriting-ocr/app.py deleted file mode 100644 index 6c85f6e7df98ed0b5ef85d7ef2859693dd19fa8a..0000000000000000000000000000000000000000 --- a/spaces/pustozerov/poc-handwriting-ocr/app.py +++ /dev/null @@ -1,11 +0,0 @@ -import os - -import streamlit as st - -st.set_page_config(layout="wide") -st.markdown('# Handwriting and Machine Learning') -st.markdown("""This set of demos shows the possibilities of various state-of-the-art machine 
learning techniques -associated with handwriting.""") - -os.makedirs("data/sample/", exist_ok=True) -os.makedirs("data/user_data/", exist_ok=True) diff --git a/spaces/quantumiracle-git/OpenBiDexHand/app.py b/spaces/quantumiracle-git/OpenBiDexHand/app.py deleted file mode 100644 index eacd1805543b46cbbc72bb4af27d329538750bbe..0000000000000000000000000000000000000000 --- a/spaces/quantumiracle-git/OpenBiDexHand/app.py +++ /dev/null @@ -1,353 +0,0 @@ -import gradio as gr -import os -import random -import numpy as np -import pandas as pd -import gdown -import base64 -from time import gmtime, strftime -from csv import writer -import json -import zipfile -from os import listdir -from os.path import isfile, join, isdir -from datasets import load_dataset -from hfserver import HuggingFaceDatasetSaver, HuggingFaceDatasetJSONSaver - -ENVS = ['ShadowHand', 'ShadowHandCatchAbreast', 'ShadowHandOver', 'ShadowHandBlockStack', 'ShadowHandCatchUnderarm', -'ShadowHandCatchOver2Underarm', 'ShadowHandBottleCap', 'ShadowHandLiftUnderarm', 'ShadowHandTwoCatchUnderarm', -'ShadowHandDoorOpenInward', 'ShadowHandDoorOpenOutward', 'ShadowHandDoorCloseInward', 'ShadowHandDoorCloseOutward', -'ShadowHandPushBlock', 'ShadowHandKettle', -'ShadowHandScissors', 'ShadowHandPen', 'ShadowHandSwingCup', 'ShadowHandGraspAndPlace', 'ShadowHandSwitch'] - -# download data from huggingface dataset -# dataset = load_dataset("quantumiracle-git/robotinder-data") -# os.remove('.git/hooks/pre-push') # https://github.com/git-lfs/git-lfs/issues/853 -LOAD_DATA_GOOGLE_DRIVE = False - -if LOAD_DATA_GOOGLE_DRIVE: # download data from google drive - # url = 'https://drive.google.com/drive/folders/1JuNQS4R7axTezWj1x4KRAuRt_L26ApxA?usp=sharing' # './processed/' folder in google drive - # url = 'https://drive.google.com/drive/folders/1o8Q9eX-J7F326zv4g2MZWlzR46uVkUF2?usp=sharing' # './processed_zip/' folder in google drive - # url = 'https://drive.google.com/drive/folders/1ZWgpPiZwnWfwlwta8Tu-Jtu2HsS7HAEa?usp=share_link' # './filter_processed_zip/' folder in google drive - # url = 'https://drive.google.com/drive/folders/1ROkuX6rQpyK7vLqF5fL2mggKiMCdKSuY?usp=share_link' # './split_processed_zip/' folder in google drive - - # output = './' - # id = url.split('/')[-1] - # os.system(f"gdown --id {id} -O {output} --folder --no-cookies --remaining-ok") - # # VIDEO_PATH = 'processed_zip' - # # VIDEO_PATH = 'filter_processed_zip' - # VIDEO_PATH = 'split_processed_zip' - - # # unzip the zip files to the same location and delete zip files - # path_to_zip_file = VIDEO_PATH - # zip_files = [join(path_to_zip_file, f) for f in listdir(path_to_zip_file)] - # for f in zip_files: - # if f.endswith(".zip"): - # directory_to_extract_to = path_to_zip_file # extracted file itself contains a folder - # print(f'extract data {f} to {directory_to_extract_to}') - # with zipfile.ZipFile(f, 'r') as zip_ref: - # zip_ref.extractall(directory_to_extract_to) - # os.remove(f) - - ### multiple urls to handle the retrieve error - # urls = [ - # 'https://drive.google.com/drive/folders/1BbQe4XtcsalsvwGVLW9jWCkr-ln5pvyf?usp=share_link', # './filter_processed_zip/1' folder in google drive - # 'https://drive.google.com/drive/folders/1saUTUuObPhMJFguc2J_O0K5woCJjYHci?usp=share_link', # './filter_processed_zip/2' folder in google drive - # 'https://drive.google.com/drive/folders/1Kh9_E28-RH8g8EP1V3DhGI7KRs9LB7YJ?usp=share_link', # './filter_processed_zip/3' folder in google drive - # 'https://drive.google.com/drive/folders/1oE75Dz6hxtaSpNhjD22PmQfgQ-PjnEc0?usp=share_link', # 
'./filter_processed_zip/4' folder in google drive - # 'https://drive.google.com/drive/folders/1XSPEKFqNHpXdLho-bnkT6FZZXssW8JkC?usp=share_link', # './filter_processed_zip/5' folder in google drive - # 'https://drive.google.com/drive/folders/1XwjAHqR7kF1uSyZZIydQMoETfdvi0aPD?usp=share_link', - # 'https://drive.google.com/drive/folders/1TceozOWhLsbqP-w-RkforjAVo1M2zsRP?usp=share_link', - # 'https://drive.google.com/drive/folders/1zAP9eDSW5Eh_isACuZJadXcFaJNqEM9u?usp=share_link', - # 'https://drive.google.com/drive/folders/1oK8fyF9A3Pv5JubvrQMjTE9n66vYlyZN?usp=share_link', - # 'https://drive.google.com/drive/folders/1cezGNjlM0ONMM6C0N_PbZVCGsTyVSR0w?usp=share_link', - # ] - - urls = [ - 'https://drive.google.com/drive/folders/1SF5jQ7HakO3lFXBon57VP83-AwfnrM3F?usp=share_link', # './split_processed_zip/1' folder in google drive - 'https://drive.google.com/drive/folders/13WuS6ow6sm7ws7A5xzCEhR-2XX_YiIu5?usp=share_link', # './split_processed_zip/2' folder in google drive - 'https://drive.google.com/drive/folders/1GWLffJDOyLkubF2C03UFcB7iFpzy1aDy?usp=share_link', # './split_processed_zip/3' folder in google drive - 'https://drive.google.com/drive/folders/1UKAntA7WliD84AUhRN224PkW4vt9agZW?usp=share_link', # './split_processed_zip/4' folder in google drive - 'https://drive.google.com/drive/folders/11cCQw3qb1vJbviVPfBnOVWVzD_VzHdWs?usp=share_link', # './split_processed_zip/5' folder in google drive - 'https://drive.google.com/drive/folders/1Wvy604wCxEdXAwE7r3sE0L0ieXvM__u8?usp=share_link', - 'https://drive.google.com/drive/folders/1BTv_pMTNGm7m3hD65IgBrX880v-rLIaf?usp=share_link', - 'https://drive.google.com/drive/folders/12x7F11ln2VQkqi8-Mu3kng74eLgifM0N?usp=share_link', - 'https://drive.google.com/drive/folders/1OWkOul2CCrqynqpt44Fu1CBxzNNfOFE2?usp=share_link', - 'https://drive.google.com/drive/folders/1ukwsfrbSEqCBNmRSuAYvYBHijWCQh2OU?usp=share_link', - 'https://drive.google.com/drive/folders/1EO7zumR6sVfsWQWCS6zfNs5WuO2Se6WX?usp=share_link', - 'https://drive.google.com/drive/folders/1aw0iBWvvZiSKng0ejRK8xbNoHLVUFCFu?usp=share_link', - 'https://drive.google.com/drive/folders/1szIcxlVyT5WJtzpqYWYlue0n82A6-xtk?usp=share_link', - ] - - output = './' - # VIDEO_PATH = 'processed_zip' - # VIDEO_PATH = 'filter_processed_zip' - VIDEO_PATH = 'split_processed_zip' - for i, url in enumerate(urls): - id = url.split('/')[-1] - os.system(f"gdown --id {id} -O {output} --folder --no-cookies --remaining-ok") - - # unzip the zip files to the same location and delete zip files - path_to_zip_file = str(i+1) - zip_files = [join(path_to_zip_file, f) for f in listdir(path_to_zip_file)] - for f in zip_files: - if f.endswith(".zip"): - directory_to_extract_to = VIDEO_PATH # extracted file itself contains a folder - print(f'extract data {f} to {directory_to_extract_to}') - with zipfile.ZipFile(f, 'r') as zip_ref: - zip_ref.extractall(directory_to_extract_to) - os.remove(f) - -else: - VIDEO_PATH = 'processed-data' - path_to_zip_file = VIDEO_PATH - zip_files = [join(path_to_zip_file, f) for f in listdir(path_to_zip_file)] - for f in zip_files: - if f.endswith(".zip"): - directory_to_extract_to = path_to_zip_file # extracted file itself contains a folder - print(f'extract data {f} to {directory_to_extract_to}') - with zipfile.ZipFile(f, 'r') as zip_ref: - zip_ref.extractall(directory_to_extract_to) - os.remove(f) - -# for test only -# else: # local data -# VIDEO_PATH = 'robotinder-data' - -VIDEO_INFO = os.path.join(VIDEO_PATH, 'video_info.json') - -def inference(video_path): - # for displaying mp4 with autoplay on Gradio - 
with open(video_path, "rb") as f: - data = f.read() - b64 = base64.b64encode(data).decode() - html = ( - f""" - - """ - ) - return html - -def video_identity(video): - return video - -def nan(): - return None - -FORMAT = ['mp4', 'gif'][0] - -def get_huggingface_dataset(): - try: - import huggingface_hub - except (ImportError, ModuleNotFoundError): - raise ImportError( - "Package `huggingface_hub` not found is needed " - "for HuggingFaceDatasetSaver. Try 'pip install huggingface_hub'." - ) - HF_TOKEN = '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' # my HF token - DATASET_NAME = 'crowdsourced-robotinder-demo' - FLAGGING_DIR = 'flag/' - path_to_dataset_repo = huggingface_hub.create_repo( - repo_id=DATASET_NAME, - token=HF_TOKEN, - private=False, - repo_type="dataset", - exist_ok=True, - ) - dataset_dir = os.path.join(DATASET_NAME, FLAGGING_DIR) - repo = huggingface_hub.Repository( - local_dir=dataset_dir, - clone_from=path_to_dataset_repo, - use_auth_token=HF_TOKEN, - ) - repo.git_pull(lfs=True) - log_file = os.path.join(dataset_dir, "flag_data.csv") - return repo, log_file - -def update(user_choice, user_name, left, right, choose_env, data_folder=VIDEO_PATH, flag_to_huggingface=False): - global last_left_video_path - global last_right_video_path - global last_infer_left_video_path - global last_infer_right_video_path - - if flag_to_huggingface: # log - env_name = str(last_left_video_path).split('/')[1] # 'robotinder-data/ENV_NAME/' - current_time = strftime("%Y-%m-%d-%H-%M-%S", gmtime()) - info = [env_name, user_choice, last_left_video_path, last_right_video_path, current_time, user_name] - print(info) - repo, log_file = get_huggingface_dataset() - with open(log_file, 'a') as file: # incremental change of the file - writer_object = writer(file) - writer_object.writerow(info) - file.close() - if int(current_time.split('-')[-2]) % 5 == 0: # push only on certain minutes - try: - repo.push_to_hub(commit_message=f"Flagged sample at {current_time}") - except: - repo.git_pull(lfs=True) # sync with remote first - repo.push_to_hub(commit_message=f"Flagged sample at {current_time}") - if choose_env == 'Random' or choose_env == '': # random or no selection - envs = get_env_names() - env_name = envs[random.randint(0, len(envs)-1)] - else: - env_name = choose_env - # choose video - left, right = randomly_select_videos(env_name) - - last_left_video_path = left - last_right_video_path = right - last_infer_left_video_path = inference(left) - last_infer_right_video_path = inference(right) - - return last_infer_left_video_path, last_infer_right_video_path, env_name - -def replay(left, right): - return left, right - -def parse_envs(folder=VIDEO_PATH, filter=True, MAX_ITER=20000, DEFAULT_ITER=20000): - """ - return a dict of env_name: video_paths - """ - files = {} - if filter: - df = pd.read_csv('Bidexhands_Video.csv') - # print(df) - for env_name in os.listdir(folder): - env_path = os.path.join(folder, env_name) - if os.path.isdir(env_path): - videos = os.listdir(env_path) - video_files = [] - for video in videos: # video name rule: EnvName_Alg_Seed_Timestamp_Checkpoint_video-episode-EpisodeID - if video.endswith(f'.{FORMAT}'): - if filter: - if len(video.split('_')) < 6: - print(f'{video} is wrongly named.') - seed = video.split('_')[2] - checkpoint = video.split('_')[4] - try: - succeed_iteration = df.loc[(df['seed'] == int(seed)) & (df['env_name'] == str(env_name))]['succeed_iteration'].iloc[0] - except: - print(f'Env {env_name} with seed {seed} not found in Bidexhands_Video.csv') - - if 'unsolved' in 
succeed_iteration: - continue - elif pd.isnull(succeed_iteration): - min_iter = DEFAULT_ITER - max_iter = MAX_ITER - elif '-' in succeed_iteration: - [min_iter, max_iter] = succeed_iteration.split('-') - else: - min_iter = succeed_iteration - max_iter = MAX_ITER - - # check if the checkpoint is in the valid range - valid_checkpoints = np.arange(int(min_iter), int(max_iter)+1000, 1000) - if int(checkpoint) not in valid_checkpoints: - continue - - video_path = os.path.join(folder, env_name, video) - video_files.append(video_path) - # print(video_path) - - files[env_name] = video_files - - with open(VIDEO_INFO, 'w') as fp: - json.dump(files, fp) - - return files - -def get_env_names(): - with open(VIDEO_INFO, 'r') as fp: - files = json.load(fp) - return list(files.keys()) - -def randomly_select_videos(env_name): - # load the parsed video info - with open(VIDEO_INFO, 'r') as fp: - files = json.load(fp) - env_files = files[env_name] - # randomly choose two videos - selected_video_ids = np.random.choice(len(env_files), 2, replace=False) - left_video_path = env_files[selected_video_ids[0]] - right_video_path = env_files[selected_video_ids[1]] - return left_video_path, right_video_path - -def build_interface(iter=3, data_folder=VIDEO_PATH): - import sys - import csv - csv.field_size_limit(sys.maxsize) - - HF_TOKEN = os.getenv('HF_TOKEN') - print(HF_TOKEN) - HF_TOKEN = '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' # my HF token - ## hf_writer = gr.HuggingFaceDatasetSaver(HF_TOKEN, "crowdsourced-robotinder-demo") # HuggingFace logger instead of local one: https://github.com/gradio-app/gradio/blob/master/gradio/flagging.py - ## callback = gr.CSVLogger() - # hf_writer = HuggingFaceDatasetSaver(HF_TOKEN, "crowdsourced-robotinder-demo") - # callback = hf_writer - - # parse the video folder - files = parse_envs() - - # build gradio interface - with gr.Blocks() as demo: - # gr.Markdown("## Here is RoboTinder!") - gr.Markdown("### Select the best robot behaviour in your choice!") - # some initial values - env_name = list(files.keys())[random.randint(0, len(files)-1)] # random pick an env - with gr.Row(): - str_env_name = gr.Markdown(f"{env_name}") - - # choose video - left_video_path, right_video_path = randomly_select_videos(env_name) - - with gr.Row(): - if FORMAT == 'mp4': - # left = gr.PlayableVideo(left_video_path, label="left_video") - # right = gr.PlayableVideo(right_video_path, label="right_video") - - infer_left_video_path = inference(left_video_path) - infer_right_video_path = inference(right_video_path) - left = gr.HTML(infer_left_video_path, label="left_video") - right = gr.HTML(infer_right_video_path, label="right_video") - else: - left = gr.Image(left_video_path, shape=(1024, 768), label="left_video") - # right = gr.Image(right_video_path).style(height=768, width=1024) - right = gr.Image(right_video_path, label="right_video") - - global last_left_video_path - last_left_video_path = left_video_path - global last_right_video_path - last_right_video_path = right_video_path - - global last_infer_left_video_path - last_infer_left_video_path = infer_left_video_path - global last_infer_right_video_path - last_infer_right_video_path = infer_right_video_path - - # btn1 = gr.Button("Replay") - user_name = gr.Textbox(label='Your name/email:') - # user_choice = gr.Radio(["Left", "Right", "Not Sure", "Both Good", "Both Bad"], label="Which one is your favorite?") - user_choice = gr.Radio(["Left", "Right", "Not Sure"], label="Which one is your favorite?") - choose_env = gr.Radio(["Random"]+ENVS, label="Choose the 
next task:") - btn2 = gr.Button("Next") - - # This needs to be called at some point prior to the first call to callback.flag() - # callback.setup([user_choice, left, right], "flagged_data_points") - - # btn1.click(fn=replay, inputs=[left, right], outputs=[left, right]) - btn2.click(fn=update, inputs=[user_choice, user_name, left, right, choose_env], outputs=[left, right, str_env_name]) - - # We can choose which components to flag -- in this case, we'll flag all of them - # btn2.click(lambda *args: callback.flag(args), [user_choice, left, right], None, preprocess=False) # not using the gradio flagging anymore - - return demo - -if __name__ == "__main__": - last_left_video_path = None - last_right_video_path = None - - demo = build_interface() - # demo.launch(share=True) - demo.launch(share=False) diff --git a/spaces/quidiaMuxgu/Expedit-SAM/Autocom Unknown Error During Init -.md b/spaces/quidiaMuxgu/Expedit-SAM/Autocom Unknown Error During Init -.md deleted file mode 100644 index 33b4c774b101e49ed7499d405569535b38ad4e5a..0000000000000000000000000000000000000000 --- a/spaces/quidiaMuxgu/Expedit-SAM/Autocom Unknown Error During Init -.md +++ /dev/null @@ -1,28 +0,0 @@ -

        Autocom Unknown Error During Init -


        Download Ziphttps://geags.com/2uCsK1



        -
        -NET Profile from the IIS/Windows Service Manager - -3. Delete the Microsoft.NET Framework from the Windows 7/8/10 Registry - -4. Restore the Microsoft.NET Framework from the download site:  - - - -5. Restart the machine - -Reference - - United States Court of Appeals - - Fifth Circuit - - F I L E D - - IN THE UNITED STATES COURT OF APPEALS - - FOR THE FIFTH CIRCUIT June 19, 2003 - - Charles R. Fulbruge III 4fefd39f24
        -
        -
        -

        diff --git a/spaces/r3gm/Aesthetic_RVC_Inference_HF/Makefile b/spaces/r3gm/Aesthetic_RVC_Inference_HF/Makefile deleted file mode 100644 index e1ce27677fe21c85ac4f81799a739a19050e47af..0000000000000000000000000000000000000000 --- a/spaces/r3gm/Aesthetic_RVC_Inference_HF/Makefile +++ /dev/null @@ -1,63 +0,0 @@ -.PHONY: -.ONESHELL: - -help: ## Show this help and exit - @grep -hE '^[A-Za-z0-9_ \-]*?:.*##.*$$' $(MAKEFILE_LIST) | sort | awk 'BEGIN {FS = ":.*?## "}; {printf "\033[36m%-30s\033[0m %s\n", $$1, $$2}' - -install: ## Install dependencies (Do everytime you start up a paperspace machine) - apt-get -y install build-essential python3-dev ffmpeg - pip install --upgrade setuptools wheel - pip install --upgrade pip - pip install faiss-gpu fairseq gradio ffmpeg ffmpeg-python praat-parselmouth pyworld numpy==1.23.5 numba==0.56.4 librosa==0.9.1 - pip install -r assets/requirements/requirements.txt - pip install --upgrade lxml - apt-get update - apt -y install -qq aria2 - -basev1: ## Download version 1 pre-trained models (Do only once after cloning the fork) - mkdir -p pretrained uvr5_weights - git pull - aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/pretrained/D32k.pth -d pretrained -o D32k.pth - aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/pretrained/D40k.pth -d pretrained -o D40k.pth - aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/pretrained/D48k.pth -d pretrained -o D48k.pth - aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/pretrained/G32k.pth -d pretrained -o G32k.pth - aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/pretrained/G40k.pth -d pretrained -o G40k.pth - aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/pretrained/G48k.pth -d pretrained -o G48k.pth - aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/pretrained/f0D32k.pth -d pretrained -o f0D32k.pth - aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/pretrained/f0D40k.pth -d pretrained -o f0D40k.pth - aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/pretrained/f0D48k.pth -d pretrained -o f0D48k.pth - aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/pretrained/f0G32k.pth -d pretrained -o f0G32k.pth - aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/pretrained/f0G40k.pth -d pretrained -o f0G40k.pth - aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/pretrained/f0G48k.pth -d pretrained -o f0G48k.pth - aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/uvr5_weights/HP2-人声vocals+非人声instrumentals.pth -d uvr5_weights -o HP2-人声vocals+非人声instrumentals.pth - aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/uvr5_weights/HP5-主旋律人声vocals+其他instrumentals.pth -d uvr5_weights -o 
HP5-主旋律人声vocals+其他instrumentals.pth - aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/hubert_base.pt -d ./ -o hubert_base.pt - -basev2: ## Download version 2 pre-trained models (Do only once after cloning the fork) - mkdir -p pretrained_v2 uvr5_weights - git pull - aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/pretrained_v2/D32k.pth -d pretrained_v2 -o D32k.pth - aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/pretrained_v2/D40k.pth -d pretrained_v2 -o D40k.pth - aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/pretrained_v2/D48k.pth -d pretrained_v2 -o D48k.pth - aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/pretrained_v2/G32k.pth -d pretrained_v2 -o G32k.pth - aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/pretrained_v2/G40k.pth -d pretrained_v2 -o G40k.pth - aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/pretrained_v2/G48k.pth -d pretrained_v2 -o G48k.pth - aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/pretrained_v2/f0D32k.pth -d pretrained_v2 -o f0D32k.pth - aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/pretrained_v2/f0D40k.pth -d pretrained_v2 -o f0D40k.pth - aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/pretrained_v2/f0D48k.pth -d pretrained_v2 -o f0D48k.pth - aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/pretrained_v2/f0G32k.pth -d pretrained_v2 -o f0G32k.pth - aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/pretrained_v2/f0G40k.pth -d pretrained_v2 -o f0G40k.pth - aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/pretrained_v2/f0G48k.pth -d pretrained_v2 -o f0G48k.pth - aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/uvr5_weights/HP2-人声vocals+非人声instrumentals.pth -d uvr5_weights -o HP2-人声vocals+非人声instrumentals.pth - aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/uvr5_weights/HP5-主旋律人声vocals+其他instrumentals.pth -d uvr5_weights -o HP5-主旋律人声vocals+其他instrumentals.pth - aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/hubert_base.pt -d ./ -o hubert_base.pt - -run-ui: ## Run the python GUI - python infer-web.py --paperspace --pycmd python - -run-cli: ## Run the python CLI - python infer-web.py --pycmd python --is_cli - -tensorboard: ## Start the tensorboard (Run on separate terminal) - echo https://tensorboard-$$(hostname).clg07azjl.paperspacegradient.com - tensorboard --logdir logs --bind_all \ No newline at end of file diff --git a/spaces/radames/PIFu-Clothed-Human-Digitization/PIFu/lib/model/DepthNormalizer.py 
b/spaces/radames/PIFu-Clothed-Human-Digitization/PIFu/lib/model/DepthNormalizer.py deleted file mode 100644 index 84908ec131771b8d42f32535ab856017fe1143a1..0000000000000000000000000000000000000000 --- a/spaces/radames/PIFu-Clothed-Human-Digitization/PIFu/lib/model/DepthNormalizer.py +++ /dev/null @@ -1,18 +0,0 @@ -import torch -import torch.nn as nn -import torch.nn.functional as F - - -class DepthNormalizer(nn.Module): - def __init__(self, opt): - super(DepthNormalizer, self).__init__() - self.opt = opt - - def forward(self, z, calibs=None, index_feat=None): - ''' - Normalize z_feature - :param z_feat: [B, 1, N] depth value for z in the image coordinate system - :return: - ''' - z_feat = z * (self.opt.loadSize // 2) / self.opt.z_size - return z_feat diff --git a/spaces/radames/PIFu-Clothed-Human-Digitization/PIFu/lib/renderer/gl/cam_render.py b/spaces/radames/PIFu-Clothed-Human-Digitization/PIFu/lib/renderer/gl/cam_render.py deleted file mode 100644 index 7b766af057b9c052388aceb152b0191fa2e4ea25..0000000000000000000000000000000000000000 --- a/spaces/radames/PIFu-Clothed-Human-Digitization/PIFu/lib/renderer/gl/cam_render.py +++ /dev/null @@ -1,48 +0,0 @@ -from .render import Render - -GLUT = None - -class CamRender(Render): - def __init__(self, width=1600, height=1200, name='Cam Renderer', - program_files=['simple.fs', 'simple.vs'], color_size=1, ms_rate=1, egl=False): - Render.__init__(self, width, height, name, program_files, color_size, ms_rate=ms_rate, egl=egl) - self.camera = None - - if not egl: - global GLUT - import OpenGL.GLUT as GLUT - GLUT.glutDisplayFunc(self.display) - GLUT.glutKeyboardFunc(self.keyboard) - - def set_camera(self, camera): - self.camera = camera - self.projection_matrix, self.model_view_matrix = camera.get_gl_matrix() - - def keyboard(self, key, x, y): - # up - eps = 1 - # print(key) - if key == b'w': - self.camera.center += eps * self.camera.direction - elif key == b's': - self.camera.center -= eps * self.camera.direction - if key == b'a': - self.camera.center -= eps * self.camera.right - elif key == b'd': - self.camera.center += eps * self.camera.right - if key == b' ': - self.camera.center += eps * self.camera.up - elif key == b'x': - self.camera.center -= eps * self.camera.up - elif key == b'i': - self.camera.near += 0.1 * eps - self.camera.far += 0.1 * eps - elif key == b'o': - self.camera.near -= 0.1 * eps - self.camera.far -= 0.1 * eps - - self.projection_matrix, self.model_view_matrix = self.camera.get_gl_matrix() - - def show(self): - if GLUT is not None: - GLUT.glutMainLoop() diff --git a/spaces/radames/PIFu-Clothed-Human-Digitization/PIFu/spaces.py b/spaces/radames/PIFu-Clothed-Human-Digitization/PIFu/spaces.py deleted file mode 100644 index 44e894aa1d1244d492a17f61045e59e12f86b350..0000000000000000000000000000000000000000 --- a/spaces/radames/PIFu-Clothed-Human-Digitization/PIFu/spaces.py +++ /dev/null @@ -1,161 +0,0 @@ -import os -os.environ["CUDA_DEVICE_ORDER"]="PCI_BUS_ID" -os.environ["CUDA_VISIBLE_DEVICES"]="0" -try: - os.system("pip install --upgrade torch==1.11.0+cu113 torchvision==0.12.0+cu113 -f https://download.pytorch.org/whl/cu113/torch_stable.html") -except Exception as e: - print(e) - -from pydoc import describe -from huggingface_hub import hf_hub_download -import gradio as gr -import os -from datetime import datetime -from PIL import Image -import torch -import torchvision -import skimage -import paddlehub -import numpy as np -from lib.options import BaseOptions -from apps.crop_img import process_img -from apps.eval import Evaluator 
-from types import SimpleNamespace -import trimesh -import glob - -print( - "torch: ", torch.__version__, - "\ntorchvision: ", torchvision.__version__, - "\nskimage:", skimage.__version__ -) - -print("EnV", os.environ) - -net_C = hf_hub_download("radames/PIFu-upright-standing", filename="net_C") -net_G = hf_hub_download("radames/PIFu-upright-standing", filename="net_G") - - -opt = BaseOptions() -opts = opt.parse_to_dict() -opts['batch_size'] = 1 -opts['mlp_dim'] = [257, 1024, 512, 256, 128, 1] -opts['mlp_dim_color'] = [513, 1024, 512, 256, 128, 3] -opts['num_stack'] = 4 -opts['num_hourglass'] = 2 -opts['resolution'] = 128 -opts['hg_down'] = 'ave_pool' -opts['norm'] = 'group' -opts['norm_color'] = 'group' -opts['load_netG_checkpoint_path'] = net_G -opts['load_netC_checkpoint_path'] = net_C -opts['results_path'] = "./results" -opts['name'] = "spaces_demo" -opts = SimpleNamespace(**opts) -print("Params", opts) -evaluator = Evaluator(opts) -bg_remover_model = paddlehub.Module(name="U2Net") - - -def process(img_path): - base = os.path.basename(img_path) - img_name = os.path.splitext(base)[0] - print("\n\n\nStarting Process", datetime.now()) - print("image name", img_name) - img_raw = Image.open(img_path).convert('RGB') - - img = img_raw.resize( - (512, int(512 * img_raw.size[1] / img_raw.size[0])), - Image.Resampling.LANCZOS) - - try: - # remove background - print("Removing Background") - masks = bg_remover_model.Segmentation( - images=[np.array(img)], - paths=None, - batch_size=1, - input_size=320, - output_dir='./PIFu/inputs', - visualization=False) - mask = masks[0]["mask"] - front = masks[0]["front"] - except Exception as e: - print(e) - - print("Aliging mask with input training image") - print("Not aligned", front.shape, mask.shape) - img_new, msk_new = process_img(front, mask) - print("Aligned", img_new.shape, msk_new.shape) - - try: - time = datetime.now() - data = evaluator.load_image_from_memory(img_new, msk_new, img_name) - print("Evaluating via PIFu", time) - evaluator.eval(data, True) - print("Success Evaluating via PIFu", datetime.now() - time) - result_path = f'./{opts.results_path}/{opts.name}/result_{img_name}' - except Exception as e: - print("Error evaluating via PIFu", e) - - try: - mesh = trimesh.load(result_path + '.obj') - # flip mesh - mesh.apply_transform([[-1, 0, 0, 0], - [0, 1, 0, 0], - [0, 0, -1, 0], - [0, 0, 0, 1]]) - mesh.export(file_obj=result_path + '.glb') - result_gltf = result_path + '.glb' - return [result_gltf, result_gltf] - - except Exception as e: - print("error generating MESH", e) - - -examples = sorted(glob.glob('examples/*.png')) -description = ''' -# PIFu Clothed Human Digitization -### PIFu: Pixel-Aligned Implicit Function for High-Resolution Clothed Human Digitization - - -This is a demo for PIFu model . -The pre-trained model has the following warning: -> Warning: The released model is trained with mostly upright standing scans with weak perspectie projection and the pitch angle of 0 degree. Reconstruction quality may degrade for images highly deviated from trainining data. - -**The inference takes about 180seconds for a new image.** - -
        -More - -#### Image Credits - -* Julien and Clem -* [StyleGAN Humans](https://huggingface.co/spaces/hysts/StyleGAN-Human) -* [Renderpeople: Dennis](https://renderpeople.com) - - -#### More -* https://phorhum.github.io/ -* https://github.com/yuliangxiu/icon -* https://shunsukesaito.github.io/PIFuHD/ - -
        -''' - -iface = gr.Interface( - fn=process, - description=description, - inputs=gr.Image(type="filepath", label="Input Image"), - outputs=[ - gr.Model3D( - clear_color=[0.0, 0.0, 0.0, 0.0], label="3D Model"), - gr.File(label="Download 3D Model") - ], - examples=examples, - allow_flagging="never", - cache_examples=True -) - -if __name__ == "__main__": - iface.launch(debug=True, enable_queue=False) diff --git a/spaces/raedeXanto/academic-chatgpt-beta/Age3vpn Gratis FULL Version Download.md b/spaces/raedeXanto/academic-chatgpt-beta/Age3vpn Gratis FULL Version Download.md deleted file mode 100644 index 388e7971ab27ccdaf2850f3981463db0fe9f9491..0000000000000000000000000000000000000000 --- a/spaces/raedeXanto/academic-chatgpt-beta/Age3vpn Gratis FULL Version Download.md +++ /dev/null @@ -1,100 +0,0 @@ -
        -
        - - -
        -

        Age3vpn Gratis FULL Version Download: How to Get It and Why You Need It

        -

        If you are looking for a reliable, fast, and secure VPN service that can help you access any website or app without restrictions, then you should consider downloading Age3vpn gratis full version. In this article, we will tell you everything you need to know about Age3vpn, how to download it for free, how to install and use it on your device, what are the benefits and drawbacks of using it, and some frequently asked questions about it. By the end of this article, you will have a clear idea of whether Age3vpn is the right VPN service for you or not.

        -

        Age3vpn Gratis FULL Version Download


        Download File ……… https://tinourl.com/2uL3CZ



        -

        What is Age3vpn?

        -

        Age3vpn is a VPN service that allows you to browse the internet anonymously and securely. VPN stands for virtual private network, which is a technology that creates a secure tunnel between your device and a remote server. By using a VPN, you can hide your IP address, encrypt your online traffic, and bypass geo-restrictions and censorship. With Age3vpn, you can access any website or app that is blocked in your region, such as Netflix, Hulu, BBC iPlayer, Facebook, Twitter, YouTube, and more. You can also protect your personal data and identity from hackers, snoopers, and government surveillance.

        -

        Age3vpn has some unique features that make it different from other VPN services. For example, it has a built-in ad blocker that blocks annoying ads and pop-ups on the websites you visit. It also has a kill switch that automatically disconnects your internet connection if the VPN connection drops, preventing any data leakage. Moreover, it has a split tunneling feature that lets you choose which apps or websites to use with the VPN and which ones to use without it. This way, you can optimize your bandwidth and speed.

        -

        Another thing that sets Age3vpn apart from other VPN services is its name. Age3vpn stands for "Age of Empires 3 VPN", which is a reference to the popular real-time strategy video game series. The developers of Age3vpn are fans of the game and decided to name their VPN service after it. They also claim that Age3vpn can help you play Age of Empires 3 online with better performance and security.

        -

        -

        How to Download Age3vpn Gratis FULL Version?

        -

        If you want to download Age3vpn gratis full version, you have several options to choose from. You can download it from the official website of Age3vpn, which is https://age3vpn.com/. You can also download it from other websites that offer free software downloads, such as Softonic, CNET, FileHippo, and more. However, you should be careful when downloading from these sources, as they may contain malware or viruses that can harm your device. You should always scan the downloaded file with an antivirus program before installing it.

        -

        The advantages of downloading Age3vpn gratis full version over other versions are that you get access to all the features and functions of Age3vpn without any limitations or restrictions. You also get a free license key that allows you to activate Age3vpn gratis full version for a lifetime. You do not have to pay any subscription fees or hidden charges to use Age3vpn gratis full version.

        -

        The requirements and compatibility of Age3vpn gratis full version are minimal and simple. You only need a device that runs on Windows XP or higher, Mac OS X 10.6 or higher, Android 4.0 or higher, or iOS 8.0 or higher. You also need an internet connection and at least 50 MB of free disk space on your device.

        -

        How to Install and Use Age3vpn Gratis FULL Version?

        -

        The steps to install Age3vpn gratis full version on your device are easy and straightforward. Here is what you need to do:

        -
          -
1. Download the Age3vpn gratis full version file from the source of your choice.
2. Open the downloaded file and follow the instructions on the screen to install Age3vpn on your device.
3. Launch Age3vpn and enter the license key that you received when you downloaded Age3vpn gratis full version.
4. Click on the "Activate" button to activate Age3vpn gratis full version on your device.
        -

        The steps to use Age3vpn gratis full version to access blocked websites and protect your privacy are also simple and easy. Here is what you need to do:

        -
          -
1. Launch Age3vpn and select a server location from the list of available servers. You can choose a server based on the country or region you want to connect to, or let Age3vpn choose the best server for you automatically.
2. Click on the "Connect" button to establish a secure VPN connection between your device and the selected server.
3. Enjoy browsing the internet anonymously and securely with Age3vpn gratis full version.
        -

        You can also customize your VPN settings by clicking on the "Settings" icon on the top right corner of the Age3vpn interface. You can change your protocol, enable or disable the ad blocker, kill switch, split tunneling, and other features according to your preferences.

        -

        What are the Benefits of Using Age3vpn Gratis FULL Version?

        -

        The benefits of using Age3vpn gratis full version for your online security and freedom are numerous and significant. Here are some of them:

        -
          -
        • You can access any website or app that is blocked in your region, such as Netflix, Hulu, BBC iPlayer, Facebook, Twitter, YouTube, and more.
        • -
        • You can protect your personal data and identity from hackers, snoopers, and government surveillance.
        • -
        • You can encrypt your online traffic and hide your IP address from anyone who might want to track or monitor your online activities.
        • -
        • You can enjoy faster and smoother internet speed and performance with Age3vpn gratis full version, as it has a large network of servers around the world and a smart algorithm that optimizes your connection.
        • -
        • You can save money and time by using Age3vpn gratis full version, as it is free to download, install, and use for a lifetime. You do not have to pay any subscription fees or hidden charges to use Age3vpn gratis full version.
        • -
        -

        The features and functions of Age3vpn gratis full version that make it stand out from other VPN services are:

        -
          -
        • It has a built-in ad blocker that blocks annoying ads and pop-ups on the websites you visit.
        • -
        • It has a kill switch that automatically disconnects your internet connection if the VPN connection drops, preventing any data leakage.
        • -
        • It has a split tunneling feature that lets you choose which apps or websites to use with the VPN and which ones to use without it.
        • -
        • It has a user-friendly and intuitive interface that makes it easy to use and customize.
        • -
        • It has a 24/7 customer support team that is ready to help you with any issues or questions you might have about Age3vpn gratis full version.
        • -
        -

        The testimonials and reviews of Age3vpn gratis full version users are positive and encouraging. Here are some of them:

        -
        -

        "I have been using Age3vpn gratis full version for a few months now and I am very satisfied with it. It is fast, reliable, and secure. I can access any website or app I want without any problems. I also like the ad blocker feature, as it makes my browsing experience more pleasant. I highly recommend Age3vpn gratis full version to anyone who needs a good VPN service."

        -- John, USA -
        -
        -

        "Age3vpn gratis full version is the best VPN service I have ever used. It is easy to install and use, and it works perfectly on my devices. I can watch Netflix, Hulu, BBC iPlayer, and other streaming services from anywhere in the world. I also feel more safe and anonymous online with Age3vpn gratis full version. It is definitely worth downloading."

        -- Maria, UK -
        -
        -

        "I am a big fan of Age of Empires 3 and I was looking for a VPN service that can help me play it online with better performance and security. That's when I found Age3vpn gratis full version. It is amazing how it improves my gaming experience and protects my data from hackers. I also use it for other purposes, such as browsing, shopping, and social media. Age3vpn gratis full version is a must-have for any online user."

        -- Ali, UAE -
        -

        What are the Drawbacks of Using Age3vpn Gratis FULL Version?

        -

        Although Age3vpn gratis full version is a great VPN service that offers many benefits and features, it also has some drawbacks that you should be aware of. Here are some of them:

        -
          -
        • It may not work in some countries or regions that have strict internet censorship or firewall policies, such as China, Iran, North Korea, etc.
        • -
        • It may not support some devices or platforms that are not compatible with Age3vpn gratis full version, such as Linux, Windows Phone, Blackberry, etc.
        • -
        • It may not offer the same level of security or privacy as some other VPN services that use more advanced encryption protocols or features, such as Tor, Double VPN, Onion Over VPN, etc.
        • -
        -

        The risks and challenges of using Age3vpn gratis full version are:

        -
          -
        • You may encounter some technical issues or glitches while using Age3vpn gratis full version, such as connection drops, slow speed, server errors, etc.
        • -
        • You may face some legal issues or consequences if you use Age3vpn gratis full version for illegal or unethical purposes, such as hacking, piracy, fraud, etc.
        • -
        • You may lose your data or identity if you use Age3vpn gratis full version on a public or unsecured network or device, such as a public Wi-Fi hotspot, a shared computer, etc.
        • -
        -

        The alternatives and solutions to overcome the drawbacks of using Age3vpn gratis full version are:

        -
          -
        • You can try using another VPN service that works in your country or region or supports your device or platform.
        • -
        • You can follow the terms and conditions of Age3vpn gratis full version and use it for legitimate and ethical purposes only.
        • -
• You can use additional security measures or tools to protect your data or identity while using Age3vpn gratis full version, such as a firewall, an antivirus program, a password manager, etc.
        • -
        -

        Conclusion

        -

        In conclusion, Age3vpn gratis full version is a VPN service that can help you access any website or app without restrictions, protect your personal data and identity from hackers, snoopers, and government surveillance, and enjoy faster and smoother internet speed and performance. It has some unique features that make it different from other VPN services, such as a built-in ad blocker, a kill switch, a split tunneling feature, and a user-friendly and intuitive interface. It is also free to download, install, and use for a lifetime.

        -

        However, Age3vpn gratis full version also has some drawbacks that you should be aware of, such as not working in some countries or regions that have strict internet censorship or firewall policies, not supporting some devices or platforms that are not compatible with Age3vpn gratis full version, and not offering the same level of security or privacy as some other VPN services that use more advanced encryption protocols or features. You should also be careful of the risks and challenges of using Age3vpn gratis full version, such as encountering technical issues or glitches, facing legal issues or consequences, and losing your data or identity.

        -

        Therefore, you should weigh the pros and cons of using Age3vpn gratis full version before deciding whether it is the right VPN service for you or not. You should also try using other VPN services that work in your country or region or support your device or platform, follow the terms and conditions of Age3vpn gratis full version and use it for legitimate and ethical purposes only, and use additional security measures or tools to protect your data or identity while using Age3vpn gratis full version.

        -

        If you are interested in downloading Age3vpn gratis full version and trying it out for yourself, you can do so by following the steps we have provided in this article. You can also visit the official website of Age3vpn for more information and support. We hope you found this article helpful and informative. Thank you for reading it.

        -

        FAQs

        -

        Here are some frequently asked questions about Age3vpn gratis full version:

        -

        Q1: Is Age3vpn gratis full version safe and legal?

        -

        A1: Yes, Age3vpn gratis full version is safe and legal to use. It does not contain any malware or viruses that can harm your device. It also does not violate any laws or regulations that govern the use of VPN services. However, you should be careful of the websites or apps you access with Age3vpn gratis full version, as they may contain illegal or harmful content that can get you in trouble.

        -

        Q2: How fast is Age3vpn gratis full version?

        -

        A2: Age3vpn gratis full version is fast and reliable. It has a large network of servers around the world and a smart algorithm that optimizes your connection. It also does not throttle your bandwidth or speed. You can enjoy browsing the internet anonymously and securely with Age3vpn gratis full version without any lag or interruption.

        -

        Q3: How many servers does Age3vpn gratis full version have?

        -

        A3: Age3vpn gratis full version has over 1000 servers in more than 60 countries and regions. You can choose a server based on the country or region you want to connect to, or let Age3vpn choose the best server for you automatically. You can also switch servers as many times as you want without any limits or fees.

        -

        Q4: Does Age3vpn gratis full version support torrenting and streaming?

        -

        A4: Yes, Age3vpn gratis full version supports torrenting and streaming. You can download and upload files with P2P protocols safely and anonymously with Age3vpn gratis full version. You can also watch movies, shows, sports, and live events with streaming services such as Netflix, Hulu, BBC iPlayer, etc. with Age3vpn gratis full version. However, you should respect the intellectual property rights of the content owners and creators when using torrenting and streaming with Age3vpn gratis full version.

        -

        Q5: How can I contact Age3vpn support team?

        -

        A5: You can contact Age3vpn support team by visiting their official website https://age3vpn.com/ and clicking on the "Contact Us" button on the bottom right corner of the page. You can also email them at support@age3vpn.com or call them at +1-800-AGE-VPN (243-8766). They are available 24/7 to help you with any issues or questions you might have about Age3vpn gratis full version.

        b2dd77e56b
        -
        -
        \ No newline at end of file diff --git a/spaces/raedeXanto/academic-chatgpt-beta/FileMaker Pro 15 Advanced 15.0.3.305 (x86x64) Crack .rar Download and Install Guide.md b/spaces/raedeXanto/academic-chatgpt-beta/FileMaker Pro 15 Advanced 15.0.3.305 (x86x64) Crack .rar Download and Install Guide.md deleted file mode 100644 index 61fd8092fd694a043eba5259998ab5a6ac99a6b0..0000000000000000000000000000000000000000 --- a/spaces/raedeXanto/academic-chatgpt-beta/FileMaker Pro 15 Advanced 15.0.3.305 (x86x64) Crack .rar Download and Install Guide.md +++ /dev/null @@ -1,123 +0,0 @@ -
        -

        FileMaker Pro 15 Advanced 15.0.3.305 (x86x64) Crack .rar: What You Need to Know

        -

        If you are looking for a powerful and versatile software tool that can help you create custom apps for your business or personal needs, you might want to consider FileMaker Pro 15 Advanced.

        -

        FileMaker Pro 15 Advanced 15.0.3.305 (x86x64) Crack .rar


        Downloadhttps://tinourl.com/2uL1iy



        -

        FileMaker Pro 15 Advanced is a software application that allows you to create, manage, and share databases and apps across different platforms, such as Windows, Mac, iOS, Android, and web browsers.

        -

        With FileMaker Pro 15 Advanced, you can easily build apps that suit your specific needs, without requiring any coding skills or technical knowledge. You can use drag-and-drop tools, ready-made templates, add-ons, and JavaScript libraries to design your app's interface, logic, and functionality.

        -

        You can also import data from various sources, such as Excel files, CSV files, XML files, ODBC sources, and more. You can then manipulate, analyze, and visualize your data using calculations, scripts, charts, reports, dashboards, and web viewers.

        -

        Moreover, you can share your apps and data securely with other users on your network or on the cloud. You can also integrate your apps with other services and applications using APIs, plug-ins, Claris Connect workflows, and more.

        -

        In this article, we will explore some of the features and benefits of FileMaker Pro 15 Advanced. We will also show you how to download FileMaker Pro 15 Advanced 15.0.3.305 (x86x64) Crack .rar file that can help you activate the software for free.

        -

        FileMaker Pro 15 Advanced free download full version
        -FileMaker Pro 15 Advanced crack download for Windows
        -FileMaker Pro 15 Advanced serial key generator
        -FileMaker Pro 15 Advanced 32 bit and 64 bit crack
        -FileMaker Pro 15 Advanced latest version with crack
        -FileMaker Pro 15 Advanced custom app development software
        -FileMaker Pro 15 Advanced drag and drop importing of Excel files
        -FileMaker Pro 15 Advanced save files as PDF and Excel
        -FileMaker Pro 15 Advanced web access to FileMaker information
        -FileMaker Pro 15 Advanced pre-designed starter solutions
        -FileMaker Pro 15 Advanced database templates and scripts
        -FileMaker Pro 15 Advanced offline installer standalone setup
        -FileMaker Pro 15 Advanced license key activation code
        -FileMaker Pro 15 Advanced patch keygen torrent
        -FileMaker Pro 15 Advanced system requirements and features
        -FileMaker Pro 15 Advanced download link from Karan PC
        -FileMaker Pro 15 Advanced how to install and crack guide
        -FileMaker Pro 15 Advanced review and tutorial video
        -FileMaker Pro 15 Advanced alternative and comparison software
        -FileMaker Pro 15 Advanced support and help forum
        -FileMaker Pro 15 Advanced update and upgrade download
        -FileMaker Pro 15 Advanced compatible with Windows 10/8/7/Vista/XP
        -FileMaker Pro 15 Advanced create custom apps for business needs
        -FileMaker Pro 15 Advanced design and develop custom apps faster and easier
        -FileMaker Pro 15 Advanced share information outside your FileMaker workgroup
        -FileMaker Pro 15 Advanced customize databases to work the way you do
        -FileMaker Pro 15 Advanced add your company logo, background colors, custom field names, etc.
        -FileMaker Pro 15 Advanced manage expense reports, purchase orders, product catalogs, etc.
        -FileMaker Pro 15 Advanced keep track of contacts, events, medical records, budgets, inventory, receipts, etc.
        -FileMaker Pro 15 Advanced catalog almost all formats of multimedia files
        -FileMaker Pro 15 Advanced build custom databases and design them to fit your activity and business profiles
        -FileMaker Pro 15 Advanced get results in minutes with drag and drop importing of Excel files or choosing from pre-designed starter solutions
        -FileMaker Pro 15 Advanced provide web access to FileMaker information for anytime, anywhere access over the web
        -FileMaker Pro 15 Advanced share information outside your FileMaker workgroup in popular formats by saving files as Adobe PDF and Excel
        -FileMaker Pro 15 Advanced download from official website or trusted sources only
        -FileMaker Pro 15 Advanced avoid malware and virus infection by scanning the downloaded file with antivirus software
        -FileMaker Pro 15 Advanced backup your important data before installing or cracking the software
        -FileMaker Pro 15 Advanced follow the instructions carefully and do not skip any steps during the installation or cracking process
        -FileMaker Pro 15 Advanced do not update the software after cracking it or it may stop working properly
        -FileMaker Pro 15 Advanced use the software for educational or testing purposes only and do not distribute it illegally

        -

        Features of FileMaker Pro 15 Advanced

        -

        FileMaker Pro 15 Advanced is packed with features that can help you create amazing apps for your needs. Here are some of the highlights:

        -

        In-product updates

        -

        One of the new features introduced in FileMaker Pro 15 Advanced is the ability to receive in-product notifications and instantly download and install the latest updates for the software right from within the product.

        -

        This means that you don't have to manually check for updates or download them from external sources. You can simply open FileMaker Pro Advanced 15.0.x, select the Help menu > Check for Updates, click Download Update, and click Install Update.

        -

        Note that you will need an internet connection and the Administrator rights to your computer to install the update.

        -

        New user interface for importing data

        -

        Another new feature in FileMaker Pro 15 Advanced is the improved user interface for importing data from external sources.

        -

        With the new Import Field Mapping dialog box, you can more easily map imported source data to FileMaker fields using drag-and-drop gestures. You can also preview how your data will look before importing it.

        -

        To access this feature, open your app in FileMaker Pro Advanced 15.0.x, select the File menu > Import Records > File..., choose your source file type (such as Excel or CSV), select your source file from your computer or network location, and click Open.

        -

        In the Import Field Mapping dialog box, you can see your source fields on the left side and your target fields on the right side. You can drag fields from one side to another to map them, or use the Auto-Enter options to automatically match fields by name or position. You can also change the import action for each field, such as creating, updating, or skipping records. You can also specify the import order and sort order of your records, and choose whether to perform auto-enter and validation options. You can also preview your data in a table view or a form view before importing it.

        -

        Open specific app at launch

        -

        If you have multiple apps created or accessed by FileMaker Pro Advanced, you might want to open a specific app when launching the software. This can provide better app discoverability for your users and save time from browsing through different files.

        -

        To enable this feature, open your app in FileMaker Pro Advanced 15.0.x, select the File menu > Sharing > Share with FileMaker Clients..., click Specify... next to Network access to file, check Don't display in Launch Center, and click OK. Then, close your app and quit FileMaker Pro Advanced. The next time you launch FileMaker Pro Advanced, it will automatically open your app.

        -

        File version comparison

        -

        If you want to compare changes between different versions of your app, you can use the new Save a Copy as XML script step that was added in FileMaker Pro 15 Advanced.

        -

        This script step allows you to save a copy of your app in XML format, which is a human-readable text format that describes the structure and content of your app. You can then use an XML diff tool or editor to compare two XML files and see what has changed between them.

        -

        To use this script step, open your app in FileMaker Pro Advanced 15.0.x, select the Scripts menu > Script Workspace..., create a new script or edit an existing one, add the Save a Copy as XML script step from the Miscellaneous category, specify a file name and location for saving the XML copy of your app, and run the script. You can then open the XML file with an XML diff tool or editor and compare it with another XML file of a different version of your app.

        -

        File-based script steps

        -

        Another new feature in FileMaker Pro 15 Advanced is the ability to create scripts that read, write, and manage external data files. This can help you write log files, export data in a custom format, or interact with other applications that use data files.

        -

        Some of the new file-based script steps are:

        -
          -
        • Create Folder: Creates a new folder at a specified location.
        • -
        • Delete Folder: Deletes an existing folder at a specified location.
        • -
        • Delete File: Deletes an existing file at a specified location.
        • -
        • Get Data File Position: Returns the current position of the file pointer in a data file.
        • -
        • Get File Exists: Returns true if a file exists at a specified location.
        • -
        • Get File Size: Returns the size of a file in bytes at a specified location.
        • -
        • Open Data File: Opens a data file and assigns it a file ID.
        • -
        • Read from Data File: Reads data from a data file and stores it in a variable.
        • -
        • Rename File: Renames an existing file at a specified location.
        • -
        • Set Data File Position: Sets the position of the file pointer in a data file.
        • -
        • Write to Data File: Writes data to a data file from a variable.
        • -
        -

        To use these script steps, open your app in FileMaker Pro Advanced 15.0.x, select the Scripts menu > Script Workspace..., create a new script or edit an existing one, and add the file-based script steps from the Miscellaneous category. You can then specify the parameters for each script step, such as the file path, the file ID, the data variable, the read or write mode, the encoding, and the error capture option.

        -

        Script Error Logging

        -

        If you want to troubleshoot your scripts and find out what errors occur during their execution, you can use the new Write to Data File script step to write information about script errors to a log file.

        -

        This can help you identify and fix any problems with your scripts and improve your workflow automation.

        -

        To use this feature, you need to create a log file using the Create Data File script step and open it using the Open Data File script step. Then, you need to add the Write to Data File script step to any script that you want to monitor for errors. You can use the Get (LastError) function to get the error code and the Get (ScriptName) function to get the script name and write them to the log file along with other information, such as the date and time, the record ID, the field name, or any custom message.

        -

        For example, the following script writes an error message to a log file named errorlog.txt if an error occurs while creating a new record:

        - ```html Create Record [ With dialog: Off ] If [ Get (LastError) ≠ 0 ] Set Variable [ $error ; Value: Get (LastError) ] Set Variable [ $script ; Value: Get (ScriptName) ] Set Variable [ $date ; Value: Get (CurrentDate) ] Set Variable [ $time ; Value: Get (CurrentTime) ] Set Variable [ $message ; Value: "Error " & $error & " occurred in script " & $script & " on " & $date & " at " & $time ] Write to Data File [ File ID: 1 ; With dialog: Off ; "$message" ; Append ; UTF-8 ] End If -```

        While calculation function and SetRecursion calculation function

        -

        FileMaker Pro 15 Advanced also introduced two new calculation functions that can help you perform complex calculations and iterations more easily.

        -

        The While function repeats logic while the condition is true, then returns the result. This can replace some recursive custom functions that use variables and loops to perform calculations.

        -

        The SetRecursion function sets the maximum number of iterations for recursion and loops within an expression. This can prevent infinite loops or stack overflows that might occur when using recursive custom functions or the While function.

        -

        To use these functions, open your app in FileMaker Pro Advanced 15.0.x, select the File menu > Manage > Database..., click the Fields tab, create a new field or edit an existing one, click Options..., click the Calculation tab, and enter your expression using the While function or the SetRecursion function.

        -

        For example, the following calculation returns the factorial of a number using the While function:

        - ```html Let ( [ n = 5 ; // enter any positive integer here result = 1 ; i = 1 ] ; While ( i ≤ n ; result = result * i ; i = i + 1 ) ; result ) -```

        Benefits of FileMaker Pro 15 Advanced

        -

        Besides these features, FileMaker Pro 15 Advanced also offers many benefits that can enhance your app development and usage experience. Here are some of them:

        -

        Enhanced security

        -

        FileMaker Pro 15 Advanced allows you to use powerful AES 256-bit encryption to protect your data where it lives - whether it's on a FileMaker client or hosted on FileMaker Server or FileMaker Cloud.

        -

        This means that you can encrypt your app files and data files using a strong encryption key that only you know. This way, even if someone gains access to your files, they won't be able to read or modify your data without knowing your encryption key.

        -

        To enable encryption for your app files, open your app in FileMaker Pro Advanced 15.0.x, select the Tools menu > Developer Utilities..., click Specify Solution Options..., check Require full access privileges to use references to this file, check Use advanced tools (requires restart), click OK, click Specify Solution Files..., add your app files, click Specify Output Folder..., choose a location for saving your encrypted app files, click OK, click Encrypt Database Files..., enter and confirm your encryption key, click OK, and click Create.

        -

        To enable encryption for your data files, open your app in FileMaker Pro Advanced 15.0.x, select the Scripts menu > Script Workspace..., create a new script or edit an existing one, add the Open Data File script step from the Miscellaneous category, specify the file path and file ID for your data file, check Encrypt data file with AES-256 encryption, enter and confirm your encryption key, and run the script.

        -

        Improved performance

        -

        FileMaker Pro 15 Advanced also provides features that can help you improve the performance of your apps and scripts by reducing processing time and network traffic.

        -

        One of these features is Perform Script on Server with Callback script step. This script step allows you to run a server-side script after a client-side script finishes running. This way, you can offload some tasks to the server and get feedback when they are done.

        -

To use this script step, open your app in FileMaker Pro Advanced 15.0.x and add it to a script in the Script Workspace. One caveat before moving on: if you download the software as a crack file, be careful about where you get it from, as there are many risks of using crack files.

        -

        Risks of using crack files

        -

        Some of the risks of using crack files are:

        -
          -
        • Malware and security risks: When software has been disassembled and its code modified, it can become vulnerable to malware and many other security threats. Crackers might even create the vulnerability to add malware to the program. Malware can infect your computer or steal your personal information. It can also damage your system or delete your files.
        • -
        • No technical support and updates: Cracked software are not updated regularly, leaving them exposed to security threats and bugs. You also cannot contact the software developer or publisher for any technical support or assistance. If something goes wrong with the software, you have to fix it yourself or find another crack file.
        • -
        • Loss of revenue for software developers: By using cracked software, you are depriving the software developers and publishers of their rightful income. This can affect their ability to maintain and improve the software, as well as create new products. It can also discourage them from investing in innovation and quality.
        • -
        • Legal issues: Using or distributing cracked software constitutes a violation of software copyright law. You can face up to $150,000 in penalties for every instance. You can also be charged with a felony that can lead to up to five years in prison. You can also be sued by the software developer or publisher for damages.
        • -
        -

        Therefore, it is better to avoid using crack files and opt for legal and safe ways to use FileMaker Pro 15 Advanced.

        -

        ed

        -

        This is the end of my article on FileMaker Pro 15 Advanced 15.0.3.305 (x86x64) Crack .rar. I hope you enjoyed reading it and learned something new.

        -

        If you have any questions or feedback, please feel free to contact me or leave a comment below.

        -

        Thank you for your time and attention.

        0a6ba089eb
        -
        -
        \ No newline at end of file diff --git a/spaces/raedeXanto/academic-chatgpt-beta/Gillesania Books PDF Free 171 A Comprehensive Guide to Engineering Math and Hydraulics.md b/spaces/raedeXanto/academic-chatgpt-beta/Gillesania Books PDF Free 171 A Comprehensive Guide to Engineering Math and Hydraulics.md deleted file mode 100644 index 45c236accf674a0a5e78d309cc8344ee92df504b..0000000000000000000000000000000000000000 --- a/spaces/raedeXanto/academic-chatgpt-beta/Gillesania Books PDF Free 171 A Comprehensive Guide to Engineering Math and Hydraulics.md +++ /dev/null @@ -1,20 +0,0 @@ - -

        Gillesania Books PDF Free 171: A Guide for Engineering Students

        - If you are an engineering student looking for a reliable and comprehensive source of mathematics review materials, you might have heard of Gillesania Books. These books are written by DIT Gillesania, a well-known reviewer and instructor for engineering board exams in the Philippines. In this article, we will tell you everything you need to know about Gillesania Books PDF Free 171, how to download them, and how to use them effectively.

        What are Gillesania Books?

        - Gillesania Books are a series of review books for engineering mathematics and other engineering subjects. They cover topics such as algebra, trigonometry, calculus, differential equations, analytic geometry, statistics, probability, engineering mechanics, fluid mechanics, hydraulics, and more. The books are designed to help engineering students prepare for their board exams and refresh their knowledge of the fundamentals of engineering.

        Who is DIT Gillesania?

        - DIT Gillesania is the author of Gillesania Books and the founder of DIT Gillesania Review Center (DGRC), a leading review center for engineering board exams in the Philippines. He is a licensed civil engineer who graduated from Cebu Institute of Technology (CIT) in 1995. He has been teaching and reviewing engineering mathematics and other engineering subjects since 1996. He is also a lecturer and consultant for various engineering organizations and schools.

        What are the benefits of using Gillesania Books?

- Using Gillesania Books can help you achieve the following benefits:
- You can learn and review the concepts and principles of engineering mathematics in a clear and concise manner.
- You can practice your problem-solving skills with hundreds of solved examples and exercises in each book.
- You can test your knowledge and understanding with multiple-choice questions and answer keys in each chapter.
- You can access the books anytime and anywhere with the PDF format.
- You can save money by downloading the books for free from online sources.

        How to download Gillesania Books PDF Free 171?

        - One of the online sources where you can download Gillesania Books PDF Free 171 is Academia.edu, a platform for academics to share research papers. Here are the steps to download the books from this website:

        Step 1: Visit the Academia.edu website

        - Go to https://www.academia.edu/ on your browser. You will need to create an account or log in with your existing account to access the website.

        Step 2: Search for Engineering Math V1 and V2 by Gillesania

        - On the search bar at the top of the website, type "Engineering Math V1 by Gillesania" or "Engineering Math V2 by Gillesania" and hit enter. You will see a list of results that match your query. Look for the files that have ".pdf" at the end of their titles.

        Step 3: Download the PDF files for free

        - Click on the file that you want to download. You will be directed to a page where you can view or download the file. Click on the "Download" button at the top right corner of the page. You will be asked to confirm your email address before downloading. After confirming your email address, you will be able to download the file to your device.

        How to use Gillesania Books PDF Free 171 effectively?

        - Downloading Gillesania Books PDF Free 171 is only half of the process. You also need to use them effectively to maximize your learning and review experience. Here are some tips on how to use the books effectively:

        Review the topics covered in each book

        - Before you start solving problems or answering questions, make sure that you review the topics covered in each book. Read through the explanations and examples carefully and try to understand them fully. If you encounter any unfamiliar terms or concepts, look them up online or consult other sources.

        Solve the problems and exercises in each chapter

        - After reviewing the topics, it's time to apply what you have learned by solving problems and exercises in each chapter. Try to solve them on your own without looking at the solutions first. If you get stuck or make a mistake, don't give up. Try again or look for hints or clues from other sources.

        Check your answers and solutions with the answer key

        - After solving all the problems and exercises in each chapter, check your answers and solutions with the answer key provided at the end of each book. Compare your answers and solutions with those given in the answer key and see where you went wrong or right. Learn from your mistakes and correct them accordingly.

        Use the books as a reference for your engineering courses and exams

        - Finally, use Gillesania Books as a reference for your engineering courses and exams. Review them regularly and refresh your memory of the important concepts and formulas. Use them as a guide when solving homework assignments or preparing for quizzes or tests. They can help you ace your engineering courses and exams with confidence.

        Conclusion

        - Gillesania Books PDF Free 171 are a great resource for engineering students who want to learn and review engineering mathematics and other engineering subjects. They are written by DIT Gillesania, a reputable reviewer and instructor for engineering board exams in the Philippines. They are available online for free download from Academia.edu website. They can help you improve your problem-solving skills, test your knowledge and understanding, and prepare you for your engineering courses and exams.

        FAQs

- Q: How many volumes are there in Gillesania Books?
- A: There are two volumes of Engineering Math by Gillesania: Volume 1 covers algebra, trigonometry, analytic geometry, solid mensuration; Volume 2 covers calculus, differential equations, statistics, probability.
- Q: What other subjects are covered by Gillesania Books?
- A: Aside from Engineering Math, there are also other subjects covered by Gillesania Books such as Engineering Mechanics (Statics & Dynamics), Fluid Mechanics & Hydraulics (Revised Edition), Fundamentals of Reinforced Concrete Design (Revised Edition), Civil Engineering Reference Vol.1 (Revised Edition), Civil Engineering Reference Vol.2 (Revised Edition).
- Q: Where can I buy hard copies of Gillesania Books?
- A: You can buy hard copies of Gillesania Books from DGRC branches or online stores such as Lazada or Shopee.
- Q: Are there any other websites where I can download Gillesania Books PDF Free 171?
- A: There may be other websites where you can download Gillesania Books PDF Free 171 but we cannot guarantee their quality or authenticity. We recommend that you download them from Academia.edu website as it is a trusted platform for academic papers.
- Q: How can I contact DIT Gillesania or DGRC?
- A: You can contact DIT Gillesania or DGRC through their Facebook page https://www.facebook.com/DGRCOfficial/ or their website http://www.dgrc.com.ph/.

        -

        gillesania books pdf free 171


        Download Zip === https://tinourl.com/2uL1n8



        0a6ba089eb
        -
        -
        \ No newline at end of file diff --git a/spaces/rajistics/finbert_forwardlooking/README.md b/spaces/rajistics/finbert_forwardlooking/README.md deleted file mode 100644 index 0a464e4960231e2da504f899d4b182a796026b48..0000000000000000000000000000000000000000 --- a/spaces/rajistics/finbert_forwardlooking/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Finbert Forwardlooking -emoji: 📉 -colorFrom: purple -colorTo: pink -sdk: gradio -sdk_version: 3.0.15 -app_file: app.py -pinned: false -license: apache-2.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/ramiin2/AutoGPT/autogpt/commands/write_tests.py b/spaces/ramiin2/AutoGPT/autogpt/commands/write_tests.py deleted file mode 100644 index 35a086536c9d05d520a84b15ead49f775eacdcc9..0000000000000000000000000000000000000000 --- a/spaces/ramiin2/AutoGPT/autogpt/commands/write_tests.py +++ /dev/null @@ -1,31 +0,0 @@ -"""A module that contains a function to generate test cases for the submitted code.""" -from __future__ import annotations - -import json - -from autogpt.llm_utils import call_ai_function - - -def write_tests(code: str, focus: list[str]) -> str: - """ - A function that takes in code and focus topics and returns a response from create - chat completion api call. - - Parameters: - focus (list): A list of suggestions around what needs to be improved. - code (str): Code for test cases to be generated against. - Returns: - A result string from create chat completion. Test cases for the submitted code - in response. - """ - - function_string = ( - "def create_test_cases(code: str, focus: Optional[str] = None) -> str:" - ) - args = [code, json.dumps(focus)] - description_string = ( - "Generates test cases for the existing code, focusing on" - " specific areas if required." 
- ) - - return call_ai_function(function_string, args, description_string) diff --git a/spaces/rayan-saleh/whisper2notion/server/node_modules/cookie/HISTORY.md b/spaces/rayan-saleh/whisper2notion/server/node_modules/cookie/HISTORY.md deleted file mode 100644 index ae9b995b42630df67a8333aca075b086c48d432c..0000000000000000000000000000000000000000 --- a/spaces/rayan-saleh/whisper2notion/server/node_modules/cookie/HISTORY.md +++ /dev/null @@ -1,142 +0,0 @@ -0.5.0 / 2022-04-11 -================== - - * Add `priority` option - * Fix `expires` option to reject invalid dates - * pref: improve default decode speed - * pref: remove slow string split in parse - -0.4.2 / 2022-02-02 -================== - - * pref: read value only when assigning in parse - * pref: remove unnecessary regexp in parse - -0.4.1 / 2020-04-21 -================== - - * Fix `maxAge` option to reject invalid values - -0.4.0 / 2019-05-15 -================== - - * Add `SameSite=None` support - -0.3.1 / 2016-05-26 -================== - - * Fix `sameSite: true` to work with draft-7 clients - - `true` now sends `SameSite=Strict` instead of `SameSite` - -0.3.0 / 2016-05-26 -================== - - * Add `sameSite` option - - Replaces `firstPartyOnly` option, never implemented by browsers - * Improve error message when `encode` is not a function - * Improve error message when `expires` is not a `Date` - -0.2.4 / 2016-05-20 -================== - - * perf: enable strict mode - * perf: use for loop in parse - * perf: use string concatination for serialization - -0.2.3 / 2015-10-25 -================== - - * Fix cookie `Max-Age` to never be a floating point number - -0.2.2 / 2015-09-17 -================== - - * Fix regression when setting empty cookie value - - Ease the new restriction, which is just basic header-level validation - * Fix typo in invalid value errors - -0.2.1 / 2015-09-17 -================== - - * Throw on invalid values provided to `serialize` - - Ensures the resulting string is a valid HTTP header value - -0.2.0 / 2015-08-13 -================== - - * Add `firstPartyOnly` option - * Throw better error for invalid argument to parse - * perf: hoist regular expression - -0.1.5 / 2015-09-17 -================== - - * Fix regression when setting empty cookie value - - Ease the new restriction, which is just basic header-level validation - * Fix typo in invalid value errors - -0.1.4 / 2015-09-17 -================== - - * Throw better error for invalid argument to parse - * Throw on invalid values provided to `serialize` - - Ensures the resulting string is a valid HTTP header value - -0.1.3 / 2015-05-19 -================== - - * Reduce the scope of try-catch deopt - * Remove argument reassignments - -0.1.2 / 2014-04-16 -================== - - * Remove unnecessary files from npm package - -0.1.1 / 2014-02-23 -================== - - * Fix bad parse when cookie value contained a comma - * Fix support for `maxAge` of `0` - -0.1.0 / 2013-05-01 -================== - - * Add `decode` option - * Add `encode` option - -0.0.6 / 2013-04-08 -================== - - * Ignore cookie parts missing `=` - -0.0.5 / 2012-10-29 -================== - - * Return raw cookie value if value unescape errors - -0.0.4 / 2012-06-21 -================== - - * Use encode/decodeURIComponent for cookie encoding/decoding - - Improve server/client interoperability - -0.0.3 / 2012-06-06 -================== - - * Only escape special characters per the cookie RFC - -0.0.2 / 2012-06-01 -================== - - * Fix `maxAge` option to not throw error - -0.0.1 / 2012-05-28 
-================== - - * Add more tests - -0.0.0 / 2012-05-28 -================== - - * Initial release diff --git a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/Apptha-Airbnb-Clone-Nulled-League-FREE.md b/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/Apptha-Airbnb-Clone-Nulled-League-FREE.md deleted file mode 100644 index 2686fc13f22749d510a7679cd1792548914d6ab4..0000000000000000000000000000000000000000 --- a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/Apptha-Airbnb-Clone-Nulled-League-FREE.md +++ /dev/null @@ -1,65 +0,0 @@ -## Apptha Airbnb Clone Nulled League - - - -**DOWNLOAD ……… [https://www.google.com/url?q=https%3A%2F%2Furllio.com%2F2twELM&sa=D&sntz=1&usg=AOvVaw1g8T5Tw1zYiykr6LHwb2EH](https://www.google.com/url?q=https%3A%2F%2Furllio.com%2F2twELM&sa=D&sntz=1&usg=AOvVaw1g8T5Tw1zYiykr6LHwb2EH)** - - - -# How to Start Your Own Online Rental Marketplace with Apptha Airbnb Clone Nulled League - - - -If you are looking for a way to launch your own online rental marketplace like Airbnb, but don't want to spend a fortune on development and licensing fees, then you might be interested in Apptha Airbnb Clone Nulled League. This is a ready-made script that you can use to create a website where owners can list their properties, rooms, cars, boats, bikes, pets, office spaces and other items for rent, and travelers can book them online. - - - -Apptha Airbnb Clone Nulled League is based on the popular vacation rental software Airbnb, but it comes with some unique features and advantages that make it stand out from the crowd. Here are some of them: - - - -- It is 100% customizable and scalable. You can modify the design, layout, features and functionality of your website according to your preferences and requirements. - -- It is SEO-friendly and mobile-responsive. Your website will rank well on search engines and look great on any device. - -- It supports multiple languages and currencies. You can cater to a global audience and accept payments in different currencies. - -- It has a powerful admin panel and user dashboard. You can manage your website easily and efficiently from the backend, and your users can access their profiles, bookings, reviews, messages and other features from the frontend. - -- It has a built-in commission system and payment gateway integration. You can earn revenue by charging a commission fee from each booking, and you can accept payments via PayPal, Stripe, Authorize.net and other popular methods. - -- It has a social media integration and referral system. You can promote your website on social media platforms like Facebook, Twitter and Instagram, and you can reward your users for inviting their friends to join your website. - -- It has a rating and review system and a dispute management system. You can ensure the quality and trustworthiness of your listings and users by allowing them to rate and review each other, and you can resolve any issues or conflicts that may arise between them. - - - -Apptha Airbnb Clone Nulled League is the best solution for anyone who wants to start their own online rental marketplace without breaking the bank. It is easy to install, configure and use, and it comes with free technical support and updates for one year. You can get it for only $499 from PHP Market.cc[^4^], which is a fraction of the cost of developing a similar website from scratch or buying a licensed script. - - - -So what are you waiting for? 
        Grab your copy of Apptha Airbnb Clone Nulled League today and launch your own online rental marketplace in no time!

        If you are wondering how Apptha Airbnb Clone Nulled League works, here is a brief overview of the process:

        1. As an owner, you can register on the website and create a listing for your property or item. You can add photos, videos, descriptions, prices, availability and other details to make your listing attractive and informative.

        2. As a traveler, you can browse the website and search for listings that match your criteria. You can filter the results by location, date, price, type, amenities and other factors. You can also view the ratings and reviews of the owners and the listings.

        3. When you find a listing that you like, you can contact the owner via the messaging system and ask any questions or clarifications. You can also request a booking by selecting your dates and paying a deposit via the payment gateway.

        4. The owner will receive your booking request and can either accept or reject it. If they accept it, you will receive a confirmation email and your booking will be confirmed. If they reject it, you will receive a refund of your deposit.

        5. After your booking is confirmed, you can communicate with the owner to arrange the check-in and check-out details. You can also use the website to manage your bookings, cancel or modify them if needed.

        6. After your stay or rental is over, you can rate and review the owner and the listing on the website. The owner can also rate and review you as a traveler. This will help other users to make informed decisions and improve the quality of the website.

        As you can see, Apptha Airbnb Clone Nulled League is a simple and convenient way to book and rent properties and items online. It is also a great way to earn money by renting out your unused spaces and items to travelers who need them. Whether you are an owner or a traveler, you will find Apptha Airbnb Clone Nulled League to be a useful and enjoyable platform for your online rental needs.

        1b8d091108 \ No newline at end of file diff --git a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/AutoCAD P ID 2014 64 Bit Adlmint.dll Crack Download LINK.md b/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/AutoCAD P ID 2014 64 Bit Adlmint.dll Crack Download LINK.md deleted file mode 100644 index e234c5997c0d9bfff35f41a6c2195b99de73a1e2..0000000000000000000000000000000000000000 --- a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/AutoCAD P ID 2014 64 Bit Adlmint.dll Crack Download LINK.md +++ /dev/null @@ -1,52 +0,0 @@ -

        AutoCAD P ID 2014 64 Bit Adlmint.dll Crack Download


        DOWNLOADhttps://urlgoal.com/2uCK4N



        - -  - -I ran it in administrator mode and it still didn't work.  - -Could anyone give me a little help?  - -A: - -You can't download an installation of Autocad using the Download Manager. You must do the install manually or from an ISO. - -You can download the desktop edition of Autocad here. - -You can download the CAD Standard from the Autodesk site here. (Keep in mind, this is a completely different product from Autocad.) - -You may be able to use the BRLT trial to at least see if it works. - -Isatin-containing hybrid compounds as agonists of the nicotinic acetylcholine receptor. - -A new series of compounds incorporating a quinoline scaffold connected via an alpha,beta-unsaturated imino function to a purine was synthesized and evaluated for their ability to interact with and modulate the function of nicotinic acetylcholine receptors expressed in Xenopus oocytes. In this study, the pharmacological profile of these compounds was determined in relation to two previously reported series of quinolines. In addition to their potency at muscarinic receptors, two of the compounds showed a selective antagonism of the antinociceptive effect of morphine. The results of the pharmacological assays were interpreted in terms of the different conformational states that the compounds may adopt in the receptor binding site, and the different behavior of the compounds may be related to the size and position of the substituent in relation to the imino function. -
        -
        -

        diff --git a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Cinderella 2 Dreams Come True Full Movie In Hindi [PATCHED].md b/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Cinderella 2 Dreams Come True Full Movie In Hindi [PATCHED].md deleted file mode 100644 index 28c44f13fbf60b78f36b08b34fec60ba1f62801e..0000000000000000000000000000000000000000 --- a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Cinderella 2 Dreams Come True Full Movie In Hindi [PATCHED].md +++ /dev/null @@ -1,9 +0,0 @@ -
        -

        How does Cinderella 2: Dreams Come True compare to the original film? Here are some interesting facts about the new movie:

        -

        cinderella 2 dreams come true full movie in hindi


        DOWNLOAD ->>> https://urlgoal.com/2uCJOH



        • The film was directed by Stephen Gaghan and written by Allan Burns.
        • This is the first Disney animated film that is not a direct-to-video sequel to a live-action film, such as Beauty and the Beast (1991).
        • This is the first Cinderella film not to be animated by Disney Feature Animation.
        • This is the first Disney film to be released in Europe, and the only Disney film to be released in Spain.
        • This is the only Disney animated film to be released in China.
        -

        advance box turbo flasher v9 40 full installer
        please enable your vpn when downloading torrents. if you torrent without a vpn, your isp can see that you're torrenting and may throttle your.. cinderella 2: dreams come true () poster. jaq and gus create a. barbie in hindi dubbed all movies free download mp4 & 3gp. read more on latest. international economics feenstra and taylor pdf download

        -

        as a newly crowned princess, cinderella quickly learns that life at the palace - and her royal responsibilities - are more challenging than she had imagined. in three heartwarming tales, cinderella calls on her animal friends and her fairy godmother to help as she brings her own grace and charm to her regal role and discovers that being true to yourself is the best way to make your dreams come true.

        -

        Cinderella goes to the palace and sees her mother, who has been sick for a long time, waiting for her. She tells her mother that she will make her happy, and her mother tells her to go to the ball. Cinderella goes to the ball and is shocked to see her stepfamily there. Drizella and her mother try to be nice, but it is clear that they are unhappy with Anastasia. Cinderella is upset that her stepfamily is at the ball, so she tries to think of a way to get rid of them without her mother knowing. She tells the baker that she has to leave since she has to dance with the king, and she leaves with the baker. Anastasia goes to the ball with the baker, and Drizella dances with her mother. Later on, the prince of the kingdom, who has been in love with Anastasia for a long time, tells the king that he wants to marry Anastasia; the king and everyone else are upset about it, but Anastasia is happy. She gives the baker a kiss and he turns into a prince. Anastasia and the baker are married, and the baker's family and the prince's family are there. Anastasia's stepfamily doesn't want to see her married, so they come up with a plan: they put on disguises and drive to the palace, where they kidnap the prince. The baker's family and Anastasia's stepfamily help him escape and are all happy about it. Drizella then cries and says that she is sorry for how she has acted and that she loves Anastasia, so Anastasia tells her that it's okay and that she loves her too. Drizella then looks happy. The baker, meanwhile, takes Anastasia to the palace to tell her that he loves her, so that she'll be happy and they'll have a beautiful family.

        -

        899543212b
        -
        -
        \ No newline at end of file diff --git a/spaces/reha/Stick_Tech/commons.py b/spaces/reha/Stick_Tech/commons.py deleted file mode 100644 index 074888006392e956ce204d8368362dbb2cd4e304..0000000000000000000000000000000000000000 --- a/spaces/reha/Stick_Tech/commons.py +++ /dev/null @@ -1,188 +0,0 @@ -import math -import numpy as np -import torch -from torch import nn -from torch.nn import functional as F - -def slice_pitch_segments(x, ids_str, segment_size=4): - ret = torch.zeros_like(x[:, :segment_size]) - for i in range(x.size(0)): - idx_str = ids_str[i] - idx_end = idx_str + segment_size - ret[i] = x[i, idx_str:idx_end] - return ret - -def rand_slice_segments_with_pitch(x, pitch, x_lengths=None, segment_size=4): - b, d, t = x.size() - if x_lengths is None: - x_lengths = t - ids_str_max = x_lengths - segment_size + 1 - ids_str = (torch.rand([b]).to(device=x.device) * ids_str_max).to(dtype=torch.long) - ret = slice_segments(x, ids_str, segment_size) - ret_pitch = slice_pitch_segments(pitch, ids_str, segment_size) - return ret, ret_pitch, ids_str - -def init_weights(m, mean=0.0, std=0.01): - classname = m.__class__.__name__ - if classname.find("Conv") != -1: - m.weight.data.normal_(mean, std) - - -def get_padding(kernel_size, dilation=1): - return int((kernel_size*dilation - dilation)/2) - - -def convert_pad_shape(pad_shape): - l = pad_shape[::-1] - pad_shape = [item for sublist in l for item in sublist] - return pad_shape - - -def intersperse(lst, item): - result = [item] * (len(lst) * 2 + 1) - result[1::2] = lst - return result - - -def kl_divergence(m_p, logs_p, m_q, logs_q): - """KL(P||Q)""" - kl = (logs_q - logs_p) - 0.5 - kl += 0.5 * (torch.exp(2. * logs_p) + ((m_p - m_q)**2)) * torch.exp(-2. * logs_q) - return kl - - -def rand_gumbel(shape): - """Sample from the Gumbel distribution, protect from overflows.""" - uniform_samples = torch.rand(shape) * 0.99998 + 0.00001 - return -torch.log(-torch.log(uniform_samples)) - - -def rand_gumbel_like(x): - g = rand_gumbel(x.size()).to(dtype=x.dtype, device=x.device) - return g - - -def slice_segments(x, ids_str, segment_size=4): - ret = torch.zeros_like(x[:, :, :segment_size]) - for i in range(x.size(0)): - idx_str = ids_str[i] - idx_end = idx_str + segment_size - ret[i] = x[i, :, idx_str:idx_end] - return ret - - -def rand_slice_segments(x, x_lengths=None, segment_size=4): - b, d, t = x.size() - if x_lengths is None: - x_lengths = t - ids_str_max = x_lengths - segment_size + 1 - ids_str = (torch.rand([b]).to(device=x.device) * ids_str_max).to(dtype=torch.long) - ret = slice_segments(x, ids_str, segment_size) - return ret, ids_str - - -def rand_spec_segments(x, x_lengths=None, segment_size=4): - b, d, t = x.size() - if x_lengths is None: - x_lengths = t - ids_str_max = x_lengths - segment_size - ids_str = (torch.rand([b]).to(device=x.device) * ids_str_max).to(dtype=torch.long) - ret = slice_segments(x, ids_str, segment_size) - return ret, ids_str - - -def get_timing_signal_1d( - length, channels, min_timescale=1.0, max_timescale=1.0e4): - position = torch.arange(length, dtype=torch.float) - num_timescales = channels // 2 - log_timescale_increment = ( - math.log(float(max_timescale) / float(min_timescale)) / - (num_timescales - 1)) - inv_timescales = min_timescale * torch.exp( - torch.arange(num_timescales, dtype=torch.float) * -log_timescale_increment) - scaled_time = position.unsqueeze(0) * inv_timescales.unsqueeze(1) - signal = torch.cat([torch.sin(scaled_time), torch.cos(scaled_time)], 0) - signal = F.pad(signal, [0, 0, 0, channels % 
2]) - signal = signal.view(1, channels, length) - return signal - - -def add_timing_signal_1d(x, min_timescale=1.0, max_timescale=1.0e4): - b, channels, length = x.size() - signal = get_timing_signal_1d(length, channels, min_timescale, max_timescale) - return x + signal.to(dtype=x.dtype, device=x.device) - - -def cat_timing_signal_1d(x, min_timescale=1.0, max_timescale=1.0e4, axis=1): - b, channels, length = x.size() - signal = get_timing_signal_1d(length, channels, min_timescale, max_timescale) - return torch.cat([x, signal.to(dtype=x.dtype, device=x.device)], axis) - - -def subsequent_mask(length): - mask = torch.tril(torch.ones(length, length)).unsqueeze(0).unsqueeze(0) - return mask - - -@torch.jit.script -def fused_add_tanh_sigmoid_multiply(input_a, input_b, n_channels): - n_channels_int = n_channels[0] - in_act = input_a + input_b - t_act = torch.tanh(in_act[:, :n_channels_int, :]) - s_act = torch.sigmoid(in_act[:, n_channels_int:, :]) - acts = t_act * s_act - return acts - - -def convert_pad_shape(pad_shape): - l = pad_shape[::-1] - pad_shape = [item for sublist in l for item in sublist] - return pad_shape - - -def shift_1d(x): - x = F.pad(x, convert_pad_shape([[0, 0], [0, 0], [1, 0]]))[:, :, :-1] - return x - - -def sequence_mask(length, max_length=None): - if max_length is None: - max_length = length.max() - x = torch.arange(max_length, dtype=length.dtype, device=length.device) - return x.unsqueeze(0) < length.unsqueeze(1) - - -def generate_path(duration, mask): - """ - duration: [b, 1, t_x] - mask: [b, 1, t_y, t_x] - """ - device = duration.device - - b, _, t_y, t_x = mask.shape - cum_duration = torch.cumsum(duration, -1) - - cum_duration_flat = cum_duration.view(b * t_x) - path = sequence_mask(cum_duration_flat, t_y).to(mask.dtype) - path = path.view(b, t_x, t_y) - path = path - F.pad(path, convert_pad_shape([[0, 0], [1, 0], [0, 0]]))[:, :-1] - path = path.unsqueeze(1).transpose(2,3) * mask - return path - - -def clip_grad_value_(parameters, clip_value, norm_type=2): - if isinstance(parameters, torch.Tensor): - parameters = [parameters] - parameters = list(filter(lambda p: p.grad is not None, parameters)) - norm_type = float(norm_type) - if clip_value is not None: - clip_value = float(clip_value) - - total_norm = 0 - for p in parameters: - param_norm = p.grad.data.norm(norm_type) - total_norm += param_norm.item() ** norm_type - if clip_value is not None: - p.grad.data.clamp_(min=-clip_value, max=clip_value) - total_norm = total_norm ** (1. 
/ norm_type) - return total_norm diff --git a/spaces/rektKnight/stable-diffusion-webui-cpu_dupli/README.md b/spaces/rektKnight/stable-diffusion-webui-cpu_dupli/README.md deleted file mode 100644 index be7ee569457ce071d083ababebb5729b1c7bc8a4..0000000000000000000000000000000000000000 --- a/spaces/rektKnight/stable-diffusion-webui-cpu_dupli/README.md +++ /dev/null @@ -1,14 +0,0 @@ ---- -title: Stable Diffusion Webui on Cpu -emoji: 🏃 -colorFrom: pink -colorTo: purple -sdk: gradio -sdk_version: 3.29.0 -app_file: app.py -pinned: false -python_version: 3.10.6 -duplicated_from: vk-ai-system/stable-diffusion-webui-cpu ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/rockeycoss/Prompt-Segment-Anything-Demo/mmdet/models/roi_heads/point_rend_roi_head.py b/spaces/rockeycoss/Prompt-Segment-Anything-Demo/mmdet/models/roi_heads/point_rend_roi_head.py deleted file mode 100644 index 9f667793f48abd948592d1c0f50f8975ae2c4b89..0000000000000000000000000000000000000000 --- a/spaces/rockeycoss/Prompt-Segment-Anything-Demo/mmdet/models/roi_heads/point_rend_roi_head.py +++ /dev/null @@ -1,393 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -# Modified from https://github.com/facebookresearch/detectron2/tree/master/projects/PointRend # noqa -import os -import warnings - -import numpy as np -import torch -import torch.nn.functional as F -from mmcv.ops import point_sample, rel_roi_point_to_rel_img_point - -from mmdet.core import bbox2roi, bbox_mapping, merge_aug_masks -from .. import builder -from ..builder import HEADS -from .standard_roi_head import StandardRoIHead - - -@HEADS.register_module() -class PointRendRoIHead(StandardRoIHead): - """`PointRend `_.""" - - def __init__(self, point_head, *args, **kwargs): - super().__init__(*args, **kwargs) - assert self.with_bbox and self.with_mask - self.init_point_head(point_head) - - def init_point_head(self, point_head): - """Initialize ``point_head``""" - self.point_head = builder.build_head(point_head) - - def _mask_forward_train(self, x, sampling_results, bbox_feats, gt_masks, - img_metas): - """Run forward function and calculate loss for mask head and point head - in training.""" - mask_results = super()._mask_forward_train(x, sampling_results, - bbox_feats, gt_masks, - img_metas) - if mask_results['loss_mask'] is not None: - loss_point = self._mask_point_forward_train( - x, sampling_results, mask_results['mask_pred'], gt_masks, - img_metas) - mask_results['loss_mask'].update(loss_point) - - return mask_results - - def _mask_point_forward_train(self, x, sampling_results, mask_pred, - gt_masks, img_metas): - """Run forward function and calculate loss for point head in - training.""" - pos_labels = torch.cat([res.pos_gt_labels for res in sampling_results]) - rel_roi_points = self.point_head.get_roi_rel_points_train( - mask_pred, pos_labels, cfg=self.train_cfg) - rois = bbox2roi([res.pos_bboxes for res in sampling_results]) - - fine_grained_point_feats = self._get_fine_grained_point_feats( - x, rois, rel_roi_points, img_metas) - coarse_point_feats = point_sample(mask_pred, rel_roi_points) - mask_point_pred = self.point_head(fine_grained_point_feats, - coarse_point_feats) - mask_point_target = self.point_head.get_targets( - rois, rel_roi_points, sampling_results, gt_masks, self.train_cfg) - loss_mask_point = self.point_head.loss(mask_point_pred, - mask_point_target, pos_labels) - - return loss_mask_point - - def _get_fine_grained_point_feats(self, x, rois, rel_roi_points, - img_metas): - 
"""Sample fine grained feats from each level feature map and - concatenate them together. - - Args: - x (tuple[Tensor]): Feature maps of all scale level. - rois (Tensor): shape (num_rois, 5). - rel_roi_points (Tensor): A tensor of shape (num_rois, num_points, - 2) that contains [0, 1] x [0, 1] normalized coordinates of the - most uncertain points from the [mask_height, mask_width] grid. - img_metas (list[dict]): Image meta info. - - Returns: - Tensor: The fine grained features for each points, - has shape (num_rois, feats_channels, num_points). - """ - num_imgs = len(img_metas) - fine_grained_feats = [] - for idx in range(self.mask_roi_extractor.num_inputs): - feats = x[idx] - spatial_scale = 1. / float( - self.mask_roi_extractor.featmap_strides[idx]) - point_feats = [] - for batch_ind in range(num_imgs): - # unravel batch dim - feat = feats[batch_ind].unsqueeze(0) - inds = (rois[:, 0].long() == batch_ind) - if inds.any(): - rel_img_points = rel_roi_point_to_rel_img_point( - rois[inds], rel_roi_points[inds], feat.shape[2:], - spatial_scale).unsqueeze(0) - point_feat = point_sample(feat, rel_img_points) - point_feat = point_feat.squeeze(0).transpose(0, 1) - point_feats.append(point_feat) - fine_grained_feats.append(torch.cat(point_feats, dim=0)) - return torch.cat(fine_grained_feats, dim=1) - - def _mask_point_forward_test(self, x, rois, label_pred, mask_pred, - img_metas): - """Mask refining process with point head in testing. - - Args: - x (tuple[Tensor]): Feature maps of all scale level. - rois (Tensor): shape (num_rois, 5). - label_pred (Tensor): The predication class for each rois. - mask_pred (Tensor): The predication coarse masks of - shape (num_rois, num_classes, small_size, small_size). - img_metas (list[dict]): Image meta info. - - Returns: - Tensor: The refined masks of shape (num_rois, num_classes, - large_size, large_size). 
- """ - refined_mask_pred = mask_pred.clone() - for subdivision_step in range(self.test_cfg.subdivision_steps): - refined_mask_pred = F.interpolate( - refined_mask_pred, - scale_factor=self.test_cfg.scale_factor, - mode='bilinear', - align_corners=False) - # If `subdivision_num_points` is larger or equal to the - # resolution of the next step, then we can skip this step - num_rois, channels, mask_height, mask_width = \ - refined_mask_pred.shape - if (self.test_cfg.subdivision_num_points >= - self.test_cfg.scale_factor**2 * mask_height * mask_width - and - subdivision_step < self.test_cfg.subdivision_steps - 1): - continue - point_indices, rel_roi_points = \ - self.point_head.get_roi_rel_points_test( - refined_mask_pred, label_pred, cfg=self.test_cfg) - fine_grained_point_feats = self._get_fine_grained_point_feats( - x, rois, rel_roi_points, img_metas) - coarse_point_feats = point_sample(mask_pred, rel_roi_points) - mask_point_pred = self.point_head(fine_grained_point_feats, - coarse_point_feats) - - point_indices = point_indices.unsqueeze(1).expand(-1, channels, -1) - refined_mask_pred = refined_mask_pred.reshape( - num_rois, channels, mask_height * mask_width) - refined_mask_pred = refined_mask_pred.scatter_( - 2, point_indices, mask_point_pred) - refined_mask_pred = refined_mask_pred.view(num_rois, channels, - mask_height, mask_width) - - return refined_mask_pred - - def simple_test_mask(self, - x, - img_metas, - det_bboxes, - det_labels, - rescale=False): - """Obtain mask prediction without augmentation.""" - ori_shapes = tuple(meta['ori_shape'] for meta in img_metas) - scale_factors = tuple(meta['scale_factor'] for meta in img_metas) - - if isinstance(scale_factors[0], float): - warnings.warn( - 'Scale factor in img_metas should be a ' - 'ndarray with shape (4,) ' - 'arrange as (factor_w, factor_h, factor_w, factor_h), ' - 'The scale_factor with float type has been deprecated. ') - scale_factors = np.array([scale_factors] * 4, dtype=np.float32) - - num_imgs = len(det_bboxes) - if all(det_bbox.shape[0] == 0 for det_bbox in det_bboxes): - segm_results = [[[] for _ in range(self.mask_head.num_classes)] - for _ in range(num_imgs)] - else: - # if det_bboxes is rescaled to the original image size, we need to - # rescale it back to the testing scale to obtain RoIs. 
- _bboxes = [det_bboxes[i][:, :4] for i in range(len(det_bboxes))] - if rescale: - scale_factors = [ - torch.from_numpy(scale_factor).to(det_bboxes[0].device) - for scale_factor in scale_factors - ] - _bboxes = [ - _bboxes[i] * scale_factors[i] for i in range(len(_bboxes)) - ] - - mask_rois = bbox2roi(_bboxes) - mask_results = self._mask_forward(x, mask_rois) - # split batch mask prediction back to each image - mask_pred = mask_results['mask_pred'] - num_mask_roi_per_img = [len(det_bbox) for det_bbox in det_bboxes] - mask_preds = mask_pred.split(num_mask_roi_per_img, 0) - mask_rois = mask_rois.split(num_mask_roi_per_img, 0) - - # apply mask post-processing to each image individually - segm_results = [] - for i in range(num_imgs): - if det_bboxes[i].shape[0] == 0: - segm_results.append( - [[] for _ in range(self.mask_head.num_classes)]) - else: - x_i = [xx[[i]] for xx in x] - mask_rois_i = mask_rois[i] - mask_rois_i[:, 0] = 0 # TODO: remove this hack - mask_pred_i = self._mask_point_forward_test( - x_i, mask_rois_i, det_labels[i], mask_preds[i], - [img_metas]) - segm_result = self.mask_head.get_seg_masks( - mask_pred_i, _bboxes[i], det_labels[i], self.test_cfg, - ori_shapes[i], scale_factors[i], rescale) - segm_results.append(segm_result) - return segm_results - - def aug_test_mask(self, feats, img_metas, det_bboxes, det_labels): - """Test for mask head with test time augmentation.""" - if det_bboxes.shape[0] == 0: - segm_result = [[] for _ in range(self.mask_head.num_classes)] - else: - aug_masks = [] - for x, img_meta in zip(feats, img_metas): - img_shape = img_meta[0]['img_shape'] - scale_factor = img_meta[0]['scale_factor'] - flip = img_meta[0]['flip'] - _bboxes = bbox_mapping(det_bboxes[:, :4], img_shape, - scale_factor, flip) - mask_rois = bbox2roi([_bboxes]) - mask_results = self._mask_forward(x, mask_rois) - mask_results['mask_pred'] = self._mask_point_forward_test( - x, mask_rois, det_labels, mask_results['mask_pred'], - img_meta) - # convert to numpy array to save memory - aug_masks.append( - mask_results['mask_pred'].sigmoid().cpu().numpy()) - merged_masks = merge_aug_masks(aug_masks, img_metas, self.test_cfg) - - ori_shape = img_metas[0][0]['ori_shape'] - segm_result = self.mask_head.get_seg_masks( - merged_masks, - det_bboxes, - det_labels, - self.test_cfg, - ori_shape, - scale_factor=1.0, - rescale=False) - return segm_result - - def _onnx_get_fine_grained_point_feats(self, x, rois, rel_roi_points): - """Export the process of sampling fine grained feats to onnx. - - Args: - x (tuple[Tensor]): Feature maps of all scale level. - rois (Tensor): shape (num_rois, 5). - rel_roi_points (Tensor): A tensor of shape (num_rois, num_points, - 2) that contains [0, 1] x [0, 1] normalized coordinates of the - most uncertain points from the [mask_height, mask_width] grid. - - Returns: - Tensor: The fine grained features for each points, - has shape (num_rois, feats_channels, num_points). - """ - batch_size = x[0].shape[0] - num_rois = rois.shape[0] - fine_grained_feats = [] - for idx in range(self.mask_roi_extractor.num_inputs): - feats = x[idx] - spatial_scale = 1. 
/ float( - self.mask_roi_extractor.featmap_strides[idx]) - - rel_img_points = rel_roi_point_to_rel_img_point( - rois, rel_roi_points, feats, spatial_scale) - channels = feats.shape[1] - num_points = rel_img_points.shape[1] - rel_img_points = rel_img_points.reshape(batch_size, -1, num_points, - 2) - point_feats = point_sample(feats, rel_img_points) - point_feats = point_feats.transpose(1, 2).reshape( - num_rois, channels, num_points) - fine_grained_feats.append(point_feats) - return torch.cat(fine_grained_feats, dim=1) - - def _mask_point_onnx_export(self, x, rois, label_pred, mask_pred): - """Export mask refining process with point head to onnx. - - Args: - x (tuple[Tensor]): Feature maps of all scale level. - rois (Tensor): shape (num_rois, 5). - label_pred (Tensor): The predication class for each rois. - mask_pred (Tensor): The predication coarse masks of - shape (num_rois, num_classes, small_size, small_size). - - Returns: - Tensor: The refined masks of shape (num_rois, num_classes, - large_size, large_size). - """ - refined_mask_pred = mask_pred.clone() - for subdivision_step in range(self.test_cfg.subdivision_steps): - refined_mask_pred = F.interpolate( - refined_mask_pred, - scale_factor=self.test_cfg.scale_factor, - mode='bilinear', - align_corners=False) - # If `subdivision_num_points` is larger or equal to the - # resolution of the next step, then we can skip this step - num_rois, channels, mask_height, mask_width = \ - refined_mask_pred.shape - if (self.test_cfg.subdivision_num_points >= - self.test_cfg.scale_factor**2 * mask_height * mask_width - and - subdivision_step < self.test_cfg.subdivision_steps - 1): - continue - point_indices, rel_roi_points = \ - self.point_head.get_roi_rel_points_test( - refined_mask_pred, label_pred, cfg=self.test_cfg) - fine_grained_point_feats = self._onnx_get_fine_grained_point_feats( - x, rois, rel_roi_points) - coarse_point_feats = point_sample(mask_pred, rel_roi_points) - mask_point_pred = self.point_head(fine_grained_point_feats, - coarse_point_feats) - - point_indices = point_indices.unsqueeze(1).expand(-1, channels, -1) - refined_mask_pred = refined_mask_pred.reshape( - num_rois, channels, mask_height * mask_width) - - is_trt_backend = os.environ.get('ONNX_BACKEND') == 'MMCVTensorRT' - # avoid ScatterElements op in ONNX for TensorRT - if is_trt_backend: - mask_shape = refined_mask_pred.shape - point_shape = point_indices.shape - inds_dim0 = torch.arange(point_shape[0]).reshape( - point_shape[0], 1, 1).expand_as(point_indices) - inds_dim1 = torch.arange(point_shape[1]).reshape( - 1, point_shape[1], 1).expand_as(point_indices) - inds_1d = inds_dim0.reshape( - -1) * mask_shape[1] * mask_shape[2] + inds_dim1.reshape( - -1) * mask_shape[2] + point_indices.reshape(-1) - refined_mask_pred = refined_mask_pred.reshape(-1) - refined_mask_pred[inds_1d] = mask_point_pred.reshape(-1) - refined_mask_pred = refined_mask_pred.reshape(*mask_shape) - else: - refined_mask_pred = refined_mask_pred.scatter_( - 2, point_indices, mask_point_pred) - - refined_mask_pred = refined_mask_pred.view(num_rois, channels, - mask_height, mask_width) - - return refined_mask_pred - - def mask_onnx_export(self, x, img_metas, det_bboxes, det_labels, **kwargs): - """Export mask branch to onnx which supports batch inference. - - Args: - x (tuple[Tensor]): Feature maps of all scale level. - img_metas (list[dict]): Image meta info. - det_bboxes (Tensor): Bboxes and corresponding scores. - has shape [N, num_bboxes, 5]. - det_labels (Tensor): class labels of - shape [N, num_bboxes]. 
- - Returns: - Tensor: The segmentation results of shape [N, num_bboxes, - image_height, image_width]. - """ - if all(det_bbox.shape[0] == 0 for det_bbox in det_bboxes): - raise RuntimeError('[ONNX Error] Can not record MaskHead ' - 'as it has not been executed this time') - batch_size = det_bboxes.size(0) - # if det_bboxes is rescaled to the original image size, we need to - # rescale it back to the testing scale to obtain RoIs. - det_bboxes = det_bboxes[..., :4] - batch_index = torch.arange( - det_bboxes.size(0), device=det_bboxes.device).float().view( - -1, 1, 1).expand(det_bboxes.size(0), det_bboxes.size(1), 1) - mask_rois = torch.cat([batch_index, det_bboxes], dim=-1) - mask_rois = mask_rois.view(-1, 5) - mask_results = self._mask_forward(x, mask_rois) - mask_pred = mask_results['mask_pred'] - max_shape = img_metas[0]['img_shape_for_onnx'] - num_det = det_bboxes.shape[1] - det_bboxes = det_bboxes.reshape(-1, 4) - det_labels = det_labels.reshape(-1) - - mask_pred = self._mask_point_onnx_export(x, mask_rois, det_labels, - mask_pred) - - segm_results = self.mask_head.onnx_export(mask_pred, det_bboxes, - det_labels, self.test_cfg, - max_shape) - segm_results = segm_results.reshape(batch_size, num_det, max_shape[0], - max_shape[1]) - return segm_results diff --git a/spaces/rorallitri/biomedical-language-models/logs/Arpan Full Movie _BEST_ Download Mp4.md b/spaces/rorallitri/biomedical-language-models/logs/Arpan Full Movie _BEST_ Download Mp4.md deleted file mode 100644 index abf73eb2ba9cc580b534356edfd1c18a0ae0d8bb..0000000000000000000000000000000000000000 --- a/spaces/rorallitri/biomedical-language-models/logs/Arpan Full Movie _BEST_ Download Mp4.md +++ /dev/null @@ -1,6 +0,0 @@ -

        arpan full movie download mp4


        Download Zip > https://tinurll.com/2uznKS



        - -... Jaya Rama 2 Mp3 Free and download mp3 Sriram Jaya Rama 2 full album, ... Abhana Arpan Classical Marathi Abhangs, Srirama Jayarama Jay Jay Ram ... 1fdad05405
        -
        -
        -

        diff --git a/spaces/rorallitri/biomedical-language-models/logs/Bluestacks for Mac OS X El Capitan A Guide to Download and Install.md b/spaces/rorallitri/biomedical-language-models/logs/Bluestacks for Mac OS X El Capitan A Guide to Download and Install.md deleted file mode 100644 index 372aabc30a22d913ddbf91927e716fcca3c5a4a2..0000000000000000000000000000000000000000 --- a/spaces/rorallitri/biomedical-language-models/logs/Bluestacks for Mac OS X El Capitan A Guide to Download and Install.md +++ /dev/null @@ -1,9 +0,0 @@ - -

        For about two months, I had been using the BlueStacks Android emulator in a Windows virtual machine. Specifically, I was using Parallels and 32-bit Windows 7, and was just using BlueStacks to play some apps. During that time I was running an early version of OS X Yosemite.

        -

        How To Use Bluestacks For Mac Os X El Capitan


        DOWNLOAD ……… https://tinurll.com/2uznOu



        -

        Unfortunately, I have been unable to play apps ever since one of the later Yosemite updates, and the issue persists in the El Capitan update. I can open BlueStacks and run the apps fine, but it doesn't take long before my Mac crashes due to insufficient memory. I have 8 GB of RAM, which had been more than enough to run BlueStacks in the past.

        -

        It was hard to determine what was causing this issue, but I'm almost certain now that it's an OS X issue (not BlueStacks, Parallels, or Windows). I have tried VMware Fusion, VirtualBox, all the different Windows versions (7 through 10), and many different BlueStacks versions. Also, I would prefer not to use Boot Camp Assistant, as I only have a 121 GB hard drive and I would like to run BlueStacks in Windows and my Mac applications at the same time.

        -

        May I ask how you got this to work without crashing? My problem is similar. If I run BlueStacks on my Mac and then want to run VirtualBox to virtualize Windows, I always have to restart; otherwise my Mac crashes.

        -

        aaccfb2cb3
        -
        -
        \ No newline at end of file diff --git a/spaces/rorallitri/biomedical-language-models/logs/Boris Fx 8.0 Serial Numberl ((EXCLUSIVE)).md b/spaces/rorallitri/biomedical-language-models/logs/Boris Fx 8.0 Serial Numberl ((EXCLUSIVE)).md deleted file mode 100644 index f18f6369e5214128f009d96659ef01fda74f9cae..0000000000000000000000000000000000000000 --- a/spaces/rorallitri/biomedical-language-models/logs/Boris Fx 8.0 Serial Numberl ((EXCLUSIVE)).md +++ /dev/null @@ -1,8 +0,0 @@ -

        Boris Fx 8.0 Serial Numberl


        Download File –––––>>> https://tinurll.com/2uzmxa



        - -December 8, 2021 - 1479 Crack Free Download with Activation Key [Latest]. Boris FX Continuum Complete 2022 v15.0.0.1479 Crack - the most complete filters and . May 15, 2018 - KeyMaker is a free and simple tool that helps users generate keys for various products like Office 365, Microsoft Store, etc. -October 16, 2018 - Key to activate office 365 license key 2018 - Keys to activate microsoft office word 2010 free 2018 - 2019 2019 - 2020. -Dec 14 2019 - Microsoft Office 2019 torrent download free, Microsoft Office 2019 8a78ff9644
        -
        -
        -

        diff --git a/spaces/runa91/barc_gradio/src/stacked_hourglass/datasets/stanext24.py b/spaces/runa91/barc_gradio/src/stacked_hourglass/datasets/stanext24.py deleted file mode 100644 index e217bf076fb63de5655fc173737ecd2e9803b1e6..0000000000000000000000000000000000000000 --- a/spaces/runa91/barc_gradio/src/stacked_hourglass/datasets/stanext24.py +++ /dev/null @@ -1,301 +0,0 @@ -# 24 joints instead of 20!! - - -import gzip -import json -import os -import random -import math -import numpy as np -import torch -import torch.utils.data as data -from importlib_resources import open_binary -from scipy.io import loadmat -from tabulate import tabulate -import itertools -import json -from scipy import ndimage - -from csv import DictReader -from pycocotools.mask import decode as decode_RLE - -import sys -sys.path.insert(0, os.path.join(os.path.dirname(__file__), '..', '..')) -from configs.data_info import COMPLETE_DATA_INFO_24 -from stacked_hourglass.utils.imutils import load_image, draw_labelmap, draw_multiple_labelmaps -from stacked_hourglass.utils.misc import to_torch -from stacked_hourglass.utils.transforms import shufflelr, crop, color_normalize, fliplr, transform -import stacked_hourglass.datasets.utils_stanext as utils_stanext -from stacked_hourglass.utils.visualization import save_input_image_with_keypoints -from configs.dog_breeds.dog_breed_class import COMPLETE_ABBREV_DICT, COMPLETE_SUMMARY_BREEDS, SIM_MATRIX_RAW, SIM_ABBREV_INDICES -from configs.dataset_path_configs import STANEXT_RELATED_DATA_ROOT_DIR - - -class StanExt(data.Dataset): - DATA_INFO = COMPLETE_DATA_INFO_24 - - # Suggested joints to use for keypoint reprojection error calculations - ACC_JOINTS = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 16] - - def __init__(self, image_path=None, is_train=True, inp_res=256, out_res=64, sigma=1, - scale_factor=0.25, rot_factor=30, label_type='Gaussian', - do_augment='default', shorten_dataset_to=None, dataset_mode='keyp_only', V12=None, val_opt='test'): - self.V12 = V12 - self.is_train = is_train # training set or test set - if do_augment == 'yes': - self.do_augment = True - elif do_augment == 'no': - self.do_augment = False - elif do_augment=='default': - if self.is_train: - self.do_augment = True - else: - self.do_augment = False - else: - raise ValueError - self.inp_res = inp_res - self.out_res = out_res - self.sigma = sigma - self.scale_factor = scale_factor - self.rot_factor = rot_factor - self.label_type = label_type - self.dataset_mode = dataset_mode - if self.dataset_mode=='complete' or self.dataset_mode=='keyp_and_seg' or self.dataset_mode=='keyp_and_seg_and_partseg': - self.calc_seg = True - else: - self.calc_seg = False - self.val_opt = val_opt - - # create train/val split - self.img_folder = utils_stanext.get_img_dir(V12=self.V12) - self.train_dict, init_test_dict, init_val_dict = utils_stanext.load_stanext_json_as_dict(split_train_test=True, V12=self.V12) - self.train_name_list = list(self.train_dict.keys()) # 7004 - if self.val_opt == 'test': - self.test_dict = init_test_dict - self.test_name_list = list(self.test_dict.keys()) - elif self.val_opt == 'val': - self.test_dict = init_val_dict - self.test_name_list = list(self.test_dict.keys()) - else: - raise NotImplementedError - - # stanext breed dict (contains for each name a stanext specific index) - breed_json_path = os.path.join(STANEXT_RELATED_DATA_ROOT_DIR, 'StanExt_breed_dict_v2.json') - self.breed_dict = self.get_breed_dict(breed_json_path, create_new_breed_json=False) - self.train_name_list = 
sorted(self.train_name_list) - self.test_name_list = sorted(self.test_name_list) - random.seed(4) - random.shuffle(self.train_name_list) - random.shuffle(self.test_name_list) - if shorten_dataset_to is not None: - # sometimes it is useful to have a smaller set (validation speed, debugging) - self.train_name_list = self.train_name_list[0 : min(len(self.train_name_list), shorten_dataset_to)] - self.test_name_list = self.test_name_list[0 : min(len(self.test_name_list), shorten_dataset_to)] - # special case for debugging: 12 similar images - if shorten_dataset_to == 12: - my_sample = self.test_name_list[2] - for ind in range(0, 12): - self.test_name_list[ind] = my_sample - print('len(dataset): ' + str(self.__len__())) - - # add results for eyes, whithers and throat as obtained through anipose -> they are used - # as pseudo ground truth at training time. - self.path_anipose_out_root = os.path.join(STANEXT_RELATED_DATA_ROOT_DIR, 'animalpose_hg8_v0_results_on_StanExt') - - - def get_data_sampler_info(self): - # for custom data sampler - if self.is_train: - name_list = self.train_name_list - else: - name_list = self.test_name_list - info_dict = {'name_list': name_list, - 'stanext_breed_dict': self.breed_dict, - 'breeds_abbrev_dict': COMPLETE_ABBREV_DICT, - 'breeds_summary': COMPLETE_SUMMARY_BREEDS, - 'breeds_sim_martix_raw': SIM_MATRIX_RAW, - 'breeds_sim_abbrev_inds': SIM_ABBREV_INDICES - } - return info_dict - - - def get_breed_dict(self, breed_json_path, create_new_breed_json=False): - if create_new_breed_json: - breed_dict = {} - breed_index = 0 - for img_name in self.train_name_list: - folder_name = img_name.split('/')[0] - breed_name = folder_name.split(folder_name.split('-')[0] + '-')[1] - if not (folder_name in breed_dict): - breed_dict[folder_name] = { - 'breed_name': breed_name, - 'index': breed_index} - breed_index += 1 - with open(breed_json_path, 'w', encoding='utf-8') as f: json.dump(breed_dict, f, ensure_ascii=False, indent=4) - else: - with open(breed_json_path) as json_file: breed_dict = json.load(json_file) - return breed_dict - - - def __getitem__(self, index): - - if self.is_train: - name = self.train_name_list[index] - data = self.train_dict[name] - else: - name = self.test_name_list[index] - data = self.test_dict[name] - - sf = self.scale_factor - rf = self.rot_factor - - img_path = os.path.join(self.img_folder, data['img_path']) - try: - anipose_res_path = os.path.join(self.path_anipose_out_root, data['img_path'].replace('.jpg', '.json')) - with open(anipose_res_path) as f: anipose_data = json.load(f) - anipose_thr = 0.2 - anipose_joints_0to24 = np.asarray(anipose_data['anipose_joints_0to24']).reshape((-1, 3)) - anipose_joints_0to24_scores = anipose_joints_0to24[:, 2] - # anipose_joints_0to24_scores[anipose_joints_0to24_scores>anipose_thr] = 1.0 - anipose_joints_0to24_scores[anipose_joints_0to24_scores bbox_max = 256 - # bbox_s = bbox_diag / 200. # diagonal of the boundingbox will be 200 - bbox_s = bbox_max / 200. * 256. / 200. 
# maximum side of the bbox will be 200 - c = torch.Tensor(bbox_c) - s = bbox_s - - # For single-person pose estimation with a centered/scaled figure - nparts = pts.size(0) - img = load_image(img_path) # CxHxW - - # segmentation map (we reshape it to 3xHxW, such that we can do the - # same transformations as with the image) - if self.calc_seg: - seg = torch.Tensor(utils_stanext.get_seg_from_entry(data)[None, :, :]) - seg = torch.cat(3*[seg]) - - r = 0 - do_flip = False - if self.do_augment: - s = s*torch.randn(1).mul_(sf).add_(1).clamp(1-sf, 1+sf)[0] - r = torch.randn(1).mul_(rf).clamp(-2*rf, 2*rf)[0] if random.random() <= 0.6 else 0 - # Flip - if random.random() <= 0.5: - do_flip = True - img = fliplr(img) - if self.calc_seg: - seg = fliplr(seg) - pts = shufflelr(pts, img.size(2), self.DATA_INFO.hflip_indices) - c[0] = img.size(2) - c[0] - # Color - img[0, :, :].mul_(random.uniform(0.8, 1.2)).clamp_(0, 1) - img[1, :, :].mul_(random.uniform(0.8, 1.2)).clamp_(0, 1) - img[2, :, :].mul_(random.uniform(0.8, 1.2)).clamp_(0, 1) - - # Prepare image and groundtruth map - inp = crop(img, c, s, [self.inp_res, self.inp_res], rot=r) - img_border_mask = torch.all(inp > 1.0/256, dim = 0).unsqueeze(0).float() # 1 is foreground - inp = color_normalize(inp, self.DATA_INFO.rgb_mean, self.DATA_INFO.rgb_stddev) - if self.calc_seg: - seg = crop(seg, c, s, [self.inp_res, self.inp_res], rot=r) - - # Generate ground truth - tpts = pts.clone() - target_weight = tpts[:, 2].clone().view(nparts, 1) - - target = torch.zeros(nparts, self.out_res, self.out_res) - for i in range(nparts): - # if tpts[i, 2] > 0: # This is evil!! - if tpts[i, 1] > 0: - tpts[i, 0:2] = to_torch(transform(tpts[i, 0:2]+1, c, s, [self.out_res, self.out_res], rot=r, as_int=False)) - target[i], vis = draw_labelmap(target[i], tpts[i]-1, self.sigma, type=self.label_type) - target_weight[i, 0] *= vis - # NEW: - '''target_new, vis_new = draw_multiple_labelmaps((self.out_res, self.out_res), tpts[:, :2]-1, self.sigma, type=self.label_type) - target_weight_new = tpts[:, 2].clone().view(nparts, 1) * vis_new - target_new[(target_weight_new==0).reshape((-1)), :, :] = 0''' - - # --- Meta info - this_breed = self.breed_dict[name.split('/')[0]] # 120 - # add information about location within breed similarity matrix - folder_name = name.split('/')[0] - breed_name = folder_name.split(folder_name.split('-')[0] + '-')[1] - abbrev = COMPLETE_ABBREV_DICT[breed_name] - try: - sim_breed_index = COMPLETE_SUMMARY_BREEDS[abbrev]._ind_in_xlsx_matrix - except: # some breeds are not in the xlsx file - sim_breed_index = -1 - meta = {'index' : index, 'center' : c, 'scale' : s, - 'pts' : pts, 'tpts' : tpts, 'target_weight': target_weight, - 'breed_index': this_breed['index'], 'sim_breed_index': sim_breed_index, - 'ind_dataset': 0} # ind_dataset=0 for stanext or stanexteasy or stanext 2 - meta2 = {'index' : index, 'center' : c, 'scale' : s, - 'pts' : pts, 'tpts' : tpts, 'target_weight': target_weight, - 'ind_dataset': 3} - - # return different things depending on dataset_mode - if self.dataset_mode=='keyp_only': - # save_input_image_with_keypoints(inp, meta['tpts'], out_path='./test_input_stanext.png', ratio_in_out=self.inp_res/self.out_res) - return inp, target, meta - elif self.dataset_mode=='keyp_and_seg': - meta['silh'] = seg[0, :, :] - meta['name'] = name - return inp, target, meta - elif self.dataset_mode=='keyp_and_seg_and_partseg': - # partseg is fake! 
this does only exist such that this dataset can be combined with an other datset that has part segmentations - meta2['silh'] = seg[0, :, :] - meta2['name'] = name - fake_body_part_matrix = torch.ones((3, 256, 256)).long() * (-1) - meta2['body_part_matrix'] = fake_body_part_matrix - return inp, target, meta2 - elif self.dataset_mode=='complete': - target_dict = meta - target_dict['silh'] = seg[0, :, :] - # NEW for silhouette loss - target_dict['img_border_mask'] = img_border_mask - target_dict['has_seg'] = True - if target_dict['silh'].sum() < 1: - if ((not self.is_train) and self.val_opt == 'test'): - raise ValueError - elif self.is_train: - print('had to replace training image') - replacement_index = max(0, index - 1) - inp, target_dict = self.__getitem__(replacement_index) - else: - # There seem to be a few validation images without segmentation - # which would lead to nan in iou calculation - replacement_index = max(0, index - 1) - inp, target_dict = self.__getitem__(replacement_index) - return inp, target_dict - else: - print('sampling error') - import pdb; pdb.set_trace() - raise ValueError - - - def __len__(self): - if self.is_train: - return len(self.train_name_list) - else: - return len(self.test_name_list) - - diff --git a/spaces/ryoung41/SuperSimple2LinerText2Speech/app.py b/spaces/ryoung41/SuperSimple2LinerText2Speech/app.py deleted file mode 100644 index 624711103fff0eb591bc05f07ae20c47fbe03cd2..0000000000000000000000000000000000000000 --- a/spaces/ryoung41/SuperSimple2LinerText2Speech/app.py +++ /dev/null @@ -1,3 +0,0 @@ -import gradio as gr - -gr.Interface.load("models/facebook/fastspeech2-en-ljspeech").launch() \ No newline at end of file diff --git a/spaces/sabman/map-diffuser/app.py b/spaces/sabman/map-diffuser/app.py deleted file mode 100644 index b22653ed38415b3f38bad7145fdda6e81aa3b744..0000000000000000000000000000000000000000 --- a/spaces/sabman/map-diffuser/app.py +++ /dev/null @@ -1,62 +0,0 @@ -import gradio as gr -from inference_code import generate_images - - -def generate_image_predictions(prompt): - images = generate_images(prompt) - return images - - -demo = gr.Blocks() - -with demo: - gr.Markdown( - """ - # 🌍 Map Diffuser - - Below we present a Stable Diffusion text to image model that will generate map tiles based on a text prompt. We trained it on just 10k images and prompts based on openstreetmap. Images were from @mapbox satellite images + @StamenDesign water color and toner images + @carto's Voyager style. The region trained was limited to central Europe and more precisely Prague and Amsterdam. This was part of the Hackathon run by Hugging Face & Google to use JAX API with Google Gen4 TPUs that are especially designed to train massive models. - - - The model tuning led to some surprising results. For example we didn't have any prompts with "ships" or "desert" yet when passed that it tried to add ships to the satellite images 🤷‍♂️... - """ - ) - input = gr.components.Textbox(label="Enter a text prompt here") - output = gr.components.Image(label="Output Image") - # button to submit the prompt - button = gr.components.Button(label="Generate") - # when the button is clicked, call the generate_image_predictions function - # and pass in the prompt as an argument - button.click(generate_image_predictions, inputs=input, outputs=output) - - gr.Markdown( - """ - ### Generates images from a given text prompt. 
The prompts are in the format: - - - `{style} map of {city} with {features}` or - - `satellite image of {city} with {features}` or - - `satellite image with {features}` or - - `satellite image of {city} with {features} and no {features}` - and so on... - - ### So for example: - - - "Satellite image of amsterdam with industrial area and highways" - - "Watercolor style map of Amsterdam with residential area and highways" - - "Toner style map of Amsterdam with residential area and highways" - - "Satellite image with forests and residential, no water" - - - - Examples table: - - | Prompt | Output | - | --- | --- | - | Satellite image of industrial area with ships | | - | Watercolor style map of Amsterdam with residential area and highways | | - | Toner style map of Amsterdam with residential area and highways | | - | Satellite image with forests and residential, no water | | - """ - ) - - -demo.launch() diff --git a/spaces/sander-wood/clamp_zero_shot_music_classification/utils.py b/spaces/sander-wood/clamp_zero_shot_music_classification/utils.py deleted file mode 100644 index cc13fed025c2557512ade4e903d300f586e439c2..0000000000000000000000000000000000000000 --- a/spaces/sander-wood/clamp_zero_shot_music_classification/utils.py +++ /dev/null @@ -1,357 +0,0 @@ -import re -import os -import torch -import requests -from tqdm import tqdm -from unidecode import unidecode -from transformers import AutoModel, AutoConfig, BertModel, PreTrainedModel - -# Constants for patch length and number of features in a patch -PATCH_LENGTH = 64 -PATCH_FEATURES = 98 - -class MusicPatchilizer: - """ - Class for converting music data to patches and vice-versa. - - Attributes: - delimiters (tuple): A tuple of strings containing the delimiters used for splitting bars. - regexPattern (str): A regular expression pattern for splitting bars. - pad_id (int): The id of the padding token. - mask_id (int): The id of the mask token. - eos_id (int): The id of the end-of-sequence token. - - Methods: - split_bars(body): Splits a body of music into individual bars using the delimiters specified in `self.delimiters`. - bar2patch(bar, patch_length): Encodes a single bar as a patch of specified length. - patch2bar(patch): Converts a patch to a bar string. - encode(music, music_length, patch_length=PATCH_LENGTH, add_eos_patch=False): Encodes the input music string as a list of patches. - decode(patches): Decodes a sequence of patches into a music score. - """ - def __init__(self): - # Delimiters used for splitting bars - self.delimiters = "|:", "::", ":|", "[|", "||", "|]", "|" - # Regular expression pattern for splitting bars - self.regexPattern = '('+'|'.join(map(re.escape, self.delimiters))+')' - # Padding, mask, and end-of-sequence token ids - self.pad_id = 0 - self.mask_id = 96 - self.eos_id = 97 - - def split_bars(self, body): - """ - Splits a body of music into individual bars using the delimiters specified in `self.delimiters`. - - Args: - body (str): A string containing the body of music to be split into bars. - - Returns: - list: A list of strings containing the individual bars. - """ - body = "".join(body) - bars = re.split(self.regexPattern, body) - while("" in bars): - bars.remove("") - if bars[0] in self.delimiters: - bars[1] = bars[0]+bars[1] - bars = bars[1:] - bars = [bars[i*2]+bars[i*2+1] for i in range(int(len(bars)/2))] - - return bars - - def bar2patch(self, bar, patch_length): - """ - Encodes a single bar as a patch of specified length. - - Args: - bar (str): A string containing the bar to be encoded. 
- patch_length (int): An integer indicating the length of the patch to be returned. - - Returns: - list: A list of integer-encoded musical tokens. - """ - patch = [self.pad_id] * patch_length - - for i in range(min(patch_length, len(bar))): - chr = bar[i] - idx = ord(chr) - if idx>=32 and idx<127: - patch[i] = idx-31 - - if i+10 and idx<96: - bar += chr(idx+31) - else: - break - - return bar - - def encode(self, music, music_length, patch_length=PATCH_LENGTH, add_eos_patch=False): - """ - Encodes the input music string as a list of patches. - - Args: - music (str): A string containing the music to be encoded. - music_length (int): An integer indicating the maximum number of patches to be returned. - patch_length (int): An integer indicating the length of each patch. - add_eos_patch (bool): A boolean indicating whether to add an extra patch consisting of all EOS tokens at the end of the encoded music. - - Returns: - list: A list of integer-encoded patches. - """ - # Convert to ASCII and split into lines - music = unidecode(music) - lines = music.split('\n') - try: - lines.remove('') - except: - pass - - body = "" - patches = [] - - # Iterate over lines, splitting bars and encoding each one as a patch - for line in lines: - # check if the line is a music score line or not - if len(line)>1 and ((line[0].isalpha() and line[1] == ':') or line.startswith('%%score')): - # if the current line is a music score line, encode the previous body as patches - if body!="": - bars = self.split_bars(body) - - for bar in bars: - # encode each bar in the body as a patch and append to the patches list - patch = self.bar2patch(bar, patch_length) - patches.append(patch) - # reset the body variable - body = "" - # encode the current line as a patch and append to the patches list - patch = self.bar2patch(line, patch_length) - patches.append(patch) - else: - # if the line is not a music score line, append to the body variable - body += line - - if body!="": - bars = self.split_bars(body) - - for bar in bars: - # encode each bar in the body as a patch and append to the patches list - patch = self.bar2patch(bar, patch_length) - patches.append(patch) - - # add an extra patch consisting of all EOS tokens, if required - if add_eos_patch: - eos_patch = [self.eos_id] * patch_length - patches = patches + [eos_patch] - - return patches[:music_length] - - def decode(self, patches): - """ - Decodes a sequence of patches into a music score. - - Args: - patches (list): A list of integer-encoded patches. - - Returns: - str: A string containing the decoded music score. - """ - music = "" - for patch in patches: - music += self.patch2bar(patch)+'\n' - - return music - - -class MusicEncoder(PreTrainedModel): - """ - MusicEncoder model for encoding music patches into a sequence of hidden states. - - Args: - config (:obj:`BertConfig`): Model configuration class with all the parameters of the model. - Initializing with a config file does not load the weights associated with the model, only the configuration. - Check out the :meth:`~transformers.PreTrainedModel.from_pretrained` method to load the model weights. - - Attributes: - patch_embedding (:obj:`torch.nn.Linear`): A linear layer to convert the one-hot encoded patches to the hidden size of the model. - enc (:obj:`BertModel`): The BERT model used to encode the patches. 
- """ - def __init__(self, config): - super(MusicEncoder, self).__init__(config) - self.patch_embedding = torch.nn.Linear(PATCH_LENGTH*PATCH_FEATURES, config.hidden_size) - torch.nn.init.normal_(self.patch_embedding.weight, std=0.02) - self.enc = BertModel(config=config) - - def forward(self, input_musics, music_masks): - """ - Args: - input_musics (:obj:`torch.LongTensor` of shape :obj:`(batch_size, music_length, patch_length)`): - Tensor containing the integer-encoded music patches. - music_masks (:obj:`torch.LongTensor` of shape :obj:`(batch_size, music_length)`): - Tensor containing the attention masks for the music patches. - - Returns: - :obj:`tuple(torch.FloatTensor)` comprising various elements depending on the configuration (:class:`~transformers.BertConfig`) and inputs: - last_hidden_state (:obj:`torch.FloatTensor` of shape :obj:`(batch_size, music_length, hidden_size)`): - Sequence of hidden-states at the output of the last layer of the model. - """ - # One-hot encode the input music patches - input_musics = torch.nn.functional.one_hot(input_musics, num_classes=PATCH_FEATURES) - - # Reshape the input music patches to feed into the linear layer - input_musics = input_musics.reshape(len(input_musics), -1, PATCH_LENGTH*PATCH_FEATURES).type(torch.FloatTensor) - - # Apply the linear layer to convert the one-hot encoded patches to hidden features - input_musics = self.patch_embedding(input_musics.to(self.device)) - - # Apply the BERT model to encode the music data - output = self.enc(inputs_embeds=input_musics, attention_mask=music_masks.to(self.device)) - - return output - - -class CLaMP(PreTrainedModel): - """ - CLaMP model for joint text and music encoding. - - Args: - config (:obj:`BertConfig`): Model configuration class with all the parameters of the model. - Initializing with a config file does not load the weights associated with the model, only the configuration. - Check out the :meth:`~transformers.PreTrainedModel.from_pretrained` method to load the model weights. - text_model_name (:obj:`str`, `optional`, defaults to :obj:`"distilroberta-base"`): - The name of the pre-trained text model to be used for text encoding. - - Attributes: - text_enc (:obj:`AutoModel`): The pre-trained text model used for text encoding. - text_proj (:obj:`torch.nn.Linear`): A linear layer to project the text encoding to the hidden size of the model. - music_enc (:obj:`MusicEncoder`): The music encoder model used for music encoding. - music_proj (:obj:`torch.nn.Linear`): A linear layer to project the music encoding to the hidden size of the model. - """ - def __init__(self, config, text_model_name="distilroberta-base"): - super(CLaMP, self).__init__(config) - self.text_enc = AutoModel.from_pretrained(text_model_name) - self.text_proj = torch.nn.Linear(config.hidden_size, config.hidden_size) - torch.nn.init.normal_(self.text_proj.weight, std=0.02) - - self.music_enc = MusicEncoder(config=config) - self.music_proj = torch.nn.Linear(config.hidden_size, config.hidden_size) - torch.nn.init.normal_(self.music_proj.weight, std=0.02) - - def forward(self, input_texts, text_masks, input_musics, music_masks): - """ - Args: - input_texts (:obj:`torch.LongTensor` of shape :obj:`(batch_size, text_length)`): - Tensor containing the integer-encoded text. - text_masks (:obj:`torch.LongTensor` of shape :obj:`(batch_size, text_length)`): - Tensor containing the attention masks for the text. 
- input_musics (:obj:`torch.LongTensor` of shape :obj:`(batch_size, music_length, patch_length)`): - Tensor containing the integer-encoded music patches. - music_masks (:obj:`torch.LongTensor` of shape :obj:`(batch_size, music_length)`): - Tensor containing the attention masks for the music patches. - - Returns: - :obj:`tuple(torch.FloatTensor)` comprising various elements depending on the configuration (:class:`~transformers.BertConfig`) and inputs: - music_features (:obj:`torch.FloatTensor` of shape :obj:`(batch_size, hidden_size)`): - The music features extracted from the music encoder. - text_features (:obj:`torch.FloatTensor` of shape :obj:`(batch_size, hidden_size)`): - The text features extracted from the text encoder. - """ - # Encode input texts - text_features = self.text_enc(input_texts.to(self.device), attention_mask=text_masks.to(self.device))['last_hidden_state'] - text_features = self.avg_pooling(text_features, text_masks) - text_features = self.text_proj(text_features) - - # Encode input musics - music_features = self.music_enc(input_musics, music_masks)['last_hidden_state'] - music_features = self.avg_pooling(music_features, music_masks) - music_features = self.music_proj(music_features) - - return music_features, text_features - - def avg_pooling(self, input_features, input_masks): - """ - Applies average pooling to the input features. - - Args: - input_features (:obj:`torch.FloatTensor` of shape :obj:`(batch_size, seq_length, hidden_size)`): - Tensor containing the input features. - input_masks (:obj:`torch.LongTensor` of shape :obj:`(batch_size, seq_length)`): - Tensor containing the attention masks for the input features. - - Returns: - :obj:`torch.FloatTensor` of shape :obj:`(batch_size, hidden_size)`: - The pooled features. - """ - input_masks = input_masks.unsqueeze(-1).to(self.device) - input_features = input_features * input_masks - avg_pool = input_features.sum(dim=1) / input_masks.sum(dim=1) - - return avg_pool - - @classmethod - def from_pretrained(cls, pretrained_model_name_or_path, *model_args, **kwargs): - """ - Instantiate a CLaMP model from a pre-trained model configuration. - - Args: - pretrained_model_name_or_path (:obj:`str`): - This can be either: - "clamp-small-512" for the small CLaMP model with 512 max sequence length. - "clamp-small-1024" for the small CLaMP model with 1024 max sequence length. - - Returns: - :class:`~transformers.CLaMP`: The CLaMP model. 
- """ - model_dir = pretrained_model_name_or_path - - # If the pre-trained model is not found locally, download it from Hugging Face - if not os.path.exists(model_dir): - # Create the model directory and download the config and pytorch model files - os.makedirs(model_dir) - config_url = f"https://huggingface.co/{pretrained_model_name_or_path}/raw/main/config.json" - model_url = f"https://huggingface.co/{pretrained_model_name_or_path}/resolve/main/pytorch_model.bin" - chunk_size = 1024 * 1024 # 1MB - - # download config file - with requests.get(config_url, stream=True) as r: - r.raise_for_status() - total_size = int(r.headers.get('content-length', 0)) - with open(model_dir+"/config.json", 'wb') as f: - with tqdm(total=total_size, unit='B', unit_scale=True, desc='Downloading config') as pbar: - for chunk in r.iter_content(chunk_size=chunk_size): - f.write(chunk) - pbar.update(len(chunk)) - - # download pytorch model file - with requests.get(model_url, stream=True) as r: - r.raise_for_status() - total_size = int(r.headers.get('content-length', 0)) - with open(model_dir+"/pytorch_model.bin", 'wb') as f: - with tqdm(total=total_size, unit='B', unit_scale=True, desc='Downloading model') as pbar: - for chunk in r.iter_content(chunk_size=chunk_size): - f.write(chunk) - pbar.update(len(chunk)) - - # Load the model weights and configuration - config = AutoConfig.from_pretrained(pretrained_model_name_or_path, *model_args, **kwargs) - model = cls(config) - model.load_state_dict(torch.load(pretrained_model_name_or_path+str('/pytorch_model.bin'))) - - return model \ No newline at end of file diff --git a/spaces/sasha/BiasDetection/README.md b/spaces/sasha/BiasDetection/README.md deleted file mode 100644 index 0f6e31d43e08ae5839c7f3b5d973013e59a8eb36..0000000000000000000000000000000000000000 --- a/spaces/sasha/BiasDetection/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: BiasDetection -emoji: 🐠 -colorFrom: green -colorTo: gray -sdk: streamlit -sdk_version: 1.21.0 -app_file: app.py -pinned: false -license: cc-by-nc-sa-4.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/sayakpaul/convert-kerascv-sd-diffusers/app.py b/spaces/sayakpaul/convert-kerascv-sd-diffusers/app.py deleted file mode 100644 index 84415c5d96ef3707ba0cb78d4f8281194510b47a..0000000000000000000000000000000000000000 --- a/spaces/sayakpaul/convert-kerascv-sd-diffusers/app.py +++ /dev/null @@ -1,78 +0,0 @@ -import gradio as gr - -from convert import run_conversion -from hub_utils import push_to_hub, save_model_card - -PRETRAINED_CKPT = "CompVis/stable-diffusion-v1-4" -DESCRIPTION = """ -This Space lets you convert KerasCV Stable Diffusion weights to a format compatible with [Diffusers](https://github.com/huggingface/diffusers) 🧨. This allows users to fine-tune using KerasCV and use the fine-tuned weights in Diffusers taking advantage of its nifty features (like [schedulers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/schedulers), [fast attention](https://huggingface.co/docs/diffusers/optimization/fp16), etc.). Specifically, the Keras weights are first converted to PyTorch and then they are wrapped into a [`StableDiffusionPipeline`](https://huggingface.co/docs/diffusers/api/pipelines/stable_diffusion/overview). This pipeline is then pushed to the Hugging Face Hub given you have provided `your_hf_token`. - -## Notes (important) - -* The Space downloads a couple of pre-trained weights and runs a dummy inference. 
Depending, on the machine type, the enture process can take anywhere between 2 - 5 minutes. -* Only Stable Diffusion (v1) is supported as of now. In particular this checkpoint: [`"CompVis/stable-diffusion-v1-4"`](https://huggingface.co/CompVis/stable-diffusion-v1-4). -* [This Colab Notebook](https://colab.research.google.com/drive/1RYY077IQbAJldg8FkK8HSEpNILKHEwLb?usp=sharing) was used to develop the conversion utilities initially. -* Providing both `text_encoder_weights` and `unet_weights` is dependent on the fine-tuning task. Here are some _typical_ scenarios: - - * [DreamBooth](https://dreambooth.github.io/): Both text encoder and UNet - * [Textual Inversion](https://textual-inversion.github.io/): Text encoder - * [Traditional text2image fine-tuning](https://github.com/huggingface/diffusers/blob/main/examples/text_to_image): UNet - - **In case none of the `text_encoder_weights` and `unet_weights` is provided, nothing will be done.** -* For Textual Inversion, you MUST provide a valid `placeholder_token` i.e., the text concept used for conducting Textual Inversion. -* When providing the weights' links, ensure they're directly downloadable. Internally, the Space uses [`tf.keras.utils.get_file()`](https://www.tensorflow.org/api_docs/python/tf/keras/utils/get_file) to retrieve the weights locally. -* If you don't provide `your_hf_token` the converted pipeline won't be pushed. - -Check [here](https://github.com/huggingface/diffusers/blob/31be42209ddfdb69d9640a777b32e9b5c6259bf0/examples/dreambooth/train_dreambooth_lora.py#L975) for an example on how you can change the scheduler of an already initialized `StableDiffusionPipeline`. -""" - - -def run(hf_token, text_encoder_weights, unet_weights, placeholder_token, repo_prefix): - if text_encoder_weights == "": - text_encoder_weights = None - if unet_weights == "": - unet_weights = None - - if text_encoder_weights is None and unet_weights is None: - return "❌ No fine-tuned weights provided, nothing to do." - - if placeholder_token == "": - placeholder_token = None - if placeholder_token is not None and text_encoder_weights is None: - return "❌ Placeholder token provided but no text encoder weights were provided. Cannot proceed." 
- - pipeline = run_conversion(text_encoder_weights, unet_weights, placeholder_token) - output_path = "kerascv_sd_diffusers_pipeline" - pipeline.save_pretrained(output_path) - - weight_paths = [] - if text_encoder_weights is not None: - weight_paths.append(text_encoder_weights) - if unet_weights is not None: - weight_paths.append(unet_weights) - save_model_card( - base_model=PRETRAINED_CKPT, - repo_folder=output_path, - weight_paths=weight_paths, - placeholder_token=placeholder_token, - ) - push_str = push_to_hub(hf_token, output_path, repo_prefix) - return push_str - - -demo = gr.Interface( - title="KerasCV Stable Diffusion to Diffusers Stable Diffusion Pipelines 🧨🤗", - description=DESCRIPTION, - allow_flagging="never", - inputs=[ - gr.Text(max_lines=1, label="your_hf_token"), - gr.Text(max_lines=1, label="text_encoder_weights"), - gr.Text(max_lines=1, label="unet_weights"), - gr.Text(max_lines=1, label="placeholder_token"), - gr.Text(max_lines=1, label="output_repo_prefix"), - ], - outputs=[gr.Markdown(label="output")], - fn=run, -) - -demo.launch() diff --git a/spaces/scedlatioru/img-to-music/example/Grass Valley EDIUS Pro 8.5.3.3573 Win [CRACKED].md b/spaces/scedlatioru/img-to-music/example/Grass Valley EDIUS Pro 8.5.3.3573 Win [CRACKED].md deleted file mode 100644 index e46e219ef97f6cf3eb07d8b09a78d7af2e796186..0000000000000000000000000000000000000000 --- a/spaces/scedlatioru/img-to-music/example/Grass Valley EDIUS Pro 8.5.3.3573 Win [CRACKED].md +++ /dev/null @@ -1,78 +0,0 @@ -

        Grass Valley EDIUS Pro 8.5.3.3573 Win


        Download Ziphttps://gohhs.com/2uEyVN



        - -Grass Valley Edius Pro 8.5.3.3573 Windows 7 x64 With Crack System Requirements: - -Windows 7 / 8 / 8.1 / 8.2 / 10 x64. - -Intel Core 2 Duo, 3 GHz or better. - -2 GB RAM or higher. - -Windows Installation medium. - -Internet Connection - -How To Install Grass Valley Edius Pro 8.5.3.3573 Crack: - -Start downloading the latest crack setup from the link given below - -Now install the crack by running the patch from setup - -Wait for some time and done! - -Grass Valley Edius Pro 8.5.3.3573 Windows 7 Crack Plus Keygen Free Download:Petr Ivanov - -Petr Petrovich Ivanov (; born 12 December 1993) is a Russian professional football player. He plays for FC Zenit-2 Saint Petersburg. - -Club career - -He made his debut in the Russian Premier League on 26 March 2012 in a game against FC Sibir Novosibirsk. - -Career statistics - -Club - -Notes - -References - -External links - - - - Career summary at sportbox.ru - -Category:1993 births - -Category:People from Saint Petersburg - -Category:Living people - -Category:Russian footballers - -Category:Russia youth international footballers - -Category:Russia under-21 international footballers - -Category:Russia-2 international footballers - -Category:Russian Premier League players - -Category:FC Zenit Saint Petersburg players - -Category:FC Tosno players - -Category:Association football midfielders - -Category:FC Lokomotiv Moscow players - -Category:FC Novokuznetsk players1. Field of the Invention - -The invention concerns a device for the temporary fixation of an implant for the treatment of rheumatic or degenerative disorders of the shoulder. - -2. Description of the Prior Art - -The shoulder is a joint which rotates around three axes of symmetry: scapuloid, humeral and coracoid. It is made up of a glenoid surface that receives the head of the humerus and of a glenoid fossa formed by a dome-shaped scapula, the latter being able to articulate with respect to the glenoid surface of the head of the humerus. The latter articulates with respect to the collar bone or clavicle in the superior part and with respect to the acromion in the 4fefd39f24
        -
        -
        -

        diff --git a/spaces/segestic/HealthBlock/README.md b/spaces/segestic/HealthBlock/README.md deleted file mode 100644 index 676cbd308ae5b9aa9582d487b53816b6745e2c0b..0000000000000000000000000000000000000000 --- a/spaces/segestic/HealthBlock/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: HealthBlock -emoji: 🐨 -colorFrom: green -colorTo: blue -sdk: streamlit -sdk_version: 1.17.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/segments-tobias/conex/espnet2/layers/__init__.py b/spaces/segments-tobias/conex/espnet2/layers/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/shi-labs/OneFormer/oneformer/data/build.py b/spaces/shi-labs/OneFormer/oneformer/data/build.py deleted file mode 100644 index fb775313605cf24ed2385681fa2c43d5068b5a4a..0000000000000000000000000000000000000000 --- a/spaces/shi-labs/OneFormer/oneformer/data/build.py +++ /dev/null @@ -1,117 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -from typing import Any, Callable, Dict, List, Optional, Union -import torch.utils.data as torchdata - -from detectron2.config import configurable - - -from detectron2.data.common import DatasetFromList, MapDataset -from detectron2.data.dataset_mapper import DatasetMapper -from detectron2.data.samplers import ( - InferenceSampler, -) -from detectron2.data.build import ( - get_detection_dataset_dicts, - trivial_batch_collator -) -""" -This file contains the default logic to build a dataloader for training or testing. -""" - -__all__ = [ - "build_detection_test_loader", -] - - -def _test_loader_from_config(cfg, dataset_name, mapper=None): - """ - Uses the given `dataset_name` argument (instead of the names in cfg), because the - standard practice is to evaluate each test set individually (not combining them). - """ - if isinstance(dataset_name, str): - dataset_name = [dataset_name] - - dataset = get_detection_dataset_dicts( - dataset_name, - filter_empty=False, - proposal_files=[ - cfg.DATASETS.PROPOSAL_FILES_TEST[list(cfg.DATASETS.TEST).index(x)] for x in dataset_name - ] - if cfg.MODEL.LOAD_PROPOSALS - else None, - ) - if mapper is None: - mapper = DatasetMapper(cfg, False) - return { - "dataset": dataset, - "mapper": mapper, - "num_workers": cfg.DATALOADER.NUM_WORKERS, - "sampler": InferenceSampler(len(dataset)) - if not isinstance(dataset, torchdata.IterableDataset) - else None, - } - - -@configurable(from_config=_test_loader_from_config) -def build_detection_test_loader( - dataset: Union[List[Any], torchdata.Dataset], - *, - mapper: Callable[[Dict[str, Any]], Any], - sampler: Optional[torchdata.Sampler] = None, - batch_size: int = 1, - num_workers: int = 0, - collate_fn: Optional[Callable[[List[Any]], Any]] = None, -) -> torchdata.DataLoader: - """ - Similar to `build_detection_train_loader`, with default batch size = 1, - and sampler = :class:`InferenceSampler`. This sampler coordinates all workers - to produce the exact set of all samples. - - Args: - dataset: a list of dataset dicts, - or a pytorch dataset (either map-style or iterable). They can be obtained - by using :func:`DatasetCatalog.get` or :func:`get_detection_dataset_dicts`. - mapper: a callable which takes a sample (dict) from dataset - and returns the format to be consumed by the model. - When using cfg, the default choice is ``DatasetMapper(cfg, is_train=False)``. 
- sampler: a sampler that produces - indices to be applied on ``dataset``. Default to :class:`InferenceSampler`, - which splits the dataset across all workers. Sampler must be None - if `dataset` is iterable. - batch_size: the batch size of the data loader to be created. - Default to 1 image per worker since this is the standard when reporting - inference time in papers. - num_workers: number of parallel data loading workers - collate_fn: same as the argument of `torch.utils.data.DataLoader`. - Defaults to do no collation and return a list of data. - - Returns: - DataLoader: a torch DataLoader, that loads the given detection - dataset, with test-time transformation and batching. - - Examples: - :: - data_loader = build_detection_test_loader( - DatasetRegistry.get("my_test"), - mapper=DatasetMapper(...)) - - # or, instantiate with a CfgNode: - data_loader = build_detection_test_loader(cfg, "my_test") - """ - if isinstance(dataset, list): - dataset = DatasetFromList(dataset, copy=False) - if mapper is not None: - dataset = MapDataset(dataset, mapper) - if isinstance(dataset, torchdata.IterableDataset): - assert sampler is None, "sampler must be None if dataset is IterableDataset" - else: - if sampler is None: - sampler = InferenceSampler(len(dataset)) - return torchdata.DataLoader( - dataset, - batch_size=batch_size, - sampler=sampler, - drop_last=False, - num_workers=num_workers, - collate_fn=trivial_batch_collator if collate_fn is None else collate_fn, - ) \ No newline at end of file diff --git a/spaces/silencewing/server/Dockerfile b/spaces/silencewing/server/Dockerfile deleted file mode 100644 index 429cd2b92a676bf457e0539041cc32703633b959..0000000000000000000000000000000000000000 --- a/spaces/silencewing/server/Dockerfile +++ /dev/null @@ -1,46 +0,0 @@ -#first dockerfile - -#FROM nginx:latest -FROM ubuntu:latest -# RUN apt-get update -# RUN apt-get install -y vim - -# RUN apt-get install -y nginx - -# 以上执行会创建 3 层镜像。可简化为以下格式: - - -ENV DEBIAN_FRONTEND=noninteractive -RUN apt-get update && apt-get install -y vim && apt-get install -y nginx -# 如上,以 && 符号连接命令,这样执行后,只会创建 1 层镜像。 -#指定运行该镜像的容器使用的端口为 80 -# docker run的时候 一定要加上 -P -EXPOSE 7860 - - -RUN chown -R 1000 /var/log/nginx/ /var/lib/nginx/ /run/ -RUN useradd -m -u 1000 user -USER user -ENV HOME /home/user - -#RUN useradd -m -u 1000 user -# Switch to the "user" user -#USER user - -# Set home to the user's home directory -#ENV HOME=/home/user \ -#PATH=/home/user/.local/bin:$PATH -# Set the working directory to the user's home directory -WORKDIR $HOME -#RUN mkdir /usr/share/nginx/html/app -#RUN chown user /usr/share/nginx/html/app -COPY --chown=user:user . /home/user/app -COPY . /var/www/html/ - -#RUN chown -R 1000 /home/user - -#COPY --chown=user:user ./app /home/user/app -COPY ./default /etc/nginx/sites-available -#COPY --chown=user:user ./htpasswd /etc/nginx - -CMD ["nginx","-g","daemon off;"] diff --git a/spaces/silencewing/server/youyou/.history/math_20230613231033.html b/spaces/silencewing/server/youyou/.history/math_20230613231033.html deleted file mode 100644 index f5e1a3f73ef22fdcaac7fcf536c7e9e22957fb8c..0000000000000000000000000000000000000000 --- a/spaces/silencewing/server/youyou/.history/math_20230613231033.html +++ /dev/null @@ -1,229 +0,0 @@ - - - - - - - - - - Document - - - - -
        - - - - - - - - - - - - - - - - - - - - - - - - -
        题目
        答案
        正误
        得分
        -
        - - - - diff --git a/spaces/simonl0909/whisper-cantonese-demo/README.md b/spaces/simonl0909/whisper-cantonese-demo/README.md deleted file mode 100644 index 7843cbfd85cf6bb73dfbef2f8863bb01aed27aa2..0000000000000000000000000000000000000000 --- a/spaces/simonl0909/whisper-cantonese-demo/README.md +++ /dev/null @@ -1,15 +0,0 @@ ---- -title: Whisper Demo -emoji: 🤫 -colorFrom: indigo -colorTo: red -sdk: gradio -sdk_version: 3.9.1 -app_file: app.py -pinned: false -tags: -- whisper-event -duplicated_from: whisper-event/whisper-demo ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/FIFA Mobile Hn Quc - FIFA APK - Tri nghim bng chn thc nht.md b/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/FIFA Mobile Hn Quc - FIFA APK - Tri nghim bng chn thc nht.md deleted file mode 100644 index 079a8c0df8a212e6534a610ddb69fb5d4965ebd9..0000000000000000000000000000000000000000 --- a/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/FIFA Mobile Hn Quc - FIFA APK - Tri nghim bng chn thc nht.md +++ /dev/null @@ -1,104 +0,0 @@ -
        -

        FIFA Mobile Hàn Quốc Nexon APK: A Review

        -

        If you are a fan of soccer games and want to experience the thrill of playing with your favorite teams and players on your mobile device, you might want to check out FIFA Mobile Hàn Quốc Nexon APK. This is a mobile version of the popular FIFA game series by EA Sports, developed by NEXON Company for the Korean market. In this article, we will review the game and its features, compare it to other soccer games, and give you some tips and tricks to improve your skills and performance.

        -

        fifa mobile hàn quốc nexon apk


        DOWNLOADhttps://ssurll.com/2uNUzA



        -

        What is FIFA Mobile Hàn Quốc Nexon APK?

        -

        A brief introduction to the game and its features

        -

        FIFA Mobile Hàn Quốc Nexon APK is a soccer simulation game that lets you build your ultimate team of soccer stars and compete in various modes, such as Head-to-Head, VS Attack, Manager Mode, UEFA Champions League, UEFA Europa League, UEFA Europa Conference League, World Cup 2022, and more. You can choose from over 15,000 authentic players from over 600 teams, including Real Madrid, Paris SG, Liverpool, Juventus, Chelsea, Manchester City, Barcelona, Bayern Munich, etc. You can also customize your players with different kits, badges, skills, boosts, and chemistry.

        -

        The game features realistic graphics, animations, sound effects, commentary, and stadiums that create an immersive soccer atmosphere. The game also has a new engine that improves the gameplay, responsiveness, and performance of the game. The game supports up to 60 fps on compatible devices.

        -

        How to download and install the game on Android devices

        -

        To download and install FIFA Mobile Hàn Quốc Nexon APK on your Android device, you need to follow these steps:

        -
          -
        1. Go to [this link](^1^) or [this link](^2^) on your device's browser.
        2. -
        3. Tap on the Download APK button and wait for the file to be downloaded.
        4. -
        5. Once the file is downloaded, tap on it to open it.
        6. -
        7. If you see a warning message that says "Install blocked", go to your device's settings and enable "Unknown sources" or "Allow from this source".
        8. -
        9. Tap on Install and wait for the installation process to finish.
        10. -
        11. Once the installation is done, tap on Open to launch the game.
        12. -
        -

        Note: You need at least 162 MB of free space on your device to install the game. You also need an internet connection to play the game.

        -


        -

        How does FIFA Mobile H àn Quốc Nexon APK compare to other soccer games?

        -

        The pros and cons of FIFA Mobile Hàn Quốc Nexon APK

        -

        Like any other game, FIFA Mobile Hàn Quốc Nexon APK has its own advantages and disadvantages. Here are some of them:

        - - - - - - - - - - - - - - - - - - - - - -
        ProsCons
        - It has a large and diverse roster of players and teams.- It requires a lot of storage space and data usage.
        - It has various modes and events to keep you entertained.- It can be laggy and buggy on some devices.
        - It has realistic and smooth graphics and gameplay.- It can be hard to progress without spending real money.
        - It has a social aspect that lets you chat and play with other players.- It can be addictive and time-consuming.
        -

        The similarities and differences between FIFA Mobile Hàn Quốc Nexon APK and other popular soccer games, such as eFootball PES 2021, Dream League Soccer, and FIFA Mobile

        -

        FIFA Mobile Hàn Quốc Nexon APK is not the only soccer game available on the market. There are other popular soccer games that you might have heard of or played before, such as eFootball PES 2021, Dream League Soccer, and FIFA Mobile. How does FIFA Mobile Hàn Quốc Nexon APK compare to them? Here are some similarities and differences:

- Similarities:
  - All of them are soccer simulation games that let you build your own team and compete in various modes and events.
  - All of them have licensed players and teams from different leagues and countries.
  - All of them have realistic graphics, sound effects, commentary, and stadiums.
  - All of them are free to download and play, but have in-app purchases and ads.
- Differences:
  - FIFA Mobile Hàn Quốc Nexon APK is developed by NEXON Company for the Korean market, while the others are developed by different companies for the global market.
  - FIFA Mobile Hàn Quốc Nexon APK has more modes and events than the others, such as UEFA Europa Conference League, World Cup 2022, etc.
  - FIFA Mobile Hàn Quốc Nexon APK has a new engine that improves the gameplay, responsiveness, and performance of the game, while the others use older engines that may have some issues.
  - FIFA Mobile Hàn Quốc Nexon APK has a social aspect that lets you chat and play with other players, while the others have limited or no social features.

        How to improve your skills and performance in FIFA Mobile Hàn Quốc Nexon APK?

        -

        Some tips and tricks for building and managing your ultimate team

        -

        To succeed in FIFA Mobile Hàn Quốc Nexon APK, you need to have a strong and balanced team that can handle any opponent. Here are some tips and tricks for building and managing your ultimate team:

- Upgrade your players regularly by using training points, skill boosts, or player cards. You can get these items by playing matches, completing tasks, or opening packs.
- Use chemistry to increase your team's overall rating and performance. Chemistry is based on factors such as nationality, league, club, position, formation, etc. You can see your team's chemistry by tapping on the icon on the top left corner of the screen.
- Choose a formation that suits your play style and your players' strengths. You can change your formation by tapping on the icon on the top right corner of the screen. You can also adjust your tactics, such as attacking style, defensive style, etc.
- Rotate your players to avoid fatigue and injuries. You can substitute your players by tapping on the icon on the bottom right corner of the screen during a match. You can also use fitness items to restore your players' stamina.

        Some gameplay strategies and techniques for scoring goals and winning matches

        -

        To win matches in FIFA Mobile Hàn Quốc Nexon APK, you need to score more goals than your opponent. Here are some gameplay strategies and techniques for scoring goals and winning matches:

- Use the virtual joystick on the left side of the screen to move your player. You can also swipe on the screen to make quick turns or dribbles.
- Use the buttons on the right side of the screen to perform actions. You can use the pass button to pass the ball to a teammate, the shoot button to shoot at the goal, the sprint button to run faster, or the skill button to perform tricks or feints.
- Use different types of passes depending on the situation. You can use a short pass to make a quick and accurate pass, a long pass to send the ball over a long distance, a through pass to send the ball behind the defense, or a lob pass to send the ball over the defense.
- Use different types of shots depending on the situation. You can use a normal shot to make a powerful and direct shot, a finesse shot to make a curved and precise shot, a chip shot to make a high and lobbed shot, or a volley shot to make a shot in mid-air.
- Use different types of skills depending on the situation. You can use a roulette to spin around a defender, a rainbow flick to flick the ball over a defender, a heel-to-heel to change direction quickly, or a step-over to fake out a defender.
- Use different types of tactics depending on the situation. You can use an attacking tactic to increase your offensive power, a balanced tactic to maintain your defensive and offensive balance, or a defensive tactic to increase your defensive power.

        Conclusion

        -

        A summary of the main points and a recommendation for the game

        -

        FIFA Mobile Hàn Quốc Nexon APK is a soccer simulation game that offers you an exciting and realistic soccer experience on your mobile device. You can build your ultimate team of soccer stars and compete in various modes and events. You can also enjoy the realistic graphics, sound effects, commentary, and stadiums that create an immersive soccer atmosphere. The game also has a new engine that improves the gameplay, responsiveness, and performance of the game. The game also has a social aspect that lets you chat and play with other players.

        -

        If you are looking for a soccer game that is fun, challenging, and rewarding, you should give FIFA Mobile Hàn Quốc Nexon APK a try. You will not regret it.

        -

        FAQs

        -

        Q1. Is FIFA Mobile Hàn Quốc Nexon APK free to play?

        -

        A1. Yes, FIFA Mobile Hàn Quốc Nexon APK is free to download and play, but it has in-app purchases and ads that you can choose to buy or watch.

        -

        Q2. What are the minimum requirements for playing FIFA Mobile Hàn Quốc Nexon APK on Android devices?

        -

        A2. According to the official website, you need at least Android 5.0 or higher, 2 GB of RAM, and 162 MB of free space on your device to play FIFA Mobile Hàn Quốc Nexon APK.

        -

        Q3. How can I get more coins and gems in FIFA Mobile Hàn Quốc Nexon APK?

        -

        A3. You can get more coins and gems in FIFA Mobile Hàn Quốc Nexon APK by playing matches, completing tasks, opening packs, watching ads, or buying them with real money.

        -

        Q4. How can I play FIFA Mobile Hàn Quốc Nexon APK with my friends online?

        -

        A4. You can play FIFA Mobile Hàn Quốc Nexon APK with your friends online by adding them as friends in the game and inviting them to join your league or play Head-to-Head matches with them.

        -

        Q5. How can I contact the customer support of FIFA Mobile Hàn Quốc Nexon APK?

        -

        A5. You can contact the customer support of FIFA Mobile Hàn Quốc Nexon APK by tapping on the Settings icon on the top right corner of the screen, then tapping on Help & Support, then tapping on Contact Us.

        197e85843d
        -
        -
        \ No newline at end of file diff --git a/spaces/sklearn-docs/Hierarchical-clustering-dendrogram/README.md b/spaces/sklearn-docs/Hierarchical-clustering-dendrogram/README.md deleted file mode 100644 index 889a6b7973513ad6d293b7cb63af4fde5508f672..0000000000000000000000000000000000000000 --- a/spaces/sklearn-docs/Hierarchical-clustering-dendrogram/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Hierarchical Clustering Dendrogram -emoji: 🏃 -colorFrom: blue -colorTo: blue -sdk: gradio -sdk_version: 3.24.1 -app_file: app.py -pinned: false -license: creativeml-openrail-m ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/spritlesoftware/Image-Object-Detection/color_wheel.py b/spaces/spritlesoftware/Image-Object-Detection/color_wheel.py deleted file mode 100644 index 8580598b4e97967da70bc09e5087f80f8999c2d4..0000000000000000000000000000000000000000 --- a/spaces/spritlesoftware/Image-Object-Detection/color_wheel.py +++ /dev/null @@ -1,146 +0,0 @@ -import math -from color import Color - -class ColorWheel: - @property - def baseColor(self): - return self._baseColor - - @property - def hue(self): - return [ - self.baseColor, self.addH(1.0 / 12), self.addH(2.0 / 12), self.addH(3.0 / 12), self.addH(4.0 / 12), self.addH(5.0 / 12), - self.addH(-6.0 / 12), self.addH(-5.0 / 12), self.addH(-4.0 / 12), self.addH(-3.0 / 12), self.addH(-2.0 / 12), self.addH(-1.0 / 12) - ] - - @property - def tone(self): - return [self.addWhite(-2.0 / 16), self.addWhite(-1.0 / 16), self.baseColor, self.addWhite(1.0 / 16), self.addWhite(2.0 / 16)] - - @property - def tone15(self): - return [self.addWhite(-7.0 / 16), self.addWhite(-6.0 / 16), self.addWhite(-5.0 / 16), self.addWhite(-4.0 / 16), self.addWhite(-3.0 / 16), - self.addWhite(-2.0 / 16), self.addWhite(-1.0 / 16), self.baseColor, self.addWhite(1.0 / 16), self.addWhite(2.0 / 16), - self.addWhite(3.0 / 16), self.addWhite(4.0 / 16), self.addWhite(5.0 / 16), self.addWhite(6.0 / 16), self.addWhite(7.0 / 16)] - - @property - def complementaryColors(self): - return [self.baseColor, self.addH(0.5)] - - @property - def triadicColors(self): - return [self.addH(-4.0 / 12), self.baseColor, self.addH(4.0 / 12)] - - @property - def splitComplementaryColors(self): - return [self.addH(-5.0 / 12), self.baseColor, self.addH(5.0 / 12)] - - @property - def analogousColors(self): - return [self.addH(-2.0 / 12), self.addH(-1.0 / 12), self.baseColor, self.addH(1.0 / 12), self.addH(2.0 / 12)] - - def __init__(self, c): - self._baseColor = c - self._r = c.r / 255.0 - self._g = c.g / 255.0 - self._b = c.b / 255.0 - - @staticmethod - def fromHsv(h, s, v): - r, g, b = ColorWheel.hsvToRgb(h, s, v) - c = Color.fromRgb(round(r * 255), round(g * 255), round(b * 255)) - return ColorWheel(c) - - def addWhite(self, value): - r, g, b = (self._r + value, self._g + value, self._b + value) - r = min(max(r, 0.0), 1.0) - g = min(max(g, 0.0), 1.0) - b = min(max(b, 0.0), 1.0) - return self._fromRgb(r, g, b) - - def addH(self, value): - h, s, v = ColorWheel.rgbToHsv(self._r, self._g, self._b) - h = (h + value) % 1.0 - if h < 0.0: - h += 1.0 - r, g, b = ColorWheel.hsvToRgb(h, s, v) - return self._fromRgb(r, g, b) - - def addS(self, value): - h, s, v = ColorWheel.rgbToHsv(self._r, self._g, self._b) - s += value - s = min(max(s, 0.0), 1.0) - r, g, b = ColorWheel.hsvToRgb(h, s, v) - return self._fromRgb(r, g, b) - - def addV(self, value): - h, s, v = ColorWheel.rgbToHsv(self._r, self._g, self._b) - v += value - v = min(max(v, 0.0), 
1.0) - r, g, b = ColorWheel.hsvToRgb(h, s, v) - return self._fromRgb(r, g, b) - - @staticmethod - def rgbToHsv(r, g, b): - if r < 0.0 or r > 1.0: - raise ValueError() - if g < 0.0 or g > 1.0: - raise ValueError() - if b < 0.0 or b > 1.0: - raise ValueError() - cmax = max(r, g, b) - cmin = min(r, g, b) - h = cmax - cmin - if h > 0.0: - if cmax == r: - h = (g - b) / h - if h < 0.0: - h += 6.0 - elif cmax == g: - h = 2.0 + (b - r) / h - else: - h = 4.0 + (r - g) / h - h /= 6.0 - s = cmax - cmin - if cmax != 0.0: - s /= cmax - v = cmax - return h, s, v - - @staticmethod - def hsvToRgb(h, s, v): - if h < 0.0 or h > 1.0: - raise ValueError() - if s < 0.0 or s > 1.0: - raise ValueError() - if v < 0.0 or v > 1.0: - raise ValueError() - r = v - g = v - b = v - if s > 0.0: - h *= 6.0 - i = math.floor(h) - f = h - i - if i == 1: - r *= 1 - s * f - b *= 1 - s - elif i == 2: - r *= 1 - s - b *= 1 - s * (1 - f) - elif i == 3: - r *= 1 - s - g *= 1 - s * f - elif i == 4: - r *= 1 - s * (1 - f) - g *= 1 - s - elif i == 5: - g *= 1 - s - b *= 1 - s * f - else: - g *= 1 - s * (1 - f) - b *= 1 - s - return r, g, b - - def _fromRgb(self, r, g, b): - return Color.fromArgb(self.baseColor.a, round(r * 255), round(g * 255), round(b * 255)) diff --git a/spaces/sriramelango/Social_Classification_Public/fairseq/examples/multilingual/data_scripts/download_wat19_my.sh b/spaces/sriramelango/Social_Classification_Public/fairseq/examples/multilingual/data_scripts/download_wat19_my.sh deleted file mode 100644 index c1e2d47287a29af4576e7a63641e8152ecb63c44..0000000000000000000000000000000000000000 --- a/spaces/sriramelango/Social_Classification_Public/fairseq/examples/multilingual/data_scripts/download_wat19_my.sh +++ /dev/null @@ -1,36 +0,0 @@ -#!/bin/bash -# Copyright (c) Facebook, Inc. and its affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - - -if [ -z $WORKDIR_ROOT ] ; -then - echo "please specify your working directory root in environment variable WORKDIR_ROOT. Exitting..." - exit -fi - - -SRCDIR=$WORKDIR_ROOT/indic_languages_corpus -DESTDIR=$WORKDIR_ROOT/ML50/raw -mkdir -p $SRCDIR -mkdir -p $DESTDIR - -WAT_MY_EN=wat2020.my-en.zip -cd $SRCDIR -# please refer to http://lotus.kuee.kyoto-u.ac.jp/WAT/my-en-data/ for latest URL if the following url expired -#- The data used for WAT2020 are identical to those used in WAT2019. -wget http://lotus.kuee.kyoto-u.ac.jp/WAT/my-en-data/$WAT_MY_EN -unzip $WAT_MY_EN - - -SRC_EXTRACT_DIR=$SRCDIR/wat2020.my-en/alt - -cp $SRC_EXTRACT_DIR/train.alt.en $DESTDIR/train.my_MM-en_XX.en_XX -cp $SRC_EXTRACT_DIR/train.alt.my $DESTDIR/train.my_MM-en_XX.my_MM -cp $SRC_EXTRACT_DIR/dev.alt.en $DESTDIR/valid.my_MM-en_XX.en_XX -cp $SRC_EXTRACT_DIR/dev.alt.my $DESTDIR/valid.my_MM-en_XX.my_MM -cp $SRC_EXTRACT_DIR/test.alt.en $DESTDIR/test.my_MM-en_XX.en_XX -cp $SRC_EXTRACT_DIR/test.alt.my $DESTDIR/test.my_MM-en_XX.my_MM diff --git a/spaces/sriramelango/Social_Classification_Public/fairseq/examples/multilingual/multilingual_fairseq_gen.sh b/spaces/sriramelango/Social_Classification_Public/fairseq/examples/multilingual/multilingual_fairseq_gen.sh deleted file mode 100644 index 65aa322d7daaa428015de98abe4664a6a4164bfd..0000000000000000000000000000000000000000 --- a/spaces/sriramelango/Social_Classification_Public/fairseq/examples/multilingual/multilingual_fairseq_gen.sh +++ /dev/null @@ -1,26 +0,0 @@ -#!/bin/bash -# Copyright (c) Facebook, Inc. and its affiliates. -# All rights reserved. 
-# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -lang_pairs="en-fr,en-cs,fr-en,cs-en" -path_2_data=$1 # -lang_list=$2 # -model=$3 # -source_lang=cs -target_lang=en - -fairseq-generate "$path_2_data" \ - --path "$model" \ - --task translation_multi_simple_epoch \ - --gen-subset test \ - --source-lang "$source_lang" \ - --target-lang "$target_lang" \ - --sacrebleu --remove-bpe 'sentencepiece'\ - --batch-size 32 \ - --encoder-langtok "src" \ - --decoder-langtok \ - --lang-dict "$lang_list" \ - --lang-pairs "$lang_pairs" diff --git a/spaces/sriramelango/Social_Classification_Public/fairseq/examples/speech_synthesis/preprocessing/get_speaker_embedding.py b/spaces/sriramelango/Social_Classification_Public/fairseq/examples/speech_synthesis/preprocessing/get_speaker_embedding.py deleted file mode 100644 index 0e3e4c5cd7aef15dae0b41b0ec7b33e17f66597f..0000000000000000000000000000000000000000 --- a/spaces/sriramelango/Social_Classification_Public/fairseq/examples/speech_synthesis/preprocessing/get_speaker_embedding.py +++ /dev/null @@ -1,89 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - - -import argparse -from collections import defaultdict -from itertools import chain -from pathlib import Path - -import numpy as np -import torchaudio -import torchaudio.sox_effects as ta_sox -import yaml -from tqdm import tqdm - -from examples.speech_to_text.data_utils import load_tsv_to_dicts -from examples.speech_synthesis.preprocessing.speaker_embedder import SpkrEmbedder - - -def extract_embedding(audio_path, embedder): - wav, sr = torchaudio.load(audio_path) # 2D - if sr != embedder.RATE: - wav, sr = ta_sox.apply_effects_tensor( - wav, sr, [["rate", str(embedder.RATE)]] - ) - try: - emb = embedder([wav[0].cuda().float()]).cpu().numpy() - except RuntimeError: - emb = None - return emb - - -def process(args): - print("Fetching data...") - raw_manifest_root = Path(args.raw_manifest_root).absolute() - samples = [load_tsv_to_dicts(raw_manifest_root / (s + ".tsv")) - for s in args.splits] - samples = list(chain(*samples)) - with open(args.config, "r") as f: - config = yaml.load(f, Loader=yaml.FullLoader) - with open(f"{config['audio_root']}/{config['speaker_set_filename']}") as f: - speaker_to_id = {r.strip(): i for i, r in enumerate(f)} - - embedder = SpkrEmbedder(args.ckpt).cuda() - speaker_to_cnt = defaultdict(float) - speaker_to_emb = defaultdict(float) - for sample in tqdm(samples, desc="extract emb"): - emb = extract_embedding(sample["audio"], embedder) - if emb is not None: - speaker_to_cnt[sample["speaker"]] += 1 - speaker_to_emb[sample["speaker"]] += emb - if len(speaker_to_emb) != len(speaker_to_id): - missed = set(speaker_to_id) - set(speaker_to_emb.keys()) - print( - f"WARNING: missing embeddings for {len(missed)} speaker:\n{missed}" - ) - speaker_emb_mat = np.zeros((len(speaker_to_id), len(emb)), float) - for speaker in speaker_to_emb: - idx = speaker_to_id[speaker] - emb = speaker_to_emb[speaker] - cnt = speaker_to_cnt[speaker] - speaker_emb_mat[idx, :] = emb / cnt - speaker_emb_name = "speaker_emb.npy" - speaker_emb_path = f"{config['audio_root']}/{speaker_emb_name}" - np.save(speaker_emb_path, speaker_emb_mat) - config["speaker_emb_filename"] = speaker_emb_name - - with open(args.new_config, "w") as f: - yaml.dump(config, f) - - -def main(): - parser = argparse.ArgumentParser() - 
parser.add_argument("--raw-manifest-root", "-m", required=True, type=str) - parser.add_argument("--splits", "-s", type=str, nargs="+", - default=["train"]) - parser.add_argument("--config", "-c", required=True, type=str) - parser.add_argument("--new-config", "-n", required=True, type=str) - parser.add_argument("--ckpt", required=True, type=str, - help="speaker embedder checkpoint") - args = parser.parse_args() - - process(args) - - -if __name__ == "__main__": - main() diff --git a/spaces/sriramelango/Social_Classification_Public/fairseq/examples/textless_nlp/gslm/tools/README.md b/spaces/sriramelango/Social_Classification_Public/fairseq/examples/textless_nlp/gslm/tools/README.md deleted file mode 100644 index 61fcbbded80023f75eaec4b69ddfbbe4cc252e5b..0000000000000000000000000000000000000000 --- a/spaces/sriramelango/Social_Classification_Public/fairseq/examples/textless_nlp/gslm/tools/README.md +++ /dev/null @@ -1,22 +0,0 @@ -# GSLM Tools - -## Resynthesis -You can use the command line tool below to input an audio file and get the resynthesized audio. This tool implements the unsupervised method for resynthesis described in the paper. The way to invoke the command line tool is shown below. -``` -FAIRSEQ_ROOT= -TYPE= -ACOUSTIC_MODEL_PATH= -LAYER= -KM_MODEL_PATH= -TTS_MODEL_PATH= -WAVEGLOW_PATH= - -PYTHONPATH=${FAIRSEQ_ROOT}:${FAIRSEQ_ROOT}/examples/textless_nlp/gslm/unit2speech python ${FAIRSEQ_ROOT}/examples/textless_nlp/gslm/tools/gen_speech.py \ - --feature_type $TYPE \ - --acoustic_model_path $ACOUSTIC_MODEL_PATH \ - --layer $LAYER \ - --kmeans_model_path $KM_MODEL_PATH \ - --tts_model_path $TTS_MODEL_PATH \ - --waveglow_path $WAVEGLOW_PATH \ - --max_decoder_steps 2000 -``` \ No newline at end of file diff --git a/spaces/stomexserde/gpt4-ui/Examples/Anaarkali Of Aarah Movie 1080p FREE Download Utorrentl.md b/spaces/stomexserde/gpt4-ui/Examples/Anaarkali Of Aarah Movie 1080p FREE Download Utorrentl.md deleted file mode 100644 index 6874f232e0820b55779d74aeb8ec31d7caae5aea..0000000000000000000000000000000000000000 --- a/spaces/stomexserde/gpt4-ui/Examples/Anaarkali Of Aarah Movie 1080p FREE Download Utorrentl.md +++ /dev/null @@ -1,16 +0,0 @@ -
        -

        Anaarkali Of Aarah: A Bold and Empowering Tale of a Folk Dancer

        -

        Anaarkali Of Aarah is a 2017 Hindi movie that tells the story of Anaarkali (Swara Bhaskar), a folk dancer who performs erotic songs in public functions in the small town of Arrah in Bihar. She is proud of her art and does not shy away from expressing her sexuality on stage. However, her life takes a turn when she is molested by a powerful politician and vice chancellor Dharmender Chauhan (Sanjay Mishra) during one of her shows. Instead of succumbing to his threats and harassment, she decides to fight back and seek justice for herself.

        -

        Anaarkali Of Aarah Movie 1080p Download Utorrentl


        DOWNLOADhttps://urlgoal.com/2uI6cJ



        -

        The movie is written and directed by debutant Avinash Das, who was inspired by the real-life incidents of Bhojpuri folk singers who faced similar challenges and exploitation. The movie explores the themes of gender, power, dignity and consent in a patriarchal society that often silences and shames women who dare to challenge the status quo. The movie also showcases the rich and vibrant culture of Bihar and its folk music.

        -

        Swara Bhaskar delivers a stellar performance as Anaarkali, portraying her character with nuance, grace and courage. She brings out the complexity and depth of Anaarkali, who is not just a victim but also a survivor and a rebel. Sanjay Mishra is equally impressive as the antagonist, who represents the corrupt and oppressive system that tries to crush Anaarkali's spirit. Pankaj Tripathi plays Rangeela, Anaarkali's friend and manager, who supports her throughout her ordeal. The supporting cast also includes Ishtiyak Khan, Vijay Kumar, Ipsita Chakraborty Singh and Nitin Arora.

        -

        The movie has received critical acclaim for its bold and empowering narrative, its realistic and authentic portrayal of Bihar and its folk music, its brilliant performances and its tight screenplay. The movie has also won several awards and nominations, including a Filmfare nomination for Swara Bhaskar for Best Actress (Critics).

        -

        Anaarkali Of Aarah is a movie that celebrates the resilience and strength of women who refuse to give up their voice and their dignity in the face of adversity. It is a movie that challenges the stereotypes and prejudices that surround women who express their sexuality openly. It is a movie that inspires and entertains with its captivating story and music.

        -

        If you are looking for a movie that will make you think, feel and cheer, then Anaarkali Of Aarah is the perfect choice for you. You can download it in 1080p quality from Utorrentl by clicking on this link: https://utorrentl.com/anaarkali-of-aarah-movie-1080p-download

        - -

        Anaarkali Of Aarah is not just a movie, but a movement. It is a movie that challenges the patriarchal society that has a way of holding women responsible for the atrocities they themselves face. It is a movie that questions the perception that women who perform erotic songs are inviting trouble and deserve no respect. It is a movie that asserts that women have the right to say no, regardless of their profession or attire.

        -

        -

The movie has also been hailed as a feminist masterpiece that gives voice to the voiceless and to marginalised women who face sexual violence and harassment on a daily basis.

        -

        Anaarkali Of Aarah is a movie that deserves to be watched by everyone who believes in women's rights and dignity. It is a movie that will make you think, feel and cheer for Anaarkali, who is not just a character, but a symbol of courage and resistance. It is a movie that will stay with you long after it ends.

        7196e7f11a
        -
        -
        \ No newline at end of file diff --git a/spaces/stomexserde/gpt4-ui/Examples/ELAU EPAS-3 SOFTWARE 21 Arminwagon !FULL!.md b/spaces/stomexserde/gpt4-ui/Examples/ELAU EPAS-3 SOFTWARE 21 Arminwagon !FULL!.md deleted file mode 100644 index da4385cb71e7b2d1ce58fa92000acb300e515c67..0000000000000000000000000000000000000000 --- a/spaces/stomexserde/gpt4-ui/Examples/ELAU EPAS-3 SOFTWARE 21 Arminwagon !FULL!.md +++ /dev/null @@ -1,145 +0,0 @@ -
        -

        ELAU EPAS-3 Software 21 Arminwagon: A Comprehensive Guide for Motion Control

        -

        If you are looking for a software product that can help you program, configure, and diagnose ELAU motion controllers, you may want to check out ELAU EPAS-3 software 21 arminwagon. This software package is designed for various types of controllers, such as PacDrive M, PacDrive LMC Eco and Pro, PacDrive MC-4 and MC-5. With ELAU EPAS-3 software 21 arminwagon, you can create motion applications using graphical or textual programming languages, such as IEC 61131-3 or PLCopen. You can also use the online mode to upload, download, monitor, and debug your motion applications. Moreover, you can use the diagnostic tools, such as error codes, trace function, and oscilloscope function, to troubleshoot your motion applications.

        -

        ELAU EPAS-3 SOFTWARE 21 arminwagon


        Downloadhttps://urlgoal.com/2uI6v4



        -

        In this article, we will show you how to download and install ELAU EPAS-3 software 21 arminwagon on your PC and how to connect the PC to the ELAU motion controller via the RS 232 or RS 485 interface. We will then cover configuring the COM interface, the axis address, and the default parameters of the controller; launching the software and navigating the main window; creating a new project or opening an existing one; and programming motion applications with graphical or textual languages such as IEC 61131-3 or PLCopen. Finally, we will show you how to use the online mode to upload, download, monitor, and debug your motion applications, and how to troubleshoot them with the diagnostic tools, such as error codes, the trace function, and the oscilloscope function.

        -

        By the end of this article, you will have a clear understanding of what ELAU EPAS-3 software 21 arminwagon is and how to use it for your motion control projects. You will also learn some tips and tricks to optimize your motion control projects with ELAU EPAS-3 software 21 arminwagon. So, let's get started!

        -

        Installation and configuration

        -

        Before you can use ELAU EPAS-3 software 21 arminwagon, you need to download and install it on your PC. You also need to connect your PC to the ELAU motion controller via RS 232 or RS 485 interface. Then, you need to configure the COM interface, the axis address, and the default parameters of the ELAU motion controller. Here are the steps to follow:

        -

        What are the system requirements for ELAU EPAS-3 software 21 arminwagon?

        -

        To run ELAU EPAS-3 software 21 arminwagon on your PC, you need to have the following system requirements:

        | Requirement | Details |
        | --- | --- |
        | Operating system | Windows XP SP3, Windows Vista SP2, Windows 7 SP1, Windows 8.1, Windows 10 |
        | Processor | Intel Pentium IV or higher |
        | Memory | At least 512 MB RAM |
        | Hard disk space | At least 500 MB free space |
        | Display resolution | At least 1024 x 768 pixels |
        | Interface | RS 232 or RS 485 serial port or USB adapter |
        | Licence key | A valid licence key for ELAU EPAS-3 software 21 arminwagon is required to activate the software. You can obtain the licence key from Schneider Electric or their authorized distributors. |
        -

        How to download and install ELAU EPAS-3 software 21 arminwagon on your PC?

        -

        To download and install ELAU EPAS-3 software 21 arminwagon on your PC, you need to follow these steps:

        -

        -
          -
        1. Go to the Schneider Electric website and navigate to the download section for ELAU EPAS-3 software 21 arminwagon. You will need to register or log in to access the download link.
        2. Select the version of ELAU EPAS-3 software 21 arminwagon that matches your operating system and click on the download button. You will need to accept the terms and conditions before downloading the software.
        3. Save the downloaded file on your PC and unzip it using a suitable program, such as WinZip or WinRAR.
        4. Run the setup.exe file and follow the instructions on the screen to install ELAU EPAS-3 software 21 arminwagon on your PC. You will need to enter the licence key when prompted.
        5. Restart your PC after the installation is complete.
        6. You can now launch ELAU EPAS-3 software 21 arminwagon from the Start menu or from the desktop shortcut.
        -

        How to connect your PC to the ELAU motion controller via RS 232 or RS 485 interface?

        -

        To connect your PC to the ELAU motion controller via RS 232 or RS 485 interface, you need to follow these steps:

        -
          -
        1. Make sure that both your PC and the ELAU motion controller are powered off.
        2. Connect a suitable cable between the RS 232 or RS 485 port of your PC and the corresponding port of the ELAU motion controller. The cable should have a DB9 connector at both ends for the RS 232 interface, or a DB9 connector at one end and a terminal block at the other end for the RS 485 interface. You can also use a USB adapter if your PC does not have a serial port.
        3. Power on your PC and the ELAU motion controller.
        4. Your PC should automatically detect and install the driver for the serial port or the USB adapter. If not, you may need to install the driver manually from the CD-ROM that came with the cable or the adapter.
        5. You can now communicate with the ELAU motion controller using ELAU EPAS-3 software 21 arminwagon.
        -

        How to configure the COM interface, the axis address, and the default parameters of the ELAU motion controller?

        -

        To configure the COM interface, the axis address, and the default parameters of the ELAU motion controller, you need to follow these steps:

        -
          -
        1. Launch ELAU EPAS-3 software 21 arminwagon and select the COM interface from the menu bar. You will see a dialog box where you can select the COM port, the baud rate, the parity, the data bits, and the stop bits for the serial communication. You can also select the USB adapter if you are using one. Click on OK to confirm your settings. (A short, purely illustrative sketch of these serial parameters follows this list.)
        2. Select the axis address from the menu bar. You will see a dialog box where you can enter the axis address of the ELAU motion controller. The axis address is a number between 0 and 15 that identifies the controller on the serial network. You can also scan for the axis address by clicking on the Scan button. Click on OK to confirm your settings.
        3. Select the default parameters from the menu bar. You will see a dialog box where you can set the default parameters of the ELAU motion controller, such as the cycle time, the watchdog time, the acceleration and deceleration ramps, and the position and speed limits. You can also load or save the default parameters from a file by clicking on the Load or Save buttons. Click on OK to confirm your settings.
        4. You can now program and diagnose your ELAU motion controller using ELAU EPAS-3 software 21 arminwagon.
        -
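        The COM settings in step 1 are the standard serial-port parameters used by any RS 232 or RS 485 link. Purely as an illustration of how those parameters fit together (this is not part of EPAS-3 and does not implement ELAU's communication protocol), here is a minimal Python sketch using the pyserial library; the port name and the concrete values are assumptions for demonstration only.

```python
# Illustration only: the same kind of parameters that the EPAS-3 COM
# interface dialog asks for (port, baud rate, parity, data bits, stop bits),
# expressed with pyserial. The values are assumed for demonstration; they
# are not ELAU defaults, and this does not speak the controller's protocol.
import serial  # pip install pyserial

link = serial.Serial(
    port="COM1",                   # COM port (assumed)
    baudrate=9600,                 # baud rate (assumed)
    parity=serial.PARITY_NONE,     # parity
    bytesize=serial.EIGHTBITS,     # data bits
    stopbits=serial.STOPBITS_ONE,  # stop bits
    timeout=1.0,                   # read timeout in seconds
)

print("Serial port opened:", link.name)
link.close()
```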

        Programming and diagnosis

        -

        After you have installed and configured ELAU EPAS-3 software 21 arminwagon, you can start creating and testing your motion applications. You can use graphical or textual programming languages, such as IEC 61131-3 or PLCopen, to create motion applications in ELAU EPAS-3 software 21 arminwagon. You can also use the online mode to upload, download, monitor, and debug your motion applications in ELAU EPAS-3 software 21 arminwagon. Moreover, you can use the diagnostic tools, such as error codes, trace function, and oscilloscope function, to troubleshoot your motion applications in ELAU EPAS-3 software 21 arminwagon. Here are the steps to follow:

        -

        How to launch ELAU EPAS-3 software 21 arminwagon and navigate the main window?

        -

        To launch ELAU EPAS-3 software 21 arminwagon and navigate the main window, you need to follow these steps:

        -
          -
        1. Launch ELAU EPAS-3 software 21 arminwagon from the Start menu or from the desktop shortcut.
        2. You will see the main window of ELAU EPAS-3 software 21 arminwagon, which consists of several parts:
           • The menu bar, which contains various commands for file management, project management, programming languages, online mode, diagnostic tools, help topics, and more.
           • The toolbar, which contains shortcut icons for some of the most frequently used commands from the menu bar.
           • The project explorer, which shows the structure of your current project, such as folders, files, programs, variables, libraries, and more.
           • The editor area, which shows the content of your current file or program in graphical or textual format.
           • The output area, which shows messages from ELAU EPAS-3 software 21 arminwagon or from the ELAU motion controller, such as status, errors, warnings, and more.
           • The status bar, which shows the current state of ELAU EPAS-3 software 21 arminwagon and the ELAU motion controller, such as online or offline, run or stop, axis address, and more.
        3. You can resize, move, dock, or undock any of these parts according to your preference. You can also customize the menu bar and the toolbar by adding or removing commands from them.
        4. You can now create a new project or open an existing project in ELAU EPAS-3 software 21 arminwagon.
        -

        How to create a new project or open an existing project in ELAU EPAS-3 software 21 arminwagon?

        -

        To create a new project or open an existing project in ELAU EPAS-3 software 21 arminwagon, you need to follow these steps:

        -
          -
        1. Select the File command from the menu bar and choose the New Project or Open Project option. You can also use the shortcut icons from the toolbar or the keyboard shortcuts Ctrl+N or Ctrl+O.
        2. If you choose the New Project option, you will see a dialog box where you can enter the name and the location of your new project. You can also select a template for your new project from a list of predefined templates. Click on OK to create your new project.
        3. If you choose the Open Project option, you will see a dialog box where you can browse and select an existing project file from your PC. You can also use the recent projects list to quickly access your previously opened projects. Click on OK to open your existing project.
        4. You will see your current project in the project explorer. You can expand or collapse the folders and files in your project by clicking on the plus or minus signs next to them. You can also rename, delete, copy, paste, or move any of the folders and files in your project by right-clicking on them and choosing the appropriate option from the context menu.
        5. You can now start programming your motion applications in ELAU EPAS-3 software 21 arminwagon.
        -

        How to use graphical or textual programming languages, such as IEC 61131-3 or PLCopen, to create motion applications in ELAU EPAS-3 software 21 arminwagon?

        -

        To use graphical or textual programming languages, such as IEC 61131-3 or PLCopen, to create motion applications in ELAU EPAS-3 software 21 arminwagon, you need to follow these steps:

        -
          -
        1. Select the Programming Languages command from the menu bar and choose the graphical or textual programming language that you want to use. You can also use the shortcut icons from the toolbar or the keyboard shortcuts F5 for graphical programming languages and F6 for textual programming languages.
        2. If you choose a graphical programming language, such as Ladder Diagram (LD), Function Block Diagram (FBD), or Sequential Function Chart (SFC), you will see a graphical editor area where you can drag and drop elements from a toolbox and connect them with wires. You can also edit the properties of each element by double-clicking on it or by using the property window, and you can use function blocks from PLCopen libraries or user-defined libraries to create complex motion functions.
        3. If you choose a textual programming language, such as Instruction List (IL) or Structured Text (ST), you will see a textual editor area where you can type commands and operands using a syntax similar to Pascal or C. You can also use keywords from the IEC 61131-3 standard or PLCopen libraries to create motion functions.
        4. You can switch between different programming languages by selecting them from the menu bar or the toolbar. You can also use the keyboard shortcuts F7 or F8 to switch between graphical and textual programming languages.
        5. You can save your file or program by selecting the Save command from the menu bar or the toolbar, or with the keyboard shortcut Ctrl+S. To save it under a different name or location, select the Save As command from the menu bar.
        6. You can compile your file or program by selecting the Compile command from the menu bar or the toolbar, or with the keyboard shortcut F9. Check the output area for any errors or warnings; double-click on a message to jump to the corresponding line in your file or program.
        7. You can now use the online mode to upload, download, monitor, and debug your motion applications in ELAU EPAS-3 software 21 arminwagon.
        -

        How to use the online mode to upload, download, monitor, and debug your motion applications in ELAU EPAS-3 software 21 arminwagon?

        -

        To use the online mode to upload, download, monitor, and debug your motion applications in ELAU EPAS-3 software 21 arminwagon, you need to follow these steps:

        -
          -
        1. Select the Online Mode command from the menu bar and choose the Connect option (shortcut icon on the toolbar or keyboard shortcut F10). In the dialog box, select the axis address of the ELAU motion controller that you want to connect to and click on OK to establish the connection.
        2. A green indicator on the status bar shows that you are online with the ELAU motion controller, and the project explorer lists the files and programs stored in the memory of the controller.
        3. To upload a file or program from your PC to the controller, select it in the project explorer and choose the Upload command (Ctrl+U). In the dialog box, select the destination folder and name in the controller's memory and click on OK to start the upload.
        4. To download a file or program from the controller to your PC, select it in the project explorer and choose the Download command (Ctrl+D). In the dialog box, select the destination folder and name on your PC and click on OK to start the download.
        5. To monitor a file or program on the controller, select it in the project explorer and choose the Monitor command (F11). In the dialog box, select the variables that you want to monitor; you can also add, remove, or edit variables there. Click on OK to start monitoring.
        6. To debug a file or program on the controller, select it in the project explorer and choose the Debug command (F12). In the dialog box, set breakpoints, watchpoints, and triggers; you can also add, remove, or edit them there. Click on OK to start debugging.
        7. To run or stop a file or program on the controller, select it in the project explorer and choose the Run or Stop command (Ctrl+R or Ctrl+S), then confirm your action in the dialog box.
        8. To disconnect from the ELAU motion controller, select the Online Mode command from the menu bar and choose the Disconnect option (shortcut icon or Ctrl+F10), then confirm your action in the dialog box.
        -

        Conclusion

        -

        In this article, we have walked through downloading and installing ELAU EPAS-3 software 21 arminwagon, connecting your PC to the ELAU motion controller over the RS 232 or RS 485 interface, and configuring the COM interface, the axis address, and the controller's default parameters. We have also covered launching the software and navigating the main window, creating or opening projects, programming motion applications with graphical or textual languages such as IEC 61131-3 or PLCopen, and using the online mode and the diagnostic tools (error codes, trace function, and oscilloscope function) to upload, download, monitor, debug, and troubleshoot those applications.

        -

        ELAU EPAS-3 software 21 arminwagon is a powerful and versatile software product that can help you program, configure, and diagnose ELAU motion controllers. It can help you create complex and sophisticated motion applications using graphical or textual programming languages, such as IEC 61131-3 or PLCopen. It can also help you test and optimize your motion applications using the online mode and the diagnostic tools. With ELAU EPAS-3 software 21 arminwagon, you can achieve high performance and reliability for your motion control projects.

        -

        Here are some tips and tricks to optimize your motion control projects with ELAU EPAS-3 software 21 arminwagon:

        -
          -
        • Use the help topics from the menu bar or the toolbar to access detailed information and examples on how to use ELAU EPAS-3 software 21 arminwagon.
        • Use the simulation mode from the menu bar or the toolbar to simulate your motion applications without connecting to the ELAU motion controller.
        • Use the backup and restore functions from the menu bar or the toolbar to save and load your project files and settings.
        • Use the update function from the menu bar or the toolbar to check for any updates or patches for ELAU EPAS-3 software 21 arminwagon.
        • Use the feedback function from the menu bar or the toolbar to send your comments or suggestions to Schneider Electric or to report any bugs or issues with ELAU EPAS-3 software 21 arminwagon.
        -

        If you have any questions or need any support with ELAU EPAS-3 software 21 arminwagon, you can contact Schneider Electric or visit their website for more information. Schneider Electric is a global leader in energy management and automation solutions, and they are committed to providing you with the best products and services for your motion control needs.

        -

        FAQs

        -

        Here are some frequently asked questions and answers related to ELAU EPAS-3 software 21 arminwagon:

        -
          -
        1. Q: What is the difference between ELAU EPAS-3 software 21 arminwagon and ELAU EPAS-4 software?
           A: ELAU EPAS-3 software 21 arminwagon is the previous version of ELAU EPAS-4 software, which is the latest version of the software product for programming and configuring ELAU motion controllers. ELAU EPAS-4 software has some new features and improvements, such as a new user interface, a new project structure, and a new online mode. However, ELAU EPAS-3 software 21 arminwagon is still compatible with most of the ELAU motion controllers, such as PacDrive M, PacDrive LMC Eco and Pro, PacDrive MC-4 and MC-5.
        2. Q: How can I upgrade from ELAU EPAS-3 software 21 arminwagon to ELAU EPAS-4 software?
           A: You can upgrade by purchasing a licence key for ELAU EPAS-4 software from Schneider Electric or their authorized distributors. You can then download and install ELAU EPAS-4 software from the Schneider Electric website and activate it with the licence key. You can also convert your existing projects from ELAU EPAS-3 software 21 arminwagon to ELAU EPAS-4 software using the conversion tool that is included in ELAU EPAS-4 software.
        3. Q: How can I back up and restore my project files and settings in ELAU EPAS-3 software 21 arminwagon?
           A: Use the backup and restore functions from the menu bar or the toolbar (keyboard shortcuts Ctrl+B or Ctrl+R). You can back up your project files and settings to a file on your PC or to a removable device, such as a USB flash drive or a CD-ROM, and restore them later by selecting the backup file or device in the dialog box that appears when you use the restore function.
        4. Q: How can I update or patch my ELAU EPAS-3 software 21 arminwagon?
           A: Use the update function from the menu bar or the toolbar (keyboard shortcut Ctrl+U). Click on the Check for Updates button in the dialog box to check for any updates or patches, then download and install any that are available by following the instructions on the screen.
        5. Q: How can I send feedback or report bugs to Schneider Electric about ELAU EPAS-3 software 21 arminwagon?
           A: Use the feedback function from the menu bar or the toolbar (keyboard shortcut Ctrl+F). Fill in the form with your name, email address, subject, and message; you can also attach any files or screenshots that are relevant. Click on Send to submit your feedback or bug report to Schneider Electric.

        b2dd77e56b
        -
        -
        \ No newline at end of file diff --git a/spaces/stomexserde/gpt4-ui/Examples/IMVU Mesh Extractor V2 0 0 0 Convert Extension Fo.md b/spaces/stomexserde/gpt4-ui/Examples/IMVU Mesh Extractor V2 0 0 0 Convert Extension Fo.md deleted file mode 100644 index fb2597644bd8dc17987862bd093550afbbdb47f4..0000000000000000000000000000000000000000 --- a/spaces/stomexserde/gpt4-ui/Examples/IMVU Mesh Extractor V2 0 0 0 Convert Extension Fo.md +++ /dev/null @@ -1,57 +0,0 @@ - -

        What is IMVU Mesh Extractor V2 0 0 0 and why do you need it?

        -

        If you are a fan of IMVU, the online social metaverse where you can create your own avatar, chat with friends, play games, and explore millions of virtual worlds, you might have wondered how some of the amazing products in the IMVU catalog are made. You might have also wanted to create your own products or customize existing ones to suit your style and preferences.

        -

        To do that, you need to understand what are mesh files and how they work. A mesh file is a file that contains the information about the shape, texture, and animation of a three-dimensional object in IMVU. For example, a mesh file can define how a hair product looks like, how it moves, and how it fits on your avatar's head.

        -

        IMVU Mesh Extractor V2 0 0 0 convert extension fo


        Downloadhttps://urlgoal.com/2uI7KU



        -

        However, mesh files are not easy to access or modify. They are usually encrypted or protected by the original creators or developers of the products. This is where IMVU Mesh Extractor V2 0 0 comes in handy. This is a software tool that allows you to extract mesh files from any IMVU product, whether it is your own or someone else's. You can then use these mesh files for your own purposes, such as editing, converting, or creating new products.

        -

        Why do you need IMVU Mesh Extractor V2 for your IMVU projects? There are many reasons why this tool can be useful and beneficial for you. Here are some of them:

        -
          -
        • You can learn from other creators and developers by studying their mesh files and seeing how they made their products.
        • You can improve your own products by modifying or enhancing the mesh files of existing products.
        • You can create new products by combining or mixing different mesh files from different products.
        • You can convert mesh files to different formats that are compatible with other software tools or platforms.
        • You can have fun and express your creativity by experimenting with different mesh files and creating unique products.

        -

        How to use IMVU Mesh Extractor V2 0 0 0 to convert extension fo?

          One of the features of IMVU Mesh Extractor V2 0 0 0 is that it can convert mesh files to different formats, such as extension fo. Extension fo is a file format that is used by some other software tools or platforms that are related to IMVU, such as Blender, SketchUp, or Second Life. By converting mesh files to extension fo, you can use them for other purposes, such as importing, exporting, or editing them in these tools or platforms.

          -

          How do you use IMVU Mesh Extractor V2 0 0 0 to convert extension fo? Here are the steps you need to follow:

          -
            -
        1. Download and install IMVU Mesh Extractor V2 0 0 0 from the official website. You can choose between the free version or the paid version, depending on your needs and preferences. The free version has some limitations, such as the number of products you can extract or the formats you can convert. The paid version has more features and options, such as batch extraction or conversion, custom settings, and technical support.
        2. Launch IMVU Mesh Extractor V2 0 0 0 and log in with your IMVU account. You will see a list of all the products that you own or have access to in your IMVU inventory. You can also search for other products by using the product ID or the product name.
        3. Select the product that you want to extract the mesh files from and click on the Extract button. You will see a folder with the name of the product and a subfolder with the name of the mesh file. The mesh file will have the extension .xmf, which is the default format for IMVU mesh files.
        4. Select the mesh file that you want to convert to extension fo and click on the Convert button. You will see a window with different options for conversion. You can choose the output format, the output folder, and the output name. To convert to extension fo, you need to select .fo as the output format. (A small file-listing sketch after these steps illustrates the .xmf to .fo naming.)
        5. Click on the OK button and wait for the conversion process to finish. You will see a message that says "Conversion completed successfully". You can then find the converted file in the output folder that you specified. The converted file will have the extension .fo, which is the format for extension fo.
          -
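          If you extract several products at once, it can help to see which .xmf meshes are waiting to be converted and what the resulting .fo names will be. The small Python sketch below only lists the files and prints the intended output names; the folder path is an assumed example, and the actual conversion is still performed with the Convert button in step 4.

```python
# Illustration only: list extracted .xmf mesh files and show the .fo name
# each one would get after conversion. The extraction folder is an assumed
# example path; the conversion itself is done in the tool's Convert dialog,
# not by this script.
from pathlib import Path

extract_dir = Path("C:/IMVU/extracted")  # assumed extraction folder

for xmf_file in sorted(extract_dir.rglob("*.xmf")):
    fo_name = xmf_file.with_suffix(".fo").name
    print(f"{xmf_file.name} -> {fo_name}")
```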

          Congratulations! You have successfully used IMVU Mesh Extractor V2 0 0 0 to convert extension fo. You can now use this file for other purposes, such as importing it into Blender, SketchUp, or Second Life.

          -

          What are the benefits of using IMVU Mesh Extractor V2 0 0 0 to convert extension fo?

          -

          There are many benefits of using IMVU Mesh Extractor V2 0 0 0 to convert extension fo. Here are some of them:

          -
            -
          • You can use the converted files for your own IMVU creations. For example, you can import them into Blender and edit them with more advanced tools and features. You can then export them back to IMVU and upload them as new products or derivations.
          • You can improve your IMVU skills and knowledge by using IMVU Mesh Extractor V2 0 0 0. By converting extension fo, you can learn more about how mesh files work and how they are structured. You can also learn more about how different software tools or platforms handle mesh files and how they differ from IMVU.
          • You can save time and money by using IMVU Mesh Extractor V2 0 0 0. By converting extension fo, you can avoid buying or downloading other software tools or platforms that are compatible with extension fo. You can also avoid spending time and effort on learning how to use these tools or platforms.
          -

          In short, using IMVU Mesh Extractor V2 to convert extension fo can help you create better products, learn more skills, and save more resources.

          -

          What are the drawbacks and limitations of using IMVU Mesh Extractor V2 to convert extension fo?

          -

          However, using IMVU Mesh Extractor V2 0 0 0 to convert extension fo also has some drawbacks and limitations that you need to be aware of. Here are some of them:

          -

          -
            -
          • You may face legal and ethical issues by using IMVU Mesh Extractor V2 0 0 0. By extracting and converting mesh files from other products, you may be violating the intellectual property rights or the terms of service of the original creators or developers. You may also be infringing the privacy or the security of the IMVU platform. You may face legal actions or penalties if you are caught or reported by the authorities or the IMVU staff.
          • You may encounter technical and quality issues by using IMVU Mesh Extractor V2 0 0 0. By extracting and converting mesh files, you may lose some of the information or the quality of the original files. You may also face compatibility or performance issues when using the converted files in other software tools or platforms. You may need to adjust or fix some of the settings or parameters to make them work properly.
          • You may have alternatives and solutions for using IMVU Mesh Extractor V2 0 0 0. By extracting and converting mesh files, you may not be using the best or the most efficient way to create or customize your IMVU products. You may have other options or methods that are more suitable or convenient for your needs and preferences. For example, you may use the IMVU Create Mode or the IMVU Previewer to edit or create your products directly in IMVU. You may also use other software tools or platforms that are designed specifically for IMVU, such as IMVUKSA Product Extractor, IMVU Cal3D Exporter, or IMVU Avatar Studio.
          -

          Therefore, you need to weigh the pros and cons of using IMVU Mesh Extractor V2 0 0 0 to convert extension fo and decide whether it is worth it or not.

          -

          Conclusion

          -

          In conclusion, IMVU Mesh Extractor V2 0 0 0 is a software tool that allows you to extract mesh files from any IMVU product and convert them to different formats, such as extension fo. This can be useful and beneficial for your IMVU projects, as you can use the converted files for your own creations, improve your skills and knowledge, and save time and money. However, this also has some drawbacks and limitations, such as legal and ethical issues, technical and quality issues, and alternatives and solutions. You need to be careful and responsible when using this tool and respect the rights and rules of IMVU and other creators and developers.

          -

          We hope that this article has helped you understand what is IMVU Mesh Extractor V2 0 0 0 and why do you need it. If you have any questions, comments, or feedback, please feel free to leave them below. We would love to hear from you and help you with your IMVU projects.

          -

          FAQs

          -

          Q: Where can I download IMVU Mesh Extractor V2 0 0 0?

          -

          A: You can download IMVU Mesh Extractor V2 from the official website: http://www.imvu-mesh-extractor.com/. You can choose between the free version or the paid version.

          -

          Q: How much does IMVU Mesh Extractor V2 cost?

          -

          A: The free version of IMVU Mesh Extractor V2 is free to use, but it has some limitations, such as the number of products you can extract or the formats you can convert. The paid version of IMVU Mesh Extractor V2 costs $19.95 USD for a lifetime license, which gives you access to all the features and options, such as batch extraction or conversion, custom settings, and technical support.

          -

          Q: Is IMVU Mesh Extractor V2 safe to use?

          -

          A: IMVU Mesh Extractor V2 is safe to use, as long as you follow the instructions and precautions. However, you need to be aware of the legal and ethical issues that may arise from using this tool. You need to respect the intellectual property rights and the terms of service of IMVU and other creators and developers. You also need to protect your privacy and security when using this tool and avoid sharing or distributing the extracted or converted files without permission.

          -

          Q: What are some examples of products that I can extract or convert with IMVU Mesh Extractor V2 0 0 0?

          -

          A: You can extract or convert any IMVU product that has a mesh file, such as clothing, accessories, furniture, rooms, pets, vehicles, etc. For example, you can extract or convert a hair product, a dress product, a chair product, a house product, a dog product, or a car product. You can then use these files for your own purposes, such as editing, creating, or importing them into other software tools or platforms.

          -

          Q: How can I contact the developer of IMVU Mesh Extractor V2 0 0 0?

          -

          A: You can contact the developer of IMVU Mesh Extractor V2 by using the contact form on the official website: http://www.imvu-mesh-extractor.com/contact.php. You can also join the official Facebook group: https://www.facebook.com/groups/imvumeshextractor/. You can ask questions, report problems, request features, or share feedback with the developer and other users.

          b2dd77e56b
          -
          -
          \ No newline at end of file diff --git a/spaces/stomexserde/gpt4-ui/Examples/Integrasi Spada Ristekdikti Dengan Elearning Moodle.md b/spaces/stomexserde/gpt4-ui/Examples/Integrasi Spada Ristekdikti Dengan Elearning Moodle.md deleted file mode 100644 index d4944446cd24f59e978f8f1c99651d8a8a94e657..0000000000000000000000000000000000000000 --- a/spaces/stomexserde/gpt4-ui/Examples/Integrasi Spada Ristekdikti Dengan Elearning Moodle.md +++ /dev/null @@ -1,39 +0,0 @@ -
          -

          How to Integrate Spada Ristekdikti with Elearning Moodle

          -

          Spada Ristekdikti is a system of online learning in Indonesia, which is a program of the Directorate General of Learning and Student Affairs of the Ministry of Research, Technology and Higher Education. It aims to improve access to quality learning in higher education institutions. Spada Ristekdikti offers three programs: Open Materials, Open Courses, and Online Courses[^1^].

          -

          In this article, we will explain how to integrate Spada Ristekdikti with Elearning Moodle, which is a popular learning management system (LMS) used by many universities and schools. By integrating Spada Ristekdikti with Elearning Moodle, you can access various online courses and materials offered by Spada Ristekdikti through your own LMS. You can also get recognition for your learning outcomes by obtaining certificates that are valid by the Ministry of Research, Technology and Higher Education.

          -

          Integrasi Spada Ristekdikti dengan Elearning Moodle


          DOWNLOAD ✵✵✵ https://urlgoal.com/2uIaom



          -

          Steps to Integrate Spada Ristekdikti with Elearning Moodle

          -
            -
          1. Download the plugin package from http://spada.ristekdikti.go.id/files/plugin/spada.zip or from the API LMS menu on the dashboard[^1^].
          2. Upload the plugin package to the hosting where Moodle is installed (see the illustrative Python sketch further below for one way to script the download and extraction).
          3. Enter your AUTH CODE in the spada.php file. You can get your AUTH CODE from the dashboard of Spada Ristekdikti.
          4. Configure the position of the SPADA button on your Moodle site. You can choose to place it on the header, footer, or sidebar.
          5. Modify the view.php file in your Moodle course folder. Add the script below at the beginning and end of the file[^1^].
          -
          <?php
          // Add this block at the very beginning of view.php
          require_once('../../config.php');
          require_once('lib.php');
          require_once('spada.php');
          $spada = new spada();
          $spada->check_login();
          $spada->check_course();
          ?>
          // original view.php code
          <?php
          // Add this block at the very end of view.php
          $spada->show_button();
          ?>
          -
          -

          This will enable you to see the SPADA button on your course page. You can click on it to access Spada Ristekdikti courses and materials.

          -
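          Steps 1 and 2 of the integration (downloading spada.zip and placing it on the hosting where Moodle runs) can also be scripted. The sketch below is only an illustration: it uses the plugin URL given in step 1, while the Moodle directory is an assumed example path that you would replace with your own server's location.

```python
# Illustration only: fetch the SPADA plugin package and unpack it into the
# Moodle installation. The MOODLE_DIR path is an assumed example; the URL is
# the one given in step 1 of this guide.
import io
import zipfile
import urllib.request

PLUGIN_URL = "http://spada.ristekdikti.go.id/files/plugin/spada.zip"
MOODLE_DIR = "/var/www/html/moodle"  # assumed Moodle root on your hosting

with urllib.request.urlopen(PLUGIN_URL) as response:
    payload = response.read()

with zipfile.ZipFile(io.BytesIO(payload)) as archive:
    archive.extractall(MOODLE_DIR)
    print(f"Extracted {len(archive.namelist())} files to {MOODLE_DIR}")
```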

          Benefits of Integrating Spada Ristekdikti with Elearning Moodle

          -

          By integrating Spada Ristekdikti with Elearning Moodle, you can enjoy the following benefits:

          -
            -
          • You can access various online courses and materials from different universities and institutions in Indonesia through Spada Ristekdikti.
          • You can get recognition for your learning outcomes by obtaining certificates that are recognised by the Ministry of Research, Technology and Higher Education.
          • You can enhance your knowledge and skills in various fields and disciplines that are relevant to your study or career.
          • You can enrich your learning experience by interacting with other learners and instructors from different backgrounds and perspectives.
          -

          Conclusion

          -

          Spada Ristekdikti is a system of online learning in Indonesia that offers three programs: Open Materials, Open Courses, and Online Courses. You can integrate Spada Ristekdikti with Elearning Moodle by following the steps above. By doing so, you can access various online courses and materials offered by Spada Ristekdikti through your own LMS. You can also get recognition for your learning outcomes by obtaining certificates that are valid by the Ministry of Research, Technology and Higher Education.

          7b8c122e87
          -
          -
          \ No newline at end of file diff --git a/spaces/stratussox/yolov5_inference/utils/loggers/wandb/__init__.py b/spaces/stratussox/yolov5_inference/utils/loggers/wandb/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/sub314xxl/MetaGPT/metagpt/roles/product_manager.py b/spaces/sub314xxl/MetaGPT/metagpt/roles/product_manager.py deleted file mode 100644 index b42e9bb294484d57aa38a01e23ef98104483a5c6..0000000000000000000000000000000000000000 --- a/spaces/sub314xxl/MetaGPT/metagpt/roles/product_manager.py +++ /dev/null @@ -1,17 +0,0 @@ -#!/usr/bin/env python -# -*- coding: utf-8 -*- -""" -@Time : 2023/5/11 14:43 -@Author : alexanderwu -@File : product_manager.py -""" -from metagpt.actions import BossRequirement, WritePRD -from metagpt.roles import Role - - -class ProductManager(Role): - def __init__(self, name="Alice", profile="Product Manager", goal="Efficiently create a successful product", - constraints=""): - super().__init__(name, profile, goal, constraints) - self._init_actions([WritePRD]) - self._watch([BossRequirement]) diff --git a/spaces/subhc/Guess-What-Moves/mask_former/modeling/transformer/transformer.py b/spaces/subhc/Guess-What-Moves/mask_former/modeling/transformer/transformer.py deleted file mode 100644 index ea8caa0108f5e136a9739320ab69a3e1b6f40298..0000000000000000000000000000000000000000 --- a/spaces/subhc/Guess-What-Moves/mask_former/modeling/transformer/transformer.py +++ /dev/null @@ -1,369 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# Modified by Bowen Cheng from: https://github.com/facebookresearch/detr/blob/master/models/transformer.py -""" -Transformer class. - -Copy-paste from torch.nn.Transformer with modifications: - * positional encodings are passed in MHattention - * extra LN at the end of encoder is removed - * decoder returns a stack of activations from all decoding layers -""" -import copy -from typing import List, Optional - -import torch -import torch.nn.functional as F -from torch import Tensor, nn - - -class Transformer(nn.Module): - def __init__( - self, - d_model=512, - nhead=8, - num_encoder_layers=6, - num_decoder_layers=6, - dim_feedforward=2048, - dropout=0.1, - activation="relu", - normalize_before=False, - return_intermediate_dec=False, - ): - super().__init__() - - encoder_layer = TransformerEncoderLayer( - d_model, nhead, dim_feedforward, dropout, activation, normalize_before - ) - encoder_norm = nn.LayerNorm(d_model) if normalize_before else None - self.encoder = TransformerEncoder(encoder_layer, num_encoder_layers, encoder_norm) - - decoder_layer = TransformerDecoderLayer( - d_model, nhead, dim_feedforward, dropout, activation, normalize_before - ) - decoder_norm = nn.LayerNorm(d_model) - self.decoder = TransformerDecoder( - decoder_layer, - num_decoder_layers, - decoder_norm, - return_intermediate=return_intermediate_dec, - ) - - self._reset_parameters() - - self.d_model = d_model - self.nhead = nhead - - def _reset_parameters(self): - for p in self.parameters(): - if p.dim() > 1: - nn.init.xavier_uniform_(p) - - def forward(self, src, mask, query_embed, pos_embed): - # flatten NxCxHxW to HWxNxC - bs, c, h, w = src.shape - src = src.flatten(2).permute(2, 0, 1) - pos_embed = pos_embed.flatten(2).permute(2, 0, 1) - query_embed = query_embed.unsqueeze(1).repeat(1, bs, 1) - if mask is not None: - mask = mask.flatten(1) - - tgt = torch.zeros_like(query_embed) - memory = self.encoder(src, src_key_padding_mask=mask, pos=pos_embed) 
- hs = self.decoder( - tgt, memory, memory_key_padding_mask=mask, pos=pos_embed, query_pos=query_embed - ) - return hs.transpose(1, 2), memory.permute(1, 2, 0).view(bs, c, h, w) - - -class TransformerEncoder(nn.Module): - def __init__(self, encoder_layer, num_layers, norm=None): - super().__init__() - self.layers = _get_clones(encoder_layer, num_layers) - self.num_layers = num_layers - self.norm = norm - - def forward( - self, - src, - mask: Optional[Tensor] = None, - src_key_padding_mask: Optional[Tensor] = None, - pos: Optional[Tensor] = None, - ): - output = src - - for layer in self.layers: - output = layer( - output, src_mask=mask, src_key_padding_mask=src_key_padding_mask, pos=pos - ) - - if self.norm is not None: - output = self.norm(output) - - return output - - -class TransformerDecoder(nn.Module): - def __init__(self, decoder_layer, num_layers, norm=None, return_intermediate=False): - super().__init__() - self.layers = _get_clones(decoder_layer, num_layers) - self.num_layers = num_layers - self.norm = norm - self.return_intermediate = return_intermediate - - def forward( - self, - tgt, - memory, - tgt_mask: Optional[Tensor] = None, - memory_mask: Optional[Tensor] = None, - tgt_key_padding_mask: Optional[Tensor] = None, - memory_key_padding_mask: Optional[Tensor] = None, - pos: Optional[Tensor] = None, - query_pos: Optional[Tensor] = None, - ): - output = tgt - - intermediate = [] - - for layer in self.layers: - output = layer( - output, - memory, - tgt_mask=tgt_mask, - memory_mask=memory_mask, - tgt_key_padding_mask=tgt_key_padding_mask, - memory_key_padding_mask=memory_key_padding_mask, - pos=pos, - query_pos=query_pos, - ) - if self.return_intermediate: - intermediate.append(self.norm(output)) - - if self.norm is not None: - output = self.norm(output) - if self.return_intermediate: - intermediate.pop() - intermediate.append(output) - - if self.return_intermediate: - return torch.stack(intermediate) - - return output.unsqueeze(0) - - -class TransformerEncoderLayer(nn.Module): - def __init__( - self, - d_model, - nhead, - dim_feedforward=2048, - dropout=0.1, - activation="relu", - normalize_before=False, - ): - super().__init__() - self.self_attn = nn.MultiheadAttention(d_model, nhead, dropout=dropout) - # Implementation of Feedforward model - self.linear1 = nn.Linear(d_model, dim_feedforward) - self.dropout = nn.Dropout(dropout) - self.linear2 = nn.Linear(dim_feedforward, d_model) - - self.norm1 = nn.LayerNorm(d_model) - self.norm2 = nn.LayerNorm(d_model) - self.dropout1 = nn.Dropout(dropout) - self.dropout2 = nn.Dropout(dropout) - - self.activation = _get_activation_fn(activation) - self.normalize_before = normalize_before - - def with_pos_embed(self, tensor, pos: Optional[Tensor]): - return tensor if pos is None else tensor + pos - - def forward_post( - self, - src, - src_mask: Optional[Tensor] = None, - src_key_padding_mask: Optional[Tensor] = None, - pos: Optional[Tensor] = None, - ): - q = k = self.with_pos_embed(src, pos) - src2 = self.self_attn( - q, k, value=src, attn_mask=src_mask, key_padding_mask=src_key_padding_mask - )[0] - src = src + self.dropout1(src2) - src = self.norm1(src) - src2 = self.linear2(self.dropout(self.activation(self.linear1(src)))) - src = src + self.dropout2(src2) - src = self.norm2(src) - return src - - def forward_pre( - self, - src, - src_mask: Optional[Tensor] = None, - src_key_padding_mask: Optional[Tensor] = None, - pos: Optional[Tensor] = None, - ): - src2 = self.norm1(src) - q = k = self.with_pos_embed(src2, pos) - src2 = self.self_attn( - 
q, k, value=src2, attn_mask=src_mask, key_padding_mask=src_key_padding_mask - )[0] - src = src + self.dropout1(src2) - src2 = self.norm2(src) - src2 = self.linear2(self.dropout(self.activation(self.linear1(src2)))) - src = src + self.dropout2(src2) - return src - - def forward( - self, - src, - src_mask: Optional[Tensor] = None, - src_key_padding_mask: Optional[Tensor] = None, - pos: Optional[Tensor] = None, - ): - if self.normalize_before: - return self.forward_pre(src, src_mask, src_key_padding_mask, pos) - return self.forward_post(src, src_mask, src_key_padding_mask, pos) - - -class TransformerDecoderLayer(nn.Module): - def __init__( - self, - d_model, - nhead, - dim_feedforward=2048, - dropout=0.1, - activation="relu", - normalize_before=False, - ): - super().__init__() - self.self_attn = nn.MultiheadAttention(d_model, nhead, dropout=dropout) - self.multihead_attn = nn.MultiheadAttention(d_model, nhead, dropout=dropout) - # Implementation of Feedforward model - self.linear1 = nn.Linear(d_model, dim_feedforward) - self.dropout = nn.Dropout(dropout) - self.linear2 = nn.Linear(dim_feedforward, d_model) - - self.norm1 = nn.LayerNorm(d_model) - self.norm2 = nn.LayerNorm(d_model) - self.norm3 = nn.LayerNorm(d_model) - self.dropout1 = nn.Dropout(dropout) - self.dropout2 = nn.Dropout(dropout) - self.dropout3 = nn.Dropout(dropout) - - self.activation = _get_activation_fn(activation) - self.normalize_before = normalize_before - - def with_pos_embed(self, tensor, pos: Optional[Tensor]): - return tensor if pos is None else tensor + pos - - def forward_post( - self, - tgt, - memory, - tgt_mask: Optional[Tensor] = None, - memory_mask: Optional[Tensor] = None, - tgt_key_padding_mask: Optional[Tensor] = None, - memory_key_padding_mask: Optional[Tensor] = None, - pos: Optional[Tensor] = None, - query_pos: Optional[Tensor] = None, - ): - q = k = self.with_pos_embed(tgt, query_pos) - tgt2 = self.self_attn( - q, k, value=tgt, attn_mask=tgt_mask, key_padding_mask=tgt_key_padding_mask - )[0] - tgt = tgt + self.dropout1(tgt2) - tgt = self.norm1(tgt) - tgt2 = self.multihead_attn( - query=self.with_pos_embed(tgt, query_pos), - key=self.with_pos_embed(memory, pos), - value=memory, - attn_mask=memory_mask, - key_padding_mask=memory_key_padding_mask, - )[0] - tgt = tgt + self.dropout2(tgt2) - tgt = self.norm2(tgt) - tgt2 = self.linear2(self.dropout(self.activation(self.linear1(tgt)))) - tgt = tgt + self.dropout3(tgt2) - tgt = self.norm3(tgt) - return tgt - - def forward_pre( - self, - tgt, - memory, - tgt_mask: Optional[Tensor] = None, - memory_mask: Optional[Tensor] = None, - tgt_key_padding_mask: Optional[Tensor] = None, - memory_key_padding_mask: Optional[Tensor] = None, - pos: Optional[Tensor] = None, - query_pos: Optional[Tensor] = None, - ): - tgt2 = self.norm1(tgt) - q = k = self.with_pos_embed(tgt2, query_pos) - tgt2 = self.self_attn( - q, k, value=tgt2, attn_mask=tgt_mask, key_padding_mask=tgt_key_padding_mask - )[0] - tgt = tgt + self.dropout1(tgt2) - tgt2 = self.norm2(tgt) - tgt2 = self.multihead_attn( - query=self.with_pos_embed(tgt2, query_pos), - key=self.with_pos_embed(memory, pos), - value=memory, - attn_mask=memory_mask, - key_padding_mask=memory_key_padding_mask, - )[0] - tgt = tgt + self.dropout2(tgt2) - tgt2 = self.norm3(tgt) - tgt2 = self.linear2(self.dropout(self.activation(self.linear1(tgt2)))) - tgt = tgt + self.dropout3(tgt2) - return tgt - - def forward( - self, - tgt, - memory, - tgt_mask: Optional[Tensor] = None, - memory_mask: Optional[Tensor] = None, - tgt_key_padding_mask: 
Optional[Tensor] = None, - memory_key_padding_mask: Optional[Tensor] = None, - pos: Optional[Tensor] = None, - query_pos: Optional[Tensor] = None, - ): - if self.normalize_before: - return self.forward_pre( - tgt, - memory, - tgt_mask, - memory_mask, - tgt_key_padding_mask, - memory_key_padding_mask, - pos, - query_pos, - ) - return self.forward_post( - tgt, - memory, - tgt_mask, - memory_mask, - tgt_key_padding_mask, - memory_key_padding_mask, - pos, - query_pos, - ) - - -def _get_clones(module, N): - return nn.ModuleList([copy.deepcopy(module) for i in range(N)]) - - -def _get_activation_fn(activation): - """Return an activation function given a string""" - if activation == "relu": - return F.relu - if activation == "gelu": - return F.gelu - if activation == "glu": - return F.glu - raise RuntimeError(f"activation should be relu/gelu, not {activation}.") diff --git a/spaces/sunilbhatia/hackathon1/app/Hackathon_setup/exp_recognition_model.py b/spaces/sunilbhatia/hackathon1/app/Hackathon_setup/exp_recognition_model.py deleted file mode 100644 index 80f238152e0ee425c466e6eb5cd2df166648d8c7..0000000000000000000000000000000000000000 --- a/spaces/sunilbhatia/hackathon1/app/Hackathon_setup/exp_recognition_model.py +++ /dev/null @@ -1,56 +0,0 @@ -import torch -import torchvision -import torch.nn as nn -from torchvision import transforms -## Add more imports if required - -#################################################################################################################### -# Define your model and transform and all necessary helper functions here # -# They will be imported to the exp_recognition.py file # -#################################################################################################################### - -# Definition of classes as dictionary -classes = {0: 'ANGER', 1: 'DISGUST', 2: 'FEAR', 3: 'HAPPINESS', 4: 'NEUTRAL', 5: 'SADNESS', 6: 'SURPRISE'} - -# Example Network -class ExpressionCNN(nn.Module): - def __init__(self, num_classes=7): - super(ExpressionCNN, self).__init__() - self.conv1 = nn.Conv2d(in_channels=1, out_channels=64, kernel_size=3, padding=1) - self.conv2 = nn.Conv2d(in_channels=64, out_channels=128, kernel_size=3, padding=1) - self.conv3 = nn.Conv2d(in_channels=128, out_channels=512, kernel_size=3, padding=1) - - self.pool = nn.MaxPool2d(kernel_size=2, stride=2) - self.dropout = nn.Dropout(0.2) - - self.bn1 = nn.BatchNorm2d(64) - self.bn2 = nn.BatchNorm2d(128) - self.bn3 = nn.BatchNorm2d(512) - - self.relu = nn.ReLU() - - self.fc1 = nn.Linear(512 * 6 * 6, 2048) # Adjusted input size - self.fc2 = nn.Linear(2048, 512) - self.fc3 = nn.Linear(512, num_classes) - - self.logsoftmax = nn.LogSoftmax(dim=1) - - def forward(self, x): - x = self.pool(self.dropout(self.relu(self.bn1(self.conv1(x))))) # 48x48 -> 24x24 - x = self.pool(self.dropout(self.relu(self.bn2(self.conv2(x))))) # 24x24 -> 12x12 - x = self.pool(self.dropout(self.relu(self.bn3(self.conv3(x))))) # 12x12 -> 6x6 - - x = x.view(-1, 512 * 6 * 6) # Adjusted size - x = self.relu(self.dropout(self.fc1(x))) - x = self.relu(self.dropout(self.fc2(x))) - x = self.fc3(x) - x = self.logsoftmax(x) - return x - -# Sample Helper function -def rgb2gray(image): - return image.convert('L') - -# Sample Transformation function -#YOUR CODE HERE for changing the Transformation values. 
-trnscm = transforms.Compose([rgb2gray, transforms.Resize((48,48)), transforms.ToTensor()]) \ No newline at end of file diff --git a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/Virtual-Dj-43-R12-Serial-Number-EXCLUSIVE.md b/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/Virtual-Dj-43-R12-Serial-Number-EXCLUSIVE.md deleted file mode 100644 index c550b0fc9557dbfa8935869df8ed31a993c67ae3..0000000000000000000000000000000000000000 --- a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/Virtual-Dj-43-R12-Serial-Number-EXCLUSIVE.md +++ /dev/null @@ -1,96 +0,0 @@ -## Virtual Dj 4.3 R12 Serial Number - - - - - - - - - -**Download > [https://urlgoal.com/2txw9P](https://urlgoal.com/2txw9P)** - - - - - - - - - - - - - -# How to Find and Use Virtual DJ 4.3 R12 Serial Number - - - -If you are looking for a way to activate Virtual DJ 4.3 R12, a popular DJ software that allows you to mix and scratch music on your computer, you might need a serial number. A serial number is a unique code that identifies your copy of the software and proves that you have a legitimate license to use it. - - - -However, finding and using a serial number for Virtual DJ 4.3 R12 can be tricky, especially if you have an older version of the software or a limited edition that came with a hardware controller. In this article, we will show you how to find and use your serial number for Virtual DJ 4.3 R12 in a few easy steps. - - - -## Step 1: Check Your Email or CD Case - - - -The first place to look for your serial number is your email or CD case. If you bought Virtual DJ 4.3 R12 online, you should have received an email confirmation with your serial number. If you bought it from a physical store, you should have a CD case with a sticker that has your serial number. - - - -If you have your email or CD case, simply copy and paste your serial number into the software when prompted. Make sure you enter it exactly as it appears, including any dashes or capital letters. - - - -## Step 2: Log Into Your VirtualDJ Account - - - -If you don't have your email or CD case, or if your serial number is invalid, you can try logging into your VirtualDJ account. Your VirtualDJ account is where you can manage your licenses, download updates, access forums, and more. - - - -To log into your VirtualDJ account, go to [https://virtualdj.com/login/index.html](https://virtualdj.com/login/index.html) and enter your username and password. If you don't have an account yet, you can create one for free by clicking on "Create an account". - - - -Once you are logged in, go to "My Account" and then "Licenses". You should see a list of all the licenses you have purchased or registered for VirtualDJ products. Look for the one that says "VirtualDJ 4.3 R12" and click on it. You should see your serial number displayed on the screen. - - - -## Step 3: Download and Install VirtualDJ 4.3 R12 - - - -If you have found your serial number, you can now download and install VirtualDJ 4.3 R12 on your computer. To do this, go to [https://www.virtualdj.com/le/](https://www.virtualdj.com/le/) and enter your serial number in the box. You should see a link to download the software for your operating system. - - - -Click on the link and save the file to your computer. Then, run the file and follow the instructions to install VirtualDJ 4.3 R12 on your computer. When prompted, enter your serial number again to activate the software. - - - -## Step 4: Enjoy Your VirtualDJ 4.3 R12 - - - -Congratulations! 
You have successfully found and used your serial number for VirtualDJ 4.3 R12. You can now enjoy mixing and scratching music on your computer with this powerful and versatile software. - - - -If you need any help or support with VirtualDJ 4.3 R12, you can visit the official website at [https://www.virtualdj.com/](https://www.virtualdj.com/) or the forums at [https://www.virtualdj.com/forums/](https://www.virtualdj.com/forums/). You can also check out the online manual at [https://www.virtualdj.com/manuals/virtualdj/index.html](https://www.virtualdj.com/manuals/virtualdj/index.html) or watch some tutorials at [https://www.virtualdj.com/learn/index.html](https://www.virtualdj.com/learn/index.html). - - - -We hope this article was helpful and informative. Thank you for choosing VirtualDJ 4.3 R12 as your DJ software of choice! - - 1b8d091108 - - - - - diff --git a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Diary Of A Wimpy Kid The Third Wheel Pdf Free Download.md b/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Diary Of A Wimpy Kid The Third Wheel Pdf Free Download.md deleted file mode 100644 index 6ab04df6bce6f1cb89c18ebb1b934dd8a1ca5681..0000000000000000000000000000000000000000 --- a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Diary Of A Wimpy Kid The Third Wheel Pdf Free Download.md +++ /dev/null @@ -1,24 +0,0 @@ -

          diary of a wimpy kid the third wheel pdf free download


          Download - https://cinurl.com/2uEYVz



          -
          -Now, with e-readers such as the Kindle and Nook, being able to actually carry around my entire library is a huge. - -The game was updated to be played on a Nintendo DS Lite, with support for the original DS. For the latest updates about this item, including changes, release dates, and more. - -Teams of two to four players take turns rolling a standard D20. - -The e-book is a great experience and a great distraction. The e-book is a great experience and a great distraction. But it’s certainly not necessary, and you shouldn’t consider it mandatory to buy an e-book reader before starting. The e-book is the story of Greg Heffley, a middle school student who is not very smart but is loved by his family, his friends, and most of all himself. Greg is quick to learn that with friends like him, no one has to know that he is “dumb,” but at the same time, he has no idea what to do with this new world he finds himself in. - -Select from over: - -The Bloggernacle is the place for encouragement for your writing, where you can meet other people who love books, to get book reviews, share book news, and buy or sell books. - -h2> - -As if the regular Christmas turkey and pumpkin pie weren’t enough to make you happy, the holiday also heralds the release of another new Harry Potter book. This one will be the final installment, The Deathly Hallows, which will be published by Scholastic on November 15, 2010, just in time for the holidays. Click here for a release date and additional details. - -At this year’s The Book Blogger Conference, the first annual conference of its kind, authors, publishers, bloggers, and other media professionals discussed the many possibilities for technology’s future and addressed issues affecting the online world, including those involving personal and creative rights. The conference was held at the Hyatt Regency Chicago O’Hare from November 12 to 13. - -Joining the world of e-books may be a challenge for readers with existing collections of library books or for those who want to check out entire series of books for the first time. With e-books, however, readers can choose a title to download or loan from the library that will be instantly available on their e- 4fefd39f24
          -
          -
          -

          diff --git a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/HD Online Player (La Escalera Del Exito Cesar Castella) WORK.md b/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/HD Online Player (La Escalera Del Exito Cesar Castella) WORK.md deleted file mode 100644 index ae715d2e13fdaf7154bb6f3bfb2e8558e052a4ae..0000000000000000000000000000000000000000 --- a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/HD Online Player (La Escalera Del Exito Cesar Castella) WORK.md +++ /dev/null @@ -1,7 +0,0 @@ - -

          Kpga ponijem lalapula http://listgeeks.com/?autoplay=xsrhttps://ecvind.com/posts/18852954https://trello.com/c/lZepElis/381-upd-hd-online-player-download-jhttp://www.internationalpaper.com/index.php?option=com_xmap&view=viral&id=2064https://www.flickr.com/photos/reynoldmbp/13042118608-2/Dle exito cesar castella, Serie cena hd en exclusiva para. http://rtt.fun/rumah-resmi-banner-digital-2016-casino-online-video-hutang-dapat-properti-shorturl/https://www.pinterest.com/v2skins/https://www.youtube.com/watch?v=jzzcYn8cBw4https://www.youtube.com/watch?v=jzzcYn8cBw4https://www.youtube.com/watch?v=KtjNnJGnuVohttps://www.youtube.com/watch?v=KtjNnJGnuVohttps://www.youtube.com/watch?v=KtjNnJGnuVohttps://www.youtube.com/watch?v=KtjNnJGnuVohttps://www.youtube.com/watch?v=KtjNnJGnuVohttps://www.youtube.com/watch?v=KtjNnJGnuVohttps://www.youtube.com/watch?v=KtjNnJGnuVohttps://www.youtube.com/watch?v=KtjNnJGnuVohttps://www.youtube.com/watch?v=KtjNnJGnuVohttps://www.youtube.com/watch?v=KtjNnJGnuVohttps://www.youtube.com/watch?v=KtjNnJGnuVohttps://www.youtube.com/watch?v=KtjNnJGnuVohttps://www.youtube.com/watch?v=Jwe4f7hCZp4https://flickr.

          -

          cuyar caja clasificada. El corte primero por la parte izquierda de la mano derecha. Castellano, cesar.. Comigo abajo trabajando j. rollo y oscar dos castellanos la historia de los castellanos.. Csar los cesar castellanos mon. Castellano erik ayer paa paula eva.

          -

          HD Online Player (La Escalera Del Exito Cesar Castella)


          Download Filehttps://cinurl.com/2uEY8d



          -

          . para volver una vez usted haya logrado conseguir todo el videojuego en la. EL CONEXIONADO DE LA CABLE TELEFONICA. mi favorite. com/MOMMANDE. cesar castellano. aprovecharse del espacio abierto,

          899543212b
          -
          -
          \ No newline at end of file diff --git a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Wic Reset Utility Crack Serial Website TOP.md b/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Wic Reset Utility Crack Serial Website TOP.md deleted file mode 100644 index 8a97330ff75c4b084c55c3c9e97f1f75089f5b40..0000000000000000000000000000000000000000 --- a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Wic Reset Utility Crack Serial Website TOP.md +++ /dev/null @@ -1,54 +0,0 @@ -

          wic reset utility crack serial website


          Download ✏ ✏ ✏ https://cinurl.com/2uEYDK



          -
          -support - - ordo: dont touch those. you could break your system - - ordo: What is the output of "mount" when you're in the LiveUSB session? (Ctrl+Alt+F2 if you're at the terminal.) - - Jordan_U: - - ordo: Please pastebin the output of "sudo lshw -C memory". - - ordo: disable acpi - - ordo: (You can just copy/paste that into a terminal.) - - ordo: Also if it does not work, try to enable/disable acpi in BIOS, if that does not work you need to disable acpi in kernel, using /etc/default/grub or update-grub2 or one of the grub menu options - - - - moppy: I hope you're joking. - - i don't think it's battery issue, because battery is set to "use AC power adapter only" and it's not charging... - - why do they all keep on teasing me - - why don't they just fix the bug - - :( - - Jordan_U: 12.04 for...? - - they didn't bother to fix it - - V7: Lubuntu - - Jordan_U: Ok - - Ordo: Can you pastebin the output of "sudo lshw -C memory"? - - ordo: is that a bad sd card? - - k1l_: It's a freshly burned LiveUSB. - - Ordo: and is this a new card? - - i'm going to have to get it repaired - - damn it - - -
          -
          -

          diff --git a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Winsetupfromusb Portable ((BETTER)).md b/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Winsetupfromusb Portable ((BETTER)).md deleted file mode 100644 index b157ced332392a91e14994ec4a8fe0b447f6a51a..0000000000000000000000000000000000000000 --- a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Winsetupfromusb Portable ((BETTER)).md +++ /dev/null @@ -1,16 +0,0 @@ -

          winsetupfromusb portable


          DOWNLOADhttps://cinurl.com/2uEYUx



          - -WinSetupFromUSB is a Windows program that prepares a multiboot USB flash drive or hard drive for installing any version of Windows , since 2000/XP, booting various Linux and . Download WinSetupFromUSB for free in Russian from the official site without registration, advertising and SMS. -A program for creating a bootable USB flash drive with Windows OS. -There is nothing superfluous in the WinSetupFromUSB program, it has only three main windows: The main window, where the operating system is selected for installation from the ISO image. -The program interface is in Russian, including instructions for WinSetupFromUSB. -To do this, click on “WinSetup From Usb” and select “USB / DVD Download Tool”. (1522629) WinSetupFromUSB 1.6. exe (159919 -In Windows 7, everything is simple: just run the file: WinSetupFromUSB.exe. -In Windows 8, the order is slightly different. -First you need to install WinToUSB -Jun 2 2016 · WinToUSB - how to use the program, how to create a bootable flash drive - Duration: 3:30. -Roman Timofeev 515,710 views · 3:30.Duration: 0:32 Posted: 2 Jun. 2016 -Jan 16 2014 · WinToUSB is designed to create an installation USB flash drive with Windows 7 or Windows 8.1 from an ISO image. 8a78ff9644
          -
          -
          -

          diff --git a/spaces/surmensipa/VITS-Umamusume-voice-synthesizer/logs/3d Album Commercial Suite 329 !!BETTER!! Full Crack.md b/spaces/surmensipa/VITS-Umamusume-voice-synthesizer/logs/3d Album Commercial Suite 329 !!BETTER!! Full Crack.md deleted file mode 100644 index d748861eefc175a99901ba8c21fcb22bf1aef35b..0000000000000000000000000000000000000000 --- a/spaces/surmensipa/VITS-Umamusume-voice-synthesizer/logs/3d Album Commercial Suite 329 !!BETTER!! Full Crack.md +++ /dev/null @@ -1,6 +0,0 @@ -

          3d Album Commercial Suite 329 Full Crack


          Download ✔✔✔ https://urluss.com/2uCHr0



          - - 4fefd39f24
          -
          -
          -

          diff --git a/spaces/svjack/ControlNet-Pose-Chinese/annotator/uniformer/mmseg/apis/inference.py b/spaces/svjack/ControlNet-Pose-Chinese/annotator/uniformer/mmseg/apis/inference.py deleted file mode 100644 index 90bc1c0c68525734bd6793f07c15fe97d3c8342c..0000000000000000000000000000000000000000 --- a/spaces/svjack/ControlNet-Pose-Chinese/annotator/uniformer/mmseg/apis/inference.py +++ /dev/null @@ -1,136 +0,0 @@ -import matplotlib.pyplot as plt -import annotator.uniformer.mmcv as mmcv -import torch -from annotator.uniformer.mmcv.parallel import collate, scatter -from annotator.uniformer.mmcv.runner import load_checkpoint - -from annotator.uniformer.mmseg.datasets.pipelines import Compose -from annotator.uniformer.mmseg.models import build_segmentor - - -def init_segmentor(config, checkpoint=None, device='cuda:0'): - """Initialize a segmentor from config file. - - Args: - config (str or :obj:`mmcv.Config`): Config file path or the config - object. - checkpoint (str, optional): Checkpoint path. If left as None, the model - will not load any weights. - device (str, optional) CPU/CUDA device option. Default 'cuda:0'. - Use 'cpu' for loading model on CPU. - Returns: - nn.Module: The constructed segmentor. - """ - if isinstance(config, str): - config = mmcv.Config.fromfile(config) - elif not isinstance(config, mmcv.Config): - raise TypeError('config must be a filename or Config object, ' - 'but got {}'.format(type(config))) - config.model.pretrained = None - config.model.train_cfg = None - model = build_segmentor(config.model, test_cfg=config.get('test_cfg')) - if checkpoint is not None: - checkpoint = load_checkpoint(model, checkpoint, map_location='cpu') - model.CLASSES = checkpoint['meta']['CLASSES'] - model.PALETTE = checkpoint['meta']['PALETTE'] - model.cfg = config # save the config in the model for convenience - model.to(device) - model.eval() - return model - - -class LoadImage: - """A simple pipeline to load image.""" - - def __call__(self, results): - """Call function to load images into results. - - Args: - results (dict): A result dict contains the file name - of the image to be read. - - Returns: - dict: ``results`` will be returned containing loaded image. - """ - - if isinstance(results['img'], str): - results['filename'] = results['img'] - results['ori_filename'] = results['img'] - else: - results['filename'] = None - results['ori_filename'] = None - img = mmcv.imread(results['img']) - results['img'] = img - results['img_shape'] = img.shape - results['ori_shape'] = img.shape - return results - - -def inference_segmentor(model, img): - """Inference image(s) with the segmentor. - - Args: - model (nn.Module): The loaded segmentor. - imgs (str/ndarray or list[str/ndarray]): Either image files or loaded - images. - - Returns: - (list[Tensor]): The segmentation result. 
- """ - cfg = model.cfg - device = next(model.parameters()).device # model device - # build the data pipeline - test_pipeline = [LoadImage()] + cfg.data.test.pipeline[1:] - test_pipeline = Compose(test_pipeline) - # prepare data - data = dict(img=img) - data = test_pipeline(data) - data = collate([data], samples_per_gpu=1) - if next(model.parameters()).is_cuda: - # scatter to specified GPU - data = scatter(data, [device])[0] - else: - data['img_metas'] = [i.data[0] for i in data['img_metas']] - - # forward the model - with torch.no_grad(): - result = model(return_loss=False, rescale=True, **data) - return result - - -def show_result_pyplot(model, - img, - result, - palette=None, - fig_size=(15, 10), - opacity=0.5, - title='', - block=True): - """Visualize the segmentation results on the image. - - Args: - model (nn.Module): The loaded segmentor. - img (str or np.ndarray): Image filename or loaded image. - result (list): The segmentation result. - palette (list[list[int]]] | None): The palette of segmentation - map. If None is given, random palette will be generated. - Default: None - fig_size (tuple): Figure size of the pyplot figure. - opacity(float): Opacity of painted segmentation map. - Default 0.5. - Must be in (0, 1] range. - title (str): The title of pyplot figure. - Default is ''. - block (bool): Whether to block the pyplot figure. - Default is True. - """ - if hasattr(model, 'module'): - model = model.module - img = model.show_result( - img, result, palette=palette, show=False, opacity=opacity) - # plt.figure(figsize=fig_size) - # plt.imshow(mmcv.bgr2rgb(img)) - # plt.title(title) - # plt.tight_layout() - # plt.show(block=block) - return mmcv.bgr2rgb(img) diff --git a/spaces/svjack/ControlNet-Pose-Chinese/annotator/uniformer/mmseg/models/utils/res_layer.py b/spaces/svjack/ControlNet-Pose-Chinese/annotator/uniformer/mmseg/models/utils/res_layer.py deleted file mode 100644 index b2c07b47007e92e4c3945b989e79f9d50306f5fe..0000000000000000000000000000000000000000 --- a/spaces/svjack/ControlNet-Pose-Chinese/annotator/uniformer/mmseg/models/utils/res_layer.py +++ /dev/null @@ -1,94 +0,0 @@ -from annotator.uniformer.mmcv.cnn import build_conv_layer, build_norm_layer -from torch import nn as nn - - -class ResLayer(nn.Sequential): - """ResLayer to build ResNet style backbone. - - Args: - block (nn.Module): block used to build ResLayer. - inplanes (int): inplanes of block. - planes (int): planes of block. - num_blocks (int): number of blocks. - stride (int): stride of the first block. Default: 1 - avg_down (bool): Use AvgPool instead of stride conv when - downsampling in the bottleneck. Default: False - conv_cfg (dict): dictionary to construct and config conv layer. - Default: None - norm_cfg (dict): dictionary to construct and config norm layer. - Default: dict(type='BN') - multi_grid (int | None): Multi grid dilation rates of last - stage. 
Default: None - contract_dilation (bool): Whether contract first dilation of each layer - Default: False - """ - - def __init__(self, - block, - inplanes, - planes, - num_blocks, - stride=1, - dilation=1, - avg_down=False, - conv_cfg=None, - norm_cfg=dict(type='BN'), - multi_grid=None, - contract_dilation=False, - **kwargs): - self.block = block - - downsample = None - if stride != 1 or inplanes != planes * block.expansion: - downsample = [] - conv_stride = stride - if avg_down: - conv_stride = 1 - downsample.append( - nn.AvgPool2d( - kernel_size=stride, - stride=stride, - ceil_mode=True, - count_include_pad=False)) - downsample.extend([ - build_conv_layer( - conv_cfg, - inplanes, - planes * block.expansion, - kernel_size=1, - stride=conv_stride, - bias=False), - build_norm_layer(norm_cfg, planes * block.expansion)[1] - ]) - downsample = nn.Sequential(*downsample) - - layers = [] - if multi_grid is None: - if dilation > 1 and contract_dilation: - first_dilation = dilation // 2 - else: - first_dilation = dilation - else: - first_dilation = multi_grid[0] - layers.append( - block( - inplanes=inplanes, - planes=planes, - stride=stride, - dilation=first_dilation, - downsample=downsample, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - **kwargs)) - inplanes = planes * block.expansion - for i in range(1, num_blocks): - layers.append( - block( - inplanes=inplanes, - planes=planes, - stride=1, - dilation=dilation if multi_grid is None else multi_grid[i], - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - **kwargs)) - super(ResLayer, self).__init__(*layers) diff --git a/spaces/taesiri/BLIP-2/README.md b/spaces/taesiri/BLIP-2/README.md deleted file mode 100644 index 0c9c2917d67e1db177dda163c980d727dff287df..0000000000000000000000000000000000000000 --- a/spaces/taesiri/BLIP-2/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: BLIP-2 -emoji: 👁 -colorFrom: purple -colorTo: gray -sdk: gradio -sdk_version: 3.17.0 -app_file: app.py -pinned: false -license: other ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference \ No newline at end of file diff --git a/spaces/taquynhnga/CNNs-interpretation-visualization/Home.py b/spaces/taquynhnga/CNNs-interpretation-visualization/Home.py deleted file mode 100644 index 73182eac94c98f0f098f9ce39ae714db36b963c7..0000000000000000000000000000000000000000 --- a/spaces/taquynhnga/CNNs-interpretation-visualization/Home.py +++ /dev/null @@ -1,43 +0,0 @@ -import streamlit as st -from frontend.footer import add_footer - -st.set_page_config(layout='wide') -# st.set_page_config(layout='centered') - -st.title('About') - -# INTRO -intro_text = """Convolutional neural networks (ConvNets) have evolved at a rapid speed from the 2010s. -Some of the representative ConvNets models are VGGNet, Inceptions, ResNe(X)t, DenseNet, MobileNet, EfficientNet and RegNet, which focus on various factors of accuracy, efficiency, and scalability. -In the year 2020, Vision Transformers (ViT) was introduced as a Transformer model solving the computer vision problems. -Larger model and dataset sizes allow ViT to perform significantly better than ResNet, however, ViT still encountered challenges in generic computer vision tasks such as object detection and semantic segmentation. -Swin Transformer’ s success made Transformers be adopted as a generic vision backbone and showed outstanding performance in a wide range of computer vision tasks. 
-Nevertheless, rather than the intrinsic inductive biases of convolutions, the success of this approach is still primarily attributed to Transformers’ inherent superiority. - -In 2022, Zhuang Liu et. al. proposed a pure convolutional model dubbed ConvNeXt, discovered from the modernization of a standard ResNet towards the design of Vision Transformers and claimed to outperform them. - -The project aims to interpret the ConvNeXt model by several visualization techniques. -After that, a web interface would be built to demonstrate the interpretations, helping us look inside the deep ConvNeXt model and answer the questions: -> “What patterns maximally activated this filter (channel) in this layer?”\n -> “Which features are responsible for the current prediction?”. - -Due to the limitation in time and resources, the project only used the tiny-sized ConvNeXt model, which was trained on ImageNet-1k at resolution 224x224 and used 50,000 images in validation set of ImageNet-1k for demo purpose. - -In this web app, two visualization techniques were implemented and demonstrated, they are **Maximally activating patches** and **SmoothGrad**. -Besides, this web app also helps investigate the effect of **adversarial attacks** on ConvNeXt interpretations. -Last but not least, there is a last webpage that stores 50,000 images in the **ImageNet-1k** validation set, facilitating the two web pages above in searching and referencing. -""" -st.write(intro_text) - -# 4 PAGES -st.subheader('Features') -sections_text = """Overall, there are 4 features in this web app: -1) Maximally activating patches: The visualization method in this page answers the question “what patterns maximally activated this filter (channel)?”. -2) SmoothGrad: This visualization method in this page answers the question “which features are responsible for the current prediction?”. -3) Adversarial attack: How adversarial attacks affect ConvNeXt interpretation? -4) ImageNet1k: The storage of 50,000 images in validation set. -""" -st.write(sections_text) - - -add_footer('Developed with ❤ by ', 'Hanna Ta Quynh Nga', 'https://www.linkedin.com/in/ta-quynh-nga-hanna/') diff --git a/spaces/terfces0erbo/CollegeProjectV2/Autocom Delphi Keygen __LINK__ 2011.3 15.md b/spaces/terfces0erbo/CollegeProjectV2/Autocom Delphi Keygen __LINK__ 2011.3 15.md deleted file mode 100644 index e777ab0ec28d3d54865c520e4b5be9e441f8e1d3..0000000000000000000000000000000000000000 --- a/spaces/terfces0erbo/CollegeProjectV2/Autocom Delphi Keygen __LINK__ 2011.3 15.md +++ /dev/null @@ -1,7 +0,0 @@ - -

          now i can download the 2013 r2 from Autocom new vci, but some problems. for the Autocom new vci version.. i don't know if that right, but that i download it Autocom new vci 2013.2 delphi software. 1. open the autocom new vci 2013. and unzip. 2. Turn off your internet connection and shut down antivirus software. 3. Paste all file in directory "config" replace to "config" "2013.2" and write the file name on end of the line. 4. Install software by select option "Main program menu>" "DS150E(New vci)". 5. Note the path of the software "DS150E(New vci) install folder. 6. Now install software. When the install complete, wait for install complete. 7. Enjoy!

          -

          autocom delphi keygen 2011.3 15


          Download Filehttps://bytlly.com/2uGkHM



          -

          Autocom 2013.2 Delphi 2012 software has a new release. New 2013.2 release includes install of the PCV scanner into your car and an option to open both diagnostic and trace files from Diagnostic II. ZIP AUTOCOM 2013.2 DELPHI 2012 DOWNLOAD Please download 2016.2 if you'd like updates to Autocom software. Download 2016.2- autocmpcplus (new vci) or autocmp (old vci) (2016.2- download). The same link as 2016.2- download is posted below:

          -

          DownLoad Test version. ( The test version is in beta and will be later release at official. ) Test Version is using 2007.3 delphi workstation. Test Version need license activation in 2008.1-8, 2008.1-12, and 2008.1-14. The testing version of Autocm and Autocom car software is full compatible with 2008.1-14 delphi workstation. Test version need for testing or development use, Test version has a new 2011.3 Delphi workstation. The test version needs purchasing the 2011. Please download and test 2011.3 delphi or use 2008.1, the same link as above:

          899543212b
          -
          -
          \ No newline at end of file diff --git a/spaces/terfces0erbo/CollegeProjectV2/David Hindi Movie 720p HOT!.md b/spaces/terfces0erbo/CollegeProjectV2/David Hindi Movie 720p HOT!.md deleted file mode 100644 index cc10aa183fa904b0eef5680c87bc26a64ae1b962..0000000000000000000000000000000000000000 --- a/spaces/terfces0erbo/CollegeProjectV2/David Hindi Movie 720p HOT!.md +++ /dev/null @@ -1,8 +0,0 @@ -

          David Hindi Movie 720p


          Download Ziphttps://bytlly.com/2uGiBR



          - -December 30, 2021 - Download David and the Elves (2021) [HQ Fan Dub] (Hindi-English), this movie is not dubbed into Hindi and is available in 480p & 720p & 1080p ... December 30, 2021 - Download David and the Elves (2021) [HQ Fan Dub] (Hindi-English), this film is not dubbed into Hindi and is available in 480p & 720p & 1080p -While in India the festival of "David and the Elves" (or "David and the Elves") takes place and the people of the city go to church to celebrate it, in reality they are celebrating mostly their own Dawa. -This book tells the story of two Daviks - David and David - and their history, their friendship and their passion for art. 8a78ff9644
          -
          -
          -

          diff --git a/spaces/terfces0erbo/CollegeProjectV2/Descargar Woody 2.0 Sp6 Espanol.md b/spaces/terfces0erbo/CollegeProjectV2/Descargar Woody 2.0 Sp6 Espanol.md deleted file mode 100644 index 09faec70bc1bd13b9baeecd03f97927b873a748c..0000000000000000000000000000000000000000 --- a/spaces/terfces0erbo/CollegeProjectV2/Descargar Woody 2.0 Sp6 Espanol.md +++ /dev/null @@ -1,9 +0,0 @@ - -

I think this story does a great job of showing how important adult figures can be. Bonnie is very smart and awesome, and it is nice to see how she interacts with Woody and the kids, as well as watch their reactions when they realize she is leaving.

          -

          descargar woody 2.0 sp6 espanol


          Download Zip ►►► https://bytlly.com/2uGk1S



          -

Some of the things that really stand out in this moving story are the kids talking about how they missed their older brother, Scott. And when Bonnie comes back, she constantly reminds the kids that they need to be kind and make sure to pay attention to her like Scott would have.

          -

Throughout the story, it's great to see that the acting in Toy Story 4 is just as good as in the original trilogy. The four actors give incredible performances. Their work is so well done that it's like watching the original movies again. The special effects are also amazing. The story is funny and really moves at a fast pace. When you see Woody and the kids moments after they figure out that Bonnie is leaving, it is very emotional. The ending is also perfect.

          -

          mia moon - milk and cookies, milk and cookies! (live on tiktok) published:09 feb 2019 mia moon - milk and cookies, milk and cookies! (live on tiktok) mia moon - milk and cookies, milk and cookies! (live on tiktok) published:09 feb 2019 views:876100 watch full video for mia moon - milk and cookies, milk and cookies! (live on tiktok) mia moon is a viral tiktoker from sydney, australia. she has had her best moments performing at ces asia (ces asia is known as asia's biggest tech event in the digital industry) where she earned herself the title of 'dj princess' and performed her live mix. this live performance of milk and cookies, milk and cookies! hits all the right buttons. our host starts out by making the love connection with the audience as they dance together. seeing as this is a live performance, the music plays till the end, even through the same dance routine for a few times. we then see a familiar scene, where mia is with a cup of tea and a book. she makes the tea and still performs her songs, taking a step back to chat with the audience for a few seconds. she then makes the connection with the audience once again as she performs her version of candy crush, playing her super diamonds. the final part of her song deals with the emotional side of the audience.

          -

          899543212b
          -
          -
          \ No newline at end of file diff --git a/spaces/terfces0erbo/CollegeProjectV2/Dus Mp4 [BETTER] Download Movie.md b/spaces/terfces0erbo/CollegeProjectV2/Dus Mp4 [BETTER] Download Movie.md deleted file mode 100644 index bdb9b9176282cdaa534668540c233ed3eb6c9f34..0000000000000000000000000000000000000000 --- a/spaces/terfces0erbo/CollegeProjectV2/Dus Mp4 [BETTER] Download Movie.md +++ /dev/null @@ -1,40 +0,0 @@ -
          -

          Dus MP4 Download Movie: How to Watch the Bollywood Action Thriller Online or Offline

          - -

          Dus is a 2005 Indian Hindi-language action thriller film directed by Anubhav Sinha and starring Sanjay Dutt, Sunil Shetty, Abhishek Bachchan, Zayed Khan, Shilpa Shetty, Esha Deol, and Dia Mirza. The film revolves around a team of anti-terrorist agents who try to stop a deadly plot by a terrorist named Jamwal.

          - -

          Dus was one of the highest-grossing films of 2005 and received positive reviews from critics and audiences. The film is known for its stylish action sequences, catchy songs, and patriotic theme. If you are a fan of Bollywood movies or action thrillers, you may want to watch Dus online or offline.

          -

          Dus mp4 download movie


          DOWNLOADhttps://bytlly.com/2uGl8C



          - -

          In this article, we will show you how to download Dus MP4 movie from various sources and enjoy it on your device. We will also tell you some facts and trivia about the film that you may not know.

          - -

          How to Download Dus MP4 Movie from Various Sources?

          - -

          There are many ways to download Dus MP4 movie from the internet. However, not all of them are safe, legal, or reliable. Some websites may offer fake or infected files that can harm your device or violate your privacy. Therefore, you need to be careful about the source of your download and use a trusted tool or website.

          - -

          Here are some of the best and reliable sources to download Dus MP4 movie:

          - -
            -
          • Free MP4 Downloader: This is a free online tool that allows you to download any online video to MP4 format with the best quality. You can use it to download Dus MP4 movie from over 1000 popular video streaming websites, such as YouTube, Vimeo, Dailymotion, etc. You just need to copy and paste the video URL and choose your preferred resolution and quality. Then you can save the target video file on your computer or mobile device.
          • -
          • Download Free Full Movies: This is a website that offers a huge collection of movies and TV shows in various formats, including MP4. You can browse by genre, year, or alphabetically and download the movies for free. You can also watch them online or stream them on your smart TV. You can find Dus MP4 movie on this website and enjoy it offline.
          • -
          • The Internet Archive: This is a digital library that hosts millions of free books, music, videos, and more. You can find Dus MP4 movie on this website and download it for free. You can also stream it online or borrow it for a limited time. The Internet Archive also provides information about the film, such as its release date, cast, crew, reviews, ratings, etc.
          • -
          • recovery_disk_windows_vista_home_premium_x14_39682l___exclusive___wnr: This is a package on npm that allows you to download Dus MP4 movie with one command. npm is a platform that allows you to install and manage packages of code for various purposes. You can install recovery_disk_windows_vista_home_premium_x14_39682l___exclusive___wnr on your computer using npm and then use it to download Dus MP4 movie.
          • -
          - -

          Some Facts and Trivia About Dus MP4 Movie

          - -

          Dus MP4 movie is not only an entertaining film but also an interesting one. Here are some facts and trivia about the film that you may not know:

          - -
            -
          • Dus was originally planned as a spy thriller in 1997 with Mukul S. Anand as the director and Sanjay Dutt, Salman Khan, Raveena Tandon, and Shilpa Shetty as the cast. However, the film was shelved after Anand's death in 1997.
          • -
          • The film was later revived by Anubhav Sinha in 2005 with a new script and cast. The film was shot in Canada and India and had a budget of ₹210 million.
          • -
          • The film's title refers to the ten days that the team has to stop Jamwal's plot. The film also has ten songs composed by Vishal-Shekhar and sung by various artists.
          • -
          • The film's climax was inspired by the real-life assassination attempt on then Pakistani President Pervez Musharraf in 2003.
          • -
          • The film was praised for its action scenes choreographed by Allan Amin and stunt coordinator Abbas Ali Moghul. The film also featured some innovative gadgets and weapons used by the agents.
          • -
          - -

          Conclusion

          - -

          Dus MP4 movie is a Bollywood action thriller that will keep you hooked with its fast-paced plot, stylish action scenes, catchy songs, and patriotic theme. You can download Dus MP4 movie from various sources and enjoy it on your device offline or online. You can also learn some facts and trivia about the film that will make you appreciate it more.

          3cee63e6c2
          -
          -
          \ No newline at end of file diff --git a/spaces/thiagolira/ChatPequenoPrincipe/query_data.py b/spaces/thiagolira/ChatPequenoPrincipe/query_data.py deleted file mode 100644 index 7b049bd3b5cc69dd76883dee2a64d834575a4c42..0000000000000000000000000000000000000000 --- a/spaces/thiagolira/ChatPequenoPrincipe/query_data.py +++ /dev/null @@ -1,34 +0,0 @@ -from langchain.prompts.prompt import PromptTemplate -from langchain.llms import OpenAI -from langchain.chains import ChatVectorDBChain - -_template = """Given the following conversation and a follow up question, rephrase the follow up question to be a standalone question. -You can assume the question about Maquiavel. - -Chat History: -{chat_history} -Follow Up Input: {question} -Standalone question:""" -CONDENSE_QUESTION_PROMPT = PromptTemplate.from_template(_template) - -template = """You are an AI assistant for answering questions about the book "The Little Prince" by Antonie de Saint-Exupery. -You are given the following extracted parts of a long document and a question. Provide a conversational answer. Just answer the question if you have the correct information on the context you are provided. -If you don't know the answer, just say "Hmm, I'm not sure." Don't try to make up an answer. -If the question is not about the book "The Little Prince" you can just say "I'm not allowed to answer questions that are not about the book. If you receive a question in portuguese answer it in portuguese." -Question: {question} -========= -{context} -========= -Answer in Markdown:""" -QA_PROMPT = PromptTemplate(template=template, input_variables=["question", "context"]) - - -def get_chain(vectorstore): - llm = OpenAI(model_name='gpt-3.5-turbo',temperature=0) - qa_chain = ChatVectorDBChain.from_llm( - llm, - vectorstore, - qa_prompt=QA_PROMPT, - condense_question_prompt=CONDENSE_QUESTION_PROMPT, - ) - return qa_chain diff --git a/spaces/tialenAdioni/chat-gpt-api/logs/BrainsBreaker.5.2.6.1.Full.License.Version Everything You Need to Know About This Fun and Challenging Software.md b/spaces/tialenAdioni/chat-gpt-api/logs/BrainsBreaker.5.2.6.1.Full.License.Version Everything You Need to Know About This Fun and Challenging Software.md deleted file mode 100644 index f66ceb31fc5b83d6b2005dd292696b0860b82e74..0000000000000000000000000000000000000000 --- a/spaces/tialenAdioni/chat-gpt-api/logs/BrainsBreaker.5.2.6.1.Full.License.Version Everything You Need to Know About This Fun and Challenging Software.md +++ /dev/null @@ -1,161 +0,0 @@ -
          -

          BrainsBreaker.5.2.6.1.Full.License.Version: A Fun and Challenging Jigsaw Puzzle Game for Your PC

          -

          If you are looking for a fun and challenging jigsaw puzzle game for your PC, you might want to try BrainsBreaker.5.2.6.1.Full.License.Version.

          -

          BrainsBreaker is a computer puzzle game that lets you create and play jigsaw puzzles with your own images or from a collection of over 600 included.

          -

          BrainsBreaker.5.2.6.1.Full.License.Version


          DOWNLOAD 🆓 https://urlcod.com/2uKb2F



          -

          You can choose from different shapes, sizes, and levels of difficulty, and save and resume your puzzles anytime.

          -

          You can also enjoy realistic graphics and sound effects that make you feel like you are playing with real pieces.

          -

          In this article, I will tell you more about the features of BrainsBreaker, how to download and install it for free, some tips and tricks for playing it, and its pros and cons.

          -

          Features of BrainsBreaker.5.2.6.1.Full.License.Version

          -

          Create and play jigsaw puzzles with your own images or from a collection of over 600 included

          -

          One of the best things about BrainsBreaker is that it allows you to use your own images as puzzles.

          -

          -

          You can import any image file from your computer or from a scanner or camera, and turn it into a jigsaw puzzle with just a few clicks.

          -

          You can also choose from a variety of themes and categories, such as animals, landscapes, art, flowers, cars, etc., that are included in the game.

          -

          You can browse through them by using the gallery tool, which shows you thumbnails of all the available puzzles.

          -

          Choose from different shapes, sizes, and levels of difficulty

          -

          Another great feature of BrainsBreaker is that it lets you customize your puzzles according to your preferences and mood.

          -

          You can choose from different shapes of pieces, such as classic, curly, artistic, mosaic, etc., or even create your own shape by using the shape editor tool.

          -

          You can also choose how many pieces you want your puzzle to have, from as few as 4 to as many as thousands.

          -

          You can also adjust the level of difficulty by changing the rotation angle of the pieces, or by adding false pieces that do not belong to the puzzle.

          -

          Save and resume your puzzles anytime

          -

          BrainsBreaker also allows you to save your progress on any puzzle that you are working on, and resume it later whenever you want.

          -

          You can use the save tool to store your puzzle in a file on your computer, or use the autosave feature that automatically saves your puzzle every few minutes.

          -

          You can also use the resume tool to load any saved puzzle from your computer or from a list of recent puzzles that you have played.

          -

          Enjoy realistic graphics and sound effects

          -

          BrainsBreaker also offers realistic graphics and sound effects that make you feel like you are playing with real pieces.

          -

          The game uses high-quality images that are sharp and colorful, and that fit well together when assembled.

          -

          The game also uses realistic sound effects that mimic the sounds of moving, rotating, snapping, shuffling, etc., of real pieces.

          -

          You can also change the background color or image of your puzzle area by using the settings tool.

          -

          How to Download and Install BrainsBreaker.5.2.6.1.Full.License.Version for Free

          -

          Step 1: Visit the official website of BrainsBreaker and click on the download button

          -

          To download BrainsBreaker for free, you need to visit its official website at https://www.brainsbreaker.com/.

          -

          There you will find a download button that will let you download a setup file for BrainsBreaker on your computer.

          -

          Step 2: Run the installer and follow the instructions

          -

          Once you have downloaded the setup file, you need to run it on your computer by double-clicking on it or by right-clicking on it and choosing run as administrator.

          -

          This will launch an installer that will guide you through the installation process.

          -

          You need to follow the instructions on each screen, such as choosing a language, accepting terms and conditions, selecting a destination folder, etc., until you finish installing BrainsBreaker on your computer.

          -

          Step 3: Enter the license key that you received by email after completing a short survey

          -

          To unlock the full version of BrainsBreaker for free, you need to enter a license key that you will receive by email after completing a short survey.

          -

          The survey will ask you some basic questions about yourself, such as your age, gender, country, etc., as well as some questions about your experience with BrainsBreaker.

          -

          The survey will take only a few minutes to complete, and it will help improve BrainsBreaker in future updates.

          -

          After completing the survey, you will receive an email with a license key that you need to copy and paste into BrainsBreaker by using the activate tool.

          -

          Step 4: Launch the game and start playing

          -

          After entering the license key, you will be able to launch the game and start playing by using the play tool. You will have access to all the features and puzzles of the full version of BrainsBreaker, and you will be able to enjoy it without any limitations or interruptions.

          -

          Tips and Tricks for Playing BrainsBreaker.5.2.6.1.Full.License.Version

          -

          Use the magnifying glass tool to zoom in and out of the puzzle area

          -

          If you want to see more details or less details of your puzzle, you can use the magnifying glass tool to zoom in and out of the puzzle area.

          -

          This will help you see the details of the pieces better and find the ones that match.

          -

          You can also use the mouse wheel to zoom in and out quickly.

          -

          Use the ghost image tool to see a faint image of the completed puzzle behind the pieces

          -

          If you need some guidance or a hint, you can use the ghost image tool to see a faint image of the completed puzzle behind the pieces.

          -

          This will help you visualize where each piece should go and how they fit together.

          -

          You can adjust the opacity of the ghost image by using the slider on the toolbar.

          -

          Use the edge filter tool to show only the edge pieces of the puzzle

          -

          A popular strategy is to put the edges of the puzzle together first because, with one straight edge, the pieces are easier to identify and put together.

          -

          To make this easier, you can use the edge filter tool to show only the edge pieces of the puzzle and hide the rest.

          -

          This will help you focus on finding and connecting the border of your puzzle.

          -

          Use the tray tool to store pieces that you are not using

          -

          If you want to organize your pieces better or clear some space on your puzzle area, you can use the tray tool to store pieces that you are not using.

          -

          The tray tool lets you create virtual trays where you can drag and drop pieces that belong to a certain group, such as a color, a shape, or a part of the image.

          -

          You can create as many trays as you want and switch between them by using the tabs on the bottom of your screen.

          -

          Use the shuffle tool to rearrange the pieces randomly

          -

          If you are stuck or bored with your current arrangement of pieces, you can use the shuffle tool to rearrange them randomly.

          -

          This will help you see new possibilities and combinations that you might have missed before.

          -

          You can also use this tool to create more challenge by mixing up your pieces again after sorting them into groups or trays.

          -

          Pros and Cons of BrainsBreaker.5.2.6.1.Full.License.Version

          -

          Pros

          -
            -
          • Fun and relaxing way to exercise your brain and improve your concentration
          • -
          • Customizable and versatile game that suits your preferences and mood
          • -
          • Easy to use and user-friendly interface
          • -
          • No ads or in-app purchases
          • -
          -

          Cons

          -
            -
          • Requires a license key to unlock the full version
          • -
          • May not work on some older or slower computers
          • -
          • May not have enough puzzles for some hardcore jigsaw fans
          • -
          -

          Conclusion

          -

          In conclusion, BrainsBreaker.5.2.6.1.Full.License.Version is a fun and challenging jigsaw puzzle game for your PC that lets you create and play jigsaw puzzles with your own images or from a collection of over 600 included.

          -

          You can choose from different shapes, sizes, and levels of difficulty, and save and resume your puzzles anytime.

          -

          You can also enjoy realistic graphics and sound effects that make you feel like you are playing with real pieces.

          -

          To download and install BrainsBreaker for free, you need to visit its official website, run the installer, enter the license key that you received by email after completing a short survey, and launch the game.

          -

          To play BrainsBreaker more effectively, you can use some tips and tricks such as using the magnifying glass tool, the ghost image tool, the edge filter tool, the tray tool, and the shuffle tool.

          -

          BrainsBreaker has some pros and cons that you should consider before playing it, such as its fun factor, customization options, ease of use, license requirement, compatibility issues, and puzzle variety.

          -

          If you are looking for a new hobby or a way to spend some quality time with yourself or your family, you should give BrainsBreaker a try and see how much fun you can have with jigsaw puzzles on your PC.

          -

          Frequently Asked Questions

          -
            -
          1. What are the system requirements for BrainsBreaker?
          2. -

            To run BrainsBreaker on your PC, you need to have Windows XP/Vista/7/8/10, a processor with 1 GHz or faster, 512 MB of RAM or more, and 100 MB of free disk space or more. You also need a monitor with 1024x768 resolution or higher, a mouse, and speakers or headphones for sound effects.

            -
          3. How do I get more puzzles for BrainsBreaker?
          4. -

            If you want more puzzles for BrainsBreaker, you can either create your own puzzles with your own images or download more puzzles from the official website of BrainsBreaker. There you will find a section called "More Puzzles" where you can download free puzzles in different categories and themes. You can also buy premium puzzles with higher quality images and more pieces for a small fee.

            -
          5. How do I share my puzzles with others?
          6. -

            If you want to share your puzzles with others, you can either send them by email or upload them to the online gallery of BrainsBreaker. To send them by email, you need to use the email tool that lets you attach your puzzle file to an email message. To upload them to the online gallery, you need to use the upload tool that lets you choose a title, a description, and a category for your puzzle. You can also rate and comment on other people's puzzles in the online gallery.

            -
          7. How do I change the language of BrainsBreaker?
          8. -

            To change the language of BrainsBreaker, you need to use the language tool that lets you choose from several languages such as English, Spanish, French, German, Italian, Portuguese, Dutch, Swedish, Norwegian, Danish, Finnish, Polish, Czech, Hungarian, Russian, Greek, Turkish, Hebrew, Arabic, Chinese (Simplified), Chinese (Traditional), Japanese, Korean, Thai, Indonesian, Malay, Vietnamese, and Hindi.

            -
          9. How do I contact BrainsBreaker customer support?
          10. -

            If you have any questions, problems, or suggestions about BrainsBreaker, you can contact its customer support by email at support@brainsbreaker.com.

            -

            You can also visit its website at https://www.brainsbreaker.com/ and check its help section, FAQ section, and online forum for more information and assistance.

            -
          11. How do I uninstall BrainsBreaker from my PC?
          12. -

            If you want to uninstall BrainsBreaker from your PC, you can do so by using the Windows Control Panel.

            -

            You need to go to the Programs and Features section and find BrainsBreaker in the list of installed programs.

            -

            You need to select it and click on the Uninstall button and follow the instructions.

            -

            This will remove BrainsBreaker and all its files from your PC.

            -
          -

          I hope you enjoyed this article and learned something new about BrainsBreaker.5.2.6.1.Full.License.Version.

          -

          If you are interested in trying this game for yourself, you can download it for free from its official website and start playing right away.

          -

          And if you liked this article, please share it with your friends and family who might also enjoy jigsaw puzzles on their PC.

          -

          Thank you for reading and happy puzzling!

          -

          0a6ba089eb
          -
          -
          \ No newline at end of file diff --git a/spaces/tialenAdioni/chat-gpt-api/logs/Ishq Ke Parindey 2 Full Movie Mp4 Free Download NEW.md b/spaces/tialenAdioni/chat-gpt-api/logs/Ishq Ke Parindey 2 Full Movie Mp4 Free Download NEW.md deleted file mode 100644 index 942babd24ed13cdd13e5c37359b2315a6d0d149d..0000000000000000000000000000000000000000 --- a/spaces/tialenAdioni/chat-gpt-api/logs/Ishq Ke Parindey 2 Full Movie Mp4 Free Download NEW.md +++ /dev/null @@ -1,72 +0,0 @@ - -

          Ishq Ke Parindey 2 Full Movie Mp4 Free Download: How to Watch the Musical Romance Online

          -

          Ishq Ke Parindey 2 is the sequel to the 2015 musical romance film Ishq Ke Parindey, which was based on the epic love story of Sheen and Faiz, two lovers who belong to India and Pakistan respectively. The film portrays the Indo-Pak conflict against the backdrop of a love story and bears a message of peace. The film features Rishi Verma and Priyanka Mehta in the lead roles, along with Manjul Aazad, Abid Yunus Khan, and Yasir Iftikhar Khan in supporting roles. The film is directed by Shakir Khan and has music composed by Vijay Vermaa, Rashid Khan, and Sajjad Ali.

          -

          If you are a fan of Ishq Ke Parindey and want to watch its sequel, you might be wondering how to download Ishq Ke Parindey 2 full movie mp4 for free. Well, you are not alone. Many people are looking for ways to watch this film online without paying any money. However, before you proceed, you should know that downloading or streaming movies from unauthorized sources is illegal and unethical. You might face legal consequences or harm your device with malware or viruses. Therefore, we recommend that you watch Ishq Ke Parindey 2 legally and ethically from official platforms such as Eros Now, where you can enjoy the film in HD quality with subtitles and playlists.

          -

          Ishq Ke Parindey 2 full movie mp4 free download


          Download Filehttps://urlcod.com/2uK6Dv



          -

          However, if you still want to download Ishq Ke Parindey 2 full movie mp4 for free, we have prepared a guide for you. In this guide, we will tell you how to find and download Ishq Ke Parindey 2 full movie mp4 for free from various websites. We will also tell you the pros and cons of each website and give you some tips to avoid any problems. But remember, this guide is for informational purposes only and we do not endorse or promote any illegal or unethical activities.

          -

          How to Download Ishq Ke Parindey 2 Full Movie Mp4 for Free from Various Websites

          -

          There are many websites that claim to offer Ishq Ke Parindey 2 full movie mp4 for free download. However, not all of them are reliable or safe. Some of them might have broken links, low-quality videos, annoying ads, or malicious software. Therefore, you need to be careful and cautious while choosing a website to download Ishq Ke Parindey 2 full movie mp4 for free.

          -

          To help you out, we have listed some of the popular websites that offer Ishq Ke Parindey 2 full movie mp4 for free download. We have also given a brief overview of each website and its pros and cons. However, we advise you to use these websites at your own risk and discretion.

          -

          onlinemovieshindi.com

          -

          onlinemovieshindi.com is a free film streaming website that offers a variety of Bollywood movies, music, and TV shows. You can watch Ishq Ke Parindey 2 full movie online on this website or download it in mp4 format by clicking on the download button below the video player. The website has a simple and user-friendly interface and does not require any registration or subscription.

          -

          Pros:

          -
            -
          • The website has a large collection of Bollywood movies, music, and TV shows.
          • -
          • The website does not require any registration or subscription.
          • -
          • The website has a simple and user-friendly interface.
          • -
          • The website offers Ishq Ke Parindey 2 full movie mp4 for free download.
          • -
          -

          Cons:

          -
            -
          • The website is not legal or authorized to offer Ishq Ke Parindey 2 full movie mp4 for free download.
          • -
          • The website might have broken links, low-quality videos, annoying ads, or malicious software.
          • -
          • The website might violate the copyright laws or terms of service of the original content providers.
          • -
          • The website might harm your device with malware or viruses.
          • -
          -

          dispdespokacerladu.wixsite.com

          -

      dispdespokacerladu.wixsite.com is a personal blog that offers Ishq Ke Parindey 2 full movie in Hindi free download in 720p HD quality. The blog provides a direct download link to the movie file hosted on another website. The blog also provides some information about the movie, such as its cast, director, music, genre, and quality.
      

          -

      We hope this guide has helped you to find and download Ishq Ke Parindey 2 full movie mp4 for free from various websites. However, we urge you to respect the hard work and dedication of the filmmakers and watch the film legally and ethically from official platforms. Ishq Ke Parindey 2 is a film that you will not regret watching. It is a film that will move you with its music and romance, and it carries a message of peace and love across borders that is well worth appreciating.
      

          -

          Ishq Ke Parindey 2 movie download in mp4 format
          -Watch Ishq Ke Parindey 2 full movie online free mp4
          -Ishq Ke Parindey 2 mp4 movie free download for mobile
          -How to download Ishq Ke Parindey 2 full movie in mp4 quality
          -Ishq Ke Parindey 2 full movie mp4 720p free download
          -Ishq Ke Parindey 2 full movie mp4 hd free download
          -Ishq Ke Parindey 2 full movie mp4 with English subtitles
          -Ishq Ke Parindey 2 full movie mp4 download filmywap
          -Ishq Ke Parindey 2 full movie mp4 download filmyzilla
          -Ishq Ke Parindey 2 full movie mp4 download tamilrockers
          -Ishq Ke Parindey 2 full movie mp4 download pagalworld
          -Ishq Ke Parindey 2 full movie mp4 download moviescounter
          -Ishq Ke Parindey 2 full movie mp4 download worldfree4u
          -Ishq Ke Parindey 2 full movie mp4 download khatrimaza
          -Ishq Ke Parindey 2 full movie mp4 download bolly4u
          -Ishq Ke Parindey 2 full movie mp4 download skymovieshd
          -Ishq Ke Parindey 2 full movie mp4 download movierulz
          -Ishq Ke Parindey 2 full movie mp4 download 9xmovies
          -Ishq Ke Parindey 2 full movie mp4 download coolmoviez
          -Ishq Ke Parindey 2 full movie mp4 download jalshamoviez
          -Ishq Ke Parindey 2 full movie mp4 download okjatt
          -Ishq Ke Parindey 2 full movie mp4 download hdfriday
          -Ishq Ke Parindey 2 full movie mp4 download rdxhd
          -Ishq Ke Parindey 2 full movie mp4 download afilmywap
          -Ishq Ke Parindey 2 full movie mp4 download moviesflix
          -Ishq Ke Parindey 2 full movie mp4 download mkvhub
          -Ishq Ke Parindey 2 full movie mp4 download hdmovieshub
          -Ishq Ke Parindey 2 full movie mp4 download hdpopcorns
          -Ishq Ke Parindey 2 full movie mp4 download yts
          -Ishq Ke Parindey 2 full movie mp4 download torrentz2
          -Ishq Ke Parindey 2 full movie mp4 download limetorrents
          -Ishq Ke Parindey 2 full movie mp4 download kickass torrents
          -Ishq Ke Parindey 2 full movie mp4 download extratorrents
          -Ishq Ke Parindey 2 full movie mp4 download thepiratebay
          -Ishq Ke Parindey 2 full movie mp4 free streaming sites
          -Best sites to watch Ishq Ke Parindey 2 full movie online free in mp4 quality
          -Where can I watch or download Ishq Ke Parindey 2 full movie in mp4 format for free?
          -Is it legal to watch or download Ishq Ke Parindey 2 full movie in mp4 format for free?
          -How to watch or download Ishq Ke Parindey 2 full movie in mp4 format without ads or registration?
          -How to watch or download Ishq Ke Parindey 2 full movie in mp4 format with fast speed and high quality?

      -
      

          679dcb208e
          -
          -
          \ No newline at end of file diff --git a/spaces/ticomspire/turkey-syria-earthquake-tweets/logs/CarX Drift Racing 2 Mod APK for PC Experience the Thrill of Drifting at Over 100 MillionC.md b/spaces/ticomspire/turkey-syria-earthquake-tweets/logs/CarX Drift Racing 2 Mod APK for PC Experience the Thrill of Drifting at Over 100 MillionC.md deleted file mode 100644 index 98e6da43e6d836b8364ff0468d066e0d2f5cd5ef..0000000000000000000000000000000000000000 --- a/spaces/ticomspire/turkey-syria-earthquake-tweets/logs/CarX Drift Racing 2 Mod APK for PC Experience the Thrill of Drifting at Over 100 MillionC.md +++ /dev/null @@ -1,102 +0,0 @@ -
          -

          CarX Drift Racing 2 PC Mod APK: How to Download and Play

          -

          If you are a fan of drifting games, you might have heard of CarX Drift Racing 2, one of the most popular and realistic drift racing games on Android. But did you know that you can also play it on your PC with a mod APK? In this article, we will show you what CarX Drift Racing 2 is, what a mod APK is, and how to download and play CarX Drift Racing 2 PC mod APK.

          -

          carx drift racing 2 pc mod apk


          Download ····· https://bltlly.com/2uOp3F



          -

          What is CarX Drift Racing 2?

          -

      A sequel to one of the most popular drift games
      

          -

      CarX Drift Racing 2 is a racing game developed by CarX Technologies, LLC. It is the sequel to the original CarX Drift Racing, which has over 100 million downloads on Google Play. CarX Drift Racing 2 focuses on drifting, a driving technique in which the driver intentionally oversteers to make the rear wheels lose traction and slide sideways. Drifting requires skill, precision, and practice, and it can be very fun and satisfying to master.
      

          -

          Features of CarX Drift Racing 2

          -

          Some of the features that make CarX Drift Racing 2 stand out from other racing games are:

          -
            -
          • Realistic physics and graphics that simulate the behavior and appearance of real cars and tracks.
          • -
          • A variety of cars and customization options that let you tune your car's performance and appearance to your liking.
          • -
          • A career mode that lets you compete in different events and championships, earn coins and reputation, and unlock new cars and upgrades.
          • -
          • An online mode that lets you drift with your friends or other players from around the world in real time.
          • -
          • A ghost mode that lets you race against your own best results or the results of other players.
          • -
          • A tuning mode that lets you adjust your car's settings and test them on different tracks.
          • -
          • A garage mode that lets you create your own unique car designs and share them with other players.
          • -
          -

          What is a mod APK?

          -

          A modified version of an Android app

          -

          An APK (Android Package Kit) is a file format that contains all the components of an Android app, such as the code, resources, assets, etc. A mod APK is a modified version of an original APK that has been altered by someone to add or remove some features, such as unlimited money, unlocked items, ads removal, etc. A mod APK can be downloaded from third-party websites or sources that are not affiliated with the original app developer or Google Play.
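      Because an APK is really just a ZIP archive with a fixed layout, you can peek inside one yourself. Here is a minimal Python sketch (the file name example.apk is only a placeholder) that lists the entries a typical package contains, such as AndroidManifest.xml, classes.dex, resources.arsc, and the META-INF signature files.

      ```python
      import zipfile

      def list_apk_contents(apk_path: str) -> None:
          """Print the files bundled inside an APK (an APK is a ZIP archive)."""
          with zipfile.ZipFile(apk_path) as apk:
              for info in apk.infolist():
                  # Typical entries: AndroidManifest.xml, classes.dex,
                  # resources.arsc, res/..., and META-INF/ signature files.
                  print(f"{info.file_size:>10}  {info.filename}")

      if __name__ == "__main__":
          list_apk_contents("example.apk")  # placeholder path
      ```

      A mod APK is simply one of these archives repacked with altered files, which is also why its signature no longer matches the original developer's.
      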

          -

          Benefits and risks of using a mod APK

          -

          Some of the benefits of using a mod APK are:

          -

          carx drift racing 2 pc emulator
          -carx drift racing 2 pc download free
          -carx drift racing 2 pc bluestacks
          -carx drift racing 2 pc gameplay
          -carx drift racing 2 pc online
          -carx drift racing 2 pc windows 10
          -carx drift racing 2 pc cheats
          -carx drift racing 2 pc hack
          -carx drift racing 2 pc requirements
          -carx drift racing 2 pc controller support
          -carx drift racing 2 pc multiplayer
          -carx drift racing 2 pc steam
          -carx drift racing 2 pc ldplayer
          -carx drift racing 2 pc graphics settings
          -carx drift racing 2 pc best cars
          -carx drift racing 2 pc update
          -carx drift racing 2 pc review
          -carx drift racing 2 pc tips and tricks
          -carx drift racing 2 pc keyboard controls
          -carx drift racing 2 pc mods download
          -carx drift racing 2 pc apk obb
          -carx drift racing 2 pc apk pure
          -carx drift racing 2 pc apk offline
          -carx drift racing 2 pc apk latest version
          -carx drift racing 2 pc apk unlimited money
          -carx drift racing 2 pc apk data
          -carx drift racing 2 pc apk no root
          -carx drift racing 2 pc apk android oyun club
          -carx drift racing 2 pc apk revdl
          -carx drift racing 2 pc apk rexdl
          -carx drift racing 2 mod apk for windows
          -carx drift racing 2 mod apk for mac
          -carx drift racing 2 mod apk for laptop
          -carx drift racing 2 mod apk for desktop
          -carx drift racing 2 mod apk for computer
          -carx drift racing 2 mod apk for bluestacks
          -carx drift racing 2 mod apk for ldplayer
          -carx drift racing 2 mod apk for emulator
          -carx drift racing 2 mod apk free download for pc
          -carx drift racing 2 mod apk unlimited coins and gold for pc

          -
            -
          • You can access features that are not available in the original app, such as premium content, extra modes, etc.
          • -
          • You can bypass some restrictions or limitations that are imposed by the original app, such as in-app purchases, ads, etc.
          • -
          • You can enhance your gaming experience by having more resources, options, or cheats.
          • -
          -

          Some of the risks of using a mod APK are:

          -
            -
          • You may violate the terms of service or policies of the original app developer or Google Play, which may result in legal actions or account bans.
          • -
          • You may expose your device or data to malware or viruses that may be hidden in the mod APK file or source.
          • -
          • You may encounter compatibility or stability issues that may affect the performance or functionality of the app or your device.
          • -

            How to download and play CarX Drift Racing 2 PC mod APK?

            -

            Download an Android emulator

            -

            An Android emulator is a software that allows you to run Android apps on your PC. There are many Android emulators available online, such as BlueStacks, NoxPlayer, LDPlayer, etc. You can choose the one that suits your preferences and system requirements. To download an Android emulator, you need to visit its official website and follow the instructions to install it on your PC.

            -

            Download the mod APK file

            -

            Once you have installed an Android emulator on your PC, you need to download the mod APK file of CarX Drift Racing 2. You can search for it on Google or use a trusted source that provides mod APK files, such as APKPure, APKMirror, etc. Make sure that the mod APK file is compatible with the latest version of CarX Drift Racing 2 and has the features that you want. To download the mod APK file, you need to click on the download link and save it on your PC.

            -

            Install and run the mod APK on the emulator

            -

            After downloading the mod APK file, you need to install it on your Android emulator. You can do this by dragging and dropping the file into the emulator window or by using the built-in file manager of the emulator. The installation process may take a few minutes, depending on the size of the file and the speed of your PC. Once the installation is complete, you can launch CarX Drift Racing 2 from the emulator's app drawer and enjoy playing it on your PC.

            -

            Conclusion

            -

            CarX Drift Racing 2 is a thrilling and realistic drift racing game that you can play on your Android device or on your PC with a mod APK. A mod APK is a modified version of an original APK that has some extra features or advantages. However, using a mod APK also involves some risks and challenges, such as violating the terms of service, exposing your device to malware, or encountering compatibility issues. Therefore, you should be careful and responsible when downloading and installing a mod APK. If you want to try CarX Drift Racing 2 PC mod APK, you need to download an Android emulator and the mod APK file from reliable sources and follow the steps mentioned above.

            -

            FAQs

            -

            What are the minimum system requirements for playing CarX Drift Racing 2 on PC?

            -

            The minimum system requirements for playing CarX Drift Racing 2 on PC are:

            - - - - - - -
      OS: Windows 7/8/10
      CPU: Dual-core processor with at least 1.5 GHz
      RAM: At least 4 GB
      Graphics: DirectX 9 compatible with at least 512 MB VRAM
      Disk space: At least 1 GB
      
            -

            Is CarX Drift Racing 2 free to play?

            -

            Yes, CarX Drift Racing 2 is free to play on both Android and PC. However, it contains some in-app purchases that can enhance your gaming experience or unlock some premium content.

            -

            Can I play CarX Drift Racing 2 offline?

            -

            Yes, you can play CarX Drift Racing 2 offline in some modes, such as career mode or tuning mode. However, you need an internet connection to play online mode or access some online features, such as leaderboards or garage mode.

            -

            How can I update CarX Drift Racing 2 PC mod APK?

            -

            To update CarX Drift Racing 2 PC mod APK, you need to download the latest version of the mod APK file from the same source that you used before and install it over the existing one. Alternatively, you can uninstall the old version and install the new one from scratch.

            -

            How can I contact the developer of CarX Drift Racing 2?

            -

            You can contact the developer of CarX Drift Racing 2 by sending an email to support@carx-tech.com or by visiting their official website at https://carx-tech.com/.

            401be4b1e0
            -
            -
            \ No newline at end of file diff --git a/spaces/ticomspire/turkey-syria-earthquake-tweets/logs/Download Aptoide APK 8.3.0.6 and Discover New Apps.md b/spaces/ticomspire/turkey-syria-earthquake-tweets/logs/Download Aptoide APK 8.3.0.6 and Discover New Apps.md deleted file mode 100644 index 0a78f9e365d072a9cc4e3bf93cbd01f332fbe868..0000000000000000000000000000000000000000 --- a/spaces/ticomspire/turkey-syria-earthquake-tweets/logs/Download Aptoide APK 8.3.0.6 and Discover New Apps.md +++ /dev/null @@ -1,153 +0,0 @@ - -

            Aptoide APK Download 8.3.0.6: A Free and Safe Alternative to Google Play Store

            -

            If you are looking for a way to download and install apps and games on your Android device without relying on Google Play Store, you might want to try Aptoide APK 8.3.0.6. Aptoide is an open source and community-driven app store that offers many features and benefits that make it a better choice than Google Play Store. In this article, we will explain what Aptoide is, how to download and install it on your device, how to use it to find and download apps and games, how to create and manage your own store on Aptoide, and how to keep your device safe and secure with Aptoide.

            -

            aptoide apk download 8.3.0.6


      Download: https://bltlly.com/2uOp93
      



            -

            What is Aptoide and why should you use it?

            -

            Aptoide is an open source and community-driven app store for Android devices

            -

            Aptoide is an app store that runs on the Android operating system. Unlike Google Play Store, which is a centralized and official app store, Aptoide is an independent and decentralized app store that allows anyone to create and manage their own store, upload their own apps, follow community recommendations, and discover new content. Aptoide is also open source, which means that anyone can access its source code, modify it, or contribute to its development.

            -

            Aptoide offers many features and benefits that make it a better choice than Google Play Store

            -

            Some of the features and benefits of using Aptoide are:

            -
              -
            • You can download your favorite Android apps privately and without signing up.
            • -
            • You can find apps that are not available in other Android marketplaces.
            • -
            • You can downgrade your apps to previous versions if you encounter any issues with the latest version.
            • -
            • You can create your own store and choose its name, logo, and color theme.
            • -
            • You can follow other stores and know who is following you. You can also keep your store private if you want.
            • -
            • You can rate and review apps and stores, and reply to other users' comments.
            • -
            • You can access more than 120,000 apps from different categories, genres, languages, countries, etc.
            • -
            -

            How to download and install Aptoide APK 8.3.0.6 on your Android device?

            -

            Download the Aptoide APK file from the official website or a trusted source

            -

      To install Aptoide on your Android device, you need to download the Aptoide APK file first. You can download it from the official website of Aptoide or from a trusted source that provides the latest version of the APK file. You can also scan the QR code on the website to download the APK file directly to your device. Make sure that you have enough storage space on your device before downloading the file.
      
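      If the source also publishes a checksum for the file, it is worth comparing it before you install anything. The short Python sketch below is one way to do that; the file name and the expected hash are placeholders that you would replace with the values for your own download.

      ```python
      import hashlib

      def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
          """Return the SHA-256 hex digest of a file, reading it in chunks."""
          digest = hashlib.sha256()
          with open(path, "rb") as f:
              for chunk in iter(lambda: f.read(chunk_size), b""):
                  digest.update(chunk)
          return digest.hexdigest()

      if __name__ == "__main__":
          expected = "0" * 64  # placeholder: paste the checksum published by the source
          actual = sha256_of("aptoide-8.3.0.6.apk")  # placeholder file name
          print("OK" if actual == expected else f"Mismatch: {actual}")
      ```

      If the digests do not match, discard the file and download it again from the official website.
      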

            Enable the installation of apps from unknown sources in your Android settings

            -

            Before you can install Aptoide on your device, you need to enable the installation of apps from unknown sources in your Android settings. This is because Aptoide is not available in Google Play Store and is considered an unknown source by your device. To enable this option, follow these steps:

            -
              -
            1. Go to your device's settings and tap on Security or Privacy.
            2. -
            3. Find the option that says Unknown sources or Install unknown apps and toggle it on.
            4. -
            5. A warning message will appear, telling you that installing apps from unknown sources can harm your device. Tap on OK or Allow to proceed.
            6. -
            -

            Run the APK file and follow the instructions to install Aptoide on your device

            -

            Once you have downloaded the Aptoide APK file and enabled the installation of apps from unknown sources, you can run the APK file and install Aptoide on your device. To do this, follow these steps:

            -

            aptoide apk download 8.3.0.6 for android
            -aptoide apk download 8.3.0.6 free
            -aptoide apk download 8.3.0.6 latest version
            -aptoide apk download 8.3.0.6 offline installer
            -aptoide apk download 8.3.0.6 modded
            -aptoide apk download 8.3.0.6 no ads
            -aptoide apk download 8.3.0.6 old version
            -aptoide apk download 8.3.0.6 mirror
            -aptoide apk download 8.3.0.6 from official website
            -aptoide apk download 8.3.0.6 safe and secure
            -aptoide apk download 8.3.0.6 alternative
            -aptoide apk download 8.3.0.6 review
            -aptoide apk download 8.3.0.6 features
            -aptoide apk download 8.3.0.6 pros and cons
            -aptoide apk download 8.3.0.6 how to install
            -aptoide apk download 8.3.0.6 guide
            -aptoide apk download 8.3.0.6 tutorial
            -aptoide apk download 8.3.0.6 tips and tricks
            -aptoide apk download 8.3.0.6 faq
            -aptoide apk download 8.3.0.6 troubleshooting
            -aptoide apk download 8.3.0.6 update
            -aptoide apk download 8.3.0.6 changelog
            -aptoide apk download 8.3.0.6 comparison
            -aptoide apk download 8.3.0.6 benefits
            -aptoide apk download 8.3.0.6 disadvantages
            -aptoide apk download 8

            -
              -
            1. Locate the Aptoide APK file in your device's file manager or downloads folder and tap on it.
            2. -
            3. A pop-up window will appear, asking you if you want to install this application. Tap on Install or Next to continue.
            4. -
            5. Wait for the installation process to complete. It may take a few seconds or minutes depending on your device and internet connection.
            6. -
            7. Once the installation is done, tap on Open or Done to launch Aptoide or exit the installer.
            8. -
            -

            How to use Aptoide to find and download apps and games?

            -

            Browse the categories, search by name, or follow the recommendations of other users and stores

            -

            After installing Aptoide on your device, you can start using it to find and download apps and games that you want. There are several ways to do this:

            -
              -
            • You can browse the categories of apps and games that are available on Aptoide, such as Editors' Choice, Top Downloads, Trending, etc. You can also filter by genre, language, country, etc.
            • -
            • You can search for a specific app or game by name using the search bar at the top of the screen. You can also use voice search or scan a barcode to find an app or game.
            • -
            • You can follow the recommendations of other users and stores that are displayed on the home screen or in the Discover section. You can also see what your friends are downloading and sharing on Aptoide.
            • -
            -

            Check the security badges, ratings, reviews, and screenshots of the apps and games before downloading

            -

            Before you download an app or game from Aptoide, you should check its security badges, ratings, reviews, and screenshots to make sure that it is safe and suitable for you. Here are some tips to do this:

            -
              -
            • Look for the green shield icon next to the app or game name. This means that it has been scanned and verified by Aptoide's anti-malware system and is free of viruses and malware.
            • -
            • Look for the star rating and number of downloads of the app or game. This gives you an idea of how popular and well-liked it is by other users.
            • -
            • Read the reviews and comments of other users who have downloaded and used the app or game. This gives you more information about its features, performance, quality, etc.
            • -
      • View the screenshots of the app or game that show what it looks like on your device. This gives you a preview of its interface, design, graphics, etc.
      
            • -
            -

            Download your favorite apps and games privately and without signing up

            -

            Once you have found an app or game that you want to download from Aptoide, you can do so privately and without signing up. Here are some steps to do this:

            -
              -
            1. Tap on the app or game that you want to download.
            2. -
            3. A new screen will open with more details about the app or game. Tap on Download or Install to start downloading it.
            4. -
            5. A progress bar will show how much of the app or game has been downloaded. You can pause or resume the download at any time.
            6. -
      7. Once the download is complete, tap on Open or Done to launch the app or game or exit the installer.
      
            8. -
            -

            How to create and manage your own store on Aptoide?

            -

            Choose a name, logo, and color theme for your store

            -

            One of the unique features of Aptoide is that it allows you to create and manage your own store on the app store. You can customize your store with your own name, logo, and color theme. To create your own store, follow these steps:

            -
              -
            1. Tap on the menu icon at the top left corner of the screen and select My Store.
            2. -
            3. Tap on Create Store and enter a name for your store. You can also choose a logo from your device's gallery or take a photo with your camera.
            4. -
            5. Tap on Next and choose a color theme for your store. You can also adjust the brightness and saturation of the colors.
            6. -
            7. Tap on Create Store and wait for your store to be created. You can also share your store with your friends via social media, email, or QR code.
            8. -
            -

            Upload your own apps or select from other stores

            -

            After creating your own store, you can upload your own apps or select from other stores that are available on Aptoide. You can also edit or delete your apps at any time. To upload or select apps for your store, follow these steps:

            -
              -
            1. Tap on the menu icon at the top left corner of the screen and select My Store.
            2. -
            3. Tap on Manage Store and then tap on Add Apps.
            4. -
            5. You can choose to upload an APK file from your device or select an app from another store. You can also search for an app by name or scan a barcode.
            6. -
            7. Tap on Add to add the app to your store. You can also edit the app's details, such as name, description, category, etc.
            8. -
            9. Repeat the process for any other apps that you want to add to your store.
            10. -
            -

            Follow other stores and get followers for your store

            -

            Another way to discover new apps and games on Aptoide is to follow other stores and get followers for your own store. You can also see what other users are downloading and sharing on Aptoide. To follow other stores and get followers for your store, follow these steps:

            -
              -
            1. Tap on the menu icon at the top left corner of the screen and select Stores.
            2. -
            3. You can browse the featured stores, popular stores, or search for a specific store by name.
            4. -
            5. Tap on Follow to follow a store that you like. You can also tap on Unfollow to unfollow a store that you don't like.
            6. -
            7. To see who is following you or who you are following, tap on My Store and then tap on Followers or Following.
            8. -
            9. You can also invite your friends to follow your store via social media, email, or QR code.
            10. -
            -

            How to keep your device safe and secure with Aptoide?

            -

            Aptoide has a robust anti-malware system that scans and verifies all apps and games

            -

            Aptoide takes security seriously and has a robust anti-malware system that scans and verifies all apps and games that are available on the app store. Aptoide uses several security mechanisms, such as:

            -
              -
            • A green shield icon that indicates that an app or game has been scanned and verified by Aptoide's anti-malware system.
            • -
            • A signature validation system that ensures that an app or game has not been tampered with or modified by third parties.
            • -
            • A malware detection system that identifies and removes any malicious apps or games from the app store.
            • -
            -

            Aptoide allows you to downgrade your apps to previous versions if needed

            -

            Sometimes, you may encounter issues with the latest version of an app or game that you have downloaded from Aptoide. For example, it may not be compatible with your device, it may have bugs or errors, or it may have removed some features that you liked. In such cases, Aptoide allows you to downgrade your apps to previous versions if needed. To do this, follow these steps:

            -
              -
            1. Tap on the menu icon at the top left corner of the screen and select Updates.
            2. -
            3. Find the app or game that you want to downgrade and tap on it.
            4. -
            5. A new screen will open with more details about the app or game. Tap on Other Versions at the bottom of the screen.
            6. -
            7. You will see a list of previous versions of the app or game that are available on Aptoide. You can also see the date, size, and rating of each version.
            8. -
            9. Tap on the version that you want to downgrade to and tap on Download or Install to start downloading it.
            10. -
            11. A progress bar will show how much of the version has been downloaded. You can pause or resume the download at any time.
            12. -
            13. Once the download is complete, tap on Open or Done to launch the app or game or exit the installer.
            14. -
            -

            Aptoide respects your privacy and does not track or collect your personal data

            -

            Aptoide is a privacy-friendly app store that respects your privacy and does not track or collect your personal data. Unlike Google Play Store, which requires you to sign up with a Google account and share your personal information, Aptoide does not require you to sign up or log in to use its services. You can download and install apps and games privately and anonymously. Aptoide also does not use any cookies, trackers, or analytics tools that monitor your online activity or behavior. Aptoide only collects anonymous and aggregated data that is used for statistical purposes and to improve its services.

            -

            Conclusion

            -

            Aptoide APK 8.3.0.6 is a free and safe alternative to Google Play Store that allows you to download and install apps and games on your Android device without relying on Google's services. Aptoide is an open source and community-driven app store that offers many features and benefits that make it a better choice than Google Play Store. You can create and manage your own store, follow other stores, find apps that are not available in other Android marketplaces, downgrade your apps to previous versions, and keep your device safe and secure with Aptoide's anti-malware system. Aptoide is also a privacy-friendly app store that respects your privacy and does not track or collect your personal data. If you want to try Aptoide APK 8.3.0.6, you can download it from the official website of Aptoide or from a trusted source that provides the latest version of the APK file.

            -

            FAQs

            -

            Is Aptoide legal?

            -

            Yes, Aptoide is legal and complies with the laws and regulations of the countries where it operates. Aptoide only hosts apps that are free or have a freemium model, and does not host any paid or pirated apps. Aptoide also respects the intellectual property rights of the app developers and publishers, and removes any apps that infringe them upon request.

            -

            Is Aptoide safe?

            -

            Yes, Aptoide is safe and has a robust anti-malware system that scans and verifies all apps and games that are available on the app store. Aptoide also uses a signature validation system that ensures that an app or game has not been tampered with or modified by third parties. Aptoide also allows you to downgrade your apps to previous versions if you encounter any issues with the latest version.
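      If you want a second opinion on your own computer, you can also check an APK's signature yourself before installing it, using Google's apksigner tool from the Android SDK build-tools. The Python sketch below simply shells out to apksigner, so it assumes the build-tools are installed and on your PATH; the file name is a placeholder.

      ```python
      import subprocess

      def verify_apk_signature(apk_path: str) -> bool:
          """Run `apksigner verify` on an APK and report whether the signature checks out."""
          result = subprocess.run(
              ["apksigner", "verify", "--print-certs", apk_path],
              capture_output=True,
              text=True,
          )
          # apksigner prints certificate details on success and errors on failure.
          print(result.stdout or result.stderr)
          return result.returncode == 0

      if __name__ == "__main__":
          print("Signature valid:", verify_apk_signature("aptoide-8.3.0.6.apk"))  # placeholder
      ```
      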

            -

            How do I update Aptoide?

            -

            To update Aptoide, you need to download the latest version of the Aptoide APK file from the official website of Aptoide or from a trusted source that provides the latest version of the APK file. Then, you need to run the APK file and follow the instructions to install the update on your device.

            -

            How do I uninstall Aptoide?

            -

            To uninstall Aptoide, you need to go to your device's settings and tap on Apps or Applications. Then, you need to find Aptoide in the list of apps and tap on it. Then, you need to tap on Uninstall or Remove to uninstall Aptoide from your device.

            -

            How do I contact Aptoide support?

            -

            If you have any questions, issues, feedback, or suggestions regarding Aptoide, you can contact Aptoide support by using one of these methods:

            -
              -
            • You can send an email to support@aptoide.com.
            • -
            • You can fill out the contact form on the official website of Aptoide.
            • -
            • You can join the official Telegram group of Aptoide.
            • -

            401be4b1e0
            -
            -
            \ No newline at end of file diff --git a/spaces/tioseFevbu/cartoon-converter/scripts/Accurender Nxt For Autocad Crack [UPD].md b/spaces/tioseFevbu/cartoon-converter/scripts/Accurender Nxt For Autocad Crack [UPD].md deleted file mode 100644 index 0b93b20b12a6cb068f13b045c0e96d32ea19373c..0000000000000000000000000000000000000000 --- a/spaces/tioseFevbu/cartoon-converter/scripts/Accurender Nxt For Autocad Crack [UPD].md +++ /dev/null @@ -1,56 +0,0 @@ - -I can try to write a title and article for you, but I cannot guarantee that it will be SEO optimized or HTML formatted. Here is my attempt: - -

            How to Use AccuRender Studio for AutoCAD to Create Stunning Renderings

            - -

            AccuRender Studio is the newest version of the AccuRender product line, which offers advanced rendering for AutoCAD. It is currently available for free as a work-in-progress, and it has many features that make it easier and faster to create realistic and beautiful renderings of your AutoCAD models.

            -

            Accurender Nxt For Autocad Crack


      Download Zip: https://urlcod.com/2uHyGp
      



            - -

            In this article, we will show you how to download, install, and use AccuRender Studio for AutoCAD, and how to take advantage of its new render engine, noise removal, cloud rendering, lighting channels, and more.

            - -

            Download and Install AccuRender Studio

            - -

            To use AccuRender Studio for AutoCAD, you need a 64 bit system and AutoCAD 2013-2018. You also need a recent copy of the nXtRender for AutoCAD product if you wish to include legacy nXtRender data such as materials and lighting[^1^].

            - -

            To download AccuRender Studio, go to https://accurender.ning.com/page/accurender-studio and click on the Download button. Follow the instructions to install it on your computer.

            - -

      To load the AccuRender Studio plugin for your version of AutoCAD, type netload at the command prompt (not appload!) and navigate to c:\program files\AccuRender Studio. The plugin versions are:
      

            - -
              -
            • ArStudioAcadPlugin-2013-2014.dll
            • -
            • ArStudioAcadPlugin-2015-2016.dll
            • -
            • ArStudioAcadPlugin-2017-2018.dll
            • -
            • ArStudioAcadPlugin-2019-2020.dll
            • -
            • ArStudioAcadPlugin-2021.dll
            • -
            - -

            Use AccuRender Studio for AutoCAD

            - -

            Once you have loaded the plugin, you can access AccuRender Studio from the Add-ins tab in AutoCAD. You will see a toolbar with several buttons that allow you to start a rendering, open the render window, adjust settings, add lights, materials, plants, backgrounds, and more.
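      If you end up reloading the plugin in every session, the netload step described above can also be scripted. The sketch below is an untested convenience script, not an official workflow: it drives AutoCAD's COM automation interface from Python via pywin32, and the DLL path assumes the 2017-2018 plugin in the install folder mentioned earlier, so adjust both for your setup.

      ```python
      # Requires the pywin32 package and a local AutoCAD installation.
      import win32com.client

      PLUGIN_DLL = r"C:\Program Files\AccuRender Studio\ArStudioAcadPlugin-2017-2018.dll"

      def netload_accurender() -> None:
          """Send a NETLOAD command to the running (or newly started) AutoCAD instance."""
          acad = win32com.client.Dispatch("AutoCAD.Application")
          acad.Visible = True
          doc = acad.ActiveDocument
          # (command "_NETLOAD" "...") loads the .NET plugin without the file dialog.
          dll = PLUGIN_DLL.replace("\\", "/")
          doc.SendCommand(f'(command "_NETLOAD" "{dll}") ')

      if __name__ == "__main__":
          netload_accurender()
      ```
      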

            -

            - -

            To start a rendering, click on the Start button on the toolbar. A render window will open and show you a preview of your model. You can adjust the view angle, zoom level, and pan position using your mouse or keyboard.

            - -

            To change the render settings, click on the Settings button on the toolbar. A dialog box will open where you can choose the resolution, quality level, exposure, tone mapping, denoising, and other options. You can also save and load presets for different scenarios.

            - -

            To add lights to your model, click on the Lights button on the toolbar. A dialog box will open where you can choose from different types of lights such as point lights, spot lights, area lights, sky lights, etc. You can also edit their properties such as color, intensity, position, direction, etc. You can also use lighting channels to control the brightness of different groups of lights independently.

            - -

            To add materials to your model, click on the Materials button on the toolbar. A dialog box will open where you can choose from different categories of materials such as metal, wood, glass, etc. You can also edit their properties such as color, texture, reflectivity, transparency, etc. You can also use material channels to control the appearance of different groups of materials independently.

            - -

      To add plants to your model, click on the Plants button on the toolbar. A dialog box will open where you can choose from different types of plants such as trees, shrubs, flowers, etc. You can also edit their properties such as size, orientation, density, etc. You can also use plant channels to control the visibility of different groups of plants independently.
      

            - -

      To add backgrounds to your model, click on the Backgrounds button on the toolbar. A dialog box will open where you can choose from different types of backgrounds such as images, gradients, skies, etc. You can also edit their properties such as color, brightness, position, scale, etc. You can also use background channels to control the blending of different backgrounds independently.
      

            - -

            Take Advantage of AccuRender Studio

            7196e7f11a
            -
            -
            \ No newline at end of file diff --git a/spaces/tioseFevbu/cartoon-converter/scripts/Amazon Will Stop Counting Flex Driver Tips Toward Their Base Pay ((FULL)).md b/spaces/tioseFevbu/cartoon-converter/scripts/Amazon Will Stop Counting Flex Driver Tips Toward Their Base Pay ((FULL)).md deleted file mode 100644 index a484bccdb5d13014c4afdb56b82e45526b57f513..0000000000000000000000000000000000000000 --- a/spaces/tioseFevbu/cartoon-converter/scripts/Amazon Will Stop Counting Flex Driver Tips Toward Their Base Pay ((FULL)).md +++ /dev/null @@ -1,14 +0,0 @@ -
            -

            Amazon will stop counting Flex driver tips toward their base pay

            -

            Amazon has announced that it will stop counting tips from customers as part of the base pay for its Flex drivers, who deliver packages and groceries using their own vehicles. The change will take effect on May 1, 2023, and will apply to all Flex drivers in the US.

            -

            The company said that the decision was made after listening to feedback from drivers and customers, and that it will increase the transparency and fairness of the Flex program. Amazon also said that it will increase the minimum base pay for Flex drivers from $18 to $20 per hour, and that drivers will keep 100% of their tips on top of that.

            -

            Amazon will stop counting Flex driver tips toward their base pay


            DOWNLOAD ✔✔✔ https://urlcod.com/2uHwAE



            -

            Amazon Flex is a service that allows people to sign up as independent contractors and deliver packages and groceries for Amazon using their own cars. Drivers can choose their own hours and areas of delivery, and are paid by the hour plus tips. According to Amazon, there are more than 200,000 Flex drivers in the US.

            -

            The practice of counting tips as part of the base pay has been criticized by some drivers and labor advocates, who argued that it reduced the actual earnings of drivers and violated labor laws. In February 2022, Amazon agreed to pay $61.7 million to settle a Federal Trade Commission complaint that it withheld tips from Flex drivers for more than two years.

            -

            Some drivers welcomed the announcement, saying that it will boost their income and trust in the company. Others expressed skepticism, saying that they will wait and see how the change will affect their actual payouts and whether Amazon will lower the number of available delivery blocks or increase the delivery expectations.

            - -

            Amazon is not the only company that has faced scrutiny over its treatment of gig workers, who are classified as independent contractors and do not receive benefits such as health insurance, sick leave, or minimum wage. Other companies such as Uber, Lyft, DoorDash, and Instacart have also been accused of misclassifying workers and exploiting loopholes in labor laws.

            -

            In November 2020, California voters passed Proposition 22, a ballot measure that exempted app-based transportation and delivery companies from a state law that would have required them to treat their workers as employees. The measure was backed by a $200 million campaign funded by the companies, who argued that it would preserve the flexibility and choice of workers and customers. However, some workers and labor groups opposed the measure, saying that it would erode their rights and protections.

            -

            Since then, several other states have introduced or passed similar legislation that would create a third category of workers between employees and contractors, with limited benefits and protections. Some critics have called these laws a "copy and paste" of Proposition 22, and have warned that they will create a race to the bottom for labor standards. Others have praised these laws as a compromise that balances the needs of workers and businesses in the gig economy.

            cec2833e83
            -
            -
            \ No newline at end of file diff --git a/spaces/tioseFevbu/cartoon-converter/scripts/Download Corel X4 [REPACK] Free.md b/spaces/tioseFevbu/cartoon-converter/scripts/Download Corel X4 [REPACK] Free.md deleted file mode 100644 index 2ae24319684370dd3dd6f56b13e494393161026a..0000000000000000000000000000000000000000 --- a/spaces/tioseFevbu/cartoon-converter/scripts/Download Corel X4 [REPACK] Free.md +++ /dev/null @@ -1,32 +0,0 @@ - -

            How to Download Corel X4 Free and Enjoy Its Benefits

            -

            If you are looking for a professional graphics design software that can help you create stunning logos, ads, and websites, you might want to consider Corel X4. Corel X4 is an intuitive and versatile application that offers a rich set of tools and features for vector illustration, photo editing, typography, and more. But how can you download Corel X4 free and enjoy its benefits? In this article, we will show you how to get Corel X4 free download from reliable sources and what you can do with this amazing software.

            -

            Where to Download Corel X4 Free

            -

            There are several ways to download Corel X4 free, but not all of them are safe and legal. Some websites may offer cracked or pirated versions of Corel X4 that can harm your computer or expose you to malware and viruses. Moreover, using illegal software can get you into trouble with the law and violate the intellectual property rights of the developers. Therefore, we recommend that you avoid such websites and opt for legitimate sources instead.

            -

            Download corel x4 free


            Download Zip ✯✯✯ https://urlcod.com/2uHyjl



            -

            One of the best ways to download Corel X4 free is to use the official website of CorelDRAW Graphics Suite[^1^]. Here, you can get a free trial version of the latest CorelDRAW Graphics Suite subscription, which includes Corel X4 as well as other exclusive features and content. The free trial lasts for 15 days and gives you full access to all the functionalities of the software. You can also enjoy cloud-based collaboration and asset management workflows, a reimagined image adjustments workflow, a tailored learning experience, and subscription-only extras like additional templates and integrated fonts.

            -

            To download Corel X4 free from the official website, you need to follow these steps:

            -
              -
            1. Go to https://www.coreldraw.com/en/pages/coreldraw-x4/ and click on the "Download Free Trial" button.
            2. -
            3. Fill in your name, email address, country, and language preferences and click on "Start Your Free Trial".
            4. -
            5. Download the installer file and run it on your computer.
            6. -
            7. Follow the instructions on the screen to complete the installation process.
            8. -
            9. Launch the software and sign in with your email address and password.
            10. -
            11. Enjoy your free trial of CorelDRAW Graphics Suite subscription with Corel X4.
            12. -
            -

            Another way to download Corel X4 free is to use third-party websites that offer portable versions of the software[^3^]. A portable version is a compressed file that contains all the necessary components of the software without requiring installation. You can simply extract the file to a folder on your computer or a USB drive and run it from there. However, portable versions may not be as stable or secure as the official versions, so use them at your own risk.

            -

            To download Corel X4 free from a third-party website, you need to follow these steps:

            -
              -
            1. Go to https://fixthephoto.com/coreldraw-x4-portable.html and click on the "Free Download" button.
            2. -
            3. Wait for the download to finish and locate the file on your computer.
            4. -
            5. Extract the file to a folder of your choice using a program like WinRAR or 7-Zip.
            6. -
            7. Open the folder and double-click on the "CorelDRAWX4Portable.exe" file.
            8. -
            9. Enjoy your portable version of Corel X4.
            10. -
            -

            What You Can Do with Corel X4

            -

            Corel X4 is a powerful graphics design software that can help you create amazing projects for various purposes. Here are some of the things you can do with Corel X4:

            -
              -
            • Create vector illustrations with precise tools for drawing curves, shapes, lines, and text. You can also use effects like transparency, gradients, shadows, blends, and more to enhance your graphics.
            • -
            • Edit images with advanced tools for cropping, resizing, rotating, adjusting colors, brightness, contrast,

              cec2833e83
              -
              -
              \ No newline at end of file diff --git a/spaces/tioseFevbu/cartoon-converter/scripts/Heredis 12 Pro Francais REPACK Crack.md b/spaces/tioseFevbu/cartoon-converter/scripts/Heredis 12 Pro Francais REPACK Crack.md deleted file mode 100644 index 3de7b54d230fc453bf79e08e03581b1771ce5ffb..0000000000000000000000000000000000000000 --- a/spaces/tioseFevbu/cartoon-converter/scripts/Heredis 12 Pro Francais REPACK Crack.md +++ /dev/null @@ -1,46 +0,0 @@ -
      -
      

              Heredis 12 Pro Francais Crack: How to Download and Install the Best Genealogy Software

              - -

              If you are looking for a powerful and easy-to-use genealogy software, you might want to check out Heredis 12 Pro Francais. This software allows you to create, edit, and share your family tree with just a few clicks. You can also access millions of online records, import and export data, print charts and reports, and much more.

              -

              Heredis 12 Pro Francais Crack


              Download File ✔✔✔ https://urlcod.com/2uHwm1



              - -

              But how can you get Heredis 12 Pro Francais for free? The answer is simple: by using a crack. A crack is a file that modifies the original software to bypass the activation process and unlock all the features. In this article, we will show you how to download and install Heredis 12 Pro Francais crack in a few easy steps.

              - -

              Step 1: Download Heredis 12 Pro Francais Crack

              - -

              The first thing you need to do is to download the crack file from a reliable source. There are many websites that offer cracks for various software, but not all of them are safe and trustworthy. Some of them may contain viruses, malware, or spyware that can harm your computer or steal your personal information.

              - -

              That's why we recommend you to use our website, which is one of the most trusted and popular sources of cracks on the internet. We have tested and verified all the cracks we provide, and we guarantee that they are 100% working and safe. You can download Heredis 12 Pro Francais crack from our website by clicking on the link below:

              - -Download Heredis 12 Pro Francais Crack - -

              The download will start automatically and it will take only a few minutes. The crack file is compressed in a ZIP archive, so you will need to extract it using a program like WinRAR or 7-Zip.

              -

              - -

              Step 2: Install Heredis 12 Pro Francais

              - -

              The next step is to install Heredis 12 Pro Francais on your computer. If you already have it installed, you can skip this step. If not, you can download the setup file from the official website of Heredis:

              - -Download Heredis 12 Pro Francais - -

              Run the setup file and follow the instructions on the screen. Choose the language you prefer (French or English) and accept the terms and conditions. You can also customize the installation options if you want. When the installation is complete, do not launch the program yet.

              - -

              Step 3: Apply Heredis 12 Pro Francais Crack

              - -

              The final step is to apply the crack to Heredis 12 Pro Francais. To do this, you need to copy the crack file that you downloaded in step 1 and paste it into the installation folder of Heredis 12 Pro Francais. The installation folder is usually located in:

              - -C:\Program Files (x86)\Heredis\Heredis 2021 - -

              If you have installed Heredis 12 Pro Francais in a different location, you need to find it and paste the crack file there. You may need to replace the original file if it already exists.

              - -

              Once you have done that, you can launch Heredis 12 Pro Francais from your desktop or start menu. You will see that the program is activated and ready to use. You can enjoy all the features of Heredis 12 Pro Francais without any limitations or restrictions.

              - -

              Conclusion

              - -

              Heredis 12 Pro Francais is one of the best genealogy software available on the market. It offers a comprehensive and user-friendly solution for creating and managing your family tree. However, it is not cheap and it requires an activation code to work properly.

              - -

              If you want to save money and get Heredis 12

              7196e7f11a
              -
              -
              \ No newline at end of file diff --git a/spaces/tioseFevbu/cartoon-converter/scripts/Hote Hote Pyaar Ho Gaya 2 Full Free Movie Download In 720p.md b/spaces/tioseFevbu/cartoon-converter/scripts/Hote Hote Pyaar Ho Gaya 2 Full Free Movie Download In 720p.md deleted file mode 100644 index 5771401f71eb7bfeb8df8dda5175597eeabd8b7a..0000000000000000000000000000000000000000 --- a/spaces/tioseFevbu/cartoon-converter/scripts/Hote Hote Pyaar Ho Gaya 2 Full Free Movie Download In 720p.md +++ /dev/null @@ -1,14 +0,0 @@ -
              -

      Hote Hote Pyaar Ho Gaya 2: A New Bhojpuri Film Starring Pradeep Pandey Chintu and Kajal Raghwani
      

              -

      If you are a fan of Bhojpuri films, you may already have heard of the upcoming film "Hote Hote Pyaar Ho Gaya 2", a sequel to the 1999 release "Hote Hote Pyar Hogaya". The film is a romantic comedy with action and crime elements, directed by Ananjay Raghuraj and produced by Yashi Films Pvt. Ltd. Pradeep Pandey Chintu and Kajal Raghwani play the lead roles and are considered one of the most popular pairs in the Bhojpuri industry. There is also a special appearance by Sahar Afsha, who plays an important role in the story.
      

              -

      The film is about Pinky and Atul, alias Bunty, who fall in love and want to marry. They run into opposition from their families, who have already chosen life partners for them. Pinky's father, a retired colonel, has picked the police inspector Arjun for Pinky, while Atul's father, a wealthy businessman, wants Atul to marry Shobha. How will Pinky and Atul prove their love and overcome these obstacles? Will they end up together, or will they have to part ways? To find out, you will have to watch the film.
      

              -

              Hote Hote Pyaar Ho Gaya 2 full movie download in 720p


              Download ··· https://urlcod.com/2uHyIy



              -

      The film's official trailer was released recently and has generated a lot of excitement and curiosity among fans. The trailer shows some funny and emotional scenes between Chintu and Kajal, as well as some gripping action sequences featuring Jackie Shroff, who plays the police inspector Arjun. The film promises to be an entertaining and heartwarming experience for viewers.
      

              -

The film is scheduled to reach cinemas in 2023. If you want to download the film in high quality, you can search for "Hote Hote Pyaar Ho Gaya 2 full movie download in 720p". You can also visit the film's official website or its social media channels for more information and updates.

              - -

Bhojpuri films are an important part of the Indian film industry, which produces films in various regional languages. Bhojpuri is a language spoken mainly in the western parts of Bihar and the eastern parts of Uttar Pradesh. The Bhojpuri film industry has its main production centres in Lucknow and Patna. [1] [2]

              -

The first Bhojpuri film shot with sound was "Ganga Maiyya Tohe Piyari Chadhaibo" (\"Mother Ganges, I will offer you a yellow sari\"), produced in 1963 by Vishwanath Shahabadi under the banner of Nirmal Pictures and directed by Kundan Kumar. [7] The film was a great success and is regarded as a milestone in Bhojpuri film history. In the following decades, however, Bhojpuri films were produced only sporadically and mostly appealed to a local audience.

              -

It was only in the 2000s that the Bhojpuri film industry experienced an upswing, when new actors such as Ravi Kishan, Manoj Tiwari, Dinesh Lal Yadav (Nirahua), Pawan Singh and Khesari Lal Yadav came to the screen. These stars brought more glamour, action and comedy to Bhojpuri films and made them more popular with the masses. Some of the successful Bhojpuri films of this period were "Sasura Bada Paisawala" (\"My Father-in-Law is Very Rich\", 2004), "Daroga Babu I Love You" (\"Mr. Policeman I Love You\", 2005), "Nirahua Rikshawala" (\"Nirahua the Rickshaw Driver\", 2008) and "Saat Saheliyan" (\"Seven Friends\", 2010).

              -

Today the Bhojpuri film industry is a flourishing business with an estimated value of 2000 crore rupees (around 250 million euros). [3] Bhojpuri films are shown not only in India but also in countries such as Guyana, Trinidad and Tobago, Suriname, Fiji, Mauritius and South Africa, where many people speak Bhojpuri or its creole languages. [4] The Bhojpuri film industry has also produced several female stars, such as Amrapali Dubey, Akshara Singh, Anjana Singh, Monalisa and Rani Chatterjee.

              cec2833e83
              -
              -
              \ No newline at end of file diff --git a/spaces/tjburns/ask_marcus_aurelius/.venv/lib/python3.10/site-packages/setuptools/_vendor/packaging/utils.py b/spaces/tjburns/ask_marcus_aurelius/.venv/lib/python3.10/site-packages/setuptools/_vendor/packaging/utils.py deleted file mode 100644 index bab11b80c60f10a4f3bccb12eb5b17c48a449767..0000000000000000000000000000000000000000 --- a/spaces/tjburns/ask_marcus_aurelius/.venv/lib/python3.10/site-packages/setuptools/_vendor/packaging/utils.py +++ /dev/null @@ -1,136 +0,0 @@ -# This file is dual licensed under the terms of the Apache License, Version -# 2.0, and the BSD License. See the LICENSE file in the root of this repository -# for complete details. - -import re -from typing import FrozenSet, NewType, Tuple, Union, cast - -from .tags import Tag, parse_tag -from .version import InvalidVersion, Version - -BuildTag = Union[Tuple[()], Tuple[int, str]] -NormalizedName = NewType("NormalizedName", str) - - -class InvalidWheelFilename(ValueError): - """ - An invalid wheel filename was found, users should refer to PEP 427. - """ - - -class InvalidSdistFilename(ValueError): - """ - An invalid sdist filename was found, users should refer to the packaging user guide. - """ - - -_canonicalize_regex = re.compile(r"[-_.]+") -# PEP 427: The build number must start with a digit. -_build_tag_regex = re.compile(r"(\d+)(.*)") - - -def canonicalize_name(name: str) -> NormalizedName: - # This is taken from PEP 503. - value = _canonicalize_regex.sub("-", name).lower() - return cast(NormalizedName, value) - - -def canonicalize_version(version: Union[Version, str]) -> str: - """ - This is very similar to Version.__str__, but has one subtle difference - with the way it handles the release segment. - """ - if isinstance(version, str): - try: - parsed = Version(version) - except InvalidVersion: - # Legacy versions cannot be normalized - return version - else: - parsed = version - - parts = [] - - # Epoch - if parsed.epoch != 0: - parts.append(f"{parsed.epoch}!") - - # Release segment - # NB: This strips trailing '.0's to normalize - parts.append(re.sub(r"(\.0)+$", "", ".".join(str(x) for x in parsed.release))) - - # Pre-release - if parsed.pre is not None: - parts.append("".join(str(x) for x in parsed.pre)) - - # Post-release - if parsed.post is not None: - parts.append(f".post{parsed.post}") - - # Development release - if parsed.dev is not None: - parts.append(f".dev{parsed.dev}") - - # Local version segment - if parsed.local is not None: - parts.append(f"+{parsed.local}") - - return "".join(parts) - - -def parse_wheel_filename( - filename: str, -) -> Tuple[NormalizedName, Version, BuildTag, FrozenSet[Tag]]: - if not filename.endswith(".whl"): - raise InvalidWheelFilename( - f"Invalid wheel filename (extension must be '.whl'): {filename}" - ) - - filename = filename[:-4] - dashes = filename.count("-") - if dashes not in (4, 5): - raise InvalidWheelFilename( - f"Invalid wheel filename (wrong number of parts): {filename}" - ) - - parts = filename.split("-", dashes - 2) - name_part = parts[0] - # See PEP 427 for the rules on escaping the project name - if "__" in name_part or re.match(r"^[\w\d._]*$", name_part, re.UNICODE) is None: - raise InvalidWheelFilename(f"Invalid project name: {filename}") - name = canonicalize_name(name_part) - version = Version(parts[1]) - if dashes == 5: - build_part = parts[2] - build_match = _build_tag_regex.match(build_part) - if build_match is None: - raise InvalidWheelFilename( - f"Invalid build number: {build_part} in 
'{filename}'" - ) - build = cast(BuildTag, (int(build_match.group(1)), build_match.group(2))) - else: - build = () - tags = parse_tag(parts[-1]) - return (name, version, build, tags) - - -def parse_sdist_filename(filename: str) -> Tuple[NormalizedName, Version]: - if filename.endswith(".tar.gz"): - file_stem = filename[: -len(".tar.gz")] - elif filename.endswith(".zip"): - file_stem = filename[: -len(".zip")] - else: - raise InvalidSdistFilename( - f"Invalid sdist filename (extension must be '.tar.gz' or '.zip'):" - f" {filename}" - ) - - # We are requiring a PEP 440 version, which cannot contain dashes, - # so we split on the last dash. - name_part, sep, version_part = file_stem.rpartition("-") - if not sep: - raise InvalidSdistFilename(f"Invalid sdist filename: {filename}") - - name = canonicalize_name(name_part) - version = Version(version_part) - return (name, version) diff --git a/spaces/tomofi/MMOCR/configs/textrecog/crnn/README.md b/spaces/tomofi/MMOCR/configs/textrecog/crnn/README.md deleted file mode 100644 index a39b10daaa482625d66c285ba85f551d509776cc..0000000000000000000000000000000000000000 --- a/spaces/tomofi/MMOCR/configs/textrecog/crnn/README.md +++ /dev/null @@ -1,50 +0,0 @@ -# CRNN - ->[An end-to-end trainable neural network for image-based sequence recognition and its application to scene text recognition](https://arxiv.org/abs/1507.05717) - - - -## Abstract - -Image-based sequence recognition has been a long-standing research topic in computer vision. In this paper, we investigate the problem of scene text recognition, which is among the most important and challenging tasks in image-based sequence recognition. A novel neural network architecture, which integrates feature extraction, sequence modeling and transcription into a unified framework, is proposed. Compared with previous systems for scene text recognition, the proposed architecture possesses four distinctive properties: (1) It is end-to-end trainable, in contrast to most of the existing algorithms whose components are separately trained and tuned. (2) It naturally handles sequences in arbitrary lengths, involving no character segmentation or horizontal scale normalization. (3) It is not confined to any predefined lexicon and achieves remarkable performances in both lexicon-free and lexicon-based scene text recognition tasks. (4) It generates an effective yet much smaller model, which is more practical for real-world application scenarios. The experiments on standard benchmarks, including the IIIT-5K, Street View Text and ICDAR datasets, demonstrate the superiority of the proposed algorithm over the prior arts. Moreover, the proposed algorithm performs well in the task of image-based music score recognition, which evidently verifies the generality of it. - -
              - -
              - -## Dataset - -### Train Dataset - -| trainset | instance_num | repeat_num | note | -| :------: | :----------: | :--------: | :---: | -| Syn90k | 8919273 | 1 | synth | - -### Test Dataset - -| testset | instance_num | note | -| :-----: | :----------: | :-------: | -| IIIT5K | 3000 | regular | -| SVT | 647 | regular | -| IC13 | 1015 | regular | -| IC15 | 2077 | irregular | -| SVTP | 645 | irregular | -| CT80 | 288 | irregular | - -## Results and models - -| methods | | Regular Text | | | | Irregular Text | | download | -| :------------------------------------------------------: | :----: | :----------: | :---: | :---: | :---: | :------------: | :---: | :----------------------------------------------------------------------------------------------------------------------------------------------------------------------------: | -| methods | IIIT5K | SVT | IC13 | | IC15 | SVTP | CT80 | -| [CRNN](/configs/textrecog/crnn/crnn_academic_dataset.py) | 80.5 | 81.5 | 86.5 | | 54.1 | 59.1 | 55.6 | [model](https://download.openmmlab.com/mmocr/textrecog/crnn/crnn_academic-a723a1c5.pth) \| [log](https://download.openmmlab.com/mmocr/textrecog/crnn/20210326_111035.log.json) | - -## Citation - -```bibtex -@article{shi2016end, - title={An end-to-end trainable neural network for image-based sequence recognition and its application to scene text recognition}, - author={Shi, Baoguang and Bai, Xiang and Yao, Cong}, - journal={IEEE transactions on pattern analysis and machine intelligence}, - year={2016} -} -``` diff --git a/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/configs/guided_anchoring/ga_fast_r50_caffe_fpn_1x_coco.py b/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/configs/guided_anchoring/ga_fast_r50_caffe_fpn_1x_coco.py deleted file mode 100644 index e15bc29b03d8c612a8921873d456a03126f79aae..0000000000000000000000000000000000000000 --- a/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/configs/guided_anchoring/ga_fast_r50_caffe_fpn_1x_coco.py +++ /dev/null @@ -1,63 +0,0 @@ -_base_ = '../fast_rcnn/fast_rcnn_r50_fpn_1x_coco.py' -model = dict( - pretrained='open-mmlab://detectron2/resnet50_caffe', - backbone=dict( - type='ResNet', - depth=50, - num_stages=4, - out_indices=(0, 1, 2, 3), - frozen_stages=1, - norm_cfg=dict(type='BN', requires_grad=False), - norm_eval=True, - style='caffe'), - roi_head=dict( - bbox_head=dict(bbox_coder=dict(target_stds=[0.05, 0.05, 0.1, 0.1]))), - # model training and testing settings - train_cfg=dict( - rcnn=dict( - assigner=dict(pos_iou_thr=0.6, neg_iou_thr=0.6, min_pos_iou=0.6), - sampler=dict(num=256))), - test_cfg=dict(rcnn=dict(score_thr=1e-3))) -dataset_type = 'CocoDataset' -data_root = 'data/coco/' -img_norm_cfg = dict( - mean=[103.530, 116.280, 123.675], std=[1.0, 1.0, 1.0], to_rgb=False) -train_pipeline = [ - dict(type='LoadImageFromFile'), - dict(type='LoadProposals', num_max_proposals=300), - dict(type='LoadAnnotations', with_bbox=True), - dict(type='Resize', img_scale=(1333, 800), keep_ratio=True), - dict(type='RandomFlip', flip_ratio=0.5), - dict(type='Normalize', **img_norm_cfg), - dict(type='Pad', size_divisor=32), - dict(type='DefaultFormatBundle'), - dict(type='Collect', keys=['img', 'proposals', 'gt_bboxes', 'gt_labels']), -] -test_pipeline = [ - dict(type='LoadImageFromFile'), - dict(type='LoadProposals', num_max_proposals=None), - dict( - type='MultiScaleFlipAug', - img_scale=(1333, 800), - flip=False, - transforms=[ - dict(type='Resize', keep_ratio=True), - dict(type='RandomFlip'), - dict(type='Normalize', **img_norm_cfg), - 
dict(type='Pad', size_divisor=32), - dict(type='ImageToTensor', keys=['img']), - dict(type='Collect', keys=['img', 'proposals']), - ]) -] -data = dict( - train=dict( - proposal_file=data_root + 'proposals/ga_rpn_r50_fpn_1x_train2017.pkl', - pipeline=train_pipeline), - val=dict( - proposal_file=data_root + 'proposals/ga_rpn_r50_fpn_1x_val2017.pkl', - pipeline=test_pipeline), - test=dict( - proposal_file=data_root + 'proposals/ga_rpn_r50_fpn_1x_val2017.pkl', - pipeline=test_pipeline)) -optimizer_config = dict( - _delete_=True, grad_clip=dict(max_norm=35, norm_type=2)) diff --git a/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/configs/point_rend/README.md b/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/configs/point_rend/README.md deleted file mode 100644 index 998e51eb2c81855dbc3c671f348ecd5424291dd1..0000000000000000000000000000000000000000 --- a/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/configs/point_rend/README.md +++ /dev/null @@ -1,23 +0,0 @@ -# PointRend - -## Introduction - - - -```latex -@InProceedings{kirillov2019pointrend, - title={{PointRend}: Image Segmentation as Rendering}, - author={Alexander Kirillov and Yuxin Wu and Kaiming He and Ross Girshick}, - journal={ArXiv:1912.08193}, - year={2019} -} -``` - -## Results and models - -| Backbone | Style | Lr schd | Mem (GB) | Inf time (fps) | box AP | mask AP | Config | Download | -| :-------------: | :-----: | :-----: | :------: | :------------: | :----: | :-----: | :------: | :--------: | -| R-50-FPN | caffe | 1x | 4.6 | | 38.4 | 36.3 | [config](https://github.com/open-mmlab/mmdetection/tree/master/configs/point_rend/point_rend_r50_caffe_fpn_mstrain_1x_coco.py) | [model](http://download.openmmlab.com/mmdetection/v2.0/point_rend/point_rend_r50_caffe_fpn_mstrain_1x_coco/point_rend_r50_caffe_fpn_mstrain_1x_coco-1bcb5fb4.pth) | [log](http://download.openmmlab.com/mmdetection/v2.0/point_rend/point_rend_r50_caffe_fpn_mstrain_1x_coco/point_rend_r50_caffe_fpn_mstrain_1x_coco_20200612_161407.log.json) | -| R-50-FPN | caffe | 3x | 4.6 | | 41.0 | 38.0 | [config](https://github.com/open-mmlab/mmdetection/tree/master/configs/point_rend/point_rend_r50_caffe_fpn_mstrain_3x_coco.py) | [model](http://download.openmmlab.com/mmdetection/v2.0/point_rend/point_rend_r50_caffe_fpn_mstrain_3x_coco/point_rend_r50_caffe_fpn_mstrain_3x_coco-e0ebb6b7.pth) | [log](http://download.openmmlab.com/mmdetection/v2.0/point_rend/point_rend_r50_caffe_fpn_mstrain_3x_coco/point_rend_r50_caffe_fpn_mstrain_3x_coco_20200614_002632.log.json) | - -Note: All models are trained with multi-scale, the input image shorter side is randomly scaled to one of (640, 672, 704, 736, 768, 800). 
diff --git a/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/mmdet/core/anchor/__init__.py b/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/mmdet/core/anchor/__init__.py deleted file mode 100644 index 5838ff3eefb03bc83928fa13848cea9ff8647827..0000000000000000000000000000000000000000 --- a/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/mmdet/core/anchor/__init__.py +++ /dev/null @@ -1,11 +0,0 @@ -from .anchor_generator import (AnchorGenerator, LegacyAnchorGenerator, - YOLOAnchorGenerator) -from .builder import ANCHOR_GENERATORS, build_anchor_generator -from .point_generator import PointGenerator -from .utils import anchor_inside_flags, calc_region, images_to_levels - -__all__ = [ - 'AnchorGenerator', 'LegacyAnchorGenerator', 'anchor_inside_flags', - 'PointGenerator', 'images_to_levels', 'calc_region', - 'build_anchor_generator', 'ANCHOR_GENERATORS', 'YOLOAnchorGenerator' -] diff --git a/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/mmdet/core/bbox/assigners/point_assigner.py b/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/mmdet/core/bbox/assigners/point_assigner.py deleted file mode 100644 index fb8f5e4edc63f4851e2067034c5e67a3558f31bc..0000000000000000000000000000000000000000 --- a/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/mmdet/core/bbox/assigners/point_assigner.py +++ /dev/null @@ -1,133 +0,0 @@ -import torch - -from ..builder import BBOX_ASSIGNERS -from .assign_result import AssignResult -from .base_assigner import BaseAssigner - - -@BBOX_ASSIGNERS.register_module() -class PointAssigner(BaseAssigner): - """Assign a corresponding gt bbox or background to each point. - - Each proposals will be assigned with `0`, or a positive integer - indicating the ground truth index. - - - 0: negative sample, no assigned gt - - positive integer: positive sample, index (1-based) of assigned gt - """ - - def __init__(self, scale=4, pos_num=3): - self.scale = scale - self.pos_num = pos_num - - def assign(self, points, gt_bboxes, gt_bboxes_ignore=None, gt_labels=None): - """Assign gt to points. - - This method assign a gt bbox to every points set, each points set - will be assigned with the background_label (-1), or a label number. - -1 is background, and semi-positive number is the index (0-based) of - assigned gt. - The assignment is done in following steps, the order matters. - - 1. assign every points to the background_label (-1) - 2. A point is assigned to some gt bbox if - (i) the point is within the k closest points to the gt bbox - (ii) the distance between this point and the gt is smaller than - other gt bboxes - - Args: - points (Tensor): points to be assigned, shape(n, 3) while last - dimension stands for (x, y, stride). - gt_bboxes (Tensor): Groundtruth boxes, shape (k, 4). - gt_bboxes_ignore (Tensor, optional): Ground truth bboxes that are - labelled as `ignored`, e.g., crowd boxes in COCO. - NOTE: currently unused. - gt_labels (Tensor, optional): Label of gt_bboxes, shape (k, ). - - Returns: - :obj:`AssignResult`: The assign result. 
- """ - num_points = points.shape[0] - num_gts = gt_bboxes.shape[0] - - if num_gts == 0 or num_points == 0: - # If no truth assign everything to the background - assigned_gt_inds = points.new_full((num_points, ), - 0, - dtype=torch.long) - if gt_labels is None: - assigned_labels = None - else: - assigned_labels = points.new_full((num_points, ), - -1, - dtype=torch.long) - return AssignResult( - num_gts, assigned_gt_inds, None, labels=assigned_labels) - - points_xy = points[:, :2] - points_stride = points[:, 2] - points_lvl = torch.log2( - points_stride).int() # [3...,4...,5...,6...,7...] - lvl_min, lvl_max = points_lvl.min(), points_lvl.max() - - # assign gt box - gt_bboxes_xy = (gt_bboxes[:, :2] + gt_bboxes[:, 2:]) / 2 - gt_bboxes_wh = (gt_bboxes[:, 2:] - gt_bboxes[:, :2]).clamp(min=1e-6) - scale = self.scale - gt_bboxes_lvl = ((torch.log2(gt_bboxes_wh[:, 0] / scale) + - torch.log2(gt_bboxes_wh[:, 1] / scale)) / 2).int() - gt_bboxes_lvl = torch.clamp(gt_bboxes_lvl, min=lvl_min, max=lvl_max) - - # stores the assigned gt index of each point - assigned_gt_inds = points.new_zeros((num_points, ), dtype=torch.long) - # stores the assigned gt dist (to this point) of each point - assigned_gt_dist = points.new_full((num_points, ), float('inf')) - points_range = torch.arange(points.shape[0]) - - for idx in range(num_gts): - gt_lvl = gt_bboxes_lvl[idx] - # get the index of points in this level - lvl_idx = gt_lvl == points_lvl - points_index = points_range[lvl_idx] - # get the points in this level - lvl_points = points_xy[lvl_idx, :] - # get the center point of gt - gt_point = gt_bboxes_xy[[idx], :] - # get width and height of gt - gt_wh = gt_bboxes_wh[[idx], :] - # compute the distance between gt center and - # all points in this level - points_gt_dist = ((lvl_points - gt_point) / gt_wh).norm(dim=1) - # find the nearest k points to gt center in this level - min_dist, min_dist_index = torch.topk( - points_gt_dist, self.pos_num, largest=False) - # the index of nearest k points to gt center in this level - min_dist_points_index = points_index[min_dist_index] - # The less_than_recorded_index stores the index - # of min_dist that is less then the assigned_gt_dist. Where - # assigned_gt_dist stores the dist from previous assigned gt - # (if exist) to each point. - less_than_recorded_index = min_dist < assigned_gt_dist[ - min_dist_points_index] - # The min_dist_points_index stores the index of points satisfy: - # (1) it is k nearest to current gt center in this level. - # (2) it is closer to current gt center than other gt center. 
- min_dist_points_index = min_dist_points_index[ - less_than_recorded_index] - # assign the result - assigned_gt_inds[min_dist_points_index] = idx + 1 - assigned_gt_dist[min_dist_points_index] = min_dist[ - less_than_recorded_index] - - if gt_labels is not None: - assigned_labels = assigned_gt_inds.new_full((num_points, ), -1) - pos_inds = torch.nonzero( - assigned_gt_inds > 0, as_tuple=False).squeeze() - if pos_inds.numel() > 0: - assigned_labels[pos_inds] = gt_labels[ - assigned_gt_inds[pos_inds] - 1] - else: - assigned_labels = None - - return AssignResult( - num_gts, assigned_gt_inds, None, labels=assigned_labels) diff --git a/spaces/tornadoslims/instruct-pix2pix/scripts/download_data.sh b/spaces/tornadoslims/instruct-pix2pix/scripts/download_data.sh deleted file mode 100644 index 921f3c536cefcd685b832c7163c5c6d06064a87a..0000000000000000000000000000000000000000 --- a/spaces/tornadoslims/instruct-pix2pix/scripts/download_data.sh +++ /dev/null @@ -1,27 +0,0 @@ -#!/bin/bash - -# Make data folder relative to script location -SCRIPT_DIR=$( cd -- "$( dirname -- "${BASH_SOURCE[0]}" )" &> /dev/null && pwd ) - -mkdir -p $SCRIPT_DIR/../data - -# Copy text datasets -wget -q --show-progress http://instruct-pix2pix.eecs.berkeley.edu/gpt-generated-prompts.jsonl -O $SCRIPT_DIR/../data/gpt-generated-prompts.jsonl -wget -q --show-progress http://instruct-pix2pix.eecs.berkeley.edu/human-written-prompts.jsonl -O $SCRIPT_DIR/../data/human-written-prompts.jsonl - -# If dataset name isn't provided, exit. -if [ -z $1 ] -then - exit 0 -fi - -# Copy dataset files -mkdir $SCRIPT_DIR/../data/$1 -wget -A zip,json -R "index.html*" -q --show-progress -r --no-parent http://instruct-pix2pix.eecs.berkeley.edu/$1/ -nd -P $SCRIPT_DIR/../data/$1/ - -# Unzip to folders -unzip $SCRIPT_DIR/../data/$1/\*.zip -d $SCRIPT_DIR/../data/$1/ - -# Cleanup -rm -f $SCRIPT_DIR/../data/$1/*.zip -rm -f $SCRIPT_DIR/../data/$1/*.html diff --git a/spaces/ulysses115/vits-models/app.py b/spaces/ulysses115/vits-models/app.py deleted file mode 100644 index a8afe878524bebfd8565de75b015d2ae32e06397..0000000000000000000000000000000000000000 --- a/spaces/ulysses115/vits-models/app.py +++ /dev/null @@ -1,263 +0,0 @@ -# coding=utf-8 -import os -import re -import argparse -import utils -import commons -import json -import torch -import gradio as gr -from models import SynthesizerTrn -from text import text_to_sequence, _clean_text -from torch import no_grad, LongTensor -import gradio.processing_utils as gr_processing_utils -import logging -logging.getLogger('numba').setLevel(logging.WARNING) -limitation = os.getenv("SYSTEM") == "spaces" # limit text and audio length in huggingface spaces - -hps_ms = utils.get_hparams_from_file(r'config/config.json') - -audio_postprocess_ori = gr.Audio.postprocess - -def audio_postprocess(self, y): - data = audio_postprocess_ori(self, y) - if data is None: - return None - return gr_processing_utils.encode_url_or_file_to_base64(data["name"]) - - -gr.Audio.postprocess = audio_postprocess - -def get_text(text, hps, is_symbol): - text_norm, clean_text = text_to_sequence(text, hps.symbols, [] if is_symbol else hps.data.text_cleaners) - if hps.data.add_blank: - text_norm = commons.intersperse(text_norm, 0) - text_norm = LongTensor(text_norm) - return text_norm, clean_text - -def create_tts_fn(net_g_ms, speaker_id): - def tts_fn(text, language, noise_scale, noise_scale_w, length_scale, is_symbol): - text = text.replace('\n', ' ').replace('\r', '').replace(" ", "") - if limitation: - text_len = len(re.sub("\[([A-Z]{2})\]", "", 
text)) - max_len = 100 - if text_len > max_len: - return "Error: Text is too long", None - if not is_symbol: - if language == 0: - text = f"[ZH]{text}[ZH]" - elif language == 1: - text = f"[JA]{text}[JA]" - else: - text = f"{text}" - stn_tst, clean_text = get_text(text, hps_ms, is_symbol) - with no_grad(): - x_tst = stn_tst.unsqueeze(0).to(device) - x_tst_lengths = LongTensor([stn_tst.size(0)]).to(device) - sid = LongTensor([speaker_id]).to(device) - audio = net_g_ms.infer(x_tst, x_tst_lengths, sid=sid, noise_scale=noise_scale, noise_scale_w=noise_scale_w, - length_scale=length_scale)[0][0, 0].data.cpu().float().numpy() - - return "Success", (22050, audio) - return tts_fn - -def create_to_symbol_fn(hps): - def to_symbol_fn(is_symbol_input, input_text, temp_text, temp_lang): - if temp_lang == 'Chinese': - clean_text = f'[ZH]{input_text}[ZH]' - elif temp_lang == "Japanese": - clean_text = f'[JA]{input_text}[JA]' - else: - clean_text = input_text - return (_clean_text(clean_text, hps.data.text_cleaners), input_text) if is_symbol_input else (temp_text, temp_text) - - return to_symbol_fn -def change_lang(language): - if language == 0: - return 0.6, 0.668, 1.2, "Chinese" - elif language == 1: - return 0.6, 0.668, 1, "Japanese" - else: - return 0.6, 0.668, 1, "Mix" - -download_audio_js = """ -() =>{{ - let root = document.querySelector("body > gradio-app"); - if (root.shadowRoot != null) - root = root.shadowRoot; - let audio = root.querySelector("#tts-audio-{audio_id}").querySelector("audio"); - let text = root.querySelector("#input-text-{audio_id}").querySelector("textarea"); - if (audio == undefined) - return; - text = text.value; - if (text == undefined) - text = Math.floor(Math.random()*100000000); - audio = audio.src; - let oA = document.createElement("a"); - oA.download = text.substr(0, 20)+'.wav'; - oA.href = audio; - document.body.appendChild(oA); - oA.click(); - oA.remove(); -}} -""" - -if __name__ == '__main__': - parser = argparse.ArgumentParser() - parser.add_argument('--device', type=str, default='cpu') - parser.add_argument("--share", action="store_true", default=False, help="share gradio app") - args = parser.parse_args() - device = torch.device(args.device) - - models = [] - with open("pretrained_models/info.json", "r", encoding="utf-8") as f: - models_info = json.load(f) - for i, info in models_info.items(): - sid = info['sid'] - name_en = info['name_en'] - name_zh = info['name_zh'] - title = info['title'] - cover = f"pretrained_models/{i}/{info['cover']}" - example = info['example'] - language = info['language'] - net_g_ms = SynthesizerTrn( - len(hps_ms.symbols), - hps_ms.data.filter_length // 2 + 1, - hps_ms.train.segment_size // hps_ms.data.hop_length, - n_speakers=hps_ms.data.n_speakers if info['type'] == "multi" else 0, - **hps_ms.model) - utils.load_checkpoint(f'pretrained_models/{i}/{i}.pth', net_g_ms, None) - _ = net_g_ms.eval().to(device) - models.append((sid, name_en, name_zh, title, cover, example, language, net_g_ms, create_tts_fn(net_g_ms, sid), create_to_symbol_fn(hps_ms))) - with gr.Blocks() as app: - gr.Markdown( - "#
              vits-models\n" - "##
              Please do not generate content that could infringe upon the rights or cause harm to individuals or organizations.\n" - "##
              ·请不要生成会对个人以及组织造成侵害的内容\n" - "![visitor badge](https://visitor-badge.glitch.me/badge?page_id=sayashi.vits-models)\n\n" - "[Open In Colab]" - "(https://colab.research.google.com/drive/10QOk9NPgoKZUXkIhhuVaZ7SYra1MPMKH?usp=share_link)" - " without queue and length limitation.(无需等待队列,并且没有长度限制)\n\n" - "[Finetune your own model](https://github.com/SayaSS/vits-finetuning)" - ) - - with gr.Tabs(): - with gr.TabItem("EN"): - for (sid, name_en, name_zh, title, cover, example, language, net_g_ms, tts_fn, to_symbol_fn) in models: - with gr.TabItem(name_en): - with gr.Row(): - gr.Markdown( - '
              ' - f'{title}' - f'' if cover else "" - '
              ' - ) - with gr.Row(): - with gr.Column(): - input_text = gr.Textbox(label="Text (100 words limitation)", lines=5, value=example, elem_id=f"input-text-en-{name_en.replace(' ','')}") - lang = gr.Dropdown(label="Language", choices=["Chinese", "Japanese", "Mix(wrap the Chinese text with [ZH][ZH], wrap the Japanese text with [JA][JA])"], - type="index", value=language) - temp_lang = gr.Variable(value=language) - with gr.Accordion(label="Advanced Options", open=False): - temp_text_var = gr.Variable() - symbol_input = gr.Checkbox(value=False, label="Symbol input") - symbol_list = gr.Dataset(label="Symbol list", components=[input_text], - samples=[[x] for x in hps_ms.symbols]) - symbol_list_json = gr.Json(value=hps_ms.symbols, visible=False) - btn = gr.Button(value="Generate", variant="primary") - with gr.Row(): - ns = gr.Slider(label="noise_scale", minimum=0.1, maximum=1.0, step=0.1, value=0.6, interactive=True) - nsw = gr.Slider(label="noise_scale_w", minimum=0.1, maximum=1.0, step=0.1, value=0.668, interactive=True) - ls = gr.Slider(label="length_scale", minimum=0.1, maximum=2.0, step=0.1, value=1.2 if language=="Chinese" else 1, interactive=True) - with gr.Column(): - o1 = gr.Textbox(label="Output Message") - o2 = gr.Audio(label="Output Audio", elem_id=f"tts-audio-en-{name_en.replace(' ','')}") - download = gr.Button("Download Audio") - btn.click(tts_fn, inputs=[input_text, lang, ns, nsw, ls, symbol_input], outputs=[o1, o2]) - download.click(None, [], [], _js=download_audio_js.format(audio_id=f"en-{name_en.replace(' ', '')}")) - lang.change(change_lang, inputs=[lang], outputs=[ns, nsw, ls, temp_lang]) - symbol_input.change( - to_symbol_fn, - [symbol_input, input_text, temp_text_var, temp_lang], - [input_text, temp_text_var] - ) - symbol_list.click(None, [symbol_list, symbol_list_json], [input_text], - _js=f""" - (i,symbols) => {{ - let root = document.querySelector("body > gradio-app"); - if (root.shadowRoot != null) - root = root.shadowRoot; - let text_input = root.querySelector("#input-text-en-{name_en.replace(' ', '')}").querySelector("textarea"); - let startPos = text_input.selectionStart; - let endPos = text_input.selectionEnd; - let oldTxt = text_input.value; - let result = oldTxt.substring(0, startPos) + symbols[i] + oldTxt.substring(endPos); - text_input.value = result; - let x = window.scrollX, y = window.scrollY; - text_input.focus(); - text_input.selectionStart = startPos + symbols[i].length; - text_input.selectionEnd = startPos + symbols[i].length; - text_input.blur(); - window.scrollTo(x, y); - return text_input.value; - }}""") - with gr.TabItem("中文"): - for (sid, name_en, name_zh, title, cover, example, language, net_g_ms, tts_fn, to_symbol_fn) in models: - with gr.TabItem(name_zh): - with gr.Row(): - gr.Markdown( - '
              ' - f'{title}' - f'' if cover else "" - '
              ' - ) - with gr.Row(): - with gr.Column(): - input_text = gr.Textbox(label="文本 (100字上限)", lines=5, value=example, elem_id=f"input-text-zh-{name_zh}") - lang = gr.Dropdown(label="语言", choices=["中文", "日语", "中日混合(中文用[ZH][ZH]包裹起来,日文用[JA][JA]包裹起来)"], - type="index", value="中文"if language == "Chinese" else "日语") - temp_lang = gr.Variable(value=language) - with gr.Accordion(label="高级选项", open=False): - temp_text_var = gr.Variable() - symbol_input = gr.Checkbox(value=False, label="符号输入") - symbol_list = gr.Dataset(label="符号列表", components=[input_text], - samples=[[x] for x in hps_ms.symbols]) - symbol_list_json = gr.Json(value=hps_ms.symbols, visible=False) - btn = gr.Button(value="生成", variant="primary") - with gr.Row(): - ns = gr.Slider(label="控制感情变化程度", minimum=0.1, maximum=1.0, step=0.1, value=0.6, interactive=True) - nsw = gr.Slider(label="控制音素发音长度", minimum=0.1, maximum=1.0, step=0.1, value=0.668, interactive=True) - ls = gr.Slider(label="控制整体语速", minimum=0.1, maximum=2.0, step=0.1, value=1.2 if language=="Chinese" else 1, interactive=True) - with gr.Column(): - o1 = gr.Textbox(label="输出信息") - o2 = gr.Audio(label="输出音频", elem_id=f"tts-audio-zh-{name_zh}") - download = gr.Button("下载音频") - btn.click(tts_fn, inputs=[input_text, lang, ns, nsw, ls, symbol_input], outputs=[o1, o2]) - download.click(None, [], [], _js=download_audio_js.format(audio_id=f"zh-{name_zh}")) - lang.change(change_lang, inputs=[lang], outputs=[ns, nsw, ls]) - symbol_input.change( - to_symbol_fn, - [symbol_input, input_text, temp_text_var, temp_lang], - [input_text, temp_text_var] - ) - symbol_list.click(None, [symbol_list, symbol_list_json], [input_text], - _js=f""" - (i,symbols) => {{ - let root = document.querySelector("body > gradio-app"); - if (root.shadowRoot != null) - root = root.shadowRoot; - let text_input = root.querySelector("#input-text-zh-{name_zh}").querySelector("textarea"); - let startPos = text_input.selectionStart; - let endPos = text_input.selectionEnd; - let oldTxt = text_input.value; - let result = oldTxt.substring(0, startPos) + symbols[i] + oldTxt.substring(endPos); - text_input.value = result; - let x = window.scrollX, y = window.scrollY; - text_input.focus(); - text_input.selectionStart = startPos + symbols[i].length; - text_input.selectionEnd = startPos + symbols[i].length; - text_input.blur(); - window.scrollTo(x, y); - return text_input.value; - }}""") - app.queue(concurrency_count=1).launch(show_api=False, share=args.share) diff --git a/spaces/uohna/nlp-web-app/README.md b/spaces/uohna/nlp-web-app/README.md deleted file mode 100644 index 9e07fe2047fe6e95e80cfa42783347097f1dc38e..0000000000000000000000000000000000000000 --- a/spaces/uohna/nlp-web-app/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Nlp Web App -emoji: 📉 -colorFrom: green -colorTo: purple -sdk: streamlit -sdk_version: 1.19.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/vaishanthr/Simultaneous-Segmented-Depth-Prediction/yolov8/docs/modes/index.md b/spaces/vaishanthr/Simultaneous-Segmented-Depth-Prediction/yolov8/docs/modes/index.md deleted file mode 100644 index 5a00afa5d39590995cb5c516243b3b7f65e41a4f..0000000000000000000000000000000000000000 --- a/spaces/vaishanthr/Simultaneous-Segmented-Depth-Prediction/yolov8/docs/modes/index.md +++ /dev/null @@ -1,68 +0,0 @@ ---- -comments: true -description: Use Ultralytics YOLOv8 Modes (Train, Val, Predict, Export, Track, Benchmark) to train, validate, predict, 
track, export or benchmark. -keywords: yolov8, yolo, ultralytics, training, validation, prediction, export, tracking, benchmarking, real-time object detection, object tracking ---- - -# Ultralytics YOLOv8 Modes - - - -Ultralytics YOLOv8 supports several **modes** that can be used to perform different tasks. These modes are: - -- **Train**: For training a YOLOv8 model on a custom dataset. -- **Val**: For validating a YOLOv8 model after it has been trained. -- **Predict**: For making predictions using a trained YOLOv8 model on new images or videos. -- **Export**: For exporting a YOLOv8 model to a format that can be used for deployment. -- **Track**: For tracking objects in real-time using a YOLOv8 model. -- **Benchmark**: For benchmarking YOLOv8 exports (ONNX, TensorRT, etc.) speed and accuracy. - -## [Train](train.md) - -Train mode is used for training a YOLOv8 model on a custom dataset. In this mode, the model is trained using the -specified dataset and hyperparameters. The training process involves optimizing the model's parameters so that it can -accurately predict the classes and locations of objects in an image. - -[Train Examples](train.md){ .md-button .md-button--primary} - -## [Val](val.md) - -Val mode is used for validating a YOLOv8 model after it has been trained. In this mode, the model is evaluated on a -validation set to measure its accuracy and generalization performance. This mode can be used to tune the hyperparameters -of the model to improve its performance. - -[Val Examples](val.md){ .md-button .md-button--primary} - -## [Predict](predict.md) - -Predict mode is used for making predictions using a trained YOLOv8 model on new images or videos. In this mode, the -model is loaded from a checkpoint file, and the user can provide images or videos to perform inference. The model -predicts the classes and locations of objects in the input images or videos. - -[Predict Examples](predict.md){ .md-button .md-button--primary} - -## [Export](export.md) - -Export mode is used for exporting a YOLOv8 model to a format that can be used for deployment. In this mode, the model is -converted to a format that can be used by other software applications or hardware devices. This mode is useful when -deploying the model to production environments. - -[Export Examples](export.md){ .md-button .md-button--primary} - -## [Track](track.md) - -Track mode is used for tracking objects in real-time using a YOLOv8 model. In this mode, the model is loaded from a -checkpoint file, and the user can provide a live video stream to perform real-time object tracking. This mode is useful -for applications such as surveillance systems or self-driving cars. - -[Track Examples](track.md){ .md-button .md-button--primary} - -## [Benchmark](benchmark.md) - -Benchmark mode is used to profile the speed and accuracy of various export formats for YOLOv8. The benchmarks provide -information on the size of the exported format, its `mAP50-95` metrics (for object detection, segmentation and pose) -or `accuracy_top5` metrics (for classification), and the inference time in milliseconds per image across various export -formats like ONNX, OpenVINO, TensorRT and others. This information can help users choose the optimal export format for -their specific use case based on their requirements for speed and accuracy. 
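
As a quick illustration of how these modes fit together, here is a minimal sketch assuming the standard `ultralytics` Python package; the checkpoint name, dataset YAML and media paths below are placeholders rather than files referenced by this document.

```python
from ultralytics import YOLO

# Load a pretrained detection checkpoint (name is illustrative).
model = YOLO("yolov8n.pt")

# Train: fit the model on a dataset described by a YAML file (placeholder path).
model.train(data="coco128.yaml", epochs=3, imgsz=640)

# Val: evaluate the trained weights on the validation split.
metrics = model.val()

# Predict: run inference on a new image and keep the results object.
results = model.predict("path/to/image.jpg")

# Track: follow objects across the frames of a video source.
model.track(source="path/to/video.mp4")

# Export: convert the model to a deployment format such as ONNX.
model.export(format="onnx")
```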
- -[Benchmark Examples](benchmark.md){ .md-button .md-button--primary} \ No newline at end of file diff --git a/spaces/vinay123/panoptic-segment-anything/GroundingDINO/groundingdino/models/GroundingDINO/csrc/MsDeformAttn/ms_deform_attn.h b/spaces/vinay123/panoptic-segment-anything/GroundingDINO/groundingdino/models/GroundingDINO/csrc/MsDeformAttn/ms_deform_attn.h deleted file mode 100644 index c7408eba007b424194618baa63726657e36875e3..0000000000000000000000000000000000000000 --- a/spaces/vinay123/panoptic-segment-anything/GroundingDINO/groundingdino/models/GroundingDINO/csrc/MsDeformAttn/ms_deform_attn.h +++ /dev/null @@ -1,64 +0,0 @@ -/*! -************************************************************************************************** -* Deformable DETR -* Copyright (c) 2020 SenseTime. All Rights Reserved. -* Licensed under the Apache License, Version 2.0 [see LICENSE for details] -************************************************************************************************** -* Modified from https://github.com/chengdazhi/Deformable-Convolution-V2-PyTorch/tree/pytorch_1.0.0 -************************************************************************************************** -*/ - -#pragma once - -#include "ms_deform_attn_cpu.h" - -#ifdef WITH_CUDA -#include "ms_deform_attn_cuda.h" -#endif - -namespace groundingdino { - -at::Tensor -ms_deform_attn_forward( - const at::Tensor &value, - const at::Tensor &spatial_shapes, - const at::Tensor &level_start_index, - const at::Tensor &sampling_loc, - const at::Tensor &attn_weight, - const int im2col_step) -{ - if (value.type().is_cuda()) - { -#ifdef WITH_CUDA - return ms_deform_attn_cuda_forward( - value, spatial_shapes, level_start_index, sampling_loc, attn_weight, im2col_step); -#else - AT_ERROR("Not compiled with GPU support"); -#endif - } - AT_ERROR("Not implemented on the CPU"); -} - -std::vector -ms_deform_attn_backward( - const at::Tensor &value, - const at::Tensor &spatial_shapes, - const at::Tensor &level_start_index, - const at::Tensor &sampling_loc, - const at::Tensor &attn_weight, - const at::Tensor &grad_output, - const int im2col_step) -{ - if (value.type().is_cuda()) - { -#ifdef WITH_CUDA - return ms_deform_attn_cuda_backward( - value, spatial_shapes, level_start_index, sampling_loc, attn_weight, grad_output, im2col_step); -#else - AT_ERROR("Not compiled with GPU support"); -#endif - } - AT_ERROR("Not implemented on the CPU"); -} - -} // namespace groundingdino \ No newline at end of file diff --git a/spaces/vinay123/panoptic-segment-anything/GroundingDINO/groundingdino/models/GroundingDINO/csrc/MsDeformAttn/ms_deform_attn_cpu.cpp b/spaces/vinay123/panoptic-segment-anything/GroundingDINO/groundingdino/models/GroundingDINO/csrc/MsDeformAttn/ms_deform_attn_cpu.cpp deleted file mode 100644 index 551243fdadfd1682b5dc6628623b67a79b3f6c74..0000000000000000000000000000000000000000 --- a/spaces/vinay123/panoptic-segment-anything/GroundingDINO/groundingdino/models/GroundingDINO/csrc/MsDeformAttn/ms_deform_attn_cpu.cpp +++ /dev/null @@ -1,43 +0,0 @@ -/*! -************************************************************************************************** -* Deformable DETR -* Copyright (c) 2020 SenseTime. All Rights Reserved. 
-* Licensed under the Apache License, Version 2.0 [see LICENSE for details] -************************************************************************************************** -* Modified from https://github.com/chengdazhi/Deformable-Convolution-V2-PyTorch/tree/pytorch_1.0.0 -************************************************************************************************** -*/ - -#include - -#include -#include - -namespace groundingdino { - -at::Tensor -ms_deform_attn_cpu_forward( - const at::Tensor &value, - const at::Tensor &spatial_shapes, - const at::Tensor &level_start_index, - const at::Tensor &sampling_loc, - const at::Tensor &attn_weight, - const int im2col_step) -{ - AT_ERROR("Not implement on cpu"); -} - -std::vector -ms_deform_attn_cpu_backward( - const at::Tensor &value, - const at::Tensor &spatial_shapes, - const at::Tensor &level_start_index, - const at::Tensor &sampling_loc, - const at::Tensor &attn_weight, - const at::Tensor &grad_output, - const int im2col_step) -{ - AT_ERROR("Not implement on cpu"); -} - -} // namespace groundingdino diff --git a/spaces/vumichien/Generate_human_motion/pyrender/pyrender/platforms/egl.py b/spaces/vumichien/Generate_human_motion/pyrender/pyrender/platforms/egl.py deleted file mode 100644 index ae2478d29c9a538c53ad83fa31f8e2277cd897c8..0000000000000000000000000000000000000000 --- a/spaces/vumichien/Generate_human_motion/pyrender/pyrender/platforms/egl.py +++ /dev/null @@ -1,219 +0,0 @@ -import ctypes -import os - -import OpenGL.platform - -from .base import Platform - -EGL_PLATFORM_DEVICE_EXT = 0x313F -EGL_DRM_DEVICE_FILE_EXT = 0x3233 - - -def _ensure_egl_loaded(): - plugin = OpenGL.platform.PlatformPlugin.by_name('egl') - if plugin is None: - raise RuntimeError("EGL platform plugin is not available.") - - plugin_class = plugin.load() - plugin.loaded = True - # create instance of this platform implementation - plugin = plugin_class() - - plugin.install(vars(OpenGL.platform)) - - -_ensure_egl_loaded() -from OpenGL import EGL as egl - - -def _get_egl_func(func_name, res_type, *arg_types): - address = egl.eglGetProcAddress(func_name) - if address is None: - return None - - proto = ctypes.CFUNCTYPE(res_type) - proto.argtypes = arg_types - func = proto(address) - return func - - -def _get_egl_struct(struct_name): - from OpenGL._opaque import opaque_pointer_cls - return opaque_pointer_cls(struct_name) - - -# These are not defined in PyOpenGL by default. -_EGLDeviceEXT = _get_egl_struct('EGLDeviceEXT') -_eglGetPlatformDisplayEXT = _get_egl_func('eglGetPlatformDisplayEXT', egl.EGLDisplay) -_eglQueryDevicesEXT = _get_egl_func('eglQueryDevicesEXT', egl.EGLBoolean) -_eglQueryDeviceStringEXT = _get_egl_func('eglQueryDeviceStringEXT', ctypes.c_char_p) - - -def query_devices(): - if _eglQueryDevicesEXT is None: - raise RuntimeError("EGL query extension is not loaded or is not supported.") - - num_devices = egl.EGLint() - success = _eglQueryDevicesEXT(0, None, ctypes.pointer(num_devices)) - if not success or num_devices.value < 1: - return [] - - devices = (_EGLDeviceEXT * num_devices.value)() # array of size num_devices - success = _eglQueryDevicesEXT(num_devices.value, devices, ctypes.pointer(num_devices)) - if not success or num_devices.value < 1: - return [] - - return [EGLDevice(devices[i]) for i in range(num_devices.value)] - - -def get_default_device(): - # Fall back to not using query extension. 
- if _eglQueryDevicesEXT is None: - return EGLDevice(None) - - return query_devices()[0] - - -def get_device_by_index(device_id): - if _eglQueryDevicesEXT is None and device_id == 0: - return get_default_device() - - devices = query_devices() - if device_id >= len(devices): - raise ValueError('Invalid device ID ({})'.format(device_id, len(devices))) - return devices[device_id] - - -class EGLDevice: - - def __init__(self, display=None): - self._display = display - - def get_display(self): - if self._display is None: - return egl.eglGetDisplay(egl.EGL_DEFAULT_DISPLAY) - - return _eglGetPlatformDisplayEXT(EGL_PLATFORM_DEVICE_EXT, self._display, None) - - @property - def name(self): - if self._display is None: - return 'default' - - name = _eglQueryDeviceStringEXT(self._display, EGL_DRM_DEVICE_FILE_EXT) - if name is None: - return None - - return name.decode('ascii') - - def __repr__(self): - return "".format(self.name) - - -class EGLPlatform(Platform): - """Renders using EGL. - """ - - def __init__(self, viewport_width, viewport_height, device: EGLDevice = None): - super(EGLPlatform, self).__init__(viewport_width, viewport_height) - if device is None: - device = get_default_device() - - self._egl_device = device - self._egl_display = None - self._egl_context = None - - def init_context(self): - _ensure_egl_loaded() - - from OpenGL.EGL import ( - EGL_SURFACE_TYPE, EGL_PBUFFER_BIT, EGL_BLUE_SIZE, - EGL_RED_SIZE, EGL_GREEN_SIZE, EGL_DEPTH_SIZE, - EGL_COLOR_BUFFER_TYPE, EGL_RGB_BUFFER, - EGL_RENDERABLE_TYPE, EGL_OPENGL_BIT, EGL_CONFORMANT, - EGL_NONE, EGL_DEFAULT_DISPLAY, EGL_NO_CONTEXT, - EGL_OPENGL_API, EGL_CONTEXT_MAJOR_VERSION, - EGL_CONTEXT_MINOR_VERSION, - EGL_CONTEXT_OPENGL_PROFILE_MASK, - EGL_CONTEXT_OPENGL_CORE_PROFILE_BIT, - eglGetDisplay, eglInitialize, eglChooseConfig, - eglBindAPI, eglCreateContext, EGLConfig - ) - from OpenGL import arrays - - config_attributes = arrays.GLintArray.asArray([ - EGL_SURFACE_TYPE, EGL_PBUFFER_BIT, - EGL_BLUE_SIZE, 8, - EGL_RED_SIZE, 8, - EGL_GREEN_SIZE, 8, - EGL_DEPTH_SIZE, 24, - EGL_COLOR_BUFFER_TYPE, EGL_RGB_BUFFER, - EGL_RENDERABLE_TYPE, EGL_OPENGL_BIT, - EGL_CONFORMANT, EGL_OPENGL_BIT, - EGL_NONE - ]) - context_attributes = arrays.GLintArray.asArray([ - EGL_CONTEXT_MAJOR_VERSION, 4, - EGL_CONTEXT_MINOR_VERSION, 1, - EGL_CONTEXT_OPENGL_PROFILE_MASK, - EGL_CONTEXT_OPENGL_CORE_PROFILE_BIT, - EGL_NONE - ]) - major, minor = ctypes.c_long(), ctypes.c_long() - num_configs = ctypes.c_long() - configs = (EGLConfig * 1)() - - # Cache DISPLAY if necessary and get an off-screen EGL display - orig_dpy = None - if 'DISPLAY' in os.environ: - orig_dpy = os.environ['DISPLAY'] - del os.environ['DISPLAY'] - - self._egl_display = self._egl_device.get_display() - if orig_dpy is not None: - os.environ['DISPLAY'] = orig_dpy - - # Initialize EGL - assert eglInitialize(self._egl_display, major, minor) - assert eglChooseConfig( - self._egl_display, config_attributes, configs, 1, num_configs - ) - - # Bind EGL to the OpenGL API - assert eglBindAPI(EGL_OPENGL_API) - - # Create an EGL context - self._egl_context = eglCreateContext( - self._egl_display, configs[0], - EGL_NO_CONTEXT, context_attributes - ) - - # Make it current - self.make_current() - - def make_current(self): - from OpenGL.EGL import eglMakeCurrent, EGL_NO_SURFACE - assert eglMakeCurrent( - self._egl_display, EGL_NO_SURFACE, EGL_NO_SURFACE, - self._egl_context - ) - - def make_uncurrent(self): - """Make the OpenGL context uncurrent. 
- """ - pass - - def delete_context(self): - from OpenGL.EGL import eglDestroyContext, eglTerminate - if self._egl_display is not None: - if self._egl_context is not None: - eglDestroyContext(self._egl_display, self._egl_context) - self._egl_context = None - eglTerminate(self._egl_display) - self._egl_display = None - - def supports_framebuffers(self): - return True - - -__all__ = ['EGLPlatform'] diff --git a/spaces/vumichien/canvas_controlnet/annotator/uniformer/mmcv/utils/path.py b/spaces/vumichien/canvas_controlnet/annotator/uniformer/mmcv/utils/path.py deleted file mode 100644 index 7dab4b3041413b1432b0f434b8b14783097d33c6..0000000000000000000000000000000000000000 --- a/spaces/vumichien/canvas_controlnet/annotator/uniformer/mmcv/utils/path.py +++ /dev/null @@ -1,101 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import os -import os.path as osp -from pathlib import Path - -from .misc import is_str - - -def is_filepath(x): - return is_str(x) or isinstance(x, Path) - - -def fopen(filepath, *args, **kwargs): - if is_str(filepath): - return open(filepath, *args, **kwargs) - elif isinstance(filepath, Path): - return filepath.open(*args, **kwargs) - raise ValueError('`filepath` should be a string or a Path') - - -def check_file_exist(filename, msg_tmpl='file "{}" does not exist'): - if not osp.isfile(filename): - raise FileNotFoundError(msg_tmpl.format(filename)) - - -def mkdir_or_exist(dir_name, mode=0o777): - if dir_name == '': - return - dir_name = osp.expanduser(dir_name) - os.makedirs(dir_name, mode=mode, exist_ok=True) - - -def symlink(src, dst, overwrite=True, **kwargs): - if os.path.lexists(dst) and overwrite: - os.remove(dst) - os.symlink(src, dst, **kwargs) - - -def scandir(dir_path, suffix=None, recursive=False, case_sensitive=True): - """Scan a directory to find the interested files. - - Args: - dir_path (str | obj:`Path`): Path of the directory. - suffix (str | tuple(str), optional): File suffix that we are - interested in. Default: None. - recursive (bool, optional): If set to True, recursively scan the - directory. Default: False. - case_sensitive (bool, optional) : If set to False, ignore the case of - suffix. Default: True. - - Returns: - A generator for all the interested files with relative paths. - """ - if isinstance(dir_path, (str, Path)): - dir_path = str(dir_path) - else: - raise TypeError('"dir_path" must be a string or Path object') - - if (suffix is not None) and not isinstance(suffix, (str, tuple)): - raise TypeError('"suffix" must be a string or tuple of strings') - - if suffix is not None and not case_sensitive: - suffix = suffix.lower() if isinstance(suffix, str) else tuple( - item.lower() for item in suffix) - - root = dir_path - - def _scandir(dir_path, suffix, recursive, case_sensitive): - for entry in os.scandir(dir_path): - if not entry.name.startswith('.') and entry.is_file(): - rel_path = osp.relpath(entry.path, root) - _rel_path = rel_path if case_sensitive else rel_path.lower() - if suffix is None or _rel_path.endswith(suffix): - yield rel_path - elif recursive and os.path.isdir(entry.path): - # scan recursively if entry.path is a directory - yield from _scandir(entry.path, suffix, recursive, - case_sensitive) - - return _scandir(dir_path, suffix, recursive, case_sensitive) - - -def find_vcs_root(path, markers=('.git', )): - """Finds the root directory (including itself) of specified markers. - - Args: - path (str): Path of directory or file. - markers (list[str], optional): List of file or directory names. 
- - Returns: - The directory contained one of the markers or None if not found. - """ - if osp.isfile(path): - path = osp.dirname(path) - - prev, cur = None, osp.abspath(osp.expanduser(path)) - while cur != prev: - if any(osp.exists(osp.join(cur, marker)) for marker in markers): - return cur - prev, cur = cur, osp.split(cur)[0] - return None diff --git a/spaces/vumichien/canvas_controlnet/ldm/modules/attention.py b/spaces/vumichien/canvas_controlnet/ldm/modules/attention.py deleted file mode 100644 index 509cd873768f0dd75a75ab3fcdd652822b12b59f..0000000000000000000000000000000000000000 --- a/spaces/vumichien/canvas_controlnet/ldm/modules/attention.py +++ /dev/null @@ -1,341 +0,0 @@ -from inspect import isfunction -import math -import torch -import torch.nn.functional as F -from torch import nn, einsum -from einops import rearrange, repeat -from typing import Optional, Any - -from ldm.modules.diffusionmodules.util import checkpoint - - -try: - import xformers - import xformers.ops - XFORMERS_IS_AVAILBLE = True -except: - XFORMERS_IS_AVAILBLE = False - -# CrossAttn precision handling -import os -_ATTN_PRECISION = os.environ.get("ATTN_PRECISION", "fp32") - -def exists(val): - return val is not None - - -def uniq(arr): - return{el: True for el in arr}.keys() - - -def default(val, d): - if exists(val): - return val - return d() if isfunction(d) else d - - -def max_neg_value(t): - return -torch.finfo(t.dtype).max - - -def init_(tensor): - dim = tensor.shape[-1] - std = 1 / math.sqrt(dim) - tensor.uniform_(-std, std) - return tensor - - -# feedforward -class GEGLU(nn.Module): - def __init__(self, dim_in, dim_out): - super().__init__() - self.proj = nn.Linear(dim_in, dim_out * 2) - - def forward(self, x): - x, gate = self.proj(x).chunk(2, dim=-1) - return x * F.gelu(gate) - - -class FeedForward(nn.Module): - def __init__(self, dim, dim_out=None, mult=4, glu=False, dropout=0.): - super().__init__() - inner_dim = int(dim * mult) - dim_out = default(dim_out, dim) - project_in = nn.Sequential( - nn.Linear(dim, inner_dim), - nn.GELU() - ) if not glu else GEGLU(dim, inner_dim) - - self.net = nn.Sequential( - project_in, - nn.Dropout(dropout), - nn.Linear(inner_dim, dim_out) - ) - - def forward(self, x): - return self.net(x) - - -def zero_module(module): - """ - Zero out the parameters of a module and return it. 
- """ - for p in module.parameters(): - p.detach().zero_() - return module - - -def Normalize(in_channels): - return torch.nn.GroupNorm(num_groups=32, num_channels=in_channels, eps=1e-6, affine=True) - - -class SpatialSelfAttention(nn.Module): - def __init__(self, in_channels): - super().__init__() - self.in_channels = in_channels - - self.norm = Normalize(in_channels) - self.q = torch.nn.Conv2d(in_channels, - in_channels, - kernel_size=1, - stride=1, - padding=0) - self.k = torch.nn.Conv2d(in_channels, - in_channels, - kernel_size=1, - stride=1, - padding=0) - self.v = torch.nn.Conv2d(in_channels, - in_channels, - kernel_size=1, - stride=1, - padding=0) - self.proj_out = torch.nn.Conv2d(in_channels, - in_channels, - kernel_size=1, - stride=1, - padding=0) - - def forward(self, x): - h_ = x - h_ = self.norm(h_) - q = self.q(h_) - k = self.k(h_) - v = self.v(h_) - - # compute attention - b,c,h,w = q.shape - q = rearrange(q, 'b c h w -> b (h w) c') - k = rearrange(k, 'b c h w -> b c (h w)') - w_ = torch.einsum('bij,bjk->bik', q, k) - - w_ = w_ * (int(c)**(-0.5)) - w_ = torch.nn.functional.softmax(w_, dim=2) - - # attend to values - v = rearrange(v, 'b c h w -> b c (h w)') - w_ = rearrange(w_, 'b i j -> b j i') - h_ = torch.einsum('bij,bjk->bik', v, w_) - h_ = rearrange(h_, 'b c (h w) -> b c h w', h=h) - h_ = self.proj_out(h_) - - return x+h_ - - -class CrossAttention(nn.Module): - def __init__(self, query_dim, context_dim=None, heads=8, dim_head=64, dropout=0.): - super().__init__() - inner_dim = dim_head * heads - context_dim = default(context_dim, query_dim) - - self.scale = dim_head ** -0.5 - self.heads = heads - - self.to_q = nn.Linear(query_dim, inner_dim, bias=False) - self.to_k = nn.Linear(context_dim, inner_dim, bias=False) - self.to_v = nn.Linear(context_dim, inner_dim, bias=False) - - self.to_out = nn.Sequential( - nn.Linear(inner_dim, query_dim), - nn.Dropout(dropout) - ) - - def forward(self, x, context=None, mask=None): - h = self.heads - - q = self.to_q(x) - context = default(context, x) - k = self.to_k(context) - v = self.to_v(context) - - q, k, v = map(lambda t: rearrange(t, 'b n (h d) -> (b h) n d', h=h), (q, k, v)) - - # force cast to fp32 to avoid overflowing - if _ATTN_PRECISION =="fp32": - with torch.autocast(enabled=False, device_type = 'cuda'): - q, k = q.float(), k.float() - sim = einsum('b i d, b j d -> b i j', q, k) * self.scale - else: - sim = einsum('b i d, b j d -> b i j', q, k) * self.scale - - del q, k - - if exists(mask): - mask = rearrange(mask, 'b ... -> b (...)') - max_neg_value = -torch.finfo(sim.dtype).max - mask = repeat(mask, 'b j -> (b h) () j', h=h) - sim.masked_fill_(~mask, max_neg_value) - - # attention, what we cannot get enough of - sim = sim.softmax(dim=-1) - - out = einsum('b i j, b j d -> b i d', sim, v) - out = rearrange(out, '(b h) n d -> b n (h d)', h=h) - return self.to_out(out) - - -class MemoryEfficientCrossAttention(nn.Module): - # https://github.com/MatthieuTPHR/diffusers/blob/d80b531ff8060ec1ea982b65a1b8df70f73aa67c/src/diffusers/models/attention.py#L223 - def __init__(self, query_dim, context_dim=None, heads=8, dim_head=64, dropout=0.0): - super().__init__() - print(f"Setting up {self.__class__.__name__}. 
Query dim is {query_dim}, context_dim is {context_dim} and using " - f"{heads} heads.") - inner_dim = dim_head * heads - context_dim = default(context_dim, query_dim) - - self.heads = heads - self.dim_head = dim_head - - self.to_q = nn.Linear(query_dim, inner_dim, bias=False) - self.to_k = nn.Linear(context_dim, inner_dim, bias=False) - self.to_v = nn.Linear(context_dim, inner_dim, bias=False) - - self.to_out = nn.Sequential(nn.Linear(inner_dim, query_dim), nn.Dropout(dropout)) - self.attention_op: Optional[Any] = None - - def forward(self, x, context=None, mask=None): - q = self.to_q(x) - context = default(context, x) - k = self.to_k(context) - v = self.to_v(context) - - b, _, _ = q.shape - q, k, v = map( - lambda t: t.unsqueeze(3) - .reshape(b, t.shape[1], self.heads, self.dim_head) - .permute(0, 2, 1, 3) - .reshape(b * self.heads, t.shape[1], self.dim_head) - .contiguous(), - (q, k, v), - ) - - # actually compute the attention, what we cannot get enough of - out = xformers.ops.memory_efficient_attention(q, k, v, attn_bias=None, op=self.attention_op) - - if exists(mask): - raise NotImplementedError - out = ( - out.unsqueeze(0) - .reshape(b, self.heads, out.shape[1], self.dim_head) - .permute(0, 2, 1, 3) - .reshape(b, out.shape[1], self.heads * self.dim_head) - ) - return self.to_out(out) - - -class BasicTransformerBlock(nn.Module): - ATTENTION_MODES = { - "softmax": CrossAttention, # vanilla attention - "softmax-xformers": MemoryEfficientCrossAttention - } - def __init__(self, dim, n_heads, d_head, dropout=0., context_dim=None, gated_ff=True, checkpoint=True, - disable_self_attn=False): - super().__init__() - attn_mode = "softmax-xformers" if XFORMERS_IS_AVAILBLE else "softmax" - assert attn_mode in self.ATTENTION_MODES - attn_cls = self.ATTENTION_MODES[attn_mode] - self.disable_self_attn = disable_self_attn - self.attn1 = attn_cls(query_dim=dim, heads=n_heads, dim_head=d_head, dropout=dropout, - context_dim=context_dim if self.disable_self_attn else None) # is a self-attention if not self.disable_self_attn - self.ff = FeedForward(dim, dropout=dropout, glu=gated_ff) - self.attn2 = attn_cls(query_dim=dim, context_dim=context_dim, - heads=n_heads, dim_head=d_head, dropout=dropout) # is self-attn if context is none - self.norm1 = nn.LayerNorm(dim) - self.norm2 = nn.LayerNorm(dim) - self.norm3 = nn.LayerNorm(dim) - self.checkpoint = checkpoint - - def forward(self, x, context=None): - return checkpoint(self._forward, (x, context), self.parameters(), self.checkpoint) - - def _forward(self, x, context=None): - x = self.attn1(self.norm1(x), context=context if self.disable_self_attn else None) + x - x = self.attn2(self.norm2(x), context=context) + x - x = self.ff(self.norm3(x)) + x - return x - - -class SpatialTransformer(nn.Module): - """ - Transformer block for image-like data. - First, project the input (aka embedding) - and reshape to b, t, d. - Then apply standard transformer action. 
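-    (each BasicTransformerBlock runs self-attention, cross-attention over the optional context, and a gated feed-forward layer).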
- Finally, reshape to image - NEW: use_linear for more efficiency instead of the 1x1 convs - """ - def __init__(self, in_channels, n_heads, d_head, - depth=1, dropout=0., context_dim=None, - disable_self_attn=False, use_linear=False, - use_checkpoint=True): - super().__init__() - if exists(context_dim) and not isinstance(context_dim, list): - context_dim = [context_dim] - self.in_channels = in_channels - inner_dim = n_heads * d_head - self.norm = Normalize(in_channels) - if not use_linear: - self.proj_in = nn.Conv2d(in_channels, - inner_dim, - kernel_size=1, - stride=1, - padding=0) - else: - self.proj_in = nn.Linear(in_channels, inner_dim) - - self.transformer_blocks = nn.ModuleList( - [BasicTransformerBlock(inner_dim, n_heads, d_head, dropout=dropout, context_dim=context_dim[d], - disable_self_attn=disable_self_attn, checkpoint=use_checkpoint) - for d in range(depth)] - ) - if not use_linear: - self.proj_out = zero_module(nn.Conv2d(inner_dim, - in_channels, - kernel_size=1, - stride=1, - padding=0)) - else: - self.proj_out = zero_module(nn.Linear(in_channels, inner_dim)) - self.use_linear = use_linear - - def forward(self, x, context=None): - # note: if no context is given, cross-attention defaults to self-attention - if not isinstance(context, list): - context = [context] - b, c, h, w = x.shape - x_in = x - x = self.norm(x) - if not self.use_linear: - x = self.proj_in(x) - x = rearrange(x, 'b c h w -> b (h w) c').contiguous() - if self.use_linear: - x = self.proj_in(x) - for i, block in enumerate(self.transformer_blocks): - x = block(x, context=context[i]) - if self.use_linear: - x = self.proj_out(x) - x = rearrange(x, 'b (h w) c -> b c h w', h=h, w=w).contiguous() - if not self.use_linear: - x = self.proj_out(x) - return x + x_in - diff --git a/spaces/wldmr/punct-tube-gr/myrpunct/utils.py b/spaces/wldmr/punct-tube-gr/myrpunct/utils.py deleted file mode 100644 index 77e88f9bfbded47ca0929abf5dc5686e49d674ea..0000000000000000000000000000000000000000 --- a/spaces/wldmr/punct-tube-gr/myrpunct/utils.py +++ /dev/null @@ -1,34 +0,0 @@ -# -*- coding: utf-8 -*- -# 💾⚙️🔮 - -__author__ = "Daulet N." 
-__email__ = "daulet.nurmanbetov@gmail.com" - -def prepare_unpunct_text(text): - """ - Given a text, normalizes it to subsequently restore punctuation - """ - formatted_txt = text.replace('\n', '').strip() - formatted_txt = formatted_txt.lower() - formatted_txt_lst = formatted_txt.split(" ") - punct_strp_txt = [strip_punct(i) for i in formatted_txt_lst] - normalized_txt = " ".join([i for i in punct_strp_txt if i]) - return normalized_txt - -def strip_punct(wrd): - """ - Given a word, strips non aphanumeric characters that precede and follow it - """ - if not wrd: - return wrd - - while not wrd[-1:].isalnum(): - if not wrd: - break - wrd = wrd[:-1] - - while not wrd[:1].isalnum(): - if not wrd: - break - wrd = wrd[1:] - return wrd diff --git a/spaces/wy213/213a/src/components/ui/badge.tsx b/spaces/wy213/213a/src/components/ui/badge.tsx deleted file mode 100644 index d9a84b394090e5b4b3bd34f6135b9a2f2ead0aa2..0000000000000000000000000000000000000000 --- a/spaces/wy213/213a/src/components/ui/badge.tsx +++ /dev/null @@ -1,36 +0,0 @@ -import * as React from 'react' -import { cva, type VariantProps } from 'class-variance-authority' - -import { cn } from '@/lib/utils' - -const badgeVariants = cva( - 'inline-flex items-center rounded-full border px-2.5 py-0.5 text-xs font-semibold transition-colors focus:outline-none focus:ring-2 focus:ring-ring focus:ring-offset-2', - { - variants: { - variant: { - default: - 'border-transparent bg-primary text-primary-foreground hover:bg-primary/80', - secondary: - 'border-transparent bg-secondary text-secondary-foreground hover:bg-secondary/80', - destructive: - 'border-transparent bg-destructive text-destructive-foreground hover:bg-destructive/80', - outline: 'text-foreground' - } - }, - defaultVariants: { - variant: 'default' - } - } -) - -export interface BadgeProps - extends React.HTMLAttributes, - VariantProps {} - -function Badge({ className, variant, ...props }: BadgeProps) { - return ( -
              - ) -} - -export { Badge, badgeVariants } diff --git a/spaces/wydgg/bingo-wyd-ai/src/components/button-scroll-to-bottom.tsx b/spaces/wydgg/bingo-wyd-ai/src/components/button-scroll-to-bottom.tsx deleted file mode 100644 index b68ab9c0e48320c356e51a52d11b9ca63909e6c5..0000000000000000000000000000000000000000 --- a/spaces/wydgg/bingo-wyd-ai/src/components/button-scroll-to-bottom.tsx +++ /dev/null @@ -1,34 +0,0 @@ -'use client' - -import * as React from 'react' - -import { cn } from '@/lib/utils' -import { useAtBottom } from '@/lib/hooks/use-at-bottom' -import { Button, type ButtonProps } from '@/components/ui/button' -import { IconArrowDown } from '@/components/ui/icons' - -export function ButtonScrollToBottom({ className, ...props }: ButtonProps) { - const isAtBottom = useAtBottom() - - return ( - - ) -} diff --git a/spaces/wzq10314/VITS-Umamusume-voice-synthesizer1/text/cleaners.py b/spaces/wzq10314/VITS-Umamusume-voice-synthesizer1/text/cleaners.py deleted file mode 100644 index c80e113b2b81a66134800dbdaa29c7d96a0152a7..0000000000000000000000000000000000000000 --- a/spaces/wzq10314/VITS-Umamusume-voice-synthesizer1/text/cleaners.py +++ /dev/null @@ -1,146 +0,0 @@ -import re - - -def japanese_cleaners(text): - from text.japanese import japanese_to_romaji_with_accent - text = japanese_to_romaji_with_accent(text) - text = re.sub(r'([A-Za-z])$', r'\1.', text) - return text - - -def japanese_cleaners2(text): - return japanese_cleaners(text).replace('ts', 'ʦ').replace('...', '…') - - -def korean_cleaners(text): - '''Pipeline for Korean text''' - from text.korean import latin_to_hangul, number_to_hangul, divide_hangul - text = latin_to_hangul(text) - text = number_to_hangul(text) - text = divide_hangul(text) - text = re.sub(r'([\u3131-\u3163])$', r'\1.', text) - return text - - -def chinese_cleaners(text): - '''Pipeline for Chinese text''' - from text.mandarin import number_to_chinese, chinese_to_bopomofo, latin_to_bopomofo - text = number_to_chinese(text) - text = chinese_to_bopomofo(text) - text = latin_to_bopomofo(text) - text = re.sub(r'([ˉˊˇˋ˙])$', r'\1。', text) - return text - - -def zh_ja_mixture_cleaners(text): - from text.mandarin import chinese_to_romaji - from text.japanese import japanese_to_romaji_with_accent - text = re.sub(r'\[ZH\](.*?)\[ZH\]', - lambda x: chinese_to_romaji(x.group(1))+' ', text) - text = re.sub(r'\[JA\](.*?)\[JA\]', lambda x: japanese_to_romaji_with_accent( - x.group(1)).replace('ts', 'ʦ').replace('u', 'ɯ').replace('...', '…')+' ', text) - text = re.sub(r'\s+$', '', text) - text = re.sub(r'([^\.,!\?\-…~])$', r'\1.', text) - return text - - -def sanskrit_cleaners(text): - text = text.replace('॥', '।').replace('ॐ', 'ओम्') - if text[-1] != '।': - text += ' ।' - return text - - -def cjks_cleaners(text): - from text.mandarin import chinese_to_lazy_ipa - from text.japanese import japanese_to_ipa - from text.korean import korean_to_lazy_ipa - from text.sanskrit import devanagari_to_ipa - from text.english import english_to_lazy_ipa - text = re.sub(r'\[ZH\](.*?)\[ZH\]', - lambda x: chinese_to_lazy_ipa(x.group(1))+' ', text) - text = re.sub(r'\[JA\](.*?)\[JA\]', - lambda x: japanese_to_ipa(x.group(1))+' ', text) - text = re.sub(r'\[KO\](.*?)\[KO\]', - lambda x: korean_to_lazy_ipa(x.group(1))+' ', text) - text = re.sub(r'\[SA\](.*?)\[SA\]', - lambda x: devanagari_to_ipa(x.group(1))+' ', text) - text = re.sub(r'\[EN\](.*?)\[EN\]', - lambda x: english_to_lazy_ipa(x.group(1))+' ', text) - text = re.sub(r'\s+$', '', text) - text = re.sub(r'([^\.,!\?\-…~])$', 
r'\1.', text) - return text - - -def cjke_cleaners(text): - from text.mandarin import chinese_to_lazy_ipa - from text.japanese import japanese_to_ipa - from text.korean import korean_to_ipa - from text.english import english_to_ipa2 - text = re.sub(r'\[ZH\](.*?)\[ZH\]', lambda x: chinese_to_lazy_ipa(x.group(1)).replace( - 'ʧ', 'tʃ').replace('ʦ', 'ts').replace('ɥan', 'ɥæn')+' ', text) - text = re.sub(r'\[JA\](.*?)\[JA\]', lambda x: japanese_to_ipa(x.group(1)).replace('ʧ', 'tʃ').replace( - 'ʦ', 'ts').replace('ɥan', 'ɥæn').replace('ʥ', 'dz')+' ', text) - text = re.sub(r'\[KO\](.*?)\[KO\]', - lambda x: korean_to_ipa(x.group(1))+' ', text) - text = re.sub(r'\[EN\](.*?)\[EN\]', lambda x: english_to_ipa2(x.group(1)).replace('ɑ', 'a').replace( - 'ɔ', 'o').replace('ɛ', 'e').replace('ɪ', 'i').replace('ʊ', 'u')+' ', text) - text = re.sub(r'\s+$', '', text) - text = re.sub(r'([^\.,!\?\-…~])$', r'\1.', text) - return text - - -def cjke_cleaners2(text): - from text.mandarin import chinese_to_ipa - from text.japanese import japanese_to_ipa2 - from text.korean import korean_to_ipa - from text.english import english_to_ipa2 - text = re.sub(r'\[ZH\](.*?)\[ZH\]', - lambda x: chinese_to_ipa(x.group(1))+' ', text) - text = re.sub(r'\[JA\](.*?)\[JA\]', - lambda x: japanese_to_ipa2(x.group(1))+' ', text) - text = re.sub(r'\[KO\](.*?)\[KO\]', - lambda x: korean_to_ipa(x.group(1))+' ', text) - text = re.sub(r'\[EN\](.*?)\[EN\]', - lambda x: english_to_ipa2(x.group(1))+' ', text) - text = re.sub(r'\s+$', '', text) - text = re.sub(r'([^\.,!\?\-…~])$', r'\1.', text) - return text - - -def thai_cleaners(text): - from text.thai import num_to_thai, latin_to_thai - text = num_to_thai(text) - text = latin_to_thai(text) - return text - - -def shanghainese_cleaners(text): - from text.shanghainese import shanghainese_to_ipa - text = shanghainese_to_ipa(text) - text = re.sub(r'([^\.,!\?\-…~])$', r'\1.', text) - return text - - -def chinese_dialect_cleaners(text): - from text.mandarin import chinese_to_ipa2 - from text.japanese import japanese_to_ipa3 - from text.shanghainese import shanghainese_to_ipa - from text.cantonese import cantonese_to_ipa - from text.english import english_to_lazy_ipa2 - from text.ngu_dialect import ngu_dialect_to_ipa - text = re.sub(r'\[ZH\](.*?)\[ZH\]', - lambda x: chinese_to_ipa2(x.group(1))+' ', text) - text = re.sub(r'\[JA\](.*?)\[JA\]', - lambda x: japanese_to_ipa3(x.group(1)).replace('Q', 'ʔ')+' ', text) - text = re.sub(r'\[SH\](.*?)\[SH\]', lambda x: shanghainese_to_ipa(x.group(1)).replace('1', '˥˧').replace('5', - '˧˧˦').replace('6', '˩˩˧').replace('7', '˥').replace('8', '˩˨').replace('ᴀ', 'ɐ').replace('ᴇ', 'e')+' ', text) - text = re.sub(r'\[GD\](.*?)\[GD\]', - lambda x: cantonese_to_ipa(x.group(1))+' ', text) - text = re.sub(r'\[EN\](.*?)\[EN\]', - lambda x: english_to_lazy_ipa2(x.group(1))+' ', text) - text = re.sub(r'\[([A-Z]{2})\](.*?)\[\1\]', lambda x: ngu_dialect_to_ipa(x.group(2), x.group( - 1)).replace('ʣ', 'dz').replace('ʥ', 'dʑ').replace('ʦ', 'ts').replace('ʨ', 'tɕ')+' ', text) - text = re.sub(r'\s+$', '', text) - text = re.sub(r'([^\.,!\?\-…~])$', r'\1.', text) - return text diff --git a/spaces/xcchen/vits-uma-genshin-honkai/commons.py b/spaces/xcchen/vits-uma-genshin-honkai/commons.py deleted file mode 100644 index 40fcc05364d4815971f5c6f9dbb8dcef8e3ec1e9..0000000000000000000000000000000000000000 --- a/spaces/xcchen/vits-uma-genshin-honkai/commons.py +++ /dev/null @@ -1,172 +0,0 @@ -import math -import torch -from torch.nn import functional as F -import torch.jit - - -def 
script_method(fn, _rcb=None): - return fn - - -def script(obj, optimize=True, _frames_up=0, _rcb=None): - return obj - - -torch.jit.script_method = script_method -torch.jit.script = script - - -def init_weights(m, mean=0.0, std=0.01): - classname = m.__class__.__name__ - if classname.find("Conv") != -1: - m.weight.data.normal_(mean, std) - - -def get_padding(kernel_size, dilation=1): - return int((kernel_size*dilation - dilation)/2) - - -def convert_pad_shape(pad_shape): - l = pad_shape[::-1] - pad_shape = [item for sublist in l for item in sublist] - return pad_shape - - -def intersperse(lst, item): - result = [item] * (len(lst) * 2 + 1) - result[1::2] = lst - return result - - -def kl_divergence(m_p, logs_p, m_q, logs_q): - """KL(P||Q)""" - kl = (logs_q - logs_p) - 0.5 - kl += 0.5 * (torch.exp(2. * logs_p) + ((m_p - m_q)**2)) * torch.exp(-2. * logs_q) - return kl - - -def rand_gumbel(shape): - """Sample from the Gumbel distribution, protect from overflows.""" - uniform_samples = torch.rand(shape) * 0.99998 + 0.00001 - return -torch.log(-torch.log(uniform_samples)) - - -def rand_gumbel_like(x): - g = rand_gumbel(x.size()).to(dtype=x.dtype, device=x.device) - return g - - -def slice_segments(x, ids_str, segment_size=4): - ret = torch.zeros_like(x[:, :, :segment_size]) - for i in range(x.size(0)): - idx_str = ids_str[i] - idx_end = idx_str + segment_size - ret[i] = x[i, :, idx_str:idx_end] - return ret - - -def rand_slice_segments(x, x_lengths=None, segment_size=4): - b, d, t = x.size() - if x_lengths is None: - x_lengths = t - ids_str_max = x_lengths - segment_size + 1 - ids_str = (torch.rand([b]).to(device=x.device) * ids_str_max).to(dtype=torch.long) - ret = slice_segments(x, ids_str, segment_size) - return ret, ids_str - - -def get_timing_signal_1d( - length, channels, min_timescale=1.0, max_timescale=1.0e4): - position = torch.arange(length, dtype=torch.float) - num_timescales = channels // 2 - log_timescale_increment = ( - math.log(float(max_timescale) / float(min_timescale)) / - (num_timescales - 1)) - inv_timescales = min_timescale * torch.exp( - torch.arange(num_timescales, dtype=torch.float) * -log_timescale_increment) - scaled_time = position.unsqueeze(0) * inv_timescales.unsqueeze(1) - signal = torch.cat([torch.sin(scaled_time), torch.cos(scaled_time)], 0) - signal = F.pad(signal, [0, 0, 0, channels % 2]) - signal = signal.view(1, channels, length) - return signal - - -def add_timing_signal_1d(x, min_timescale=1.0, max_timescale=1.0e4): - b, channels, length = x.size() - signal = get_timing_signal_1d(length, channels, min_timescale, max_timescale) - return x + signal.to(dtype=x.dtype, device=x.device) - - -def cat_timing_signal_1d(x, min_timescale=1.0, max_timescale=1.0e4, axis=1): - b, channels, length = x.size() - signal = get_timing_signal_1d(length, channels, min_timescale, max_timescale) - return torch.cat([x, signal.to(dtype=x.dtype, device=x.device)], axis) - - -def subsequent_mask(length): - mask = torch.tril(torch.ones(length, length)).unsqueeze(0).unsqueeze(0) - return mask - - -@torch.jit.script -def fused_add_tanh_sigmoid_multiply(input_a, input_b, n_channels): - n_channels_int = n_channels[0] - in_act = input_a + input_b - t_act = torch.tanh(in_act[:, :n_channels_int, :]) - s_act = torch.sigmoid(in_act[:, n_channels_int:, :]) - acts = t_act * s_act - return acts - - -def convert_pad_shape(pad_shape): - l = pad_shape[::-1] - pad_shape = [item for sublist in l for item in sublist] - return pad_shape - - -def shift_1d(x): - x = F.pad(x, convert_pad_shape([[0, 0], [0, 
0], [1, 0]]))[:, :, :-1] - return x - - -def sequence_mask(length, max_length=None): - if max_length is None: - max_length = length.max() - x = torch.arange(max_length, dtype=length.dtype, device=length.device) - return x.unsqueeze(0) < length.unsqueeze(1) - - -def generate_path(duration, mask): - """ - duration: [b, 1, t_x] - mask: [b, 1, t_y, t_x] - """ - device = duration.device - - b, _, t_y, t_x = mask.shape - cum_duration = torch.cumsum(duration, -1) - - cum_duration_flat = cum_duration.view(b * t_x) - path = sequence_mask(cum_duration_flat, t_y).to(mask.dtype) - path = path.view(b, t_x, t_y) - path = path - F.pad(path, convert_pad_shape([[0, 0], [1, 0], [0, 0]]))[:, :-1] - path = path.unsqueeze(1).transpose(2,3) * mask - return path - - -def clip_grad_value_(parameters, clip_value, norm_type=2): - if isinstance(parameters, torch.Tensor): - parameters = [parameters] - parameters = list(filter(lambda p: p.grad is not None, parameters)) - norm_type = float(norm_type) - if clip_value is not None: - clip_value = float(clip_value) - - total_norm = 0 - for p in parameters: - param_norm = p.grad.data.norm(norm_type) - total_norm += param_norm.item() ** norm_type - if clip_value is not None: - p.grad.data.clamp_(min=-clip_value, max=clip_value) - total_norm = total_norm ** (1. / norm_type) - return total_norm diff --git a/spaces/xdecoder/Instruct-X-Decoder/xdecoder/language/loss.py b/spaces/xdecoder/Instruct-X-Decoder/xdecoder/language/loss.py deleted file mode 100644 index fe7ecd566bbf7f7e5a9981c7789c16c537ecb6b5..0000000000000000000000000000000000000000 --- a/spaces/xdecoder/Instruct-X-Decoder/xdecoder/language/loss.py +++ /dev/null @@ -1,225 +0,0 @@ -import pickle -from distutils import log - -import torch -import torch.nn.functional as F -import torch.distributed as dist - -from einops import rearrange, repeat -from timm.loss import SoftTargetCrossEntropy - -soft_cross_entropy = SoftTargetCrossEntropy() - -def is_dist_initialized(): - return torch.distributed.is_initialized() - -def get_world_size(): - if is_dist_initialized(): - return torch.distributed.get_world_size() - return 1 - -def get_rank(): - if is_dist_initialized(): - return dist.get_rank() - return 0 - -def all_gather_grad(x): - if get_world_size() > 1: - all_x = [torch.zeros_like(x) for _ in range(get_world_size())] - torch.distributed.all_gather(all_x, x) - all_x[torch.distributed.get_rank()] = x - x = torch.cat(all_x, dim=0) - return x - -def vl_multilabel_contrastive_loss(image_feat, text_feat, temperature=1): - """ - Args: - image_feat (torch.Tensor): shape [B, L1, C] # B: batch_size, L1: 1, C: 256 - text_feat (torch.Tensor): shape [B, L2, C] # B:batch_size, L2: number of selected nouns, C: 256 - - Returns: - """ - # [B, L1, C], L1 = 1 - # image_feat = F.normalize(image_feat, dim=-1) - # [B, L2, C] - # text_feat = F.normalize(text_feat, dim=-1) - # HACK: normalize outside - - # [B, L1, L2] - dist_per_img = image_feat @ rearrange(text_feat, 'b l c -> b c l') - # [B, L2, L1] - dist_per_text = text_feat @ rearrange(image_feat, 'b l c -> b c l') - - batch = image_feat.shape[0] - img_len = image_feat.shape[1] - text_len = text_feat.shape[1] - # [B, L1, L2] - pos_labels_batch_img = rearrange(torch.ones_like(dist_per_text) / dist_per_text.size(1), 'b l2 l1 -> b l1 l2') - # [B, L2, L1] - pos_labels_batch_text = rearrange(torch.ones_like(dist_per_img) / dist_per_img.size(1), 'b l1 l2 -> b l2 l1') - - image_x = rearrange(image_feat, 'b l c -> (b l) c') - text_x = rearrange(text_feat, 'b l c -> (b l) c') - - logits_per_img = 
image_x @ all_gather_grad(text_x).t() - logits_per_text = text_x @ all_gather_grad(image_x).t() - - # get label globally - # [B, L1, B, L2, W] - labels_per_img = F.one_hot( - torch.ones(batch, img_len, batch, text_len, dtype=torch.long, device=image_x.device) * get_rank(), - num_classes=get_world_size()).to(image_x.dtype) - labels_per_img *= rearrange(pos_labels_batch_img, 'b l1 l2 -> b l1 1 l2 1') * repeat( - torch.eye(batch, dtype=image_x.dtype, device=image_x.device), 'b1 b2 -> b1 1 b2 1 1') - # [BxL1, WxBxL2] - labels_per_img = rearrange(labels_per_img, 'b1 l1 b2 l2 w -> (b1 l1) (w b2 l2)') - # [B, L2, B, L1, W] - labels_per_text = F.one_hot( - torch.ones(batch, text_len, batch, img_len, dtype=torch.long, device=text_x.device) * get_rank(), - num_classes=get_world_size()).to(text_x.dtype) - labels_per_text *= rearrange(pos_labels_batch_text, 'b l2 l1 -> b l2 1 l1 1') * repeat( - torch.eye(batch, dtype=text_x.dtype, device=image_x.device), 'b2 b1 -> b2 1 b1 1 1') - # [BxL2, WxBxL1] - labels_per_text = rearrange(labels_per_text, 'b2 l2 b1 l1 w -> (b2 l2) (w b1 l1)') - - logit_scale = temperature.exp().clamp(max=100) - - loss_img = soft_cross_entropy(logit_scale * logits_per_img, labels_per_img) - loss_text = soft_cross_entropy(logit_scale * logits_per_text, labels_per_text) - - loss = 0.5 * (loss_img + loss_text) - return loss - -def vl_contrastive_loss(image_feat, text_feat, temperature=1): - # if image_id or text_id is None, it should be None across all GPUs - # image_feat = F.normalize(image_feat, dim=1) - # text_feat = F.normalize(text_feat, dim=1) - # handle normalization outside - - # add the following 4 lines - image_feat = all_gather_grad(image_feat) - text_feat = all_gather_grad(text_feat) - - logits = torch.matmul(image_feat, text_feat.t()) - logit_scale = temperature.exp().clamp(max=100) - - gt = torch.arange(logits.shape[0], device=logits.device) - loss1 = F.cross_entropy(logit_scale * logits, gt) - loss2 = F.cross_entropy(logit_scale * logits.t(), gt) - return (loss1 + loss2) / 2 # scale it up by the number of GPUs - - -def all_gather_pickle(data, device): - """ - Run all_gather on arbitrary picklable data (not necessarily tensors) - Args: - data: any picklable object - Returns: - list[data]: list of data gathered from each rank - """ - world_size = get_world_size() - if world_size == 1: - return [data] - - # serialized to a Tensor - buffer = pickle.dumps(data) - storage = torch.ByteStorage.from_buffer(buffer) - tensor = torch.ByteTensor(storage).to(device) - - # obtain Tensor size of each rank - local_size = torch.LongTensor([tensor.numel()]).cuda() - size_list = [torch.LongTensor([0]).cuda() for _ in range(world_size)] - dist.all_gather(size_list, local_size) - size_list = [int(size.item()) for size in size_list] - max_size = max(size_list) - - # receiving Tensor from all ranks - # we pad the tensor because torch all_gather does not support - # gathering tensors of different shapes - tensor_list = [] - for _ in size_list: - tensor_list.append(torch.ByteTensor(size=(max_size,)).cuda()) - if local_size != max_size: - padding = torch.ByteTensor(size=(max_size - local_size,)).cuda() - tensor = torch.cat((tensor, padding), dim=0) - dist.all_gather(tensor_list, tensor) - - data_list = [] - for size, tensor in zip(size_list, tensor_list): - buffer = tensor.cpu().numpy().tobytes()[:size] - data_list.append(pickle.loads(buffer)) - - return data_list - -def all_gather_arbitary_tensor(tensor): - if get_world_size() > 1: - device = tensor.device - tensor_batch = 
all_gather_pickle(tensor.cpu(), device) - tensor_batch = [x.to(device) for x in tensor_batch] - tensor_batch[torch.distributed.get_rank()] = tensor - tensor_batch = torch.cat(tensor_batch, dim=0) - else: - tensor_batch = tensor - return tensor_batch - -def ql_contrastive_loss(image_feat, text_feat, temperature=1): - # add the following 4 lines - image_feat = all_gather_arbitary_tensor(image_feat) - text_feat = all_gather_arbitary_tensor(text_feat) - - logits = torch.matmul(image_feat, text_feat.t()) - logit_scale = temperature.exp().clamp(max=100) - - gt = torch.arange(logits.shape[0], device=logits.device) - loss1 = F.cross_entropy(logit_scale * logits, gt) - loss2 = F.cross_entropy(logit_scale * logits.t(), gt) - return (loss1 + loss2) / 2 # scale it up by the number of GPUs - -def vl_similarity(image_feat, text_feat, temperature=1): - # Only support single GPU for now. - logits = torch.matmul(image_feat, text_feat.t()) - logits = temperature.exp().clamp(max=100) * logits - return logits - -def ql_multi_contrastive_loss(image_feat, text_feat, text_hash, temperature=1): - # add the following 4 lines - image_feat = all_gather_arbitary_tensor(image_feat) - text_feat = all_gather_arbitary_tensor(text_feat) - - text_hash_batch = all_gather_pickle(text_hash, text_feat.device) - text_hash_all = torch.cat(text_hash_batch) - - text_hash_all_unique = torch.unique(text_hash_all).tolist() - gt = torch.zeros((image_feat.shape[0], len(text_hash_all_unique)), device=text_feat.device) - text_hash_all = text_hash_all.tolist() - text_feat_unique = torch.stack([text_feat[text_hash_all.index(txt)] for txt in text_hash_all_unique]) - - for idx, txt in enumerate(text_hash_all): - gt[idx][text_hash_all_unique.index(txt)] = 1 - - logits = torch.matmul(image_feat, text_feat_unique.t()) - logits = logits*temperature.exp().clamp(max=100) - - loss_img = soft_cross_entropy(logits, gt) - loss_text = soft_cross_entropy(logits.t(), gt.t() / gt.t().sum(-1, keepdim=True)) - - loss = 0.7 * loss_img + 0.3 * loss_text - return loss - -def image_text_contrastive_loss_queue(image_feat_inp, text_feat_inp, lang_enc, training): - # add the following 4 lines - image_feat = all_gather_grad(image_feat_inp.contiguous()) - text_feat = all_gather_grad(text_feat_inp.contiguous()) - - image_feat = image_feat / (image_feat.norm(dim=-1, keepdim=True) + 1e-7) - text_feat = text_feat / (text_feat.norm(dim=-1, keepdim=True) + 1e-7) - - temperature = lang_enc.logit_scale - logits = torch.matmul(image_feat, text_feat.t()) - logit_scale = temperature.exp().clamp(max=100) - - gt = torch.arange(logits.shape[0], device=logits.device) - loss1 = F.cross_entropy(logit_scale * logits, gt) - loss2 = F.cross_entropy(logit_scale * logits.t(), gt) - - return (loss1 + loss2) / 2 # scale it up by the number of GPUs \ No newline at end of file diff --git a/spaces/xnetba/Chat_advance/modules/models/ChuanhuAgent.py b/spaces/xnetba/Chat_advance/modules/models/ChuanhuAgent.py deleted file mode 100644 index c3cb944d3d4a5f60f1402445dc52a3501f466916..0000000000000000000000000000000000000000 --- a/spaces/xnetba/Chat_advance/modules/models/ChuanhuAgent.py +++ /dev/null @@ -1,216 +0,0 @@ -from langchain.chains.summarize import load_summarize_chain -from langchain import PromptTemplate, LLMChain -from langchain.chat_models import ChatOpenAI -from langchain.prompts import PromptTemplate -from langchain.text_splitter import TokenTextSplitter -from langchain.embeddings import OpenAIEmbeddings -from langchain.vectorstores import FAISS -from langchain.chains import 
RetrievalQA -from langchain.agents import load_tools -from langchain.agents import initialize_agent -from langchain.agents import AgentType -from langchain.docstore.document import Document -from langchain.tools import BaseTool, StructuredTool, Tool, tool -from langchain.callbacks.stdout import StdOutCallbackHandler -from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler -from langchain.callbacks.manager import BaseCallbackManager -from duckduckgo_search import DDGS -from itertools import islice - -from typing import Any, Dict, List, Optional, Union - -from langchain.callbacks.base import BaseCallbackHandler -from langchain.input import print_text -from langchain.schema import AgentAction, AgentFinish, LLMResult - -from pydantic import BaseModel, Field - -import requests -from bs4 import BeautifulSoup -from threading import Thread, Condition -from collections import deque - -from .base_model import BaseLLMModel, CallbackToIterator, ChuanhuCallbackHandler -from ..config import default_chuanhu_assistant_model -from ..presets import SUMMARIZE_PROMPT, i18n -from ..index_func import construct_index - -from langchain.callbacks import get_openai_callback -import os -import gradio as gr -import logging - -class GoogleSearchInput(BaseModel): - keywords: str = Field(description="keywords to search") - -class WebBrowsingInput(BaseModel): - url: str = Field(description="URL of a webpage") - -class WebAskingInput(BaseModel): - url: str = Field(description="URL of a webpage") - question: str = Field(description="Question that you want to know the answer to, based on the webpage's content.") - - -class ChuanhuAgent_Client(BaseLLMModel): - def __init__(self, model_name, openai_api_key, user_name="") -> None: - super().__init__(model_name=model_name, user=user_name) - self.text_splitter = TokenTextSplitter(chunk_size=500, chunk_overlap=30) - self.api_key = openai_api_key - self.llm = ChatOpenAI(openai_api_key=openai_api_key, temperature=0, model_name=default_chuanhu_assistant_model, openai_api_base=os.environ.get("OPENAI_API_BASE", None)) - self.cheap_llm = ChatOpenAI(openai_api_key=openai_api_key, temperature=0, model_name="gpt-3.5-turbo", openai_api_base=os.environ.get("OPENAI_API_BASE", None)) - PROMPT = PromptTemplate(template=SUMMARIZE_PROMPT, input_variables=["text"]) - self.summarize_chain = load_summarize_chain(self.cheap_llm, chain_type="map_reduce", return_intermediate_steps=True, map_prompt=PROMPT, combine_prompt=PROMPT) - self.index_summary = None - self.index = None - if "Pro" in self.model_name: - self.tools = load_tools(["serpapi", "google-search-results-json", "llm-math", "arxiv", "wikipedia", "wolfram-alpha"], llm=self.llm) - else: - self.tools = load_tools(["ddg-search", "llm-math", "arxiv", "wikipedia"], llm=self.llm) - self.tools.append( - Tool.from_function( - func=self.google_search_simple, - name="Google Search JSON", - description="useful when you need to search the web.", - args_schema=GoogleSearchInput - ) - ) - - self.tools.append( - Tool.from_function( - func=self.summary_url, - name="Summary Webpage", - description="useful when you need to know the overall content of a webpage.", - args_schema=WebBrowsingInput - ) - ) - - self.tools.append( - StructuredTool.from_function( - func=self.ask_url, - name="Ask Webpage", - description="useful when you need to ask detailed questions about a webpage.", - args_schema=WebAskingInput - ) - ) - - def google_search_simple(self, query): - results = [] - with DDGS() as ddgs: - ddgs_gen = ddgs.text("notes from a dead 
house", backend="lite") - for r in islice(ddgs_gen, 10): - results.append({ - "title": r["title"], - "link": r["href"], - "snippet": r["body"] - }) - return str(results) - - def handle_file_upload(self, files, chatbot, language): - """if the model accepts multi modal input, implement this function""" - status = gr.Markdown.update() - if files: - index = construct_index(self.api_key, file_src=files) - assert index is not None, "获取索引失败" - self.index = index - status = i18n("索引构建完成") - # Summarize the document - logging.info(i18n("生成内容总结中……")) - with get_openai_callback() as cb: - os.environ["OPENAI_API_KEY"] = self.api_key - from langchain.chains.summarize import load_summarize_chain - from langchain.prompts import PromptTemplate - from langchain.chat_models import ChatOpenAI - prompt_template = "Write a concise summary of the following:\n\n{text}\n\nCONCISE SUMMARY IN " + language + ":" - PROMPT = PromptTemplate(template=prompt_template, input_variables=["text"]) - llm = ChatOpenAI() - chain = load_summarize_chain(llm, chain_type="map_reduce", return_intermediate_steps=True, map_prompt=PROMPT, combine_prompt=PROMPT) - summary = chain({"input_documents": list(index.docstore.__dict__["_dict"].values())}, return_only_outputs=True)["output_text"] - logging.info(f"Summary: {summary}") - self.index_summary = summary - chatbot.append((f"Uploaded {len(files)} files", summary)) - logging.info(cb) - return gr.Files.update(), chatbot, status - - def query_index(self, query): - if self.index is not None: - retriever = self.index.as_retriever() - qa = RetrievalQA.from_chain_type(llm=self.llm, chain_type="stuff", retriever=retriever) - return qa.run(query) - else: - "Error during query." - - def summary(self, text): - texts = Document(page_content=text) - texts = self.text_splitter.split_documents([texts]) - return self.summarize_chain({"input_documents": texts}, return_only_outputs=True)["output_text"] - - def fetch_url_content(self, url): - response = requests.get(url) - soup = BeautifulSoup(response.text, 'html.parser') - - # 提取所有的文本 - text = ''.join(s.getText() for s in soup.find_all('p')) - logging.info(f"Extracted text from {url}") - return text - - def summary_url(self, url): - text = self.fetch_url_content(url) - if text == "": - return "URL unavailable." - text_summary = self.summary(text) - url_content = "webpage content summary:\n" + text_summary - - return url_content - - def ask_url(self, url, question): - text = self.fetch_url_content(url) - if text == "": - return "URL unavailable." 
- texts = Document(page_content=text) - texts = self.text_splitter.split_documents([texts]) - # use embedding - embeddings = OpenAIEmbeddings(openai_api_key=self.api_key, openai_api_base=os.environ.get("OPENAI_API_BASE", None)) - - # create vectorstore - db = FAISS.from_documents(texts, embeddings) - retriever = db.as_retriever() - qa = RetrievalQA.from_chain_type(llm=self.cheap_llm, chain_type="stuff", retriever=retriever) - return qa.run(f"{question} Reply in 中文") - - def get_answer_at_once(self): - question = self.history[-1]["content"] - # llm=ChatOpenAI(temperature=0, model_name="gpt-3.5-turbo") - agent = initialize_agent(self.tools, self.llm, agent=AgentType.STRUCTURED_CHAT_ZERO_SHOT_REACT_DESCRIPTION, verbose=True) - reply = agent.run(input=f"{question} Reply in 简体中文") - return reply, -1 - - def get_answer_stream_iter(self): - question = self.history[-1]["content"] - it = CallbackToIterator() - manager = BaseCallbackManager(handlers=[ChuanhuCallbackHandler(it.callback)]) - def thread_func(): - tools = self.tools - if self.index is not None: - tools.append( - Tool.from_function( - func=self.query_index, - name="Query Knowledge Base", - description=f"useful when you need to know about: {self.index_summary}", - args_schema=WebBrowsingInput - ) - ) - agent = initialize_agent(self.tools, self.llm, agent=AgentType.STRUCTURED_CHAT_ZERO_SHOT_REACT_DESCRIPTION, verbose=True, callback_manager=manager) - try: - reply = agent.run(input=f"{question} Reply in 简体中文") - except Exception as e: - import traceback - traceback.print_exc() - reply = str(e) - it.callback(reply) - it.finish() - t = Thread(target=thread_func) - t.start() - partial_text = "" - for value in it: - partial_text += value - yield partial_text diff --git a/spaces/yderre-aubay/midi-player-demo/src/common/song/Song.test.ts b/spaces/yderre-aubay/midi-player-demo/src/common/song/Song.test.ts deleted file mode 100644 index b2ee16c5a4d28de651a0ae224aaf6ac158d2037e..0000000000000000000000000000000000000000 --- a/spaces/yderre-aubay/midi-player-demo/src/common/song/Song.test.ts +++ /dev/null @@ -1,40 +0,0 @@ -import * as fs from "fs" -import * as path from "path" -import { deserialize, serialize } from "serializr" -import { songFromMidi } from "../midi/midiConversion" -import Song from "./Song" -import { emptySong } from "./SongFactory" - -describe("Song", () => { - const song = songFromMidi( - fs.readFileSync(path.join(__dirname, "../../../testdata/tracks.mid")) - .buffer, - ) - - it("fromMidi", () => { - expect(song).not.toBeNull() - const { tracks } = song - expect(tracks.length).toBe(18) - - expect(tracks[0].isConductorTrack).toBeTruthy() - expect(!tracks[1].isConductorTrack).toBeTruthy() - expect(tracks[1].channel).toBe(0) - expect(tracks[2].channel).toBe(0) - expect(tracks[3].channel).toBe(1) - expect(tracks[17].channel).toBe(15) - - expect(tracks[0].getTempo(240)).toBe(128) - expect(tracks[2].getVolume(193)).toBe(100) - expect(tracks[2].getPan(192)).toBe(1) - expect(tracks[2].programNumber).toBe(29) - }) - - it("should be serializable", () => { - const song = emptySong() - song.filepath = "abc" - const x = serialize(song) - const s = deserialize(Song, x) - expect(s.filepath).toBe("abc") - expect(s.tracks.length).toBe(song.tracks.length) - }) -}) diff --git a/spaces/yderre-aubay/midi-player-demo/src/main/components/ControlPane/ControlName.tsx b/spaces/yderre-aubay/midi-player-demo/src/main/components/ControlPane/ControlName.tsx deleted file mode 100644 index 
7ff1a5f7e93b52f6dbbd6d5e9a9ed0da4437f80f..0000000000000000000000000000000000000000 --- a/spaces/yderre-aubay/midi-player-demo/src/main/components/ControlPane/ControlName.tsx +++ /dev/null @@ -1,32 +0,0 @@ -import { MIDIControlEventNames, MIDIControlEvents } from "midifile-ts" -import { FC } from "react" -import { Localized } from "../../../components/Localized" -import { ControlMode } from "../../stores/ControlStore" - -export const ControlName: FC<{ mode: ControlMode }> = ({ mode }) => { - switch (mode.type) { - case "velocity": - return velocity - case "pitchBend": - return pitch-bend - case "controller": - switch (mode.controllerType) { - case MIDIControlEvents.MSB_MAIN_VOLUME: - return volume - case MIDIControlEvents.MSB_PAN: - return panpot - case MIDIControlEvents.MSB_EXPRESSION: - return expression - case MIDIControlEvents.SUSTAIN: - return hold-pedal - default: - return ( - <> - {MIDIControlEventNames[mode.controllerType] === "Undefined" - ? `CC${mode.controllerType}` - : MIDIControlEventNames[mode.controllerType]} - - ) - } - } -} diff --git a/spaces/yderre-aubay/midi-player-demo/src/main/components/GLNodes/Cursor.tsx b/spaces/yderre-aubay/midi-player-demo/src/main/components/GLNodes/Cursor.tsx deleted file mode 100644 index ba1e566c13e6019077fee18e24fd85274f8f77c4..0000000000000000000000000000000000000000 --- a/spaces/yderre-aubay/midi-player-demo/src/main/components/GLNodes/Cursor.tsx +++ /dev/null @@ -1,26 +0,0 @@ -import { Rectangles } from "@ryohey/webgl-react" -import { vec4 } from "gl-matrix" -import { FC } from "react" - -export const Cursor: FC<{ x: number; height: number; zIndex: number }> = ({ - x, - height, - zIndex, -}) => { - const color = vec4.fromValues(1, 0, 0, 1) - - return ( - - ) -} diff --git a/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/models/beit/modeling_flax_beit.py b/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/models/beit/modeling_flax_beit.py deleted file mode 100644 index 0f0dc809e68046f3ae9aee896900eea960642c62..0000000000000000000000000000000000000000 --- a/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/models/beit/modeling_flax_beit.py +++ /dev/null @@ -1,947 +0,0 @@ -# coding=utf-8 -# Copyright 2021 Microsoft Research and the HuggingFace Inc. team. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. 
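-# Flax/JAX implementation of BEiT: patch embeddings, relative-position-biased attention, and a transformer encoder.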
- - -from typing import Callable, List, Optional, Tuple - -import flax -import flax.linen as nn -import jax -import jax.numpy as jnp -import numpy as np -from flax.core.frozen_dict import FrozenDict, freeze, unfreeze -from flax.linen.attention import dot_product_attention_weights -from flax.traverse_util import flatten_dict, unflatten_dict - -from ...modeling_flax_outputs import ( - FlaxBaseModelOutput, - FlaxBaseModelOutputWithPooling, - FlaxMaskedLMOutput, - FlaxSequenceClassifierOutput, -) -from ...modeling_flax_utils import ( - ACT2FN, - FlaxPreTrainedModel, - append_replace_return_docstrings, - overwrite_call_docstring, -) -from ...utils import add_start_docstrings, add_start_docstrings_to_model_forward -from .configuration_beit import BeitConfig - - -@flax.struct.dataclass -class FlaxBeitModelOutputWithPooling(FlaxBaseModelOutputWithPooling): - """ - Class for outputs of [`FlaxBeitModel`]. - - Args: - last_hidden_state (`jnp.ndarray` of shape `(batch_size, sequence_length, hidden_size)`): - Sequence of hidden-states at the output of the last layer of the model. - pooler_output (`jnp.ndarray` of shape `(batch_size, hidden_size)`): - Average of the last layer hidden states of the patch tokens (excluding the *[CLS]* token) if - *config.use_mean_pooling* is set to True. If set to False, then the final hidden state of the *[CLS]* token - will be returned. - hidden_states (`tuple(jnp.ndarray)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`): - Tuple of `jnp.ndarray` (one for the output of the embeddings + one for the output of each layer) of shape - `(batch_size, sequence_length, hidden_size)`. Hidden-states of the model at the output of each layer plus - the initial embedding outputs. - attentions (`tuple(jnp.ndarray)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`): - Tuple of `jnp.ndarray` (one for each layer) of shape `(batch_size, num_heads, sequence_length, - sequence_length)`. Attentions weights after the attention softmax, used to compute the weighted average in - the self-attention heads. - """ - - -BEIT_START_DOCSTRING = r""" - - This model inherits from [`FlaxPreTrainedModel`]. Check the superclass documentation for the generic methods the - library implements for all its model (such as downloading, saving and converting weights from PyTorch models) - - This model is also a Flax Linen [flax.linen.Module](https://flax.readthedocs.io/en/latest/flax.linen.html#module) - subclass. Use it as a regular Flax linen Module and refer to the Flax documentation for all matter related to - general usage and behavior. - - Finally, this model supports inherent JAX features such as: - - - [Just-In-Time (JIT) compilation](https://jax.readthedocs.io/en/latest/jax.html#just-in-time-compilation-jit) - - [Automatic Differentiation](https://jax.readthedocs.io/en/latest/jax.html#automatic-differentiation) - - [Vectorization](https://jax.readthedocs.io/en/latest/jax.html#vectorization-vmap) - - [Parallelization](https://jax.readthedocs.io/en/latest/jax.html#parallelization-pmap) - - Parameters: - config ([`BeitConfig`]): Model configuration class with all the parameters of the model. - Initializing with a config file does not load the weights associated with the model, only the - configuration. Check out the [`~FlaxPreTrainedModel.from_pretrained`] method to load the model weights. 
- dtype (`jax.numpy.dtype`, *optional*, defaults to `jax.numpy.float32`): - The data type of the computation. Can be one of `jax.numpy.float32`, `jax.numpy.float16` (on GPUs) and - `jax.numpy.bfloat16` (on TPUs). - - This can be used to enable mixed-precision training or half-precision inference on GPUs or TPUs. If - specified all the computation will be performed with the given `dtype`. - - **Note that this only specifies the dtype of the computation and does not influence the dtype of model - parameters.** - - If you wish to change the dtype of the model parameters, see [`~FlaxPreTrainedModel.to_fp16`] and - [`~FlaxPreTrainedModel.to_bf16`]. -""" - -BEIT_INPUTS_DOCSTRING = r""" - Args: - pixel_values (`numpy.ndarray` of shape `(batch_size, num_channels, height, width)`): - Pixel values. Pixel values can be obtained using [`AutoImageProcessor`]. See - [`AutoImageProcessor.__call__`] for details. - - output_attentions (`bool`, *optional*): - Whether or not to return the attentions tensors of all attention layers. See `attentions` under returned - tensors for more detail. - output_hidden_states (`bool`, *optional*): - Whether or not to return the hidden states of all layers. See `hidden_states` under returned tensors for - more detail. - return_dict (`bool`, *optional*): - Whether or not to return a [`~utils.ModelOutput`] instead of a plain tuple. -""" - - -def relative_position_index_init(window_size: Tuple[int, int]) -> jnp.ndarray: - """ - get pair-wise relative position index for each token inside the window - """ - num_relative_distance = (2 * window_size[0] - 1) * (2 * window_size[1] - 1) + 3 - - coords_h = np.arange(window_size[0]) - coords_w = np.arange(window_size[1]) - coords = np.stack(np.meshgrid(coords_h, coords_w, indexing="ij")) # 2, Wh, Ww - coords_flatten = np.reshape(coords, (2, -1)) - relative_coords = coords_flatten[:, :, None] - coords_flatten[:, None, :] # 2, Wh*Ww, Wh*Ww - relative_coords = np.transpose(relative_coords, (1, 2, 0)) # Wh*Ww, Wh*Ww, 2 - relative_coords[:, :, 0] += window_size[0] - 1 # shift to start from 0 - relative_coords[:, :, 1] += window_size[1] - 1 - relative_coords[:, :, 0] *= 2 * window_size[1] - 1 - - relative_position_index = np.zeros(shape=(window_size[0] * window_size[1] + 1,) * 2, dtype=relative_coords.dtype) - relative_position_index[1:, 1:] = relative_coords.sum(-1) # Wh*Ww, Wh*Ww - relative_position_index[0, 0:] = num_relative_distance - 3 - relative_position_index[0:, 0] = num_relative_distance - 2 - relative_position_index[0, 0] = num_relative_distance - 1 - return jnp.array(relative_position_index) - - -def ones_with_scale(key, shape, scale, dtype=jnp.float32): - return jnp.ones(shape, dtype) * scale - - -class FlaxBeitDropPath(nn.Module): - """Drop paths (Stochastic Depth) per sample (when applied in main path of residual blocks).""" - - rate: float - - @nn.module.compact - def __call__(self, inputs, deterministic: Optional[bool] = True): - if self.rate == 0.0: - return inputs - keep_prob = 1.0 - self.rate - if deterministic: - return inputs - else: - shape = (inputs.shape[0],) + (1,) * (inputs.ndim - 1) # work with diff dim tensors, not just 2D ConvNets - rng = self.make_rng("droppath") - random_tensor = keep_prob + jax.random.uniform(rng, shape=shape, dtype=inputs.dtype) - binary_tensor = jnp.floor(random_tensor) - output = inputs / keep_prob * binary_tensor - return output - - -class FlaxBeitPatchEmbeddings(nn.Module): - config: BeitConfig - dtype: jnp.dtype = jnp.float32 # the dtype of the computation - - def setup(self): - 
self.num_channels = self.config.num_channels - image_size = self.config.image_size - patch_size = self.config.patch_size - num_patches = (image_size // patch_size) * (image_size // patch_size) - patch_shape = (image_size // patch_size, image_size // patch_size) - self.num_patches = num_patches - self.patch_shape = patch_shape - self.projection = nn.Conv( - self.config.hidden_size, - kernel_size=(patch_size, patch_size), - strides=(patch_size, patch_size), - padding="VALID", - dtype=self.dtype, - kernel_init=jax.nn.initializers.normal(self.config.initializer_range), - ) - - def __call__(self, pixel_values): - num_channels = pixel_values.shape[-1] - if num_channels != self.num_channels: - raise ValueError( - "Make sure that the channel dimension of the pixel values match with the one set in the configuration." - ) - embeddings = self.projection(pixel_values) - batch_size, _, _, channels = embeddings.shape - return jnp.reshape(embeddings, (batch_size, -1, channels)) - - -class FlaxBeitEmbeddings(nn.Module): - """Construct the CLS token, position and patch embeddings.""" - - config: BeitConfig - dtype: jnp.dtype = jnp.float32 # the dtype of the computation - - def setup(self): - self.cls_token = self.param("cls_token", nn.initializers.zeros, (1, 1, self.config.hidden_size)) - if self.config.use_mask_token: - self.mask_token = self.param("mask_token", nn.initializers.zeros, (1, 1, self.config.hidden_size)) - self.patch_embeddings = FlaxBeitPatchEmbeddings(self.config, dtype=self.dtype) - num_patches = self.patch_embeddings.num_patches - if self.config.use_absolute_position_embeddings: - self.position_embeddings = self.param( - "position_embeddings", nn.initializers.zeros, (1, num_patches + 1, self.config.hidden_size) - ) - self.dropout = nn.Dropout(rate=self.config.hidden_dropout_prob) - - def __call__(self, pixel_values, bool_masked_pos=None, deterministic=True): - embeddings = self.patch_embeddings(pixel_values) - batch_size, seq_len, _ = embeddings.shape - - cls_tokens = jnp.broadcast_to(self.cls_token, (batch_size, 1, self.config.hidden_size)) - cls_tokens = cls_tokens.astype(embeddings.dtype) - - if bool_masked_pos is not None: - mask_tokens = jnp.broadcast_to(self.mask_token, (batch_size, seq_len, self.config.hidden_size)) - mask_tokens = mask_tokens.astype(embeddings.dtype) - # replace the masked visual tokens by mask_tokens - w = jnp.expand_dims(bool_masked_pos, axis=-1) - embeddings = embeddings * (1 - w) + mask_tokens * w - - embeddings = jnp.concatenate((cls_tokens, embeddings), axis=1) - - if self.config.use_absolute_position_embeddings: - embeddings = embeddings + self.position_embeddings.astype(embeddings.dtype) - - embeddings = self.dropout(embeddings, deterministic=deterministic) - return embeddings - - -class FlaxBeitRelativePositionBias(nn.Module): - config: BeitConfig - window_size: Tuple[int, int] - dtype: jnp.dtype = jnp.float32 # the dtype of the computation - - def setup(self): - num_relative_distance = (2 * self.window_size[0] - 1) * (2 * self.window_size[1] - 1) + 3 - self.relative_position_bias_table = self.param( - "relative_position_bias_table", - nn.initializers.zeros, - (num_relative_distance, self.config.num_attention_heads), - ) # 2*Wh-1 * 2*Ww-1, nH - # cls to token & token 2 cls & cls to cls - - self.relative_position_index = relative_position_index_init(self.window_size) - - def __call__(self): - index = self.relative_position_index.reshape(-1) - shape = (self.window_size[0] * self.window_size[1] + 1, self.window_size[0] * self.window_size[1] + 1, -1) - 
relative_position_bias = self.relative_position_bias_table[index].reshape(shape) # Wh*Ww,Wh*Ww,nH - return jnp.transpose(relative_position_bias, (2, 0, 1)) - - -class FlaxBeitSelfAttention(nn.Module): - config: BeitConfig - window_size: Tuple[int, int] - dtype: jnp.dtype = jnp.float32 # the dtype of the computation - - def setup(self): - if self.config.hidden_size % self.config.num_attention_heads != 0 and not hasattr( - self.config, "embedding_size" - ): - raise ValueError( - f"The hidden size {self.config.hidden_size,} is not a multiple of the number of attention " - f"heads {self.config.num_attention_heads}." - ) - - self.query = nn.Dense( - self.config.hidden_size, - dtype=self.dtype, - kernel_init=jax.nn.initializers.normal(self.config.initializer_range), - ) - self.key = nn.Dense( - self.config.hidden_size, - dtype=self.dtype, - kernel_init=jax.nn.initializers.normal(self.config.initializer_range), - use_bias=False, - ) - self.value = nn.Dense( - self.config.hidden_size, - dtype=self.dtype, - kernel_init=jax.nn.initializers.normal(self.config.initializer_range), - ) - - self.relative_position_bias = ( - FlaxBeitRelativePositionBias(self.config, window_size=self.window_size, dtype=self.dtype) - if self.window_size - else None - ) - - def __call__( - self, hidden_states, relative_position_bias=None, deterministic: bool = True, output_attentions: bool = False - ): - head_dim = self.config.hidden_size // self.config.num_attention_heads - - query_states = self.query(hidden_states).reshape( - hidden_states.shape[:2] + (self.config.num_attention_heads, head_dim) - ) - value_states = self.value(hidden_states).reshape( - hidden_states.shape[:2] + (self.config.num_attention_heads, head_dim) - ) - key_states = self.key(hidden_states).reshape( - hidden_states.shape[:2] + (self.config.num_attention_heads, head_dim) - ) - - dropout_rng = None - if not deterministic and self.config.attention_probs_dropout_prob > 0.0: - dropout_rng = self.make_rng("dropout") - - attention_bias = jnp.array(0.0, dtype=self.dtype) - # Add relative position bias if present. - if self.relative_position_bias is not None: - attention_bias = jnp.expand_dims(self.relative_position_bias(), 0) - attention_bias = attention_bias.astype(query_states.dtype) - - # Add shared relative position bias if provided. 
- if relative_position_bias is not None: - attention_bias = attention_bias + relative_position_bias.astype(attention_bias.dtype) - - attn_weights = dot_product_attention_weights( - query_states, - key_states, - bias=attention_bias, - dropout_rng=dropout_rng, - dropout_rate=self.config.attention_probs_dropout_prob, - broadcast_dropout=True, - deterministic=deterministic, - dtype=self.dtype, - precision=None, - ) - - attn_output = jnp.einsum("...hqk,...khd->...qhd", attn_weights, value_states) - attn_output = attn_output.reshape(attn_output.shape[:2] + (-1,)) - - outputs = (attn_output, attn_weights) if output_attentions else (attn_output,) - return outputs - - -class FlaxBeitSelfOutput(nn.Module): - config: BeitConfig - dtype: jnp.dtype = jnp.float32 # the dtype of the computation - - def setup(self): - self.dense = nn.Dense( - self.config.hidden_size, - kernel_init=jax.nn.initializers.normal(self.config.initializer_range), - dtype=self.dtype, - ) - self.dropout = nn.Dropout(rate=self.config.hidden_dropout_prob) - - def __call__(self, hidden_states, deterministic: bool = True): - hidden_states = self.dense(hidden_states) - hidden_states = self.dropout(hidden_states, deterministic=deterministic) - return hidden_states - - -class FlaxBeitAttention(nn.Module): - config: BeitConfig - window_size: Tuple[int, int] - dtype: jnp.dtype = jnp.float32 - - def setup(self): - self.attention = FlaxBeitSelfAttention(self.config, self.window_size, dtype=self.dtype) - self.output = FlaxBeitSelfOutput(self.config, dtype=self.dtype) - - def __call__( - self, hidden_states, relative_position_bias=None, deterministic=True, output_attentions: bool = False - ): - attn_outputs = self.attention( - hidden_states, relative_position_bias, deterministic=deterministic, output_attentions=output_attentions - ) - attn_output = attn_outputs[0] - attn_output = self.output(attn_output, deterministic=deterministic) - - outputs = (attn_output,) - - if output_attentions: - outputs += (attn_outputs[1],) - - return outputs - - -class FlaxBeitIntermediate(nn.Module): - config: BeitConfig - dtype: jnp.dtype = jnp.float32 # the dtype of the computation - - def setup(self): - self.dense = nn.Dense( - self.config.intermediate_size, - kernel_init=jax.nn.initializers.normal(self.config.initializer_range), - dtype=self.dtype, - ) - self.activation = ACT2FN[self.config.hidden_act] - - def __call__(self, hidden_states): - hidden_states = self.dense(hidden_states) - hidden_states = self.activation(hidden_states) - - return hidden_states - - -class FlaxBeitOutput(nn.Module): - config: BeitConfig - dtype: jnp.dtype = jnp.float32 # the dtype of the computation - - def setup(self): - self.dense = nn.Dense( - self.config.hidden_size, - kernel_init=jax.nn.initializers.normal(self.config.initializer_range), - dtype=self.dtype, - ) - self.dropout = nn.Dropout(rate=self.config.hidden_dropout_prob) - - def __call__(self, hidden_states, deterministic: bool = True): - hidden_states = self.dense(hidden_states) - hidden_states = self.dropout(hidden_states, deterministic=deterministic) - - return hidden_states - - -class FlaxBeitLayer(nn.Module): - config: BeitConfig - window_size: Tuple[int, int] - drop_path_rate: float - dtype: jnp.dtype = jnp.float32 # the dtype of the computation - - def setup(self): - self.attention = FlaxBeitAttention(self.config, self.window_size, dtype=self.dtype) - self.intermediate = FlaxBeitIntermediate(self.config, dtype=self.dtype) - self.output = FlaxBeitOutput(self.config, dtype=self.dtype) - self.layernorm_before = 
nn.LayerNorm(epsilon=self.config.layer_norm_eps, dtype=self.dtype) - self.drop_path = FlaxBeitDropPath(rate=self.drop_path_rate) - self.layernorm_after = nn.LayerNorm(epsilon=self.config.layer_norm_eps, dtype=self.dtype) - - self.init_values = self.config.layer_scale_init_value - if self.init_values > 0: - self.lambda_1 = self.param("lambda_1", ones_with_scale, (self.config.hidden_size), self.init_values) - self.lambda_2 = self.param("lambda_2", ones_with_scale, (self.config.hidden_size), self.init_values) - else: - self.lambda_1 = None - self.lambda_2 = None - - def __call__( - self, hidden_states, relative_position_bias=None, deterministic: bool = True, output_attentions: bool = False - ): - self_attention_outputs = self.attention( - self.layernorm_before(hidden_states), # in BEiT, layernorm is applied before self-attention - relative_position_bias, - deterministic=deterministic, - output_attentions=output_attentions, - ) - attention_output = self_attention_outputs[0] - - # apply lambda_1 if present - if self.lambda_1 is not None: - attention_output = self.lambda_1.astype(attention_output.dtype) * attention_output - - # first residual connection - hidden_states = self.drop_path(attention_output, deterministic=deterministic) + hidden_states - - # in BEiT, layernorm is also applied after self-attention - layer_output = self.layernorm_after(hidden_states) - - layer_output = self.intermediate(layer_output) - layer_output = self.output(layer_output, deterministic=deterministic) - - # apply lambda_2 if present - if self.lambda_2 is not None: - layer_output = self.lambda_2.astype(layer_output.dtype) * layer_output - - # second residual connection - layer_output = self.drop_path(layer_output, deterministic=deterministic) + hidden_states - - outputs = (layer_output,) - - if output_attentions: - outputs += (self_attention_outputs[1],) - - return outputs - - -class FlaxBeitLayerCollection(nn.Module): - config: BeitConfig - window_size: Tuple[int, int] - drop_path_rates: List[float] - relative_position_bias: Callable[[], jnp.ndarray] - dtype: jnp.dtype = jnp.float32 # the dtype of the computation - - def setup(self): - self.layers = [ - FlaxBeitLayer( - self.config, - window_size=self.window_size if self.config.use_relative_position_bias else None, - drop_path_rate=self.drop_path_rates[i], - name=str(i), - dtype=self.dtype, - ) - for i in range(self.config.num_hidden_layers) - ] - - def __call__( - self, - hidden_states, - deterministic: bool = True, - output_attentions: bool = False, - output_hidden_states: bool = False, - return_dict: bool = True, - ): - all_attentions = () if output_attentions else None - all_hidden_states = () if output_hidden_states else None - - for i, layer in enumerate(self.layers): - if output_hidden_states: - all_hidden_states += (hidden_states,) - relative_position_bias = self.relative_position_bias() if self.relative_position_bias is not None else None - layer_outputs = layer( - hidden_states, relative_position_bias, deterministic=deterministic, output_attentions=output_attentions - ) - - hidden_states = layer_outputs[0] - - if output_attentions: - all_attentions += (layer_outputs[1],) - - if output_hidden_states: - all_hidden_states += (hidden_states,) - - outputs = (hidden_states,) - if not return_dict: - return tuple(v for v in outputs if v is not None) - - return FlaxBaseModelOutput( - last_hidden_state=hidden_states, hidden_states=all_hidden_states, attentions=all_attentions - ) - - -class FlaxBeitEncoder(nn.Module): - config: BeitConfig - window_size: Tuple[int, 
int] - dtype: jnp.dtype = jnp.float32 # the dtype of the computation - - def setup(self): - if self.config.use_shared_relative_position_bias: - self.relative_position_bias = FlaxBeitRelativePositionBias( - config=self.config, window_size=self.window_size, dtype=self.dtype - ) - - # stochastic depth decay rule - drop_path_rates = list(np.linspace(0, self.config.drop_path_rate, self.config.num_hidden_layers)) - self.layer = FlaxBeitLayerCollection( - self.config, - window_size=self.window_size, - drop_path_rates=drop_path_rates, - relative_position_bias=self.relative_position_bias - if self.config.use_shared_relative_position_bias - else None, - dtype=self.dtype, - ) - - def __call__( - self, - hidden_states, - deterministic: bool = True, - output_attentions: bool = False, - output_hidden_states: bool = False, - return_dict: bool = True, - ): - return self.layer( - hidden_states, - deterministic=deterministic, - output_attentions=output_attentions, - output_hidden_states=output_hidden_states, - return_dict=return_dict, - ) - - -class FlaxBeitPreTrainedModel(FlaxPreTrainedModel): - """ - An abstract class to handle weights initialization and a simple interface for downloading and loading pretrained - models. - """ - - config_class = BeitConfig - base_model_prefix = "beit" - main_input_name = "pixel_values" - module_class: nn.Module = None - - def __init__( - self, - config: BeitConfig, - input_shape=None, - seed: int = 0, - dtype: jnp.dtype = jnp.float32, - _do_init: bool = True, - **kwargs, - ): - module = self.module_class(config=config, dtype=dtype, **kwargs) - if input_shape is None: - input_shape = (1, config.image_size, config.image_size, config.num_channels) - super().__init__(config, module, input_shape=input_shape, seed=seed, dtype=dtype, _do_init=_do_init) - - def init_weights(self, rng: jax.random.PRNGKey, input_shape: Tuple, params: FrozenDict = None) -> FrozenDict: - # init input tensors - pixel_values = jnp.zeros(input_shape, dtype=self.dtype) - - params_rng, dropout_rng = jax.random.split(rng) - dropout_rng, droppath_rng = jax.random.split(dropout_rng) - rngs = {"params": params_rng, "dropout": dropout_rng, "droppath": droppath_rng} - - random_params = self.module.init(rngs, pixel_values, return_dict=False)["params"] - - if params is not None: - random_params = flatten_dict(unfreeze(random_params)) - params = flatten_dict(unfreeze(params)) - for missing_key in self._missing_keys: - params[missing_key] = random_params[missing_key] - self._missing_keys = set() - return freeze(unflatten_dict(params)) - else: - return random_params - - @add_start_docstrings_to_model_forward(BEIT_INPUTS_DOCSTRING.format("batch_size, sequence_length")) - def __call__( - self, - pixel_values, - bool_masked_pos=None, - params: dict = None, - dropout_rng: jax.random.PRNGKey = None, - train: bool = False, - output_attentions: Optional[bool] = None, - output_hidden_states: Optional[bool] = None, - return_dict: Optional[bool] = None, - ): - output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions - output_hidden_states = ( - output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states - ) - return_dict = return_dict if return_dict is not None else self.config.return_dict - - pixel_values = jnp.transpose(pixel_values, (0, 2, 3, 1)) - # Handle any PRNG if needed - rngs = {} - if dropout_rng is not None: - dropout_rng, droppath_rng = jax.random.split(dropout_rng) - rngs["dropout"] = dropout_rng - rngs["droppath"] = droppath_rng 
- - return self.module.apply( - {"params": params or self.params}, - jnp.array(pixel_values, dtype=jnp.float32), - bool_masked_pos, - not train, - output_attentions, - output_hidden_states, - return_dict, - rngs=rngs, - ) - - -class FlaxBeitPooler(nn.Module): - config: BeitConfig - dtype: jnp.dtype = jnp.float32 # the dtype of the computation - - def setup(self): - if self.config.use_mean_pooling: - self.layernorm = nn.LayerNorm(epsilon=self.config.layer_norm_eps, dtype=self.dtype) - - def __call__(self, hidden_states): - if self.config.use_mean_pooling: - # Mean pool the final hidden states of the patch tokens - patch_tokens = hidden_states[:, 1:, :] - pooled_output = self.layernorm(jnp.mean(patch_tokens, axis=1)) - else: - # Pool by simply taking the final hidden state of the [CLS] token - pooled_output = hidden_states[:, 0] - - return pooled_output - - -class FlaxBeitModule(nn.Module): - config: BeitConfig - dtype: jnp.dtype = jnp.float32 # the dtype of the computation - add_pooling_layer: bool = True - - def setup(self): - self.embeddings = FlaxBeitEmbeddings(self.config, dtype=self.dtype) - self.encoder = FlaxBeitEncoder( - self.config, window_size=self.embeddings.patch_embeddings.patch_shape, dtype=self.dtype - ) - if not self.config.use_mean_pooling: - self.layernorm = nn.LayerNorm(epsilon=self.config.layer_norm_eps, dtype=self.dtype) - self.pooler = FlaxBeitPooler(self.config, dtype=self.dtype) if self.add_pooling_layer else None - - def __call__( - self, - pixel_values, - bool_masked_pos=None, - deterministic: bool = True, - output_attentions: bool = False, - output_hidden_states: bool = False, - return_dict: bool = True, - ): - hidden_states = self.embeddings(pixel_values, bool_masked_pos, deterministic=deterministic) - - outputs = self.encoder( - hidden_states, - deterministic=deterministic, - output_attentions=output_attentions, - output_hidden_states=output_hidden_states, - return_dict=return_dict, - ) - hidden_states = outputs[0] - if not self.config.use_mean_pooling: - hidden_states = self.layernorm(hidden_states) - pooled = self.pooler(hidden_states) if self.add_pooling_layer else None - - if not return_dict: - # if pooled is None, don't return it - if pooled is None: - return (hidden_states,) + outputs[1:] - return (hidden_states, pooled) + outputs[1:] - - return FlaxBeitModelOutputWithPooling( - last_hidden_state=hidden_states, - pooler_output=pooled, - hidden_states=outputs.hidden_states, - attentions=outputs.attentions, - ) - - -@add_start_docstrings( - "The bare Beit Model transformer outputting raw hidden-states without any specific head on top.", - BEIT_START_DOCSTRING, -) -class FlaxBeitModel(FlaxBeitPreTrainedModel): - module_class = FlaxBeitModule - - -FLAX_BEIT_MODEL_DOCSTRING = """ - Returns: - - Examples: - - ```python - >>> from transformers import AutoImageProcessor, FlaxBeitModel - >>> from PIL import Image - >>> import requests - - >>> url = "http://images.cocodataset.org/val2017/000000039769.jpg" - >>> image = Image.open(requests.get(url, stream=True).raw) - - >>> image_processor = AutoImageProcessor.from_pretrained("microsoft/beit-base-patch16-224-pt22k-ft22k") - >>> model = FlaxBeitModel.from_pretrained("microsoft/beit-base-patch16-224-pt22k-ft22k") - - >>> inputs = image_processor(images=image, return_tensors="np") - >>> outputs = model(**inputs) - >>> last_hidden_states = outputs.last_hidden_state - ``` -""" - -overwrite_call_docstring(FlaxBeitModel, FLAX_BEIT_MODEL_DOCSTRING) -append_replace_return_docstrings(FlaxBeitModel, 
output_type=FlaxBeitModelOutputWithPooling, config_class=BeitConfig) - - -class FlaxBeitForMaskedImageModelingModule(nn.Module): - config: BeitConfig - dtype: jnp.dtype = jnp.float32 # the dtype of the computation - - def setup(self): - self.beit = FlaxBeitModule(self.config, add_pooling_layer=False, dtype=self.dtype) - - # Classifier head - self.layernorm = nn.LayerNorm(epsilon=self.config.layer_norm_eps, dtype=self.dtype) - self.lm_head = nn.Dense( - self.config.vocab_size, - kernel_init=jax.nn.initializers.normal(self.config.initializer_range), - dtype=self.dtype, - ) - - def __call__( - self, - pixel_values=None, - bool_masked_pos=None, - deterministic: bool = True, - output_attentions=None, - output_hidden_states=None, - return_dict=None, - ): - return_dict = return_dict if return_dict is not None else self.config.use_return_dict - - outputs = self.beit( - pixel_values, - bool_masked_pos, - deterministic=deterministic, - output_attentions=output_attentions, - output_hidden_states=output_hidden_states, - return_dict=return_dict, - ) - - sequence_output = outputs[0] - sequence_output = self.layernorm(sequence_output) - prediction_scores = self.lm_head(sequence_output[:, 1:]) - - if not return_dict: - output = (prediction_scores,) + outputs[2:] - return output - - return FlaxMaskedLMOutput( - logits=prediction_scores, - hidden_states=outputs.hidden_states, - attentions=outputs.attentions, - ) - - -@add_start_docstrings( - "Beit Model transformer with a 'language' modeling head on top (to predict visual tokens).", - BEIT_START_DOCSTRING, -) -class FlaxBeitForMaskedImageModeling(FlaxBeitPreTrainedModel): - module_class = FlaxBeitForMaskedImageModelingModule - - -FLAX_BEIT_MLM_DOCSTRING = """ - bool_masked_pos (`numpy.ndarray` of shape `(batch_size, num_patches)`): - Boolean masked positions. Indicates which patches are masked (1) and which aren't (0). 
- - Returns: - - Examples: - - ```python - >>> from transformers import AutoImageProcessor, BeitForMaskedImageModeling - >>> from PIL import Image - >>> import requests - - >>> url = "http://images.cocodataset.org/val2017/000000039769.jpg" - >>> image = Image.open(requests.get(url, stream=True).raw) - - >>> image_processor = AutoImageProcessor.from_pretrained("microsoft/beit-base-patch16-224-pt22k") - >>> model = BeitForMaskedImageModeling.from_pretrained("microsoft/beit-base-patch16-224-pt22k") - - >>> inputs = image_processor(images=image, return_tensors="np") - >>> outputs = model(**inputs) - >>> logits = outputs.logits - ``` -""" - -overwrite_call_docstring(FlaxBeitForMaskedImageModeling, FLAX_BEIT_MLM_DOCSTRING) -append_replace_return_docstrings( - FlaxBeitForMaskedImageModeling, output_type=FlaxMaskedLMOutput, config_class=BeitConfig -) - - -class FlaxBeitForImageClassificationModule(nn.Module): - config: BeitConfig - dtype: jnp.dtype = jnp.float32 - - def setup(self): - self.beit = FlaxBeitModule(config=self.config, dtype=self.dtype, add_pooling_layer=True) - self.classifier = nn.Dense( - self.config.num_labels, - kernel_init=jax.nn.initializers.normal(self.config.initializer_range), - dtype=self.dtype, - ) - - def __call__( - self, - pixel_values=None, - bool_masked_pos=None, - deterministic: bool = True, - output_attentions=None, - output_hidden_states=None, - return_dict=None, - ): - return_dict = return_dict if return_dict is not None else self.config.use_return_dict - - outputs = self.beit( - pixel_values, - deterministic=deterministic, - output_attentions=output_attentions, - output_hidden_states=output_hidden_states, - return_dict=return_dict, - ) - - pooled_output = outputs[1] - logits = self.classifier(pooled_output) - - if not return_dict: - output = (logits,) + outputs[2:] - return output - - return FlaxSequenceClassifierOutput( - logits=logits, - hidden_states=outputs.hidden_states, - attentions=outputs.attentions, - ) - - -@add_start_docstrings( - """ - Beit Model transformer with an image classification head on top (a linear layer on top of the average of the final - hidden states of the patch tokens) e.g. for ImageNet. 
- """, - BEIT_START_DOCSTRING, -) -class FlaxBeitForImageClassification(FlaxBeitPreTrainedModel): - module_class = FlaxBeitForImageClassificationModule - - -FLAX_BEIT_CLASSIF_DOCSTRING = """ - Returns: - - Example: - - ```python - >>> from transformers import AutoImageProcessor, FlaxBeitForImageClassification - >>> from PIL import Image - >>> import requests - - >>> url = "http://images.cocodataset.org/val2017/000000039769.jpg" - >>> image = Image.open(requests.get(url, stream=True).raw) - - >>> image_processor = AutoImageProcessor.from_pretrained("microsoft/beit-base-patch16-224") - >>> model = FlaxBeitForImageClassification.from_pretrained("microsoft/beit-base-patch16-224") - - >>> inputs = image_processor(images=image, return_tensors="np") - >>> outputs = model(**inputs) - >>> logits = outputs.logits - >>> # model predicts one of the 1000 ImageNet classes - >>> predicted_class_idx = logits.argmax(-1).item() - >>> print("Predicted class:", model.config.id2label[predicted_class_idx]) - ``` -""" - -overwrite_call_docstring(FlaxBeitForImageClassification, FLAX_BEIT_CLASSIF_DOCSTRING) -append_replace_return_docstrings( - FlaxBeitForImageClassification, output_type=FlaxSequenceClassifierOutput, config_class=BeitConfig -) diff --git a/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/models/glpn/feature_extraction_glpn.py b/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/models/glpn/feature_extraction_glpn.py deleted file mode 100644 index 314268225d2af41f3cc6af55af4e21aebe087b60..0000000000000000000000000000000000000000 --- a/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/models/glpn/feature_extraction_glpn.py +++ /dev/null @@ -1,33 +0,0 @@ -# coding=utf-8 -# Copyright 2022 The HuggingFace Inc. team. All rights reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. -"""Feature extractor class for GLPN.""" - -import warnings - -from ...utils import logging -from .image_processing_glpn import GLPNImageProcessor - - -logger = logging.get_logger(__name__) - - -class GLPNFeatureExtractor(GLPNImageProcessor): - def __init__(self, *args, **kwargs) -> None: - warnings.warn( - "The class GLPNFeatureExtractor is deprecated and will be removed in version 5 of Transformers. Please" - " use GLPNImageProcessor instead.", - FutureWarning, - ) - super().__init__(*args, **kwargs) diff --git a/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/models/idefics/processing_idefics.py b/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/models/idefics/processing_idefics.py deleted file mode 100644 index e6e0a9254aa13e8a456f5bbc6b5b35f1e968b342..0000000000000000000000000000000000000000 --- a/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/models/idefics/processing_idefics.py +++ /dev/null @@ -1,413 +0,0 @@ -# coding=utf-8 -# Copyright 2022 The HuggingFace Inc. team. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. 
-# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. -""" -Processor class for IDEFICS. -""" - -from typing import Callable, List, Optional, Union -from urllib.parse import urlparse - -from ...feature_extraction_utils import BatchFeature -from ...processing_utils import ProcessorMixin -from ...tokenization_utils_base import BatchEncoding, PaddingStrategy, TextInput, TruncationStrategy -from ...utils import TensorType, is_torch_available - - -if is_torch_available(): - import torch - - -IMAGE_TOKEN = "" - - -# copied from m4.training.packing -def incremental_to_binary_attention_mask(incremental_mask, num_classes=-1): - # This function converts: [-1, 0, 1] => [[0, 0], [1, 0], [0, 1]] - - # If any of images index are more than num_classes, set them to -1. - # Words after the max number of images allowed have been seen don't attend on anything - if num_classes != -1: - incremental_mask[incremental_mask >= num_classes] = -1 - - negatives = incremental_mask == -1 - incremental_mask[negatives] = 0 - attn_mask = torch.nn.functional.one_hot(incremental_mask, num_classes=num_classes) - attn_mask[negatives, :] = 0 - return attn_mask - - -# copied from m4.training.packing -def image_attention_mask_for_packed_input_ids(input_ids, tokenizer): - image_attention_mask = torch.full_like(input_ids, fill_value=-1) - next_image_attention_mask = torch.full_like(input_ids, fill_value=-1) - image_token_id = tokenizer.convert_tokens_to_ids(IMAGE_TOKEN) - eod_token_id = tokenizer.eos_token_id - for batch_idx in range(input_ids.size(0)): - count = -1 - seen_eod = False - for idx, token_id in enumerate(input_ids[batch_idx]): - if token_id == image_token_id: - count += 1 - image_attention_mask[batch_idx][idx] = count - seen_eod = False - else: - image_attention_mask[batch_idx][idx] = count - - if seen_eod: - image_attention_mask[batch_idx][idx] = -1 - - if token_id == eod_token_id: - seen_eod = True - - for batch_idx in range(input_ids.size(0)): - count = -1 - seen_eod = False - for idx in range(input_ids[batch_idx].size(0) - 1, -1, -1): - token_id = input_ids[batch_idx][idx] - if token_id == image_token_id: - count += 1 - next_image_attention_mask[batch_idx][idx] = count - seen_eod = False - else: - next_image_attention_mask[batch_idx][idx] = count - - if token_id == eod_token_id: - seen_eod = True - - if seen_eod: - next_image_attention_mask[batch_idx][idx] = -1 - - non_negative_indices = next_image_attention_mask[batch_idx] != -1 - next_image_attention_mask[batch_idx][non_negative_indices] -= count - next_image_attention_mask[batch_idx][non_negative_indices] *= -1 - - return image_attention_mask, next_image_attention_mask - - -def is_url(string): - """Checks if the passed string contains a valid url and nothing else. e.g. if space is included it's immediately - invalidated the url""" - if " " in string: - return False - result = urlparse(string) - return all([result.scheme, result.netloc]) - - -class IdeficsProcessor(ProcessorMixin): - r""" - Constructs a IDEFICS processor which wraps a LLama tokenizer and IDEFICS image processor into a single processor. 
- - [`IdeficsProcessor`] offers all the functionalities of [`IdeficsImageProcessor`] and [`LlamaTokenizerFast`]. See - the docstring of [`~IdeficsProcessor.__call__`] and [`~IdeficsProcessor.decode`] for more information. - - Args: - image_processor (`IdeficsImageProcessor`): - An instance of [`IdeficsImageProcessor`]. The image processor is a required input. - tokenizer (`LlamaTokenizerFast`): - An instance of [`LlamaTokenizerFast`]. The tokenizer is a required input. - image_size (`int`, *optional*, defaults to 224): Image size (assuming a square image) - """ - attributes = ["image_processor", "tokenizer"] - image_processor_class = "IdeficsImageProcessor" - tokenizer_class = "LlamaTokenizerFast" - - def __init__(self, image_processor, tokenizer=None, image_size=224, add_end_of_utterance_token=None, **kwargs): - if image_processor is None: - raise ValueError("You need to specify an `image_processor`.") - if tokenizer is None: - raise ValueError("You need to specify a `tokenizer`.") - - super().__init__(image_processor, tokenizer) - self.current_processor = self.image_processor - self.image_token_id = tokenizer.convert_tokens_to_ids(IMAGE_TOKEN) - - self.default_image_dims = ( - self.image_processor.image_num_channels, - self.image_processor.image_size, - self.image_processor.image_size, - ) - - self.tokenizer_was_trained_with_end_of_utterance_token = ( - True - if "" in self.tokenizer.special_tokens_map.get("additional_special_tokens", []) - else False - ) - - def __call__( - self, - prompts: Union[List[TextInput], List[List[TextInput]]], - padding: Union[bool, str, PaddingStrategy] = False, - truncation: Union[bool, str, TruncationStrategy] = None, - max_length: Optional[int] = None, - transform: Callable = None, - add_eos_token=False, - add_end_of_utterance_token=None, - debug=False, - return_tensors: Optional[Union[str, TensorType]] = TensorType.PYTORCH, - ) -> BatchEncoding: - """This method takes batched or non-batched prompts made of text and images and converts them into prompts that - the model was trained on and prepares the image pixel values for the model to process. - - Args: - prompts (`Union[List[TextInput], [List[List[TextInput]]]]`): - either a single prompt or a batched list of prompts - see the detailed description immediately after - the end of the arguments doc section. - padding (`bool`, `str` or [`~utils.PaddingStrategy`], *optional*, defaults to `False`): - Select a strategy to pad the returned sequences (according to the model's padding side and padding - index) among: - - `True` or `'longest'`: Pad to the longest sequence in the batch (or no padding if only a single - sequence if provided). - - `'max_length'`: Pad to a maximum length specified with the argument `max_length` or to the maximum - acceptable input length for the model if that argument is not provided. - - `False` or `'do_not_pad'` (default): No padding (i.e., can output a batch with sequences of different - lengths). - max_length (`int`, *optional*): - Maximum length of the returned list and optionally padding length (see above). - truncation (`bool`, *optional*): - Activates truncation to cut input sequences longer than `max_length` to `max_length`. - transform (`Callable`, *optional*): - A custom transform function that accepts a single image can be passed for training. For example, - `torchvision.Compose` can be used to compose multiple functions. 
If `None` a preset inference-specific - set of transforms will be applied to the images - add_eos_token (`bool`, *optional*, defaults to `False`): - Adds `eos_token` at the end of the final prompt if True` - add_end_of_utterance_token (`bool`, *optional*) - Whether to automatically add `` after each prompt's text input (unless followed by an - image). If `None` the tokenizer will be checked instead and if this token is found in - `additional_special_tokens` then the value will be `True`. - debug (`bool`, *optional*, defaults to `False`): - `True` value will help debug prompt generation by dumping useful information - return_tensors (`str` or `TensorType`, *optional*, defaults to `TensorType.PYTORCH`): - The type of tensors to return. Can be one of: - - `TensorType.PYTORCH` or `'pt'`: Return a batch of type `torch.Tensor`. - - Returns: - a dict with entries: `input_ids`, `attention_mask`, `pixel_values`, `image_attention_mask` which can be - directly passed to `model.generate` - - Detailed explanation: - - Each entry in `prompts` is either a text to be passed as is or an image that will be processed. - - An image can be either an image object (`PIL.Image`) or a url from which the image can be retrieved. - - When the processor encounters an image it'll inject `` - entry into the prompt. - - Example: - - ```python - checkpoint = "HuggingFaceM4/idefics-9b" - processor = AutoProcessor.from_pretrained(checkpoint) - url = "https://hips.hearstapps.com/hmg-prod/images/cute-photos-of-cats-in-grass-1593184777.jpg" - img = processor.image_processor.fetch_images([url])[0] - - prompts = [ - "User:", - img, - "Describe this image.\nAssistant: An image of two kittens in grass.\n", - "User:", - "https://hips.hearstapps.com/hmg-prod/images/dog-puns-1581708208.jpg", - "Describe this image.\nAssistant:", - ] - - inputs = processor(prompts, return_tensors="pt") - generated_ids = model.generate(**inputs, max_length=100) - generated_text = processor.batch_decode(generated_ids, skip_special_tokens=True)[0] - ``` - - In this example the `prompts` will be converted into: - - ``` - User:Describe this image. - Assistant: An image of two kittens in grass. - User:Describe this image. - Assistant:' - ``` - - and the two images will be massaged using [`IdeficsImageProcessor.__call__`] method and placed inside the - `pixel_values` dict entry of the return value. - - This example also examplifies that images can be passed as objects or as text urls. It can be seen that the - first image is passed as object and the second one as a url. - - To do training do: - - ```python - image_transform = transforms.Compose( - [ - transforms.RandomResizedCrop( - (w, h), scale=(0.9, 1.0), interpolation=transforms.InterpolationMode.BICUBIC - ), - transforms.ToTensor(), - transforms.Normalize(mean=self.image_mean, std=self.image_std), - ] - ) - inputs = processor(prompts, transform=image_transform, return_tensors="pt") - ``` - - In order to help debug prompt generation enable `debug=True` which will show you what's happening. 
- - """ - - # if the value isn't overriden by the user, check if the tokenizer was trained with this token and then use it - if add_end_of_utterance_token is None: - add_end_of_utterance_token = self.tokenizer_was_trained_with_end_of_utterance_token - - # turn non-batched prompts into batched - if not any(isinstance(i, list) for i in prompts): - prompts = [prompts] - - fake_token = "" - image_token = "" - end_of_utterance_token = "" - - def image_tokens(last_was_image): - if last_was_image: - return image_token + fake_token - else: - return fake_token + image_token + fake_token - - all_prompts = [] - all_images = [] - for sample in prompts: - # the model was trained on samples starting with - full_text = f"{self.tokenizer.bos_token}" - - # an image can either be an image object in the item or the url, everything else is a verbatim prompt text - image_objects = [] - last_was_image = False - last_was_text = False - for i, item in enumerate(sample): - if i > 0: - last_was_text = True if not last_was_image else False - - if isinstance(item, str): - item = item.strip(" ") - if is_url(item): - image = self.image_processor.fetch_images(item) - full_text += image_tokens(last_was_image) - image_objects.append(image) - last_was_image = True - else: - # we add end_of_utterance_token between each subsequent text prompts (but not at the last one!) - if add_end_of_utterance_token and last_was_text: - full_text += end_of_utterance_token - full_text += item - last_was_image = False - else: - # must be an image obj - full_text += image_tokens(last_was_image) - image_objects.append(item) - last_was_image = True - - if add_eos_token: - full_text += self.tokenizer.eos_token - - if debug is True: - print(f"{full_text=}") - - image_objects = self.image_processor(image_objects, transform=transform) - - all_prompts.append(full_text) - all_images.append(image_objects) - - text_encoding = self.tokenizer( - text=all_prompts, - add_special_tokens=False, - padding=padding, - truncation=truncation, - max_length=max_length, - ) - all_texts = text_encoding["input_ids"] - - max_seq_len = max(len(x) for x in all_texts) - - # max_num_images has to be at least 1 even when there are no images - max_num_images = max(len(x) for x in all_images) - max_num_images = max(1, max_num_images) - - at_least_one_image = sum(len(x) for x in all_images) > 0 - output_input_ids = [] - output_images = [] - output_attention_masks = [] - for text, images in zip(all_texts, all_images): - padded_input_ids = [self.tokenizer.pad_token_id] * max_seq_len - unpadded_seq_len = len(text) - start = max_seq_len - unpadded_seq_len - padded_input_ids[start:] = text[:max_seq_len] - - attention_mask = torch.zeros((max_seq_len,), dtype=torch.long) - attention_mask[start:] = 1 - - image_count = padded_input_ids.count(self.image_token_id) - local_max_num_images = min(image_count, max_num_images) - - current_images = images[:local_max_num_images] - - if len(current_images) > 0: - padded_image_tensor = torch.zeros(max_num_images, *current_images.size()[1:]) - padded_image_tensor[: current_images.size(0)] = current_images - else: - padded_image_tensor = torch.zeros(max_num_images, *self.default_image_dims) - - output_images.append(padded_image_tensor) - output_input_ids.append(torch.tensor(padded_input_ids)) - - output_attention_masks.append(attention_mask) - - output_input_ids = torch.stack(output_input_ids) - output_images = torch.stack(output_images) - output_attention_masks = torch.stack(output_attention_masks) - - if at_least_one_image: - image_attention_mask, 
_ = image_attention_mask_for_packed_input_ids(output_input_ids, self.tokenizer) - image_attention_mask = incremental_to_binary_attention_mask( - image_attention_mask, num_classes=max_num_images - ) - else: - # in full language mode we set the image mask to all-0s - image_attention_mask = torch.zeros( - output_input_ids.shape[0], output_input_ids.shape[1], 1, dtype=torch.bool - ) - - return BatchFeature( - data={ - "input_ids": output_input_ids, - "attention_mask": output_attention_masks, - "pixel_values": output_images, - "image_attention_mask": image_attention_mask, - } - ) - - def batch_decode(self, *args, **kwargs): - """ - This method forwards all its arguments to LlamaTokenizerFast's [`~PreTrainedTokenizer.batch_decode`]. Please - refer to the docstring of this method for more information. - """ - return self.tokenizer.batch_decode(*args, **kwargs) - - def decode(self, *args, **kwargs): - """ - This method forwards all its arguments to LlamaTokenizerFast's [`~PreTrainedTokenizer.decode`]. Please refer to - the docstring of this method for more information. - """ - return self.tokenizer.decode(*args, **kwargs) - - @property - def model_input_names(self): - tokenizer_input_names = self.tokenizer.model_input_names - image_processor_input_names = self.image_processor.model_input_names - return list(dict.fromkeys(tokenizer_input_names + image_processor_input_names)) diff --git a/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/models/openai/configuration_openai.py b/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/models/openai/configuration_openai.py deleted file mode 100644 index dd6f349249e3e79eec769beed55742a6da5acdf3..0000000000000000000000000000000000000000 --- a/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/models/openai/configuration_openai.py +++ /dev/null @@ -1,155 +0,0 @@ -# coding=utf-8 -# Copyright 2018 The OpenAI Team Authors and HuggingFace Inc. team. -# Copyright (c) 2018, NVIDIA CORPORATION. All rights reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. -""" OpenAI GPT configuration""" - -from ...configuration_utils import PretrainedConfig -from ...utils import logging - - -logger = logging.get_logger(__name__) - -OPENAI_GPT_PRETRAINED_CONFIG_ARCHIVE_MAP = {"openai-gpt": "https://huggingface.co/openai-gpt/resolve/main/config.json"} - - -class OpenAIGPTConfig(PretrainedConfig): - """ - This is the configuration class to store the configuration of a [`OpenAIGPTModel`] or a [`TFOpenAIGPTModel`]. It is - used to instantiate a GPT model according to the specified arguments, defining the model architecture. - Instantiating a configuration with the defaults will yield a similar configuration to that of the GPT - [openai-gpt](https://huggingface.co/openai-gpt) architecture from OpenAI. - - Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the - documentation from [`PretrainedConfig`] for more information. 
- - Args: - vocab_size (`int`, *optional*, defaults to 40478): - Vocabulary size of the GPT-2 model. Defines the number of different tokens that can be represented by the - `inputs_ids` passed when calling [`OpenAIGPTModel`] or [`TFOpenAIGPTModel`]. - n_positions (`int`, *optional*, defaults to 512): - The maximum sequence length that this model might ever be used with. Typically set this to something large - just in case (e.g., 512 or 1024 or 2048). - n_embd (`int`, *optional*, defaults to 768): - Dimensionality of the embeddings and hidden states. - n_layer (`int`, *optional*, defaults to 12): - Number of hidden layers in the Transformer encoder. - n_head (`int`, *optional*, defaults to 12): - Number of attention heads for each attention layer in the Transformer encoder. - afn (`str` or `Callable`, *optional*, defaults to `"gelu"`): - The non-linear activation function (function or string) in the encoder and pooler. If string, `"gelu"`, - `"relu"`, `"silu"` and `"gelu_new"` are supported. - resid_pdrop (`float`, *optional*, defaults to 0.1): - The dropout probability for all fully connected layers in the embeddings, encoder, and pooler. - embd_pdrop (`int`, *optional*, defaults to 0.1): - The dropout ratio for the embeddings. - attn_pdrop (`float`, *optional*, defaults to 0.1): - The dropout ratio for the attention. - layer_norm_epsilon (`float`, *optional*, defaults to 1e-05): - The epsilon to use in the layer normalization layers - initializer_range (`float`, *optional*, defaults to 0.02): - The standard deviation of the truncated_normal_initializer for initializing all weight matrices. - summary_type (`str`, *optional*, defaults to `"cls_index"`): - Argument used when doing sequence summary, used in the models [`OpenAIGPTDoubleHeadsModel`] and - [`OpenAIGPTDoubleHeadsModel`]. - - Has to be one of the following options: - - - `"last"`: Take the last token hidden state (like XLNet). - - `"first"`: Take the first token hidden state (like BERT). - - `"mean"`: Take the mean of all tokens hidden states. - - `"cls_index"`: Supply a Tensor of classification token position (like GPT/GPT-2). - - `"attn"`: Not implemented now, use multi-head attention. - summary_use_proj (`bool`, *optional*, defaults to `True`): - Argument used when doing sequence summary, used in the models [`OpenAIGPTDoubleHeadsModel`] and - [`OpenAIGPTDoubleHeadsModel`]. - - Whether or not to add a projection after the vector extraction. - summary_activation (`str`, *optional*): - Argument used when doing sequence summary, used in the models [`OpenAIGPTDoubleHeadsModel`] and - [`OpenAIGPTDoubleHeadsModel`]. - - Pass `"tanh"` for a tanh activation to the output, any other value will result in no activation. - summary_proj_to_labels (`bool`, *optional*, defaults to `True`): - Argument used when doing sequence summary, used in the models [`OpenAIGPTDoubleHeadsModel`] and - [`OpenAIGPTDoubleHeadsModel`]. - - Whether the projection outputs should have `config.num_labels` or `config.hidden_size` classes. - summary_first_dropout (`float`, *optional*, defaults to 0.1): - Argument used when doing sequence summary, used in the models [`OpenAIGPTDoubleHeadsModel`] and - [`OpenAIGPTDoubleHeadsModel`]. - - The dropout ratio to be used after the projection and activation. 
- - - Examples: - - ```python - >>> from transformers import OpenAIGPTConfig, OpenAIGPTModel - - >>> # Initializing a GPT configuration - >>> configuration = OpenAIGPTConfig() - - >>> # Initializing a model (with random weights) from the configuration - >>> model = OpenAIGPTModel(configuration) - - >>> # Accessing the model configuration - >>> configuration = model.config - ```""" - - model_type = "openai-gpt" - attribute_map = { - "max_position_embeddings": "n_positions", - "hidden_size": "n_embd", - "num_attention_heads": "n_head", - "num_hidden_layers": "n_layer", - } - - def __init__( - self, - vocab_size=40478, - n_positions=512, - n_embd=768, - n_layer=12, - n_head=12, - afn="gelu", - resid_pdrop=0.1, - embd_pdrop=0.1, - attn_pdrop=0.1, - layer_norm_epsilon=1e-5, - initializer_range=0.02, - summary_type="cls_index", - summary_use_proj=True, - summary_activation=None, - summary_proj_to_labels=True, - summary_first_dropout=0.1, - **kwargs, - ): - self.vocab_size = vocab_size - self.n_positions = n_positions - self.n_embd = n_embd - self.n_layer = n_layer - self.n_head = n_head - self.afn = afn - self.resid_pdrop = resid_pdrop - self.embd_pdrop = embd_pdrop - self.attn_pdrop = attn_pdrop - self.layer_norm_epsilon = layer_norm_epsilon - self.initializer_range = initializer_range - self.summary_type = summary_type - self.summary_use_proj = summary_use_proj - self.summary_activation = summary_activation - self.summary_first_dropout = summary_first_dropout - self.summary_proj_to_labels = summary_proj_to_labels - super().__init__(**kwargs) diff --git a/spaces/ynhe/AskAnything/models/grit_src/third_party/CenterNet2/configs/common/models/keypoint_rcnn_fpn.py b/spaces/ynhe/AskAnything/models/grit_src/third_party/CenterNet2/configs/common/models/keypoint_rcnn_fpn.py deleted file mode 100644 index 56b3994df249884d4816fc9a5c7f553a9ab6f400..0000000000000000000000000000000000000000 --- a/spaces/ynhe/AskAnything/models/grit_src/third_party/CenterNet2/configs/common/models/keypoint_rcnn_fpn.py +++ /dev/null @@ -1,33 +0,0 @@ -from detectron2.config import LazyCall as L -from detectron2.layers import ShapeSpec -from detectron2.modeling.poolers import ROIPooler -from detectron2.modeling.roi_heads import KRCNNConvDeconvUpsampleHead - -from .mask_rcnn_fpn import model - -[model.roi_heads.pop(x) for x in ["mask_in_features", "mask_pooler", "mask_head"]] - -model.roi_heads.update( - num_classes=1, - keypoint_in_features=["p2", "p3", "p4", "p5"], - keypoint_pooler=L(ROIPooler)( - output_size=14, - scales=(1.0 / 4, 1.0 / 8, 1.0 / 16, 1.0 / 32), - sampling_ratio=0, - pooler_type="ROIAlignV2", - ), - keypoint_head=L(KRCNNConvDeconvUpsampleHead)( - input_shape=ShapeSpec(channels=256, width=14, height=14), - num_keypoints=17, - conv_dims=[512] * 8, - loss_normalizer="visible", - ), -) - -# Detectron1 uses 2000 proposals per-batch, but this option is per-image in detectron2. -# 1000 proposals per-image is found to hurt box AP. -# Therefore we increase it to 1500 per-image. 
-model.proposal_generator.post_nms_topk = (1500, 1000) - -# Keypoint AP degrades (though box AP improves) when using plain L1 loss -model.roi_heads.box_predictor.smooth_l1_beta = 0.5 diff --git a/spaces/ysharma/Effectively_Using_IF/app.py b/spaces/ysharma/Effectively_Using_IF/app.py deleted file mode 100644 index b1e46b23c76ea9a9447243ea089bca76313581e2..0000000000000000000000000000000000000000 --- a/spaces/ysharma/Effectively_Using_IF/app.py +++ /dev/null @@ -1,109 +0,0 @@ -#import requests -import gradio as gr -from gradio_client import Client -from PIL import Image -from io import BytesIO -from diffusers import StableDiffusionUpscalePipeline -import torch -import os -import requests - -HF_TOKEN = os.environ.get('HF_TOKEN') -client_if = Client("ysharma/IF", hf_token=HF_TOKEN) -client_pick = Client("yuvalkirstain/PickScore") - -# load upscaling model and scheduler -model_id = "stabilityai/stable-diffusion-x4-upscaler" -pipeline_upscale = StableDiffusionUpscalePipeline.from_pretrained(model_id, torch_dtype=torch.float16) -pipeline_upscale = pipeline_upscale.to("cuda") - -def get_IF_op(prompt, neg_prompt): - print("inside get_IF_op") - filepaths = client_if.predict(prompt, neg_prompt, 1,4,7.0, 'smart100',50, api_name="/generate64") - folder_path = filepaths[0] - file_list = os.listdir(folder_path) - file_list = [os.path.join(folder_path, f) for f in file_list if f != 'captions.json'] - print(f"^^file list is: {file_list}") - return file_list - -def get_pickscores(prompt, image_tmps): - print("inside get_pickscores") - #Get the predictons - probabilities1 = client_pick.predict(prompt, image_tmps[0], image_tmps[1], fn_index=0) - probabilities2 = client_pick.predict(prompt, image_tmps[2], image_tmps[3], fn_index=0) - probabilities_all = list(probabilities1) + list(probabilities2) - max_score = max(probabilities_all) - max_score_index = probabilities_all.index(max_score) - best_match_image = image_tmps[max_score_index] - return best_match_image - - -def get_upscale_op(prompt, gallery_if): - print("inside get_upscale_op") - print(f"^^gallery_if is: {gallery_if}") - image_tmps = [val['name'] for val in gallery_if] - # get pickscores - best_match_image = get_pickscores(prompt, image_tmps) - # let's get the best pick! - low_res_img = Image.open(best_match_image).convert("RGB") - low_res_img = low_res_img.resize((128, 128)) - # Upscaling the best pick - upscaled_image = pipeline_upscale(prompt=prompt, image=low_res_img).images[0] - #upscaled_image.save("upsampled.png") - return upscaled_image - -theme = gr.themes.Monochrome( - neutral_hue="cyan", - radius_size="md", - spacing_size="sm",) - -title = """

              🔥Gradio pipeline to use DeepFloyd IF more effectively!
              Demo built using DeepFloyd IF and Pick-A-Pic PickScore models.
              💪💪The Gradio-Client library lets you use the Gradio demos for these two cutting-edge models as API endpoints
              """ -description = """

              Steps to build this pipeline: -- Duplicate the DeepFloyd IF Space to avoid the queue -- Create a Client for this duplicated Space using the gradio python client -- Generate an initial 4-image gallery using the client and a prompt -- Create a Client for the PickScore Space using the gradio python client -- Feed the image gallery into the PickScore client -- Generate probabilities for the images, choose the image with the highest probability, and display it -

              """ - -theme = gr.themes.Monochrome( - neutral_hue="cyan", - radius_size="md", - spacing_size="sm",) - -title = """

              🔥Gradio pipeline to use DeepFloyd IF more effectively!


              -

              Demo build using DeeepFloyd IF and Pick-A-Pic PickScore models.

              -

              💪💪Gradio-Client library allows you to use gradio demo for these two cutting edge models as API endpoints

              """ -description = """

              Steps to build this pipeline: -- Duplicate the Deepfloyd IF Space to avoid queue -- Create a Cient for this duplicated space using gradio python client -- Generate intial 4-image gallery using the client and a prompt -- Create a Client for PickScore Space using gradio python client -- Feed the image Gallery into PickScore client -- Generate Probabilities for images, choose the image with highest probability value and display it -

              """ - -with gr.Blocks(theme=theme) as demo: - gr.HTML(title) - gr.HTML('''
              Duplicate the Space to skip the queue and run in a private space
              ''') - with gr.Row(variant='compact'): - with gr.Column(scale=4): - prompt = gr.Textbox(label='Prompt') - neg_prompt = gr.Textbox(label='Negative Prompt') - with gr.Column(scale=1): - b1 = gr.Button("Generate 'IF' Output").style(full_width=True) - with gr.Row(variant='compact'): - with gr.Column(): - gallery_if = gr.Gallery(label='IF Space outputs', ).style(columns=4, object_fit="contain", preview=True, height='auto') - b2 = gr.Button("Get the best generation using Pick-A-Pic") - image_picakapic = gr.Image(label="PickAPic Evaluated Output").style(height=450) - gr.Markdown(description) - b1.click(get_IF_op,[prompt, neg_prompt], gallery_if) - prompt.submit(get_IF_op,[prompt, neg_prompt], gallery_if) - b2.click(get_upscale_op,[prompt, gallery_if], image_picakapic) - -demo.queue(concurrency_count=2, max_size=10) -demo.launch(debug=True) \ No newline at end of file diff --git a/spaces/zekewilliams/video/README.md b/spaces/zekewilliams/video/README.md deleted file mode 100644 index 4857d7ba18277c6dfd156c36cd03f4b0098c3707..0000000000000000000000000000000000000000 --- a/spaces/zekewilliams/video/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: ModelScope Text To Video Synthesis -emoji: 🚀 -colorFrom: pink -colorTo: pink -sdk: gradio -sdk_version: 3.23.0 -app_file: app.py -pinned: false -duplicated_from: MaxLess/text-to-video-synth ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/zestyoreo/vtryon/models/afwm.py b/spaces/zestyoreo/vtryon/models/afwm.py deleted file mode 100644 index ec1550b9456a03d4b7e2b2da4c11a4eb8000b71e..0000000000000000000000000000000000000000 --- a/spaces/zestyoreo/vtryon/models/afwm.py +++ /dev/null @@ -1,502 +0,0 @@ -import torch -import torch.nn as nn -import torch.nn.functional as F -import numpy as np -from math import sqrt - -def apply_offset(offset): - sizes = list(offset.size()[2:]) - grid_list = torch.meshgrid([torch.arange(size, device=offset.device) for size in sizes]) - grid_list = reversed(grid_list) - # apply offset - grid_list = [grid.float().unsqueeze(0) + offset[:, dim, ...] 
- for dim, grid in enumerate(grid_list)] - # normalize - grid_list = [grid / ((size - 1.0) / 2.0) - 1.0 - for grid, size in zip(grid_list, reversed(sizes))] - - return torch.stack(grid_list, dim=-1) - - -def TVLoss(x): - tv_h = x[:, :, 1:, :] - x[:, :, :-1, :] - tv_w = x[:, :, :, 1:] - x[:, :, :, :-1] - - return torch.mean(torch.abs(tv_h)) + torch.mean(torch.abs(tv_w)) - - -# backbone -class EqualLR: - def __init__(self, name): - self.name = name - - def compute_weight(self, module): - weight = getattr(module, self.name + '_orig') - fan_in = weight.data.size(1) * weight.data[0][0].numel() - - return weight * sqrt(2 / fan_in) - - @staticmethod - def apply(module, name): - fn = EqualLR(name) - - weight = getattr(module, name) - del module._parameters[name] - module.register_parameter(name + '_orig', nn.Parameter(weight.data)) - module.register_forward_pre_hook(fn) - - return fn - - def __call__(self, module, input): - weight = self.compute_weight(module) - setattr(module, self.name, weight) - - -def equal_lr(module, name='weight'): - EqualLR.apply(module, name) - - return module - -class EqualLinear(nn.Module): - def __init__(self, in_dim, out_dim): - super().__init__() - - linear = nn.Linear(in_dim, out_dim) - linear.weight.data.normal_() - linear.bias.data.zero_() - - self.linear = equal_lr(linear) - - def forward(self, input): - return self.linear(input) - -class ModulatedConv2d(nn.Module): - def __init__(self, fin, fout, kernel_size, padding_type='zero', upsample=False, downsample=False, latent_dim=512, normalize_mlp=False): - super(ModulatedConv2d, self).__init__() - self.in_channels = fin - self.out_channels = fout - self.kernel_size = kernel_size - padding_size = kernel_size // 2 - - if kernel_size == 1: - self.demudulate = False - else: - self.demudulate = True - - self.weight = nn.Parameter(torch.Tensor(fout, fin, kernel_size, kernel_size)) - self.bias = nn.Parameter(torch.Tensor(1, fout, 1, 1)) - #self.conv = F.conv2d - - if normalize_mlp: - self.mlp_class_std = nn.Sequential(EqualLinear(latent_dim, fin), PixelNorm()) - else: - self.mlp_class_std = EqualLinear(latent_dim, fin) - - #self.blur = Blur(fout) - - if padding_type == 'reflect': - self.padding = nn.ReflectionPad2d(padding_size) - else: - self.padding = nn.ZeroPad2d(padding_size) - - - self.weight.data.normal_() - self.bias.data.zero_() - - def forward(self, input, latent): - fan_in = self.weight.data.size(1) * self.weight.data[0][0].numel() - weight = self.weight * sqrt(2 / fan_in) - weight = weight.view(1, self.out_channels, self.in_channels, self.kernel_size, self.kernel_size) - - s = self.mlp_class_std(latent).view(-1, 1, self.in_channels, 1, 1) - weight = s * weight - if self.demudulate: - d = torch.rsqrt((weight ** 2).sum(4).sum(3).sum(2) + 1e-5).view(-1, self.out_channels, 1, 1, 1) - weight = (d * weight).view(-1, self.in_channels, self.kernel_size, self.kernel_size) - else: - weight = weight.view(-1, self.in_channels, self.kernel_size, self.kernel_size) - - - - batch,_,height,width = input.shape - #input = input.view(1,-1,h,w) - #input = self.padding(input) - #out = self.conv(input, weight, groups=b).view(b, self.out_channels, h, w) + self.bias - - - - input = input.view(1,-1,height,width) - input = self.padding(input) - out = F.conv2d(input, weight, groups=batch).view(batch, self.out_channels, height, width) + self.bias - - return out - - -class StyledConvBlock(nn.Module): - def __init__(self, fin, fout, latent_dim=256, padding='zero', - actvn='lrelu', normalize_affine_output=False, modulated_conv=False): - 
super(StyledConvBlock, self).__init__() - if not modulated_conv: - if padding == 'reflect': - padding_layer = nn.ReflectionPad2d - else: - padding_layer = nn.ZeroPad2d - - if modulated_conv: - conv2d = ModulatedConv2d - else: - conv2d = EqualConv2d - - if modulated_conv: - self.actvn_gain = sqrt(2) - else: - self.actvn_gain = 1.0 - - - self.modulated_conv = modulated_conv - - if actvn == 'relu': - activation = nn.ReLU(True) - else: - activation = nn.LeakyReLU(0.2,True) - - - if self.modulated_conv: - self.conv0 = conv2d(fin, fout, kernel_size=3, padding_type=padding, upsample=False, - latent_dim=latent_dim, normalize_mlp=normalize_affine_output) - else: - conv0 = conv2d(fin, fout, kernel_size=3) - - seq0 = [padding_layer(1), conv0] - self.conv0 = nn.Sequential(*seq0) - - self.actvn0 = activation - - if self.modulated_conv: - self.conv1 = conv2d(fout, fout, kernel_size=3, padding_type=padding, downsample=False, - latent_dim=latent_dim, normalize_mlp=normalize_affine_output) - else: - conv1 = conv2d(fout, fout, kernel_size=3) - seq1 = [padding_layer(1), conv1] - self.conv1 = nn.Sequential(*seq1) - - self.actvn1 = activation - - def forward(self, input, latent=None): - if self.modulated_conv: - out = self.conv0(input,latent) - else: - out = self.conv0(input) - - out = self.actvn0(out) * self.actvn_gain - - if self.modulated_conv: - out = self.conv1(out,latent) - else: - out = self.conv1(out) - - out = self.actvn1(out) * self.actvn_gain - - return out - - -class Styled_F_ConvBlock(nn.Module): - def __init__(self, fin, fout, latent_dim=256, padding='zero', - actvn='lrelu', normalize_affine_output=False, modulated_conv=False): - super(Styled_F_ConvBlock, self).__init__() - if not modulated_conv: - if padding == 'reflect': - padding_layer = nn.ReflectionPad2d - else: - padding_layer = nn.ZeroPad2d - - if modulated_conv: - conv2d = ModulatedConv2d - else: - conv2d = EqualConv2d - - if modulated_conv: - self.actvn_gain = sqrt(2) - else: - self.actvn_gain = 1.0 - - - self.modulated_conv = modulated_conv - - if actvn == 'relu': - activation = nn.ReLU(True) - else: - activation = nn.LeakyReLU(0.2,True) - - - if self.modulated_conv: - self.conv0 = conv2d(fin, 128, kernel_size=3, padding_type=padding, upsample=False, - latent_dim=latent_dim, normalize_mlp=normalize_affine_output) - else: - conv0 = conv2d(fin, 128, kernel_size=3) - - seq0 = [padding_layer(1), conv0] - self.conv0 = nn.Sequential(*seq0) - - self.actvn0 = activation - - if self.modulated_conv: - self.conv1 = conv2d(128, fout, kernel_size=3, padding_type=padding, downsample=False, - latent_dim=latent_dim, normalize_mlp=normalize_affine_output) - else: - conv1 = conv2d(128, fout, kernel_size=3) - seq1 = [padding_layer(1), conv1] - self.conv1 = nn.Sequential(*seq1) - - #self.actvn1 = activation - - def forward(self, input, latent=None): - if self.modulated_conv: - out = self.conv0(input,latent) - else: - out = self.conv0(input) - - out = self.actvn0(out) * self.actvn_gain - - if self.modulated_conv: - out = self.conv1(out,latent) - else: - out = self.conv1(out) - - #out = self.actvn1(out) * self.actvn_gain - - return out - - -class ResBlock(nn.Module): - def __init__(self, in_channels): - super(ResBlock, self).__init__() - self.block = nn.Sequential( - nn.BatchNorm2d(in_channels), - nn.ReLU(inplace=True), - nn.Conv2d(in_channels, in_channels, kernel_size=3, padding=1, bias=False), - nn.BatchNorm2d(in_channels), - nn.ReLU(inplace=True), - nn.Conv2d(in_channels, in_channels, kernel_size=3, padding=1, bias=False) - ) - - def forward(self, x): - 
return self.block(x) + x - - -class DownSample(nn.Module): - def __init__(self, in_channels, out_channels): - super(DownSample, self).__init__() - self.block= nn.Sequential( - nn.BatchNorm2d(in_channels), - nn.ReLU(inplace=True), - nn.Conv2d(in_channels, out_channels, kernel_size=3, stride=2, padding=1, bias=False) - ) - - def forward(self, x): - return self.block(x) - - - -class FeatureEncoder(nn.Module): - def __init__(self, in_channels, chns=[64,128,256,256,256]): - # in_channels = 3 for images, and is larger (e.g., 17+1+1) for agnositc representation - super(FeatureEncoder, self).__init__() - self.encoders = [] - for i, out_chns in enumerate(chns): - if i == 0: - encoder = nn.Sequential(DownSample(in_channels, out_chns), - ResBlock(out_chns), - ResBlock(out_chns)) - else: - encoder = nn.Sequential(DownSample(chns[i-1], out_chns), - ResBlock(out_chns), - ResBlock(out_chns)) - - self.encoders.append(encoder) - - self.encoders = nn.ModuleList(self.encoders) - - - def forward(self, x): - encoder_features = [] - for encoder in self.encoders: - x = encoder(x) - encoder_features.append(x) - return encoder_features - -class RefinePyramid(nn.Module): - def __init__(self, chns=[64,128,256,256,256], fpn_dim=256): - super(RefinePyramid, self).__init__() - self.chns = chns - - # adaptive - self.adaptive = [] - for in_chns in list(reversed(chns)): - adaptive_layer = nn.Conv2d(in_chns, fpn_dim, kernel_size=1) - self.adaptive.append(adaptive_layer) - self.adaptive = nn.ModuleList(self.adaptive) - # output conv - self.smooth = [] - for i in range(len(chns)): - smooth_layer = nn.Conv2d(fpn_dim, fpn_dim, kernel_size=3, padding=1) - self.smooth.append(smooth_layer) - self.smooth = nn.ModuleList(self.smooth) - - def forward(self, x): - conv_ftr_list = x - - feature_list = [] - last_feature = None - for i, conv_ftr in enumerate(list(reversed(conv_ftr_list))): - # adaptive - feature = self.adaptive[i](conv_ftr) - # fuse - if last_feature is not None: - feature = feature + F.interpolate(last_feature, scale_factor=2, mode='nearest') - # smooth - feature = self.smooth[i](feature) - last_feature = feature - feature_list.append(feature) - - return tuple(reversed(feature_list)) - - -class AFlowNet(nn.Module): - def __init__(self, num_pyramid, fpn_dim=256): - super(AFlowNet, self).__init__() - - padding_type='zero' - actvn = 'lrelu' - normalize_mlp = False - modulated_conv = True - - - self.netRefine = [] - - self.netStyle = [] - - self.netF = [] - - for i in range(num_pyramid): - - netRefine_layer = torch.nn.Sequential( - torch.nn.Conv2d(2 * fpn_dim, out_channels=128, kernel_size=3, stride=1, padding=1), - torch.nn.LeakyReLU(inplace=False, negative_slope=0.1), - torch.nn.Conv2d(in_channels=128, out_channels=64, kernel_size=3, stride=1, padding=1), - torch.nn.LeakyReLU(inplace=False, negative_slope=0.1), - torch.nn.Conv2d(in_channels=64, out_channels=32, kernel_size=3, stride=1, padding=1), - torch.nn.LeakyReLU(inplace=False, negative_slope=0.1), - torch.nn.Conv2d(in_channels=32, out_channels=2, kernel_size=3, stride=1, padding=1) - ) - - style_block = StyledConvBlock(256, 49, latent_dim=256, - padding=padding_type, actvn=actvn, - normalize_affine_output=normalize_mlp, - modulated_conv=modulated_conv) - - style_F_block = Styled_F_ConvBlock(49, 2, latent_dim=256, - padding=padding_type, actvn=actvn, - normalize_affine_output=normalize_mlp, - modulated_conv=modulated_conv) - - - self.netRefine.append(netRefine_layer) - self.netStyle.append(style_block) - self.netF.append(style_F_block) - - - self.netRefine = 
nn.ModuleList(self.netRefine) - self.netStyle = nn.ModuleList(self.netStyle) - self.netF = nn.ModuleList(self.netF) - - self.cond_style = torch.nn.Sequential(torch.nn.Conv2d(256, 128, kernel_size=(8,6), stride=1, padding=0), torch.nn.LeakyReLU(inplace=False, negative_slope=0.1)) - - self.image_style = torch.nn.Sequential(torch.nn.Conv2d(256, 128, kernel_size=(8,6), stride=1, padding=0), torch.nn.LeakyReLU(inplace=False, negative_slope=0.1)) - - - def forward(self, x, x_warps, x_conds, warp_feature=True): - last_flow = None - - B = x_conds[len(x_warps)-1].shape[0] - - cond_style = self.cond_style(x_conds[len(x_warps) - 1]).view(B,-1) - image_style = self.image_style(x_warps[len(x_warps) - 1]).view(B,-1) - style = torch.cat([cond_style, image_style], 1) - - for i in range(len(x_warps)): - x_warp = x_warps[len(x_warps) - 1 - i] - x_cond = x_conds[len(x_warps) - 1 - i] - - if last_flow is not None and warp_feature: - x_warp_after = F.grid_sample(x_warp, last_flow.detach().permute(0, 2, 3, 1), - mode='bilinear', padding_mode='border') - else: - x_warp_after = x_warp - - - stylemap = self.netStyle[i](x_warp_after, style) - - flow = self.netF[i](stylemap, style) - flow = apply_offset(flow) - if last_flow is not None: - flow = F.grid_sample(last_flow, flow, mode='bilinear', padding_mode='border') - else: - flow = flow.permute(0, 3, 1, 2) - - last_flow = flow - x_warp = F.grid_sample(x_warp, flow.permute(0, 2, 3, 1),mode='bilinear', padding_mode='border') - concat = torch.cat([x_warp,x_cond],1) - flow = self.netRefine[i](concat) - flow = apply_offset(flow) - flow = F.grid_sample(last_flow, flow, mode='bilinear', padding_mode='border') - - last_flow = F.interpolate(flow, scale_factor=2, mode='bilinear') - - - x_warp = F.grid_sample(x, last_flow.permute(0, 2, 3, 1), - mode='bilinear', padding_mode='border') - return x_warp, last_flow - - -class AFWM(nn.Module): - - def __init__(self, opt, input_nc): - super(AFWM, self).__init__() - num_filters = [64,128,256,256,256] - self.image_features = FeatureEncoder(3, num_filters) - self.cond_features = FeatureEncoder(input_nc, num_filters) - self.image_FPN = RefinePyramid(num_filters) - self.cond_FPN = RefinePyramid(num_filters) - self.aflow_net = AFlowNet(len(num_filters)) - - - def forward(self, cond_input, image_input): - - #import ipdb; ipdb.set_trace() - cond_pyramids = self.cond_FPN(self.cond_features(cond_input)) # maybe use nn.Sequential - image_pyramids = self.image_FPN(self.image_features(image_input)) - - x_warp, last_flow = self.aflow_net(image_input, image_pyramids, cond_pyramids) - - return x_warp, last_flow - - - def update_learning_rate(self,optimizer): - lrd = opt.lr / opt.niter_decay - lr = self.old_lr - lrd - for param_group in optimizer.param_groups: - param_group['lr'] = lr - if opt.verbose: - print('update learning rate: %f -> %f' % (self.old_lr, lr)) - self.old_lr = lr - - def update_learning_rate_warp(self,optimizer): - lrd = 0.2 * opt.lr / opt.niter_decay - lr = self.old_lr_warp - lrd - for param_group in optimizer.param_groups: - param_group['lr'] = lr - if opt.verbose: - print('update learning rate: %f -> %f' % (self.old_lr_warp, lr)) - self.old_lr_warp = lr - diff --git a/spaces/zetavg/LLaMA-LoRA-Tuner-UI-Demo/README.md b/spaces/zetavg/LLaMA-LoRA-Tuner-UI-Demo/README.md deleted file mode 100644 index b9518d64856fc9773e885da9c05ca272aaaed652..0000000000000000000000000000000000000000 --- a/spaces/zetavg/LLaMA-LoRA-Tuner-UI-Demo/README.md +++ /dev/null @@ -1,196 +0,0 @@ ---- -title: LLM Tuner - UI Demo -emoji: 🦙🎛️ -colorFrom: pink 
-colorTo: gray -sdk: gradio -sdk_version: 3.24.1 -python_version: 3.8.9 -app_file: app.py -pinned: true ---- - -# HF UI DEMO - -To update, run: - -```sh -git push -f hf-ui-demo hf-ui-demo:main -``` - ---- - - -# 🦙🎛️ LLaMA-LoRA Tuner - -Open In Colab - -Making evaluating and fine-tuning LLaMA models with low-rank adaptation (LoRA) easy. - - -## Features - -**[See a demo on Hugging Face](https://huggingface.co/spaces/zetavg/LLaMA-LoRA-UI-Demo)** **Only serves UI demonstration. To try training or text generation, [run on Colab](#run-on-google-colab).* - -* **[1-click up and running in Google Colab](#run-on-google-colab)** with a standard GPU runtime. - * Loads and stores data in Google Drive. -* Evaluate various LLaMA LoRA models stored in your folder or from Hugging Face.
-* Switch between base models such as `decapoda-research/llama-7b-hf`, `nomic-ai/gpt4all-j`, `databricks/dolly-v2-7b`, `EleutherAI/gpt-j-6b`, or `EleutherAI/pythia-6.9b`. -* Fine-tune LLaMA models with different prompt templates and training dataset formats.
              - * Load JSON and JSONL datasets from your folder, or even paste plain text directly into the UI. - * Supports Stanford Alpaca [seed_tasks](https://github.com/tatsu-lab/stanford_alpaca/blob/main/seed_tasks.jsonl), [alpaca_data](https://github.com/tatsu-lab/stanford_alpaca/blob/main/alpaca_data.json) and [OpenAI "prompt"-"completion"](https://platform.openai.com/docs/guides/fine-tuning/data-formatting) format. - * Use prompt templates to keep your dataset DRY. - - -## How to Start - -There are various ways to run this app: - -* **[Run on Google Colab](#run-on-google-colab)**: The simplest way to get started, all you need is a Google account. Standard (free) GPU runtime is sufficient to run generation and training with micro batch size of 8. However, the text generation and training is much slower than on other cloud services, and Colab might terminate the execution in inactivity while running long tasks. -* **[Run on a cloud service via SkyPilot](#run-on-a-cloud-service-via-skypilot)**: If you have a cloud service (Lambda Labs, GCP, AWS, or Azure) account, you can use SkyPilot to run the app on a cloud service. A cloud bucket can be mounted to preserve your data. -* **[Run locally](#run-locally)**: Depends on the hardware you have. - -### Run On Google Colab - -*See [video](https://youtu.be/lByYOMdy9h4) for step-by-step instructions.* - -Open [this Colab Notebook](https://colab.research.google.com/github/zetavg/LLaMA-LoRA-Tuner/blob/main/LLaMA_LoRA.ipynb) and select **Runtime > Run All** (`⌘/Ctrl+F9`). - -You will be prompted to authorize Google Drive access, as Google Drive will be used to store your data. See the "Config"/"Google Drive" section for settings and more info. - -After approximately 5 minutes of running, you will see the public URL in the output of the "Launch"/"Start Gradio UI 🚀" section (like `Running on public URL: https://xxxx.gradio.live`). Open the URL in your browser to use the app. - -### Run on a cloud service via SkyPilot - -After following the [installation guide of SkyPilot](https://skypilot.readthedocs.io/en/latest/getting-started/installation.html), create a `.yaml` to define a task for running the app: - -```yaml -# llm-tuner.yaml - -resources: - accelerators: A10:1 # 1x NVIDIA A10 GPU, about US$ 0.6 / hr on Lambda Cloud. Run `sky show-gpus` for supported GPU types, and `sky show-gpus [GPU_NAME]` for the detailed information of a GPU type. - cloud: lambda # Optional; if left out, SkyPilot will automatically pick the cheapest cloud. - -file_mounts: - # Mount a presisted cloud storage that will be used as the data directory. - # (to store train datasets trained models) - # See https://skypilot.readthedocs.io/en/latest/reference/storage.html for details. - /data: - name: llm-tuner-data # Make sure this name is unique or you own this bucket. If it does not exists, SkyPilot will try to create a bucket with this name. - store: s3 # Could be either of [s3, gcs] - mode: MOUNT - -# Clone the LLaMA-LoRA Tuner repo and install its dependencies. -setup: | - conda create -q python=3.8 -n llm-tuner -y - conda activate llm-tuner - - # Clone the LLaMA-LoRA Tuner repo and install its dependencies - [ ! -d llm_tuner ] && git clone https://github.com/zetavg/LLaMA-LoRA-Tuner.git llm_tuner - echo 'Installing dependencies...' 
- pip install -r llm_tuner/requirements.lock.txt - - # Optional: install wandb to enable logging to Weights & Biases - pip install wandb - - # Optional: patch bitsandbytes to workaround error "libbitsandbytes_cpu.so: undefined symbol: cget_col_row_stats" - BITSANDBYTES_LOCATION="$(pip show bitsandbytes | grep 'Location' | awk '{print $2}')/bitsandbytes" - [ -f "$BITSANDBYTES_LOCATION/libbitsandbytes_cpu.so" ] && [ ! -f "$BITSANDBYTES_LOCATION/libbitsandbytes_cpu.so.bak" ] && [ -f "$BITSANDBYTES_LOCATION/libbitsandbytes_cuda121.so" ] && echo 'Patching bitsandbytes for GPU support...' && mv "$BITSANDBYTES_LOCATION/libbitsandbytes_cpu.so" "$BITSANDBYTES_LOCATION/libbitsandbytes_cpu.so.bak" && cp "$BITSANDBYTES_LOCATION/libbitsandbytes_cuda121.so" "$BITSANDBYTES_LOCATION/libbitsandbytes_cpu.so" - conda install -q cudatoolkit -y - - echo 'Dependencies installed.' - - # Optional: Install and setup Cloudflare Tunnel to expose the app to the internet with a custom domain name - [ -f /data/secrets/cloudflared_tunnel_token.txt ] && echo "Installing Cloudflare" && curl -L --output cloudflared.deb https://github.com/cloudflare/cloudflared/releases/latest/download/cloudflared-linux-amd64.deb && sudo dpkg -i cloudflared.deb && sudo cloudflared service uninstall || : && sudo cloudflared service install "$(cat /data/secrets/cloudflared_tunnel_token.txt | tr -d '\n')" - - # Optional: pre-download models - echo "Pre-downloading base models so that you won't have to wait for long once the app is ready..." - python llm_tuner/download_base_model.py --base_model_names='decapoda-research/llama-7b-hf,nomic-ai/gpt4all-j' - -# Start the app. `hf_access_token`, `wandb_api_key` and `wandb_project` are optional. -run: | - conda activate llm-tuner - python llm_tuner/app.py \ - --data_dir='/data' \ - --hf_access_token="$([ -f /data/secrets/hf_access_token.txt ] && cat /data/secrets/hf_access_token.txt | tr -d '\n')" \ - --wandb_api_key="$([ -f /data/secrets/wandb_api_key.txt ] && cat /data/secrets/wandb_api_key.txt | tr -d '\n')" \ - --wandb_project='llm-tuner' \ - --timezone='Atlantic/Reykjavik' \ - --base_model='decapoda-research/llama-7b-hf' \ - --base_model_choices='decapoda-research/llama-7b-hf,nomic-ai/gpt4all-j,databricks/dolly-v2-7b' \ - --share -``` - -Then launch a cluster to run the task: - -``` -sky launch -c llm-tuner llm-tuner.yaml -``` - -`-c ...` is an optional flag to specify a cluster name. If not specified, SkyPilot will automatically generate one. - -You will see the public URL of the app in the terminal. Open the URL in your browser to use the app. - -Note that exiting `sky launch` will only exit log streaming and will not stop the task. You can use `sky queue --skip-finished` to see the status of running or pending tasks, `sky logs ` connect back to log streaming, and `sky cancel ` to stop a task. - -When you are done, run `sky stop ` to stop the cluster. To terminate a cluster instead, run `sky down `. - -**Remember to stop or shutdown the cluster when you are done to avoid incurring unexpected charges.** Run `sky cost-report` to see the cost of your clusters. - -
- Log into the cloud machine or mount the filesystem of the cloud machine on your local computer - - To log into the cloud machine, run `ssh <cluster_name>`, such as `ssh llm-tuner`. - - If you have `sshfs` installed on your local machine, you can mount the cloud machine's filesystem locally by running a command like the following: - - ```bash - mkdir -p /tmp/llm_tuner_server && umount /tmp/llm_tuner_server || : && sshfs llm-tuner:/ /tmp/llm_tuner_server - ``` -
              - -### Run locally - -
              - Prepare environment with conda - - ```bash - conda create -y python=3.8 -n llm-tuner - conda activate llm-tuner - ``` -
              - -```bash -pip install -r requirements.lock.txt -python app.py --data_dir='./data' --base_model='decapoda-research/llama-7b-hf' --timezone='Atlantic/Reykjavik' --share -``` - -You will see the local and public URLs of the app in the terminal. Open the URL in your browser to use the app. - -For more options, see `python app.py --help`. - -
              - UI development mode - - To test the UI without loading the language model, use the `--ui_dev_mode` flag: - - ```bash - python app.py --data_dir='./data' --base_model='decapoda-research/llama-7b-hf' --share --ui_dev_mode - ``` - - > To use [Gradio Auto-Reloading](https://gradio.app/developing-faster-with-reload-mode/#python-ide-reload), a `config.yaml` file is required since command line arguments are not supported. There's a sample file to start with: `cp config.yaml.sample config.yaml`. Then, just run `gradio app.py`. -
              - - -## Usage - -See [video on YouTube](https://youtu.be/IoEMgouZ5xU). - - -## Acknowledgements - -* https://github.com/tloen/alpaca-lora -* https://github.com/lxe/simple-llama-finetuner -* ... - -TBC diff --git a/spaces/zhang-wei-jian/docker/node_modules/debug/src/common.js b/spaces/zhang-wei-jian/docker/node_modules/debug/src/common.js deleted file mode 100644 index e3291b20faa1a61fa5acff50d84dba10a97cc3b6..0000000000000000000000000000000000000000 --- a/spaces/zhang-wei-jian/docker/node_modules/debug/src/common.js +++ /dev/null @@ -1,274 +0,0 @@ - -/** - * This is the common logic for both the Node.js and web browser - * implementations of `debug()`. - */ - -function setup(env) { - createDebug.debug = createDebug; - createDebug.default = createDebug; - createDebug.coerce = coerce; - createDebug.disable = disable; - createDebug.enable = enable; - createDebug.enabled = enabled; - createDebug.humanize = require('ms'); - createDebug.destroy = destroy; - - Object.keys(env).forEach(key => { - createDebug[key] = env[key]; - }); - - /** - * The currently active debug mode names, and names to skip. - */ - - createDebug.names = []; - createDebug.skips = []; - - /** - * Map of special "%n" handling functions, for the debug "format" argument. - * - * Valid key names are a single, lower or upper-case letter, i.e. "n" and "N". - */ - createDebug.formatters = {}; - - /** - * Selects a color for a debug namespace - * @param {String} namespace The namespace string for the debug instance to be colored - * @return {Number|String} An ANSI color code for the given namespace - * @api private - */ - function selectColor(namespace) { - let hash = 0; - - for (let i = 0; i < namespace.length; i++) { - hash = ((hash << 5) - hash) + namespace.charCodeAt(i); - hash |= 0; // Convert to 32bit integer - } - - return createDebug.colors[Math.abs(hash) % createDebug.colors.length]; - } - createDebug.selectColor = selectColor; - - /** - * Create a debugger with the given `namespace`. - * - * @param {String} namespace - * @return {Function} - * @api public - */ - function createDebug(namespace) { - let prevTime; - let enableOverride = null; - let namespacesCache; - let enabledCache; - - function debug(...args) { - // Disabled? - if (!debug.enabled) { - return; - } - - const self = debug; - - // Set `diff` timestamp - const curr = Number(new Date()); - const ms = curr - (prevTime || curr); - self.diff = ms; - self.prev = prevTime; - self.curr = curr; - prevTime = curr; - - args[0] = createDebug.coerce(args[0]); - - if (typeof args[0] !== 'string') { - // Anything else let's inspect with %O - args.unshift('%O'); - } - - // Apply any `formatters` transformations - let index = 0; - args[0] = args[0].replace(/%([a-zA-Z%])/g, (match, format) => { - // If we encounter an escaped % then don't increase the array index - if (match === '%%') { - return '%'; - } - index++; - const formatter = createDebug.formatters[format]; - if (typeof formatter === 'function') { - const val = args[index]; - match = formatter.call(self, val); - - // Now we need to remove `args[index]` since it's inlined in the `format` - args.splice(index, 1); - index--; - } - return match; - }); - - // Apply env-specific formatting (colors, etc.) 
- createDebug.formatArgs.call(self, args); - - const logFn = self.log || createDebug.log; - logFn.apply(self, args); - } - - debug.namespace = namespace; - debug.useColors = createDebug.useColors(); - debug.color = createDebug.selectColor(namespace); - debug.extend = extend; - debug.destroy = createDebug.destroy; // XXX Temporary. Will be removed in the next major release. - - Object.defineProperty(debug, 'enabled', { - enumerable: true, - configurable: false, - get: () => { - if (enableOverride !== null) { - return enableOverride; - } - if (namespacesCache !== createDebug.namespaces) { - namespacesCache = createDebug.namespaces; - enabledCache = createDebug.enabled(namespace); - } - - return enabledCache; - }, - set: v => { - enableOverride = v; - } - }); - - // Env-specific initialization logic for debug instances - if (typeof createDebug.init === 'function') { - createDebug.init(debug); - } - - return debug; - } - - function extend(namespace, delimiter) { - const newDebug = createDebug(this.namespace + (typeof delimiter === 'undefined' ? ':' : delimiter) + namespace); - newDebug.log = this.log; - return newDebug; - } - - /** - * Enables a debug mode by namespaces. This can include modes - * separated by a colon and wildcards. - * - * @param {String} namespaces - * @api public - */ - function enable(namespaces) { - createDebug.save(namespaces); - createDebug.namespaces = namespaces; - - createDebug.names = []; - createDebug.skips = []; - - let i; - const split = (typeof namespaces === 'string' ? namespaces : '').split(/[\s,]+/); - const len = split.length; - - for (i = 0; i < len; i++) { - if (!split[i]) { - // ignore empty strings - continue; - } - - namespaces = split[i].replace(/\*/g, '.*?'); - - if (namespaces[0] === '-') { - createDebug.skips.push(new RegExp('^' + namespaces.slice(1) + '$')); - } else { - createDebug.names.push(new RegExp('^' + namespaces + '$')); - } - } - } - - /** - * Disable debug output. - * - * @return {String} namespaces - * @api public - */ - function disable() { - const namespaces = [ - ...createDebug.names.map(toNamespace), - ...createDebug.skips.map(toNamespace).map(namespace => '-' + namespace) - ].join(','); - createDebug.enable(''); - return namespaces; - } - - /** - * Returns true if the given mode name is enabled, false otherwise. - * - * @param {String} name - * @return {Boolean} - * @api public - */ - function enabled(name) { - if (name[name.length - 1] === '*') { - return true; - } - - let i; - let len; - - for (i = 0, len = createDebug.skips.length; i < len; i++) { - if (createDebug.skips[i].test(name)) { - return false; - } - } - - for (i = 0, len = createDebug.names.length; i < len; i++) { - if (createDebug.names[i].test(name)) { - return true; - } - } - - return false; - } - - /** - * Convert regexp to namespace - * - * @param {RegExp} regxep - * @return {String} namespace - * @api private - */ - function toNamespace(regexp) { - return regexp.toString() - .substring(2, regexp.toString().length - 2) - .replace(/\.\*\?$/, '*'); - } - - /** - * Coerce `val`. - * - * @param {Mixed} val - * @return {Mixed} - * @api private - */ - function coerce(val) { - if (val instanceof Error) { - return val.stack || val.message; - } - return val; - } - - /** - * XXX DO NOT USE. This is a temporary stub function. - * XXX It WILL be removed in the next major release. - */ - function destroy() { - console.warn('Instance method `debug.destroy()` is deprecated and no longer does anything. 
It will be removed in the next major version of `debug`.'); - } - - createDebug.enable(createDebug.load()); - - return createDebug; -} - -module.exports = setup; diff --git "a/spaces/zhanghaohui/szu-gpt-academic/crazy_functions/\344\270\213\350\275\275arxiv\350\256\272\346\226\207\347\277\273\350\257\221\346\221\230\350\246\201.py" "b/spaces/zhanghaohui/szu-gpt-academic/crazy_functions/\344\270\213\350\275\275arxiv\350\256\272\346\226\207\347\277\273\350\257\221\346\221\230\350\246\201.py" deleted file mode 100644 index 3da831fd07e361a532777c83bb02cff265b94abd..0000000000000000000000000000000000000000 --- "a/spaces/zhanghaohui/szu-gpt-academic/crazy_functions/\344\270\213\350\275\275arxiv\350\256\272\346\226\207\347\277\273\350\257\221\346\221\230\350\246\201.py" +++ /dev/null @@ -1,194 +0,0 @@ -from toolbox import update_ui -from toolbox import CatchException, report_execption, write_results_to_file, get_conf -import re, requests, unicodedata, os -from .crazy_utils import request_gpt_model_in_new_thread_with_ui_alive -def download_arxiv_(url_pdf): - if 'arxiv.org' not in url_pdf: - if ('.' in url_pdf) and ('/' not in url_pdf): - new_url = 'https://arxiv.org/abs/'+url_pdf - print('下载编号:', url_pdf, '自动定位:', new_url) - # download_arxiv_(new_url) - return download_arxiv_(new_url) - else: - print('不能识别的URL!') - return None - if 'abs' in url_pdf: - url_pdf = url_pdf.replace('abs', 'pdf') - url_pdf = url_pdf + '.pdf' - - url_abs = url_pdf.replace('.pdf', '').replace('pdf', 'abs') - title, other_info = get_name(_url_=url_abs) - - paper_id = title.split()[0] # '[1712.00559]' - if '2' in other_info['year']: - title = other_info['year'] + ' ' + title - - known_conf = ['NeurIPS', 'NIPS', 'Nature', 'Science', 'ICLR', 'AAAI'] - for k in known_conf: - if k in other_info['comment']: - title = k + ' ' + title - - download_dir = './gpt_log/arxiv/' - os.makedirs(download_dir, exist_ok=True) - - title_str = title.replace('?', '?')\ - .replace(':', ':')\ - .replace('\"', '“')\ - .replace('\n', '')\ - .replace(' ', ' ')\ - .replace(' ', ' ') - - requests_pdf_url = url_pdf - file_path = download_dir+title_str - # if os.path.exists(file_path): - # print('返回缓存文件') - # return './gpt_log/arxiv/'+title_str - - print('下载中') - proxies, = get_conf('proxies') - r = requests.get(requests_pdf_url, proxies=proxies) - with open(file_path, 'wb+') as f: - f.write(r.content) - print('下载完成') - - # print('输出下载命令:','aria2c -o \"%s\" %s'%(title_str,url_pdf)) - # subprocess.call('aria2c --all-proxy=\"172.18.116.150:11084\" -o \"%s\" %s'%(download_dir+title_str,url_pdf), shell=True) - - x = "%s %s %s.bib" % (paper_id, other_info['year'], other_info['authors']) - x = x.replace('?', '?')\ - .replace(':', ':')\ - .replace('\"', '“')\ - .replace('\n', '')\ - .replace(' ', ' ')\ - .replace(' ', ' ') - return './gpt_log/arxiv/'+title_str, other_info - - -def get_name(_url_): - import os - from bs4 import BeautifulSoup - print('正在获取文献名!') - print(_url_) - - # arxiv_recall = {} - # if os.path.exists('./arxiv_recall.pkl'): - # with open('./arxiv_recall.pkl', 'rb') as f: - # arxiv_recall = pickle.load(f) - - # if _url_ in arxiv_recall: - # print('在缓存中') - # return arxiv_recall[_url_] - - proxies, = get_conf('proxies') - res = requests.get(_url_, proxies=proxies) - - bs = BeautifulSoup(res.text, 'html.parser') - other_details = {} - - # get year - try: - year = bs.find_all(class_='dateline')[0].text - year = re.search(r'(\d{4})', year, re.M | re.I).group(1) - other_details['year'] = year - abstract = bs.find_all(class_='abstract 
mathjax')[0].text - other_details['abstract'] = abstract - except: - other_details['year'] = '' - print('年份获取失败') - - # get author - try: - authors = bs.find_all(class_='authors')[0].text - authors = authors.split('Authors:')[1] - other_details['authors'] = authors - except: - other_details['authors'] = '' - print('authors获取失败') - - # get comment - try: - comment = bs.find_all(class_='metatable')[0].text - real_comment = None - for item in comment.replace('\n', ' ').split(' '): - if 'Comments' in item: - real_comment = item - if real_comment is not None: - other_details['comment'] = real_comment - else: - other_details['comment'] = '' - except: - other_details['comment'] = '' - print('年份获取失败') - - title_str = BeautifulSoup( - res.text, 'html.parser').find('title').contents[0] - print('获取成功:', title_str) - # arxiv_recall[_url_] = (title_str+'.pdf', other_details) - # with open('./arxiv_recall.pkl', 'wb') as f: - # pickle.dump(arxiv_recall, f) - - return title_str+'.pdf', other_details - - - -@CatchException -def 下载arxiv论文并翻译摘要(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port): - - CRAZY_FUNCTION_INFO = "下载arxiv论文并翻译摘要,函数插件作者[binary-husky]。正在提取摘要并下载PDF文档……" - import glob - import os - - # 基本信息:功能、贡献者 - chatbot.append(["函数插件功能?", CRAZY_FUNCTION_INFO]) - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 - - # 尝试导入依赖,如果缺少依赖,则给出安装建议 - try: - import pdfminer, bs4 - except: - report_execption(chatbot, history, - a = f"解析项目: {txt}", - b = f"导入软件依赖失败。使用该模块需要额外依赖,安装方法```pip install --upgrade pdfminer beautifulsoup4```。") - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 - return - - # 清空历史,以免输入溢出 - history = [] - - # 提取摘要,下载PDF文档 - try: - pdf_path, info = download_arxiv_(txt) - except: - report_execption(chatbot, history, - a = f"解析项目: {txt}", - b = f"下载pdf文件未成功") - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 - return - - # 翻译摘要等 - i_say = f"请你阅读以下学术论文相关的材料,提取摘要,翻译为中文。材料如下:{str(info)}" - i_say_show_user = f'请你阅读以下学术论文相关的材料,提取摘要,翻译为中文。论文:{pdf_path}' - chatbot.append((i_say_show_user, "[Local Message] waiting gpt response.")) - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 - msg = '正常' - # ** gpt request ** - # 单线,获取文章meta信息 - gpt_say = yield from request_gpt_model_in_new_thread_with_ui_alive( - inputs=i_say, - inputs_show_user=i_say_show_user, - llm_kwargs=llm_kwargs, - chatbot=chatbot, history=[], - sys_prompt="Your job is to collect information from materials and translate to Chinese。", - ) - - chatbot[-1] = (i_say_show_user, gpt_say) - history.append(i_say_show_user); history.append(gpt_say) - yield from update_ui(chatbot=chatbot, history=history, msg=msg) # 刷新界面 - # 写入文件 - import shutil - # 重置文件的创建时间 - shutil.copyfile(pdf_path, f'./gpt_log/{os.path.basename(pdf_path)}'); os.remove(pdf_path) - res = write_results_to_file(history) - chatbot.append(("完成了吗?", res + "\n\nPDF文件也已经下载")) - yield from update_ui(chatbot=chatbot, history=history, msg=msg) # 刷新界面 - diff --git a/spaces/zhangyd/bingo/src/components/welcome-screen.tsx b/spaces/zhangyd/bingo/src/components/welcome-screen.tsx deleted file mode 100644 index f7449fcbb6c621875e235db98f2790bf7894fb0a..0000000000000000000000000000000000000000 --- a/spaces/zhangyd/bingo/src/components/welcome-screen.tsx +++ /dev/null @@ -1,34 +0,0 @@ -import { useBing } from '@/lib/hooks/use-bing' - -const exampleMessages = [ - { - heading: '🧐 提出复杂问题', - message: `我可以为我挑剔的只吃橙色食物的孩子做什么饭?` - }, - { - heading: '🙌 获取更好的答案', - message: '销量最高的 3 种宠物吸尘器有哪些优点和缺点?' 
- }, - { - heading: '🎨 获得创意灵感', - message: `以海盗的口吻写一首关于外太空鳄鱼的俳句` - } -] - -export function WelcomeScreen({ setInput }: Pick<ReturnType<typeof useBing>, 'setInput'>) { - return ( -
              - {exampleMessages.map(example => ( - - ))} -
              - ) -} diff --git a/spaces/zwhe99/MAPS-mt/README.md b/spaces/zwhe99/MAPS-mt/README.md deleted file mode 100644 index 3968d2091d005ae0d4698765bd9c5bcfeabb4dd8..0000000000000000000000000000000000000000 --- a/spaces/zwhe99/MAPS-mt/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: MAPS Mt -emoji: 📚 -colorFrom: green -colorTo: red -sdk: gradio -sdk_version: 3.34.0 -app_file: app.py -pinned: true -python_version: 3.9.13 ---- -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/zxy666/bingo-chatai666/src/components/tailwind-indicator.tsx b/spaces/zxy666/bingo-chatai666/src/components/tailwind-indicator.tsx deleted file mode 100644 index f2a1291213dd67055fcebe67fab574c8441338df..0000000000000000000000000000000000000000 --- a/spaces/zxy666/bingo-chatai666/src/components/tailwind-indicator.tsx +++ /dev/null @@ -1,14 +0,0 @@ -export function TailwindIndicator() { - if (process.env.NODE_ENV === 'production') return null - - return ( -
<div className="fixed bottom-1 left-1 z-50 flex h-6 w-6 items-center justify-center rounded-full bg-gray-800 p-3 font-mono text-xs text-white">
-      <div className="block sm:hidden">xs</div>
-      <div className="hidden sm:block md:hidden">sm</div>
-      <div className="hidden md:block lg:hidden">md</div>
-      <div className="hidden lg:block xl:hidden">lg</div>
-      <div className="hidden xl:block 2xl:hidden">xl</div>
-      <div className="hidden 2xl:block">2xl</div>
-    </div>
-  )
-}
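
The `TailwindIndicator` above is a development helper of the kind shipped with shadcn/ui templates: it shows the active Tailwind breakpoint and returns `null` in production. Such a component is typically mounted once in the app's root layout. A minimal usage sketch, assuming a Next.js app-router project (the `app/layout.tsx` location and the `@/components/tailwind-indicator` import alias are assumptions for illustration):

```tsx
// app/layout.tsx (illustrative sketch; file path and import alias are assumptions)
import type { ReactNode } from 'react'
import { TailwindIndicator } from '@/components/tailwind-indicator'

export default function RootLayout({ children }: { children: ReactNode }) {
  return (
    <html lang="en">
      <body>
        {children}
        {/* Shows xs/sm/md/lg/xl/2xl during development; renders nothing when NODE_ENV === 'production' */}
        <TailwindIndicator />
      </body>
    </html>
  )
}
```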