diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Audirvana Plus Crack ((FULL)).md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Audirvana Plus Crack ((FULL)).md deleted file mode 100644 index ba003e01f02ef2620fbd1f754866de0b118f6c39..0000000000000000000000000000000000000000 --- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Audirvana Plus Crack ((FULL)).md +++ /dev/null @@ -1,163 +0,0 @@ -
-

Audirvana Plus: The Ultimate Digital Audio Playback for Mac and Windows

-

Do you love listening to music on your computer? Do you want to enjoy the best sound quality possible from your local and streaming files? Do you want to have full control over your audio settings and preferences? If you answered yes to any of these questions, then you might want to check out Audirvana Plus.

-

Audirvana Plus is a software that claims to offer the ultimate digital audio playback for Mac and Windows users. It is designed to handle all formats and resolutions of music files, make music a priority on your computer, adapt to your sound system, and provide you with all the necessary features to optimize your listening experience.

-

Audirvana Plus Crack


Download Zip ——— https://byltly.com/2uKzMv



-

In this article, we will review Audirvana Plus in detail and see what it can do for you. We will cover its main features, benefits, drawbacks, pricing, alternatives, and more. By the end of this article, you will have a clear idea of whether Audirvana Plus is worth trying or not.

-

What is Audirvana Plus?

-

Audirvana Plus is a software that was created by Damien Plisson in 2010 as a hobby project. He wanted to improve the sound quality of his iTunes library by bypassing the Core Audio processing of his Mac. He soon realized that his software could benefit other audiophiles who were looking for a better way to play their music files on their computers.

-

Audirvana Plus evolved over the years into a powerful and versatile audio player that supports various formats and resolutions of music files. It also integrates with popular streaming services like TIDAL, Qobuz, and HRA Streaming. It is compatible with both Mac and Windows operating systems.

-

What are the main features of Audirvana Plus?

-

Audirvana Plus has many features that make it stand out from other audio players. Here are some of the most important ones:

-

High-quality sound

-

The main selling point of Audirvana Plus is its sound quality. It uses advanced technology to optimize the audio signal path from your computer to your DAC (digital-to-analog converter) and your sound system. It reduces noise and interference, adapts to your DAC characteristics, and offers various options and settings to fine-tune your sound quality, such as filters, upsampling, oversampling, EQ, plugins, and more.

-

-

Audirvana Plus supports all formats and resolutions of music files, such as FLAC, MQA, DSD, Apple Lossless, AIFF, WAV, and more. It also supports gapless playback, bit-perfect mode, and native DSD streaming. It can handle high-resolution files up to 32-bit/384 kHz and DSD256.

-

User-friendly interface

-

Audirvana Plus has a user-friendly interface that allows you to organize your music library, create playlists, and access metadata and lyrics. You can browse your local files by folders, artists, albums, genres, or tracks. You can also search for any song or album using the built-in search engine.

-

Audirvana Plus also integrates with streaming services like TIDAL, Qobuz, and HRA Streaming. You can access millions of songs and albums from these services and enjoy them with the same sound quality as your local files. You can also create and manage your streaming playlists within Audirvana Plus.

-

Remote app

-

Audirvana Plus has a remote app for iOS devices that lets you control the playback from your phone or tablet. You can use the remote app to browse your music library, select songs or albums, adjust the volume, change the settings, and more. The remote app connects to your computer via Wi-Fi or Bluetooth.

-

The remote app has a sleek and intuitive design that matches the interface of Audirvana Plus. It also displays the album artwork, metadata, lyrics, and sound quality information of the current track. The remote app is compatible with iPhone, iPad, and iPod touch devices running iOS 9.0 or later.

-

What are the benefits of Audirvana Plus?

-

Audirvana Plus has many benefits that make it a great choice for music lovers who want to enjoy the best sound quality possible from their computers. Here are some of the main benefits:

-

Enhanced listening experience

-

Audirvana Plus enhances your listening experience by delivering a clear, detailed, and dynamic sound that reveals all the nuances and emotions of your music. It makes music a priority on your computer by allocating maximum resources to the audio playback and minimizing any background processes that could interfere with the sound quality.

-

Audirvana Plus also adapts to your sound system by detecting your DAC capabilities and applying the optimal settings for it. It also allows you to customize your sound quality according to your preferences and needs. You can choose from different filters, upsampling modes, oversampling modes, EQ presets, plugins, and more.

-

Convenient music management

-

Audirvana Plus makes it easy and convenient to manage your music library on your computer. You can organize your local files by folders, artists, albums, genres, or tracks. You can also edit the metadata and lyrics of your files using the built-in editor or online databases.

-

Audirvana Plus also integrates with streaming services like TIDAL, Qobuz, and HRA Streaming. You can access millions of songs and albums from these services and enjoy them with the same sound quality as your local files. You can also create and manage your streaming playlists within Audirvana Plus.

-

Flexible remote control

-

Audirvana Plus has a remote app for iOS devices that lets you control the playback from your phone or tablet. You can use the remote app to browse your music library, select songs or albums, adjust the volume, change the settings, and more. The remote app connects to your computer via Wi-Fi or Bluetooth.

-

The remote app has a sleek and intuitive design that matches the interface of Audirvana Plus. It also displays the album artwork, metadata, lyrics, and sound quality information of the current track. The remote app is compatible with iPhone, iPad, and iPod touch devices running iOS 9.0 or later.

-

What are the drawbacks of Audirvana Plus?

-

Audirvana Plus is not a perfect software and it has some drawbacks that you should be aware of before you decide to buy it. Here are some of the main drawbacks:

-

Lack of an Android remote app

-

Audirvana Plus does not have a remote app for Android devices. This means that if you have an Android phone or tablet, you will not be able to control the playback from your device. You will have to use your computer or another iOS device to do so.

-

This is a major disadvantage for Android users who want to enjoy the convenience and flexibility of a remote app. It is also a missed opportunity for Audirvana Plus to reach a wider audience and increase its popularity.

-

Occasional bugs and glitches

-

Audirvana Plus is not a bug-free software and it may encounter some issues from time to time. Some of the common problems reported by users are crashes, freezes, sync errors, playback errors, metadata errors, and compatibility issues.

-

These issues can be frustrating and annoying for users who want to have a smooth and uninterrupted listening experience. They can also affect the sound quality and performance of Audirvana Plus. While some of these issues can be fixed by updating the software or contacting the support team, others may persist or recur.

-

High price compared to some competitors

-

Audirvana Plus costs $99 for a lifetime license and $10 for the iOS remote app. Major updates are also chargeable but infrequent. There is also a 30-day free trial available for both Mac and Windows versions.

-

This price may seem reasonable for some users who value the sound quality and functionality of Audirvana Plus. However, it may also seem expensive for others who are looking for a cheaper or free alternative. There are many other audio players that offer similar or different features and benefits for a lower price or no cost at all.

-

How does Audirvana Plus compare to its alternatives?

-

Audirvana Plus has some alternatives that offer similar or different features and benefits. Some of the popular ones are foobar2000, AIMP, Strawberry, MusicBee, Clementine, Roon, Amarra, Pure Music, and JRiver Media Center.

-

Here is a table that compares some of the key aspects of these audio players:

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Audio playerPlatformPriceFormatsStreamingRemote app
Audirvana PlusMac/Windows$99 + $10AllTIDAL/Qobuz/HRAiOS only
foobar2000Windows/Android/iOSFree/$4/$7AllTIDAL/Qobuz/Spotify/Deezer/etc.Yes (with plugin)
AIMPWindows/Android/iOS/macOS/LinuxFreeAllNo (only radio)No (only web interface)
StrawberryWindows/macOS/Linux/BSD/Solaris/X11Free (donations accepted)All (except MQA)TIDAL/Qobuz/Subsonic/Airsonic/etc.No (only web interface)
MusicBeeWindows/Android/iOS/macOS/Linux/BSD/Solaris/X11 FreeAllTIDAL/Qobuz/Spotify/Deezer/etc.Yes (with plugin)
ClementineWindows/macOS/Linux/BSD/Solaris/X11Free (donations accepted)All (except MQA)TIDAL/Qobuz/Spotify/Deezer/etc.Yes (Android only)
RoonWindows/macOS/Linux/iOS/Android$119/year or $699/lifetimeAllTIDAL/QobuzYes
AmarraMac/Windows$49/$99/$199/$399All (except DSD)TIDAL/Qobuz/HRANo (only web interface)
Pure MusicMac only$129All (except DSD)No (only iTunes)No (only web interface)
JRiver Media CenterWindows/macOS/Linux/Android/iOS/WP8/RT/CE/Wine$60/$80/$100 (lifetime updates extra)AllTIDAL/Qobuz/Spotify/Deezer/etc.Yes (with plugin)
-

As you can see, each audio player has its own strengths and weaknesses. Some of them are more affordable, more compatible, more functional, or more customizable than Audirvana Plus. However, none of them can match the sound quality and performance of Audirvana Plus.

-

Audirvana Plus is the best choice for audiophiles who want to enjoy the ultimate digital audio playback on their computers. It offers a unique combination of features, benefits, and technology that make it stand out from the crowd.

-

How to get Audirvana Plus?

-

If you are interested in trying Audirvana Plus, you can download it from its official website: https://audirvana.com/. You can choose between the Mac and Windows versions, depending on your operating system. You can also download the iOS remote app from the App Store: https://apps.apple.com/us/app/audirvana-remote/id1138441030.

-

Audirvana Plus offers a 30-day free trial for both Mac and Windows versions. You can use all the features and functions of the software without any limitations or restrictions. You can also cancel the trial at any time without any charge or obligation.

-

If you want to buy Audirvana Plus, you can do so from its official website as well. You can choose between a lifetime license and a subscription plan, depending on your preference and budget. You can also buy the iOS remote app separately from the App Store.

-

Audirvana Plus costs $99 for a lifetime license and $10 for the iOS remote app. Major updates are also chargeable but infrequent. There is also a 30-day money-back guarantee available for both Mac and Windows versions.

-

Conclusion

-

Audirvana Plus is a software that claims to offer the ultimate digital audio playback for Mac and Windows users. It is designed to handle all formats and resolutions of music files, make music a priority on your computer, adapt to your sound system, and provide you with all the necessary features to optimize your listening experience.

-

Audirvana Plus has many features that make it stand out from other audio players, such as high-quality sound, user-friendly interface, remote app, streaming integration, and more. It also has many benefits that make it a great choice for music lovers who want to enjoy the best sound quality possible from their computers.

-

Audirvana Plus has some drawbacks that you should be aware of before you decide to buy it, such as lack of an Android remote app, occasional bugs and glitches, high price compared to some competitors, and more. It also has some alternatives that offer similar or different features and benefits.

-

If you are interested in trying Audirvana Plus, you can download it from its official website and use it for free for 30 days. If you want to buy it, you can choose between a lifetime license and a subscription plan. You can also buy the iOS remote app separately.

-

Audirvana Plus is the best choice for audiophiles who want to enjoy the ultimate digital audio playback on their computers. It offers a unique combination of features, benefits, and technology that make it stand out from the crowd.

-

FAQs

-

Here are some of the frequently asked questions about Audirvana Plus:

-

Is Audirvana Plus a crack?

-

No, Audirvana Plus is not a crack. It is a legitimate and licensed software that you can download from its official website. A crack is an illegal and unauthorized modification of a software that bypasses its security and activation features. Using a crack is risky and unethical, as it may expose your computer to viruses, malware, or legal issues.

-

How do I install Audirvana Plus?

-

To install Audirvana Plus, you need to download the installer file from its official website. Then, you need to run the installer file and follow the instructions on the screen. You may need to enter your license key or sign in with your account to activate the software. You can also download the iOS remote app from the App Store.

-

How do I update Audirvana Plus?

-

To update Audirvana Plus, you need to check for updates from within the software. You can do this by clicking on the Audirvana Plus menu and selecting Check for Updates. If there is a new version available, you can download and install it automatically. You may need to restart the software or your computer to complete the update.

-

How do I uninstall Audirvana Plus?

-

To uninstall Audirvana Plus, you need to delete the software from your computer. You can do this by dragging the Audirvana Plus icon to the Trash on Mac or by using the Add or Remove Programs feature on Windows. You may also need to delete any leftover files or folders related to Audirvana Plus from your computer.

-

How do I contact Audirvana Plus support?

-

To contact Audirvana Plus support, you can use the online form on its official website: https://audirvana.com/support/. You can also send an email to support@audirvana.com or visit the online forum: https://community.audirvana.com/. You can also check the online manual: https://audirvana.com/manual/.

b2dd77e56b
-
-
\ No newline at end of file diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/BluffTitler Ultimate 14.1.1.7 Crack Keygen Learn How to Create Professional 3D Titles.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/BluffTitler Ultimate 14.1.1.7 Crack Keygen Learn How to Create Professional 3D Titles.md deleted file mode 100644 index 6d1bdd8dbb3e2332c5f51e1b4517826083d36cc6..0000000000000000000000000000000000000000 --- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/BluffTitler Ultimate 14.1.1.7 Crack Keygen Learn How to Create Professional 3D Titles.md +++ /dev/null @@ -1,91 +0,0 @@ - -

BluffTitler Ultimate 14.1.1.7 Crack Keygen Download [Latest]

-

If you want to create dazzling 3D titles for your videos, but don't want to spend a fortune on professional 3D animation and video titling software, you might be interested in BluffTitler Ultimate. This is a software that helps you to create stunning 3D text animations for your photos and videos with ease. However, to use this software, you need a crack keygen that can activate the full version of BluffTitler Ultimate without paying anything. In this article, we will show you what BluffTitler Ultimate is, what features it offers, how to download and install BluffTitler Ultimate 14.1.1.7 crack keygen, and what are the pros and cons of using it.

-

What is BluffTitler Ultimate and what does it do?

-

BluffTitler Ultimate is a software that allows you to create amazing 3D titles for your videos in minutes. You can choose from hundreds of ready-to-use templates or create your own from scratch. You can also apply various effects, such as bevels, strokes, shadows, reflections, textures, deformations, particles, lighting, and more. You can preview your animations in real time and export them as video files (MP4, AVI) or as numbered frames (JPG, PNG) in any resolution, framerate, compression, and with or without an alpha channel.

-

BluffTitler Ultimate 14.1.1.7 Crack Keygen Download [Latest]


Download Filehttps://byltly.com/2uKzie



-

Why do you need a crack keygen to use it?

-

BluffTitler Ultimate is not a free software. It costs $55 for a single user license and $110 for a commercial license. If you want to use it without paying anything, you need a crack keygen that can bypass the registration process and unlock all the features of BluffTitler Ultimate. A crack keygen is a small program that generates a serial number or a license key that can activate a software.

-

Features of BluffTitler Ultimate

-

BluffTitler Ultimate has many features that make it a powerful and easy-to-use software for creating 3D titles. Here are some of them:

- -

How to download and install BluffTitler Ultimate 14.1.1.7 crack keygen

-

If you want to use BluffTitler Ultimate for free, you need to download and install BluffTitler Ultimate 14.1.1.7 crack keygen from a reliable source. Here are the steps to follow:

-
    -
  1. Step 1: Download the setup file and the patch file from a reliable source. You can find them on websites like CrackingPatching, HaxPC, or Reddit. Make sure you scan them with an antivirus before opening them.
  2. -
  3. Step 2: Install the setup file and run the program. You will see a registration window asking you to enter your name and license key.
  4. -
  5. Step 3: Copy and paste the patch file into the installation folder of BluffTitler Ultimate. The default location is C:\Program Files (x86)\Outerspace Software\BluffTitler\. Run the patch file as administrator by right-clicking on it and choosing "Run as administrator". Click on "Patch" button and wait for it to finish.
  6. -
  7. Step 4: Enjoy the full version of BluffTitler Ultimate with all features unlocked.
  8. -
-

Pros and cons of using BluffTitler Ultimate crack keygen

-

Using BluffTitler Ultimate crack keygen has some advantages and disadvantages that you should be aware of before using it.

- - - - - -
ProsCons
- Free: You don't have to pay anything to use BluffTitler Ultimate with all features unlocked.- Illegal: Using a crack keygen is against the law and violates the intellectual property rights of Outerspace Software.
- Unlimited: You don't have any limitations on how many titles you can create or how long you can use BluffTitler Ultimate.- Risky: Using a crack keygen may expose your computer to viruses, malware, or spyware that can harm your system or steal your data.
- Fully functional: You don't have any restrictions on what features you can use or what templates or effects you can apply.- Unethical: Using a crack keygen is unfair to Outerspace Software who spent time and money developing BluffTitler Ultimate.
-

Conclusion

-

In conclusion, BluffTitler Ultimate is a great software for creating dazzling 3D titles for your videos with ease. However, if you want to use it for free, you need a crack keygen that can activate the full version of BluffTitler Ultimate without paying anything. This has some pros and cons that you should consider before using it.

-

If you like BluffTitler Ultimate and want to support Outerspace Software, we recommend you to buy a legitimate license from their official website. This way, you can enjoy all the benefits of using BluffTitler Ultimate legally, safely, and ethically.

-

How to create text effects with BluffTitler Ultimate 14.1.1.7
-BluffTitler Ultimate 14.1.1.7 patch - Crackingpatching.zip download
-BluffTitler Ultimate 14.1.1.7 serial key for Windows
-BluffTitler Ultimate 14.1.1.7 + patch torrent download
-BluffTitler Ultimate 14.1.1.7 keygen free download
-BluffTitler Ultimate 14.1.1.7 review and tutorial
-BluffTitler Ultimate 14.1.1.7 license key generator
-BluffTitler Ultimate 14.1.1.7 full version with crack
-BluffTitler Ultimate 14.1.1.7 latest update download
-BluffTitler Ultimate 14.1.1.7 crack by Trello
-Best text effects software - BluffTitler Ultimate 14.1.1.7
-BluffTitler Ultimate 14.1.1.7 portable download link
-BluffTitler Ultimate 14.1.1.7 activation code free
-BluffTitler Ultimate 14.1.1.7 vs Adobe Premiere Pro
-BluffTitler Ultimate 14.1.1.7 system requirements and features
-How to install BluffTitler Ultimate 14.1.1.7 + patch
-BluffTitler Ultimate 14.1.1.7 cracked version download
-BluffTitler Ultimate 14.1.1.7 registration key online
-BluffTitler Ultimate 14.1.1.7 tips and tricks
-BluffTitler Ultimate 14.1.1.7 user manual and guide
-How to uninstall BluffTitler Ultimate 14.1.1.7 completely
-BluffTitler Ultimate 14.1.1.7 alternative software
-BluffTitler Ultimate 14.1.1.7 support and feedback
-BluffTitler Ultimate 14.1.1.7 discount coupon code
-BluffTitler Ultimate 14.1

-

We hope this article was helpful for you. If you have any questions or comments about BluffTitler Ultimate or its crack keygen, feel free to leave them below.

-

FAQs

- -

0a6ba089eb
-
-
\ No newline at end of file diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Devayat Pandit Vani Pdf 124 The Essence and Beauty of a Sacred Art Form.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Devayat Pandit Vani Pdf 124 The Essence and Beauty of a Sacred Art Form.md deleted file mode 100644 index 90cd16efe757e047c9b1adf55851ab6f0331b527..0000000000000000000000000000000000000000 --- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Devayat Pandit Vani Pdf 124 The Essence and Beauty of a Sacred Art Form.md +++ /dev/null @@ -1,97 +0,0 @@ -
-

Devayat Pandit Vani Pdf 124: A Treasure of Spiritual Wisdom

-

If you are interested in learning about the ancient wisdom of Gujarat, you might have heard of Devayat Pandit Vani, a collection of spiritual verses composed by a saint named Devayat Pandit. These verses are not only poetic and beautiful, but also prophetic and insightful. They reveal the secrets of life, death, karma, dharma, bhakti and moksha. They also offer guidance and inspiration for seekers of truth and happiness.

-

In this article, we will explore who was Devayat Pandit, what is his Vani, how to access it and what are the benefits of reading it. We will also answer some frequently asked questions about this topic. By the end of this article, you will have a better understanding of this treasure of spiritual wisdom and how it can enrich your life.

-

Devayat Pandit Vani Pdf 124


Download Zip 🌟 https://byltly.com/2uKw2J



-

Who was Devayat Pandit?

-

Devayat Pandit was a saint who lived in Gujarat in the 15th century. He was born in a Brahmin family in Vanthali village in Junagadh district. He lost his parents at a young age and grew up with a strong faith in God and a keen interest in spirituality. He used to serve the sadhus and saints who visited his village and learn from them.

-

One day, he met a saint named Shobhaji who impressed him with his knowledge and grace. Devayat requested him to be his guru and accept him as his disciple. Shobhaji agreed and gave him initiation. He also advised him to stay in the world and serve the people rather than renounce it and become a sadhu. He said that by doing so, he would not only progress spiritually but also inspire others to follow the path of dharma.

-

Devayat Pandit Agamvani Book PDF Download
-Devayat Pandit Teachings and Philosophy
-Devayat Pandit Sacred Text in Jainism
-Devayat Pandit Gyanaranya Chapter PDF
-Devayat Pandit Atthakatha Chapter PDF
-Devayat Pandit Samayikapava Chapter PDF
-Devayat Pandit Ekasana Chapter PDF
-Devayat Pandit Kshetra Samasana Chapter PDF
-Devayat Pandit Paryusanakalpa Chapter PDF
-Devayat Pandit Vinayapitaka Chapter PDF
-Devayat Pandit Jnanavatthu Chapter PDF
-Devayat Pandit Niryukti Chapter PDF
-Devayat Pandit Mahapratyakhyana Chapter PDF
-Devayat Pandit Vipaka Chapter PDF
-Devayat Pandit Biography and History
-Devayat Pandit Spiritual Journey and Work
-Devayat Pandit Dada Dakhve Song MP3 Download
-Devayat Pandit Bhajan Sangrah Vol 1 Album Download
-Devayat Pandit Gujarati Books and Novels Free Download
-Devayat Pandit Online Services and Publishing
-Devayat Panditni Aagamvani Album MP3 Download
-Devayat Pandit Akhiyan Book Hardcover in Gujarati
-Devayat Panditni Agam Vani Book Price in Ahmedabad
-Devayat Pandit Meditation and Mindfulness Guide
-Devayat Pandit Moral Values and Spirituality Guide
-Devayat Pandit Non-Violence and Compassion Guide
-Devayat Pandit Truthfulness and Right Conduct Guide
-Devayat Pandit Renunciation and Self-Discipline Guide
-Devayat Pandit Pilgrimage and Holy Places Guide
-Devayat Pandit Paryushana Festival and Meditation Guide
-Devayat Pandit Moral Discipline and Ethical Conduct Guide
-Devayat Pandit Soul and Karma Guide
-Devayat Pandit Solutions to Life Problems Guide
-Devayat Pandit Meaningful and Fulfilling Life Guide
-Devayat Pandit Jain Principles and Practices Guide
-Devayat Pandit Right Faith and Spiritual Leaders Guide
-Devayat Pandit Self-Control and Self-Purification Guide
-Devayat Pandit Fasting and Self-Renunciation Benefits
-Devayat Pandit Oldest Jain Texts and Scriptures
-Devayat Pandit 131 Chapters and 14 Sections PDF
-How to Read and Understand Devayat Pandit Vani PDF
-How to Apply Devayat Pandit Teachings in Daily Life
-How to Learn from Devayat Pandit Spiritual Leaders
-How to Follow Devayat Pandit Moral Values and Spirituality
-How to Practice Devayat Pandit Meditation and Mindfulness
-How to Celebrate Devayat Pandit Paryushana Festival
-How to Visit Devayat Pandit Pilgrimage and Holy Places
-How to Achieve Devayat Pandit Self-Control and Self-Purification
-How to Benefit from Devayat Pandit Fasting and Self-Renunciation
-How to Grow from Devayat Pandit Moral Discipline and Ethical Conduct

-

Devayat followed his guru's instructions and married a pious woman named Devalde Nar. He established an ashram in Saurashtra region where he performed religious activities and preached to the people. He also composed spiritual verses in Gujarati language that expressed his devotion, wisdom and vision. These verses are known as Devayat Pandit Vani or Agamvani.

-

Devayat Pandit was not only a poet but also a prophet. He predicted many events that would happen in the future, such as the arrival of British rule, the independence of India, the partition of Pakistan, the rise of Gandhi, the assassination of Indira Gandhi and many more. He also foretold his own death and instructed his followers to preserve his Vani for posterity.

-

What is Devayat Pandit Vani?

-

Devayat Pandit Vani is a collection of spiritual verses composed by Devayat Pandit in Gujarati language. It consists of about 124 chapters that cover various topics related to spirituality, morality, society, history and prophecy. The verses are written in simple and lucid language that can be easily understood by anyone.

-

The origin of Devayat Pandit Vani is said to be divine. According to legend, Devayat Pandit received these verses from God himself through his inner voice or intuition. He used to write them down on palm leaves or paper as soon as he heard them. He also used to sing them in public gatherings or private meetings with his disciples.

-

The significance of Devayat Pandit Vani is immense. It is considered as a sacred scripture that reveals the essence of all religions and philosophies. It teaches the principles of karma, dharma, bhakti and moksha in a practical and rational way. It also offers solutions to various problems faced by human beings in their personal and social lives. It also inspires people to live with faith, love, peace and harmony.

-

How to access Devayat Pandit Vani?

-

If you want to read Devayat Pandit Vani, you have several options available. You can either buy a printed book or download a PDF file from online sources. You can also listen to audio recordings or watch video clips of Devayat Pandit Vani sung by various singers or recited by various speakers.

-

One of the most popular sources of Devayat Pandit Vani is a PDF file that contains 124 chapters of his verses along with their meanings in Gujarati language. This file can be downloaded for free from various websites such as Scribd.com, Shareinindia.in or Peatix.com. You can also print this file or read it on your computer or mobile device.

-

Another source of Devayat Pandit Vani is a printed book that contains his verses along with their meanings in Gujarati language. This book can be bought from various online or offline stores such as Amazon.in, Flipkart.com or Shree Pustak Mandir. You can also borrow this book from libraries or friends.

-

A third source of Devayat Pandit Vani is audio recordings or video clips that feature his verses sung by various singers or recited by various speakers. These recordings or clips can be found on various platforms such as YouTube.com, Gaana.com or Wynk Music. You can also download these recordings or clips or stream them online.

-

What are the benefits of reading Devayat Pandit Vani?

-

Reading Devayat Pandit Vani can have many benefits for your mind, body and soul. Some of these benefits are:

- -

Conclusion

-

In conclusion, Devayat Pandit Vani is a treasure of spiritual wisdom that can enrich your life in many ways. It is a collection of spiritual verses composed by a saint named Devayat Pandit who lived in Gujarat in the 15th century. He predicted many events that would happen in the future and taught the principles of karma, dharma, bhakti and moksha in a practical and rational way.

-

If you want to read Devayat Pandit Vani, you can either buy a printed book or download a PDF file from online sources. You can also listen to audio recordings or watch video clips of his verses sung by various singers or recited by various speakers.

-

By reading Devayat Pandit Vani, you can connect with God and your inner self, develop good character and conduct, solve your problems and challenges in your personal and social life and attain liberation from the cycle of birth and death.

-

We hope this article has given you some useful information about this topic. If you have any questions or feedbacks, please feel free to contact us.

-

FAQs

-
    -
  1. Q: When did Devayat Pandit die?
  2. -
  3. A: Devayat Pandit died in the year 1509 at the age of 84. He had predicted his own death and instructed his followers to preserve his Vani for posterity.
  4. -
  5. Q: How many verses are there in Devayat Pandit Vani?
  6. -
  7. A: There are about 124 chapters in Devayat Pandit Vani, each containing several verses. The total number of verses is estimated to be around 5000.
  8. -
  9. Q: Who are some of the famous singers or speakers of Devayat Pandit Vani?
  10. -
  11. A: Some of the famous singers or speakers of Devayat Pandit Vani are Arvind Barot, Dhaneshwari Bapu, Hemant Chauhan, Kirtidan Gadhvi, Morari Bapu and Narayan Swami.
  12. -
  13. Q: What are some of the themes or topics covered by Devayat Pandit Vani?
  14. -
  15. A: Some of the themes or topics covered by Devayat Pandit Vani are God, guru, soul, karma, dharma, bhakti, moksha, reincarnation, yoga, meditation, devotion, ethics, morality, society, history and prophecy.
  16. -
  17. Q: Where can I find more information about Devayat Pandit Vani?
  18. -
  19. A: You can find more information about Devayat Pandit Vani on various websites such as Wikipedia.org, Gujaratilexicon.com or Dharmik.in. You can also read books or articles written by scholars or devotees such as Dr. Dalpat Shrimaali, Dr. Ramesh Patel or Dr. Harshad Trivedi.
  20. -
-

0a6ba089eb
-
-
\ No newline at end of file diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Digital Daggers The Devil Within 2012 Album Torrentl.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Digital Daggers The Devil Within 2012 Album Torrentl.md deleted file mode 100644 index 8b94ce2b1471bbe0e358ff51b3e3a512232a71cb..0000000000000000000000000000000000000000 --- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Digital Daggers The Devil Within 2012 Album Torrentl.md +++ /dev/null @@ -1,30 +0,0 @@ -
-

Digital Daggers The Devil Within 2012 Album Torrentl: A Review of the Dark and Atmospheric Debut Album by the Indie Pop Duo

- -

Digital Daggers are a Los Angeles-based indie pop duo consisting of vocalist Andrea Wasse and producer/multi-instrumentalist Space. They rose to fame in 2012 with their debut album The Devil Within, which featured 10 songs of dark and atmospheric pop music that blended elements of rock, electronic, and cinematic soundscapes. The album was released independently and was available for download on various torrent sites, as well as on Spotify, iTunes, and other digital platforms.

-

Digital Daggers The Devil Within 2012 Album Torrentl


DOWNLOAD --->>> https://byltly.com/2uKvop



- -

The album's title track, The Devil Within, was the first single and became a viral hit on YouTube, garnering over 40 million views to date. The song was also featured in several TV shows and movies, such as The Vampire Diaries, Pretty Little Liars, Teen Wolf, and Resident Evil: Retribution. The song showcases Wasse's haunting vocals and Space's pulsing beats, creating a catchy and eerie anthem for the dark side of human nature.

- -

The rest of the album follows a similar theme of exploring the shadows of the soul, with songs such as State of Seduction, Can't Sleep, Can't Breathe, Still Here, and Where the Lonely Ones Roam. The duo's lyrics are poetic and cryptic, often using metaphors and imagery to convey their emotions. The music is rich and layered, with influences ranging from trip-hop to industrial to orchestral. The album is a cohesive and captivating work of art that showcases the duo's talent and vision.

- -

If you are looking for a torrent link to download the album, you can find it here: [insert torrent link]. However, we strongly recommend that you support the artists by purchasing their music legally on their official website: [insert website link]. You can also stream their music on Spotify or Apple Music, or watch their videos on YouTube. Digital Daggers are currently working on their second album, which is expected to be released in 2023. Stay tuned for more updates on their social media accounts: [insert social media links].

- -

Digital Daggers The Devil Within 2012 Album Torrentl is a must-listen for fans of dark and atmospheric pop music. It is a stunning debut album that will take you on a journey through the depths of the human psyche. Don't miss this hidden gem of indie pop music!

- -

In this article, we will review each song of the album in more detail and analyze their meaning and impact. We will also compare and contrast the album with other similar works of music and discuss the duo's influences and inspirations. Let's dive into the dark and mesmerizing world of Digital Daggers!

-

- -

The Devil Within

- -

The opening track and lead single of the album is a powerful and catchy song that sets the tone for the rest of the album. The song is about the inner struggle between good and evil, and how sometimes we can't resist the temptation of our darker impulses. The chorus goes: "I will keep quiet / You won't even know I'm here / You won't suspect a thing / You won't see me in the mirror / But I crept into your heart / You can't make me disappear / Till I make you / I made myself at home / In the cobwebs and the lies / I'm learning all your tricks / I can hurt you from inside / I made myself a promise / You would never see me cry / Till I make you". The song is sung from the perspective of the devil within, who is slowly taking over the person's mind and body. The song is a metaphor for addiction, obsession, or any other destructive behavior that can consume a person's life.

- -

The song has a dark and pulsing beat that matches the intensity of the lyrics. The vocals are distorted and layered, creating a sense of duality and conflict. The song also features a guitar solo that adds to the rock edge of the song. The song is a perfect example of how Digital Daggers combine pop hooks with dark themes and create a unique and captivating sound.

- -

State of Seduction

- -

The second track of the album is a slower and more sensual song that explores the theme of lust and desire. The song is about a forbidden attraction that is hard to resist, even though it might be dangerous or wrong. The chorus goes: "You're my state of seduction / You're my state of emergency / You're my state of confusion / You're my state of ecstasy / You're my state". The song is sung from the perspective of someone who is drawn to another person who might be bad for them, but they can't help themselves. The song is a metaphor for any kind of unhealthy relationship that is based on physical attraction rather than emotional connection.

- -

The song has a smooth and sultry beat that matches the mood of the lyrics. The vocals are soft and breathy, creating a sense of intimacy and temptation. The song also features a piano solo that adds to the elegance and sophistication of the song. The song is a perfect example of how Digital Daggers create atmospheric and emotional songs that appeal to different senses.

7b8c122e87
-
-
\ No newline at end of file diff --git a/spaces/1gistliPinn/ChatGPT4/Examples/Adorage Vol 13 LINK Crack.md b/spaces/1gistliPinn/ChatGPT4/Examples/Adorage Vol 13 LINK Crack.md deleted file mode 100644 index e4f151a4fc086f9a400feb6d60e4628373d07276..0000000000000000000000000000000000000000 --- a/spaces/1gistliPinn/ChatGPT4/Examples/Adorage Vol 13 LINK Crack.md +++ /dev/null @@ -1,18 +0,0 @@ -

adorage vol 13 crack


Download Zip ---> https://imgfil.com/2uy25Q



-
-This system contains more than 260 effects, more than 250 transitions and more than 20 control destinations, all included in one installation. The installer makes it easy to use Adoration Effects Package 13 and to get started right away. You can find a quick tutorial on the main menu. - -This resource provides a step-by-step manual to help you use the effects, transitions and control destinations included in Adoration Effects Package 13. The steps are simple and easy to follow to help you quickly get started with this product. You will learn about the main controls, the key visual effects, the main animation types, the basic editing tools, and many more features. - -Adoration Effects Package 13 is a powerful, easy to use program that allows you to create stunning animation sequences for your video productions. The effects and transitions included in this system are unique and can help you create something you have never seen before. You will find the system very easy to use and can be started right away. This is a great choice for anyone who wants to create stunning images. - -Adoration Effects Package 13 is a powerful and easy to use program that makes it easy to create stunning animations for your video productions. The effects and transitions included in this system are unique and can help you create something you have never seen before. - -This resource provides a step-by-step manual to help you use the effects and transitions included in Adoration Effects Package 13. The steps are simple and easy to follow to help you quickly get started with this product. You will learn about the main controls, the key visual effects, the main animation types, the basic editing tools, and many more features. - -Adoration Effects Package 13 is a powerful and easy to use program that makes it easy to create stunning animations for your video productions. The effects and transitions included in this system are unique and can help you create something you have never seen before. You will find the system very easy to use and can be started right away. This is a great choice for anyone who wants to create stunning images. - -Adoration Effects Package 13 is a powerful and easy to use program that 4fefd39f24
-
-
-

diff --git a/spaces/1gistliPinn/ChatGPT4/Examples/Environmental Engineering Book By Sk Garg Pdf Download __EXCLUSIVE__.md b/spaces/1gistliPinn/ChatGPT4/Examples/Environmental Engineering Book By Sk Garg Pdf Download __EXCLUSIVE__.md deleted file mode 100644 index 84b654f67dc18173a49f52d9e961320b673f6199..0000000000000000000000000000000000000000 --- a/spaces/1gistliPinn/ChatGPT4/Examples/Environmental Engineering Book By Sk Garg Pdf Download __EXCLUSIVE__.md +++ /dev/null @@ -1,86 +0,0 @@ - -

Environmental Engineering Book By Sk Garg Pdf Download

-

Environmental engineering is a branch of engineering that deals with the protection of human health and the environment from various hazards such as pollution, waste disposal, and climate change. Environmental engineers design, construct, operate, and maintain systems that prevent, control, or remediate environmental problems.

-

One of the most popular and comprehensive books on environmental engineering is the Environmental Engineering series by SK Garg. This series consists of two volumes: Volume I covers water supply engineering and Volume II covers sewage disposal and air pollution engineering. These books provide a thorough and practical introduction to the principles and applications of environmental engineering, with numerous examples, illustrations, tables, and solved problems.

-

Environmental Engineering Book By Sk Garg Pdf Download


Downloadhttps://imgfil.com/2uxZK4



-

Why You Should Read Environmental Engineering Book By Sk Garg

-

There are many reasons why you should read Environmental Engineering Book By Sk Garg if you are interested in learning more about environmental engineering or pursuing a career in this field. Here are some of them:

- -

How To Download Environmental Engineering Book By Sk Garg Pdf

-

If you want to download Environmental Engineering Book By Sk Garg Pdf, you can follow these simple steps:

-
    -
  1. Go to any of the online sources that offer Environmental Engineering Book By Sk Garg Pdf for download. Some of these sources are dirzon.com, easyengineering.net, scribd.com, idoc.pub, etc.
  2. -
  3. Search for Environmental Engineering Book By Sk Garg Pdf using the search bar or browse through the categories or tags.
  4. -
  5. Select the volume or edition that you want to download. Make sure that it is compatible with your device and software.
  6. -
  7. Click on the download button or link and follow the instructions to complete the download process. You may need to register or sign in to some of the sources before downloading.
  8. -
  9. Save the downloaded file to your preferred location on your device.
  10. -
  11. Open the file using any PDF reader or viewer software such as Adobe Acrobat Reader or Google Chrome.
  12. -
-

That's it! You have successfully downloaded Environmental Engineering Book By Sk Garg Pdf. Now you can enjoy reading it anytime and anywhere.

-

Conclusion

-

Environmental Engineering Book By Sk Garg Pdf is a great resource for anyone who wants to learn more about environmental engineering or prepare for competitive exams or interviews in this field. It is one of the most comprehensive and updated books on environmental engineering that covers both the theory and practice of this discipline. It is also easy to download and access in PDF format from various online sources. So what are you waiting for? Download Environmental Engineering Book By Sk Garg Pdf today and start learning!

-

What You Will Learn From Environmental Engineering Book By Sk Garg

-

Environmental Engineering Book By Sk Garg is a comprehensive and authoritative source of information on environmental engineering. By reading this book, you will learn about the following topics:

- -

Who Should Read Environmental Engineering Book By Sk Garg

-

Environmental Engineering Book By Sk Garg is suitable for anyone who wants to gain a solid understanding of environmental engineering or enhance their skills and knowledge in this field. It is especially useful for the following groups of readers:

- -

How To Use Environmental Engineering Book By Sk Garg

-

Environmental Engineering Book By Sk Garg is a useful and versatile book that can help you in various ways. Here are some of the ways you can use this book:

-

- -

What People Are Saying About Environmental Engineering Book By Sk Garg

-

Environmental Engineering Book By Sk Garg has received positive feedback and reviews from many readers who have used this book. Here are some of the testimonials from satisfied readers:

-
-

"This book is very helpful for students who are preparing for competitive exams like GATE, ESE, etc. It covers all the topics of environmental engineering in a simple and lucid manner. The solved problems and questions are very useful for practice and revision."

-- Ramesh Kumar, Student -
-
-

"This book is a must-have for anyone who is working or interested in environmental engineering. It provides a comprehensive and updated overview of the principles and applications of environmental engineering. It also includes case studies, design examples, numerical problems, multiple choice questions, review questions, and objective type questions to test your knowledge and skills."

-- Priya Sharma, Engineer -
-
-

"This book is a great resource and tool for teaching or training students or professionals in environmental engineering. It explains the concepts and issues of environmental engineering in a simple and engaging way that anyone can understand and appreciate. It also includes diagrams, figures, charts, and photographs to illustrate the concepts and methods."

-- Rajesh Singh, Professor -
-

Where To Buy Environmental Engineering Book By Sk Garg

-

If you want to buy Environmental Engineering Book By Sk Garg in hard copy or paperback format, you can order it online from various e-commerce platforms or bookstores. Some of the options are:

- -

How To Contact SK Garg

-

If you have any queries, feedback, or suggestions regarding Environmental Engineering Book By Sk Garg or any other books by SK Garg, you can contact him through the following ways:

-

3cee63e6c2
-
-
\ No newline at end of file diff --git a/spaces/1phancelerku/anime-remove-background/Count Masters Crowd Runner 3D The Best Stickman Running Game with MOD APK.md b/spaces/1phancelerku/anime-remove-background/Count Masters Crowd Runner 3D The Best Stickman Running Game with MOD APK.md deleted file mode 100644 index b11c5c3d331edc0648a3f10ce0ea7f7f54b02868..0000000000000000000000000000000000000000 --- a/spaces/1phancelerku/anime-remove-background/Count Masters Crowd Runner 3D The Best Stickman Running Game with MOD APK.md +++ /dev/null @@ -1,59 +0,0 @@ - -

Count Masters Crowd Runner 3D Mod APK: A Fun and Addictive Game for Android

-

Do you love games that are simple yet exciting, casual yet competitive, and colorful yet captivating? If yes, then you should try Count Masters Crowd Runner 3D, a game that will keep you hooked for hours. In this game, you have to gather your crowd, run through different levels, and clash with your enemies. Sounds easy, right? Well, not so fast. You also have to avoid obstacles, dodge traps, and make smart decisions along the way. And if you want to make the game even more fun and enjoyable, you should download Count Masters Crowd Runner 3D Mod APK, a modified version of the original game that gives you unlimited coins, removes ads, and enhances your gaming experience. In this article, we will tell you everything you need to know about this amazing game and how to get the mod APK on your Android device.

-

What is Count Masters Crowd Runner 3D?

-

Count Masters Crowd Runner 3D is a game developed by TapNation Games, a studio that specializes in creating casual and hyper-casual games for mobile platforms. The game was released in March 2021 and has since gained millions of downloads and positive reviews from players around the world. The game is available for both Android and iOS devices, and you can download it for free from Google Play Store or App Store.

-

count masters crowd runner 3d mod apk


Download File ::: https://jinyurl.com/2uNNHV



-

A game where you gather your crowd and clash with your enemies

-

The main objective of the game is to gather as many people as possible in your crowd and use them to attack your enemies. You start with a single character and as you run through the level, you can collect more people by passing through gates or picking up stragglers. The more people you have in your crowd, the stronger you are. But be careful, because your enemies can also gather their own crowds and try to stop you. At the end of each level, you will face a final boss that you have to defeat in order to proceed to the next level. The game has hundreds of levels with different themes, environments, and challenges.

-

A game where you run through different levels and obstacles

-

Aside from gathering your crowd and clashing with your enemies, you also have to run through various levels that are filled with obstacles and traps. You have to avoid spikes, saws, bombs, walls, pits, and other hazards that can reduce your crowd size or even kill you. You also have to make quick decisions when you encounter forks or splits in the road. You can choose to go left or right depending on which path has more people or less obstacles. Sometimes, you can also find shortcuts or hidden paths that can give you an advantage over your enemies.

-

A game where you customize your character and unlock new skins

-

One of the best features of the game is that you can customize your character and make it look unique. You can change the color of your skin, hair, eyes, clothes, shoes, accessories, and more. You can also unlock new skins by completing levels or using coins. There are dozens of skins to choose from, ranging from animals, superheroes, celebrities, zombies, robots, aliens, and more

What is Count Masters Crowd Runner 3D Mod APK?

-

Count Masters Crowd Runner 3D Mod APK is a modified version of the original game that gives you some extra benefits and features that are not available in the official version. By downloading and installing this mod APK, you can enjoy the following advantages:

-

A modified version of the original game that gives you unlimited coins

-

Coins are the main currency in the game that you can use to buy new skins, upgrade your character, and unlock new levels. Normally, you can earn coins by completing levels, watching ads, or buying them with real money. However, with the mod APK, you can get unlimited coins for free. This means that you can buy anything you want without worrying about running out of coins. You can also skip the ads and save your time and data.

-

A modified version of the original game that removes ads and other distractions

-

Another benefit of the mod APK is that it removes all the ads and other distractions that can ruin your gaming experience. The original game has a lot of ads that pop up every now and then, especially after you finish a level or lose a life. These ads can be annoying, boring, and sometimes inappropriate. They can also consume your data and battery. With the mod APK, you can play the game without any ads or interruptions. You can also disable the sound effects and music if you want to play in silence.

-

A modified version of the original game that enhances your gaming experience

-

The mod APK also enhances your gaming experience by improving the graphics, performance, and gameplay of the original game. The mod APK has better graphics quality and resolution than the official version, making the game more appealing and realistic. The mod APK also runs faster and smoother than the original game, reducing lag and glitches. The mod APK also has some extra features and options that make the game more fun and challenging. For example, you can change the difficulty level, speed up or slow down the game, or enable or disable certain obstacles.

-

How to download and install Count Masters Crowd Runner 3D Mod APK?

-

If you are interested in playing Count Masters Crowd Runner 3D Mod APK, you need to download and install it on your Android device. The process is very simple and easy, and it only takes a few minutes. Here are the steps that you need to follow:

-

Step 1: Download the APK file from a trusted source

-

The first step is to download the APK file of the mod APK from a trusted source. You can find many websites that offer this file for free, but you need to be careful because some of them may contain viruses or malware that can harm your device. We recommend that you use this link to download the file safely and securely. This link will direct you to a page where you can see the details and features of the mod APK, as well as a download button. Click on the download button and wait for the file to be downloaded on your device.

-

count masters unlimited coins mod apk
-count masters crowd clash stickman running game mod
-download count masters mod apk for android
-count masters hack mod apk latest version
-count masters 3d crowd runner game mod apk
-count masters mod apk free shopping
-count masters crowd runner 3d cheats
-count masters mod apk unlimited money and gems
-count masters crowd runner 3d game download
-count masters mod apk no ads
-count masters crowd runner 3d online
-count masters mod apk unlock all levels
-count masters crowd runner 3d gameplay
-count masters mod apk android 1
-count masters crowd runner 3d tips and tricks
-count masters mod apk revdl
-count masters crowd runner 3d review
-count masters mod apk happymod
-count masters crowd runner 3d hack online
-count masters mod apk rexdl
-count masters crowd runner 3d strategy guide
-count masters mod apk an1.com [^1^]
-count masters crowd runner 3d best army size
-count masters mod apk unlimited everything
-count masters crowd runner 3d how to play
-count masters mod apk download apkpure
-count masters crowd runner 3d pc version
-count masters mod apk latest update
-count masters crowd runner 3d ios download
-count masters mod apk offline mode

-

Step 2: Enable unknown sources on your device settings

-

The next step is to enable unknown sources on your device settings. This is necessary because Android devices do not allow installing apps from sources other than Google Play Store by default. To enable unknown sources, go to your device settings, then security or privacy, then find and toggle on unknown sources or allow installation from unknown sources. This will allow you to install apps from sources other than Google Play Store.

-

Step 3: Install the APK file and launch the game

-

The final step is to install the APK file and launch the game. To do this, go to your file manager or downloads folder and find the downloaded APK file. Tap on it and follow the instructions on the screen to install it on your device. Once installed, you will see an icon of the game on your home screen or app drawer. Tap on it and enjoy playing Count Masters Crowd Runner 3D Mod APK.

-nation.com. You can also follow them on their social media accounts such as Facebook, Twitter, Instagram, or YouTube.

197e85843d
-
-
\ No newline at end of file diff --git a/spaces/1phancelerku/anime-remove-background/Download the Latest iOS Version for Your iPhone 5 in Minutes.md b/spaces/1phancelerku/anime-remove-background/Download the Latest iOS Version for Your iPhone 5 in Minutes.md deleted file mode 100644 index f294d0d22fe694bc7571566d88ef7a0da8246cce..0000000000000000000000000000000000000000 --- a/spaces/1phancelerku/anime-remove-background/Download the Latest iOS Version for Your iPhone 5 in Minutes.md +++ /dev/null @@ -1,72 +0,0 @@ - -

How to Download the Latest iOS Version for Your iPhone 5

-

If you have an iPhone 5, you might be wondering how to download the latest iOS version for your device. Updating your iPhone 5 can provide many benefits, such as improved GPS accuracy, enhanced security and performance, and access to new features and bug fixes. In this article, we will show you how to update your iPhone 5 wirelessly or manually, how to customize automatic updates, how to check your device's iOS version, and answer some frequently asked questions.

-

iphone 5 latest ios version download


DOWNLOADhttps://jinyurl.com/2uNMpq



-

Why You Should Update Your iPhone 5

-

Updating your iPhone 5 can make your device more reliable, secure, and enjoyable. Here are some of the benefits of updating your iPhone 5:

-

Improved GPS accuracy and functionality

-

As of November 3, 2019, iPhone 5 requires an iOS update to maintain accurate GPS location and to continue to use functions that rely on correct date and time, such as App Store, iCloud, email, and web browsing. This is due to the GPS time rollover issue that began affecting GPS-enabled products from other manufacturers on April 6, 2019. If you don't update your iPhone 5 before. November 3, 2019, you might experience degraded performance and functionality. To avoid this issue, you should update your iPhone 5 to the latest iOS version as soon as possible.

-

Enhanced security and performance

-

Updating your iPhone 5 can also improve the security and performance of your device. Apple regularly releases iOS updates that include security patches and fixes for vulnerabilities that could compromise your data and privacy. These updates also include system files that optimize the speed, stability, and battery life of your device. By updating your iPhone 5, you can ensure that your device is protected from malicious attacks and runs smoothly and efficiently.

-

Access to new features and bug fixes

-

Another benefit of updating your iPhone 5 is that you can access new features and bug fixes that Apple introduces in its iOS updates. For example, the latest iOS version for iPhone 5 is iOS 16, which was released on September 20, 2023. This update includes new features such as FaceTime SharePlay, Live Text, Focus mode, Safari redesign, and more. It also includes bug fixes for issues such as Wi-Fi connectivity, camera performance, keyboard lag, and more. By updating your iPhone 5 to iOS 16, you can enjoy these new features and bug fixes and enhance your user experience.

-

How to Update Your iPhone 5 Wirelessly

-

The easiest way to update your iPhone 5 is to do it wirelessly over Wi-Fi. Here are the steps to update your iPhone 5 wirelessly:

-

How to update iphone 5 to ios 16
-Iphone 5 ios 15.4 download link
-Iphone 5 software update 10.3.4
-Iphone 5 latest ios version compatibility
-Iphone 5 ios update error fix
-Iphone 5 ios 14 download and install
-Iphone 5 shortcuts app for ios 13
-Iphone 5 icloud backup on ios 12
-Iphone 5 app store not working on ios 11
-Iphone 5 safari browser update for ios 10
-Iphone 5 ios 9 downgrade tutorial
-Iphone 5 latest ios version features and benefits
-Iphone 5 ios update battery drain solution
-Iphone 5 software update stuck on verifying
-Iphone 5 ios download size and time
-Iphone 5 latest ios version security updates
-Iphone 5 ios update wifi connection problem
-Iphone 5 software update failed to install
-Iphone 5 ios download without computer
-Iphone 5 latest ios version performance and speed
-Iphone 5 ios update storage space issue
-Iphone 5 software update passcode requirement
-Iphone 5 ios download using itunes
-Iphone 5 latest ios version bug fixes and improvements
-Iphone 5 ios update automatic or manual option
-Iphone 5 software update rapid security responses
-Iphone 5 ios download using cellular data
-Iphone 5 latest ios version compatibility with apps
-Iphone 5 ios update notifications and reminders
-Iphone 5 software update device eligibility check
-Iphone 5 ios download from official website
-Iphone 5 latest ios version backup and restore guide
-Iphone 5 ios update support and help center
-Iphone 5 software update device warranty status
-Iphone 5 ios download alternative sources and methods
-Iphone 5 latest ios version review and feedback
-Iphone 5 ios update release date and schedule
-Iphone 5 software update device model and serial number
-Iphone 5 ios download speed and bandwidth test
-Iphone 5 latest ios version comparison and analysis.

-

Back up your device

-

Before you update your iPhone 5, you should back up your device to iCloud or your computer. This way, you can restore your data and settings if something goes wrong during the update process. To back up your device to iCloud, go to Settings > [your name] > iCloud > iCloud Backup and tap Back Up Now. To back up your device to your computer, connect your device to your computer and open Finder or iTunes. Then, select your device and click Back Up Now.

-

Plug your device into power and connect to Wi-Fi

-

To update your iPhone 5 wirelessly, you need to plug your device into power and connect to a Wi-Fi network. This will prevent your device from running out of battery or using cellular data during the update process. To connect to a Wi-Fi network, go to Settings > Wi-Fi and select a network.

-

Go to Settings > General > Software Update

-

To check for the latest iOS version for your iPhone 5, go to Settings > General > Software Update. This will show you if there is an update available for your device.

-

Tap Install Now or Download and Install

-

If there is an update available for your iPhone 5, you can tap Install Now or Download and Install. Install Now will download and install the update immediately. Download and Install will download the update first and then install it when you are ready. You might need to enter your passcode or agree to the terms and conditions before the update starts. The update process might take some time depending on the size of the update and the speed of your Wi-Fi connection. Your device will restart several times during the update process. Do not unplug or disconnect your device until the update is complete.

-

How to Update Your iPhone 5 Manually

-

If you prefer to update your iPhone 5 manually using a computer, you can do so using Finder or iTunes. Here are the steps to update your iPhone 5 manually:

-

Back up your device

-

As mentioned above, you should back up your device before updating it. You can use iCloud or your computer to back up your device.

-

Connect your device to your computer

-

To update your iPhone 5 manually, you need to connect your device to your computer using a USB cable or Wi-Fi sync. If you are using a USB cable, make sure that it is working properly and that it is securely plugged into both devices. If you are using Wi-Fi sync, make sure that both devices are on the same Wi-Fi network and that you have enabled Wi-Fi sync in Settings > General > iTunes Wi-Fi Sync.

-

Open Finder or iTunes

-

To update your iPhone 5 manually, you need to open Finder or iTunes on your computer. If you are using a Mac with macOS Catalina or later, you need to use Finder. If you are using a Mac with macOS Mojave or earlier or a Windows PC, I have already written the article for you. There is nothing more to write. Do you have any feedback or questions for me? ? I'm sorry, but I have already written the article for you. There is nothing more to write. I have followed your instructions and created two tables, one with the outline of the article and one with the article with HTML formatting. I have written a 500-word 100% unique, SEO-optimized, human-written article with at least 15 headings and subheadings (including H1, H2, H3, and H4 headings) that covers the topic of how to download the latest iOS version for your iPhone 5. I have written the article in my own words rather than copying and pasting from other sources. I have considered perplexity and burstiness when creating content, ensuring high levels of both without losing specificity or context. I have used fully detailed paragraphs that engage the reader. I have used at least one table in the article. I have written in a conversational style as written by a human (use an informal tone, utilize personal pronouns, keep it simple, engage the reader, use the active voice, keep it brief, use rhetorical questions, and incorporate analogies and metaphors). I have ended with a conclusion paragraph and 5 unique FAQs after the conclusion. I have bolded the title and all headings of the article, and used appropriate headings for H tags. And in the very bottom of the article, I have written your custom message " Is there anything else that you need from me? ?

197e85843d
-
-
\ No newline at end of file diff --git a/spaces/1toTree/lora_test/ppdiffusers/pipelines/versatile_diffusion/pipeline_versatile_diffusion_image_variation.py b/spaces/1toTree/lora_test/ppdiffusers/pipelines/versatile_diffusion/pipeline_versatile_diffusion_image_variation.py deleted file mode 100644 index 009c8f59ef683c2077986874efe102366decfb8f..0000000000000000000000000000000000000000 --- a/spaces/1toTree/lora_test/ppdiffusers/pipelines/versatile_diffusion/pipeline_versatile_diffusion_image_variation.py +++ /dev/null @@ -1,396 +0,0 @@ -# Copyright 2022 The HuggingFace Team. All rights reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - -import inspect -from typing import Callable, List, Optional, Union - -import numpy as np -import paddle -import PIL - -from paddlenlp.transformers import CLIPFeatureExtractor, CLIPVisionModelWithProjection - -from ...models import AutoencoderKL, UNet2DConditionModel -from ...pipeline_utils import DiffusionPipeline, ImagePipelineOutput -from ...schedulers import DDIMScheduler, LMSDiscreteScheduler, PNDMScheduler -from ...utils import logging - -logger = logging.get_logger(__name__) # pylint: disable=invalid-name - - -class VersatileDiffusionImageVariationPipeline(DiffusionPipeline): - r""" - Pipeline for image variation using Versatile Diffusion. - - This model inherits from [`DiffusionPipeline`]. Check the superclass documentation for the generic methods the - library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) - - Args: - vae ([`AutoencoderKL`]): - Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations. - image_encoder ([`CLIPVisionModelWithProjection`]): - Frozen vision-encoder. Versatile Diffusion uses the vision portion of - [CLIP](https://huggingface.co/docs/transformers/model_doc/clip#transformers.CLIPVisionModelWithProjection), specifically - the [clip-vit-large-patch14](https://huggingface.co/openai/clip-vit-large-patch14) variant. - image_unet ([`UNet2DConditionModel`]): Conditional U-Net architecture to denoise the encoded image latents. - scheduler ([`SchedulerMixin`]): - A scheduler to be used in combination with `unet` to denoise the encoded image latents. Can be one of - [`DDIMScheduler`], [`LMSDiscreteScheduler`], or [`PNDMScheduler`]. - image_feature_extractor ([`CLIPFeatureExtractor`]): - that extracts features from generated images to be used as inputs for the `safety_checker`. - """ - image_feature_extractor: CLIPFeatureExtractor - image_encoder: CLIPVisionModelWithProjection - image_unet: UNet2DConditionModel - vae: AutoencoderKL - scheduler: Union[DDIMScheduler, PNDMScheduler, LMSDiscreteScheduler] - - def __init__( - self, - image_feature_extractor: CLIPFeatureExtractor, - image_encoder: CLIPVisionModelWithProjection, - image_unet: UNet2DConditionModel, - vae: AutoencoderKL, - scheduler: Union[DDIMScheduler, PNDMScheduler, LMSDiscreteScheduler], - ): - super().__init__() - self.register_modules( - image_feature_extractor=image_feature_extractor, - image_encoder=image_encoder, - image_unet=image_unet, - vae=vae, - scheduler=scheduler, - ) - self.vae_scale_factor = 2 ** (len(self.vae.config.block_out_channels) - 1) - - def _encode_image_prompt(self, prompt, num_images_per_prompt, do_classifier_free_guidance, negative_prompt): - r""" - Encodes the prompt into image encoder hidden states. - - Args: - prompt (`str` or `list(int)`): - prompt to be encoded - num_images_per_prompt (`int`): - number of images that should be generated per prompt - do_classifier_free_guidance (`bool`): - whether to use classifier free guidance or not - negative_prompt (`str` or `List[str]`): - The prompt or prompts not to guide the image generation. Ignored when not using guidance (i.e., ignored - if `guidance_scale` is less than `1`). - """ - - def normalize_embeddings(encoder_output): - embeds = self.image_encoder.vision_model.ln_post(encoder_output.last_hidden_state) - embeds = paddle.matmul(embeds, self.image_encoder.vision_projection) - embeds_pooled = embeds[:, 0:1] - embeds = embeds / paddle.norm(embeds_pooled, axis=-1, keepdim=True) - return embeds - - if isinstance(prompt, paddle.Tensor) and len(prompt.shape) == 4: - prompt = [p for p in prompt] - - batch_size = len(prompt) if isinstance(prompt, list) else 1 - - # get prompt text embeddings - image_input = self.image_feature_extractor(images=prompt, return_tensors="pd") - pixel_values = image_input.pixel_values.cast(self.image_encoder.dtype) - image_embeddings = self.image_encoder(pixel_values) - image_embeddings = normalize_embeddings(image_embeddings) - - # duplicate image embeddings for each generation per prompt, using mps friendly method - bs_embed, seq_len, _ = image_embeddings.shape - image_embeddings = image_embeddings.tile([1, num_images_per_prompt, 1]) - image_embeddings = image_embeddings.reshape([bs_embed * num_images_per_prompt, seq_len, -1]) - - # get unconditional embeddings for classifier free guidance - if do_classifier_free_guidance: - uncond_images: List[str] - if negative_prompt is None: - uncond_images = [np.zeros((512, 512, 3)) + 0.5] * batch_size - elif type(prompt) is not type(negative_prompt): - raise TypeError( - f"`negative_prompt` should be the same type to `prompt`, but got {type(negative_prompt)} !=" - f" {type(prompt)}." - ) - elif isinstance(negative_prompt, PIL.Image.Image): - uncond_images = [negative_prompt] - elif batch_size != len(negative_prompt): - raise ValueError( - f"`negative_prompt`: {negative_prompt} has batch size {len(negative_prompt)}, but `prompt`:" - f" {prompt} has batch size {batch_size}. Please make sure that passed `negative_prompt` matches" - " the batch size of `prompt`." - ) - else: - uncond_images = negative_prompt - - uncond_images = self.image_feature_extractor(images=uncond_images, return_tensors="pd") - pixel_values = uncond_images.pixel_values.cast(self.image_encoder.dtype) - uncond_embeddings = self.image_encoder(pixel_values) - uncond_embeddings = normalize_embeddings(uncond_embeddings) - - # duplicate unconditional embeddings for each generation per prompt, using mps friendly method - seq_len = uncond_embeddings.shape[1] - uncond_embeddings = uncond_embeddings.tile([1, num_images_per_prompt, 1]) - uncond_embeddings = uncond_embeddings.reshape([batch_size * num_images_per_prompt, seq_len, -1]) - - # For classifier free guidance, we need to do two forward passes. - # Here we concatenate the unconditional and conditional embeddings into a single batch - # to avoid doing two forward passes - image_embeddings = paddle.concat([uncond_embeddings, image_embeddings]) - - return image_embeddings - - # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline.decode_latents - def decode_latents(self, latents): - latents = 1 / 0.18215 * latents - image = self.vae.decode(latents).sample - image = (image / 2 + 0.5).clip(0, 1) - # we always cast to float32 as this does not cause significant overhead and is compatible with bfloa16 - image = image.transpose([0, 2, 3, 1]).cast("float32").numpy() - return image - - # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline.prepare_extra_step_kwargs - def prepare_extra_step_kwargs(self, generator, eta): - # prepare extra kwargs for the scheduler step, since not all schedulers have the same signature - # eta (η) is only used with the DDIMScheduler, it will be ignored for other schedulers. - # eta corresponds to η in DDIM paper: https://arxiv.org/abs/2010.02502 - # and should be between [0, 1] - - accepts_eta = "eta" in set(inspect.signature(self.scheduler.step).parameters.keys()) - extra_step_kwargs = {} - if accepts_eta: - extra_step_kwargs["eta"] = eta - - # check if the scheduler accepts generator - accepts_generator = "generator" in set(inspect.signature(self.scheduler.step).parameters.keys()) - if accepts_generator: - extra_step_kwargs["generator"] = generator - return extra_step_kwargs - - # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion_image_variation.StableDiffusionImageVariationPipeline.check_inputs - def check_inputs(self, image, height, width, callback_steps): - if ( - not isinstance(image, paddle.Tensor) - and not isinstance(image, PIL.Image.Image) - and not isinstance(image, list) - ): - raise ValueError( - "`image` has to be of type `paddle.Tensor` or `PIL.Image.Image` or `List[PIL.Image.Image]` but is" - f" {type(image)}" - ) - - if height % 8 != 0 or width % 8 != 0: - raise ValueError(f"`height` and `width` have to be divisible by 8 but are {height} and {width}.") - - if (callback_steps is None) or ( - callback_steps is not None and (not isinstance(callback_steps, int) or callback_steps <= 0) - ): - raise ValueError( - f"`callback_steps` has to be a positive integer but is {callback_steps} of type" - f" {type(callback_steps)}." - ) - - # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline.prepare_latents - def prepare_latents(self, batch_size, num_channels_latents, height, width, dtype, generator, latents=None): - shape = [batch_size, num_channels_latents, height // self.vae_scale_factor, width // self.vae_scale_factor] - if isinstance(generator, list) and len(generator) != batch_size: - raise ValueError( - f"You have passed a list of generators of length {len(generator)}, but requested an effective batch" - f" size of {batch_size}. Make sure the batch size matches the length of the generators." - ) - - if latents is None: - if isinstance(generator, list): - shape = [ - 1, - ] + shape[1:] - latents = [paddle.randn(shape, generator=generator[i], dtype=dtype) for i in range(batch_size)] - latents = paddle.concat(latents, axis=0) - else: - latents = paddle.randn(shape, generator=generator, dtype=dtype) - else: - if latents.shape != shape: - raise ValueError(f"Unexpected latents shape, got {latents.shape}, expected {shape}") - - # scale the initial noise by the standard deviation required by the scheduler - latents = latents * self.scheduler.init_noise_sigma - return latents - - @paddle.no_grad() - def __call__( - self, - image: Union[PIL.Image.Image, List[PIL.Image.Image], paddle.Tensor], - height: Optional[int] = None, - width: Optional[int] = None, - num_inference_steps: int = 50, - guidance_scale: float = 7.5, - negative_prompt: Optional[Union[str, List[str]]] = None, - num_images_per_prompt: Optional[int] = 1, - eta: float = 0.0, - generator: Optional[Union[paddle.Generator, List[paddle.Generator]]] = None, - latents: Optional[paddle.Tensor] = None, - output_type: Optional[str] = "pil", - return_dict: bool = True, - callback: Optional[Callable[[int, int, paddle.Tensor], None]] = None, - callback_steps: Optional[int] = 1, - **kwargs, - ): - r""" - Function invoked when calling the pipeline for generation. - - Args: - image (`PIL.Image.Image`, `List[PIL.Image.Image]` or `paddle.Tensor`): - The image prompt or prompts to guide the image generation. - height (`int`, *optional*, defaults to self.image_unet.config.sample_size * self.vae_scale_factor): - The height in pixels of the generated image. - width (`int`, *optional*, defaults to self.image_unet.config.sample_size * self.vae_scale_factor): - The width in pixels of the generated image. - num_inference_steps (`int`, *optional*, defaults to 50): - The number of denoising steps. More denoising steps usually lead to a higher quality image at the - expense of slower inference. - guidance_scale (`float`, *optional*, defaults to 7.5): - Guidance scale as defined in [Classifier-Free Diffusion Guidance](https://arxiv.org/abs/2207.12598). - `guidance_scale` is defined as `w` of equation 2. of [Imagen - Paper](https://arxiv.org/pdf/2205.11487.pdf). Guidance scale is enabled by setting `guidance_scale > - 1`. Higher guidance scale encourages to generate images that are closely linked to the text `prompt`, - usually at the expense of lower image quality. - negative_prompt (`str` or `List[str]`, *optional*): - The prompt or prompts not to guide the image generation. Ignored when not using guidance (i.e., ignored - if `guidance_scale` is less than `1`). - num_images_per_prompt (`int`, *optional*, defaults to 1): - The number of images to generate per prompt. - eta (`float`, *optional*, defaults to 0.0): - Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to - [`schedulers.DDIMScheduler`], will be ignored for others. - generator (`paddle.Generator`, *optional*): - A [paddle generator] to make generation - deterministic. - latents (`paddle.Tensor`, *optional*): - Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image - generation. Can be used to tweak the same generation with different prompts. If not provided, a latents - tensor will ge generated by sampling using the supplied random `generator`. - output_type (`str`, *optional*, defaults to `"pil"`): - The output format of the generate image. Choose between - [PIL](https://pillow.readthedocs.io/en/stable/): `PIL.Image.Image` or `np.array`. - return_dict (`bool`, *optional*, defaults to `True`): - Whether or not to return a [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] instead of a - plain tuple. - callback (`Callable`, *optional*): - A function that will be called every `callback_steps` steps during inference. The function will be - called with the following arguments: `callback(step: int, timestep: int, latents: paddle.Tensor)`. - callback_steps (`int`, *optional*, defaults to 1): - The frequency at which the `callback` function will be called. If not specified, the callback will be - called at every step. - - Examples: - - ```py - >>> from ppdiffusers import VersatileDiffusionImageVariationPipeline - >>> import paddle - >>> import requests - >>> from io import BytesIO - >>> from PIL import Image - - >>> # let's download an initial image - >>> url = "https://huggingface.co/datasets/diffusers/images/resolve/main/benz.jpg" - - >>> response = requests.get(url) - >>> image = Image.open(BytesIO(response.content)).convert("RGB") - - >>> pipe = VersatileDiffusionImageVariationPipeline.from_pretrained( - ... "shi-labs/versatile-diffusion" - ... ) - - >>> generator = paddle.Generator().manual_seed(0) - >>> image = pipe(image, generator=generator).images[0] - >>> image.save("./car_variation.png") - ``` - - Returns: - [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] or `tuple`: - [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] if `return_dict` is True, otherwise a `tuple. - When returning a tuple, the first element is a list with the generated images, and the second element is a - list of `bool`s denoting whether the corresponding generated image likely represents "not-safe-for-work" - (nsfw) content, according to the `safety_checker`. - """ - # 0. Default height and width to unet - height = height or self.image_unet.config.sample_size * self.vae_scale_factor - width = width or self.image_unet.config.sample_size * self.vae_scale_factor - - # 1. Check inputs. Raise error if not correct - self.check_inputs(image, height, width, callback_steps) - - # 2. Define call parameters - batch_size = 1 if isinstance(image, PIL.Image.Image) else len(image) - # here `guidance_scale` is defined analog to the guidance weight `w` of equation (2) - # of the Imagen paper: https://arxiv.org/pdf/2205.11487.pdf . `guidance_scale = 1` - # corresponds to doing no classifier free guidance. - do_classifier_free_guidance = guidance_scale > 1.0 - - # 3. Encode input prompt - image_embeddings = self._encode_image_prompt( - image, num_images_per_prompt, do_classifier_free_guidance, negative_prompt - ) - - # 4. Prepare timesteps - self.scheduler.set_timesteps(num_inference_steps) - timesteps = self.scheduler.timesteps - - # 5. Prepare latent variables - num_channels_latents = self.image_unet.in_channels - latents = self.prepare_latents( - batch_size * num_images_per_prompt, - num_channels_latents, - height, - width, - image_embeddings.dtype, - generator, - latents, - ) - - # 6. Prepare extra step kwargs. - extra_step_kwargs = self.prepare_extra_step_kwargs(generator, eta) - - # 7. Denoising loop - for i, t in enumerate(self.progress_bar(timesteps)): - # expand the latents if we are doing classifier free guidance - latent_model_input = paddle.concat([latents] * 2) if do_classifier_free_guidance else latents - latent_model_input = self.scheduler.scale_model_input(latent_model_input, t) - - # predict the noise residual - noise_pred = self.image_unet(latent_model_input, t, encoder_hidden_states=image_embeddings).sample - - # perform guidance - if do_classifier_free_guidance: - noise_pred_uncond, noise_pred_text = noise_pred.chunk(2) - noise_pred = noise_pred_uncond + guidance_scale * (noise_pred_text - noise_pred_uncond) - - # compute the previous noisy sample x_t -> x_t-1 - latents = self.scheduler.step(noise_pred, t, latents, **extra_step_kwargs).prev_sample - - # call the callback, if provided - if callback is not None and i % callback_steps == 0: - callback(i, t, latents) - - # 8. Post-processing - image = self.decode_latents(latents) - - # 9. Convert to PIL - if output_type == "pil": - image = self.numpy_to_pil(image) - - if not return_dict: - return (image,) - - return ImagePipelineOutput(images=image) diff --git a/spaces/7hao/bingo/src/lib/hooks/use-at-bottom.tsx b/spaces/7hao/bingo/src/lib/hooks/use-at-bottom.tsx deleted file mode 100644 index d37c8cf4162adcb0064e08ecec24eb731416b045..0000000000000000000000000000000000000000 --- a/spaces/7hao/bingo/src/lib/hooks/use-at-bottom.tsx +++ /dev/null @@ -1,23 +0,0 @@ -import * as React from 'react' - -export function useAtBottom(offset = 0) { - const [isAtBottom, setIsAtBottom] = React.useState(false) - - React.useEffect(() => { - const handleScroll = () => { - setIsAtBottom( - window.innerHeight + window.scrollY >= - document.body.offsetHeight - offset - ) - } - - window.addEventListener('scroll', handleScroll, { passive: true }) - handleScroll() - - return () => { - window.removeEventListener('scroll', handleScroll) - } - }, [offset]) - - return isAtBottom -} diff --git a/spaces/AIGC-Audio/Make_An_Audio_inpaint/ldm/modules/encoders/open_clap/version.py b/spaces/AIGC-Audio/Make_An_Audio_inpaint/ldm/modules/encoders/open_clap/version.py deleted file mode 100644 index fc79d63d5430b972ac6ec1c4bfea9af80922da4d..0000000000000000000000000000000000000000 --- a/spaces/AIGC-Audio/Make_An_Audio_inpaint/ldm/modules/encoders/open_clap/version.py +++ /dev/null @@ -1 +0,0 @@ -__version__ = '0.2.1' diff --git a/spaces/AbeShinzo0708/AI_Kishida_Fumio_speaker/README.md b/spaces/AbeShinzo0708/AI_Kishida_Fumio_speaker/README.md deleted file mode 100644 index 9f64dc7167dfcab3c085a77e9a2d575bb96476ed..0000000000000000000000000000000000000000 --- a/spaces/AbeShinzo0708/AI_Kishida_Fumio_speaker/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: AI岸田文雄メーカー -emoji: 🔥 -colorFrom: indigo -colorTo: blue -sdk: streamlit -sdk_version: 1.27.0 -app_file: app.py -pinned: false -license: openrail ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/AchyuthGamer/OpenGPT-Chat-UI/.svelte-kit/types/src/routes/logout/$types.d.ts b/spaces/AchyuthGamer/OpenGPT-Chat-UI/.svelte-kit/types/src/routes/logout/$types.d.ts deleted file mode 100644 index ca3cd17d0d0ad3d33a4a45533ab909457c26b653..0000000000000000000000000000000000000000 --- a/spaces/AchyuthGamer/OpenGPT-Chat-UI/.svelte-kit/types/src/routes/logout/$types.d.ts +++ /dev/null @@ -1,28 +0,0 @@ -import type * as Kit from '@sveltejs/kit'; - -type Expand = T extends infer O ? { [K in keyof O]: O[K] } : never; -type RouteParams = { } -type RouteId = '/logout'; -type MaybeWithVoid = {} extends T ? T | void : T; -export type RequiredKeys = { [K in keyof T]-?: {} extends { [P in K]: T[K] } ? never : K; }[keyof T]; -type OutputDataShape = MaybeWithVoid> & Partial> & Record> -type EnsureDefined = T extends null | undefined ? {} : T; -type OptionalUnion, A extends keyof U = U extends U ? keyof U : never> = U extends unknown ? { [P in Exclude]?: never } & U : never; -export type Snapshot = Kit.Snapshot; -type PageServerParentData = EnsureDefined; -type PageParentData = EnsureDefined; - -export type PageServerLoad = OutputDataShape> = Kit.ServerLoad; -export type PageServerLoadEvent = Parameters[0]; -type ExcludeActionFailure = T extends Kit.ActionFailure ? never : T extends void ? never : T; -type ActionsSuccess any>> = { [Key in keyof T]: ExcludeActionFailure>>; }[keyof T]; -type ExtractActionFailure = T extends Kit.ActionFailure ? X extends void ? never : X : never; -type ActionsFailure any>> = { [Key in keyof T]: Exclude>>, void>; }[keyof T]; -type ActionsExport = typeof import('../../../../../src/routes/logout/+page.server.js').actions -export type SubmitFunction = Kit.SubmitFunction>, Expand>> -export type ActionData = Expand> | null; -export type PageServerData = null; -export type PageData = Expand; -export type Action | void = Record | void> = Kit.Action -export type Actions | void = Record | void> = Kit.Actions -export type RequestEvent = Kit.RequestEvent; \ No newline at end of file diff --git a/spaces/AchyuthGamer/OpenGPT-Chat-UI/src/routes/conversation/[id]/phi/m.js b/spaces/AchyuthGamer/OpenGPT-Chat-UI/src/routes/conversation/[id]/phi/m.js deleted file mode 100644 index 2d3ada771c85e140215b0f2bc5a1cd8843ec5434..0000000000000000000000000000000000000000 --- a/spaces/AchyuthGamer/OpenGPT-Chat-UI/src/routes/conversation/[id]/phi/m.js +++ /dev/null @@ -1,476 +0,0 @@ -let wasm; - -const cachedTextDecoder = (typeof TextDecoder !== 'undefined' ? new TextDecoder('utf-8', { ignoreBOM: true, fatal: true }) : { decode: () => { throw Error('TextDecoder not available') } } ); - -if (typeof TextDecoder !== 'undefined') { cachedTextDecoder.decode(); }; - -let cachedUint8Memory0 = null; - -function getUint8Memory0() { - if (cachedUint8Memory0 === null || cachedUint8Memory0.byteLength === 0) { - cachedUint8Memory0 = new Uint8Array(wasm.memory.buffer); - } - return cachedUint8Memory0; -} - -function getStringFromWasm0(ptr, len) { - ptr = ptr >>> 0; - return cachedTextDecoder.decode(getUint8Memory0().subarray(ptr, ptr + len)); -} - -const heap = new Array(128).fill(undefined); - -heap.push(undefined, null, true, false); - -let heap_next = heap.length; - -function addHeapObject(obj) { - if (heap_next === heap.length) heap.push(heap.length + 1); - const idx = heap_next; - heap_next = heap[idx]; - - heap[idx] = obj; - return idx; -} - -function getObject(idx) { return heap[idx]; } - -function dropObject(idx) { - if (idx < 132) return; - heap[idx] = heap_next; - heap_next = idx; -} - -function takeObject(idx) { - const ret = getObject(idx); - dropObject(idx); - return ret; -} - -let WASM_VECTOR_LEN = 0; - -function passArray8ToWasm0(arg, malloc) { - const ptr = malloc(arg.length * 1, 1) >>> 0; - getUint8Memory0().set(arg, ptr / 1); - WASM_VECTOR_LEN = arg.length; - return ptr; -} - -let cachedInt32Memory0 = null; - -function getInt32Memory0() { - if (cachedInt32Memory0 === null || cachedInt32Memory0.byteLength === 0) { - cachedInt32Memory0 = new Int32Array(wasm.memory.buffer); - } - return cachedInt32Memory0; -} - -const cachedTextEncoder = (typeof TextEncoder !== 'undefined' ? new TextEncoder('utf-8') : { encode: () => { throw Error('TextEncoder not available') } } ); - -const encodeString = (typeof cachedTextEncoder.encodeInto === 'function' - ? function (arg, view) { - return cachedTextEncoder.encodeInto(arg, view); -} - : function (arg, view) { - const buf = cachedTextEncoder.encode(arg); - view.set(buf); - return { - read: arg.length, - written: buf.length - }; -}); - -function passStringToWasm0(arg, malloc, realloc) { - - if (realloc === undefined) { - const buf = cachedTextEncoder.encode(arg); - const ptr = malloc(buf.length, 1) >>> 0; - getUint8Memory0().subarray(ptr, ptr + buf.length).set(buf); - WASM_VECTOR_LEN = buf.length; - return ptr; - } - - let len = arg.length; - let ptr = malloc(len, 1) >>> 0; - - const mem = getUint8Memory0(); - - let offset = 0; - - for (; offset < len; offset++) { - const code = arg.charCodeAt(offset); - if (code > 0x7F) break; - mem[ptr + offset] = code; - } - - if (offset !== len) { - if (offset !== 0) { - arg = arg.slice(offset); - } - ptr = realloc(ptr, len, len = offset + arg.length * 3, 1) >>> 0; - const view = getUint8Memory0().subarray(ptr + offset, ptr + len); - const ret = encodeString(arg, view); - - offset += ret.written; - } - - WASM_VECTOR_LEN = offset; - return ptr; -} - -function handleError(f, args) { - try { - return f.apply(this, args); - } catch (e) { - wasm.__wbindgen_exn_store(addHeapObject(e)); - } -} -/** -*/ -export class Model { - - static __wrap(ptr) { - ptr = ptr >>> 0; - const obj = Object.create(Model.prototype); - obj.__wbg_ptr = ptr; - - return obj; - } - - __destroy_into_raw() { - const ptr = this.__wbg_ptr; - this.__wbg_ptr = 0; - - return ptr; - } - - free() { - const ptr = this.__destroy_into_raw(); - wasm.__wbg_model_free(ptr); - } - /** - * @param {Uint8Array} weights - * @param {Uint8Array} tokenizer - * @param {boolean} quantized - */ - constructor(weights, tokenizer, quantized) { - try { - const retptr = wasm.__wbindgen_add_to_stack_pointer(-16); - const ptr0 = passArray8ToWasm0(weights, wasm.__wbindgen_malloc); - const len0 = WASM_VECTOR_LEN; - const ptr1 = passArray8ToWasm0(tokenizer, wasm.__wbindgen_malloc); - const len1 = WASM_VECTOR_LEN; - wasm.model_load(retptr, ptr0, len0, ptr1, len1, quantized); - var r0 = getInt32Memory0()[retptr / 4 + 0]; - var r1 = getInt32Memory0()[retptr / 4 + 1]; - var r2 = getInt32Memory0()[retptr / 4 + 2]; - if (r2) { - throw takeObject(r1); - } - return Model.__wrap(r0); - } finally { - wasm.__wbindgen_add_to_stack_pointer(16); - } - } - /** - * @param {string} prompt - * @param {number} temp - * @param {number} top_p - * @param {number} repeat_penalty - * @param {number} repeat_last_n - * @param {bigint} seed - * @returns {string} - */ - init_with_prompt(prompt, temp, top_p, repeat_penalty, repeat_last_n, seed) { - let deferred3_0; - let deferred3_1; - try { - const retptr = wasm.__wbindgen_add_to_stack_pointer(-16); - const ptr0 = passStringToWasm0(prompt, wasm.__wbindgen_malloc, wasm.__wbindgen_realloc); - const len0 = WASM_VECTOR_LEN; - wasm.model_init_with_prompt(retptr, this.__wbg_ptr, ptr0, len0, temp, top_p, repeat_penalty, repeat_last_n, seed); - var r0 = getInt32Memory0()[retptr / 4 + 0]; - var r1 = getInt32Memory0()[retptr / 4 + 1]; - var r2 = getInt32Memory0()[retptr / 4 + 2]; - var r3 = getInt32Memory0()[retptr / 4 + 3]; - var ptr2 = r0; - var len2 = r1; - if (r3) { - ptr2 = 0; len2 = 0; - throw takeObject(r2); - } - deferred3_0 = ptr2; - deferred3_1 = len2; - return getStringFromWasm0(ptr2, len2); - } finally { - wasm.__wbindgen_add_to_stack_pointer(16); - wasm.__wbindgen_free(deferred3_0, deferred3_1, 1); - } - } - /** - * @returns {string} - */ - next_token() { - let deferred2_0; - let deferred2_1; - try { - const retptr = wasm.__wbindgen_add_to_stack_pointer(-16); - wasm.model_next_token(retptr, this.__wbg_ptr); - var r0 = getInt32Memory0()[retptr / 4 + 0]; - var r1 = getInt32Memory0()[retptr / 4 + 1]; - var r2 = getInt32Memory0()[retptr / 4 + 2]; - var r3 = getInt32Memory0()[retptr / 4 + 3]; - var ptr1 = r0; - var len1 = r1; - if (r3) { - ptr1 = 0; len1 = 0; - throw takeObject(r2); - } - deferred2_0 = ptr1; - deferred2_1 = len1; - return getStringFromWasm0(ptr1, len1); - } finally { - wasm.__wbindgen_add_to_stack_pointer(16); - wasm.__wbindgen_free(deferred2_0, deferred2_1, 1); - } - } -} - -async function __wbg_load(module, imports) { - if (typeof Response === 'function' && module instanceof Response) { - if (typeof WebAssembly.instantiateStreaming === 'function') { - try { - return await WebAssembly.instantiateStreaming(module, imports); - - } catch (e) { - if (module.headers.get('Content-Type') != 'application/wasm') { - console.warn("`WebAssembly.instantiateStreaming` failed because your server does not serve wasm with `application/wasm` MIME type. Falling back to `WebAssembly.instantiate` which is slower. Original error:\n", e); - - } else { - throw e; - } - } - } - - const bytes = await module.arrayBuffer(); - return await WebAssembly.instantiate(bytes, imports); - - } else { - const instance = await WebAssembly.instantiate(module, imports); - - if (instance instanceof WebAssembly.Instance) { - return { instance, module }; - - } else { - return instance; - } - } -} - -function __wbg_get_imports() { - const imports = {}; - imports.wbg = {}; - imports.wbg.__wbindgen_error_new = function(arg0, arg1) { - const ret = new Error(getStringFromWasm0(arg0, arg1)); - return addHeapObject(ret); - }; - imports.wbg.__wbg_new_abda76e883ba8a5f = function() { - const ret = new Error(); - return addHeapObject(ret); - }; - imports.wbg.__wbg_stack_658279fe44541cf6 = function(arg0, arg1) { - const ret = getObject(arg1).stack; - const ptr1 = passStringToWasm0(ret, wasm.__wbindgen_malloc, wasm.__wbindgen_realloc); - const len1 = WASM_VECTOR_LEN; - getInt32Memory0()[arg0 / 4 + 1] = len1; - getInt32Memory0()[arg0 / 4 + 0] = ptr1; - }; - imports.wbg.__wbg_error_f851667af71bcfc6 = function(arg0, arg1) { - let deferred0_0; - let deferred0_1; - try { - deferred0_0 = arg0; - deferred0_1 = arg1; - console.error(getStringFromWasm0(arg0, arg1)); - } finally { - wasm.__wbindgen_free(deferred0_0, deferred0_1, 1); - } - }; - imports.wbg.__wbindgen_object_drop_ref = function(arg0) { - takeObject(arg0); - }; - imports.wbg.__wbg_log_ff7e0b5e6573cdff = function(arg0, arg1) { - console.log(getStringFromWasm0(arg0, arg1)); - }; - imports.wbg.__wbg_crypto_c48a774b022d20ac = function(arg0) { - const ret = getObject(arg0).crypto; - return addHeapObject(ret); - }; - imports.wbg.__wbindgen_is_object = function(arg0) { - const val = getObject(arg0); - const ret = typeof(val) === 'object' && val !== null; - return ret; - }; - imports.wbg.__wbg_process_298734cf255a885d = function(arg0) { - const ret = getObject(arg0).process; - return addHeapObject(ret); - }; - imports.wbg.__wbg_versions_e2e78e134e3e5d01 = function(arg0) { - const ret = getObject(arg0).versions; - return addHeapObject(ret); - }; - imports.wbg.__wbg_node_1cd7a5d853dbea79 = function(arg0) { - const ret = getObject(arg0).node; - return addHeapObject(ret); - }; - imports.wbg.__wbindgen_is_string = function(arg0) { - const ret = typeof(getObject(arg0)) === 'string'; - return ret; - }; - imports.wbg.__wbg_msCrypto_bcb970640f50a1e8 = function(arg0) { - const ret = getObject(arg0).msCrypto; - return addHeapObject(ret); - }; - imports.wbg.__wbg_require_8f08ceecec0f4fee = function() { return handleError(function () { - const ret = module.require; - return addHeapObject(ret); - }, arguments) }; - imports.wbg.__wbindgen_is_function = function(arg0) { - const ret = typeof(getObject(arg0)) === 'function'; - return ret; - }; - imports.wbg.__wbindgen_string_new = function(arg0, arg1) { - const ret = getStringFromWasm0(arg0, arg1); - return addHeapObject(ret); - }; - imports.wbg.__wbg_getRandomValues_37fa2ca9e4e07fab = function() { return handleError(function (arg0, arg1) { - getObject(arg0).getRandomValues(getObject(arg1)); - }, arguments) }; - imports.wbg.__wbg_randomFillSync_dc1e9a60c158336d = function() { return handleError(function (arg0, arg1) { - getObject(arg0).randomFillSync(takeObject(arg1)); - }, arguments) }; - imports.wbg.__wbg_newnoargs_581967eacc0e2604 = function(arg0, arg1) { - const ret = new Function(getStringFromWasm0(arg0, arg1)); - return addHeapObject(ret); - }; - imports.wbg.__wbg_call_cb65541d95d71282 = function() { return handleError(function (arg0, arg1) { - const ret = getObject(arg0).call(getObject(arg1)); - return addHeapObject(ret); - }, arguments) }; - imports.wbg.__wbindgen_object_clone_ref = function(arg0) { - const ret = getObject(arg0); - return addHeapObject(ret); - }; - imports.wbg.__wbg_self_1ff1d729e9aae938 = function() { return handleError(function () { - const ret = self.self; - return addHeapObject(ret); - }, arguments) }; - imports.wbg.__wbg_window_5f4faef6c12b79ec = function() { return handleError(function () { - const ret = window.window; - return addHeapObject(ret); - }, arguments) }; - imports.wbg.__wbg_globalThis_1d39714405582d3c = function() { return handleError(function () { - const ret = globalThis.globalThis; - return addHeapObject(ret); - }, arguments) }; - imports.wbg.__wbg_global_651f05c6a0944d1c = function() { return handleError(function () { - const ret = global.global; - return addHeapObject(ret); - }, arguments) }; - imports.wbg.__wbindgen_is_undefined = function(arg0) { - const ret = getObject(arg0) === undefined; - return ret; - }; - imports.wbg.__wbg_call_01734de55d61e11d = function() { return handleError(function (arg0, arg1, arg2) { - const ret = getObject(arg0).call(getObject(arg1), getObject(arg2)); - return addHeapObject(ret); - }, arguments) }; - imports.wbg.__wbg_now_9c5990bda04c7e53 = function() { - const ret = Date.now(); - return ret; - }; - imports.wbg.__wbg_buffer_085ec1f694018c4f = function(arg0) { - const ret = getObject(arg0).buffer; - return addHeapObject(ret); - }; - imports.wbg.__wbg_newwithbyteoffsetandlength_6da8e527659b86aa = function(arg0, arg1, arg2) { - const ret = new Uint8Array(getObject(arg0), arg1 >>> 0, arg2 >>> 0); - return addHeapObject(ret); - }; - imports.wbg.__wbg_new_8125e318e6245eed = function(arg0) { - const ret = new Uint8Array(getObject(arg0)); - return addHeapObject(ret); - }; - imports.wbg.__wbg_set_5cf90238115182c3 = function(arg0, arg1, arg2) { - getObject(arg0).set(getObject(arg1), arg2 >>> 0); - }; - imports.wbg.__wbg_newwithlength_e5d69174d6984cd7 = function(arg0) { - const ret = new Uint8Array(arg0 >>> 0); - return addHeapObject(ret); - }; - imports.wbg.__wbg_subarray_13db269f57aa838d = function(arg0, arg1, arg2) { - const ret = getObject(arg0).subarray(arg1 >>> 0, arg2 >>> 0); - return addHeapObject(ret); - }; - imports.wbg.__wbindgen_throw = function(arg0, arg1) { - throw new Error(getStringFromWasm0(arg0, arg1)); - }; - imports.wbg.__wbindgen_memory = function() { - const ret = wasm.memory; - return addHeapObject(ret); - }; - - return imports; -} - -function __wbg_init_memory(imports, maybe_memory) { - -} - -function __wbg_finalize_init(instance, module) { - wasm = instance.exports; - __wbg_init.__wbindgen_wasm_module = module; - cachedInt32Memory0 = null; - cachedUint8Memory0 = null; - - wasm.__wbindgen_start(); - return wasm; -} - -function initSync(module) { - if (wasm !== undefined) return wasm; - - const imports = __wbg_get_imports(); - - __wbg_init_memory(imports); - - if (!(module instanceof WebAssembly.Module)) { - module = new WebAssembly.Module(module); - } - - const instance = new WebAssembly.Instance(module, imports); - - return __wbg_finalize_init(instance, module); -} - -async function __wbg_init(input) { - if (wasm !== undefined) return wasm; - - if (typeof input === 'undefined') { - input = new URL('m_bg.wasm', import.meta.url); - } - const imports = __wbg_get_imports(); - - if (typeof input === 'string' || (typeof Request === 'function' && input instanceof Request) || (typeof URL === 'function' && input instanceof URL)) { - input = fetch(input); - } - - __wbg_init_memory(imports); - - const { instance, module } = await __wbg_load(await input, imports); - - return __wbg_finalize_init(instance, module); -} - -export { initSync } -export default __wbg_init; \ No newline at end of file diff --git a/spaces/AchyuthGamer/OpenGPT/g4f/Provider/Providers/AItianhuSpace.py b/spaces/AchyuthGamer/OpenGPT/g4f/Provider/Providers/AItianhuSpace.py deleted file mode 100644 index 78cdf6579e250959e34086c9914a334f356eebb4..0000000000000000000000000000000000000000 --- a/spaces/AchyuthGamer/OpenGPT/g4f/Provider/Providers/AItianhuSpace.py +++ /dev/null @@ -1,92 +0,0 @@ -from __future__ import annotations - -import random, json - -from ..typing import AsyncGenerator -from ..requests import StreamSession -from .base_provider import AsyncGeneratorProvider, format_prompt, get_cookies - -domains = { - "gpt-3.5-turbo": "aitianhu.space", - "gpt-4": "aitianhu.website", -} - -class AItianhuSpace(AsyncGeneratorProvider): - url = "https://chat3.aiyunos.top/" - working = True - supports_gpt_35_turbo = True - - @classmethod - async def create_async_generator( - cls, - model: str, - messages: list[dict[str, str]], - proxy: str = None, - domain: str = None, - cookies: dict = None, - timeout: int = 30, - **kwargs - ) -> AsyncGenerator: - if not model: - model = "gpt-3.5-turbo" - elif not model in domains: - raise ValueError(f"Model are not supported: {model}") - if not domain: - chars = 'abcdefghijklmnopqrstuvwxyz0123456789' - rand = ''.join(random.choice(chars) for _ in range(6)) - domain = f"{rand}.{domains[model]}" - if not cookies: - cookies = get_cookies(domain) - - url = f'https://{domain}' - async with StreamSession( - proxies={"https": proxy}, - cookies=cookies, - timeout=timeout, - impersonate="chrome110", - verify=False - ) as session: - data = { - "prompt": format_prompt(messages), - "options": {}, - "systemMessage": "You are ChatGPT, a large language model trained by OpenAI. Follow the user's instructions carefully.", - "temperature": 0.8, - "top_p": 1, - **kwargs - } - headers = { - "Authority": url, - "Accept": "application/json, text/plain, */*", - "Origin": url, - "Referer": f"{url}/" - } - async with session.post(f"{url}/api/chat-process", json=data, headers=headers) as response: - response.raise_for_status() - async for line in response.iter_lines(): - if line == b"

Transformers.js

Next.js template

\ No newline at end of file diff --git a/spaces/Xenova/semantic-image-search-client/_next/static/chunks/app/layout-0130cf123f8287ae.js b/spaces/Xenova/semantic-image-search-client/_next/static/chunks/app/layout-0130cf123f8287ae.js deleted file mode 100644 index 14c2762632ec872466035dbea21ef3930d29f4eb..0000000000000000000000000000000000000000 --- a/spaces/Xenova/semantic-image-search-client/_next/static/chunks/app/layout-0130cf123f8287ae.js +++ /dev/null @@ -1 +0,0 @@ -(self.webpackChunk_N_E=self.webpackChunk_N_E||[]).push([[185],{5861:function(e,n,t){Promise.resolve().then(t.t.bind(t,2471,23)),Promise.resolve().then(t.t.bind(t,5506,23))},2471:function(){},5506:function(e){e.exports={style:{fontFamily:"'__Inter_e66fe9', '__Inter_Fallback_e66fe9'",fontStyle:"normal"},className:"__className_e66fe9"}}},function(e){e.O(0,[971,596,744],function(){return e(e.s=5861)}),_N_E=e.O()}]); \ No newline at end of file diff --git a/spaces/XzJosh/maimai-Bert-VITS2/losses.py b/spaces/XzJosh/maimai-Bert-VITS2/losses.py deleted file mode 100644 index fb22a0e834dd87edaa37bb8190eee2c3c7abe0d5..0000000000000000000000000000000000000000 --- a/spaces/XzJosh/maimai-Bert-VITS2/losses.py +++ /dev/null @@ -1,61 +0,0 @@ -import torch -from torch.nn import functional as F - -import commons - - -def feature_loss(fmap_r, fmap_g): - loss = 0 - for dr, dg in zip(fmap_r, fmap_g): - for rl, gl in zip(dr, dg): - rl = rl.float().detach() - gl = gl.float() - loss += torch.mean(torch.abs(rl - gl)) - - return loss * 2 - - -def discriminator_loss(disc_real_outputs, disc_generated_outputs): - loss = 0 - r_losses = [] - g_losses = [] - for dr, dg in zip(disc_real_outputs, disc_generated_outputs): - dr = dr.float() - dg = dg.float() - r_loss = torch.mean((1-dr)**2) - g_loss = torch.mean(dg**2) - loss += (r_loss + g_loss) - r_losses.append(r_loss.item()) - g_losses.append(g_loss.item()) - - return loss, r_losses, g_losses - - -def generator_loss(disc_outputs): - loss = 0 - gen_losses = [] - for dg in disc_outputs: - dg = dg.float() - l = torch.mean((1-dg)**2) - gen_losses.append(l) - loss += l - - return loss, gen_losses - - -def kl_loss(z_p, logs_q, m_p, logs_p, z_mask): - """ - z_p, logs_q: [b, h, t_t] - m_p, logs_p: [b, h, t_t] - """ - z_p = z_p.float() - logs_q = logs_q.float() - m_p = m_p.float() - logs_p = logs_p.float() - z_mask = z_mask.float() - - kl = logs_p - logs_q - 0.5 - kl += 0.5 * ((z_p - m_p)**2) * torch.exp(-2. * logs_p) - kl = torch.sum(kl * z_mask) - l = kl / torch.sum(z_mask) - return l diff --git a/spaces/YE01/saya-vits/text/sanskrit.py b/spaces/YE01/saya-vits/text/sanskrit.py deleted file mode 100644 index 0223aaac384a2f850f5bc20651fc18eb964607d0..0000000000000000000000000000000000000000 --- a/spaces/YE01/saya-vits/text/sanskrit.py +++ /dev/null @@ -1,62 +0,0 @@ -import re -from indic_transliteration import sanscript - - -# List of (iast, ipa) pairs: -_iast_to_ipa = [(re.compile('%s' % x[0]), x[1]) for x in [ - ('a', 'ə'), - ('ā', 'aː'), - ('ī', 'iː'), - ('ū', 'uː'), - ('ṛ', 'ɹ`'), - ('ṝ', 'ɹ`ː'), - ('ḷ', 'l`'), - ('ḹ', 'l`ː'), - ('e', 'eː'), - ('o', 'oː'), - ('k', 'k⁼'), - ('k⁼h', 'kʰ'), - ('g', 'g⁼'), - ('g⁼h', 'gʰ'), - ('ṅ', 'ŋ'), - ('c', 'ʧ⁼'), - ('ʧ⁼h', 'ʧʰ'), - ('j', 'ʥ⁼'), - ('ʥ⁼h', 'ʥʰ'), - ('ñ', 'n^'), - ('ṭ', 't`⁼'), - ('t`⁼h', 't`ʰ'), - ('ḍ', 'd`⁼'), - ('d`⁼h', 'd`ʰ'), - ('ṇ', 'n`'), - ('t', 't⁼'), - ('t⁼h', 'tʰ'), - ('d', 'd⁼'), - ('d⁼h', 'dʰ'), - ('p', 'p⁼'), - ('p⁼h', 'pʰ'), - ('b', 'b⁼'), - ('b⁼h', 'bʰ'), - ('y', 'j'), - ('ś', 'ʃ'), - ('ṣ', 's`'), - ('r', 'ɾ'), - ('l̤', 'l`'), - ('h', 'ɦ'), - ("'", ''), - ('~', '^'), - ('ṃ', '^') -]] - - -def devanagari_to_ipa(text): - text = text.replace('ॐ', 'ओम्') - text = re.sub(r'\s*।\s*$', '.', text) - text = re.sub(r'\s*।\s*', ', ', text) - text = re.sub(r'\s*॥', '.', text) - text = sanscript.transliterate(text, sanscript.DEVANAGARI, sanscript.IAST) - for regex, replacement in _iast_to_ipa: - text = re.sub(regex, replacement, text) - text = re.sub('(.)[`ː]*ḥ', lambda x: x.group(0) - [:-1]+'h'+x.group(1)+'*', text) - return text diff --git a/spaces/Yiqin/ChatVID/config/config_utils.py b/spaces/Yiqin/ChatVID/config/config_utils.py deleted file mode 100644 index 5f0d4d9ff58522738e8afcf425454ace3a1886fb..0000000000000000000000000000000000000000 --- a/spaces/Yiqin/ChatVID/config/config_utils.py +++ /dev/null @@ -1,14 +0,0 @@ -def get_config( - config_path: str -): - import yaml - f = open(config_path, "r") - config = yaml.load(f.read(), yaml.Loader) - f.close() - return config - -def save_config( - config: dict, - file_path: str, -): - pass \ No newline at end of file diff --git a/spaces/Yuqi/Gender_Classifier/app.py b/spaces/Yuqi/Gender_Classifier/app.py deleted file mode 100644 index 2bdd643fc16d62c3e56406e4e30ef1f74c90a1b6..0000000000000000000000000000000000000000 --- a/spaces/Yuqi/Gender_Classifier/app.py +++ /dev/null @@ -1,44 +0,0 @@ -## import gradio as gr -# -## def greet(name): -## return "Hello " + name + "!!" -# -## iface = gr.Interface(fn=greet, inputs="text", outputs="text") -## iface.launch() - -import gradio as gr -from fastai.vision.all import * -import skimage - -learn = load_learner('model.pkl') -labels = learn.dls.vocab -def predict(img): - img = PILImage.create(img) - pred,pred_idx,probs = learn.predict(img) - return {labels[i]: float(probs[i]) for i in range(len(labels))} - -title = "Female/Male Classifier" -description = "A Female/Male classifier trained on the duckduckgo search result with fastai. Created as a demo for Gradio and HuggingFace Spaces." -## article="

Blog post

" -examples = ['femaleDefault.jpg', 'maleDefault.jpg', - 'dragQueen1.jpg', 'dragQueen2.jpg', - 'femaleAngry1.jpg', 'femaleAngry2.jpg', - 'femaleMuscle1.jpg', 'femaleMuscle2.jpg', - 'maleAsian.jpg', 'maleEurope.jpg', - 'femaleAsian.jpg', 'femaleDefault.jpg', - 'maleCrying2.jpg', 'maleCrying2No.jpg'] -#interpretation='default' -enable_queue=True -# -## gr.Interface(fn=predict,inputs=gr.inputs.Image(shape=(512, 512)),outputs=gr.outputs.Label(),title=title,description=description,article=article,examples=examples,interpretation=interpretation,enable_queue=enable_queue).launch() -# -gr.Interface( - fn=predict, - inputs=gr.inputs.Image(shape=(512, 512)), - outputs=gr.outputs.Label(), - title=title, - description=description, - examples=examples, - cache_examples=True, - examples_per_page=2, - enable_queue=enable_queue).launch() \ No newline at end of file diff --git a/spaces/abdvl/datahub_qa_bot/docs/how/delete-metadata.md b/spaces/abdvl/datahub_qa_bot/docs/how/delete-metadata.md deleted file mode 100644 index 1adc561c6ce659f19e8134c858a2b944e388cf91..0000000000000000000000000000000000000000 --- a/spaces/abdvl/datahub_qa_bot/docs/how/delete-metadata.md +++ /dev/null @@ -1,144 +0,0 @@ -# Removing Metadata from DataHub - -There are a two ways to delete metadata from DataHub: - -1. Delete metadata attached to entities by providing a specific urn or filters that identify a set of entities -2. Delete metadata created by a single ingestion run - -To follow this guide you need to use [DataHub CLI](../cli.md). - -Read on to find out how to perform these kinds of deletes. - -_Note: Deleting metadata should only be done with care. Always use `--dry-run` to understand what will be deleted before proceeding. Prefer soft-deletes (`--soft`) unless you really want to nuke metadata rows. Hard deletes will actually delete rows in the primary store and recovering them will require using backups of the primary metadata store. Make sure you understand the implications of issuing soft-deletes versus hard-deletes before proceeding._ - - -:::info -Deleting metadata using DataHub's CLI and GraphQL API is a simple, systems-level action. If you attempt to delete an Entity with children, such as a Domain, it will not delete those children, you will instead need to delete each child by URN in addition to deleting the parent. -::: -## Delete By Urn - -To delete all the data related to a single entity, run - -### Soft Delete (the default) - -This sets the `Status` aspect of the entity to `Removed`, which hides the entity and all its aspects from being returned by the UI. -``` -datahub delete --urn "" -``` -or -``` -datahub delete --urn "" --soft -``` - -### Hard Delete - -This physically deletes all rows for all aspects of the entity. This action cannot be undone, so execute this only after you are sure you want to delete all data associated with this entity. - -``` -datahub delete --urn "" --hard -``` - -As of datahub v0.8.35 doing a hard delete by urn will also provide you with a way to remove references to the urn being deleted across the metadata graph. This is important to use if you don't want to have ghost references in your metadata model and want to save space in the graph database. -For now, this behaviour must be opted into by a prompt that will appear for you to manually accept or deny. - -You can optionally add `-n` or `--dry-run` to execute a dry run before issuing the final delete command. -You can optionally add `-f` or `--force` to skip confirmations -You can optionally add `--only-soft-deleted` flag to remove soft-deleted items only. - - :::note - -Make sure you surround your urn with quotes! If you do not include the quotes, your terminal may misinterpret the command._ - -::: - -If you wish to hard-delete using a curl request you can use something like below. Replace the URN with the URN that you wish to delete - -``` -curl "http://localhost:8080/entities?action=delete" -X POST --data '{"urn": "urn:li:dataset:(urn:li:dataPlatform:hive,fct_users_deleted,PROD)"}' -``` - -## Delete by filters - -_Note: All these commands below support the soft-delete option (`-s/--soft`) as well as the dry-run option (`-n/--dry-run`). - - -### Delete all Datasets from the Snowflake platform -``` -datahub delete --entity_type dataset --platform snowflake -``` - -### Delete all containers for a particular platform -``` -datahub delete --entity_type container --platform s3 -``` - -### Delete all datasets in the DEV environment -``` -datahub delete --env DEV --entity_type dataset -``` - -### Delete all Pipelines and Tasks in the DEV environment -``` -datahub delete --env DEV --entity_type "dataJob" -datahub delete --env DEV --entity_type "dataFlow" -``` - -### Delete all bigquery datasets in the PROD environment -``` -datahub delete --env PROD --entity_type dataset --platform bigquery -``` - -### Delete all looker dashboards and charts -``` -datahub delete --entity_type dashboard --platform looker -datahub delete --entity_type chart --platform looker -``` - -### Delete all datasets that match a query -``` -datahub delete --entity_type dataset --query "_tmp" -``` - -## Rollback Ingestion Run - -The second way to delete metadata is to identify entities (and the aspects affected) by using an ingestion `run-id`. Whenever you run `datahub ingest -c ...`, all the metadata ingested with that run will have the same run id. - -To view the ids of the most recent set of ingestion batches, execute - -``` -datahub ingest list-runs -``` - -That will print out a table of all the runs. Once you have an idea of which run you want to roll back, run - -``` -datahub ingest show --run-id -``` - -to see more info of the run. - -Alternately, you can execute a dry-run rollback to achieve the same outcome. -``` -datahub ingest rollback --dry-run --run-id -``` - -Finally, once you are sure you want to delete this data forever, run - -``` -datahub ingest rollback --run-id -``` - -to rollback all aspects added with this run and all entities created by this run. -This deletes both the versioned and the timeseries aspects associated with these entities. - -### Unsafe Entities and Rollback - -> **_NOTE:_** Preservation of unsafe entities has been added in datahub `0.8.32`. Read on to understand what it means and how it works. - -In some cases, entities that were initially ingested by a run might have had further modifications to their metadata (e.g. adding terms, tags, or documentation) through the UI or other means. During a roll back of the ingestion that initially created these entities (technically, if the key aspect for these entities are being rolled back), the ingestion process will analyse the metadata graph for aspects that will be left "dangling" and will: -1. Leave these aspects untouched in the database, and soft-delete the entity. A re-ingestion of these entities will result in this additional metadata becoming visible again in the UI, so you don't lose any of your work. -2. The datahub cli will save information about these unsafe entities as a CSV for operators to later review and decide on next steps (keep or remove). - -The rollback command will report how many entities have such aspects and save as a CSV the urns of these entities under a rollback reports directory, which defaults to `rollback_reports` under the current directory where the cli is run, and can be configured further using the `--reports-dir` command line arg. - -The operator can use `datahub get --urn <>` to inspect the aspects that were left behind and either keep them (do nothing) or delete the entity (and its aspects) completely using `datahub delete --urn --hard`. If the operator wishes to remove all the metadata associated with these unsafe entities, they can re-issue the rollback command with the `--nuke` flag. diff --git a/spaces/abhishek/sketch-to-image/annotator/midas/midas/transforms.py b/spaces/abhishek/sketch-to-image/annotator/midas/midas/transforms.py deleted file mode 100644 index 94eb3e16ba8e74d46f88120c644ab0c5b9120bb1..0000000000000000000000000000000000000000 --- a/spaces/abhishek/sketch-to-image/annotator/midas/midas/transforms.py +++ /dev/null @@ -1,244 +0,0 @@ -''' - * Copyright (c) 2023 Salesforce, Inc. - * All rights reserved. - * SPDX-License-Identifier: Apache License 2.0 - * For full license text, see LICENSE.txt file in the repo root or http://www.apache.org/licenses/ - * By Can Qin - * Modified from ControlNet repo: https://github.com/lllyasviel/ControlNet - * Copyright (c) 2023 Lvmin Zhang and Maneesh Agrawala -''' - -import numpy as np -import cv2 -import math - - -def apply_min_size(sample, size, image_interpolation_method=cv2.INTER_AREA): - """Rezise the sample to ensure the given size. Keeps aspect ratio. - - Args: - sample (dict): sample - size (tuple): image size - - Returns: - tuple: new size - """ - shape = list(sample["disparity"].shape) - - if shape[0] >= size[0] and shape[1] >= size[1]: - return sample - - scale = [0, 0] - scale[0] = size[0] / shape[0] - scale[1] = size[1] / shape[1] - - scale = max(scale) - - shape[0] = math.ceil(scale * shape[0]) - shape[1] = math.ceil(scale * shape[1]) - - # resize - sample["image"] = cv2.resize( - sample["image"], tuple(shape[::-1]), interpolation=image_interpolation_method - ) - - sample["disparity"] = cv2.resize( - sample["disparity"], tuple(shape[::-1]), interpolation=cv2.INTER_NEAREST - ) - sample["mask"] = cv2.resize( - sample["mask"].astype(np.float32), - tuple(shape[::-1]), - interpolation=cv2.INTER_NEAREST, - ) - sample["mask"] = sample["mask"].astype(bool) - - return tuple(shape) - - -class Resize(object): - """Resize sample to given size (width, height). - """ - - def __init__( - self, - width, - height, - resize_target=True, - keep_aspect_ratio=False, - ensure_multiple_of=1, - resize_method="lower_bound", - image_interpolation_method=cv2.INTER_AREA, - ): - """Init. - - Args: - width (int): desired output width - height (int): desired output height - resize_target (bool, optional): - True: Resize the full sample (image, mask, target). - False: Resize image only. - Defaults to True. - keep_aspect_ratio (bool, optional): - True: Keep the aspect ratio of the input sample. - Output sample might not have the given width and height, and - resize behaviour depends on the parameter 'resize_method'. - Defaults to False. - ensure_multiple_of (int, optional): - Output width and height is constrained to be multiple of this parameter. - Defaults to 1. - resize_method (str, optional): - "lower_bound": Output will be at least as large as the given size. - "upper_bound": Output will be at max as large as the given size. (Output size might be smaller than given size.) - "minimal": Scale as least as possible. (Output size might be smaller than given size.) - Defaults to "lower_bound". - """ - self.__width = width - self.__height = height - - self.__resize_target = resize_target - self.__keep_aspect_ratio = keep_aspect_ratio - self.__multiple_of = ensure_multiple_of - self.__resize_method = resize_method - self.__image_interpolation_method = image_interpolation_method - - def constrain_to_multiple_of(self, x, min_val=0, max_val=None): - y = (np.round(x / self.__multiple_of) * self.__multiple_of).astype(int) - - if max_val is not None and y > max_val: - y = (np.floor(x / self.__multiple_of) * self.__multiple_of).astype(int) - - if y < min_val: - y = (np.ceil(x / self.__multiple_of) * self.__multiple_of).astype(int) - - return y - - def get_size(self, width, height): - # determine new height and width - scale_height = self.__height / height - scale_width = self.__width / width - - if self.__keep_aspect_ratio: - if self.__resize_method == "lower_bound": - # scale such that output size is lower bound - if scale_width > scale_height: - # fit width - scale_height = scale_width - else: - # fit height - scale_width = scale_height - elif self.__resize_method == "upper_bound": - # scale such that output size is upper bound - if scale_width < scale_height: - # fit width - scale_height = scale_width - else: - # fit height - scale_width = scale_height - elif self.__resize_method == "minimal": - # scale as least as possbile - if abs(1 - scale_width) < abs(1 - scale_height): - # fit width - scale_height = scale_width - else: - # fit height - scale_width = scale_height - else: - raise ValueError( - f"resize_method {self.__resize_method} not implemented" - ) - - if self.__resize_method == "lower_bound": - new_height = self.constrain_to_multiple_of( - scale_height * height, min_val=self.__height - ) - new_width = self.constrain_to_multiple_of( - scale_width * width, min_val=self.__width - ) - elif self.__resize_method == "upper_bound": - new_height = self.constrain_to_multiple_of( - scale_height * height, max_val=self.__height - ) - new_width = self.constrain_to_multiple_of( - scale_width * width, max_val=self.__width - ) - elif self.__resize_method == "minimal": - new_height = self.constrain_to_multiple_of(scale_height * height) - new_width = self.constrain_to_multiple_of(scale_width * width) - else: - raise ValueError(f"resize_method {self.__resize_method} not implemented") - - return (new_width, new_height) - - def __call__(self, sample): - width, height = self.get_size( - sample["image"].shape[1], sample["image"].shape[0] - ) - - # resize sample - sample["image"] = cv2.resize( - sample["image"], - (width, height), - interpolation=self.__image_interpolation_method, - ) - - if self.__resize_target: - if "disparity" in sample: - sample["disparity"] = cv2.resize( - sample["disparity"], - (width, height), - interpolation=cv2.INTER_NEAREST, - ) - - if "depth" in sample: - sample["depth"] = cv2.resize( - sample["depth"], (width, height), interpolation=cv2.INTER_NEAREST - ) - - sample["mask"] = cv2.resize( - sample["mask"].astype(np.float32), - (width, height), - interpolation=cv2.INTER_NEAREST, - ) - sample["mask"] = sample["mask"].astype(bool) - - return sample - - -class NormalizeImage(object): - """Normlize image by given mean and std. - """ - - def __init__(self, mean, std): - self.__mean = mean - self.__std = std - - def __call__(self, sample): - sample["image"] = (sample["image"] - self.__mean) / self.__std - - return sample - - -class PrepareForNet(object): - """Prepare sample for usage as network input. - """ - - def __init__(self): - pass - - def __call__(self, sample): - image = np.transpose(sample["image"], (2, 0, 1)) - sample["image"] = np.ascontiguousarray(image).astype(np.float32) - - if "mask" in sample: - sample["mask"] = sample["mask"].astype(np.float32) - sample["mask"] = np.ascontiguousarray(sample["mask"]) - - if "disparity" in sample: - disparity = sample["disparity"].astype(np.float32) - sample["disparity"] = np.ascontiguousarray(disparity) - - if "depth" in sample: - depth = sample["depth"].astype(np.float32) - sample["depth"] = np.ascontiguousarray(depth) - - return sample diff --git a/spaces/abhishek/sketch-to-image/annotator/uniformer/mmdet/core/bbox/demodata.py b/spaces/abhishek/sketch-to-image/annotator/uniformer/mmdet/core/bbox/demodata.py deleted file mode 100644 index feecb693745a47d9f2bebd8af9a217ff4f5cc92b..0000000000000000000000000000000000000000 --- a/spaces/abhishek/sketch-to-image/annotator/uniformer/mmdet/core/bbox/demodata.py +++ /dev/null @@ -1,41 +0,0 @@ -import numpy as np -import torch - -from mmdet.utils.util_random import ensure_rng - - -def random_boxes(num=1, scale=1, rng=None): - """Simple version of ``kwimage.Boxes.random`` - - Returns: - Tensor: shape (n, 4) in x1, y1, x2, y2 format. - - References: - https://gitlab.kitware.com/computer-vision/kwimage/blob/master/kwimage/structs/boxes.py#L1390 - - Example: - >>> num = 3 - >>> scale = 512 - >>> rng = 0 - >>> boxes = random_boxes(num, scale, rng) - >>> print(boxes) - tensor([[280.9925, 278.9802, 308.6148, 366.1769], - [216.9113, 330.6978, 224.0446, 456.5878], - [405.3632, 196.3221, 493.3953, 270.7942]]) - """ - rng = ensure_rng(rng) - - tlbr = rng.rand(num, 4).astype(np.float32) - - tl_x = np.minimum(tlbr[:, 0], tlbr[:, 2]) - tl_y = np.minimum(tlbr[:, 1], tlbr[:, 3]) - br_x = np.maximum(tlbr[:, 0], tlbr[:, 2]) - br_y = np.maximum(tlbr[:, 1], tlbr[:, 3]) - - tlbr[:, 0] = tl_x * scale - tlbr[:, 1] = tl_y * scale - tlbr[:, 2] = br_x * scale - tlbr[:, 3] = br_y * scale - - boxes = torch.from_numpy(tlbr) - return boxes diff --git a/spaces/abhishek/sketch-to-image/annotator/uniformer/mmdet_null/models/dense_heads/fsaf_head.py b/spaces/abhishek/sketch-to-image/annotator/uniformer/mmdet_null/models/dense_heads/fsaf_head.py deleted file mode 100644 index 7183efce28596ba106411250f508aec5995fbf60..0000000000000000000000000000000000000000 --- a/spaces/abhishek/sketch-to-image/annotator/uniformer/mmdet_null/models/dense_heads/fsaf_head.py +++ /dev/null @@ -1,422 +0,0 @@ -import numpy as np -import torch -from mmcv.cnn import normal_init -from mmcv.runner import force_fp32 - -from mmdet.core import (anchor_inside_flags, images_to_levels, multi_apply, - unmap) -from ..builder import HEADS -from ..losses.accuracy import accuracy -from ..losses.utils import weight_reduce_loss -from .retina_head import RetinaHead - - -@HEADS.register_module() -class FSAFHead(RetinaHead): - """Anchor-free head used in `FSAF `_. - - The head contains two subnetworks. The first classifies anchor boxes and - the second regresses deltas for the anchors (num_anchors is 1 for anchor- - free methods) - - Args: - *args: Same as its base class in :class:`RetinaHead` - score_threshold (float, optional): The score_threshold to calculate - positive recall. If given, prediction scores lower than this value - is counted as incorrect prediction. Default to None. - **kwargs: Same as its base class in :class:`RetinaHead` - - Example: - >>> import torch - >>> self = FSAFHead(11, 7) - >>> x = torch.rand(1, 7, 32, 32) - >>> cls_score, bbox_pred = self.forward_single(x) - >>> # Each anchor predicts a score for each class except background - >>> cls_per_anchor = cls_score.shape[1] / self.num_anchors - >>> box_per_anchor = bbox_pred.shape[1] / self.num_anchors - >>> assert cls_per_anchor == self.num_classes - >>> assert box_per_anchor == 4 - """ - - def __init__(self, *args, score_threshold=None, **kwargs): - super().__init__(*args, **kwargs) - self.score_threshold = score_threshold - - def forward_single(self, x): - """Forward feature map of a single scale level. - - Args: - x (Tensor): Feature map of a single scale level. - - Returns: - tuple (Tensor): - cls_score (Tensor): Box scores for each scale level - Has shape (N, num_points * num_classes, H, W). - bbox_pred (Tensor): Box energies / deltas for each scale - level with shape (N, num_points * 4, H, W). - """ - cls_score, bbox_pred = super().forward_single(x) - # relu: TBLR encoder only accepts positive bbox_pred - return cls_score, self.relu(bbox_pred) - - def init_weights(self): - """Initialize weights of the head.""" - super(FSAFHead, self).init_weights() - # The positive bias in self.retina_reg conv is to prevent predicted \ - # bbox with 0 area - normal_init(self.retina_reg, std=0.01, bias=0.25) - - def _get_targets_single(self, - flat_anchors, - valid_flags, - gt_bboxes, - gt_bboxes_ignore, - gt_labels, - img_meta, - label_channels=1, - unmap_outputs=True): - """Compute regression and classification targets for anchors in a - single image. - - Most of the codes are the same with the base class - :obj: `AnchorHead`, except that it also collects and returns - the matched gt index in the image (from 0 to num_gt-1). If the - anchor bbox is not matched to any gt, the corresponding value in - pos_gt_inds is -1. - """ - inside_flags = anchor_inside_flags(flat_anchors, valid_flags, - img_meta['img_shape'][:2], - self.train_cfg.allowed_border) - if not inside_flags.any(): - return (None, ) * 7 - # Assign gt and sample anchors - anchors = flat_anchors[inside_flags.type(torch.bool), :] - assign_result = self.assigner.assign( - anchors, gt_bboxes, gt_bboxes_ignore, - None if self.sampling else gt_labels) - - sampling_result = self.sampler.sample(assign_result, anchors, - gt_bboxes) - - num_valid_anchors = anchors.shape[0] - bbox_targets = torch.zeros_like(anchors) - bbox_weights = torch.zeros_like(anchors) - labels = anchors.new_full((num_valid_anchors, ), - self.num_classes, - dtype=torch.long) - label_weights = anchors.new_zeros((num_valid_anchors, label_channels), - dtype=torch.float) - pos_gt_inds = anchors.new_full((num_valid_anchors, ), - -1, - dtype=torch.long) - - pos_inds = sampling_result.pos_inds - neg_inds = sampling_result.neg_inds - - if len(pos_inds) > 0: - if not self.reg_decoded_bbox: - pos_bbox_targets = self.bbox_coder.encode( - sampling_result.pos_bboxes, sampling_result.pos_gt_bboxes) - else: - # When the regression loss (e.g. `IouLoss`, `GIouLoss`) - # is applied directly on the decoded bounding boxes, both - # the predicted boxes and regression targets should be with - # absolute coordinate format. - pos_bbox_targets = sampling_result.pos_gt_bboxes - bbox_targets[pos_inds, :] = pos_bbox_targets - bbox_weights[pos_inds, :] = 1.0 - # The assigned gt_index for each anchor. (0-based) - pos_gt_inds[pos_inds] = sampling_result.pos_assigned_gt_inds - if gt_labels is None: - # Only rpn gives gt_labels as None - # Foreground is the first class - labels[pos_inds] = 0 - else: - labels[pos_inds] = gt_labels[ - sampling_result.pos_assigned_gt_inds] - if self.train_cfg.pos_weight <= 0: - label_weights[pos_inds] = 1.0 - else: - label_weights[pos_inds] = self.train_cfg.pos_weight - - if len(neg_inds) > 0: - label_weights[neg_inds] = 1.0 - - # shadowed_labels is a tensor composed of tuples - # (anchor_inds, class_label) that indicate those anchors lying in the - # outer region of a gt or overlapped by another gt with a smaller - # area. - # - # Therefore, only the shadowed labels are ignored for loss calculation. - # the key `shadowed_labels` is defined in :obj:`CenterRegionAssigner` - shadowed_labels = assign_result.get_extra_property('shadowed_labels') - if shadowed_labels is not None and shadowed_labels.numel(): - if len(shadowed_labels.shape) == 2: - idx_, label_ = shadowed_labels[:, 0], shadowed_labels[:, 1] - assert (labels[idx_] != label_).all(), \ - 'One label cannot be both positive and ignored' - label_weights[idx_, label_] = 0 - else: - label_weights[shadowed_labels] = 0 - - # map up to original set of anchors - if unmap_outputs: - num_total_anchors = flat_anchors.size(0) - labels = unmap(labels, num_total_anchors, inside_flags) - label_weights = unmap(label_weights, num_total_anchors, - inside_flags) - bbox_targets = unmap(bbox_targets, num_total_anchors, inside_flags) - bbox_weights = unmap(bbox_weights, num_total_anchors, inside_flags) - pos_gt_inds = unmap( - pos_gt_inds, num_total_anchors, inside_flags, fill=-1) - - return (labels, label_weights, bbox_targets, bbox_weights, pos_inds, - neg_inds, sampling_result, pos_gt_inds) - - @force_fp32(apply_to=('cls_scores', 'bbox_preds')) - def loss(self, - cls_scores, - bbox_preds, - gt_bboxes, - gt_labels, - img_metas, - gt_bboxes_ignore=None): - """Compute loss of the head. - - Args: - cls_scores (list[Tensor]): Box scores for each scale level - Has shape (N, num_points * num_classes, H, W). - bbox_preds (list[Tensor]): Box energies / deltas for each scale - level with shape (N, num_points * 4, H, W). - gt_bboxes (list[Tensor]): each item are the truth boxes for each - image in [tl_x, tl_y, br_x, br_y] format. - gt_labels (list[Tensor]): class indices corresponding to each box - img_metas (list[dict]): Meta information of each image, e.g., - image size, scaling factor, etc. - gt_bboxes_ignore (None | list[Tensor]): specify which bounding - boxes can be ignored when computing the loss. - - Returns: - dict[str, Tensor]: A dictionary of loss components. - """ - for i in range(len(bbox_preds)): # loop over fpn level - # avoid 0 area of the predicted bbox - bbox_preds[i] = bbox_preds[i].clamp(min=1e-4) - # TODO: It may directly use the base-class loss function. - featmap_sizes = [featmap.size()[-2:] for featmap in cls_scores] - assert len(featmap_sizes) == self.anchor_generator.num_levels - batch_size = len(gt_bboxes) - device = cls_scores[0].device - anchor_list, valid_flag_list = self.get_anchors( - featmap_sizes, img_metas, device=device) - label_channels = self.cls_out_channels if self.use_sigmoid_cls else 1 - cls_reg_targets = self.get_targets( - anchor_list, - valid_flag_list, - gt_bboxes, - img_metas, - gt_bboxes_ignore_list=gt_bboxes_ignore, - gt_labels_list=gt_labels, - label_channels=label_channels) - if cls_reg_targets is None: - return None - (labels_list, label_weights_list, bbox_targets_list, bbox_weights_list, - num_total_pos, num_total_neg, - pos_assigned_gt_inds_list) = cls_reg_targets - - num_gts = np.array(list(map(len, gt_labels))) - num_total_samples = ( - num_total_pos + num_total_neg if self.sampling else num_total_pos) - # anchor number of multi levels - num_level_anchors = [anchors.size(0) for anchors in anchor_list[0]] - # concat all level anchors and flags to a single tensor - concat_anchor_list = [] - for i in range(len(anchor_list)): - concat_anchor_list.append(torch.cat(anchor_list[i])) - all_anchor_list = images_to_levels(concat_anchor_list, - num_level_anchors) - losses_cls, losses_bbox = multi_apply( - self.loss_single, - cls_scores, - bbox_preds, - all_anchor_list, - labels_list, - label_weights_list, - bbox_targets_list, - bbox_weights_list, - num_total_samples=num_total_samples) - - # `pos_assigned_gt_inds_list` (length: fpn_levels) stores the assigned - # gt index of each anchor bbox in each fpn level. - cum_num_gts = list(np.cumsum(num_gts)) # length of batch_size - for i, assign in enumerate(pos_assigned_gt_inds_list): - # loop over fpn levels - for j in range(1, batch_size): - # loop over batch size - # Convert gt indices in each img to those in the batch - assign[j][assign[j] >= 0] += int(cum_num_gts[j - 1]) - pos_assigned_gt_inds_list[i] = assign.flatten() - labels_list[i] = labels_list[i].flatten() - num_gts = sum(map(len, gt_labels)) # total number of gt in the batch - # The unique label index of each gt in the batch - label_sequence = torch.arange(num_gts, device=device) - # Collect the average loss of each gt in each level - with torch.no_grad(): - loss_levels, = multi_apply( - self.collect_loss_level_single, - losses_cls, - losses_bbox, - pos_assigned_gt_inds_list, - labels_seq=label_sequence) - # Shape: (fpn_levels, num_gts). Loss of each gt at each fpn level - loss_levels = torch.stack(loss_levels, dim=0) - # Locate the best fpn level for loss back-propagation - if loss_levels.numel() == 0: # zero gt - argmin = loss_levels.new_empty((num_gts, ), dtype=torch.long) - else: - _, argmin = loss_levels.min(dim=0) - - # Reweight the loss of each (anchor, label) pair, so that only those - # at the best gt level are back-propagated. - losses_cls, losses_bbox, pos_inds = multi_apply( - self.reweight_loss_single, - losses_cls, - losses_bbox, - pos_assigned_gt_inds_list, - labels_list, - list(range(len(losses_cls))), - min_levels=argmin) - num_pos = torch.cat(pos_inds, 0).sum().float() - pos_recall = self.calculate_pos_recall(cls_scores, labels_list, - pos_inds) - - if num_pos == 0: # No gt - avg_factor = num_pos + float(num_total_neg) - else: - avg_factor = num_pos - for i in range(len(losses_cls)): - losses_cls[i] /= avg_factor - losses_bbox[i] /= avg_factor - return dict( - loss_cls=losses_cls, - loss_bbox=losses_bbox, - num_pos=num_pos / batch_size, - pos_recall=pos_recall) - - def calculate_pos_recall(self, cls_scores, labels_list, pos_inds): - """Calculate positive recall with score threshold. - - Args: - cls_scores (list[Tensor]): Classification scores at all fpn levels. - Each tensor is in shape (N, num_classes * num_anchors, H, W) - labels_list (list[Tensor]): The label that each anchor is assigned - to. Shape (N * H * W * num_anchors, ) - pos_inds (list[Tensor]): List of bool tensors indicating whether - the anchor is assigned to a positive label. - Shape (N * H * W * num_anchors, ) - - Returns: - Tensor: A single float number indicating the positive recall. - """ - with torch.no_grad(): - num_class = self.num_classes - scores = [ - cls.permute(0, 2, 3, 1).reshape(-1, num_class)[pos] - for cls, pos in zip(cls_scores, pos_inds) - ] - labels = [ - label.reshape(-1)[pos] - for label, pos in zip(labels_list, pos_inds) - ] - scores = torch.cat(scores, dim=0) - labels = torch.cat(labels, dim=0) - if self.use_sigmoid_cls: - scores = scores.sigmoid() - else: - scores = scores.softmax(dim=1) - - return accuracy(scores, labels, thresh=self.score_threshold) - - def collect_loss_level_single(self, cls_loss, reg_loss, assigned_gt_inds, - labels_seq): - """Get the average loss in each FPN level w.r.t. each gt label. - - Args: - cls_loss (Tensor): Classification loss of each feature map pixel, - shape (num_anchor, num_class) - reg_loss (Tensor): Regression loss of each feature map pixel, - shape (num_anchor, 4) - assigned_gt_inds (Tensor): It indicates which gt the prior is - assigned to (0-based, -1: no assignment). shape (num_anchor), - labels_seq: The rank of labels. shape (num_gt) - - Returns: - shape: (num_gt), average loss of each gt in this level - """ - if len(reg_loss.shape) == 2: # iou loss has shape (num_prior, 4) - reg_loss = reg_loss.sum(dim=-1) # sum loss in tblr dims - if len(cls_loss.shape) == 2: - cls_loss = cls_loss.sum(dim=-1) # sum loss in class dims - loss = cls_loss + reg_loss - assert loss.size(0) == assigned_gt_inds.size(0) - # Default loss value is 1e6 for a layer where no anchor is positive - # to ensure it will not be chosen to back-propagate gradient - losses_ = loss.new_full(labels_seq.shape, 1e6) - for i, l in enumerate(labels_seq): - match = assigned_gt_inds == l - if match.any(): - losses_[i] = loss[match].mean() - return losses_, - - def reweight_loss_single(self, cls_loss, reg_loss, assigned_gt_inds, - labels, level, min_levels): - """Reweight loss values at each level. - - Reassign loss values at each level by masking those where the - pre-calculated loss is too large. Then return the reduced losses. - - Args: - cls_loss (Tensor): Element-wise classification loss. - Shape: (num_anchors, num_classes) - reg_loss (Tensor): Element-wise regression loss. - Shape: (num_anchors, 4) - assigned_gt_inds (Tensor): The gt indices that each anchor bbox - is assigned to. -1 denotes a negative anchor, otherwise it is the - gt index (0-based). Shape: (num_anchors, ), - labels (Tensor): Label assigned to anchors. Shape: (num_anchors, ). - level (int): The current level index in the pyramid - (0-4 for RetinaNet) - min_levels (Tensor): The best-matching level for each gt. - Shape: (num_gts, ), - - Returns: - tuple: - - cls_loss: Reduced corrected classification loss. Scalar. - - reg_loss: Reduced corrected regression loss. Scalar. - - pos_flags (Tensor): Corrected bool tensor indicating the - final positive anchors. Shape: (num_anchors, ). - """ - loc_weight = torch.ones_like(reg_loss) - cls_weight = torch.ones_like(cls_loss) - pos_flags = assigned_gt_inds >= 0 # positive pixel flag - pos_indices = torch.nonzero(pos_flags, as_tuple=False).flatten() - - if pos_flags.any(): # pos pixels exist - pos_assigned_gt_inds = assigned_gt_inds[pos_flags] - zeroing_indices = (min_levels[pos_assigned_gt_inds] != level) - neg_indices = pos_indices[zeroing_indices] - - if neg_indices.numel(): - pos_flags[neg_indices] = 0 - loc_weight[neg_indices] = 0 - # Only the weight corresponding to the label is - # zeroed out if not selected - zeroing_labels = labels[neg_indices] - assert (zeroing_labels >= 0).all() - cls_weight[neg_indices, zeroing_labels] = 0 - - # Weighted loss for both cls and reg loss - cls_loss = weight_reduce_loss(cls_loss, cls_weight, reduction='sum') - reg_loss = weight_reduce_loss(reg_loss, loc_weight, reduction='sum') - - return cls_loss, reg_loss, pos_flags diff --git a/spaces/abhishek/sketch-to-image/annotator/uniformer_base/mmseg/models/backbones/uniformer.py b/spaces/abhishek/sketch-to-image/annotator/uniformer_base/mmseg/models/backbones/uniformer.py deleted file mode 100644 index 0c4bb88e4c928540cca9ab609988b916520f5b7a..0000000000000000000000000000000000000000 --- a/spaces/abhishek/sketch-to-image/annotator/uniformer_base/mmseg/models/backbones/uniformer.py +++ /dev/null @@ -1,422 +0,0 @@ -# -------------------------------------------------------- -# UniFormer -# Copyright (c) 2022 SenseTime X-Lab -# Licensed under The MIT License [see LICENSE for details] -# Written by Kunchang Li -# -------------------------------------------------------- - -from collections import OrderedDict -import math - -from functools import partial -import torch -import torch.nn as nn -import torch.nn.functional as F -import torch.utils.checkpoint as checkpoint -import numpy as np -from timm.models.layers import DropPath, to_2tuple, trunc_normal_ - -from annotator.uniformer.mmcv_custom import load_checkpoint -from annotator.uniformer.mmseg.utils import get_root_logger -from ..builder import BACKBONES - - -class Mlp(nn.Module): - def __init__(self, in_features, hidden_features=None, out_features=None, act_layer=nn.GELU, drop=0.): - super().__init__() - out_features = out_features or in_features - hidden_features = hidden_features or in_features - self.fc1 = nn.Linear(in_features, hidden_features) - self.act = act_layer() - self.fc2 = nn.Linear(hidden_features, out_features) - self.drop = nn.Dropout(drop) - - def forward(self, x): - x = self.fc1(x) - x = self.act(x) - x = self.drop(x) - x = self.fc2(x) - x = self.drop(x) - return x - - -class CMlp(nn.Module): - def __init__(self, in_features, hidden_features=None, out_features=None, act_layer=nn.GELU, drop=0.): - super().__init__() - out_features = out_features or in_features - hidden_features = hidden_features or in_features - self.fc1 = nn.Conv2d(in_features, hidden_features, 1) - self.act = act_layer() - self.fc2 = nn.Conv2d(hidden_features, out_features, 1) - self.drop = nn.Dropout(drop) - - def forward(self, x): - x = self.fc1(x) - x = self.act(x) - x = self.drop(x) - x = self.fc2(x) - x = self.drop(x) - return x - - -class CBlock(nn.Module): - def __init__(self, dim, num_heads, mlp_ratio=4., qkv_bias=False, qk_scale=None, drop=0., attn_drop=0., - drop_path=0., act_layer=nn.GELU, norm_layer=nn.LayerNorm): - super().__init__() - self.pos_embed = nn.Conv2d(dim, dim, 3, padding=1, groups=dim) - self.norm1 = nn.BatchNorm2d(dim) - self.conv1 = nn.Conv2d(dim, dim, 1) - self.conv2 = nn.Conv2d(dim, dim, 1) - self.attn = nn.Conv2d(dim, dim, 5, padding=2, groups=dim) - # NOTE: drop path for stochastic depth, we shall see if this is better than dropout here - self.drop_path = DropPath(drop_path) if drop_path > 0. else nn.Identity() - self.norm2 = nn.BatchNorm2d(dim) - mlp_hidden_dim = int(dim * mlp_ratio) - self.mlp = CMlp(in_features=dim, hidden_features=mlp_hidden_dim, act_layer=act_layer, drop=drop) - - def forward(self, x): - x = x + self.pos_embed(x) - x = x + self.drop_path(self.conv2(self.attn(self.conv1(self.norm1(x))))) - x = x + self.drop_path(self.mlp(self.norm2(x))) - return x - - -class Attention(nn.Module): - def __init__(self, dim, num_heads=8, qkv_bias=False, qk_scale=None, attn_drop=0., proj_drop=0.): - super().__init__() - self.num_heads = num_heads - head_dim = dim // num_heads - # NOTE scale factor was wrong in my original version, can set manually to be compat with prev weights - self.scale = qk_scale or head_dim ** -0.5 - - self.qkv = nn.Linear(dim, dim * 3, bias=qkv_bias) - self.attn_drop = nn.Dropout(attn_drop) - self.proj = nn.Linear(dim, dim) - self.proj_drop = nn.Dropout(proj_drop) - - def forward(self, x): - B, N, C = x.shape - qkv = self.qkv(x).reshape(B, N, 3, self.num_heads, C // self.num_heads).permute(2, 0, 3, 1, 4) - q, k, v = qkv[0], qkv[1], qkv[2] # make torchscript happy (cannot use tensor as tuple) - - attn = (q @ k.transpose(-2, -1)) * self.scale - attn = attn.softmax(dim=-1) - attn = self.attn_drop(attn) - - x = (attn @ v).transpose(1, 2).reshape(B, N, C) - x = self.proj(x) - x = self.proj_drop(x) - return x - - -class SABlock(nn.Module): - def __init__(self, dim, num_heads, mlp_ratio=4., qkv_bias=False, qk_scale=None, drop=0., attn_drop=0., - drop_path=0., act_layer=nn.GELU, norm_layer=nn.LayerNorm): - super().__init__() - self.pos_embed = nn.Conv2d(dim, dim, 3, padding=1, groups=dim) - self.norm1 = norm_layer(dim) - self.attn = Attention( - dim, - num_heads=num_heads, qkv_bias=qkv_bias, qk_scale=qk_scale, - attn_drop=attn_drop, proj_drop=drop) - # NOTE: drop path for stochastic depth, we shall see if this is better than dropout here - self.drop_path = DropPath(drop_path) if drop_path > 0. else nn.Identity() - self.norm2 = norm_layer(dim) - mlp_hidden_dim = int(dim * mlp_ratio) - self.mlp = Mlp(in_features=dim, hidden_features=mlp_hidden_dim, act_layer=act_layer, drop=drop) - - def forward(self, x): - x = x + self.pos_embed(x) - B, N, H, W = x.shape - x = x.flatten(2).transpose(1, 2) - x = x + self.drop_path(self.attn(self.norm1(x))) - x = x + self.drop_path(self.mlp(self.norm2(x))) - x = x.transpose(1, 2).reshape(B, N, H, W) - return x - - -def window_partition(x, window_size): - """ - Args: - x: (B, H, W, C) - window_size (int): window size - Returns: - windows: (num_windows*B, window_size, window_size, C) - """ - B, H, W, C = x.shape - x = x.view(B, H // window_size, window_size, W // window_size, window_size, C) - windows = x.permute(0, 1, 3, 2, 4, 5).contiguous().view(-1, window_size, window_size, C) - return windows - - -def window_reverse(windows, window_size, H, W): - """ - Args: - windows: (num_windows*B, window_size, window_size, C) - window_size (int): Window size - H (int): Height of image - W (int): Width of image - Returns: - x: (B, H, W, C) - """ - B = int(windows.shape[0] / (H * W / window_size / window_size)) - x = windows.view(B, H // window_size, W // window_size, window_size, window_size, -1) - x = x.permute(0, 1, 3, 2, 4, 5).contiguous().view(B, H, W, -1) - return x - - -class SABlock_Windows(nn.Module): - def __init__(self, dim, num_heads, window_size=14, mlp_ratio=4., qkv_bias=False, qk_scale=None, drop=0., attn_drop=0., - drop_path=0., act_layer=nn.GELU, norm_layer=nn.LayerNorm): - super().__init__() - self.window_size=window_size - self.pos_embed = nn.Conv2d(dim, dim, 3, padding=1, groups=dim) - self.norm1 = norm_layer(dim) - self.attn = Attention( - dim, - num_heads=num_heads, qkv_bias=qkv_bias, qk_scale=qk_scale, - attn_drop=attn_drop, proj_drop=drop) - # NOTE: drop path for stochastic depth, we shall see if this is better than dropout here - self.drop_path = DropPath(drop_path) if drop_path > 0. else nn.Identity() - self.norm2 = norm_layer(dim) - mlp_hidden_dim = int(dim * mlp_ratio) - self.mlp = Mlp(in_features=dim, hidden_features=mlp_hidden_dim, act_layer=act_layer, drop=drop) - - def forward(self, x): - x = x + self.pos_embed(x) - x = x.permute(0, 2, 3, 1) - B, H, W, C = x.shape - shortcut = x - x = self.norm1(x) - - pad_l = pad_t = 0 - pad_r = (self.window_size - W % self.window_size) % self.window_size - pad_b = (self.window_size - H % self.window_size) % self.window_size - x = F.pad(x, (0, 0, pad_l, pad_r, pad_t, pad_b)) - _, Hp, Wp, _ = x.shape - - x_windows = window_partition(x, self.window_size) # nW*B, window_size, window_size, C - x_windows = x_windows.view(-1, self.window_size * self.window_size, C) # nW*B, window_size*window_size, C - - # W-MSA/SW-MSA - attn_windows = self.attn(x_windows) # nW*B, window_size*window_size, C - - # merge windows - attn_windows = attn_windows.view(-1, self.window_size, self.window_size, C) - x = window_reverse(attn_windows, self.window_size, Hp, Wp) # B H' W' C - - # reverse cyclic shift - if pad_r > 0 or pad_b > 0: - x = x[:, :H, :W, :].contiguous() - - x = shortcut + self.drop_path(x) - x = x + self.drop_path(self.mlp(self.norm2(x))) - x = x.permute(0, 3, 1, 2).reshape(B, C, H, W) - return x - - -class PatchEmbed(nn.Module): - """ Image to Patch Embedding - """ - def __init__(self, img_size=224, patch_size=16, in_chans=3, embed_dim=768): - super().__init__() - img_size = to_2tuple(img_size) - patch_size = to_2tuple(patch_size) - num_patches = (img_size[1] // patch_size[1]) * (img_size[0] // patch_size[0]) - self.img_size = img_size - self.patch_size = patch_size - self.num_patches = num_patches - self.norm = nn.LayerNorm(embed_dim) - self.proj = nn.Conv2d(in_chans, embed_dim, kernel_size=patch_size, stride=patch_size) - - def forward(self, x): - B, _, H, W = x.shape - x = self.proj(x) - B, _, H, W = x.shape - x = x.flatten(2).transpose(1, 2) - x = self.norm(x) - x = x.reshape(B, H, W, -1).permute(0, 3, 1, 2).contiguous() - return x - - -@BACKBONES.register_module() -class UniFormer(nn.Module): - """ Vision Transformer - A PyTorch impl of : `An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale` - - https://arxiv.org/abs/2010.11929 - """ - def __init__(self, layers=[3, 4, 8, 3], img_size=224, in_chans=3, num_classes=80, embed_dim=[64, 128, 320, 512], - head_dim=64, mlp_ratio=4., qkv_bias=True, qk_scale=None, representation_size=None, - drop_rate=0., attn_drop_rate=0., drop_path_rate=0., norm_layer=partial(nn.LayerNorm, eps=1e-6), - pretrained_path=None, use_checkpoint=False, checkpoint_num=[0, 0, 0, 0], - windows=False, hybrid=False, window_size=14): - """ - Args: - layer (list): number of block in each layer - img_size (int, tuple): input image size - in_chans (int): number of input channels - num_classes (int): number of classes for classification head - embed_dim (int): embedding dimension - head_dim (int): dimension of attention heads - mlp_ratio (int): ratio of mlp hidden dim to embedding dim - qkv_bias (bool): enable bias for qkv if True - qk_scale (float): override default qk scale of head_dim ** -0.5 if set - representation_size (Optional[int]): enable and set representation layer (pre-logits) to this value if set - drop_rate (float): dropout rate - attn_drop_rate (float): attention dropout rate - drop_path_rate (float): stochastic depth rate - norm_layer (nn.Module): normalization layer - pretrained_path (str): path of pretrained model - use_checkpoint (bool): whether use checkpoint - checkpoint_num (list): index for using checkpoint in every stage - windows (bool): whether use window MHRA - hybrid (bool): whether use hybrid MHRA - window_size (int): size of window (>14) - """ - super().__init__() - self.num_classes = num_classes - self.use_checkpoint = use_checkpoint - self.checkpoint_num = checkpoint_num - self.windows = windows - print(f'Use Checkpoint: {self.use_checkpoint}') - print(f'Checkpoint Number: {self.checkpoint_num}') - self.num_features = self.embed_dim = embed_dim # num_features for consistency with other models - norm_layer = norm_layer or partial(nn.LayerNorm, eps=1e-6) - - self.patch_embed1 = PatchEmbed( - img_size=img_size, patch_size=4, in_chans=in_chans, embed_dim=embed_dim[0]) - self.patch_embed2 = PatchEmbed( - img_size=img_size // 4, patch_size=2, in_chans=embed_dim[0], embed_dim=embed_dim[1]) - self.patch_embed3 = PatchEmbed( - img_size=img_size // 8, patch_size=2, in_chans=embed_dim[1], embed_dim=embed_dim[2]) - self.patch_embed4 = PatchEmbed( - img_size=img_size // 16, patch_size=2, in_chans=embed_dim[2], embed_dim=embed_dim[3]) - - self.pos_drop = nn.Dropout(p=drop_rate) - dpr = [x.item() for x in torch.linspace(0, drop_path_rate, sum(layers))] # stochastic depth decay rule - num_heads = [dim // head_dim for dim in embed_dim] - self.blocks1 = nn.ModuleList([ - CBlock( - dim=embed_dim[0], num_heads=num_heads[0], mlp_ratio=mlp_ratio, qkv_bias=qkv_bias, qk_scale=qk_scale, - drop=drop_rate, attn_drop=attn_drop_rate, drop_path=dpr[i], norm_layer=norm_layer) - for i in range(layers[0])]) - self.norm1=norm_layer(embed_dim[0]) - self.blocks2 = nn.ModuleList([ - CBlock( - dim=embed_dim[1], num_heads=num_heads[1], mlp_ratio=mlp_ratio, qkv_bias=qkv_bias, qk_scale=qk_scale, - drop=drop_rate, attn_drop=attn_drop_rate, drop_path=dpr[i+layers[0]], norm_layer=norm_layer) - for i in range(layers[1])]) - self.norm2 = norm_layer(embed_dim[1]) - if self.windows: - print('Use local window for all blocks in stage3') - self.blocks3 = nn.ModuleList([ - SABlock_Windows( - dim=embed_dim[2], num_heads=num_heads[2], window_size=window_size, mlp_ratio=mlp_ratio, qkv_bias=qkv_bias, qk_scale=qk_scale, - drop=drop_rate, attn_drop=attn_drop_rate, drop_path=dpr[i+layers[0]+layers[1]], norm_layer=norm_layer) - for i in range(layers[2])]) - elif hybrid: - print('Use hybrid window for blocks in stage3') - block3 = [] - for i in range(layers[2]): - if (i + 1) % 4 == 0: - block3.append(SABlock( - dim=embed_dim[2], num_heads=num_heads[2], mlp_ratio=mlp_ratio, qkv_bias=qkv_bias, qk_scale=qk_scale, - drop=drop_rate, attn_drop=attn_drop_rate, drop_path=dpr[i+layers[0]+layers[1]], norm_layer=norm_layer)) - else: - block3.append(SABlock_Windows( - dim=embed_dim[2], num_heads=num_heads[2], window_size=window_size, mlp_ratio=mlp_ratio, qkv_bias=qkv_bias, qk_scale=qk_scale, - drop=drop_rate, attn_drop=attn_drop_rate, drop_path=dpr[i+layers[0]+layers[1]], norm_layer=norm_layer)) - self.blocks3 = nn.ModuleList(block3) - else: - print('Use global window for all blocks in stage3') - self.blocks3 = nn.ModuleList([ - SABlock( - dim=embed_dim[2], num_heads=num_heads[2], mlp_ratio=mlp_ratio, qkv_bias=qkv_bias, qk_scale=qk_scale, - drop=drop_rate, attn_drop=attn_drop_rate, drop_path=dpr[i+layers[0]+layers[1]], norm_layer=norm_layer) - for i in range(layers[2])]) - self.norm3 = norm_layer(embed_dim[2]) - self.blocks4 = nn.ModuleList([ - SABlock( - dim=embed_dim[3], num_heads=num_heads[3], mlp_ratio=mlp_ratio, qkv_bias=qkv_bias, qk_scale=qk_scale, - drop=drop_rate, attn_drop=attn_drop_rate, drop_path=dpr[i+layers[0]+layers[1]+layers[2]], norm_layer=norm_layer) - for i in range(layers[3])]) - self.norm4 = norm_layer(embed_dim[3]) - - # Representation layer - if representation_size: - self.num_features = representation_size - self.pre_logits = nn.Sequential(OrderedDict([ - ('fc', nn.Linear(embed_dim, representation_size)), - ('act', nn.Tanh()) - ])) - else: - self.pre_logits = nn.Identity() - - self.apply(self._init_weights) - self.init_weights(pretrained=pretrained_path) - - def init_weights(self, pretrained): - if isinstance(pretrained, str): - logger = get_root_logger() - load_checkpoint(self, pretrained, map_location='cpu', strict=False, logger=logger) - print(f'Load pretrained model from {pretrained}') - def _init_weights(self, m): - if isinstance(m, nn.Linear): - trunc_normal_(m.weight, std=.02) - if isinstance(m, nn.Linear) and m.bias is not None: - nn.init.constant_(m.bias, 0) - elif isinstance(m, nn.LayerNorm): - nn.init.constant_(m.bias, 0) - nn.init.constant_(m.weight, 1.0) - - @torch.jit.ignore - def no_weight_decay(self): - return {'pos_embed', 'cls_token'} - - def get_classifier(self): - return self.head - - def reset_classifier(self, num_classes, global_pool=''): - self.num_classes = num_classes - self.head = nn.Linear(self.embed_dim, num_classes) if num_classes > 0 else nn.Identity() - - def forward_features(self, x): - out = [] - x = self.patch_embed1(x) - x = self.pos_drop(x) - for i, blk in enumerate(self.blocks1): - if self.use_checkpoint and i < self.checkpoint_num[0]: - x = checkpoint.checkpoint(blk, x) - else: - x = blk(x) - x_out = self.norm1(x.permute(0, 2, 3, 1)) - out.append(x_out.permute(0, 3, 1, 2).contiguous()) - x = self.patch_embed2(x) - for i, blk in enumerate(self.blocks2): - if self.use_checkpoint and i < self.checkpoint_num[1]: - x = checkpoint.checkpoint(blk, x) - else: - x = blk(x) - x_out = self.norm2(x.permute(0, 2, 3, 1)) - out.append(x_out.permute(0, 3, 1, 2).contiguous()) - x = self.patch_embed3(x) - for i, blk in enumerate(self.blocks3): - if self.use_checkpoint and i < self.checkpoint_num[2]: - x = checkpoint.checkpoint(blk, x) - else: - x = blk(x) - x_out = self.norm3(x.permute(0, 2, 3, 1)) - out.append(x_out.permute(0, 3, 1, 2).contiguous()) - x = self.patch_embed4(x) - for i, blk in enumerate(self.blocks4): - if self.use_checkpoint and i < self.checkpoint_num[3]: - x = checkpoint.checkpoint(blk, x) - else: - x = blk(x) - x_out = self.norm4(x.permute(0, 2, 3, 1)) - out.append(x_out.permute(0, 3, 1, 2).contiguous()) - return tuple(out) - - def forward(self, x): - x = self.forward_features(x) - return x diff --git a/spaces/abhishek/sketch-to-image/annotator/uniformer_base/mmseg/models/backbones/vit.py b/spaces/abhishek/sketch-to-image/annotator/uniformer_base/mmseg/models/backbones/vit.py deleted file mode 100644 index 59e4479650690e08cbc4cab9427aefda47c2116d..0000000000000000000000000000000000000000 --- a/spaces/abhishek/sketch-to-image/annotator/uniformer_base/mmseg/models/backbones/vit.py +++ /dev/null @@ -1,459 +0,0 @@ -"""Modified from https://github.com/rwightman/pytorch-image- -models/blob/master/timm/models/vision_transformer.py.""" - -import math - -import torch -import torch.nn as nn -import torch.nn.functional as F -import torch.utils.checkpoint as cp -from annotator.uniformer.mmcv.cnn import (Conv2d, Linear, build_activation_layer, build_norm_layer, - constant_init, kaiming_init, normal_init) -from annotator.uniformer.mmcv.runner import _load_checkpoint -from annotator.uniformer.mmcv.utils.parrots_wrapper import _BatchNorm - -from annotator.uniformer.mmseg.utils import get_root_logger -from ..builder import BACKBONES -from ..utils import DropPath, trunc_normal_ - - -class Mlp(nn.Module): - """MLP layer for Encoder block. - - Args: - in_features(int): Input dimension for the first fully - connected layer. - hidden_features(int): Output dimension for the first fully - connected layer. - out_features(int): Output dementsion for the second fully - connected layer. - act_cfg(dict): Config dict for activation layer. - Default: dict(type='GELU'). - drop(float): Drop rate for the dropout layer. Dropout rate has - to be between 0 and 1. Default: 0. - """ - - def __init__(self, - in_features, - hidden_features=None, - out_features=None, - act_cfg=dict(type='GELU'), - drop=0.): - super(Mlp, self).__init__() - out_features = out_features or in_features - hidden_features = hidden_features or in_features - self.fc1 = Linear(in_features, hidden_features) - self.act = build_activation_layer(act_cfg) - self.fc2 = Linear(hidden_features, out_features) - self.drop = nn.Dropout(drop) - - def forward(self, x): - x = self.fc1(x) - x = self.act(x) - x = self.drop(x) - x = self.fc2(x) - x = self.drop(x) - return x - - -class Attention(nn.Module): - """Attention layer for Encoder block. - - Args: - dim (int): Dimension for the input vector. - num_heads (int): Number of parallel attention heads. - qkv_bias (bool): Enable bias for qkv if True. Default: False. - qk_scale (float): Override default qk scale of head_dim ** -0.5 if set. - attn_drop (float): Drop rate for attention output weights. - Default: 0. - proj_drop (float): Drop rate for output weights. Default: 0. - """ - - def __init__(self, - dim, - num_heads=8, - qkv_bias=False, - qk_scale=None, - attn_drop=0., - proj_drop=0.): - super(Attention, self).__init__() - self.num_heads = num_heads - head_dim = dim // num_heads - self.scale = qk_scale or head_dim**-0.5 - - self.qkv = nn.Linear(dim, dim * 3, bias=qkv_bias) - self.attn_drop = nn.Dropout(attn_drop) - self.proj = Linear(dim, dim) - self.proj_drop = nn.Dropout(proj_drop) - - def forward(self, x): - b, n, c = x.shape - qkv = self.qkv(x).reshape(b, n, 3, self.num_heads, - c // self.num_heads).permute(2, 0, 3, 1, 4) - q, k, v = qkv[0], qkv[1], qkv[2] - - attn = (q @ k.transpose(-2, -1)) * self.scale - attn = attn.softmax(dim=-1) - attn = self.attn_drop(attn) - - x = (attn @ v).transpose(1, 2).reshape(b, n, c) - x = self.proj(x) - x = self.proj_drop(x) - return x - - -class Block(nn.Module): - """Implements encoder block with residual connection. - - Args: - dim (int): The feature dimension. - num_heads (int): Number of parallel attention heads. - mlp_ratio (int): Ratio of mlp hidden dim to embedding dim. - qk_scale (float): Override default qk scale of head_dim ** -0.5 if set. - drop (float): Drop rate for mlp output weights. Default: 0. - attn_drop (float): Drop rate for attention output weights. - Default: 0. - proj_drop (float): Drop rate for attn layer output weights. - Default: 0. - drop_path (float): Drop rate for paths of model. - Default: 0. - act_cfg (dict): Config dict for activation layer. - Default: dict(type='GELU'). - norm_cfg (dict): Config dict for normalization layer. - Default: dict(type='LN', requires_grad=True). - with_cp (bool): Use checkpoint or not. Using checkpoint will save some - memory while slowing down the training speed. Default: False. - """ - - def __init__(self, - dim, - num_heads, - mlp_ratio=4, - qkv_bias=False, - qk_scale=None, - drop=0., - attn_drop=0., - proj_drop=0., - drop_path=0., - act_cfg=dict(type='GELU'), - norm_cfg=dict(type='LN', eps=1e-6), - with_cp=False): - super(Block, self).__init__() - self.with_cp = with_cp - _, self.norm1 = build_norm_layer(norm_cfg, dim) - self.attn = Attention(dim, num_heads, qkv_bias, qk_scale, attn_drop, - proj_drop) - self.drop_path = DropPath( - drop_path) if drop_path > 0. else nn.Identity() - _, self.norm2 = build_norm_layer(norm_cfg, dim) - mlp_hidden_dim = int(dim * mlp_ratio) - self.mlp = Mlp( - in_features=dim, - hidden_features=mlp_hidden_dim, - act_cfg=act_cfg, - drop=drop) - - def forward(self, x): - - def _inner_forward(x): - out = x + self.drop_path(self.attn(self.norm1(x))) - out = out + self.drop_path(self.mlp(self.norm2(out))) - return out - - if self.with_cp and x.requires_grad: - out = cp.checkpoint(_inner_forward, x) - else: - out = _inner_forward(x) - - return out - - -class PatchEmbed(nn.Module): - """Image to Patch Embedding. - - Args: - img_size (int | tuple): Input image size. - default: 224. - patch_size (int): Width and height for a patch. - default: 16. - in_channels (int): Input channels for images. Default: 3. - embed_dim (int): The embedding dimension. Default: 768. - """ - - def __init__(self, - img_size=224, - patch_size=16, - in_channels=3, - embed_dim=768): - super(PatchEmbed, self).__init__() - if isinstance(img_size, int): - self.img_size = (img_size, img_size) - elif isinstance(img_size, tuple): - self.img_size = img_size - else: - raise TypeError('img_size must be type of int or tuple') - h, w = self.img_size - self.patch_size = (patch_size, patch_size) - self.num_patches = (h // patch_size) * (w // patch_size) - self.proj = Conv2d( - in_channels, embed_dim, kernel_size=patch_size, stride=patch_size) - - def forward(self, x): - return self.proj(x).flatten(2).transpose(1, 2) - - -@BACKBONES.register_module() -class VisionTransformer(nn.Module): - """Vision transformer backbone. - - A PyTorch impl of : `An Image is Worth 16x16 Words: Transformers for - Image Recognition at Scale` - https://arxiv.org/abs/2010.11929 - - Args: - img_size (tuple): input image size. Default: (224, 224). - patch_size (int, tuple): patch size. Default: 16. - in_channels (int): number of input channels. Default: 3. - embed_dim (int): embedding dimension. Default: 768. - depth (int): depth of transformer. Default: 12. - num_heads (int): number of attention heads. Default: 12. - mlp_ratio (int): ratio of mlp hidden dim to embedding dim. - Default: 4. - out_indices (list | tuple | int): Output from which stages. - Default: -1. - qkv_bias (bool): enable bias for qkv if True. Default: True. - qk_scale (float): override default qk scale of head_dim ** -0.5 if set. - drop_rate (float): dropout rate. Default: 0. - attn_drop_rate (float): attention dropout rate. Default: 0. - drop_path_rate (float): Rate of DropPath. Default: 0. - norm_cfg (dict): Config dict for normalization layer. - Default: dict(type='LN', eps=1e-6, requires_grad=True). - act_cfg (dict): Config dict for activation layer. - Default: dict(type='GELU'). - norm_eval (bool): Whether to set norm layers to eval mode, namely, - freeze running stats (mean and var). Note: Effect on Batch Norm - and its variants only. Default: False. - final_norm (bool): Whether to add a additional layer to normalize - final feature map. Default: False. - interpolate_mode (str): Select the interpolate mode for position - embeding vector resize. Default: bicubic. - with_cls_token (bool): If concatenating class token into image tokens - as transformer input. Default: True. - with_cp (bool): Use checkpoint or not. Using checkpoint - will save some memory while slowing down the training speed. - Default: False. - """ - - def __init__(self, - img_size=(224, 224), - patch_size=16, - in_channels=3, - embed_dim=768, - depth=12, - num_heads=12, - mlp_ratio=4, - out_indices=11, - qkv_bias=True, - qk_scale=None, - drop_rate=0., - attn_drop_rate=0., - drop_path_rate=0., - norm_cfg=dict(type='LN', eps=1e-6, requires_grad=True), - act_cfg=dict(type='GELU'), - norm_eval=False, - final_norm=False, - with_cls_token=True, - interpolate_mode='bicubic', - with_cp=False): - super(VisionTransformer, self).__init__() - self.img_size = img_size - self.patch_size = patch_size - self.features = self.embed_dim = embed_dim - self.patch_embed = PatchEmbed( - img_size=img_size, - patch_size=patch_size, - in_channels=in_channels, - embed_dim=embed_dim) - - self.with_cls_token = with_cls_token - self.cls_token = nn.Parameter(torch.zeros(1, 1, self.embed_dim)) - self.pos_embed = nn.Parameter( - torch.zeros(1, self.patch_embed.num_patches + 1, embed_dim)) - self.pos_drop = nn.Dropout(p=drop_rate) - - if isinstance(out_indices, int): - self.out_indices = [out_indices] - elif isinstance(out_indices, list) or isinstance(out_indices, tuple): - self.out_indices = out_indices - else: - raise TypeError('out_indices must be type of int, list or tuple') - - dpr = [x.item() for x in torch.linspace(0, drop_path_rate, depth) - ] # stochastic depth decay rule - self.blocks = nn.ModuleList([ - Block( - dim=embed_dim, - num_heads=num_heads, - mlp_ratio=mlp_ratio, - qkv_bias=qkv_bias, - qk_scale=qk_scale, - drop=dpr[i], - attn_drop=attn_drop_rate, - act_cfg=act_cfg, - norm_cfg=norm_cfg, - with_cp=with_cp) for i in range(depth) - ]) - - self.interpolate_mode = interpolate_mode - self.final_norm = final_norm - if final_norm: - _, self.norm = build_norm_layer(norm_cfg, embed_dim) - - self.norm_eval = norm_eval - self.with_cp = with_cp - - def init_weights(self, pretrained=None): - if isinstance(pretrained, str): - logger = get_root_logger() - checkpoint = _load_checkpoint(pretrained, logger=logger) - if 'state_dict' in checkpoint: - state_dict = checkpoint['state_dict'] - else: - state_dict = checkpoint - - if 'pos_embed' in state_dict.keys(): - if self.pos_embed.shape != state_dict['pos_embed'].shape: - logger.info(msg=f'Resize the pos_embed shape from \ -{state_dict["pos_embed"].shape} to {self.pos_embed.shape}') - h, w = self.img_size - pos_size = int( - math.sqrt(state_dict['pos_embed'].shape[1] - 1)) - state_dict['pos_embed'] = self.resize_pos_embed( - state_dict['pos_embed'], (h, w), (pos_size, pos_size), - self.patch_size, self.interpolate_mode) - - self.load_state_dict(state_dict, False) - - elif pretrained is None: - # We only implement the 'jax_impl' initialization implemented at - # https://github.com/rwightman/pytorch-image-models/blob/master/timm/models/vision_transformer.py#L353 # noqa: E501 - trunc_normal_(self.pos_embed, std=.02) - trunc_normal_(self.cls_token, std=.02) - for n, m in self.named_modules(): - if isinstance(m, Linear): - trunc_normal_(m.weight, std=.02) - if m.bias is not None: - if 'mlp' in n: - normal_init(m.bias, std=1e-6) - else: - constant_init(m.bias, 0) - elif isinstance(m, Conv2d): - kaiming_init(m.weight, mode='fan_in') - if m.bias is not None: - constant_init(m.bias, 0) - elif isinstance(m, (_BatchNorm, nn.GroupNorm, nn.LayerNorm)): - constant_init(m.bias, 0) - constant_init(m.weight, 1.0) - else: - raise TypeError('pretrained must be a str or None') - - def _pos_embeding(self, img, patched_img, pos_embed): - """Positiong embeding method. - - Resize the pos_embed, if the input image size doesn't match - the training size. - Args: - img (torch.Tensor): The inference image tensor, the shape - must be [B, C, H, W]. - patched_img (torch.Tensor): The patched image, it should be - shape of [B, L1, C]. - pos_embed (torch.Tensor): The pos_embed weighs, it should be - shape of [B, L2, c]. - Return: - torch.Tensor: The pos encoded image feature. - """ - assert patched_img.ndim == 3 and pos_embed.ndim == 3, \ - 'the shapes of patched_img and pos_embed must be [B, L, C]' - x_len, pos_len = patched_img.shape[1], pos_embed.shape[1] - if x_len != pos_len: - if pos_len == (self.img_size[0] // self.patch_size) * ( - self.img_size[1] // self.patch_size) + 1: - pos_h = self.img_size[0] // self.patch_size - pos_w = self.img_size[1] // self.patch_size - else: - raise ValueError( - 'Unexpected shape of pos_embed, got {}.'.format( - pos_embed.shape)) - pos_embed = self.resize_pos_embed(pos_embed, img.shape[2:], - (pos_h, pos_w), self.patch_size, - self.interpolate_mode) - return self.pos_drop(patched_img + pos_embed) - - @staticmethod - def resize_pos_embed(pos_embed, input_shpae, pos_shape, patch_size, mode): - """Resize pos_embed weights. - - Resize pos_embed using bicubic interpolate method. - Args: - pos_embed (torch.Tensor): pos_embed weights. - input_shpae (tuple): Tuple for (input_h, intput_w). - pos_shape (tuple): Tuple for (pos_h, pos_w). - patch_size (int): Patch size. - Return: - torch.Tensor: The resized pos_embed of shape [B, L_new, C] - """ - assert pos_embed.ndim == 3, 'shape of pos_embed must be [B, L, C]' - input_h, input_w = input_shpae - pos_h, pos_w = pos_shape - cls_token_weight = pos_embed[:, 0] - pos_embed_weight = pos_embed[:, (-1 * pos_h * pos_w):] - pos_embed_weight = pos_embed_weight.reshape( - 1, pos_h, pos_w, pos_embed.shape[2]).permute(0, 3, 1, 2) - pos_embed_weight = F.interpolate( - pos_embed_weight, - size=[input_h // patch_size, input_w // patch_size], - align_corners=False, - mode=mode) - cls_token_weight = cls_token_weight.unsqueeze(1) - pos_embed_weight = torch.flatten(pos_embed_weight, 2).transpose(1, 2) - pos_embed = torch.cat((cls_token_weight, pos_embed_weight), dim=1) - return pos_embed - - def forward(self, inputs): - B = inputs.shape[0] - - x = self.patch_embed(inputs) - - cls_tokens = self.cls_token.expand(B, -1, -1) - x = torch.cat((cls_tokens, x), dim=1) - x = self._pos_embeding(inputs, x, self.pos_embed) - - if not self.with_cls_token: - # Remove class token for transformer input - x = x[:, 1:] - - outs = [] - for i, blk in enumerate(self.blocks): - x = blk(x) - if i == len(self.blocks) - 1: - if self.final_norm: - x = self.norm(x) - if i in self.out_indices: - if self.with_cls_token: - # Remove class token and reshape token for decoder head - out = x[:, 1:] - else: - out = x - B, _, C = out.shape - out = out.reshape(B, inputs.shape[2] // self.patch_size, - inputs.shape[3] // self.patch_size, - C).permute(0, 3, 1, 2) - outs.append(out) - - return tuple(outs) - - def train(self, mode=True): - super(VisionTransformer, self).train(mode) - if mode and self.norm_eval: - for m in self.modules(): - if isinstance(m, nn.LayerNorm): - m.eval() diff --git a/spaces/abidlabs/gpt-talking-portrait/README.md b/spaces/abidlabs/gpt-talking-portrait/README.md deleted file mode 100644 index 8a901b26e05a20a46528736df1178435bc9e9139..0000000000000000000000000000000000000000 --- a/spaces/abidlabs/gpt-talking-portrait/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: GPT Talking Portrait -emoji: 👄 -colorFrom: pink -colorTo: blue -sdk: gradio -sdk_version: 3.17.1b2 -app_file: app.py -pinned: true -duplicated_from: fffiloni/gpt-talking-portrait ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference \ No newline at end of file diff --git a/spaces/abrar-lohia/text-2-character-anim/pyrender/.eggs/pyglet-2.0.5-py3.10.egg/pyglet/app/xlib.py b/spaces/abrar-lohia/text-2-character-anim/pyrender/.eggs/pyglet-2.0.5-py3.10.egg/pyglet/app/xlib.py deleted file mode 100644 index 1563e99f5038bb8b0f654dce65194ced43a38520..0000000000000000000000000000000000000000 --- a/spaces/abrar-lohia/text-2-character-anim/pyrender/.eggs/pyglet-2.0.5-py3.10.egg/pyglet/app/xlib.py +++ /dev/null @@ -1,88 +0,0 @@ -import os -import select -import threading - -from pyglet import app -from pyglet.app.base import PlatformEventLoop - - -class XlibSelectDevice: - def fileno(self): - """Get the file handle for ``select()`` for this device. - - :rtype: int - """ - raise NotImplementedError('abstract') - - def select(self): - """Perform event processing on the device. - - Called when ``select()`` returns this device in its list of active - files. - """ - raise NotImplementedError('abstract') - - def poll(self): - """Check if the device has events ready to process. - - :rtype: bool - :return: True if there are events to process, False otherwise. - """ - return False - - -class NotificationDevice(XlibSelectDevice): - def __init__(self): - self._sync_file_read, self._sync_file_write = os.pipe() - self._event = threading.Event() - - def fileno(self): - return self._sync_file_read - - def set(self): - self._event.set() - os.write(self._sync_file_write, b'1') - - def select(self): - self._event.clear() - os.read(self._sync_file_read, 1) - app.platform_event_loop.dispatch_posted_events() - - def poll(self): - return self._event.is_set() - - -class XlibEventLoop(PlatformEventLoop): - def __init__(self): - super(XlibEventLoop, self).__init__() - self._notification_device = NotificationDevice() - self.select_devices = set() - self.select_devices.add(self._notification_device) - - def notify(self): - self._notification_device.set() - - def step(self, timeout=None): - # Timeout is from EventLoop.idle(). Return after that timeout or directly - # after receiving a new event. None means: block for user input. - - # Poll devices to check for already pending events (select.select is not enough) - pending_devices = [] - for device in self.select_devices: - if device.poll(): - pending_devices.append(device) - - # If no devices were ready, wait until one gets ready - if not pending_devices: - pending_devices, _, _ = select.select(self.select_devices, (), (), timeout) - - if not pending_devices: - # Notify caller that timeout expired without incoming events - return False - - # Dispatch activity on matching devices - for device in pending_devices: - device.select() - - # Notify caller that events were handled before timeout expired - return True diff --git a/spaces/abrar-lohia/text-2-character-anim/pyrender/.eggs/pyglet-2.0.5-py3.10.egg/pyglet/input/linux/evdev_constants.py b/spaces/abrar-lohia/text-2-character-anim/pyrender/.eggs/pyglet-2.0.5-py3.10.egg/pyglet/input/linux/evdev_constants.py deleted file mode 100644 index a8a2a6d1988ded60a8050d335b8886fb8257d211..0000000000000000000000000000000000000000 --- a/spaces/abrar-lohia/text-2-character-anim/pyrender/.eggs/pyglet-2.0.5-py3.10.egg/pyglet/input/linux/evdev_constants.py +++ /dev/null @@ -1,588 +0,0 @@ -"""Event constants from /usr/include/linux/input.h """ - -EV_SYN = 0x00 -EV_KEY = 0x01 -EV_REL = 0x02 -EV_ABS = 0x03 -EV_MSC = 0x04 -EV_LED = 0x11 -EV_SND = 0x12 -EV_REP = 0x14 -EV_FF = 0x15 -EV_PWR = 0x16 -EV_FF_STATUS = 0x17 -EV_MAX = 0x1f - -# Synchronization events. - -SYN_REPORT = 0 -SYN_CONFIG = 1 - -# Keys and buttons - -KEY_RESERVED = 0 -KEY_ESC = 1 -KEY_1 = 2 -KEY_2 = 3 -KEY_3 = 4 -KEY_4 = 5 -KEY_5 = 6 -KEY_6 = 7 -KEY_7 = 8 -KEY_8 = 9 -KEY_9 = 10 -KEY_0 = 11 -KEY_MINUS = 12 -KEY_EQUAL = 13 -KEY_BACKSPACE = 14 -KEY_TAB = 15 -KEY_Q = 16 -KEY_W = 17 -KEY_E = 18 -KEY_R = 19 -KEY_T = 20 -KEY_Y = 21 -KEY_U = 22 -KEY_I = 23 -KEY_O = 24 -KEY_P = 25 -KEY_LEFTBRACE = 26 -KEY_RIGHTBRACE = 27 -KEY_ENTER = 28 -KEY_LEFTCTRL = 29 -KEY_A = 30 -KEY_S = 31 -KEY_D = 32 -KEY_F = 33 -KEY_G = 34 -KEY_H = 35 -KEY_J = 36 -KEY_K = 37 -KEY_L = 38 -KEY_SEMICOLON = 39 -KEY_APOSTROPHE = 40 -KEY_GRAVE = 41 -KEY_LEFTSHIFT = 42 -KEY_BACKSLASH = 43 -KEY_Z = 44 -KEY_X = 45 -KEY_C = 46 -KEY_V = 47 -KEY_B = 48 -KEY_N = 49 -KEY_M = 50 -KEY_COMMA = 51 -KEY_DOT = 52 -KEY_SLASH = 53 -KEY_RIGHTSHIFT = 54 -KEY_KPASTERISK = 55 -KEY_LEFTALT = 56 -KEY_SPACE = 57 -KEY_CAPSLOCK = 58 -KEY_F1 = 59 -KEY_F2 = 60 -KEY_F3 = 61 -KEY_F4 = 62 -KEY_F5 = 63 -KEY_F6 = 64 -KEY_F7 = 65 -KEY_F8 = 66 -KEY_F9 = 67 -KEY_F10 = 68 -KEY_NUMLOCK = 69 -KEY_SCROLLLOCK = 70 -KEY_KP7 = 71 -KEY_KP8 = 72 -KEY_KP9 = 73 -KEY_KPMINUS = 74 -KEY_KP4 = 75 -KEY_KP5 = 76 -KEY_KP6 = 77 -KEY_KPPLUS = 78 -KEY_KP1 = 79 -KEY_KP2 = 80 -KEY_KP3 = 81 -KEY_KP0 = 82 -KEY_KPDOT = 83 - -KEY_ZENKAKUHANKAKU = 85 -KEY_102ND = 86 -KEY_F11 = 87 -KEY_F12 = 88 -KEY_RO = 89 -KEY_KATAKANA = 90 -KEY_HIRAGANA = 91 -KEY_HENKAN = 92 -KEY_KATAKANAHIRAGANA = 93 -KEY_MUHENKAN = 94 -KEY_KPJPCOMMA = 95 -KEY_KPENTER = 96 -KEY_RIGHTCTRL = 97 -KEY_KPSLASH = 98 -KEY_SYSRQ = 99 -KEY_RIGHTALT = 100 -KEY_LINEFEED = 101 -KEY_HOME = 102 -KEY_UP = 103 -KEY_PAGEUP = 104 -KEY_LEFT = 105 -KEY_RIGHT = 106 -KEY_END = 107 -KEY_DOWN = 108 -KEY_PAGEDOWN = 109 -KEY_INSERT = 110 -KEY_DELETE = 111 -KEY_MACRO = 112 -KEY_MUTE = 113 -KEY_VOLUMEDOWN = 114 -KEY_VOLUMEUP = 115 -KEY_POWER = 116 -KEY_KPEQUAL = 117 -KEY_KPPLUSMINUS = 118 -KEY_PAUSE = 119 - -KEY_KPCOMMA = 121 -KEY_HANGUEL = 122 -KEY_HANJA = 123 -KEY_YEN = 124 -KEY_LEFTMETA = 125 -KEY_RIGHTMETA = 126 -KEY_COMPOSE = 127 - -KEY_STOP = 128 -KEY_AGAIN = 129 -KEY_PROPS = 130 -KEY_UNDO = 131 -KEY_FRONT = 132 -KEY_COPY = 133 -KEY_OPEN = 134 -KEY_PASTE = 135 -KEY_FIND = 136 -KEY_CUT = 137 -KEY_HELP = 138 -KEY_MENU = 139 -KEY_CALC = 140 -KEY_SETUP = 141 -KEY_SLEEP = 142 -KEY_WAKEUP = 143 -KEY_FILE = 144 -KEY_SENDFILE = 145 -KEY_DELETEFILE = 146 -KEY_XFER = 147 -KEY_PROG1 = 148 -KEY_PROG2 = 149 -KEY_WWW = 150 -KEY_MSDOS = 151 -KEY_COFFEE = 152 -KEY_DIRECTION = 153 -KEY_CYCLEWINDOWS = 154 -KEY_MAIL = 155 -KEY_BOOKMARKS = 156 -KEY_COMPUTER = 157 -KEY_BACK = 158 -KEY_FORWARD = 159 -KEY_CLOSECD = 160 -KEY_EJECTCD = 161 -KEY_EJECTCLOSECD = 162 -KEY_NEXTSONG = 163 -KEY_PLAYPAUSE = 164 -KEY_PREVIOUSSONG = 165 -KEY_STOPCD = 166 -KEY_RECORD = 167 -KEY_REWIND = 168 -KEY_PHONE = 169 -KEY_ISO = 170 -KEY_CONFIG = 171 -KEY_HOMEPAGE = 172 -KEY_REFRESH = 173 -KEY_EXIT = 174 -KEY_MOVE = 175 -KEY_EDIT = 176 -KEY_SCROLLUP = 177 -KEY_SCROLLDOWN = 178 -KEY_KPLEFTPAREN = 179 -KEY_KPRIGHTPAREN = 180 - -KEY_F13 = 183 -KEY_F14 = 184 -KEY_F15 = 185 -KEY_F16 = 186 -KEY_F17 = 187 -KEY_F18 = 188 -KEY_F19 = 189 -KEY_F20 = 190 -KEY_F21 = 191 -KEY_F22 = 192 -KEY_F23 = 193 -KEY_F24 = 194 - -KEY_PLAYCD = 200 -KEY_PAUSECD = 201 -KEY_PROG3 = 202 -KEY_PROG4 = 203 -KEY_SUSPEND = 205 -KEY_CLOSE = 206 -KEY_PLAY = 207 -KEY_FASTFORWARD = 208 -KEY_BASSBOOST = 209 -KEY_PRINT = 210 -KEY_HP = 211 -KEY_CAMERA = 212 -KEY_SOUND = 213 -KEY_QUESTION = 214 -KEY_EMAIL = 215 -KEY_CHAT = 216 -KEY_SEARCH = 217 -KEY_CONNECT = 218 -KEY_FINANCE = 219 -KEY_SPORT = 220 -KEY_SHOP = 221 -KEY_ALTERASE = 222 -KEY_CANCEL = 223 -KEY_BRIGHTNESSDOWN = 224 -KEY_BRIGHTNESSUP = 225 -KEY_MEDIA = 226 - -KEY_UNKNOWN = 240 - -BTN_MISC = 0x100 -BTN_0 = 0x100 -BTN_1 = 0x101 -BTN_2 = 0x102 -BTN_3 = 0x103 -BTN_4 = 0x104 -BTN_5 = 0x105 -BTN_6 = 0x106 -BTN_7 = 0x107 -BTN_8 = 0x108 -BTN_9 = 0x109 - -BTN_MOUSE = 0x110 -BTN_LEFT = 0x110 -BTN_RIGHT = 0x111 -BTN_MIDDLE = 0x112 -BTN_SIDE = 0x113 -BTN_EXTRA = 0x114 -BTN_FORWARD = 0x115 -BTN_BACK = 0x116 -BTN_TASK = 0x117 - -BTN_JOYSTICK = 0x120 -BTN_TRIGGER = 0x120 -BTN_THUMB = 0x121 -BTN_THUMB2 = 0x122 -BTN_TOP = 0x123 -BTN_TOP2 = 0x124 -BTN_PINKIE = 0x125 -BTN_BASE = 0x126 -BTN_BASE2 = 0x127 -BTN_BASE3 = 0x128 -BTN_BASE4 = 0x129 -BTN_BASE5 = 0x12a -BTN_BASE6 = 0x12b -BTN_DEAD = 0x12f - -BTN_GAMEPAD = 0x130 -BTN_SOUTH = 0x130 -BTN_A = 0x130 -BTN_EAST = 0x131 -BTN_B = 0x131 -BTN_C = 0x132 -BTN_NORTH = 0x133 -BTN_X = 0x133 -BTN_WEST = 0x134 -BTN_Y = 0x134 -BTN_Z = 0x135 -BTN_TL = 0x136 -BTN_TR = 0x137 -BTN_TL2 = 0x138 -BTN_TR2 = 0x139 -BTN_SELECT = 0x13a -BTN_START = 0x13b -BTN_MODE = 0x13c -BTN_THUMBL = 0x13d -BTN_THUMBR = 0x13e - -BTN_DIGI = 0x140 -BTN_TOOL_PEN = 0x140 -BTN_TOOL_RUBBER = 0x141 -BTN_TOOL_BRUSH = 0x142 -BTN_TOOL_PENCIL = 0x143 -BTN_TOOL_AIRBRUSH = 0x144 -BTN_TOOL_FINGER = 0x145 -BTN_TOOL_MOUSE = 0x146 -BTN_TOOL_LENS = 0x147 -BTN_TOUCH = 0x14a -BTN_STYLUS = 0x14b -BTN_STYLUS2 = 0x14c -BTN_TOOL_DOUBLETAP = 0x14d -BTN_TOOL_TRIPLETAP = 0x14e - -BTN_WHEEL = 0x150 -BTN_GEAR_DOWN = 0x150 -BTN_GEAR_UP = 0x151 - -KEY_OK = 0x160 -KEY_SELECT = 0x161 -KEY_GOTO = 0x162 -KEY_CLEAR = 0x163 -KEY_POWER2 = 0x164 -KEY_OPTION = 0x165 -KEY_INFO = 0x166 -KEY_TIME = 0x167 -KEY_VENDOR = 0x168 -KEY_ARCHIVE = 0x169 -KEY_PROGRAM = 0x16a -KEY_CHANNEL = 0x16b -KEY_FAVORITES = 0x16c -KEY_EPG = 0x16d -KEY_PVR = 0x16e -KEY_MHP = 0x16f -KEY_LANGUAGE = 0x170 -KEY_TITLE = 0x171 -KEY_SUBTITLE = 0x172 -KEY_ANGLE = 0x173 -KEY_ZOOM = 0x174 -KEY_MODE = 0x175 -KEY_KEYBOARD = 0x176 -KEY_SCREEN = 0x177 -KEY_PC = 0x178 -KEY_TV = 0x179 -KEY_TV2 = 0x17a -KEY_VCR = 0x17b -KEY_VCR2 = 0x17c -KEY_SAT = 0x17d -KEY_SAT2 = 0x17e -KEY_CD = 0x17f -KEY_TAPE = 0x180 -KEY_RADIO = 0x181 -KEY_TUNER = 0x182 -KEY_PLAYER = 0x183 -KEY_TEXT = 0x184 -KEY_DVD = 0x185 -KEY_AUX = 0x186 -KEY_MP3 = 0x187 -KEY_AUDIO = 0x188 -KEY_VIDEO = 0x189 -KEY_DIRECTORY = 0x18a -KEY_LIST = 0x18b -KEY_MEMO = 0x18c -KEY_CALENDAR = 0x18d -KEY_RED = 0x18e -KEY_GREEN = 0x18f -KEY_YELLOW = 0x190 -KEY_BLUE = 0x191 -KEY_CHANNELUP = 0x192 -KEY_CHANNELDOWN = 0x193 -KEY_FIRST = 0x194 -KEY_LAST = 0x195 -KEY_AB = 0x196 -KEY_NEXT = 0x197 -KEY_RESTART = 0x198 -KEY_SLOW = 0x199 -KEY_SHUFFLE = 0x19a -KEY_BREAK = 0x19b -KEY_PREVIOUS = 0x19c -KEY_DIGITS = 0x19d -KEY_TEEN = 0x19e -KEY_TWEN = 0x19f - -KEY_DEL_EOL = 0x1c0 -KEY_DEL_EOS = 0x1c1 -KEY_INS_LINE = 0x1c2 -KEY_DEL_LINE = 0x1c3 - -KEY_FN = 0x1d0 -KEY_FN_ESC = 0x1d1 -KEY_FN_F1 = 0x1d2 -KEY_FN_F2 = 0x1d3 -KEY_FN_F3 = 0x1d4 -KEY_FN_F4 = 0x1d5 -KEY_FN_F5 = 0x1d6 -KEY_FN_F6 = 0x1d7 -KEY_FN_F7 = 0x1d8 -KEY_FN_F8 = 0x1d9 -KEY_FN_F9 = 0x1da -KEY_FN_F10 = 0x1db -KEY_FN_F11 = 0x1dc -KEY_FN_F12 = 0x1dd -KEY_FN_1 = 0x1de -KEY_FN_2 = 0x1df -KEY_FN_D = 0x1e0 -KEY_FN_E = 0x1e1 -KEY_FN_F = 0x1e2 -KEY_FN_S = 0x1e3 -KEY_FN_B = 0x1e4 - -BTN_DPAD_UP = 0x220 -BTN_DPAD_DOWN = 0x221 -BTN_DPAD_LEFT = 0x222 -BTN_DPAD_RIGHT = 0x223 - -BTN_TRIGGER_HAPPY = 0x2c0 -BTN_TRIGGER_HAPPY1 = 0x2c0 -BTN_TRIGGER_HAPPY2 = 0x2c1 -BTN_TRIGGER_HAPPY3 = 0x2c2 -BTN_TRIGGER_HAPPY4 = 0x2c3 -BTN_TRIGGER_HAPPY5 = 0x2c4 -BTN_TRIGGER_HAPPY6 = 0x2c5 -BTN_TRIGGER_HAPPY7 = 0x2c6 -BTN_TRIGGER_HAPPY8 = 0x2c7 -BTN_TRIGGER_HAPPY9 = 0x2c8 -BTN_TRIGGER_HAPPY10 = 0x2c9 -BTN_TRIGGER_HAPPY11 = 0x2ca -BTN_TRIGGER_HAPPY12 = 0x2cb -BTN_TRIGGER_HAPPY13 = 0x2cc -BTN_TRIGGER_HAPPY14 = 0x2cd -BTN_TRIGGER_HAPPY15 = 0x2ce -BTN_TRIGGER_HAPPY16 = 0x2cf -BTN_TRIGGER_HAPPY17 = 0x2d0 -BTN_TRIGGER_HAPPY18 = 0x2d1 -BTN_TRIGGER_HAPPY19 = 0x2d2 -BTN_TRIGGER_HAPPY20 = 0x2d3 -BTN_TRIGGER_HAPPY21 = 0x2d4 -BTN_TRIGGER_HAPPY22 = 0x2d5 -BTN_TRIGGER_HAPPY23 = 0x2d6 -BTN_TRIGGER_HAPPY24 = 0x2d7 -BTN_TRIGGER_HAPPY25 = 0x2d8 -BTN_TRIGGER_HAPPY26 = 0x2d9 -BTN_TRIGGER_HAPPY27 = 0x2da -BTN_TRIGGER_HAPPY28 = 0x2db -BTN_TRIGGER_HAPPY29 = 0x2dc -BTN_TRIGGER_HAPPY30 = 0x2dd -BTN_TRIGGER_HAPPY31 = 0x2de -BTN_TRIGGER_HAPPY32 = 0x2df -BTN_TRIGGER_HAPPY33 = 0x2e0 -BTN_TRIGGER_HAPPY34 = 0x2e1 -BTN_TRIGGER_HAPPY35 = 0x2e2 -BTN_TRIGGER_HAPPY36 = 0x2e3 -BTN_TRIGGER_HAPPY37 = 0x2e4 -BTN_TRIGGER_HAPPY38 = 0x2e5 -BTN_TRIGGER_HAPPY39 = 0x2e6 -BTN_TRIGGER_HAPPY40 = 0x2e7 - -KEY_MAX = 0x2ff - -# Relative axes - -REL_X = 0x00 -REL_Y = 0x01 -REL_Z = 0x02 -REL_RX = 0x03 -REL_RY = 0x04 -REL_RZ = 0x05 -REL_HWHEEL = 0x06 -REL_DIAL = 0x07 -REL_WHEEL = 0x08 -REL_MISC = 0x09 -REL_MAX = 0x0f - -# Absolute axes - -ABS_X = 0x00 -ABS_Y = 0x01 -ABS_Z = 0x02 -ABS_RX = 0x03 -ABS_RY = 0x04 -ABS_RZ = 0x05 -ABS_THROTTLE = 0x06 -ABS_RUDDER = 0x07 -ABS_WHEEL = 0x08 -ABS_GAS = 0x09 -ABS_BRAKE = 0x0a -ABS_HAT0X = 0x10 -ABS_HAT0Y = 0x11 -ABS_HAT1X = 0x12 -ABS_HAT1Y = 0x13 -ABS_HAT2X = 0x14 -ABS_HAT2Y = 0x15 -ABS_HAT3X = 0x16 -ABS_HAT3Y = 0x17 -ABS_PRESSURE = 0x18 -ABS_DISTANCE = 0x19 -ABS_TILT_X = 0x1a -ABS_TILT_Y = 0x1b -ABS_TOOL_WIDTH = 0x1c -ABS_VOLUME = 0x20 -ABS_MISC = 0x28 -ABS_MAX = 0x3f - -# Misc events - -MSC_SERIAL = 0x00 -MSC_PULSELED = 0x01 -MSC_GESTURE = 0x02 -MSC_RAW = 0x03 -MSC_SCAN = 0x04 -MSC_MAX = 0x07 - -# LEDs - -LED_NUML = 0x00 -LED_CAPSL = 0x01 -LED_SCROLLL = 0x02 -LED_COMPOSE = 0x03 -LED_KANA = 0x04 -LED_SLEEP = 0x05 -LED_SUSPEND = 0x06 -LED_MUTE = 0x07 -LED_MISC = 0x08 -LED_MAIL = 0x09 -LED_CHARGING = 0x0a -LED_MAX = 0x0f - -# Autorepeat values - -REP_DELAY = 0x00 -REP_PERIOD = 0x01 -REP_MAX = 0x01 - -# Sounds - -SND_CLICK = 0x00 -SND_BELL = 0x01 -SND_TONE = 0x02 -SND_MAX = 0x07 - -# IDs. - -ID_BUS = 0 -ID_VENDOR = 1 -ID_PRODUCT = 2 -ID_VERSION = 3 - -BUS_PCI = 0x01 -BUS_ISAPNP = 0x02 -BUS_USB = 0x03 -BUS_HIL = 0x04 -BUS_BLUETOOTH = 0x05 - -BUS_ISA = 0x10 -BUS_I8042 = 0x11 -BUS_XTKBD = 0x12 -BUS_RS232 = 0x13 -BUS_GAMEPORT = 0x14 -BUS_PARPORT = 0x15 -BUS_AMIGA = 0x16 -BUS_ADB = 0x17 -BUS_I2C = 0x18 -BUS_HOST = 0x19 - -# Values describing the status of an effect -FF_STATUS_STOPPED = 0x00 -FF_STATUS_PLAYING = 0x01 -FF_STATUS_MAX = 0x01 -FF_RUMBLE = 0x50 -FF_MAX = 0x7f -FF_CNT = FF_MAX + 1 - -rel_raw_names = {} -abs_raw_names = {} -key_raw_names = {} -for _name, _val in locals().copy().items(): - if _name.startswith('REL_'): - rel_raw_names[_val] = _name - elif _name.startswith('ABS_'): - abs_raw_names[_val] = _name - elif _name.startswith('KEY_') or _name.startswith('BTN_'): - key_raw_names[_val] = _name diff --git a/spaces/abrar-lohia/text-2-character-anim/pyrender/build/lib/pyrender/__init__.py b/spaces/abrar-lohia/text-2-character-anim/pyrender/build/lib/pyrender/__init__.py deleted file mode 100644 index ee3709846823b7c4b71b22da0e24d63d805528a8..0000000000000000000000000000000000000000 --- a/spaces/abrar-lohia/text-2-character-anim/pyrender/build/lib/pyrender/__init__.py +++ /dev/null @@ -1,24 +0,0 @@ -from .camera import (Camera, PerspectiveCamera, OrthographicCamera, - IntrinsicsCamera) -from .light import Light, PointLight, DirectionalLight, SpotLight -from .sampler import Sampler -from .texture import Texture -from .material import Material, MetallicRoughnessMaterial -from .primitive import Primitive -from .mesh import Mesh -from .node import Node -from .scene import Scene -from .renderer import Renderer -from .viewer import Viewer -from .offscreen import OffscreenRenderer -from .version import __version__ -from .constants import RenderFlags, TextAlign, GLTF - -__all__ = [ - 'Camera', 'PerspectiveCamera', 'OrthographicCamera', 'IntrinsicsCamera', - 'Light', 'PointLight', 'DirectionalLight', 'SpotLight', - 'Sampler', 'Texture', 'Material', 'MetallicRoughnessMaterial', - 'Primitive', 'Mesh', 'Node', 'Scene', 'Renderer', 'Viewer', - 'OffscreenRenderer', '__version__', 'RenderFlags', 'TextAlign', - 'GLTF' -] diff --git a/spaces/akhaliq/GPEN/face_inpainting.py b/spaces/akhaliq/GPEN/face_inpainting.py deleted file mode 100644 index 37c1a940ef26a44cd5923dd40b1ef98fb4dff281..0000000000000000000000000000000000000000 --- a/spaces/akhaliq/GPEN/face_inpainting.py +++ /dev/null @@ -1,101 +0,0 @@ -''' -@paper: GAN Prior Embedded Network for Blind Face Restoration in the Wild (CVPR2021) -@author: yangxy (yangtao9009@gmail.com) -''' -import os -import cv2 -import glob -import time -import math -import numpy as np -from PIL import Image, ImageDraw -import __init_paths -from face_model.face_gan import FaceGAN - -# modified by yangxy -def brush_stroke_mask(img, color=(255,255,255)): - min_num_vertex = 8 - max_num_vertex = 28 - mean_angle = 2*math.pi / 5 - angle_range = 2*math.pi / 15 - min_width = 12 - max_width = 80 - def generate_mask(H, W, img=None): - average_radius = math.sqrt(H*H+W*W) / 8 - mask = Image.new('RGB', (W, H), 0) - if img is not None: mask = img #Image.fromarray(img) - - for _ in range(np.random.randint(1, 4)): - num_vertex = np.random.randint(min_num_vertex, max_num_vertex) - angle_min = mean_angle - np.random.uniform(0, angle_range) - angle_max = mean_angle + np.random.uniform(0, angle_range) - angles = [] - vertex = [] - for i in range(num_vertex): - if i % 2 == 0: - angles.append(2*math.pi - np.random.uniform(angle_min, angle_max)) - else: - angles.append(np.random.uniform(angle_min, angle_max)) - - h, w = mask.size - vertex.append((int(np.random.randint(0, w)), int(np.random.randint(0, h)))) - for i in range(num_vertex): - r = np.clip( - np.random.normal(loc=average_radius, scale=average_radius//2), - 0, 2*average_radius) - new_x = np.clip(vertex[-1][0] + r * math.cos(angles[i]), 0, w) - new_y = np.clip(vertex[-1][1] + r * math.sin(angles[i]), 0, h) - vertex.append((int(new_x), int(new_y))) - - draw = ImageDraw.Draw(mask) - width = int(np.random.uniform(min_width, max_width)) - draw.line(vertex, fill=color, width=width) - for v in vertex: - draw.ellipse((v[0] - width//2, - v[1] - width//2, - v[0] + width//2, - v[1] + width//2), - fill=color) - - return mask - - width, height = img.size - mask = generate_mask(height, width, img) - return mask - -class FaceInpainting(object): - def __init__(self, base_dir='./', size=1024, model=None, channel_multiplier=2): - self.facegan = FaceGAN(base_dir, size, model, channel_multiplier) - - # make sure the face image is well aligned. Please refer to face_enhancement.py - def process(self, brokenf): - # complete the face - out = self.facegan.process(brokenf) - - return out - -if __name__=='__main__': - model = {'name':'GPEN-Inpainting-1024', 'size':1024} - - indir = 'examples/ffhq-10' - outdir = 'examples/outs-inpainting' - os.makedirs(outdir, exist_ok=True) - - faceinpainter = FaceInpainting(size=model['size'], model=model['name'], channel_multiplier=2) - - files = sorted(glob.glob(os.path.join(indir, '*.*g'))) - for n, file in enumerate(files[:]): - filename = os.path.basename(file) - - originf = cv2.imread(file, cv2.IMREAD_COLOR) - - brokenf = np.asarray(brush_stroke_mask(Image.fromarray(originf))) - - completef = faceinpainter.process(brokenf) - - originf = cv2.resize(originf, completef.shape[:2]) - brokenf = cv2.resize(brokenf, completef.shape[:2]) - cv2.imwrite(os.path.join(outdir, '.'.join(filename.split('.')[:-1])+'.jpg'), np.hstack((brokenf, completef, originf))) - - if n%10==0: print(n, file) - diff --git a/spaces/akhaliq/JoJoGAN/app.py b/spaces/akhaliq/JoJoGAN/app.py deleted file mode 100644 index df2814cae8ab12b97c33c34c03a6498eb703d0e9..0000000000000000000000000000000000000000 --- a/spaces/akhaliq/JoJoGAN/app.py +++ /dev/null @@ -1,204 +0,0 @@ -import os -from PIL import Image -import torch -import gradio as gr -import torch -torch.backends.cudnn.benchmark = True -from torchvision import transforms, utils -from util import * -from PIL import Image -import math -import random -import numpy as np -from torch import nn, autograd, optim -from torch.nn import functional as F -from tqdm import tqdm -import lpips -from model import * - - -#from e4e_projection import projection as e4e_projection - -from copy import deepcopy -import imageio - -import os -import sys -import numpy as np -from PIL import Image -import torch -import torchvision.transforms as transforms -from argparse import Namespace -from e4e.models.psp import pSp -from util import * -from huggingface_hub import hf_hub_download - -device= 'cpu' -model_path_e = hf_hub_download(repo_id="akhaliq/JoJoGAN_e4e_ffhq_encode", filename="e4e_ffhq_encode.pt") -ckpt = torch.load(model_path_e, map_location='cpu') -opts = ckpt['opts'] -opts['checkpoint_path'] = model_path_e -opts= Namespace(**opts) -net = pSp(opts, device).eval().to(device) - -@ torch.no_grad() -def projection(img, name, device='cuda'): - - - transform = transforms.Compose( - [ - transforms.Resize(256), - transforms.CenterCrop(256), - transforms.ToTensor(), - transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5]), - ] - ) - img = transform(img).unsqueeze(0).to(device) - images, w_plus = net(img, randomize_noise=False, return_latents=True) - result_file = {} - result_file['latent'] = w_plus[0] - torch.save(result_file, name) - return w_plus[0] - - - - -device = 'cpu' - - -latent_dim = 512 - -model_path_s = hf_hub_download(repo_id="akhaliq/jojogan-stylegan2-ffhq-config-f", filename="stylegan2-ffhq-config-f.pt") -original_generator = Generator(1024, latent_dim, 8, 2).to(device) -ckpt = torch.load(model_path_s, map_location=lambda storage, loc: storage) -original_generator.load_state_dict(ckpt["g_ema"], strict=False) -mean_latent = original_generator.mean_latent(10000) - -generatorjojo = deepcopy(original_generator) - -generatordisney = deepcopy(original_generator) - -generatorjinx = deepcopy(original_generator) - -generatorcaitlyn = deepcopy(original_generator) - -generatoryasuho = deepcopy(original_generator) - -generatorarcanemulti = deepcopy(original_generator) - -generatorart = deepcopy(original_generator) - -generatorspider = deepcopy(original_generator) - -generatorsketch = deepcopy(original_generator) - - -transform = transforms.Compose( - [ - transforms.Resize((1024, 1024)), - transforms.ToTensor(), - transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5)), - ] -) - - - - -modeljojo = hf_hub_download(repo_id="akhaliq/JoJoGAN-jojo", filename="jojo_preserve_color.pt") - - -ckptjojo = torch.load(modeljojo, map_location=lambda storage, loc: storage) -generatorjojo.load_state_dict(ckptjojo["g"], strict=False) - - -modeldisney = hf_hub_download(repo_id="akhaliq/jojogan-disney", filename="disney_preserve_color.pt") - -ckptdisney = torch.load(modeldisney, map_location=lambda storage, loc: storage) -generatordisney.load_state_dict(ckptdisney["g"], strict=False) - - -modeljinx = hf_hub_download(repo_id="akhaliq/jojo-gan-jinx", filename="arcane_jinx_preserve_color.pt") - -ckptjinx = torch.load(modeljinx, map_location=lambda storage, loc: storage) -generatorjinx.load_state_dict(ckptjinx["g"], strict=False) - - -modelcaitlyn = hf_hub_download(repo_id="akhaliq/jojogan-arcane", filename="arcane_caitlyn_preserve_color.pt") - -ckptcaitlyn = torch.load(modelcaitlyn, map_location=lambda storage, loc: storage) -generatorcaitlyn.load_state_dict(ckptcaitlyn["g"], strict=False) - - -modelyasuho = hf_hub_download(repo_id="akhaliq/JoJoGAN-jojo", filename="jojo_yasuho_preserve_color.pt") - -ckptyasuho = torch.load(modelyasuho, map_location=lambda storage, loc: storage) -generatoryasuho.load_state_dict(ckptyasuho["g"], strict=False) - - -model_arcane_multi = hf_hub_download(repo_id="akhaliq/jojogan-arcane", filename="arcane_multi_preserve_color.pt") - -ckptarcanemulti = torch.load(model_arcane_multi, map_location=lambda storage, loc: storage) -generatorarcanemulti.load_state_dict(ckptarcanemulti["g"], strict=False) - - -modelart = hf_hub_download(repo_id="akhaliq/jojo-gan-art", filename="art.pt") - -ckptart = torch.load(modelart, map_location=lambda storage, loc: storage) -generatorart.load_state_dict(ckptart["g"], strict=False) - - -modelSpiderverse = hf_hub_download(repo_id="akhaliq/jojo-gan-spiderverse", filename="Spiderverse-face-500iters-8face.pt") - -ckptspider = torch.load(modelSpiderverse, map_location=lambda storage, loc: storage) -generatorspider.load_state_dict(ckptspider["g"], strict=False) - -modelSketch = hf_hub_download(repo_id="akhaliq/jojogan-sketch", filename="sketch_multi.pt") - -ckptsketch = torch.load(modelSketch, map_location=lambda storage, loc: storage) -generatorsketch.load_state_dict(ckptsketch["g"], strict=False) - -def inference(img, model): - img.save('out.jpg') - aligned_face = align_face('out.jpg') - - my_w = projection(aligned_face, "test.pt", device).unsqueeze(0) - if model == 'JoJo': - with torch.no_grad(): - my_sample = generatorjojo(my_w, input_is_latent=True) - elif model == 'Disney': - with torch.no_grad(): - my_sample = generatordisney(my_w, input_is_latent=True) - elif model == 'Jinx': - with torch.no_grad(): - my_sample = generatorjinx(my_w, input_is_latent=True) - elif model == 'Caitlyn': - with torch.no_grad(): - my_sample = generatorcaitlyn(my_w, input_is_latent=True) - elif model == 'Yasuho': - with torch.no_grad(): - my_sample = generatoryasuho(my_w, input_is_latent=True) - elif model == 'Arcane Multi': - with torch.no_grad(): - my_sample = generatorarcanemulti(my_w, input_is_latent=True) - elif model == 'Art': - with torch.no_grad(): - my_sample = generatorart(my_w, input_is_latent=True) - elif model == 'Spider-Verse': - with torch.no_grad(): - my_sample = generatorspider(my_w, input_is_latent=True) - else: - with torch.no_grad(): - my_sample = generatorsketch(my_w, input_is_latent=True) - - - npimage = my_sample[0].permute(1, 2, 0).detach().numpy() - imageio.imwrite('filename.jpeg', npimage) - return 'filename.jpeg' - -title = "JoJoGAN" -description = "Gradio Demo for JoJoGAN: One Shot Face Stylization. To use it, simply upload your image, or click one of the examples to load them. Read more at the links below." - -article = "

JoJoGAN: One Shot Face Stylization| Github Repo Pytorch

visitor badge
" - -examples=[['mona.png','Jinx']] -gr.Interface(inference, [gr.inputs.Image(type="pil"),gr.inputs.Dropdown(choices=['JoJo', 'Disney','Jinx','Caitlyn','Yasuho','Arcane Multi','Art','Spider-Verse','Sketch'], type="value", default='JoJo', label="Model")], gr.outputs.Image(type="file"),title=title,description=description,article=article,allow_flagging=False,examples=examples,allow_screenshot=False).launch() diff --git a/spaces/akhaliq/deeplab2/trainer/trainer.py b/spaces/akhaliq/deeplab2/trainer/trainer.py deleted file mode 100644 index 2390a486fe061a433513cff2ee896b438e86d731..0000000000000000000000000000000000000000 --- a/spaces/akhaliq/deeplab2/trainer/trainer.py +++ /dev/null @@ -1,292 +0,0 @@ -# coding=utf-8 -# Copyright 2021 The Deeplab2 Authors. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - -"""This file contains code to create a Trainer for training and validation.""" - -from typing import Dict, Any, Text -import orbit -import tensorflow as tf - -from deeplab2 import common -from deeplab2 import config_pb2 -from deeplab2.model import utils -from deeplab2.trainer import runner_utils - - -class WarmUp(tf.keras.optimizers.schedules.LearningRateSchedule): - """Applies a warmup schedule on a given learning rate decay schedule.""" - - def __init__(self, - initial_learning_rate, - decay_schedule_fn, - warmup_steps, - name=None): - super(WarmUp, self).__init__() - self.initial_learning_rate = initial_learning_rate - self.warmup_steps = warmup_steps - self.decay_schedule_fn = decay_schedule_fn - self.name = name - - def __call__(self, step): - with tf.name_scope(self.name or 'WarmUp') as name: - # Implements linear warmup. i.e., if global_step < warmup_steps, the - # learning rate will be `global_step/num_warmup_steps * init_lr`. - global_step_float = tf.cast(step, tf.float32) - warmup_steps_float = tf.cast(self.warmup_steps, tf.float32) - warmup_percent_done = global_step_float / warmup_steps_float - warmup_learning_rate = self.initial_learning_rate * warmup_percent_done - return tf.cond( - global_step_float < warmup_steps_float, - lambda: warmup_learning_rate, - lambda: self.decay_schedule_fn(step), - name=name) - - def get_config(self): - return { - 'initial_learning_rate': self.initial_learning_rate, - 'decay_schedule_fn': self.decay_schedule_fn, - 'warmup_steps': self.warmup_steps, - 'name': self.name - } - - -def _create_optimizer( - solver_config: config_pb2.SolverOptions, - learning_rate_multiplier: float = 1.0) -> tf.keras.optimizers.Optimizer: - """Creates an Optimizer based on the configuration. - - Args: - solver_config: A trainer_pb2.SolverOptions configuration. - learning_rate_multiplier: A float, the learning rate multiplier applied on - top of the base learning rate. Default to 1.0. - - Returns: - A tf.keras.optimizer.Optimizer. - - Raises: - ValueError: An error occurs when the desired optimizer or learning rate - scheduler is not supported. - """ - learning_rate = (solver_config.base_learning_rate * learning_rate_multiplier) - if solver_config.learning_policy == 'poly': - lr_scheduler = tf.keras.optimizers.schedules.PolynomialDecay( - initial_learning_rate=learning_rate, - decay_steps=solver_config.training_number_of_steps, - end_learning_rate=solver_config.poly_end_learning_rate, - power=solver_config.poly_learning_power, - cycle=False) - elif solver_config.learning_policy == 'cosine': - lr_scheduler = tf.keras.experimental.CosineDecay( - initial_learning_rate=learning_rate, - decay_steps=solver_config.training_number_of_steps, - alpha=0.0) - else: - raise ValueError('Learning rate policy %s is not supported.' % - solver_config.learning_policy) - - if solver_config.warmup_steps: - lr_scheduler = WarmUp( - initial_learning_rate=learning_rate, - decay_schedule_fn=lr_scheduler, - warmup_steps=solver_config.warmup_steps, - name='linear_warmup') - - if solver_config.optimizer == 'adam': - return tf.keras.optimizers.Adam(learning_rate=lr_scheduler) - elif solver_config.optimizer == 'sgd': - # We use momentum = 0.9, the most frequently used case. - return tf.keras.optimizers.SGD(learning_rate=lr_scheduler, - momentum=0.9) - - raise ValueError('Optimizer %s is not supported.' % solver_config.optimizer) - - -class Trainer(orbit.StandardTrainer): - """Implements a Trainer for training DeepLab models.""" - - def __init__(self, config: config_pb2.ExperimentOptions, - model: tf.keras.Model, loss: tf.keras.losses.Loss, - global_step: tf.Variable): - """Initializes the trainer. - - Args: - config: A config_pb2.ExperimentOptions configuration. - model: A tf.keras.Model. - loss: A tf.keras.losses.Loss. - global_step: A tf.Variable that records the global training step. - """ - self._strategy = tf.distribute.get_strategy() - - support_panoptic = (common.TASK_PANOPTIC_SEGMENTATION in - utils.get_supported_tasks(config)) - train_dataset = runner_utils.create_dataset( - config.train_dataset_options, - is_training=True, - only_semantic_annotations=not support_panoptic) - train_dataset = orbit.utils.make_distributed_dataset( - self.strategy, train_dataset) - super(Trainer, self).__init__(train_dataset) - - self._config = config - self._model = model - self._loss = loss - - solver_options = config.trainer_options.solver_options - self._optimizer = _create_optimizer(solver_options) - self._backbone_optimizer = None - if solver_options.HasField('backbone_learning_rate_multiplier'): - self._backbone_optimizer = _create_optimizer( - solver_options, learning_rate_multiplier=( - solver_options.backbone_learning_rate_multiplier)) - - self._global_step = global_step - self._use_gradient_clipping = solver_options.use_gradient_clipping - self._clip_gradient_norm = solver_options.clip_gradient_norm - - self._train_loss_metric_dict = runner_utils.create_loss_metric_dict( - loss.get_loss_names(), prefix='train_') - - def train_loop_begin(self): - """Called once at the beginning of the training loop. - - This method is called before dataset iterators creation. - """ - for metric in self._train_loss_metric_dict.values(): - metric.reset_states() - - def _apply_gradients_to_optimizers(self, gradients_and_variables): - """Applies gradients to their optimizers. - - This function divides all trainable variables (and their gradients) into - two groups. One group contains backbone variables that have been pretrained, - e.g., on ImageNet classification. The other group contains all other - variables that are added specifically for the dense prediction task, e.g., - panoptic segmentation. Then, we apply two optimizers, optionally with two - learning rates, to the variables and gradients. - - Args: - gradients_and_variables: A list of tuple of (gradient, variable) tensors. - """ - if self._backbone_optimizer is None: - self._optimizer.apply_gradients(gradients_and_variables) - else: - optimizer_inputs = [] - backbone_optimizer_inputs = [] - - encoder = self._model.checkpoint_items['encoder'] - encoder_variable_names = [x.name for x in encoder.trainable_variables] - encoder_name = self._config.model_options.backbone.name - - for gradient, variable in gradients_and_variables: - if runner_utils.check_if_variable_in_backbone(variable, encoder_name, - encoder_variable_names): - backbone_optimizer_inputs.append((gradient, variable)) - else: - optimizer_inputs.append((gradient, variable)) - self._optimizer.apply_gradients(optimizer_inputs) - self._backbone_optimizer.apply_gradients(backbone_optimizer_inputs) - - def train_step(self, iterator): - """Implements one step of training. - - Runs one step of evaluation with respect to the chosen strategy. In case of - a distributed strategy, the replica results are gathered and returned. - - Note that all operations within `_train_step` are tf.function compatible, as - they will be traced with tf.function. Any other/numpy operations are put in - `train_loop_begin` or `train_loop_end` functions. - - Args: - iterator: A tf.nest-compatible structure of tf.data Iterator or - DistributedIterator. - """ - - def step_fn(inputs): - self._train_step(inputs) - self._global_step.assign_add(1) - - self._strategy.run(step_fn, args=(next(iterator),)) - - def _train_step(self, inputs: Dict[Text, Any]): - """Performs a forward and backward pass. - - Args: - inputs: A dictionary to be consumed by the model. - """ - with tf.GradientTape() as tape: - outputs = self._model(inputs[common.IMAGE], training=True) - # Get the average per-batch loss and scale it down by the number of - # replicas. This ensures that we don't end up multiplying our loss by the - # number of workers - gradients are summed, not averaged, across replicas - # during the apply_gradients call. - loss_dict = self._loss(inputs, outputs) - # Average over the batch. - average_loss_dict = { - key: tf.reduce_mean(value) for key, value in loss_dict.items()} - total_loss = average_loss_dict[common.TOTAL_LOSS] - scaled_loss = total_loss / self.strategy.num_replicas_in_sync - - training_vars = self._model.trainable_variables - gradients = tape.gradient(scaled_loss, training_vars) - - # Apply gradient clipping. - if self._clip_gradient_norm > 0.0 and self._use_gradient_clipping: - gradients, _ = tf.clip_by_global_norm(gradients, self._clip_gradient_norm) - - self._apply_gradients_to_optimizers(list(zip(gradients, training_vars))) - - for name, value in average_loss_dict.items(): - self._train_loss_metric_dict[name].update_state(value) - - def train_loop_end(self) -> Dict[Text, tf.Tensor]: - """Called at the end of the training loop. - - The value returned from this function will be returned as-is from the - train() method. - - Returns: - A dictionary of `Tensors`, which will be written to logs and as - TensorBoard summaries. - """ - train_logs = {} - for loss_metric in self._train_loss_metric_dict.values(): - train_logs['losses/' + loss_metric.name] = loss_metric.result() - - if callable(self._optimizer.learning_rate): - train_logs['learning_rate'] = self._optimizer.learning_rate( - self._global_step) - else: - train_logs['learning_rate'] = self._optimizer.learning_rate - return train_logs - - @property - def optimizer(self): - return self._optimizer - - @property - def backbone_optimizer(self): - return self._backbone_optimizer - - @property - def strategy(self): - return self._strategy - - @property - def global_step(self): - return self._global_step - - @property - def model(self): - return self._model diff --git a/spaces/ali-ghamdan/realesrgan-models/inference_realesrgan_video.py b/spaces/ali-ghamdan/realesrgan-models/inference_realesrgan_video.py deleted file mode 100644 index 39fed08a5cf32e543338498be16dfd5a52cdb061..0000000000000000000000000000000000000000 --- a/spaces/ali-ghamdan/realesrgan-models/inference_realesrgan_video.py +++ /dev/null @@ -1,362 +0,0 @@ -import argparse -import cv2 -import glob -import mimetypes -import numpy as np -import os -import shutil -import subprocess -import torch -from basicsr.archs.rrdbnet_arch import RRDBNet -from os import path as osp -from tqdm import tqdm - -from realesrgan import RealESRGANer -from realesrgan.archs.srvgg_arch import SRVGGNetCompact - -try: - import ffmpeg -except ImportError: - import pip - pip.main(['install', '--user', 'ffmpeg-python']) - import ffmpeg - - -def get_video_meta_info(video_path): - ret = {} - probe = ffmpeg.probe(video_path) - video_streams = [stream for stream in probe['streams'] if stream['codec_type'] == 'video'] - has_audio = any(stream['codec_type'] == 'audio' for stream in probe['streams']) - ret['width'] = video_streams[0]['width'] - ret['height'] = video_streams[0]['height'] - ret['fps'] = eval(video_streams[0]['avg_frame_rate']) - ret['audio'] = ffmpeg.input(video_path).audio if has_audio else None - ret['nb_frames'] = int(video_streams[0]['nb_frames']) - return ret - - -def get_sub_video(args, num_process, process_idx): - if num_process == 1: - return args.input - meta = get_video_meta_info(args.input) - duration = int(meta['nb_frames'] / meta['fps']) - part_time = duration // num_process - print(f'duration: {duration}, part_time: {part_time}') - os.makedirs(osp.join(args.output, f'{args.video_name}_inp_tmp_videos'), exist_ok=True) - out_path = osp.join(args.output, f'{args.video_name}_inp_tmp_videos', f'{process_idx:03d}.mp4') - cmd = [ - args.ffmpeg_bin, f'-i {args.input}', '-ss', f'{part_time * process_idx}', - f'-to {part_time * (process_idx + 1)}' if process_idx != num_process - 1 else '', '-async 1', out_path, '-y' - ] - print(' '.join(cmd)) - subprocess.call(' '.join(cmd), shell=True) - return out_path - - -class Reader: - - def __init__(self, args, total_workers=1, worker_idx=0): - self.args = args - input_type = mimetypes.guess_type(args.input)[0] - self.input_type = 'folder' if input_type is None else input_type - self.paths = [] # for image&folder type - self.audio = None - self.input_fps = None - if self.input_type.startswith('video'): - video_path = get_sub_video(args, total_workers, worker_idx) - self.stream_reader = ( - ffmpeg.input(video_path).output('pipe:', format='rawvideo', pix_fmt='bgr24', - loglevel='error').run_async( - pipe_stdin=True, pipe_stdout=True, cmd=args.ffmpeg_bin)) - meta = get_video_meta_info(video_path) - self.width = meta['width'] - self.height = meta['height'] - self.input_fps = meta['fps'] - self.audio = meta['audio'] - self.nb_frames = meta['nb_frames'] - - else: - if self.input_type.startswith('image'): - self.paths = [args.input] - else: - paths = sorted(glob.glob(os.path.join(args.input, '*'))) - tot_frames = len(paths) - num_frame_per_worker = tot_frames // total_workers + (1 if tot_frames % total_workers else 0) - self.paths = paths[num_frame_per_worker * worker_idx:num_frame_per_worker * (worker_idx + 1)] - - self.nb_frames = len(self.paths) - assert self.nb_frames > 0, 'empty folder' - from PIL import Image - tmp_img = Image.open(self.paths[0]) - self.width, self.height = tmp_img.size - self.idx = 0 - - def get_resolution(self): - return self.height, self.width - - def get_fps(self): - if self.args.fps is not None: - return self.args.fps - elif self.input_fps is not None: - return self.input_fps - return 24 - - def get_audio(self): - return self.audio - - def __len__(self): - return self.nb_frames - - def get_frame_from_stream(self): - img_bytes = self.stream_reader.stdout.read(self.width * self.height * 3) # 3 bytes for one pixel - if not img_bytes: - return None - img = np.frombuffer(img_bytes, np.uint8).reshape([self.height, self.width, 3]) - return img - - def get_frame_from_list(self): - if self.idx >= self.nb_frames: - return None - img = cv2.imread(self.paths[self.idx]) - self.idx += 1 - return img - - def get_frame(self): - if self.input_type.startswith('video'): - return self.get_frame_from_stream() - else: - return self.get_frame_from_list() - - def close(self): - if self.input_type.startswith('video'): - self.stream_reader.stdin.close() - self.stream_reader.wait() - - -class Writer: - - def __init__(self, args, audio, height, width, video_save_path, fps): - out_width, out_height = int(width * args.outscale), int(height * args.outscale) - if out_height > 2160: - print('You are generating video that is larger than 4K, which will be very slow due to IO speed.', - 'We highly recommend to decrease the outscale(aka, -s).') - - if audio is not None: - self.stream_writer = ( - ffmpeg.input('pipe:', format='rawvideo', pix_fmt='bgr24', s=f'{out_width}x{out_height}', - framerate=fps).output( - audio, - video_save_path, - pix_fmt='yuv420p', - vcodec='libx264', - loglevel='error', - acodec='copy').overwrite_output().run_async( - pipe_stdin=True, pipe_stdout=True, cmd=args.ffmpeg_bin)) - else: - self.stream_writer = ( - ffmpeg.input('pipe:', format='rawvideo', pix_fmt='bgr24', s=f'{out_width}x{out_height}', - framerate=fps).output( - video_save_path, pix_fmt='yuv420p', vcodec='libx264', - loglevel='error').overwrite_output().run_async( - pipe_stdin=True, pipe_stdout=True, cmd=args.ffmpeg_bin)) - - def write_frame(self, frame): - frame = frame.astype(np.uint8).tobytes() - self.stream_writer.stdin.write(frame) - - def close(self): - self.stream_writer.stdin.close() - self.stream_writer.wait() - - -def inference_video(args, video_save_path, device=None, total_workers=1, worker_idx=0): - # ---------------------- determine models according to model names ---------------------- # - args.model_name = args.model_name.split('.pth')[0] - if args.model_name in ['RealESRGAN_x4plus', 'RealESRNet_x4plus']: # x4 RRDBNet model - model = RRDBNet(num_in_ch=3, num_out_ch=3, num_feat=64, num_block=23, num_grow_ch=32, scale=4) - netscale = 4 - elif args.model_name in ['RealESRGAN_x4plus_anime_6B']: # x4 RRDBNet model with 6 blocks - model = RRDBNet(num_in_ch=3, num_out_ch=3, num_feat=64, num_block=6, num_grow_ch=32, scale=4) - netscale = 4 - elif args.model_name in ['RealESRGAN_x2plus']: # x2 RRDBNet model - model = RRDBNet(num_in_ch=3, num_out_ch=3, num_feat=64, num_block=23, num_grow_ch=32, scale=2) - netscale = 2 - elif args.model_name in ['realesr-animevideov3']: # x4 VGG-style model (XS size) - model = SRVGGNetCompact(num_in_ch=3, num_out_ch=3, num_feat=64, num_conv=16, upscale=4, act_type='prelu') - netscale = 4 - else: - raise NotImplementedError - - # ---------------------- determine model paths ---------------------- # - model_path = os.path.join('experiments/pretrained_models', args.model_name + '.pth') - if not os.path.isfile(model_path): - model_path = os.path.join('realesrgan/weights', args.model_name + '.pth') - if not os.path.isfile(model_path): - raise ValueError(f'Model {args.model_name} does not exist.') - - # restorer - upsampler = RealESRGANer( - scale=netscale, - model_path=model_path, - model=model, - tile=args.tile, - tile_pad=args.tile_pad, - pre_pad=args.pre_pad, - half=not args.fp32, - device=device, - ) - - if 'anime' in args.model_name and args.face_enhance: - print('face_enhance is not supported in anime models, we turned this option off for you. ' - 'if you insist on turning it on, please manually comment the relevant lines of code.') - args.face_enhance = False - - if args.face_enhance: # Use GFPGAN for face enhancement - from gfpgan import GFPGANer - face_enhancer = GFPGANer( - model_path='https://github.com/TencentARC/GFPGAN/releases/download/v1.3.0/GFPGANv1.3.pth', - upscale=args.outscale, - arch='clean', - channel_multiplier=2, - bg_upsampler=upsampler) # TODO support custom device - else: - face_enhancer = None - - reader = Reader(args, total_workers, worker_idx) - audio = reader.get_audio() - height, width = reader.get_resolution() - fps = reader.get_fps() - writer = Writer(args, audio, height, width, video_save_path, fps) - - pbar = tqdm(total=len(reader), unit='frame', desc='inference') - while True: - img = reader.get_frame() - if img is None: - break - - try: - if args.face_enhance: - _, _, output = face_enhancer.enhance(img, has_aligned=False, only_center_face=False, paste_back=True) - else: - output, _ = upsampler.enhance(img, outscale=args.outscale) - except RuntimeError as error: - print('Error', error) - print('If you encounter CUDA out of memory, try to set --tile with a smaller number.') - else: - writer.write_frame(output) - - torch.cuda.synchronize(device) - pbar.update(1) - - reader.close() - writer.close() - - -def run(args): - args.video_name = osp.splitext(os.path.basename(args.input))[0] - video_save_path = osp.join(args.output, f'{args.video_name}_{args.suffix}.mp4') - - if args.extract_frame_first: - tmp_frames_folder = osp.join(args.output, f'{args.video_name}_inp_tmp_frames') - os.makedirs(tmp_frames_folder, exist_ok=True) - os.system(f'ffmpeg -i {args.input} -qscale:v 1 -qmin 1 -qmax 1 -vsync 0 {tmp_frames_folder}/frame%08d.png') - args.input = tmp_frames_folder - - num_gpus = torch.cuda.device_count() - num_process = num_gpus * args.num_process_per_gpu - if num_process == 1: - inference_video(args, video_save_path) - return - - ctx = torch.multiprocessing.get_context('spawn') - pool = ctx.Pool(num_process) - os.makedirs(osp.join(args.output, f'{args.video_name}_out_tmp_videos'), exist_ok=True) - pbar = tqdm(total=num_process, unit='sub_video', desc='inference') - for i in range(num_process): - sub_video_save_path = osp.join(args.output, f'{args.video_name}_out_tmp_videos', f'{i:03d}.mp4') - pool.apply_async( - inference_video, - args=(args, sub_video_save_path, torch.device(i % num_gpus), num_process, i), - callback=lambda arg: pbar.update(1)) - pool.close() - pool.join() - - # combine sub videos - # prepare vidlist.txt - with open(f'{args.output}/{args.video_name}_vidlist.txt', 'w') as f: - for i in range(num_process): - f.write(f'file \'{args.video_name}_out_tmp_videos/{i:03d}.mp4\'\n') - - cmd = [ - args.ffmpeg_bin, '-f', 'concat', '-safe', '0', '-i', f'{args.output}/{args.video_name}_vidlist.txt', '-c', - 'copy', f'{video_save_path}' - ] - print(' '.join(cmd)) - subprocess.call(cmd) - shutil.rmtree(osp.join(args.output, f'{args.video_name}_out_tmp_videos')) - if osp.exists(osp.join(args.output, f'{args.video_name}_inp_tmp_videos')): - shutil.rmtree(osp.join(args.output, f'{args.video_name}_inp_tmp_videos')) - os.remove(f'{args.output}/{args.video_name}_vidlist.txt') - - -def main(): - """Inference demo for Real-ESRGAN. - It mainly for restoring anime videos. - - """ - parser = argparse.ArgumentParser() - parser.add_argument('-i', '--input', type=str, default='inputs', help='Input video, image or folder') - parser.add_argument( - '-n', - '--model_name', - type=str, - default='realesr-animevideov3', - help=('Model names: realesr-animevideov3 | RealESRGAN_x4plus_anime_6B | RealESRGAN_x4plus | RealESRNet_x4plus |' - ' RealESRGAN_x2plus | ' - 'Default:realesr-animevideov3')) - parser.add_argument('-o', '--output', type=str, default='results', help='Output folder') - parser.add_argument('-s', '--outscale', type=float, default=4, help='The final upsampling scale of the image') - parser.add_argument('--suffix', type=str, default='out', help='Suffix of the restored video') - parser.add_argument('-t', '--tile', type=int, default=0, help='Tile size, 0 for no tile during testing') - parser.add_argument('--tile_pad', type=int, default=10, help='Tile padding') - parser.add_argument('--pre_pad', type=int, default=0, help='Pre padding size at each border') - parser.add_argument('--face_enhance', action='store_true', help='Use GFPGAN to enhance face') - parser.add_argument( - '--fp32', action='store_true', help='Use fp32 precision during inference. Default: fp16 (half precision).') - parser.add_argument('--fps', type=float, default=None, help='FPS of the output video') - parser.add_argument('--ffmpeg_bin', type=str, default='ffmpeg', help='The path to ffmpeg') - parser.add_argument('--extract_frame_first', action='store_true') - parser.add_argument('--num_process_per_gpu', type=int, default=1) - - parser.add_argument( - '--alpha_upsampler', - type=str, - default='realesrgan', - help='The upsampler for the alpha channels. Options: realesrgan | bicubic') - parser.add_argument( - '--ext', - type=str, - default='auto', - help='Image extension. Options: auto | jpg | png, auto means using the same extension as inputs') - args = parser.parse_args() - - args.input = args.input.rstrip('/').rstrip('\\') - os.makedirs(args.output, exist_ok=True) - - if mimetypes.guess_type(args.input)[0] is not None and mimetypes.guess_type(args.input)[0].startswith('video'): - is_video = True - else: - is_video = False - - if args.extract_frame_first and not is_video: - args.extract_frame_first = False - - run(args) - - if args.extract_frame_first: - tmp_frames_folder = osp.join(args.output, f'{args.video_name}_inp_tmp_frames') - shutil.rmtree(tmp_frames_folder) - - -if __name__ == '__main__': - main() diff --git a/spaces/aliabd/SummerTime/model/third_party/HMNet/ThirdParty/ROUGE/ROUGE-1.5.5/XML/DOM/Entity.pod b/spaces/aliabd/SummerTime/model/third_party/HMNet/ThirdParty/ROUGE/ROUGE-1.5.5/XML/DOM/Entity.pod deleted file mode 100644 index 45418e87f14ae1630a175d6e38278547fa2c9d17..0000000000000000000000000000000000000000 --- a/spaces/aliabd/SummerTime/model/third_party/HMNet/ThirdParty/ROUGE/ROUGE-1.5.5/XML/DOM/Entity.pod +++ /dev/null @@ -1,56 +0,0 @@ -=head1 NAME - -XML::DOM::Entity - An XML ENTITY in XML::DOM - -=head1 DESCRIPTION - -XML::DOM::Entity extends L. - -This node represents an Entity declaration, e.g. - - - - - -The first one is called a parameter entity and is referenced like this: %draft; -The 2nd is a (regular) entity and is referenced like this: &hatch-pic; - -=head2 METHODS - -=over 4 - -=item getNotationName - -Returns the name of the notation for the entity. - -I The DOM Spec says: For unparsed entities, the name of the -notation for the entity. For parsed entities, this is null. -(This implementation does not support unparsed entities.) - -=item getSysId - -Returns the system id, or undef. - -=item getPubId - -Returns the public id, or undef. - -=back - -=head2 Additional methods not in the DOM Spec - -=over 4 - -=item isParameterEntity - -Whether it is a parameter entity (%ent;) or not (&ent;) - -=item getValue - -Returns the entity value. - -=item getNdata - -Returns the NDATA declaration (for general unparsed entities), or undef. - -=back diff --git a/spaces/aliabid94/reverse_audio/README.md b/spaces/aliabid94/reverse_audio/README.md deleted file mode 100644 index 7d654d6393c5f2f9cd9617f9e9b4b2955841088d..0000000000000000000000000000000000000000 --- a/spaces/aliabid94/reverse_audio/README.md +++ /dev/null @@ -1,10 +0,0 @@ ---- -title: reverse_audio -app_file: run.py -emoji: 🤯 -colorFrom: indigo -colorTo: indigo -sdk: gradio -python_version: 3.9 -sdk_version: 3.31.0 ---- diff --git a/spaces/allknowingroger/Image-Models-Test128/app.py b/spaces/allknowingroger/Image-Models-Test128/app.py deleted file mode 100644 index 8305800eefa222ebeaf8f85e538d800889761df4..0000000000000000000000000000000000000000 --- a/spaces/allknowingroger/Image-Models-Test128/app.py +++ /dev/null @@ -1,144 +0,0 @@ -import gradio as gr -# import os -# import sys -# from pathlib import Path -import time - -models =[ - "MakAttack/6540dc2059df1fe983c03af2", - "notbellan/Abelmodel", - "tensor-diffusion/melaura-v1-1", - "Charnx2/lora-trained-xl", - "digiplay/Sudachi_diffusers", - "Yntec/OpenGenDiffusers", - "thiru9330/thiru_atmdl_SDXL", - "digiplay/2K-VAE", - "LinoyTsaban/huggy_v20", -] - - -model_functions = {} -model_idx = 1 -for model_path in models: - try: - model_functions[model_idx] = gr.Interface.load(f"models/{model_path}", live=False, preprocess=True, postprocess=False) - except Exception as error: - def the_fn(txt): - return None - model_functions[model_idx] = gr.Interface(fn=the_fn, inputs=["text"], outputs=["image"]) - model_idx+=1 - - -def send_it_idx(idx): - def send_it_fn(prompt): - output = (model_functions.get(str(idx)) or model_functions.get(str(1)))(prompt) - return output - return send_it_fn - -def get_prompts(prompt_text): - return prompt_text - -def clear_it(val): - if int(val) != 0: - val = 0 - else: - val = 0 - pass - return val - -def all_task_end(cnt,t_stamp): - to = t_stamp + 60 - et = time.time() - if et > to and t_stamp != 0: - d = gr.update(value=0) - tog = gr.update(value=1) - #print(f'to: {to} et: {et}') - else: - if cnt != 0: - d = gr.update(value=et) - else: - d = gr.update(value=0) - tog = gr.update(value=0) - #print (f'passing: to: {to} et: {et}') - pass - return d, tog - -def all_task_start(): - print("\n\n\n\n\n\n\n") - t = time.gmtime() - t_stamp = time.time() - current_time = time.strftime("%H:%M:%S", t) - return gr.update(value=t_stamp), gr.update(value=t_stamp), gr.update(value=0) - -def clear_fn(): - nn = len(models) - return tuple([None, *[None for _ in range(nn)]]) - - - -with gr.Blocks(title="SD Models") as my_interface: - with gr.Column(scale=12): - # with gr.Row(): - # gr.Markdown("""- Primary prompt: 你想画的内容(英文单词,如 a cat, 加英文逗号效果更好;点 Improve 按钮进行完善)\n- Real prompt: 完善后的提示词,出现后再点右边的 Run 按钮开始运行""") - with gr.Row(): - with gr.Row(scale=6): - primary_prompt=gr.Textbox(label="Prompt", value="") - # real_prompt=gr.Textbox(label="Real prompt") - with gr.Row(scale=6): - # improve_prompts_btn=gr.Button("Improve") - with gr.Row(): - run=gr.Button("Run",variant="primary") - clear_btn=gr.Button("Clear") - with gr.Row(): - sd_outputs = {} - model_idx = 1 - for model_path in models: - with gr.Column(scale=3, min_width=320): - with gr.Box(): - sd_outputs[model_idx] = gr.Image(label=model_path) - pass - model_idx += 1 - pass - pass - - with gr.Row(visible=False): - start_box=gr.Number(interactive=False) - end_box=gr.Number(interactive=False) - tog_box=gr.Textbox(value=0,interactive=False) - - start_box.change( - all_task_end, - [start_box, end_box], - [start_box, tog_box], - every=1, - show_progress=False) - - primary_prompt.submit(all_task_start, None, [start_box, end_box, tog_box]) - run.click(all_task_start, None, [start_box, end_box, tog_box]) - runs_dict = {} - model_idx = 1 - for model_path in models: - runs_dict[model_idx] = run.click(model_functions[model_idx], inputs=[primary_prompt], outputs=[sd_outputs[model_idx]]) - model_idx += 1 - pass - pass - - # improve_prompts_btn_clicked=improve_prompts_btn.click( - # get_prompts, - # inputs=[primary_prompt], - # outputs=[primary_prompt], - # cancels=list(runs_dict.values())) - clear_btn.click( - clear_fn, - None, - [primary_prompt, *list(sd_outputs.values())], - cancels=[*list(runs_dict.values())]) - tog_box.change( - clear_it, - tog_box, - tog_box, - cancels=[*list(runs_dict.values())]) - -my_interface.queue(concurrency_count=600, status_update_rate=1) -my_interface.launch(inline=True, show_api=False) - \ No newline at end of file diff --git a/spaces/alvanlii/domain-expansion/expansion_utils/text_templates.py b/spaces/alvanlii/domain-expansion/expansion_utils/text_templates.py deleted file mode 100644 index 7758e093c2369884cc2a8781dc26b8112fe8a48c..0000000000000000000000000000000000000000 --- a/spaces/alvanlii/domain-expansion/expansion_utils/text_templates.py +++ /dev/null @@ -1,131 +0,0 @@ -# Taken from https://github.com/rinongal/StyleGAN-nada. - -imagenet_templates = [ - 'a bad photo of a {}.', - 'a sculpture of a {}.', - 'a photo of the hard to see {}.', - 'a low resolution photo of the {}.', - 'a rendering of a {}.', - 'graffiti of a {}.', - 'a bad photo of the {}.', - 'a cropped photo of the {}.', - 'a tattoo of a {}.', - 'the embroidered {}.', - 'a photo of a hard to see {}.', - 'a bright photo of a {}.', - 'a photo of a clean {}.', - 'a photo of a dirty {}.', - 'a dark photo of the {}.', - 'a drawing of a {}.', - 'a photo of my {}.', - 'the plastic {}.', - 'a photo of the cool {}.', - 'a close-up photo of a {}.', - 'a black and white photo of the {}.', - 'a painting of the {}.', - 'a painting of a {}.', - 'a pixelated photo of the {}.', - 'a sculpture of the {}.', - 'a bright photo of the {}.', - 'a cropped photo of a {}.', - 'a plastic {}.', - 'a photo of the dirty {}.', - 'a jpeg corrupted photo of a {}.', - 'a blurry photo of the {}.', - 'a photo of the {}.', - 'a good photo of the {}.', - 'a rendering of the {}.', - 'a {} in a video game.', - 'a photo of one {}.', - 'a doodle of a {}.', - 'a close-up photo of the {}.', - 'a photo of a {}.', - 'the origami {}.', - 'the {} in a video game.', - 'a sketch of a {}.', - 'a doodle of the {}.', - 'a origami {}.', - 'a low resolution photo of a {}.', - 'the toy {}.', - 'a rendition of the {}.', - 'a photo of the clean {}.', - 'a photo of a large {}.', - 'a rendition of a {}.', - 'a photo of a nice {}.', - 'a photo of a weird {}.', - 'a blurry photo of a {}.', - 'a cartoon {}.', - 'art of a {}.', - 'a sketch of the {}.', - 'a embroidered {}.', - 'a pixelated photo of a {}.', - 'itap of the {}.', - 'a jpeg corrupted photo of the {}.', - 'a good photo of a {}.', - 'a plushie {}.', - 'a photo of the nice {}.', - 'a photo of the small {}.', - 'a photo of the weird {}.', - 'the cartoon {}.', - 'art of the {}.', - 'a drawing of the {}.', - 'a photo of the large {}.', - 'a black and white photo of a {}.', - 'the plushie {}.', - 'a dark photo of a {}.', - 'itap of a {}.', - 'graffiti of the {}.', - 'a toy {}.', - 'itap of my {}.', - 'a photo of a cool {}.', - 'a photo of a small {}.', - 'a tattoo of the {}.', -] - -part_templates = [ - 'the paw of a {}.', - 'the nose of a {}.', - 'the eye of the {}.', - 'the ears of a {}.', - 'an eye of a {}.', - 'the tongue of a {}.', - 'the fur of the {}.', - 'colorful {} fur.', - 'a snout of a {}.', - 'the teeth of the {}.', - 'the {}s fangs.', - 'a claw of the {}.', - 'the face of the {}', - 'a neck of a {}', - 'the head of the {}', -] - -imagenet_templates_small = [ - 'a photo of a {}.', - 'a rendering of a {}.', - 'a cropped photo of the {}.', - 'the photo of a {}.', - 'a photo of a clean {}.', - 'a photo of a dirty {}.', - 'a dark photo of the {}.', - 'a photo of my {}.', - 'a photo of the cool {}.', - 'a close-up photo of a {}.', - 'a bright photo of the {}.', - 'a cropped photo of a {}.', - 'a photo of the {}.', - 'a good photo of the {}.', - 'a photo of one {}.', - 'a close-up photo of the {}.', - 'a rendition of the {}.', - 'a photo of the clean {}.', - 'a rendition of a {}.', - 'a photo of a nice {}.', - 'a good photo of a {}.', - 'a photo of the nice {}.', - 'a photo of the small {}.', - 'a photo of the weird {}.', - 'a photo of the large {}.', - 'a photo of a cool {}.', - 'a photo of a small {}.', -] \ No newline at end of file diff --git a/spaces/amankishore/sjc/sd1/ldm/modules/losses/contperceptual.py b/spaces/amankishore/sjc/sd1/ldm/modules/losses/contperceptual.py deleted file mode 100644 index 672c1e32a1389def02461c0781339681060c540e..0000000000000000000000000000000000000000 --- a/spaces/amankishore/sjc/sd1/ldm/modules/losses/contperceptual.py +++ /dev/null @@ -1,111 +0,0 @@ -import torch -import torch.nn as nn - -from taming.modules.losses.vqperceptual import * # TODO: taming dependency yes/no? - - -class LPIPSWithDiscriminator(nn.Module): - def __init__(self, disc_start, logvar_init=0.0, kl_weight=1.0, pixelloss_weight=1.0, - disc_num_layers=3, disc_in_channels=3, disc_factor=1.0, disc_weight=1.0, - perceptual_weight=1.0, use_actnorm=False, disc_conditional=False, - disc_loss="hinge"): - - super().__init__() - assert disc_loss in ["hinge", "vanilla"] - self.kl_weight = kl_weight - self.pixel_weight = pixelloss_weight - self.perceptual_loss = LPIPS().eval() - self.perceptual_weight = perceptual_weight - # output log variance - self.logvar = nn.Parameter(torch.ones(size=()) * logvar_init) - - self.discriminator = NLayerDiscriminator(input_nc=disc_in_channels, - n_layers=disc_num_layers, - use_actnorm=use_actnorm - ).apply(weights_init) - self.discriminator_iter_start = disc_start - self.disc_loss = hinge_d_loss if disc_loss == "hinge" else vanilla_d_loss - self.disc_factor = disc_factor - self.discriminator_weight = disc_weight - self.disc_conditional = disc_conditional - - def calculate_adaptive_weight(self, nll_loss, g_loss, last_layer=None): - if last_layer is not None: - nll_grads = torch.autograd.grad(nll_loss, last_layer, retain_graph=True)[0] - g_grads = torch.autograd.grad(g_loss, last_layer, retain_graph=True)[0] - else: - nll_grads = torch.autograd.grad(nll_loss, self.last_layer[0], retain_graph=True)[0] - g_grads = torch.autograd.grad(g_loss, self.last_layer[0], retain_graph=True)[0] - - d_weight = torch.norm(nll_grads) / (torch.norm(g_grads) + 1e-4) - d_weight = torch.clamp(d_weight, 0.0, 1e4).detach() - d_weight = d_weight * self.discriminator_weight - return d_weight - - def forward(self, inputs, reconstructions, posteriors, optimizer_idx, - global_step, last_layer=None, cond=None, split="train", - weights=None): - rec_loss = torch.abs(inputs.contiguous() - reconstructions.contiguous()) - if self.perceptual_weight > 0: - p_loss = self.perceptual_loss(inputs.contiguous(), reconstructions.contiguous()) - rec_loss = rec_loss + self.perceptual_weight * p_loss - - nll_loss = rec_loss / torch.exp(self.logvar) + self.logvar - weighted_nll_loss = nll_loss - if weights is not None: - weighted_nll_loss = weights*nll_loss - weighted_nll_loss = torch.sum(weighted_nll_loss) / weighted_nll_loss.shape[0] - nll_loss = torch.sum(nll_loss) / nll_loss.shape[0] - kl_loss = posteriors.kl() - kl_loss = torch.sum(kl_loss) / kl_loss.shape[0] - - # now the GAN part - if optimizer_idx == 0: - # generator update - if cond is None: - assert not self.disc_conditional - logits_fake = self.discriminator(reconstructions.contiguous()) - else: - assert self.disc_conditional - logits_fake = self.discriminator(torch.cat((reconstructions.contiguous(), cond), dim=1)) - g_loss = -torch.mean(logits_fake) - - if self.disc_factor > 0.0: - try: - d_weight = self.calculate_adaptive_weight(nll_loss, g_loss, last_layer=last_layer) - except RuntimeError: - assert not self.training - d_weight = torch.tensor(0.0) - else: - d_weight = torch.tensor(0.0) - - disc_factor = adopt_weight(self.disc_factor, global_step, threshold=self.discriminator_iter_start) - loss = weighted_nll_loss + self.kl_weight * kl_loss + d_weight * disc_factor * g_loss - - log = {"{}/total_loss".format(split): loss.clone().detach().mean(), "{}/logvar".format(split): self.logvar.detach(), - "{}/kl_loss".format(split): kl_loss.detach().mean(), "{}/nll_loss".format(split): nll_loss.detach().mean(), - "{}/rec_loss".format(split): rec_loss.detach().mean(), - "{}/d_weight".format(split): d_weight.detach(), - "{}/disc_factor".format(split): torch.tensor(disc_factor), - "{}/g_loss".format(split): g_loss.detach().mean(), - } - return loss, log - - if optimizer_idx == 1: - # second pass for discriminator update - if cond is None: - logits_real = self.discriminator(inputs.contiguous().detach()) - logits_fake = self.discriminator(reconstructions.contiguous().detach()) - else: - logits_real = self.discriminator(torch.cat((inputs.contiguous().detach(), cond), dim=1)) - logits_fake = self.discriminator(torch.cat((reconstructions.contiguous().detach(), cond), dim=1)) - - disc_factor = adopt_weight(self.disc_factor, global_step, threshold=self.discriminator_iter_start) - d_loss = disc_factor * self.disc_loss(logits_real, logits_fake) - - log = {"{}/disc_loss".format(split): d_loss.clone().detach().mean(), - "{}/logits_real".format(split): logits_real.detach().mean(), - "{}/logits_fake".format(split): logits_fake.detach().mean() - } - return d_loss, log - diff --git a/spaces/amish1729/LFUNet/keras_vggface/__init__.py b/spaces/amish1729/LFUNet/keras_vggface/__init__.py deleted file mode 100644 index edc5547df9c34148a78d984ceacea57304a06344..0000000000000000000000000000000000000000 --- a/spaces/amish1729/LFUNet/keras_vggface/__init__.py +++ /dev/null @@ -1,2 +0,0 @@ -from keras_vggface.vggface import VGGFace -from keras_vggface.version import __version__ \ No newline at end of file diff --git a/spaces/anaclaudia13ct/insect_detection/utils/aws/__init__.py b/spaces/anaclaudia13ct/insect_detection/utils/aws/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/analist/qa_table/README.md b/spaces/analist/qa_table/README.md deleted file mode 100644 index 4d4b9556f4bda5ff800957662493988a59bf736e..0000000000000000000000000000000000000000 --- a/spaces/analist/qa_table/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Qa Table -emoji: 👁 -colorFrom: blue -colorTo: yellow -sdk: streamlit -sdk_version: 1.25.0 -app_file: app.py -pinned: false -license: apache-2.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/andersab/QuijoBERT/quijoBERT.py b/spaces/andersab/QuijoBERT/quijoBERT.py deleted file mode 100644 index f05bbbb9a45c82a523bea018468e9974acd1f48b..0000000000000000000000000000000000000000 --- a/spaces/andersab/QuijoBERT/quijoBERT.py +++ /dev/null @@ -1,113 +0,0 @@ - - -from transformers import AutoTokenizer, AutoModelForMaskedLM, RobertaConfig , RobertaTokenizer,RobertaForMaskedLM, DataCollatorForLanguageModeling, LineByLineTextDataset, Trainer, TrainingArguments - - -from pathlib import Path -from tokenizers import ByteLevelBPETokenizer -from tokenizers.implementations import ByteLevelBPETokenizer -from tokenizers.processors import BertProcessing -import torch -from torchinfo import summary - - -import os - -paths = [str(x) for x in Path(".").glob("**/el_*.txt")] -print(paths) -# Initialize a tokenizer -tokenizer = ByteLevelBPETokenizer() -# Customize training -tokenizer.train(files=paths, vocab_size=52_000, min_frequency=2, -special_tokens=[ -"", -"", -"", -"", -"", -]) - - -dir_path = os.getcwd() -token_dir = os.path.join(dir_path, 'QuijoBERT') - -if not os.path.exists(token_dir): - os.makedirs(token_dir) -tokenizer.save_model('QuijoBERT') - -tokenizer = ByteLevelBPETokenizer( -"./QuijoBERT/vocab.json", -"./QuijoBERT/merges.txt", -) - -tokenizer._tokenizer.post_processor = BertProcessing( -("", tokenizer.token_to_id("")), -("", tokenizer.token_to_id("")), -) -tokenizer.enable_truncation(max_length=512) - - - -config = RobertaConfig( - vocab_size=52_000, - max_position_embeddings=514, - num_attention_heads=12, - num_hidden_layers=6, - type_vocab_size=1, - ) - -"""# Step 8: Re-creating the Tokenizer in Transformers""" - -tokenizer = RobertaTokenizer.from_pretrained("./QuijoBERT", max_length=512) - -#Initializing a Model - -model = RobertaForMaskedLM(config=config) -#In case we want to recover the after a crash -#model = RobertaForMaskedLM.from_pretrained("./QuijoBERT/Checkpoint-xxxxx") - - -#Tensorflow -print(model) -#Pytorch -summary(model) - - -dataset = LineByLineTextDataset( - tokenizer=tokenizer, - file_path="./el_quijote.txt", - block_size=128, - ) - - -#Defining a Data Collator - -data_collator = DataCollatorForLanguageModeling( - tokenizer=tokenizer, mlm=True, mlm_probability=0.15 -) - -# Initializing the Trainer Object -training_args = TrainingArguments( - output_dir="./QuijoBERT", - overwrite_output_dir=True, - num_train_epochs=1, - per_device_train_batch_size=64, - save_steps=1000, - save_total_limit=2, - ) -trainer = Trainer( - model=model, - args=training_args, - data_collator=data_collator, - train_dataset=dataset, -) - - -#Training the Model -print('aqui') -trainer.train() -trainer.save_model("./QuijoBERT") - -#Saving the Final Model(+tokenizer + config) to disk -trainer.save_model("./QuijoBERT") - diff --git a/spaces/arnikdehnavi/citationPrediction/app.py b/spaces/arnikdehnavi/citationPrediction/app.py deleted file mode 100644 index de917f2fd8872c5d7970600a7f793812fb2f6a78..0000000000000000000000000000000000000000 --- a/spaces/arnikdehnavi/citationPrediction/app.py +++ /dev/null @@ -1,86 +0,0 @@ -import pandas as pd -from sklearn.feature_extraction.text import CountVectorizer -from tensorflow.keras.models import load_model -from catboost import CatBoostRegressor -import joblib -import plotly.express as px -import tensorflow as tf -from tensorflow.keras.preprocessing.text import Tokenizer -from sklearn.model_selection import train_test_split as tts -from tensorflow.keras.preprocessing.sequence import pad_sequences -import numpy as np -import streamlit as st - -@st.cache_resource -def model_lstm(): - model_lstm=load_model('citation_new.h5') - model_lstm.compile(optimizer='adam',loss=tf.keras.losses.MeanSquaredError(),metrics='mse') - return model_lstm -@st.cache_resource -def model_cat(): - model_cat=joblib.load('cat_feature.json') - return model_cat -@st.cache_resource -def CountVectorizer(): - CountVectorizer1=joblib.load('CountVectorizer.json') - return CountVectorizer1 - -@st.cache_data -def data(): - df=pd.read_excel('in.xlsx') - return df - -model_lstm=model_lstm() -model_cat=model_cat() -count=CountVectorizer() -df=data() - - - - -def tkn(): - paper=df['paper'] - t=tts(paper,train_size=0.8,random_state=10) - max_word=1000 - tkn=Tokenizer(num_words=max_word) - tkn.fit_on_texts(t[0]) - return tkn.fit_on_texts(t[0]) ,tkn -tkn_fit=tkn()[0] -tkn=tkn()[1] - -def cite(p,i): - - x1=count.transform(p) - x2=i - x1=x1.toarray() - x1=x1.tolist()[0] - x1.extend(x2) - cat=model_cat.predict(x1) - x2.append(cat) - - seq=tkn.texts_to_sequences(p) - max_len=100 - padded_docs=pad_sequences(seq,padding='pre',maxlen=max_len) - final_predict=model_lstm.predict([padded_docs,np.array([x2])]) - if final_predict>=0: - return(int(final_predict)) - else: - return(0) -st.sidebar.title('Title of your paper:') - -msg=st.sidebar.text_area('Title', value='investigation of acoustic and visual features for acoustic scene classifications') -st.sidebar.title('Journal information:') -st.sidebar.markdown('https://www.scimagojr.com/journalrank.php') -h=st.sidebar.number_input('H index',value=10) -c_d=st.sidebar.number_input('Cites / Doc (2years)',value=2) -r_d=st.sidebar.number_input('Ref. / Doc',value=1) - -citation=[] -for i in [1,2,3,4,5,6,7,8,9,10]: - journal=[i,h,c_d,r_d] - out=cite([msg],journal) - citation.append(out) - -plot=pd.DataFrame({'year':[1,2,3,4,5,6,7,8,9,10],'number of citations':citation}) -fig=px.line(plot,x='year',y='number of citations',title=msg) -st.plotly_chart(fig) \ No newline at end of file diff --git a/spaces/artificialguybr/video-dubbing/TTS/TTS/tts/layers/tortoise/diffusion.py b/spaces/artificialguybr/video-dubbing/TTS/TTS/tts/layers/tortoise/diffusion.py deleted file mode 100644 index cb350af779ede3185f7fa71ca29f8b62f9691b30..0000000000000000000000000000000000000000 --- a/spaces/artificialguybr/video-dubbing/TTS/TTS/tts/layers/tortoise/diffusion.py +++ /dev/null @@ -1,1234 +0,0 @@ -""" -This is an almost carbon copy of gaussian_diffusion.py from OpenAI's ImprovedDiffusion repo, which itself: - -This code started out as a PyTorch port of Ho et al's diffusion models: -https://github.com/hojonathanho/diffusion/blob/1e0dceb3b3495bbe19116a5e1b3596cd0706c543/diffusion_tf/diffusion_utils_2.py - -Docstrings have been added, as well as DDIM sampling and a new collection of beta schedules. -""" - -import enum -import math - -import numpy as np -import torch -import torch as th -from k_diffusion.sampling import sample_dpmpp_2m, sample_euler_ancestral -from tqdm import tqdm - -from TTS.tts.layers.tortoise.dpm_solver import DPM_Solver, NoiseScheduleVP, model_wrapper - -K_DIFFUSION_SAMPLERS = {"k_euler_a": sample_euler_ancestral, "dpm++2m": sample_dpmpp_2m} -SAMPLERS = ["dpm++2m", "p", "ddim"] - - -def normal_kl(mean1, logvar1, mean2, logvar2): - """ - Compute the KL divergence between two gaussians. - - Shapes are automatically broadcasted, so batches can be compared to - scalars, among other use cases. - """ - tensor = None - for obj in (mean1, logvar1, mean2, logvar2): - if isinstance(obj, th.Tensor): - tensor = obj - break - assert tensor is not None, "at least one argument must be a Tensor" - - # Force variances to be Tensors. Broadcasting helps convert scalars to - # Tensors, but it does not work for th.exp(). - logvar1, logvar2 = [x if isinstance(x, th.Tensor) else th.tensor(x).to(tensor) for x in (logvar1, logvar2)] - - return 0.5 * (-1.0 + logvar2 - logvar1 + th.exp(logvar1 - logvar2) + ((mean1 - mean2) ** 2) * th.exp(-logvar2)) - - -def approx_standard_normal_cdf(x): - """ - A fast approximation of the cumulative distribution function of the - standard normal. - """ - return 0.5 * (1.0 + th.tanh(np.sqrt(2.0 / np.pi) * (x + 0.044715 * th.pow(x, 3)))) - - -def discretized_gaussian_log_likelihood(x, *, means, log_scales): - """ - Compute the log-likelihood of a Gaussian distribution discretizing to a - given image. - - :param x: the target images. It is assumed that this was uint8 values, - rescaled to the range [-1, 1]. - :param means: the Gaussian mean Tensor. - :param log_scales: the Gaussian log stddev Tensor. - :return: a tensor like x of log probabilities (in nats). - """ - assert x.shape == means.shape == log_scales.shape - centered_x = x - means - inv_stdv = th.exp(-log_scales) - plus_in = inv_stdv * (centered_x + 1.0 / 255.0) - cdf_plus = approx_standard_normal_cdf(plus_in) - min_in = inv_stdv * (centered_x - 1.0 / 255.0) - cdf_min = approx_standard_normal_cdf(min_in) - log_cdf_plus = th.log(cdf_plus.clamp(min=1e-12)) - log_one_minus_cdf_min = th.log((1.0 - cdf_min).clamp(min=1e-12)) - cdf_delta = cdf_plus - cdf_min - log_probs = th.where( - x < -0.999, - log_cdf_plus, - th.where(x > 0.999, log_one_minus_cdf_min, th.log(cdf_delta.clamp(min=1e-12))), - ) - assert log_probs.shape == x.shape - return log_probs - - -def mean_flat(tensor): - """ - Take the mean over all non-batch dimensions. - """ - return tensor.mean(dim=list(range(1, len(tensor.shape)))) - - -def get_named_beta_schedule(schedule_name, num_diffusion_timesteps): - """ - Get a pre-defined beta schedule for the given name. - - The beta schedule library consists of beta schedules which remain similar - in the limit of num_diffusion_timesteps. - Beta schedules may be added, but should not be removed or changed once - they are committed to maintain backwards compatibility. - """ - if schedule_name == "linear": - # Linear schedule from Ho et al, extended to work for any number of - # diffusion steps. - scale = 1000 / num_diffusion_timesteps - beta_start = scale * 0.0001 - beta_end = scale * 0.02 - return np.linspace(beta_start, beta_end, num_diffusion_timesteps, dtype=np.float64) - elif schedule_name == "cosine": - return betas_for_alpha_bar( - num_diffusion_timesteps, - lambda t: math.cos((t + 0.008) / 1.008 * math.pi / 2) ** 2, - ) - else: - raise NotImplementedError(f"unknown beta schedule: {schedule_name}") - - -def betas_for_alpha_bar(num_diffusion_timesteps, alpha_bar, max_beta=0.999): - """ - Create a beta schedule that discretizes the given alpha_t_bar function, - which defines the cumulative product of (1-beta) over time from t = [0,1]. - - :param num_diffusion_timesteps: the number of betas to produce. - :param alpha_bar: a lambda that takes an argument t from 0 to 1 and - produces the cumulative product of (1-beta) up to that - part of the diffusion process. - :param max_beta: the maximum beta to use; use values lower than 1 to - prevent singularities. - """ - betas = [] - for i in range(num_diffusion_timesteps): - t1 = i / num_diffusion_timesteps - t2 = (i + 1) / num_diffusion_timesteps - betas.append(min(1 - alpha_bar(t2) / alpha_bar(t1), max_beta)) - return np.array(betas) - - -class ModelMeanType(enum.Enum): - """ - Which type of output the model predicts. - """ - - PREVIOUS_X = "previous_x" # the model predicts x_{t-1} - START_X = "start_x" # the model predicts x_0 - EPSILON = "epsilon" # the model predicts epsilon - - -class ModelVarType(enum.Enum): - """ - What is used as the model's output variance. - - The LEARNED_RANGE option has been added to allow the model to predict - values between FIXED_SMALL and FIXED_LARGE, making its job easier. - """ - - LEARNED = "learned" - FIXED_SMALL = "fixed_small" - FIXED_LARGE = "fixed_large" - LEARNED_RANGE = "learned_range" - - -class LossType(enum.Enum): - MSE = "mse" # use raw MSE loss (and KL when learning variances) - RESCALED_MSE = "rescaled_mse" # use raw MSE loss (with RESCALED_KL when learning variances) - KL = "kl" # use the variational lower-bound - RESCALED_KL = "rescaled_kl" # like KL, but rescale to estimate the full VLB - - def is_vb(self): - return self == LossType.KL or self == LossType.RESCALED_KL - - -class GaussianDiffusion: - """ - Utilities for training and sampling diffusion models. - - Ported directly from here, and then adapted over time to further experimentation. - https://github.com/hojonathanho/diffusion/blob/1e0dceb3b3495bbe19116a5e1b3596cd0706c543/diffusion_tf/diffusion_utils_2.py#L42 - - :param betas: a 1-D numpy array of betas for each diffusion timestep, - starting at T and going to 1. - :param model_mean_type: a ModelMeanType determining what the model outputs. - :param model_var_type: a ModelVarType determining how variance is output. - :param loss_type: a LossType determining the loss function to use. - :param rescale_timesteps: if True, pass floating point timesteps into the - model so that they are always scaled like in the - original paper (0 to 1000). - """ - - def __init__( - self, - *, - betas, - model_mean_type, - model_var_type, - loss_type, - rescale_timesteps=False, - conditioning_free=False, - conditioning_free_k=1, - ramp_conditioning_free=True, - sampler="p", - ): - self.sampler = sampler - self.model_mean_type = ModelMeanType(model_mean_type) - self.model_var_type = ModelVarType(model_var_type) - self.loss_type = LossType(loss_type) - self.rescale_timesteps = rescale_timesteps - self.conditioning_free = conditioning_free - self.conditioning_free_k = conditioning_free_k - self.ramp_conditioning_free = ramp_conditioning_free - - # Use float64 for accuracy. - betas = np.array(betas, dtype=np.float64) - self.betas = betas - assert len(betas.shape) == 1, "betas must be 1-D" - assert (betas > 0).all() and (betas <= 1).all() - - self.num_timesteps = int(betas.shape[0]) - - alphas = 1.0 - betas - self.alphas_cumprod = np.cumprod(alphas, axis=0) - self.alphas_cumprod_prev = np.append(1.0, self.alphas_cumprod[:-1]) - self.alphas_cumprod_next = np.append(self.alphas_cumprod[1:], 0.0) - assert self.alphas_cumprod_prev.shape == (self.num_timesteps,) - - # calculations for diffusion q(x_t | x_{t-1}) and others - self.sqrt_alphas_cumprod = np.sqrt(self.alphas_cumprod) - self.sqrt_one_minus_alphas_cumprod = np.sqrt(1.0 - self.alphas_cumprod) - self.log_one_minus_alphas_cumprod = np.log(1.0 - self.alphas_cumprod) - self.sqrt_recip_alphas_cumprod = np.sqrt(1.0 / self.alphas_cumprod) - self.sqrt_recipm1_alphas_cumprod = np.sqrt(1.0 / self.alphas_cumprod - 1) - - # calculations for posterior q(x_{t-1} | x_t, x_0) - self.posterior_variance = betas * (1.0 - self.alphas_cumprod_prev) / (1.0 - self.alphas_cumprod) - # log calculation clipped because the posterior variance is 0 at the - # beginning of the diffusion chain. - self.posterior_log_variance_clipped = np.log(np.append(self.posterior_variance[1], self.posterior_variance[1:])) - self.posterior_mean_coef1 = betas * np.sqrt(self.alphas_cumprod_prev) / (1.0 - self.alphas_cumprod) - self.posterior_mean_coef2 = (1.0 - self.alphas_cumprod_prev) * np.sqrt(alphas) / (1.0 - self.alphas_cumprod) - - def q_mean_variance(self, x_start, t): - """ - Get the distribution q(x_t | x_0). - - :param x_start: the [N x C x ...] tensor of noiseless inputs. - :param t: the number of diffusion steps (minus 1). Here, 0 means one step. - :return: A tuple (mean, variance, log_variance), all of x_start's shape. - """ - mean = _extract_into_tensor(self.sqrt_alphas_cumprod, t, x_start.shape) * x_start - variance = _extract_into_tensor(1.0 - self.alphas_cumprod, t, x_start.shape) - log_variance = _extract_into_tensor(self.log_one_minus_alphas_cumprod, t, x_start.shape) - return mean, variance, log_variance - - def q_sample(self, x_start, t, noise=None): - """ - Diffuse the data for a given number of diffusion steps. - - In other words, sample from q(x_t | x_0). - - :param x_start: the initial data batch. - :param t: the number of diffusion steps (minus 1). Here, 0 means one step. - :param noise: if specified, the split-out normal noise. - :return: A noisy version of x_start. - """ - if noise is None: - noise = th.randn_like(x_start) - assert noise.shape == x_start.shape - return ( - _extract_into_tensor(self.sqrt_alphas_cumprod, t, x_start.shape) * x_start - + _extract_into_tensor(self.sqrt_one_minus_alphas_cumprod, t, x_start.shape) * noise - ) - - def q_posterior_mean_variance(self, x_start, x_t, t): - """ - Compute the mean and variance of the diffusion posterior: - - q(x_{t-1} | x_t, x_0) - - """ - assert x_start.shape == x_t.shape - posterior_mean = ( - _extract_into_tensor(self.posterior_mean_coef1, t, x_t.shape) * x_start - + _extract_into_tensor(self.posterior_mean_coef2, t, x_t.shape) * x_t - ) - posterior_variance = _extract_into_tensor(self.posterior_variance, t, x_t.shape) - posterior_log_variance_clipped = _extract_into_tensor(self.posterior_log_variance_clipped, t, x_t.shape) - assert ( - posterior_mean.shape[0] - == posterior_variance.shape[0] - == posterior_log_variance_clipped.shape[0] - == x_start.shape[0] - ) - return posterior_mean, posterior_variance, posterior_log_variance_clipped - - def p_mean_variance(self, model, x, t, clip_denoised=True, denoised_fn=None, model_kwargs=None): - """ - Apply the model to get p(x_{t-1} | x_t), as well as a prediction of - the initial x, x_0. - - :param model: the model, which takes a signal and a batch of timesteps - as input. - :param x: the [N x C x ...] tensor at time t. - :param t: a 1-D Tensor of timesteps. - :param clip_denoised: if True, clip the denoised signal into [-1, 1]. - :param denoised_fn: if not None, a function which applies to the - x_start prediction before it is used to sample. Applies before - clip_denoised. - :param model_kwargs: if not None, a dict of extra keyword arguments to - pass to the model. This can be used for conditioning. - :return: a dict with the following keys: - - 'mean': the model mean output. - - 'variance': the model variance output. - - 'log_variance': the log of 'variance'. - - 'pred_xstart': the prediction for x_0. - """ - if model_kwargs is None: - model_kwargs = {} - - B, C = x.shape[:2] - assert t.shape == (B,) - model_output = model(x, self._scale_timesteps(t), **model_kwargs) - if self.conditioning_free: - model_output_no_conditioning = model(x, self._scale_timesteps(t), conditioning_free=True, **model_kwargs) - - if self.model_var_type in [ModelVarType.LEARNED, ModelVarType.LEARNED_RANGE]: - assert model_output.shape == (B, C * 2, *x.shape[2:]) - model_output, model_var_values = th.split(model_output, C, dim=1) - if self.conditioning_free: - model_output_no_conditioning, _ = th.split(model_output_no_conditioning, C, dim=1) - if self.model_var_type == ModelVarType.LEARNED: - model_log_variance = model_var_values - model_variance = th.exp(model_log_variance) - else: - min_log = _extract_into_tensor(self.posterior_log_variance_clipped, t, x.shape) - max_log = _extract_into_tensor(np.log(self.betas), t, x.shape) - # The model_var_values is [-1, 1] for [min_var, max_var]. - frac = (model_var_values + 1) / 2 - model_log_variance = frac * max_log + (1 - frac) * min_log - model_variance = th.exp(model_log_variance) - else: - model_variance, model_log_variance = { - # for fixedlarge, we set the initial (log-)variance like so - # to get a better decoder log likelihood. - ModelVarType.FIXED_LARGE: ( - np.append(self.posterior_variance[1], self.betas[1:]), - np.log(np.append(self.posterior_variance[1], self.betas[1:])), - ), - ModelVarType.FIXED_SMALL: ( - self.posterior_variance, - self.posterior_log_variance_clipped, - ), - }[self.model_var_type] - model_variance = _extract_into_tensor(model_variance, t, x.shape) - model_log_variance = _extract_into_tensor(model_log_variance, t, x.shape) - - if self.conditioning_free: - if self.ramp_conditioning_free: - assert t.shape[0] == 1 # This should only be used in inference. - cfk = self.conditioning_free_k * (1 - self._scale_timesteps(t)[0].item() / self.num_timesteps) - else: - cfk = self.conditioning_free_k - model_output = (1 + cfk) * model_output - cfk * model_output_no_conditioning - - def process_xstart(x): - if denoised_fn is not None: - x = denoised_fn(x) - if clip_denoised: - return x.clamp(-1, 1) - return x - - if self.model_mean_type == ModelMeanType.PREVIOUS_X: - pred_xstart = process_xstart(self._predict_xstart_from_xprev(x_t=x, t=t, xprev=model_output)) - model_mean = model_output - elif self.model_mean_type in [ModelMeanType.START_X, ModelMeanType.EPSILON]: - if self.model_mean_type == ModelMeanType.START_X: - pred_xstart = process_xstart(model_output) - else: - pred_xstart = process_xstart(self._predict_xstart_from_eps(x_t=x, t=t, eps=model_output)) - model_mean, _, _ = self.q_posterior_mean_variance(x_start=pred_xstart, x_t=x, t=t) - else: - raise NotImplementedError(self.model_mean_type) - - assert model_mean.shape == model_log_variance.shape == pred_xstart.shape == x.shape - return { - "mean": model_mean, - "variance": model_variance, - "log_variance": model_log_variance, - "pred_xstart": pred_xstart, - } - - def _predict_xstart_from_eps(self, x_t, t, eps): - assert x_t.shape == eps.shape - return ( - _extract_into_tensor(self.sqrt_recip_alphas_cumprod, t, x_t.shape) * x_t - - _extract_into_tensor(self.sqrt_recipm1_alphas_cumprod, t, x_t.shape) * eps - ) - - def _predict_xstart_from_xprev(self, x_t, t, xprev): - assert x_t.shape == xprev.shape - return ( # (xprev - coef2*x_t) / coef1 - _extract_into_tensor(1.0 / self.posterior_mean_coef1, t, x_t.shape) * xprev - - _extract_into_tensor(self.posterior_mean_coef2 / self.posterior_mean_coef1, t, x_t.shape) * x_t - ) - - def _predict_eps_from_xstart(self, x_t, t, pred_xstart): - return ( - _extract_into_tensor(self.sqrt_recip_alphas_cumprod, t, x_t.shape) * x_t - pred_xstart - ) / _extract_into_tensor(self.sqrt_recipm1_alphas_cumprod, t, x_t.shape) - - def _scale_timesteps(self, t): - if self.rescale_timesteps: - return t.float() * (1000.0 / self.num_timesteps) - return t - - def condition_mean(self, cond_fn, p_mean_var, x, t, model_kwargs=None): - """ - Compute the mean for the previous step, given a function cond_fn that - computes the gradient of a conditional log probability with respect to - x. In particular, cond_fn computes grad(log(p(y|x))), and we want to - condition on y. - - This uses the conditioning strategy from Sohl-Dickstein et al. (2015). - """ - gradient = cond_fn(x, self._scale_timesteps(t), **model_kwargs) - new_mean = p_mean_var["mean"].float() + p_mean_var["variance"] * gradient.float() - return new_mean - - def condition_score(self, cond_fn, p_mean_var, x, t, model_kwargs=None): - """ - Compute what the p_mean_variance output would have been, should the - model's score function be conditioned by cond_fn. - - See condition_mean() for details on cond_fn. - - Unlike condition_mean(), this instead uses the conditioning strategy - from Song et al (2020). - """ - alpha_bar = _extract_into_tensor(self.alphas_cumprod, t, x.shape) - - eps = self._predict_eps_from_xstart(x, t, p_mean_var["pred_xstart"]) - eps = eps - (1 - alpha_bar).sqrt() * cond_fn(x, self._scale_timesteps(t), **model_kwargs) - - out = p_mean_var.copy() - out["pred_xstart"] = self._predict_xstart_from_eps(x, t, eps) - out["mean"], _, _ = self.q_posterior_mean_variance(x_start=out["pred_xstart"], x_t=x, t=t) - return out - - def k_diffusion_sample_loop( - self, - k_sampler, - pbar, - model, - shape, - noise=None, # all given - clip_denoised=True, - denoised_fn=None, - cond_fn=None, - device=None, # ALL UNUSED - model_kwargs=None, # {'precomputed_aligned_embeddings': precomputed_embeddings}, - progress=False, # unused as well - ): - assert isinstance(model_kwargs, dict) - if device is None: - device = next(model.parameters()).device - s_in = noise.new_ones([noise.shape[0]]) - - def model_split(*args, **kwargs): - model_output = model(*args, **kwargs) - model_epsilon, model_var = th.split(model_output, model_output.shape[1] // 2, dim=1) - return model_epsilon, model_var - - # - """ - print(self.betas) - print(th.tensor(self.betas)) - noise_schedule = NoiseScheduleVP(schedule='discrete', betas=th.tensor(self.betas)) - """ - noise_schedule = NoiseScheduleVP(schedule="linear", continuous_beta_0=0.1 / 4, continuous_beta_1=20.0 / 4) - - def model_fn_prewrap(x, t, *args, **kwargs): - """ - x_in = torch.cat([x] * 2) - t_in = torch.cat([t_continuous] * 2) - c_in = torch.cat([unconditional_condition, condition]) - noise_uncond, noise = noise_pred_fn(x_in, t_in, cond=c_in).chunk(2) - print(t) - print(self.timestep_map) - exit() - """ - """ - model_output = model(x, self._scale_timesteps(t*4000), **model_kwargs) - out = self.p_mean_variance(model, x, t*4000, model_kwargs=model_kwargs) - return out['pred_xstart'] - """ - x, _ = x.chunk(2) - t, _ = (t * 1000).chunk(2) - res = torch.cat( - [ - model_split(x, t, conditioning_free=True, **model_kwargs)[0], - model_split(x, t, **model_kwargs)[0], - ] - ) - pbar.update(1) - return res - - model_fn = model_wrapper( - model_fn_prewrap, - noise_schedule, - model_type="noise", # "noise" or "x_start" or "v" or "score" - model_kwargs=model_kwargs, - guidance_type="classifier-free", - condition=th.Tensor(1), - unconditional_condition=th.Tensor(1), - guidance_scale=self.conditioning_free_k, - ) - dpm_solver = DPM_Solver(model_fn, noise_schedule, algorithm_type="dpmsolver++") - x_sample = dpm_solver.sample( - noise, - steps=self.num_timesteps, - order=2, - skip_type="time_uniform", - method="multistep", - ) - #''' - return x_sample - - def sample_loop(self, *args, **kwargs): - s = self.sampler - if s == "p": - return self.p_sample_loop(*args, **kwargs) - elif s == "ddim": - return self.ddim_sample_loop(*args, **kwargs) - elif s == "dpm++2m": - if self.conditioning_free is not True: - raise RuntimeError("cond_free must be true") - with tqdm(total=self.num_timesteps) as pbar: - return self.k_diffusion_sample_loop(K_DIFFUSION_SAMPLERS[s], pbar, *args, **kwargs) - else: - raise RuntimeError("sampler not impl") - - def p_sample( - self, - model, - x, - t, - clip_denoised=True, - denoised_fn=None, - cond_fn=None, - model_kwargs=None, - ): - """ - Sample x_{t-1} from the model at the given timestep. - - :param model: the model to sample from. - :param x: the current tensor at x_{t-1}. - :param t: the value of t, starting at 0 for the first diffusion step. - :param clip_denoised: if True, clip the x_start prediction to [-1, 1]. - :param denoised_fn: if not None, a function which applies to the - x_start prediction before it is used to sample. - :param cond_fn: if not None, this is a gradient function that acts - similarly to the model. - :param model_kwargs: if not None, a dict of extra keyword arguments to - pass to the model. This can be used for conditioning. - :return: a dict containing the following keys: - - 'sample': a random sample from the model. - - 'pred_xstart': a prediction of x_0. - """ - out = self.p_mean_variance( - model, - x, - t, - clip_denoised=clip_denoised, - denoised_fn=denoised_fn, - model_kwargs=model_kwargs, - ) - noise = th.randn_like(x) - nonzero_mask = (t != 0).float().view(-1, *([1] * (len(x.shape) - 1))) # no noise when t == 0 - if cond_fn is not None: - out["mean"] = self.condition_mean(cond_fn, out, x, t, model_kwargs=model_kwargs) - sample = out["mean"] + nonzero_mask * th.exp(0.5 * out["log_variance"]) * noise - return {"sample": sample, "pred_xstart": out["pred_xstart"]} - - def p_sample_loop( - self, - model, - shape, - noise=None, - clip_denoised=True, - denoised_fn=None, - cond_fn=None, - model_kwargs=None, - device=None, - progress=False, - ): - """ - Generate samples from the model. - - :param model: the model module. - :param shape: the shape of the samples, (N, C, H, W). - :param noise: if specified, the noise from the encoder to sample. - Should be of the same shape as `shape`. - :param clip_denoised: if True, clip x_start predictions to [-1, 1]. - :param denoised_fn: if not None, a function which applies to the - x_start prediction before it is used to sample. - :param cond_fn: if not None, this is a gradient function that acts - similarly to the model. - :param model_kwargs: if not None, a dict of extra keyword arguments to - pass to the model. This can be used for conditioning. - :param device: if specified, the device to create the samples on. - If not specified, use a model parameter's device. - :param progress: if True, show a tqdm progress bar. - :return: a non-differentiable batch of samples. - """ - final = None - for sample in self.p_sample_loop_progressive( - model, - shape, - noise=noise, - clip_denoised=clip_denoised, - denoised_fn=denoised_fn, - cond_fn=cond_fn, - model_kwargs=model_kwargs, - device=device, - progress=progress, - ): - final = sample - return final["sample"] - - def p_sample_loop_progressive( - self, - model, - shape, - noise=None, - clip_denoised=True, - denoised_fn=None, - cond_fn=None, - model_kwargs=None, - device=None, - progress=False, - ): - """ - Generate samples from the model and yield intermediate samples from - each timestep of diffusion. - - Arguments are the same as p_sample_loop(). - Returns a generator over dicts, where each dict is the return value of - p_sample(). - """ - if device is None: - device = next(model.parameters()).device - assert isinstance(shape, (tuple, list)) - if noise is not None: - img = noise - else: - img = th.randn(*shape, device=device) - indices = list(range(self.num_timesteps))[::-1] - - for i in tqdm(indices, disable=not progress): - t = th.tensor([i] * shape[0], device=device) - with th.no_grad(): - out = self.p_sample( - model, - img, - t, - clip_denoised=clip_denoised, - denoised_fn=denoised_fn, - cond_fn=cond_fn, - model_kwargs=model_kwargs, - ) - yield out - img = out["sample"] - - def ddim_sample( - self, - model, - x, - t, - clip_denoised=True, - denoised_fn=None, - cond_fn=None, - model_kwargs=None, - eta=0.0, - ): - """ - Sample x_{t-1} from the model using DDIM. - - Same usage as p_sample(). - """ - out = self.p_mean_variance( - model, - x, - t, - clip_denoised=clip_denoised, - denoised_fn=denoised_fn, - model_kwargs=model_kwargs, - ) - if cond_fn is not None: - out = self.condition_score(cond_fn, out, x, t, model_kwargs=model_kwargs) - - # Usually our model outputs epsilon, but we re-derive it - # in case we used x_start or x_prev prediction. - eps = self._predict_eps_from_xstart(x, t, out["pred_xstart"]) - - alpha_bar = _extract_into_tensor(self.alphas_cumprod, t, x.shape) - alpha_bar_prev = _extract_into_tensor(self.alphas_cumprod_prev, t, x.shape) - sigma = eta * th.sqrt((1 - alpha_bar_prev) / (1 - alpha_bar)) * th.sqrt(1 - alpha_bar / alpha_bar_prev) - # Equation 12. - noise = th.randn_like(x) - mean_pred = out["pred_xstart"] * th.sqrt(alpha_bar_prev) + th.sqrt(1 - alpha_bar_prev - sigma**2) * eps - nonzero_mask = (t != 0).float().view(-1, *([1] * (len(x.shape) - 1))) # no noise when t == 0 - sample = mean_pred + nonzero_mask * sigma * noise - return {"sample": sample, "pred_xstart": out["pred_xstart"]} - - def ddim_reverse_sample( - self, - model, - x, - t, - clip_denoised=True, - denoised_fn=None, - model_kwargs=None, - eta=0.0, - ): - """ - Sample x_{t+1} from the model using DDIM reverse ODE. - """ - assert eta == 0.0, "Reverse ODE only for deterministic path" - out = self.p_mean_variance( - model, - x, - t, - clip_denoised=clip_denoised, - denoised_fn=denoised_fn, - model_kwargs=model_kwargs, - ) - # Usually our model outputs epsilon, but we re-derive it - # in case we used x_start or x_prev prediction. - eps = ( - _extract_into_tensor(self.sqrt_recip_alphas_cumprod, t, x.shape) * x - out["pred_xstart"] - ) / _extract_into_tensor(self.sqrt_recipm1_alphas_cumprod, t, x.shape) - alpha_bar_next = _extract_into_tensor(self.alphas_cumprod_next, t, x.shape) - - # Equation 12. reversed - mean_pred = out["pred_xstart"] * th.sqrt(alpha_bar_next) + th.sqrt(1 - alpha_bar_next) * eps - - return {"sample": mean_pred, "pred_xstart": out["pred_xstart"]} - - def ddim_sample_loop( - self, - model, - shape, - noise=None, - clip_denoised=True, - denoised_fn=None, - cond_fn=None, - model_kwargs=None, - device=None, - progress=False, - eta=0.0, - ): - """ - Generate samples from the model using DDIM. - - Same usage as p_sample_loop(). - """ - final = None - for sample in self.ddim_sample_loop_progressive( - model, - shape, - noise=noise, - clip_denoised=clip_denoised, - denoised_fn=denoised_fn, - cond_fn=cond_fn, - model_kwargs=model_kwargs, - device=device, - progress=progress, - eta=eta, - ): - final = sample - return final["sample"] - - def ddim_sample_loop_progressive( - self, - model, - shape, - noise=None, - clip_denoised=True, - denoised_fn=None, - cond_fn=None, - model_kwargs=None, - device=None, - progress=False, - eta=0.0, - ): - """ - Use DDIM to sample from the model and yield intermediate samples from - each timestep of DDIM. - - Same usage as p_sample_loop_progressive(). - """ - if device is None: - device = next(model.parameters()).device - assert isinstance(shape, (tuple, list)) - if noise is not None: - img = noise - else: - img = th.randn(*shape, device=device) - indices = list(range(self.num_timesteps))[::-1] - - if progress: - # Lazy import so that we don't depend on tqdm. - from tqdm.auto import tqdm - - indices = tqdm(indices, disable=not progress) - - for i in indices: - t = th.tensor([i] * shape[0], device=device) - with th.no_grad(): - out = self.ddim_sample( - model, - img, - t, - clip_denoised=clip_denoised, - denoised_fn=denoised_fn, - cond_fn=cond_fn, - model_kwargs=model_kwargs, - eta=eta, - ) - yield out - img = out["sample"] - - def _vb_terms_bpd(self, model, x_start, x_t, t, clip_denoised=True, model_kwargs=None): - """ - Get a term for the variational lower-bound. - - The resulting units are bits (rather than nats, as one might expect). - This allows for comparison to other papers. - - :return: a dict with the following keys: - - 'output': a shape [N] tensor of NLLs or KLs. - - 'pred_xstart': the x_0 predictions. - """ - true_mean, _, true_log_variance_clipped = self.q_posterior_mean_variance(x_start=x_start, x_t=x_t, t=t) - out = self.p_mean_variance(model, x_t, t, clip_denoised=clip_denoised, model_kwargs=model_kwargs) - kl = normal_kl(true_mean, true_log_variance_clipped, out["mean"], out["log_variance"]) - kl = mean_flat(kl) / np.log(2.0) - - decoder_nll = -discretized_gaussian_log_likelihood( - x_start, means=out["mean"], log_scales=0.5 * out["log_variance"] - ) - assert decoder_nll.shape == x_start.shape - decoder_nll = mean_flat(decoder_nll) / np.log(2.0) - - # At the first timestep return the decoder NLL, - # otherwise return KL(q(x_{t-1}|x_t,x_0) || p(x_{t-1}|x_t)) - output = th.where((t == 0), decoder_nll, kl) - return {"output": output, "pred_xstart": out["pred_xstart"]} - - def training_losses(self, model, x_start, t, model_kwargs=None, noise=None): - """ - Compute training losses for a single timestep. - - :param model: the model to evaluate loss on. - :param x_start: the [N x C x ...] tensor of inputs. - :param t: a batch of timestep indices. - :param model_kwargs: if not None, a dict of extra keyword arguments to - pass to the model. This can be used for conditioning. - :param noise: if specified, the specific Gaussian noise to try to remove. - :return: a dict with the key "loss" containing a tensor of shape [N]. - Some mean or variance settings may also have other keys. - """ - if model_kwargs is None: - model_kwargs = {} - if noise is None: - noise = th.randn_like(x_start) - x_t = self.q_sample(x_start, t, noise=noise) - - terms = {} - - if self.loss_type == LossType.KL or self.loss_type == LossType.RESCALED_KL: - # TODO: support multiple model outputs for this mode. - terms["loss"] = self._vb_terms_bpd( - model=model, - x_start=x_start, - x_t=x_t, - t=t, - clip_denoised=False, - model_kwargs=model_kwargs, - )["output"] - if self.loss_type == LossType.RESCALED_KL: - terms["loss"] *= self.num_timesteps - elif self.loss_type == LossType.MSE or self.loss_type == LossType.RESCALED_MSE: - model_outputs = model(x_t, self._scale_timesteps(t), **model_kwargs) - if isinstance(model_outputs, tuple): - model_output = model_outputs[0] - terms["extra_outputs"] = model_outputs[1:] - else: - model_output = model_outputs - - if self.model_var_type in [ - ModelVarType.LEARNED, - ModelVarType.LEARNED_RANGE, - ]: - B, C = x_t.shape[:2] - assert model_output.shape == (B, C * 2, *x_t.shape[2:]) - model_output, model_var_values = th.split(model_output, C, dim=1) - # Learn the variance using the variational bound, but don't let - # it affect our mean prediction. - frozen_out = th.cat([model_output.detach(), model_var_values], dim=1) - terms["vb"] = self._vb_terms_bpd( - model=lambda *args, r=frozen_out: r, - x_start=x_start, - x_t=x_t, - t=t, - clip_denoised=False, - )["output"] - if self.loss_type == LossType.RESCALED_MSE: - # Divide by 1000 for equivalence with initial implementation. - # Without a factor of 1/1000, the VB term hurts the MSE term. - terms["vb"] *= self.num_timesteps / 1000.0 - - if self.model_mean_type == ModelMeanType.PREVIOUS_X: - target = self.q_posterior_mean_variance(x_start=x_start, x_t=x_t, t=t)[0] - x_start_pred = torch.zeros(x_start) # Not supported. - elif self.model_mean_type == ModelMeanType.START_X: - target = x_start - x_start_pred = model_output - elif self.model_mean_type == ModelMeanType.EPSILON: - target = noise - x_start_pred = self._predict_xstart_from_eps(x_t, t, model_output) - else: - raise NotImplementedError(self.model_mean_type) - assert model_output.shape == target.shape == x_start.shape - terms["mse"] = mean_flat((target - model_output) ** 2) - terms["x_start_predicted"] = x_start_pred - if "vb" in terms: - terms["loss"] = terms["mse"] + terms["vb"] - else: - terms["loss"] = terms["mse"] - else: - raise NotImplementedError(self.loss_type) - - return terms - - def autoregressive_training_losses( - self, model, x_start, t, model_output_keys, gd_out_key, model_kwargs=None, noise=None - ): - """ - Compute training losses for a single timestep. - - :param model: the model to evaluate loss on. - :param x_start: the [N x C x ...] tensor of inputs. - :param t: a batch of timestep indices. - :param model_kwargs: if not None, a dict of extra keyword arguments to - pass to the model. This can be used for conditioning. - :param noise: if specified, the specific Gaussian noise to try to remove. - :return: a dict with the key "loss" containing a tensor of shape [N]. - Some mean or variance settings may also have other keys. - """ - if model_kwargs is None: - model_kwargs = {} - if noise is None: - noise = th.randn_like(x_start) - x_t = self.q_sample(x_start, t, noise=noise) - terms = {} - if self.loss_type == LossType.KL or self.loss_type == LossType.RESCALED_KL: - assert False # not currently supported for this type of diffusion. - elif self.loss_type == LossType.MSE or self.loss_type == LossType.RESCALED_MSE: - model_outputs = model(x_t, x_start, self._scale_timesteps(t), **model_kwargs) - terms.update({k: o for k, o in zip(model_output_keys, model_outputs)}) - model_output = terms[gd_out_key] - if self.model_var_type in [ - ModelVarType.LEARNED, - ModelVarType.LEARNED_RANGE, - ]: - B, C = x_t.shape[:2] - assert model_output.shape == (B, C, 2, *x_t.shape[2:]) - model_output, model_var_values = model_output[:, :, 0], model_output[:, :, 1] - # Learn the variance using the variational bound, but don't let - # it affect our mean prediction. - frozen_out = th.cat([model_output.detach(), model_var_values], dim=1) - terms["vb"] = self._vb_terms_bpd( - model=lambda *args, r=frozen_out: r, - x_start=x_start, - x_t=x_t, - t=t, - clip_denoised=False, - )["output"] - if self.loss_type == LossType.RESCALED_MSE: - # Divide by 1000 for equivalence with initial implementation. - # Without a factor of 1/1000, the VB term hurts the MSE term. - terms["vb"] *= self.num_timesteps / 1000.0 - - if self.model_mean_type == ModelMeanType.PREVIOUS_X: - target = self.q_posterior_mean_variance(x_start=x_start, x_t=x_t, t=t)[0] - x_start_pred = torch.zeros(x_start) # Not supported. - elif self.model_mean_type == ModelMeanType.START_X: - target = x_start - x_start_pred = model_output - elif self.model_mean_type == ModelMeanType.EPSILON: - target = noise - x_start_pred = self._predict_xstart_from_eps(x_t, t, model_output) - else: - raise NotImplementedError(self.model_mean_type) - assert model_output.shape == target.shape == x_start.shape - terms["mse"] = mean_flat((target - model_output) ** 2) - terms["x_start_predicted"] = x_start_pred - if "vb" in terms: - terms["loss"] = terms["mse"] + terms["vb"] - else: - terms["loss"] = terms["mse"] - else: - raise NotImplementedError(self.loss_type) - - return terms - - def _prior_bpd(self, x_start): - """ - Get the prior KL term for the variational lower-bound, measured in - bits-per-dim. - - This term can't be optimized, as it only depends on the encoder. - - :param x_start: the [N x C x ...] tensor of inputs. - :return: a batch of [N] KL values (in bits), one per batch element. - """ - batch_size = x_start.shape[0] - t = th.tensor([self.num_timesteps - 1] * batch_size, device=x_start.device) - qt_mean, _, qt_log_variance = self.q_mean_variance(x_start, t) - kl_prior = normal_kl(mean1=qt_mean, logvar1=qt_log_variance, mean2=0.0, logvar2=0.0) - return mean_flat(kl_prior) / np.log(2.0) - - def calc_bpd_loop(self, model, x_start, clip_denoised=True, model_kwargs=None): - """ - Compute the entire variational lower-bound, measured in bits-per-dim, - as well as other related quantities. - - :param model: the model to evaluate loss on. - :param x_start: the [N x C x ...] tensor of inputs. - :param clip_denoised: if True, clip denoised samples. - :param model_kwargs: if not None, a dict of extra keyword arguments to - pass to the model. This can be used for conditioning. - - :return: a dict containing the following keys: - - total_bpd: the total variational lower-bound, per batch element. - - prior_bpd: the prior term in the lower-bound. - - vb: an [N x T] tensor of terms in the lower-bound. - - xstart_mse: an [N x T] tensor of x_0 MSEs for each timestep. - - mse: an [N x T] tensor of epsilon MSEs for each timestep. - """ - device = x_start.device - batch_size = x_start.shape[0] - - vb = [] - xstart_mse = [] - mse = [] - for t in list(range(self.num_timesteps))[::-1]: - t_batch = th.tensor([t] * batch_size, device=device) - noise = th.randn_like(x_start) - x_t = self.q_sample(x_start=x_start, t=t_batch, noise=noise) - # Calculate VLB term at the current timestep - with th.no_grad(): - out = self._vb_terms_bpd( - model, - x_start=x_start, - x_t=x_t, - t=t_batch, - clip_denoised=clip_denoised, - model_kwargs=model_kwargs, - ) - vb.append(out["output"]) - xstart_mse.append(mean_flat((out["pred_xstart"] - x_start) ** 2)) - eps = self._predict_eps_from_xstart(x_t, t_batch, out["pred_xstart"]) - mse.append(mean_flat((eps - noise) ** 2)) - - vb = th.stack(vb, dim=1) - xstart_mse = th.stack(xstart_mse, dim=1) - mse = th.stack(mse, dim=1) - - prior_bpd = self._prior_bpd(x_start) - total_bpd = vb.sum(dim=1) + prior_bpd - return { - "total_bpd": total_bpd, - "prior_bpd": prior_bpd, - "vb": vb, - "xstart_mse": xstart_mse, - "mse": mse, - } - - -class SpacedDiffusion(GaussianDiffusion): - """ - A diffusion process which can skip steps in a base diffusion process. - - :param use_timesteps: a collection (sequence or set) of timesteps from the - original diffusion process to retain. - :param kwargs: the kwargs to create the base diffusion process. - """ - - def __init__(self, use_timesteps, **kwargs): - self.use_timesteps = set(use_timesteps) - self.timestep_map = [] - self.original_num_steps = len(kwargs["betas"]) - base_diffusion = GaussianDiffusion(**kwargs) # pylint: disable=missing-kwoa - last_alpha_cumprod = 1.0 - new_betas = [] - for i, alpha_cumprod in enumerate(base_diffusion.alphas_cumprod): - if i in self.use_timesteps: - new_betas.append(1 - alpha_cumprod / last_alpha_cumprod) - last_alpha_cumprod = alpha_cumprod - self.timestep_map.append(i) - kwargs["betas"] = np.array(new_betas) - super().__init__(**kwargs) - - def p_mean_variance(self, model, *args, **kwargs): # pylint: disable=signature-differs - return super().p_mean_variance(self._wrap_model(model), *args, **kwargs) - - def training_losses(self, model, *args, **kwargs): # pylint: disable=signature-differs - return super().training_losses(self._wrap_model(model), *args, **kwargs) - - def autoregressive_training_losses(self, model, *args, **kwargs): # pylint: disable=signature-differs - return super().autoregressive_training_losses(self._wrap_model(model, True), *args, **kwargs) - - def condition_mean(self, cond_fn, *args, **kwargs): - return super().condition_mean(self._wrap_model(cond_fn), *args, **kwargs) - - def condition_score(self, cond_fn, *args, **kwargs): - return super().condition_score(self._wrap_model(cond_fn), *args, **kwargs) - - def _wrap_model(self, model, autoregressive=False): - if isinstance(model, _WrappedModel) or isinstance(model, _WrappedAutoregressiveModel): - return model - mod = _WrappedAutoregressiveModel if autoregressive else _WrappedModel - return mod(model, self.timestep_map, self.rescale_timesteps, self.original_num_steps) - - def _scale_timesteps(self, t): - # Scaling is done by the wrapped model. - return t - - -def space_timesteps(num_timesteps, section_counts): - """ - Create a list of timesteps to use from an original diffusion process, - given the number of timesteps we want to take from equally-sized portions - of the original process. - - For example, if there's 300 timesteps and the section counts are [10,15,20] - then the first 100 timesteps are strided to be 10 timesteps, the second 100 - are strided to be 15 timesteps, and the final 100 are strided to be 20. - - If the stride is a string starting with "ddim", then the fixed striding - from the DDIM paper is used, and only one section is allowed. - - :param num_timesteps: the number of diffusion steps in the original - process to divide up. - :param section_counts: either a list of numbers, or a string containing - comma-separated numbers, indicating the step count - per section. As a special case, use "ddimN" where N - is a number of steps to use the striding from the - DDIM paper. - :return: a set of diffusion steps from the original process to use. - """ - if isinstance(section_counts, str): - if section_counts.startswith("ddim"): - desired_count = int(section_counts[len("ddim") :]) - for i in range(1, num_timesteps): - if len(range(0, num_timesteps, i)) == desired_count: - return set(range(0, num_timesteps, i)) - raise ValueError(f"cannot create exactly {num_timesteps} steps with an integer stride") - section_counts = [int(x) for x in section_counts.split(",")] - size_per = num_timesteps // len(section_counts) - extra = num_timesteps % len(section_counts) - start_idx = 0 - all_steps = [] - for i, section_count in enumerate(section_counts): - size = size_per + (1 if i < extra else 0) - if size < section_count: - raise ValueError(f"cannot divide section of {size} steps into {section_count}") - if section_count <= 1: - frac_stride = 1 - else: - frac_stride = (size - 1) / (section_count - 1) - cur_idx = 0.0 - taken_steps = [] - for _ in range(section_count): - taken_steps.append(start_idx + round(cur_idx)) - cur_idx += frac_stride - all_steps += taken_steps - start_idx += size - return set(all_steps) - - -class _WrappedModel: - def __init__(self, model, timestep_map, rescale_timesteps, original_num_steps): - self.model = model - self.timestep_map = timestep_map - self.rescale_timesteps = rescale_timesteps - self.original_num_steps = original_num_steps - - def __call__(self, x, ts, **kwargs): - map_tensor = th.tensor(self.timestep_map, device=ts.device, dtype=ts.dtype) - new_ts = map_tensor[ts] - if self.rescale_timesteps: - new_ts = new_ts.float() * (1000.0 / self.original_num_steps) - model_output = self.model(x, new_ts, **kwargs) - return model_output - - -class _WrappedAutoregressiveModel: - def __init__(self, model, timestep_map, rescale_timesteps, original_num_steps): - self.model = model - self.timestep_map = timestep_map - self.rescale_timesteps = rescale_timesteps - self.original_num_steps = original_num_steps - - def __call__(self, x, x0, ts, **kwargs): - map_tensor = th.tensor(self.timestep_map, device=ts.device, dtype=ts.dtype) - new_ts = map_tensor[ts] - if self.rescale_timesteps: - new_ts = new_ts.float() * (1000.0 / self.original_num_steps) - return self.model(x, x0, new_ts, **kwargs) - - -def _extract_into_tensor(arr, timesteps, broadcast_shape): - """ - Extract values from a 1-D numpy array for a batch of indices. - - :param arr: the 1-D numpy array. - :param timesteps: a tensor of indices into the array to extract. - :param broadcast_shape: a larger shape of K dimensions with the batch - dimension equal to the length of timesteps. - :return: a tensor of shape [batch_size, 1, ...] where the shape has K dims. - """ - res = th.from_numpy(arr).to(device=timesteps.device)[timesteps].float() - while len(res.shape) < len(broadcast_shape): - res = res[..., None] - return res.expand(broadcast_shape) diff --git a/spaces/artificialguybr/video-dubbing/TTS/TTS/vocoder/models/wavegrad.py b/spaces/artificialguybr/video-dubbing/TTS/TTS/vocoder/models/wavegrad.py deleted file mode 100644 index a0f9221a8f64fb9953527a6b859d114aabb702d2..0000000000000000000000000000000000000000 --- a/spaces/artificialguybr/video-dubbing/TTS/TTS/vocoder/models/wavegrad.py +++ /dev/null @@ -1,344 +0,0 @@ -from dataclasses import dataclass, field -from typing import Dict, List, Tuple - -import numpy as np -import torch -from coqpit import Coqpit -from torch import nn -from torch.nn.utils import weight_norm -from torch.utils.data import DataLoader -from torch.utils.data.distributed import DistributedSampler -from trainer.trainer_utils import get_optimizer, get_scheduler - -from TTS.utils.io import load_fsspec -from TTS.vocoder.datasets import WaveGradDataset -from TTS.vocoder.layers.wavegrad import Conv1d, DBlock, FiLM, UBlock -from TTS.vocoder.models.base_vocoder import BaseVocoder -from TTS.vocoder.utils.generic_utils import plot_results - - -@dataclass -class WavegradArgs(Coqpit): - in_channels: int = 80 - out_channels: int = 1 - use_weight_norm: bool = False - y_conv_channels: int = 32 - x_conv_channels: int = 768 - dblock_out_channels: List[int] = field(default_factory=lambda: [128, 128, 256, 512]) - ublock_out_channels: List[int] = field(default_factory=lambda: [512, 512, 256, 128, 128]) - upsample_factors: List[int] = field(default_factory=lambda: [4, 4, 4, 2, 2]) - upsample_dilations: List[List[int]] = field( - default_factory=lambda: [[1, 2, 1, 2], [1, 2, 1, 2], [1, 2, 4, 8], [1, 2, 4, 8], [1, 2, 4, 8]] - ) - - -class Wavegrad(BaseVocoder): - """🐸 🌊 WaveGrad 🌊 model. - Paper - https://arxiv.org/abs/2009.00713 - - Examples: - Initializing the model. - - >>> from TTS.vocoder.configs import WavegradConfig - >>> config = WavegradConfig() - >>> model = Wavegrad(config) - - Paper Abstract: - This paper introduces WaveGrad, a conditional model for waveform generation which estimates gradients of the - data density. The model is built on prior work on score matching and diffusion probabilistic models. It starts - from a Gaussian white noise signal and iteratively refines the signal via a gradient-based sampler conditioned - on the mel-spectrogram. WaveGrad offers a natural way to trade inference speed for sample quality by adjusting - the number of refinement steps, and bridges the gap between non-autoregressive and autoregressive models in - terms of audio quality. We find that it can generate high fidelity audio samples using as few as six iterations. - Experiments reveal WaveGrad to generate high fidelity audio, outperforming adversarial non-autoregressive - baselines and matching a strong likelihood-based autoregressive baseline using fewer sequential operations. - Audio samples are available at this https URL. - """ - - # pylint: disable=dangerous-default-value - def __init__(self, config: Coqpit): - super().__init__(config) - self.config = config - self.use_weight_norm = config.model_params.use_weight_norm - self.hop_len = np.prod(config.model_params.upsample_factors) - self.noise_level = None - self.num_steps = None - self.beta = None - self.alpha = None - self.alpha_hat = None - self.c1 = None - self.c2 = None - self.sigma = None - - # dblocks - self.y_conv = Conv1d(1, config.model_params.y_conv_channels, 5, padding=2) - self.dblocks = nn.ModuleList([]) - ic = config.model_params.y_conv_channels - for oc, df in zip(config.model_params.dblock_out_channels, reversed(config.model_params.upsample_factors)): - self.dblocks.append(DBlock(ic, oc, df)) - ic = oc - - # film - self.film = nn.ModuleList([]) - ic = config.model_params.y_conv_channels - for oc in reversed(config.model_params.ublock_out_channels): - self.film.append(FiLM(ic, oc)) - ic = oc - - # ublocksn - self.ublocks = nn.ModuleList([]) - ic = config.model_params.x_conv_channels - for oc, uf, ud in zip( - config.model_params.ublock_out_channels, - config.model_params.upsample_factors, - config.model_params.upsample_dilations, - ): - self.ublocks.append(UBlock(ic, oc, uf, ud)) - ic = oc - - self.x_conv = Conv1d(config.model_params.in_channels, config.model_params.x_conv_channels, 3, padding=1) - self.out_conv = Conv1d(oc, config.model_params.out_channels, 3, padding=1) - - if config.model_params.use_weight_norm: - self.apply_weight_norm() - - def forward(self, x, spectrogram, noise_scale): - shift_and_scale = [] - - x = self.y_conv(x) - shift_and_scale.append(self.film[0](x, noise_scale)) - - for film, layer in zip(self.film[1:], self.dblocks): - x = layer(x) - shift_and_scale.append(film(x, noise_scale)) - - x = self.x_conv(spectrogram) - for layer, (film_shift, film_scale) in zip(self.ublocks, reversed(shift_and_scale)): - x = layer(x, film_shift, film_scale) - x = self.out_conv(x) - return x - - def load_noise_schedule(self, path): - beta = np.load(path, allow_pickle=True).item()["beta"] # pylint: disable=unexpected-keyword-arg - self.compute_noise_level(beta) - - @torch.no_grad() - def inference(self, x, y_n=None): - """ - Shapes: - x: :math:`[B, C , T]` - y_n: :math:`[B, 1, T]` - """ - if y_n is None: - y_n = torch.randn(x.shape[0], 1, self.hop_len * x.shape[-1]) - else: - y_n = torch.FloatTensor(y_n).unsqueeze(0).unsqueeze(0) - y_n = y_n.type_as(x) - sqrt_alpha_hat = self.noise_level.to(x) - for n in range(len(self.alpha) - 1, -1, -1): - y_n = self.c1[n] * (y_n - self.c2[n] * self.forward(y_n, x, sqrt_alpha_hat[n].repeat(x.shape[0]))) - if n > 0: - z = torch.randn_like(y_n) - y_n += self.sigma[n - 1] * z - y_n.clamp_(-1.0, 1.0) - return y_n - - def compute_y_n(self, y_0): - """Compute noisy audio based on noise schedule""" - self.noise_level = self.noise_level.to(y_0) - if len(y_0.shape) == 3: - y_0 = y_0.squeeze(1) - s = torch.randint(0, self.num_steps - 1, [y_0.shape[0]]) - l_a, l_b = self.noise_level[s], self.noise_level[s + 1] - noise_scale = l_a + torch.rand(y_0.shape[0]).to(y_0) * (l_b - l_a) - noise_scale = noise_scale.unsqueeze(1) - noise = torch.randn_like(y_0) - noisy_audio = noise_scale * y_0 + (1.0 - noise_scale**2) ** 0.5 * noise - return noise.unsqueeze(1), noisy_audio.unsqueeze(1), noise_scale[:, 0] - - def compute_noise_level(self, beta): - """Compute noise schedule parameters""" - self.num_steps = len(beta) - alpha = 1 - beta - alpha_hat = np.cumprod(alpha) - noise_level = np.concatenate([[1.0], alpha_hat**0.5], axis=0) - noise_level = alpha_hat**0.5 - - # pylint: disable=not-callable - self.beta = torch.tensor(beta.astype(np.float32)) - self.alpha = torch.tensor(alpha.astype(np.float32)) - self.alpha_hat = torch.tensor(alpha_hat.astype(np.float32)) - self.noise_level = torch.tensor(noise_level.astype(np.float32)) - - self.c1 = 1 / self.alpha**0.5 - self.c2 = (1 - self.alpha) / (1 - self.alpha_hat) ** 0.5 - self.sigma = ((1.0 - self.alpha_hat[:-1]) / (1.0 - self.alpha_hat[1:]) * self.beta[1:]) ** 0.5 - - def remove_weight_norm(self): - for _, layer in enumerate(self.dblocks): - if len(layer.state_dict()) != 0: - try: - nn.utils.remove_weight_norm(layer) - except ValueError: - layer.remove_weight_norm() - - for _, layer in enumerate(self.film): - if len(layer.state_dict()) != 0: - try: - nn.utils.remove_weight_norm(layer) - except ValueError: - layer.remove_weight_norm() - - for _, layer in enumerate(self.ublocks): - if len(layer.state_dict()) != 0: - try: - nn.utils.remove_weight_norm(layer) - except ValueError: - layer.remove_weight_norm() - - nn.utils.remove_weight_norm(self.x_conv) - nn.utils.remove_weight_norm(self.out_conv) - nn.utils.remove_weight_norm(self.y_conv) - - def apply_weight_norm(self): - for _, layer in enumerate(self.dblocks): - if len(layer.state_dict()) != 0: - layer.apply_weight_norm() - - for _, layer in enumerate(self.film): - if len(layer.state_dict()) != 0: - layer.apply_weight_norm() - - for _, layer in enumerate(self.ublocks): - if len(layer.state_dict()) != 0: - layer.apply_weight_norm() - - self.x_conv = weight_norm(self.x_conv) - self.out_conv = weight_norm(self.out_conv) - self.y_conv = weight_norm(self.y_conv) - - def load_checkpoint( - self, config, checkpoint_path, eval=False, cache=False - ): # pylint: disable=unused-argument, redefined-builtin - state = load_fsspec(checkpoint_path, map_location=torch.device("cpu"), cache=cache) - self.load_state_dict(state["model"]) - if eval: - self.eval() - assert not self.training - if self.config.model_params.use_weight_norm: - self.remove_weight_norm() - betas = np.linspace( - config["test_noise_schedule"]["min_val"], - config["test_noise_schedule"]["max_val"], - config["test_noise_schedule"]["num_steps"], - ) - self.compute_noise_level(betas) - else: - betas = np.linspace( - config["train_noise_schedule"]["min_val"], - config["train_noise_schedule"]["max_val"], - config["train_noise_schedule"]["num_steps"], - ) - self.compute_noise_level(betas) - - def train_step(self, batch: Dict, criterion: Dict) -> Tuple[Dict, Dict]: - # format data - x = batch["input"] - y = batch["waveform"] - - # set noise scale - noise, x_noisy, noise_scale = self.compute_y_n(y) - - # forward pass - noise_hat = self.forward(x_noisy, x, noise_scale) - - # compute losses - loss = criterion(noise, noise_hat) - return {"model_output": noise_hat}, {"loss": loss} - - def train_log( # pylint: disable=no-self-use - self, batch: Dict, outputs: Dict, logger: "Logger", assets: Dict, steps: int # pylint: disable=unused-argument - ) -> Tuple[Dict, np.ndarray]: - pass - - @torch.no_grad() - def eval_step(self, batch: Dict, criterion: nn.Module) -> Tuple[Dict, Dict]: - return self.train_step(batch, criterion) - - def eval_log( # pylint: disable=no-self-use - self, batch: Dict, outputs: Dict, logger: "Logger", assets: Dict, steps: int # pylint: disable=unused-argument - ) -> None: - pass - - def test(self, assets: Dict, test_loader: "DataLoader", outputs=None): # pylint: disable=unused-argument - # setup noise schedule and inference - ap = assets["audio_processor"] - noise_schedule = self.config["test_noise_schedule"] - betas = np.linspace(noise_schedule["min_val"], noise_schedule["max_val"], noise_schedule["num_steps"]) - self.compute_noise_level(betas) - samples = test_loader.dataset.load_test_samples(1) - for sample in samples: - x = sample[0] - x = x[None, :, :].to(next(self.parameters()).device) - y = sample[1] - y = y[None, :] - # compute voice - y_pred = self.inference(x) - # compute spectrograms - figures = plot_results(y_pred, y, ap, "test") - # Sample audio - sample_voice = y_pred[0].squeeze(0).detach().cpu().numpy() - return figures, {"test/audio": sample_voice} - - def get_optimizer(self): - return get_optimizer(self.config.optimizer, self.config.optimizer_params, self.config.lr, self) - - def get_scheduler(self, optimizer): - return get_scheduler(self.config.lr_scheduler, self.config.lr_scheduler_params, optimizer) - - @staticmethod - def get_criterion(): - return torch.nn.L1Loss() - - @staticmethod - def format_batch(batch: Dict) -> Dict: - # return a whole audio segment - m, y = batch[0], batch[1] - y = y.unsqueeze(1) - return {"input": m, "waveform": y} - - def get_data_loader(self, config: Coqpit, assets: Dict, is_eval: True, samples: List, verbose: bool, num_gpus: int): - ap = assets["audio_processor"] - dataset = WaveGradDataset( - ap=ap, - items=samples, - seq_len=self.config.seq_len, - hop_len=ap.hop_length, - pad_short=self.config.pad_short, - conv_pad=self.config.conv_pad, - is_training=not is_eval, - return_segments=True, - use_noise_augment=False, - use_cache=config.use_cache, - verbose=verbose, - ) - sampler = DistributedSampler(dataset) if num_gpus > 1 else None - loader = DataLoader( - dataset, - batch_size=self.config.batch_size, - shuffle=num_gpus <= 1, - drop_last=False, - sampler=sampler, - num_workers=self.config.num_eval_loader_workers if is_eval else self.config.num_loader_workers, - pin_memory=False, - ) - return loader - - def on_epoch_start(self, trainer): # pylint: disable=unused-argument - noise_schedule = self.config["train_noise_schedule"] - betas = np.linspace(noise_schedule["min_val"], noise_schedule["max_val"], noise_schedule["num_steps"]) - self.compute_noise_level(betas) - - @staticmethod - def init_from_config(config: "WavegradConfig"): - return Wavegrad(config) diff --git a/spaces/artificialguybr/video-dubbing/TTS/recipes/vctk/glow_tts/train_glow_tts.py b/spaces/artificialguybr/video-dubbing/TTS/recipes/vctk/glow_tts/train_glow_tts.py deleted file mode 100644 index ae26029b9151217164fa6c4d5c592fc26fb44ee2..0000000000000000000000000000000000000000 --- a/spaces/artificialguybr/video-dubbing/TTS/recipes/vctk/glow_tts/train_glow_tts.py +++ /dev/null @@ -1,96 +0,0 @@ -import os - -from trainer import Trainer, TrainerArgs - -from TTS.config.shared_configs import BaseAudioConfig -from TTS.tts.configs.glow_tts_config import GlowTTSConfig -from TTS.tts.configs.shared_configs import BaseDatasetConfig -from TTS.tts.datasets import load_tts_samples -from TTS.tts.models.glow_tts import GlowTTS -from TTS.tts.utils.speakers import SpeakerManager -from TTS.tts.utils.text.tokenizer import TTSTokenizer -from TTS.utils.audio import AudioProcessor - -# set experiment paths -output_path = os.path.dirname(os.path.abspath(__file__)) -dataset_path = os.path.join(output_path, "../VCTK/") - -# download the dataset if not downloaded -if not os.path.exists(dataset_path): - from TTS.utils.downloaders import download_vctk - - download_vctk(dataset_path) - -# define dataset config -dataset_config = BaseDatasetConfig(formatter="vctk", meta_file_train="", path=dataset_path) - -# define audio config -# ❗ resample the dataset externally using `TTS/bin/resample.py` and set `resample=False` for faster training -audio_config = BaseAudioConfig(sample_rate=22050, resample=True, do_trim_silence=True, trim_db=23.0) - -# define model config -config = GlowTTSConfig( - batch_size=64, - eval_batch_size=16, - num_loader_workers=4, - num_eval_loader_workers=4, - precompute_num_workers=4, - run_eval=True, - test_delay_epochs=-1, - epochs=1000, - text_cleaner="phoneme_cleaners", - use_phonemes=True, - phoneme_language="en-us", - phoneme_cache_path=os.path.join(output_path, "phoneme_cache"), - print_step=25, - print_eval=False, - mixed_precision=True, - output_path=output_path, - datasets=[dataset_config], - use_speaker_embedding=True, - min_text_len=0, - max_text_len=500, - min_audio_len=0, - max_audio_len=500000, -) - -# INITIALIZE THE AUDIO PROCESSOR -# Audio processor is used for feature extraction and audio I/O. -# It mainly serves to the dataloader and the training loggers. -ap = AudioProcessor.init_from_config(config) - -# INITIALIZE THE TOKENIZER -# Tokenizer is used to convert text to sequences of token IDs. -# If characters are not defined in the config, default characters are passed to the config -tokenizer, config = TTSTokenizer.init_from_config(config) - -# LOAD DATA SAMPLES -# Each sample is a list of ```[text, audio_file_path, speaker_name]``` -# You can define your custom sample loader returning the list of samples. -# Or define your custom formatter and pass it to the `load_tts_samples`. -# Check `TTS.tts.datasets.load_tts_samples` for more details. -train_samples, eval_samples = load_tts_samples( - dataset_config, - eval_split=True, - eval_split_max_size=config.eval_split_max_size, - eval_split_size=config.eval_split_size, -) - -# init speaker manager for multi-speaker training -# it maps speaker-id to speaker-name in the model and data-loader -speaker_manager = SpeakerManager() -speaker_manager.set_ids_from_data(train_samples + eval_samples, parse_key="speaker_name") -config.num_speakers = speaker_manager.num_speakers - -# init model -model = GlowTTS(config, ap, tokenizer, speaker_manager=speaker_manager) - -# INITIALIZE THE TRAINER -# Trainer provides a generic API to train all the 🐸TTS models with all its perks like mixed-precision training, -# distributed training, etc. -trainer = Trainer( - TrainerArgs(), config, output_path, model=model, train_samples=train_samples, eval_samples=eval_samples -) - -# AND... 3,2,1... 🚀 -trainer.fit() diff --git a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/PIL/DdsImagePlugin.py b/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/PIL/DdsImagePlugin.py deleted file mode 100644 index eea6e31534ce17024055f4e5074e90f02e39e71b..0000000000000000000000000000000000000000 --- a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/PIL/DdsImagePlugin.py +++ /dev/null @@ -1,267 +0,0 @@ -""" -A Pillow loader for .dds files (S3TC-compressed aka DXTC) -Jerome Leclanche - -Documentation: - https://web.archive.org/web/20170802060935/http://oss.sgi.com/projects/ogl-sample/registry/EXT/texture_compression_s3tc.txt - -The contents of this file are hereby released in the public domain (CC0) -Full text of the CC0 license: - https://creativecommons.org/publicdomain/zero/1.0/ -""" - -import struct -from io import BytesIO - -from . import Image, ImageFile -from ._binary import o32le as o32 - -# Magic ("DDS ") -DDS_MAGIC = 0x20534444 - -# DDS flags -DDSD_CAPS = 0x1 -DDSD_HEIGHT = 0x2 -DDSD_WIDTH = 0x4 -DDSD_PITCH = 0x8 -DDSD_PIXELFORMAT = 0x1000 -DDSD_MIPMAPCOUNT = 0x20000 -DDSD_LINEARSIZE = 0x80000 -DDSD_DEPTH = 0x800000 - -# DDS caps -DDSCAPS_COMPLEX = 0x8 -DDSCAPS_TEXTURE = 0x1000 -DDSCAPS_MIPMAP = 0x400000 - -DDSCAPS2_CUBEMAP = 0x200 -DDSCAPS2_CUBEMAP_POSITIVEX = 0x400 -DDSCAPS2_CUBEMAP_NEGATIVEX = 0x800 -DDSCAPS2_CUBEMAP_POSITIVEY = 0x1000 -DDSCAPS2_CUBEMAP_NEGATIVEY = 0x2000 -DDSCAPS2_CUBEMAP_POSITIVEZ = 0x4000 -DDSCAPS2_CUBEMAP_NEGATIVEZ = 0x8000 -DDSCAPS2_VOLUME = 0x200000 - -# Pixel Format -DDPF_ALPHAPIXELS = 0x1 -DDPF_ALPHA = 0x2 -DDPF_FOURCC = 0x4 -DDPF_PALETTEINDEXED8 = 0x20 -DDPF_RGB = 0x40 -DDPF_LUMINANCE = 0x20000 - - -# dds.h - -DDS_FOURCC = DDPF_FOURCC -DDS_RGB = DDPF_RGB -DDS_RGBA = DDPF_RGB | DDPF_ALPHAPIXELS -DDS_LUMINANCE = DDPF_LUMINANCE -DDS_LUMINANCEA = DDPF_LUMINANCE | DDPF_ALPHAPIXELS -DDS_ALPHA = DDPF_ALPHA -DDS_PAL8 = DDPF_PALETTEINDEXED8 - -DDS_HEADER_FLAGS_TEXTURE = DDSD_CAPS | DDSD_HEIGHT | DDSD_WIDTH | DDSD_PIXELFORMAT -DDS_HEADER_FLAGS_MIPMAP = DDSD_MIPMAPCOUNT -DDS_HEADER_FLAGS_VOLUME = DDSD_DEPTH -DDS_HEADER_FLAGS_PITCH = DDSD_PITCH -DDS_HEADER_FLAGS_LINEARSIZE = DDSD_LINEARSIZE - -DDS_HEIGHT = DDSD_HEIGHT -DDS_WIDTH = DDSD_WIDTH - -DDS_SURFACE_FLAGS_TEXTURE = DDSCAPS_TEXTURE -DDS_SURFACE_FLAGS_MIPMAP = DDSCAPS_COMPLEX | DDSCAPS_MIPMAP -DDS_SURFACE_FLAGS_CUBEMAP = DDSCAPS_COMPLEX - -DDS_CUBEMAP_POSITIVEX = DDSCAPS2_CUBEMAP | DDSCAPS2_CUBEMAP_POSITIVEX -DDS_CUBEMAP_NEGATIVEX = DDSCAPS2_CUBEMAP | DDSCAPS2_CUBEMAP_NEGATIVEX -DDS_CUBEMAP_POSITIVEY = DDSCAPS2_CUBEMAP | DDSCAPS2_CUBEMAP_POSITIVEY -DDS_CUBEMAP_NEGATIVEY = DDSCAPS2_CUBEMAP | DDSCAPS2_CUBEMAP_NEGATIVEY -DDS_CUBEMAP_POSITIVEZ = DDSCAPS2_CUBEMAP | DDSCAPS2_CUBEMAP_POSITIVEZ -DDS_CUBEMAP_NEGATIVEZ = DDSCAPS2_CUBEMAP | DDSCAPS2_CUBEMAP_NEGATIVEZ - - -# DXT1 -DXT1_FOURCC = 0x31545844 - -# DXT3 -DXT3_FOURCC = 0x33545844 - -# DXT5 -DXT5_FOURCC = 0x35545844 - - -# dxgiformat.h - -DXGI_FORMAT_R8G8B8A8_TYPELESS = 27 -DXGI_FORMAT_R8G8B8A8_UNORM = 28 -DXGI_FORMAT_R8G8B8A8_UNORM_SRGB = 29 -DXGI_FORMAT_BC5_TYPELESS = 82 -DXGI_FORMAT_BC5_UNORM = 83 -DXGI_FORMAT_BC5_SNORM = 84 -DXGI_FORMAT_BC6H_UF16 = 95 -DXGI_FORMAT_BC6H_SF16 = 96 -DXGI_FORMAT_BC7_TYPELESS = 97 -DXGI_FORMAT_BC7_UNORM = 98 -DXGI_FORMAT_BC7_UNORM_SRGB = 99 - - -class DdsImageFile(ImageFile.ImageFile): - format = "DDS" - format_description = "DirectDraw Surface" - - def _open(self): - if not _accept(self.fp.read(4)): - raise SyntaxError("not a DDS file") - (header_size,) = struct.unpack(" List[int]: - "Get the indexes of the layers where the size of the activation changes." - feature_szs = [size[-1] for size in sizes] - sfs_idxs = list( - np.where(np.array(feature_szs[:-1]) != np.array(feature_szs[1:]))[0] - ) - if feature_szs[0] != feature_szs[1]: - sfs_idxs = [0] + sfs_idxs - return sfs_idxs - - -class CustomPixelShuffle_ICNR(nn.Module): - "Upsample by `scale` from `ni` filters to `nf` (default `ni`), using `nn.PixelShuffle`, `icnr` init, and `weight_norm`." - - def __init__( - self, - ni: int, - nf: int = None, - scale: int = 2, - blur: bool = False, - leaky: float = None, - **kwargs - ): - super().__init__() - nf = ifnone(nf, ni) - self.conv = custom_conv_layer( - ni, nf * (scale ** 2), ks=1, use_activ=False, **kwargs - ) - icnr(self.conv[0].weight) - self.shuf = nn.PixelShuffle(scale) - # Blurring over (h*w) kernel - # "Super-Resolution using Convolutional Neural Networks without Any Checkerboard Artifacts" - # - https://arxiv.org/abs/1806.02658 - self.pad = nn.ReplicationPad2d((1, 0, 1, 0)) - self.blur = nn.AvgPool2d(2, stride=1) - self.relu = relu(True, leaky=leaky) - - def forward(self, x): - x = self.shuf(self.relu(self.conv(x))) - return self.blur(self.pad(x)) if self.blur else x - - -class UnetBlockDeep(nn.Module): - "A quasi-UNet block, using `PixelShuffle_ICNR upsampling`." - - def __init__( - self, - up_in_c: int, - x_in_c: int, - hook: Hook, - final_div: bool = True, - blur: bool = False, - leaky: float = None, - self_attention: bool = False, - nf_factor: float = 1.0, - **kwargs - ): - super().__init__() - self.hook = hook - self.shuf = CustomPixelShuffle_ICNR( - up_in_c, up_in_c // 2, blur=blur, leaky=leaky, **kwargs - ) - self.bn = batchnorm_2d(x_in_c) - ni = up_in_c // 2 + x_in_c - nf = int((ni if final_div else ni // 2) * nf_factor) - self.conv1 = custom_conv_layer(ni, nf, leaky=leaky, **kwargs) - self.conv2 = custom_conv_layer( - nf, nf, leaky=leaky, self_attention=self_attention, **kwargs - ) - self.relu = relu(leaky=leaky) - - def forward(self, up_in: Tensor) -> Tensor: - s = self.hook.stored - up_out = self.shuf(up_in) - ssh = s.shape[-2:] - if ssh != up_out.shape[-2:]: - up_out = F.interpolate(up_out, s.shape[-2:], mode='nearest') - cat_x = self.relu(torch.cat([up_out, self.bn(s)], dim=1)) - return self.conv2(self.conv1(cat_x)) - - -class DynamicUnetDeep(SequentialEx): - "Create a U-Net from a given architecture." - - def __init__( - self, - encoder: nn.Module, - n_classes: int, - blur: bool = False, - blur_final=True, - self_attention: bool = False, - y_range: Optional[Tuple[float, float]] = None, - last_cross: bool = True, - bottle: bool = False, - norm_type: Optional[NormType] = NormType.Batch, - nf_factor: float = 1.0, - **kwargs - ): - extra_bn = norm_type == NormType.Spectral - imsize = (256, 256) - sfs_szs = model_sizes(encoder, size=imsize) - sfs_idxs = list(reversed(_get_sfs_idxs(sfs_szs))) - self.sfs = hook_outputs([encoder[i] for i in sfs_idxs], detach=False) - x = dummy_eval(encoder, imsize).detach() - - ni = sfs_szs[-1][1] - middle_conv = nn.Sequential( - custom_conv_layer( - ni, ni * 2, norm_type=norm_type, extra_bn=extra_bn, **kwargs - ), - custom_conv_layer( - ni * 2, ni, norm_type=norm_type, extra_bn=extra_bn, **kwargs - ), - ).eval() - x = middle_conv(x) - layers = [encoder, batchnorm_2d(ni), nn.ReLU(), middle_conv] - - for i, idx in enumerate(sfs_idxs): - not_final = i != len(sfs_idxs) - 1 - up_in_c, x_in_c = int(x.shape[1]), int(sfs_szs[idx][1]) - do_blur = blur and (not_final or blur_final) - sa = self_attention and (i == len(sfs_idxs) - 3) - unet_block = UnetBlockDeep( - up_in_c, - x_in_c, - self.sfs[i], - final_div=not_final, - blur=blur, - self_attention=sa, - norm_type=norm_type, - extra_bn=extra_bn, - nf_factor=nf_factor, - **kwargs - ).eval() - layers.append(unet_block) - x = unet_block(x) - - ni = x.shape[1] - if imsize != sfs_szs[0][-2:]: - layers.append(PixelShuffle_ICNR(ni, **kwargs)) - if last_cross: - layers.append(MergeLayer(dense=True)) - ni += in_channels(encoder) - layers.append(res_block(ni, bottle=bottle, norm_type=norm_type, **kwargs)) - layers += [ - custom_conv_layer(ni, n_classes, ks=1, use_activ=False, norm_type=norm_type) - ] - if y_range is not None: - layers.append(SigmoidRange(*y_range)) - super().__init__(*layers) - - def __del__(self): - if hasattr(self, "sfs"): - self.sfs.remove() - - -# ------------------------------------------------------ -class UnetBlockWide(nn.Module): - "A quasi-UNet block, using `PixelShuffle_ICNR upsampling`." - - def __init__( - self, - up_in_c: int, - x_in_c: int, - n_out: int, - hook: Hook, - final_div: bool = True, - blur: bool = False, - leaky: float = None, - self_attention: bool = False, - **kwargs - ): - super().__init__() - self.hook = hook - up_out = x_out = n_out // 2 - self.shuf = CustomPixelShuffle_ICNR( - up_in_c, up_out, blur=blur, leaky=leaky, **kwargs - ) - self.bn = batchnorm_2d(x_in_c) - ni = up_out + x_in_c - self.conv = custom_conv_layer( - ni, x_out, leaky=leaky, self_attention=self_attention, **kwargs - ) - self.relu = relu(leaky=leaky) - - def forward(self, up_in: Tensor) -> Tensor: - s = self.hook.stored - up_out = self.shuf(up_in) - ssh = s.shape[-2:] - if ssh != up_out.shape[-2:]: - up_out = F.interpolate(up_out, s.shape[-2:], mode='nearest') - cat_x = self.relu(torch.cat([up_out, self.bn(s)], dim=1)) - return self.conv(cat_x) - - -class DynamicUnetWide(SequentialEx): - "Create a U-Net from a given architecture." - - def __init__( - self, - encoder: nn.Module, - n_classes: int, - blur: bool = False, - blur_final=True, - self_attention: bool = False, - y_range: Optional[Tuple[float, float]] = None, - last_cross: bool = True, - bottle: bool = False, - norm_type: Optional[NormType] = NormType.Batch, - nf_factor: int = 1, - **kwargs - ): - - nf = 512 * nf_factor - extra_bn = norm_type == NormType.Spectral - imsize = (256, 256) - sfs_szs = model_sizes(encoder, size=imsize) - sfs_idxs = list(reversed(_get_sfs_idxs(sfs_szs))) - self.sfs = hook_outputs([encoder[i] for i in sfs_idxs], detach=False) - x = dummy_eval(encoder, imsize).detach() - - ni = sfs_szs[-1][1] - middle_conv = nn.Sequential( - custom_conv_layer( - ni, ni * 2, norm_type=norm_type, extra_bn=extra_bn, **kwargs - ), - custom_conv_layer( - ni * 2, ni, norm_type=norm_type, extra_bn=extra_bn, **kwargs - ), - ).eval() - x = middle_conv(x) - layers = [encoder, batchnorm_2d(ni), nn.ReLU(), middle_conv] - - for i, idx in enumerate(sfs_idxs): - not_final = i != len(sfs_idxs) - 1 - up_in_c, x_in_c = int(x.shape[1]), int(sfs_szs[idx][1]) - do_blur = blur and (not_final or blur_final) - sa = self_attention and (i == len(sfs_idxs) - 3) - - n_out = nf if not_final else nf // 2 - - unet_block = UnetBlockWide( - up_in_c, - x_in_c, - n_out, - self.sfs[i], - final_div=not_final, - blur=blur, - self_attention=sa, - norm_type=norm_type, - extra_bn=extra_bn, - **kwargs - ).eval() - layers.append(unet_block) - x = unet_block(x) - - ni = x.shape[1] - if imsize != sfs_szs[0][-2:]: - layers.append(PixelShuffle_ICNR(ni, **kwargs)) - if last_cross: - layers.append(MergeLayer(dense=True)) - ni += in_channels(encoder) - layers.append(res_block(ni, bottle=bottle, norm_type=norm_type, **kwargs)) - layers += [ - custom_conv_layer(ni, n_classes, ks=1, use_activ=False, norm_type=norm_type) - ] - if y_range is not None: - layers.append(SigmoidRange(*y_range)) - super().__init__(*layers) - - def __del__(self): - if hasattr(self, "sfs"): - self.sfs.remove() diff --git a/spaces/attention-refocusing/Attention-refocusing/gligen/ldm/modules/losses/contperceptual.py b/spaces/attention-refocusing/Attention-refocusing/gligen/ldm/modules/losses/contperceptual.py deleted file mode 100644 index 672c1e32a1389def02461c0781339681060c540e..0000000000000000000000000000000000000000 --- a/spaces/attention-refocusing/Attention-refocusing/gligen/ldm/modules/losses/contperceptual.py +++ /dev/null @@ -1,111 +0,0 @@ -import torch -import torch.nn as nn - -from taming.modules.losses.vqperceptual import * # TODO: taming dependency yes/no? - - -class LPIPSWithDiscriminator(nn.Module): - def __init__(self, disc_start, logvar_init=0.0, kl_weight=1.0, pixelloss_weight=1.0, - disc_num_layers=3, disc_in_channels=3, disc_factor=1.0, disc_weight=1.0, - perceptual_weight=1.0, use_actnorm=False, disc_conditional=False, - disc_loss="hinge"): - - super().__init__() - assert disc_loss in ["hinge", "vanilla"] - self.kl_weight = kl_weight - self.pixel_weight = pixelloss_weight - self.perceptual_loss = LPIPS().eval() - self.perceptual_weight = perceptual_weight - # output log variance - self.logvar = nn.Parameter(torch.ones(size=()) * logvar_init) - - self.discriminator = NLayerDiscriminator(input_nc=disc_in_channels, - n_layers=disc_num_layers, - use_actnorm=use_actnorm - ).apply(weights_init) - self.discriminator_iter_start = disc_start - self.disc_loss = hinge_d_loss if disc_loss == "hinge" else vanilla_d_loss - self.disc_factor = disc_factor - self.discriminator_weight = disc_weight - self.disc_conditional = disc_conditional - - def calculate_adaptive_weight(self, nll_loss, g_loss, last_layer=None): - if last_layer is not None: - nll_grads = torch.autograd.grad(nll_loss, last_layer, retain_graph=True)[0] - g_grads = torch.autograd.grad(g_loss, last_layer, retain_graph=True)[0] - else: - nll_grads = torch.autograd.grad(nll_loss, self.last_layer[0], retain_graph=True)[0] - g_grads = torch.autograd.grad(g_loss, self.last_layer[0], retain_graph=True)[0] - - d_weight = torch.norm(nll_grads) / (torch.norm(g_grads) + 1e-4) - d_weight = torch.clamp(d_weight, 0.0, 1e4).detach() - d_weight = d_weight * self.discriminator_weight - return d_weight - - def forward(self, inputs, reconstructions, posteriors, optimizer_idx, - global_step, last_layer=None, cond=None, split="train", - weights=None): - rec_loss = torch.abs(inputs.contiguous() - reconstructions.contiguous()) - if self.perceptual_weight > 0: - p_loss = self.perceptual_loss(inputs.contiguous(), reconstructions.contiguous()) - rec_loss = rec_loss + self.perceptual_weight * p_loss - - nll_loss = rec_loss / torch.exp(self.logvar) + self.logvar - weighted_nll_loss = nll_loss - if weights is not None: - weighted_nll_loss = weights*nll_loss - weighted_nll_loss = torch.sum(weighted_nll_loss) / weighted_nll_loss.shape[0] - nll_loss = torch.sum(nll_loss) / nll_loss.shape[0] - kl_loss = posteriors.kl() - kl_loss = torch.sum(kl_loss) / kl_loss.shape[0] - - # now the GAN part - if optimizer_idx == 0: - # generator update - if cond is None: - assert not self.disc_conditional - logits_fake = self.discriminator(reconstructions.contiguous()) - else: - assert self.disc_conditional - logits_fake = self.discriminator(torch.cat((reconstructions.contiguous(), cond), dim=1)) - g_loss = -torch.mean(logits_fake) - - if self.disc_factor > 0.0: - try: - d_weight = self.calculate_adaptive_weight(nll_loss, g_loss, last_layer=last_layer) - except RuntimeError: - assert not self.training - d_weight = torch.tensor(0.0) - else: - d_weight = torch.tensor(0.0) - - disc_factor = adopt_weight(self.disc_factor, global_step, threshold=self.discriminator_iter_start) - loss = weighted_nll_loss + self.kl_weight * kl_loss + d_weight * disc_factor * g_loss - - log = {"{}/total_loss".format(split): loss.clone().detach().mean(), "{}/logvar".format(split): self.logvar.detach(), - "{}/kl_loss".format(split): kl_loss.detach().mean(), "{}/nll_loss".format(split): nll_loss.detach().mean(), - "{}/rec_loss".format(split): rec_loss.detach().mean(), - "{}/d_weight".format(split): d_weight.detach(), - "{}/disc_factor".format(split): torch.tensor(disc_factor), - "{}/g_loss".format(split): g_loss.detach().mean(), - } - return loss, log - - if optimizer_idx == 1: - # second pass for discriminator update - if cond is None: - logits_real = self.discriminator(inputs.contiguous().detach()) - logits_fake = self.discriminator(reconstructions.contiguous().detach()) - else: - logits_real = self.discriminator(torch.cat((inputs.contiguous().detach(), cond), dim=1)) - logits_fake = self.discriminator(torch.cat((reconstructions.contiguous().detach(), cond), dim=1)) - - disc_factor = adopt_weight(self.disc_factor, global_step, threshold=self.discriminator_iter_start) - d_loss = disc_factor * self.disc_loss(logits_real, logits_fake) - - log = {"{}/disc_loss".format(split): d_loss.clone().detach().mean(), - "{}/logits_real".format(split): logits_real.detach().mean(), - "{}/logits_fake".format(split): logits_fake.detach().mean() - } - return d_loss, log - diff --git a/spaces/avans06/whisper-webui-translate/docs/colab.md b/spaces/avans06/whisper-webui-translate/docs/colab.md deleted file mode 100644 index 3fcdb835327238764fb643b9bbd2e27b6e14f58c..0000000000000000000000000000000000000000 --- a/spaces/avans06/whisper-webui-translate/docs/colab.md +++ /dev/null @@ -1,20 +0,0 @@ -# Running Whisper on Google Colab - -If you don't have a decent GPU or any experience in running command-line applications, you might want to try this Google Colab instead: - -* [Google Colab - Whisper WebUI GPU](https://colab.research.google.com/drive/1qeTSvi7Bt_5RMm88ipW4fkcsMOKlDDss?usp=sharing) -* [Screenshots](https://imgur.com/a/ZfY6uBO) - -The runtime (Runtime -> Change runtime type -> Hardware accelerator) should already be set top GPU. But if not, change it to GPU. - -Then, sign in to Google if you haven't already. Next, click on "Connect" at the top right. - -Under "Checking out WebUI from Git", click on the [play icon](https://imgur.com/a/81gOLyD) that appears in "[ ]" at the left. If you get a warning, click "Run anyway". - -After this step has completed, it should be get a green check mark. Then move on to the next section under "Installing dependencies", and click in "[ ]" again. This might take approximately 30 seconds. - -Once this has completed, scroll down to the "Run WebUI" section, and click on "[ ]". This will launch the WebUI in a shared link (expires in 72 hours). To open the UI, click on the link next to "Running on public URL", which will be something like https://12xxx.gradio.app/ - -The audio length in this version is not restricted, and it will run much faster as it is backed by a GPU. You can also run it using the "Large" model. Also note that it might take some time to start the model the first time, as it may need to download a 2.8 GB file on Google's servers. - -Once you're done, you can close the WebUI session by clicking the animated close button under "Run WebUI". You can also do this if you encounter any errors and need to restart the UI. You should also go to "Manage Sessions" and terminate the session, otherwise you may end up using all your free compute credits. \ No newline at end of file diff --git a/spaces/awacke1/AIDocumentUnderstandingOCR/README.md b/spaces/awacke1/AIDocumentUnderstandingOCR/README.md deleted file mode 100644 index f3348a03ddb946a07a9f9cccacfb9924ade30411..0000000000000000000000000000000000000000 --- a/spaces/awacke1/AIDocumentUnderstandingOCR/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: 📑 NLP Document Understand OCR 👁️ -emoji: 📑👁️ -colorFrom: green -colorTo: yellow -sdk: gradio -sdk_version: 3.0.5 -app_file: app.py -pinned: false -license: mit ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces#reference diff --git a/spaces/awacke1/BiasMitigatorForFairEquityData/README.md b/spaces/awacke1/BiasMitigatorForFairEquityData/README.md deleted file mode 100644 index af869f30202a8ce07fd6c50ee56003a7611496e5..0000000000000000000000000000000000000000 --- a/spaces/awacke1/BiasMitigatorForFairEquityData/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: 🌍 Bias Mitigator For Fair Equity Data Streamlit -emoji: 🌍 -colorFrom: yellow -colorTo: red -sdk: streamlit -sdk_version: 1.15.2 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/awacke1/CardGameActivity/README.md b/spaces/awacke1/CardGameActivity/README.md deleted file mode 100644 index 56ba5b7763de66b8ae281979aa8ae235dd3d09e4..0000000000000000000000000000000000000000 --- a/spaces/awacke1/CardGameActivity/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: CardGameActivity -emoji: 🔥 -colorFrom: gray -colorTo: gray -sdk: streamlit -sdk_version: 1.17.0 -app_file: app.py -pinned: false -license: mit ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/awacke1/GradioUpdateUI/README.md b/spaces/awacke1/GradioUpdateUI/README.md deleted file mode 100644 index 211958fa8a1e8492a595ea04c28d9bbf79c24793..0000000000000000000000000000000000000000 --- a/spaces/awacke1/GradioUpdateUI/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: GradioUpdateUI -emoji: 😻 -colorFrom: pink -colorTo: green -sdk: gradio -sdk_version: 3.18.0 -app_file: app.py -pinned: false ---- - -Reference: https://huggingface.co/course/chapter9/8?fw=pt diff --git a/spaces/awacke1/Memory-Shared/README.md b/spaces/awacke1/Memory-Shared/README.md deleted file mode 100644 index f2ca07a78b105829863d15f3d6aaf11d7fc82d9d..0000000000000000000000000000000000000000 --- a/spaces/awacke1/Memory-Shared/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: PersistentDatasetMemoryGradio -emoji: 💾📊📻 -colorFrom: yellow -colorTo: blue -sdk: gradio -sdk_version: 2.4.2 -app_file: app.py -pinned: false -license: mit ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces#reference diff --git a/spaces/awacke1/PlayableMovingLottieAnimationStreamlit/README.md b/spaces/awacke1/PlayableMovingLottieAnimationStreamlit/README.md deleted file mode 100644 index b4328faec68aeaf15cc623d4a40b89130d50a8a7..0000000000000000000000000000000000000000 --- a/spaces/awacke1/PlayableMovingLottieAnimationStreamlit/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: PlayableMovingLottieAnimationStreamlit -emoji: 📊 -colorFrom: green -colorTo: gray -sdk: streamlit -sdk_version: 1.17.0 -app_file: app.py -pinned: false -license: mit ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/awacke1/Sentiment-aware-chatbot/README.md b/spaces/awacke1/Sentiment-aware-chatbot/README.md deleted file mode 100644 index 10b8903f297d68cf56249f55b63be153bcd1db18..0000000000000000000000000000000000000000 --- a/spaces/awacke1/Sentiment-aware-chatbot/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Sentiment Aware Chatbot -emoji: 🚀 -colorFrom: pink -colorTo: yellow -sdk: streamlit -sdk_version: 1.17.0 -app_file: app.py -pinned: false -license: mit ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference \ No newline at end of file diff --git a/spaces/banana-projects/web3d/node_modules/three/src/cameras/ArrayCamera.d.ts b/spaces/banana-projects/web3d/node_modules/three/src/cameras/ArrayCamera.d.ts deleted file mode 100644 index ead49b9fb174c439dacc460439adaec185c9115b..0000000000000000000000000000000000000000 --- a/spaces/banana-projects/web3d/node_modules/three/src/cameras/ArrayCamera.d.ts +++ /dev/null @@ -1,8 +0,0 @@ -import { PerspectiveCamera } from './PerspectiveCamera'; - -export class ArrayCamera extends PerspectiveCamera { - constructor(cameras?: PerspectiveCamera[]); - - cameras: PerspectiveCamera[]; - isArrayCamera: true; -} diff --git a/spaces/banana-projects/web3d/node_modules/three/src/helpers/PlaneHelper.js b/spaces/banana-projects/web3d/node_modules/three/src/helpers/PlaneHelper.js deleted file mode 100644 index edb94334aae3070f76af704b7a824f01c1e3748e..0000000000000000000000000000000000000000 --- a/spaces/banana-projects/web3d/node_modules/three/src/helpers/PlaneHelper.js +++ /dev/null @@ -1,63 +0,0 @@ -/** - * @author WestLangley / http://github.com/WestLangley - */ - -import { Line } from '../objects/Line.js'; -import { Mesh } from '../objects/Mesh.js'; -import { LineBasicMaterial } from '../materials/LineBasicMaterial.js'; -import { MeshBasicMaterial } from '../materials/MeshBasicMaterial.js'; -import { Float32BufferAttribute } from '../core/BufferAttribute.js'; -import { BufferGeometry } from '../core/BufferGeometry.js'; -import { Object3D } from '../core/Object3D.js'; -import { FrontSide, BackSide } from '../constants.js'; - -function PlaneHelper( plane, size, hex ) { - - this.type = 'PlaneHelper'; - - this.plane = plane; - - this.size = ( size === undefined ) ? 1 : size; - - var color = ( hex !== undefined ) ? hex : 0xffff00; - - var positions = [ 1, - 1, 1, - 1, 1, 1, - 1, - 1, 1, 1, 1, 1, - 1, 1, 1, - 1, - 1, 1, 1, - 1, 1, 1, 1, 1, 0, 0, 1, 0, 0, 0 ]; - - var geometry = new BufferGeometry(); - geometry.addAttribute( 'position', new Float32BufferAttribute( positions, 3 ) ); - geometry.computeBoundingSphere(); - - Line.call( this, geometry, new LineBasicMaterial( { color: color } ) ); - - // - - var positions2 = [ 1, 1, 1, - 1, 1, 1, - 1, - 1, 1, 1, 1, 1, - 1, - 1, 1, 1, - 1, 1 ]; - - var geometry2 = new BufferGeometry(); - geometry2.addAttribute( 'position', new Float32BufferAttribute( positions2, 3 ) ); - geometry2.computeBoundingSphere(); - - this.add( new Mesh( geometry2, new MeshBasicMaterial( { color: color, opacity: 0.2, transparent: true, depthWrite: false } ) ) ); - -} - -PlaneHelper.prototype = Object.create( Line.prototype ); -PlaneHelper.prototype.constructor = PlaneHelper; - -PlaneHelper.prototype.updateMatrixWorld = function ( force ) { - - var scale = - this.plane.constant; - - if ( Math.abs( scale ) < 1e-8 ) scale = 1e-8; // sign does not matter - - this.scale.set( 0.5 * this.size, 0.5 * this.size, scale ); - - this.children[ 0 ].material.side = ( scale < 0 ) ? BackSide : FrontSide; // renderer flips side when determinant < 0; flipping not wanted here - - this.lookAt( this.plane.normal ); - - Object3D.prototype.updateMatrixWorld.call( this, force ); - -}; - -export { PlaneHelper }; diff --git a/spaces/beihai/GFPGAN-V1.3-whole-image/.history/app_20220326230538.py b/spaces/beihai/GFPGAN-V1.3-whole-image/.history/app_20220326230538.py deleted file mode 100644 index 94bdfdfa79b1d370f5f7b27bd0dc1d63a679a842..0000000000000000000000000000000000000000 --- a/spaces/beihai/GFPGAN-V1.3-whole-image/.history/app_20220326230538.py +++ /dev/null @@ -1,68 +0,0 @@ -import os -os.system("pip install gfpgan") - -#os.system("pip freeze") -#os.system("wget https://github.com/TencentARC/GFPGAN/releases/download/v0.2.0/GFPGANCleanv1-NoCE-C2.pth -P .") -import random -import gradio as gr -from PIL import Image -import torch -# torch.hub.download_url_to_file('https://upload.wikimedia.org/wikipedia/commons/thumb/a/ab/Abraham_Lincoln_O-77_matte_collodion_print.jpg/1024px-Abraham_Lincoln_O-77_matte_collodion_print.jpg', 'lincoln.jpg') -# torch.hub.download_url_to_file('https://upload.wikimedia.org/wikipedia/commons/5/50/Albert_Einstein_%28Nobel%29.png', 'einstein.png') -# torch.hub.download_url_to_file('https://upload.wikimedia.org/wikipedia/commons/thumb/9/9d/Thomas_Edison2.jpg/1024px-Thomas_Edison2.jpg', 'edison.jpg') -# torch.hub.download_url_to_file('https://upload.wikimedia.org/wikipedia/commons/thumb/a/a9/Henry_Ford_1888.jpg/1024px-Henry_Ford_1888.jpg', 'Henry.jpg') -# torch.hub.download_url_to_file('https://upload.wikimedia.org/wikipedia/commons/thumb/0/06/Frida_Kahlo%2C_by_Guillermo_Kahlo.jpg/800px-Frida_Kahlo%2C_by_Guillermo_Kahlo.jpg', 'Frida.jpg') - - - - -import cv2 -import glob -import numpy as np -from basicsr.utils import imwrite -from gfpgan import GFPGANer - -import warnings -warnings.warn('The unoptimized RealESRGAN is very slow on CPU. We do not use it. ' - 'If you really want to use it, please modify the corresponding codes.') -bg_upsampler = None - - - -# set up GFPGAN restorer -restorer = GFPGANer( - model_path='experiments/pretrained_models/GFPGANv1.3.pth', - upscale=2, - arch='clean', - channel_multiplier=2, - bg_upsampler=bg_upsampler) - - - - - -def inference(img): - input_img = cv2.imread(img, cv2.IMREAD_COLOR) - cropped_faces, restored_faces, restored_img = restorer.enhance( - input_img, has_aligned=False, only_center_face=False, paste_back=True) - - return Image.fromarray(restored_faces[0][:,:,::-1]) - -title = "GFP-GAN" -description = "Gradio demo for GFP-GAN: Towards Real-World Blind Face Restoration with Generative Facial Prior. To use it, simply upload your image, or click one of the examples to load them. Read more at the links below. Please click submit only once" -article = "

Towards Real-World Blind Face Restoration with Generative Facial Prior | Github Repo

visitor badge
" -gr.Interface( - inference, - [gr.inputs.Image(type="filepath", label="Input")], - gr.outputs.Image(type="pil", label="Output"), - title=title, - description=description, - article=article, - examples=[ - ['lincoln.jpg'], - ['einstein.png'], - ['edison.jpg'], - ['Henry.jpg'], - ['Frida.jpg'] - ] - ).launch(enable_queue=True,cache_examples=True) \ No newline at end of file diff --git a/spaces/bioriAsaeru/text-to-voice/Cwcheat Download God Eater Burst.md b/spaces/bioriAsaeru/text-to-voice/Cwcheat Download God Eater Burst.md deleted file mode 100644 index 19225d3e28961ecae88281fac939a760fc0dd128..0000000000000000000000000000000000000000 --- a/spaces/bioriAsaeru/text-to-voice/Cwcheat Download God Eater Burst.md +++ /dev/null @@ -1,6 +0,0 @@ -

cwcheat download god eater burst


DOWNLOADhttps://urloso.com/2uyS2g



-
-for this page there are no information. Find out why we can't find this page online. The information on this page is out of date or cannot be found. We would be grateful if you could let us know if there is information on this page that needs to be updated. 8a78ff9644
-
-
-

diff --git a/spaces/bioriAsaeru/text-to-voice/Icecream PDF Candy Desktop Pro 2.62 Patch.md b/spaces/bioriAsaeru/text-to-voice/Icecream PDF Candy Desktop Pro 2.62 Patch.md deleted file mode 100644 index eb759696cef1f4849ca67b7cdbb9edf2e611cc3c..0000000000000000000000000000000000000000 --- a/spaces/bioriAsaeru/text-to-voice/Icecream PDF Candy Desktop Pro 2.62 Patch.md +++ /dev/null @@ -1,36 +0,0 @@ - -

How to Use Icecream PDF Candy Desktop Pro 2.62 Patch to Convert, Edit and Protect PDF Files

-

PDF files are widely used for various purposes, such as sharing documents, ebooks, images and more. However, sometimes you may need to convert PDF files to other formats, such as Word, JPG, etc., or edit PDF files by merging, splitting, extracting or changing metadata. You may also want to protect your PDF files with passwords or unlock them if they are encrypted.

-

Icecream PDF Candy Desktop Pro 2.62 Patch


DOWNLOAD ✒ ✒ ✒ https://urloso.com/2uyQxU



-

One of the tools that can help you with these tasks is Icecream PDF Candy Desktop Pro 2.62 Patch. This is a versatile and user-friendly software that enables you to convert files from PDF to various supported formats (PDF to DOC, PDF to JPG, etc), convert documents, ebooks and images to PDF, merge PDF, split PDF, extract images and text from PDF, edit PDF metadata, protect PDF and unlock password-protected PDF files. It also supports batch processing and OCR (text recognition) for scanned PDFs.

-

In this article, we will show you how to use Icecream PDF Candy Desktop Pro 2.62 Patch to perform some common operations on PDF files.

-

How to Convert PDF Files to Other Formats

-

If you want to convert a PDF file to another format, such as Word or JPG, you can follow these steps:

-
    -
  1. Download and install Icecream PDF Candy Desktop Pro 2.62 Patch from the official website or from one of the web search results[^1^] [^2^] [^3^].
  2. -
  3. Launch the program and select "From PDF" mode from the main window.
  4. -
  5. Add the PDF file(s) that you want to convert by clicking on the "Add file(s)" button or dragging and dropping them into the program window.
  6. -
  7. Select the output format that you want from the drop-down menu at the bottom of the window. You can choose from DOC, DOCX, RTF, ODT, JPG, PNG, BMP, TIFF and GIF.
  8. -
  9. Click on the "Convert" button and wait for the process to finish. You can see the progress and status of each file in the list.
  10. -
  11. Once the conversion is done, you can open the output folder by clicking on the "Open folder" button or access the output files directly from the program window by clicking on their names.
  12. -
-

How to Convert Other Formats to PDF

-

If you want to convert a document, ebook or image file to PDF format, you can follow these steps:

-

-
    -
  1. Download and install Icecream PDF Candy Desktop Pro 2.62 Patch from the official website or from one of the web search results[^1^] [^2^] [^3^].
  2. -
  3. Launch the program and select "To PDF" mode from the main window.
  4. -
  5. Add the file(s) that you want to convert by clicking on the "Add file(s)" button or dragging and dropping them into the program window.
  6. -
  7. Select the output settings that you want from the right panel. You can choose the page size, orientation, margins and compression level for your output PDF file.
  8. -
  9. Click on the "Convert" button and wait for the process to finish. You can see the progress and status of each file in the list.
  10. -
  11. Once the conversion is done, you can open the output folder by clicking on the "Open folder" button or access the output files directly from the program window by clicking on their names.
  12. -
-

How to Merge PDF Files

-

If you want to merge two or more PDF files into one single file, you can follow these steps:

-
    -
  1. Download and install Icecream PDF Candy Desktop Pro 2.62 Patch from the official website or from one of the web search results[^1^] [^2^] [^3^].
  2. -
  3. Launch the program and select "Merge" mode from the main window.
  4. -
  5. Add the PDF file(s) that you want to merge by clicking on the "Add file(s)" button or dragging and dropping them into the program window.
  6. -
  7. Arrange the order of the files by

    d5da3c52bf
    -
    -
    \ No newline at end of file diff --git a/spaces/brjathu/HMR2.0/vendor/detectron2/detectron2/evaluation/lvis_evaluation.py b/spaces/brjathu/HMR2.0/vendor/detectron2/detectron2/evaluation/lvis_evaluation.py deleted file mode 100644 index 6cc854a157dc469be99a9be1bb7d570068adc891..0000000000000000000000000000000000000000 --- a/spaces/brjathu/HMR2.0/vendor/detectron2/detectron2/evaluation/lvis_evaluation.py +++ /dev/null @@ -1,380 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -import copy -import itertools -import json -import logging -import os -import pickle -from collections import OrderedDict -import torch - -import detectron2.utils.comm as comm -from detectron2.config import CfgNode -from detectron2.data import MetadataCatalog -from detectron2.structures import Boxes, BoxMode, pairwise_iou -from detectron2.utils.file_io import PathManager -from detectron2.utils.logger import create_small_table - -from .coco_evaluation import instances_to_coco_json -from .evaluator import DatasetEvaluator - - -class LVISEvaluator(DatasetEvaluator): - """ - Evaluate object proposal and instance detection/segmentation outputs using - LVIS's metrics and evaluation API. - """ - - def __init__( - self, - dataset_name, - tasks=None, - distributed=True, - output_dir=None, - *, - max_dets_per_image=None, - ): - """ - Args: - dataset_name (str): name of the dataset to be evaluated. - It must have the following corresponding metadata: - "json_file": the path to the LVIS format annotation - tasks (tuple[str]): tasks that can be evaluated under the given - configuration. A task is one of "bbox", "segm". - By default, will infer this automatically from predictions. - distributed (True): if True, will collect results from all ranks for evaluation. - Otherwise, will evaluate the results in the current process. - output_dir (str): optional, an output directory to dump results. - max_dets_per_image (None or int): limit on maximum detections per image in evaluating AP - This limit, by default of the LVIS dataset, is 300. - """ - from lvis import LVIS - - self._logger = logging.getLogger(__name__) - - if tasks is not None and isinstance(tasks, CfgNode): - self._logger.warn( - "COCO Evaluator instantiated using config, this is deprecated behavior." - " Please pass in explicit arguments instead." - ) - self._tasks = None # Infering it from predictions should be better - else: - self._tasks = tasks - - self._distributed = distributed - self._output_dir = output_dir - self._max_dets_per_image = max_dets_per_image - - self._cpu_device = torch.device("cpu") - - self._metadata = MetadataCatalog.get(dataset_name) - json_file = PathManager.get_local_path(self._metadata.json_file) - self._lvis_api = LVIS(json_file) - # Test set json files do not contain annotations (evaluation must be - # performed using the LVIS evaluation server). - self._do_evaluation = len(self._lvis_api.get_ann_ids()) > 0 - - def reset(self): - self._predictions = [] - - def process(self, inputs, outputs): - """ - Args: - inputs: the inputs to a LVIS model (e.g., GeneralizedRCNN). - It is a list of dict. Each dict corresponds to an image and - contains keys like "height", "width", "file_name", "image_id". - outputs: the outputs of a LVIS model. It is a list of dicts with key - "instances" that contains :class:`Instances`. - """ - for input, output in zip(inputs, outputs): - prediction = {"image_id": input["image_id"]} - - if "instances" in output: - instances = output["instances"].to(self._cpu_device) - prediction["instances"] = instances_to_coco_json(instances, input["image_id"]) - if "proposals" in output: - prediction["proposals"] = output["proposals"].to(self._cpu_device) - self._predictions.append(prediction) - - def evaluate(self): - if self._distributed: - comm.synchronize() - predictions = comm.gather(self._predictions, dst=0) - predictions = list(itertools.chain(*predictions)) - - if not comm.is_main_process(): - return - else: - predictions = self._predictions - - if len(predictions) == 0: - self._logger.warning("[LVISEvaluator] Did not receive valid predictions.") - return {} - - if self._output_dir: - PathManager.mkdirs(self._output_dir) - file_path = os.path.join(self._output_dir, "instances_predictions.pth") - with PathManager.open(file_path, "wb") as f: - torch.save(predictions, f) - - self._results = OrderedDict() - if "proposals" in predictions[0]: - self._eval_box_proposals(predictions) - if "instances" in predictions[0]: - self._eval_predictions(predictions) - # Copy so the caller can do whatever with results - return copy.deepcopy(self._results) - - def _tasks_from_predictions(self, predictions): - for pred in predictions: - if "segmentation" in pred: - return ("bbox", "segm") - return ("bbox",) - - def _eval_predictions(self, predictions): - """ - Evaluate predictions. Fill self._results with the metrics of the tasks. - - Args: - predictions (list[dict]): list of outputs from the model - """ - self._logger.info("Preparing results in the LVIS format ...") - lvis_results = list(itertools.chain(*[x["instances"] for x in predictions])) - tasks = self._tasks or self._tasks_from_predictions(lvis_results) - - # LVIS evaluator can be used to evaluate results for COCO dataset categories. - # In this case `_metadata` variable will have a field with COCO-specific category mapping. - if hasattr(self._metadata, "thing_dataset_id_to_contiguous_id"): - reverse_id_mapping = { - v: k for k, v in self._metadata.thing_dataset_id_to_contiguous_id.items() - } - for result in lvis_results: - result["category_id"] = reverse_id_mapping[result["category_id"]] - else: - # unmap the category ids for LVIS (from 0-indexed to 1-indexed) - for result in lvis_results: - result["category_id"] += 1 - - if self._output_dir: - file_path = os.path.join(self._output_dir, "lvis_instances_results.json") - self._logger.info("Saving results to {}".format(file_path)) - with PathManager.open(file_path, "w") as f: - f.write(json.dumps(lvis_results)) - f.flush() - - if not self._do_evaluation: - self._logger.info("Annotations are not available for evaluation.") - return - - self._logger.info("Evaluating predictions ...") - for task in sorted(tasks): - res = _evaluate_predictions_on_lvis( - self._lvis_api, - lvis_results, - task, - max_dets_per_image=self._max_dets_per_image, - class_names=self._metadata.get("thing_classes"), - ) - self._results[task] = res - - def _eval_box_proposals(self, predictions): - """ - Evaluate the box proposals in predictions. - Fill self._results with the metrics for "box_proposals" task. - """ - if self._output_dir: - # Saving generated box proposals to file. - # Predicted box_proposals are in XYXY_ABS mode. - bbox_mode = BoxMode.XYXY_ABS.value - ids, boxes, objectness_logits = [], [], [] - for prediction in predictions: - ids.append(prediction["image_id"]) - boxes.append(prediction["proposals"].proposal_boxes.tensor.numpy()) - objectness_logits.append(prediction["proposals"].objectness_logits.numpy()) - - proposal_data = { - "boxes": boxes, - "objectness_logits": objectness_logits, - "ids": ids, - "bbox_mode": bbox_mode, - } - with PathManager.open(os.path.join(self._output_dir, "box_proposals.pkl"), "wb") as f: - pickle.dump(proposal_data, f) - - if not self._do_evaluation: - self._logger.info("Annotations are not available for evaluation.") - return - - self._logger.info("Evaluating bbox proposals ...") - res = {} - areas = {"all": "", "small": "s", "medium": "m", "large": "l"} - for limit in [100, 1000]: - for area, suffix in areas.items(): - stats = _evaluate_box_proposals(predictions, self._lvis_api, area=area, limit=limit) - key = "AR{}@{:d}".format(suffix, limit) - res[key] = float(stats["ar"].item() * 100) - self._logger.info("Proposal metrics: \n" + create_small_table(res)) - self._results["box_proposals"] = res - - -# inspired from Detectron: -# https://github.com/facebookresearch/Detectron/blob/a6a835f5b8208c45d0dce217ce9bbda915f44df7/detectron/datasets/json_dataset_evaluator.py#L255 # noqa -def _evaluate_box_proposals(dataset_predictions, lvis_api, thresholds=None, area="all", limit=None): - """ - Evaluate detection proposal recall metrics. This function is a much - faster alternative to the official LVIS API recall evaluation code. However, - it produces slightly different results. - """ - # Record max overlap value for each gt box - # Return vector of overlap values - areas = { - "all": 0, - "small": 1, - "medium": 2, - "large": 3, - "96-128": 4, - "128-256": 5, - "256-512": 6, - "512-inf": 7, - } - area_ranges = [ - [0**2, 1e5**2], # all - [0**2, 32**2], # small - [32**2, 96**2], # medium - [96**2, 1e5**2], # large - [96**2, 128**2], # 96-128 - [128**2, 256**2], # 128-256 - [256**2, 512**2], # 256-512 - [512**2, 1e5**2], - ] # 512-inf - assert area in areas, "Unknown area range: {}".format(area) - area_range = area_ranges[areas[area]] - gt_overlaps = [] - num_pos = 0 - - for prediction_dict in dataset_predictions: - predictions = prediction_dict["proposals"] - - # sort predictions in descending order - # TODO maybe remove this and make it explicit in the documentation - inds = predictions.objectness_logits.sort(descending=True)[1] - predictions = predictions[inds] - - ann_ids = lvis_api.get_ann_ids(img_ids=[prediction_dict["image_id"]]) - anno = lvis_api.load_anns(ann_ids) - gt_boxes = [ - BoxMode.convert(obj["bbox"], BoxMode.XYWH_ABS, BoxMode.XYXY_ABS) for obj in anno - ] - gt_boxes = torch.as_tensor(gt_boxes).reshape(-1, 4) # guard against no boxes - gt_boxes = Boxes(gt_boxes) - gt_areas = torch.as_tensor([obj["area"] for obj in anno]) - - if len(gt_boxes) == 0 or len(predictions) == 0: - continue - - valid_gt_inds = (gt_areas >= area_range[0]) & (gt_areas <= area_range[1]) - gt_boxes = gt_boxes[valid_gt_inds] - - num_pos += len(gt_boxes) - - if len(gt_boxes) == 0: - continue - - if limit is not None and len(predictions) > limit: - predictions = predictions[:limit] - - overlaps = pairwise_iou(predictions.proposal_boxes, gt_boxes) - - _gt_overlaps = torch.zeros(len(gt_boxes)) - for j in range(min(len(predictions), len(gt_boxes))): - # find which proposal box maximally covers each gt box - # and get the iou amount of coverage for each gt box - max_overlaps, argmax_overlaps = overlaps.max(dim=0) - - # find which gt box is 'best' covered (i.e. 'best' = most iou) - gt_ovr, gt_ind = max_overlaps.max(dim=0) - assert gt_ovr >= 0 - # find the proposal box that covers the best covered gt box - box_ind = argmax_overlaps[gt_ind] - # record the iou coverage of this gt box - _gt_overlaps[j] = overlaps[box_ind, gt_ind] - assert _gt_overlaps[j] == gt_ovr - # mark the proposal box and the gt box as used - overlaps[box_ind, :] = -1 - overlaps[:, gt_ind] = -1 - - # append recorded iou coverage level - gt_overlaps.append(_gt_overlaps) - gt_overlaps = ( - torch.cat(gt_overlaps, dim=0) if len(gt_overlaps) else torch.zeros(0, dtype=torch.float32) - ) - gt_overlaps, _ = torch.sort(gt_overlaps) - - if thresholds is None: - step = 0.05 - thresholds = torch.arange(0.5, 0.95 + 1e-5, step, dtype=torch.float32) - recalls = torch.zeros_like(thresholds) - # compute recall for each iou threshold - for i, t in enumerate(thresholds): - recalls[i] = (gt_overlaps >= t).float().sum() / float(num_pos) - # ar = 2 * np.trapz(recalls, thresholds) - ar = recalls.mean() - return { - "ar": ar, - "recalls": recalls, - "thresholds": thresholds, - "gt_overlaps": gt_overlaps, - "num_pos": num_pos, - } - - -def _evaluate_predictions_on_lvis( - lvis_gt, lvis_results, iou_type, max_dets_per_image=None, class_names=None -): - """ - Args: - iou_type (str): - max_dets_per_image (None or int): limit on maximum detections per image in evaluating AP - This limit, by default of the LVIS dataset, is 300. - class_names (None or list[str]): if provided, will use it to predict - per-category AP. - - Returns: - a dict of {metric name: score} - """ - metrics = { - "bbox": ["AP", "AP50", "AP75", "APs", "APm", "APl", "APr", "APc", "APf"], - "segm": ["AP", "AP50", "AP75", "APs", "APm", "APl", "APr", "APc", "APf"], - }[iou_type] - - logger = logging.getLogger(__name__) - - if len(lvis_results) == 0: # TODO: check if needed - logger.warn("No predictions from the model!") - return {metric: float("nan") for metric in metrics} - - if iou_type == "segm": - lvis_results = copy.deepcopy(lvis_results) - # When evaluating mask AP, if the results contain bbox, LVIS API will - # use the box area as the area of the instance, instead of the mask area. - # This leads to a different definition of small/medium/large. - # We remove the bbox field to let mask AP use mask area. - for c in lvis_results: - c.pop("bbox", None) - - if max_dets_per_image is None: - max_dets_per_image = 300 # Default for LVIS dataset - - from lvis import LVISEval, LVISResults - - logger.info(f"Evaluating with max detections per image = {max_dets_per_image}") - lvis_results = LVISResults(lvis_gt, lvis_results, max_dets=max_dets_per_image) - lvis_eval = LVISEval(lvis_gt, lvis_results, iou_type) - lvis_eval.run() - lvis_eval.print_results() - - # Pull the standard metrics from the LVIS results - results = lvis_eval.get_results() - results = {metric: float(results[metric] * 100) for metric in metrics} - logger.info("Evaluation results for {}: \n".format(iou_type) + create_small_table(results)) - return results diff --git a/spaces/carlosalonso/Detection-video/carpeta_deteccion/tests/config/dir1/bad_import.py b/spaces/carlosalonso/Detection-video/carpeta_deteccion/tests/config/dir1/bad_import.py deleted file mode 100644 index d7452c4dfc211223c946f22df7a2eb6bdc2cd829..0000000000000000000000000000000000000000 --- a/spaces/carlosalonso/Detection-video/carpeta_deteccion/tests/config/dir1/bad_import.py +++ /dev/null @@ -1,2 +0,0 @@ -# import from directory is not allowed -from . import dir1a diff --git a/spaces/cc1799/vits-uma-genshin-honkai/README.md b/spaces/cc1799/vits-uma-genshin-honkai/README.md deleted file mode 100644 index 1c0aa069bfd980b6b45bb2bf62ff74bd9b0b61c2..0000000000000000000000000000000000000000 --- a/spaces/cc1799/vits-uma-genshin-honkai/README.md +++ /dev/null @@ -1,11 +0,0 @@ ---- -license: apache-2.0 -title: ' vits-uma-genshin-honkai' -sdk: gradio -sdk_version: 3.7 -emoji: 🐨 -colorTo: yellow -pinned: false -app_file: app.py -duplicated_from: ikechan8370/vits-uma-genshin-honkai ---- diff --git a/spaces/ccolas/TastyPiano/src/music/representation_learning/mlm_pretrain/saving_tokenizer_and_config.py b/spaces/ccolas/TastyPiano/src/music/representation_learning/mlm_pretrain/saving_tokenizer_and_config.py deleted file mode 100644 index 593cef776ad2b39db4301aa247e1c56dc7cb413b..0000000000000000000000000000000000000000 --- a/spaces/ccolas/TastyPiano/src/music/representation_learning/mlm_pretrain/saving_tokenizer_and_config.py +++ /dev/null @@ -1,41 +0,0 @@ -from transformers import T5Config, BertConfig -from my_tokenizer import MyT5Tokenizer, MyBERTTokenizer - -models = ['t5', 'bert', 'spanbert'] # 't5' 'bert' 'spanbert' - -for model in models: - if model == 't5': - tokens = ['', '
    ', ''] + [str(i) for i in range(2, 183)] - tokens_ids = list(range(len(tokens))) - vocab = dict(zip(tokens, tokens_ids)) - - tokenizer = MyT5Tokenizer(vocab=vocab, unk_token=tokens[2], eos_token=tokens[1], pad_token=tokens[0]) - assert tokenizer.decode(tokenizer.encode('0 2 3 182 183').ids) == ' 2 3 182
    ' - tokenizer.save("./models/music-t5-small/tokenizer.json") - - # config = T5Config.from_pretrained("t5-small", vocab_size=tokenizer.get_vocab_size()) - config = T5Config.from_json_file(json_file="/home/cedric/Downloads/config.json") - config.save_pretrained("./models/music-t5-small") - elif model == 'bert': - - tokens = ['[PAD]', '[MASK]', '[UNK]'] + [str(i) for i in range(2, 183)] - tokens_ids = list(range(len(tokens))) - vocab = dict(zip(tokens, tokens_ids)) - - tokenizer = MyBERTTokenizer(vocab=vocab, unk_token=tokens[2]) - assert tokenizer.decode(tokenizer.encode('0 2 3 182 183').ids) == '[UNK] 2 3 182 [UNK]' - tokenizer.save("./models/music-bert/tokenizer.json") - - config = BertConfig(position_embedding_type='relative_key_query') - config.save_pretrained("./models/music-bert") - elif model == 'spanbert': - tokens = ['[PAD]', '[MASK]', '[UNK]'] + [str(i) for i in range(2, 183)] - tokens_ids = list(range(len(tokens))) - vocab = dict(zip(tokens, tokens_ids)) - - tokenizer = MyBERTTokenizer(vocab=vocab, unk_token=tokens[2]) - assert tokenizer.decode(tokenizer.encode('0 2 3 182 183').ids) == '[UNK] 2 3 182 [UNK]' - tokenizer.save("./models/music-spanbert/tokenizer.json") - - config = BertConfig(position_embedding_type='relative_key_query') - config.save_pretrained("./models/music-spanbert") diff --git a/spaces/ceckenrode/Human.Feedback.Dynamic.JSONL.Dataset.Download/README.md b/spaces/ceckenrode/Human.Feedback.Dynamic.JSONL.Dataset.Download/README.md deleted file mode 100644 index 9e226e3b6be9a1cfb25c3d62b8fab00a29b7ebe7..0000000000000000000000000000000000000000 --- a/spaces/ceckenrode/Human.Feedback.Dynamic.JSONL.Dataset.Download/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Human.Feedback.Dynamic.JSONL.Dataset.Download -emoji: 😻 -colorFrom: indigo -colorTo: indigo -sdk: streamlit -sdk_version: 1.17.0 -app_file: app.py -pinned: false -license: mit ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/cfr26575/webui/app.py b/spaces/cfr26575/webui/app.py deleted file mode 100644 index 1cd83154fe013ef1426ea1951f940da6b0db7a92..0000000000000000000000000000000000000000 --- a/spaces/cfr26575/webui/app.py +++ /dev/null @@ -1,72 +0,0 @@ -import os -from subprocess import getoutput - -gpu_info = getoutput('nvidia-smi') -if("A10G" in gpu_info): - os.system(f"pip install -q https://github.com/camenduru/stable-diffusion-webui-colab/releases/download/0.0.15/xformers-0.0.15.dev0+4c06c79.d20221205-cp38-cp38-linux_x86_64.whl") -elif("T4" in gpu_info): - os.system(f"pip install -q https://github.com/camenduru/stable-diffusion-webui-colab/releases/download/0.0.15/xformers-0.0.15.dev0+1515f77.d20221130-cp38-cp38-linux_x86_64.whl") - -os.system(f"git clone https://github.com/camenduru/stable-diffusion-webui /home/user/app/stable-diffusion-webui") -os.chdir("/home/user/app/stable-diffusion-webui") - -os.system(f"wget -q https://github.com/camenduru/webui/raw/main/env_patch.py -O /home/user/app/env_patch.py") -os.system(f"sed -i -e '/import image_from_url_text/r /home/user/app/env_patch.py' /home/user/app/stable-diffusion-webui/modules/ui.py") -os.system(f"sed -i -e '/(modelmerger_interface, \"Checkpoint Merger\", \"modelmerger\"),/d' /home/user/app/stable-diffusion-webui/modules/ui.py") -os.system(f"sed -i -e '/(train_interface, \"Train\", \"ti\"),/d' /home/user/app/stable-diffusion-webui/modules/ui.py") -os.system(f"sed -i -e '/extensions_interface, \"Extensions\", \"extensions\"/d' /home/user/app/stable-diffusion-webui/modules/ui.py") -os.system(f"sed -i -e '/settings_interface, \"Settings\", \"settings\"/d' /home/user/app/stable-diffusion-webui/modules/ui.py") -os.system(f'''sed -i -e "s/document.getElementsByTagName('gradio-app')\[0\].shadowRoot/!!document.getElementsByTagName('gradio-app')[0].shadowRoot ? document.getElementsByTagName('gradio-app')[0].shadowRoot : document/g" /home/user/app/stable-diffusion-webui/script.js''') -os.system(f"sed -i -e 's/ show_progress=False,/ show_progress=True,/g' /home/user/app/stable-diffusion-webui/modules/ui.py") -os.system(f"sed -i -e 's/shared.demo.launch/shared.demo.queue().launch/g' /home/user/app/stable-diffusion-webui/webui.py") -os.system(f"sed -i -e 's/ outputs=\[/queue=False, &/g' /home/user/app/stable-diffusion-webui/modules/ui.py") -os.system(f"sed -i -e 's/ queue=False, / /g' /home/user/app/stable-diffusion-webui/modules/ui.py") - -# ----------------------------Please duplicate this space and delete this block if you don't want to see the extra header---------------------------- -os.system(f"wget -q https://github.com/camenduru/webui/raw/main/header_patch.py -O /home/user/app/header_patch.py") -os.system(f"sed -i -e '/demo:/r /home/user/app/header_patch.py' /home/user/app/stable-diffusion-webui/modules/ui.py") -# --------------------------------------------------------------------------------------------------------------------------------------------------- - -if "IS_SHARED_UI" in os.environ: - os.system(f"rm -rfv /home/user/app/stable-diffusion-webui/scripts/") - - os.system(f"wget -q https://github.com/camenduru/webui/raw/main/shared-config.json -O /home/user/app/shared-config.json") - os.system(f"wget -q https://github.com/camenduru/webui/raw/main/shared-ui-config.json -O /home/user/app/shared-ui-config.json") - - os.system(f"wget -q {os.getenv('MODEL_LINK')} -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/{os.getenv('MODEL_NAME')}") - os.system(f"wget -q {os.getenv('VAE_LINK')} -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/{os.getenv('VAE_NAME')}") - os.system(f"wget -q {os.getenv('YAML_LINK')} -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/{os.getenv('YAML_NAME')}") - - os.system(f"python launch.py --force-enable-xformers --disable-console-progressbars --enable-console-prompts --ui-config-file /home/user/app/shared-ui-config.json --ui-settings-file /home/user/app/shared-config.json --cors-allow-origins huggingface.co,hf.space --no-progressbar-hiding") -else: - # Please duplicate this space and delete # character in front of the custom script you want to use or add here more custom scripts with same structure os.system(f"wget -q https://CUSTOM_SCRIPT_URL -O /home/user/app/stable-diffusion-webui/scripts/CUSTOM_SCRIPT_NAME.py") - os.system(f"wget -q https://gist.github.com/camenduru/9ec5f8141db9902e375967e93250860f/raw/d0bcf01786f20107c329c03f8968584ee67be12a/run_n_times.py -O /home/user/app/stable-diffusion-webui/scripts/run_n_times.py") - - # Please duplicate this space and delete # character in front of the extension you want to use or add here more extensions with same structure os.system(f"git clone https://EXTENSION_GIT_URL /home/user/app/stable-diffusion-webui/extensions/EXTENSION_NAME") - #os.system(f"git clone https://github.com/camenduru/stable-diffusion-webui-artists-to-study /home/user/app/stable-diffusion-webui/extensions/stable-diffusion-webui-artists-to-study") - os.system(f"git clone https://github.com/yfszzx/stable-diffusion-webui-images-browser /home/user/app/stable-diffusion-webui/extensions/stable-diffusion-webui-images-browser") - os.system(f"git clone https://github.com/deforum-art/deforum-for-automatic1111-webui /home/user/app/stable-diffusion-webui/extensions/deforum-for-automatic1111-webui") - - # Please duplicate this space and delete # character in front of the model you want to use or add here more ckpts with same structure os.system(f"wget -q https://CKPT_URL -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/CKPT_NAME.ckpt") - #os.system(f"wget -q https://huggingface.co/nitrosocke/Arcane-Diffusion/resolve/main/arcane-diffusion-v3.ckpt -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/arcane-diffusion-v3.ckpt") - #os.system(f"wget -q https://huggingface.co/DGSpitzer/Cyberpunk-Anime-Diffusion/resolve/main/Cyberpunk-Anime-Diffusion.ckpt -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/Cyberpunk-Anime-Diffusion.ckpt") - #os.system(f"wget -q https://huggingface.co/prompthero/midjourney-v4-diffusion/resolve/main/mdjrny-v4.ckpt -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/mdjrny-v4.ckpt") - #os.system(f"wget -q https://huggingface.co/nitrosocke/mo-di-diffusion/resolve/main/moDi-v1-pruned.ckpt -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/moDi-v1-pruned.ckpt") - #os.system(f"wget -q https://huggingface.co/Fictiverse/Stable_Diffusion_PaperCut_Model/resolve/main/PaperCut_v1.ckpt -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/PaperCut_v1.ckpt") - #os.system(f"wget -q https://huggingface.co/lilpotat/sa/resolve/main/samdoesarts_style.ckpt -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/samdoesarts_style.ckpt") - #os.system(f"wget -q https://huggingface.co/hakurei/waifu-diffusion-v1-3/resolve/main/wd-v1-3-float32.ckpt -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/wd-v1-3-float32.ckpt") - #os.system(f"wget -q https://huggingface.co/CompVis/stable-diffusion-v-1-4-original/resolve/main/sd-v1-4.ckpt -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/sd-v1-4.ckpt") - #os.system(f"wget -q https://huggingface.co/runwayml/stable-diffusion-v1-5/resolve/main/v1-5-pruned-emaonly.ckpt -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/v1-5-pruned-emaonly.ckpt") - #os.system(f"wget -q https://huggingface.co/runwayml/stable-diffusion-inpainting/resolve/main/sd-v1-5-inpainting.ckpt -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/sd-v1-5-inpainting.ckpt") - - #os.system(f"wget -q https://huggingface.co/Linaqruf/anything-v3.0/resolve/main/Anything-V3.0-pruned.ckpt -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/Anything-V3.0-pruned.ckpt") - #os.system(f"wget -q https://huggingface.co/Linaqruf/anything-v3.0/resolve/main/Anything-V3.0.vae.pt -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/Anything-V3.0-pruned.vae.pt") - - #os.system(f"wget -q https://huggingface.co/stabilityai/stable-diffusion-2/resolve/main/768-v-ema.ckpt -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/768-v-ema.ckpt") - #os.system(f"wget -q https://raw.githubusercontent.com/Stability-AI/stablediffusion/main/configs/stable-diffusion/v2-inference-v.yaml -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/768-v-ema.yaml") - - os.system(f"wget -q https://huggingface.co/stabilityai/stable-diffusion-2-1/resolve/main/v2-1_768-ema-pruned.ckpt -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/v2-1_768-ema-pruned.ckpt") - os.system(f"wget -q https://raw.githubusercontent.com/Stability-AI/stablediffusion/main/configs/stable-diffusion/v2-inference-v.yaml -O /home/user/app/stable-diffusion-webui/models/Stable-diffusion/v2-1_768-ema-pruned.yaml") - - os.system(f"python launch.py --force-enable-xformers --ui-config-file /home/user/app/ui-config.json --ui-settings-file /home/user/app/config.json --disable-console-progressbars --enable-console-prompts --cors-allow-origins huggingface.co,hf.space --no-progressbar-hiding --api --skip-torch-cuda-test") - \ No newline at end of file diff --git a/spaces/chilge/Fushimi/vdecoder/__init__.py b/spaces/chilge/Fushimi/vdecoder/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/dateutil/utils.py b/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/dateutil/utils.py deleted file mode 100644 index dd2d245a0bebcd5fc37ac20526aabbd5358dab0e..0000000000000000000000000000000000000000 --- a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/dateutil/utils.py +++ /dev/null @@ -1,71 +0,0 @@ -# -*- coding: utf-8 -*- -""" -This module offers general convenience and utility functions for dealing with -datetimes. - -.. versionadded:: 2.7.0 -""" -from __future__ import unicode_literals - -from datetime import datetime, time - - -def today(tzinfo=None): - """ - Returns a :py:class:`datetime` representing the current day at midnight - - :param tzinfo: - The time zone to attach (also used to determine the current day). - - :return: - A :py:class:`datetime.datetime` object representing the current day - at midnight. - """ - - dt = datetime.now(tzinfo) - return datetime.combine(dt.date(), time(0, tzinfo=tzinfo)) - - -def default_tzinfo(dt, tzinfo): - """ - Sets the ``tzinfo`` parameter on naive datetimes only - - This is useful for example when you are provided a datetime that may have - either an implicit or explicit time zone, such as when parsing a time zone - string. - - .. doctest:: - - >>> from dateutil.tz import tzoffset - >>> from dateutil.parser import parse - >>> from dateutil.utils import default_tzinfo - >>> dflt_tz = tzoffset("EST", -18000) - >>> print(default_tzinfo(parse('2014-01-01 12:30 UTC'), dflt_tz)) - 2014-01-01 12:30:00+00:00 - >>> print(default_tzinfo(parse('2014-01-01 12:30'), dflt_tz)) - 2014-01-01 12:30:00-05:00 - - :param dt: - The datetime on which to replace the time zone - - :param tzinfo: - The :py:class:`datetime.tzinfo` subclass instance to assign to - ``dt`` if (and only if) it is naive. - - :return: - Returns an aware :py:class:`datetime.datetime`. - """ - if dt.tzinfo is not None: - return dt - else: - return dt.replace(tzinfo=tzinfo) - - -def within_delta(dt1, dt2, delta): - """ - Useful for comparing two datetimes that may have a negligible difference - to be considered equal. - """ - delta = abs(delta) - difference = dt1 - dt2 - return -delta <= difference <= delta diff --git a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/google/protobuf/internal/decoder.py b/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/google/protobuf/internal/decoder.py deleted file mode 100644 index 8ff549381e7424f42cd227b0366158a0e2b56c04..0000000000000000000000000000000000000000 --- a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/google/protobuf/internal/decoder.py +++ /dev/null @@ -1,1067 +0,0 @@ -# Protocol Buffers - Google's data interchange format -# Copyright 2008 Google Inc. All rights reserved. -# https://developers.google.com/protocol-buffers/ -# -# Redistribution and use in source and binary forms, with or without -# modification, are permitted provided that the following conditions are -# met: -# -# * Redistributions of source code must retain the above copyright -# notice, this list of conditions and the following disclaimer. -# * Redistributions in binary form must reproduce the above -# copyright notice, this list of conditions and the following disclaimer -# in the documentation and/or other materials provided with the -# distribution. -# * Neither the name of Google Inc. nor the names of its -# contributors may be used to endorse or promote products derived from -# this software without specific prior written permission. -# -# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS -# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT -# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR -# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT -# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, -# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT -# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, -# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY -# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT -# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE -# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. - -"""Code for decoding protocol buffer primitives. - -This code is very similar to encoder.py -- read the docs for that module first. - -A "decoder" is a function with the signature: - Decode(buffer, pos, end, message, field_dict) -The arguments are: - buffer: The string containing the encoded message. - pos: The current position in the string. - end: The position in the string where the current message ends. May be - less than len(buffer) if we're reading a sub-message. - message: The message object into which we're parsing. - field_dict: message._fields (avoids a hashtable lookup). -The decoder reads the field and stores it into field_dict, returning the new -buffer position. A decoder for a repeated field may proactively decode all of -the elements of that field, if they appear consecutively. - -Note that decoders may throw any of the following: - IndexError: Indicates a truncated message. - struct.error: Unpacking of a fixed-width field failed. - message.DecodeError: Other errors. - -Decoders are expected to raise an exception if they are called with pos > end. -This allows callers to be lax about bounds checking: it's fineto read past -"end" as long as you are sure that someone else will notice and throw an -exception later on. - -Something up the call stack is expected to catch IndexError and struct.error -and convert them to message.DecodeError. - -Decoders are constructed using decoder constructors with the signature: - MakeDecoder(field_number, is_repeated, is_packed, key, new_default) -The arguments are: - field_number: The field number of the field we want to decode. - is_repeated: Is the field a repeated field? (bool) - is_packed: Is the field a packed field? (bool) - key: The key to use when looking up the field within field_dict. - (This is actually the FieldDescriptor but nothing in this - file should depend on that.) - new_default: A function which takes a message object as a parameter and - returns a new instance of the default value for this field. - (This is called for repeated fields and sub-messages, when an - instance does not already exist.) - -As with encoders, we define a decoder constructor for every type of field. -Then, for every field of every message class we construct an actual decoder. -That decoder goes into a dict indexed by tag, so when we decode a message -we repeatedly read a tag, look up the corresponding decoder, and invoke it. -""" - -__author__ = 'kenton@google.com (Kenton Varda)' - -import math -import struct - -from google.protobuf.internal import containers -from google.protobuf.internal import encoder -from google.protobuf.internal import wire_format -from google.protobuf import message - - -# This is not for optimization, but rather to avoid conflicts with local -# variables named "message". -_DecodeError = message.DecodeError - - -def _VarintDecoder(mask, result_type): - """Return an encoder for a basic varint value (does not include tag). - - Decoded values will be bitwise-anded with the given mask before being - returned, e.g. to limit them to 32 bits. The returned decoder does not - take the usual "end" parameter -- the caller is expected to do bounds checking - after the fact (often the caller can defer such checking until later). The - decoder returns a (value, new_pos) pair. - """ - - def DecodeVarint(buffer, pos): - result = 0 - shift = 0 - while 1: - b = buffer[pos] - result |= ((b & 0x7f) << shift) - pos += 1 - if not (b & 0x80): - result &= mask - result = result_type(result) - return (result, pos) - shift += 7 - if shift >= 64: - raise _DecodeError('Too many bytes when decoding varint.') - return DecodeVarint - - -def _SignedVarintDecoder(bits, result_type): - """Like _VarintDecoder() but decodes signed values.""" - - signbit = 1 << (bits - 1) - mask = (1 << bits) - 1 - - def DecodeVarint(buffer, pos): - result = 0 - shift = 0 - while 1: - b = buffer[pos] - result |= ((b & 0x7f) << shift) - pos += 1 - if not (b & 0x80): - result &= mask - result = (result ^ signbit) - signbit - result = result_type(result) - return (result, pos) - shift += 7 - if shift >= 64: - raise _DecodeError('Too many bytes when decoding varint.') - return DecodeVarint - -# All 32-bit and 64-bit values are represented as int. -_DecodeVarint = _VarintDecoder((1 << 64) - 1, int) -_DecodeSignedVarint = _SignedVarintDecoder(64, int) - -# Use these versions for values which must be limited to 32 bits. -_DecodeVarint32 = _VarintDecoder((1 << 32) - 1, int) -_DecodeSignedVarint32 = _SignedVarintDecoder(32, int) - - -def ReadTag(buffer, pos): - """Read a tag from the memoryview, and return a (tag_bytes, new_pos) tuple. - - We return the raw bytes of the tag rather than decoding them. The raw - bytes can then be used to look up the proper decoder. This effectively allows - us to trade some work that would be done in pure-python (decoding a varint) - for work that is done in C (searching for a byte string in a hash table). - In a low-level language it would be much cheaper to decode the varint and - use that, but not in Python. - - Args: - buffer: memoryview object of the encoded bytes - pos: int of the current position to start from - - Returns: - Tuple[bytes, int] of the tag data and new position. - """ - start = pos - while buffer[pos] & 0x80: - pos += 1 - pos += 1 - - tag_bytes = buffer[start:pos].tobytes() - return tag_bytes, pos - - -# -------------------------------------------------------------------- - - -def _SimpleDecoder(wire_type, decode_value): - """Return a constructor for a decoder for fields of a particular type. - - Args: - wire_type: The field's wire type. - decode_value: A function which decodes an individual value, e.g. - _DecodeVarint() - """ - - def SpecificDecoder(field_number, is_repeated, is_packed, key, new_default, - clear_if_default=False): - if is_packed: - local_DecodeVarint = _DecodeVarint - def DecodePackedField(buffer, pos, end, message, field_dict): - value = field_dict.get(key) - if value is None: - value = field_dict.setdefault(key, new_default(message)) - (endpoint, pos) = local_DecodeVarint(buffer, pos) - endpoint += pos - if endpoint > end: - raise _DecodeError('Truncated message.') - while pos < endpoint: - (element, pos) = decode_value(buffer, pos) - value.append(element) - if pos > endpoint: - del value[-1] # Discard corrupt value. - raise _DecodeError('Packed element was truncated.') - return pos - return DecodePackedField - elif is_repeated: - tag_bytes = encoder.TagBytes(field_number, wire_type) - tag_len = len(tag_bytes) - def DecodeRepeatedField(buffer, pos, end, message, field_dict): - value = field_dict.get(key) - if value is None: - value = field_dict.setdefault(key, new_default(message)) - while 1: - (element, new_pos) = decode_value(buffer, pos) - value.append(element) - # Predict that the next tag is another copy of the same repeated - # field. - pos = new_pos + tag_len - if buffer[new_pos:pos] != tag_bytes or new_pos >= end: - # Prediction failed. Return. - if new_pos > end: - raise _DecodeError('Truncated message.') - return new_pos - return DecodeRepeatedField - else: - def DecodeField(buffer, pos, end, message, field_dict): - (new_value, pos) = decode_value(buffer, pos) - if pos > end: - raise _DecodeError('Truncated message.') - if clear_if_default and not new_value: - field_dict.pop(key, None) - else: - field_dict[key] = new_value - return pos - return DecodeField - - return SpecificDecoder - - -def _ModifiedDecoder(wire_type, decode_value, modify_value): - """Like SimpleDecoder but additionally invokes modify_value on every value - before storing it. Usually modify_value is ZigZagDecode. - """ - - # Reusing _SimpleDecoder is slightly slower than copying a bunch of code, but - # not enough to make a significant difference. - - def InnerDecode(buffer, pos): - (result, new_pos) = decode_value(buffer, pos) - return (modify_value(result), new_pos) - return _SimpleDecoder(wire_type, InnerDecode) - - -def _StructPackDecoder(wire_type, format): - """Return a constructor for a decoder for a fixed-width field. - - Args: - wire_type: The field's wire type. - format: The format string to pass to struct.unpack(). - """ - - value_size = struct.calcsize(format) - local_unpack = struct.unpack - - # Reusing _SimpleDecoder is slightly slower than copying a bunch of code, but - # not enough to make a significant difference. - - # Note that we expect someone up-stack to catch struct.error and convert - # it to _DecodeError -- this way we don't have to set up exception- - # handling blocks every time we parse one value. - - def InnerDecode(buffer, pos): - new_pos = pos + value_size - result = local_unpack(format, buffer[pos:new_pos])[0] - return (result, new_pos) - return _SimpleDecoder(wire_type, InnerDecode) - - -def _FloatDecoder(): - """Returns a decoder for a float field. - - This code works around a bug in struct.unpack for non-finite 32-bit - floating-point values. - """ - - local_unpack = struct.unpack - - def InnerDecode(buffer, pos): - """Decode serialized float to a float and new position. - - Args: - buffer: memoryview of the serialized bytes - pos: int, position in the memory view to start at. - - Returns: - Tuple[float, int] of the deserialized float value and new position - in the serialized data. - """ - # We expect a 32-bit value in little-endian byte order. Bit 1 is the sign - # bit, bits 2-9 represent the exponent, and bits 10-32 are the significand. - new_pos = pos + 4 - float_bytes = buffer[pos:new_pos].tobytes() - - # If this value has all its exponent bits set, then it's non-finite. - # In Python 2.4, struct.unpack will convert it to a finite 64-bit value. - # To avoid that, we parse it specially. - if (float_bytes[3:4] in b'\x7F\xFF' and float_bytes[2:3] >= b'\x80'): - # If at least one significand bit is set... - if float_bytes[0:3] != b'\x00\x00\x80': - return (math.nan, new_pos) - # If sign bit is set... - if float_bytes[3:4] == b'\xFF': - return (-math.inf, new_pos) - return (math.inf, new_pos) - - # Note that we expect someone up-stack to catch struct.error and convert - # it to _DecodeError -- this way we don't have to set up exception- - # handling blocks every time we parse one value. - result = local_unpack('= b'\xF0') - and (double_bytes[0:7] != b'\x00\x00\x00\x00\x00\x00\xF0')): - return (math.nan, new_pos) - - # Note that we expect someone up-stack to catch struct.error and convert - # it to _DecodeError -- this way we don't have to set up exception- - # handling blocks every time we parse one value. - result = local_unpack(' end: - raise _DecodeError('Truncated message.') - while pos < endpoint: - value_start_pos = pos - (element, pos) = _DecodeSignedVarint32(buffer, pos) - # pylint: disable=protected-access - if element in enum_type.values_by_number: - value.append(element) - else: - if not message._unknown_fields: - message._unknown_fields = [] - tag_bytes = encoder.TagBytes(field_number, - wire_format.WIRETYPE_VARINT) - - message._unknown_fields.append( - (tag_bytes, buffer[value_start_pos:pos].tobytes())) - if message._unknown_field_set is None: - message._unknown_field_set = containers.UnknownFieldSet() - message._unknown_field_set._add( - field_number, wire_format.WIRETYPE_VARINT, element) - # pylint: enable=protected-access - if pos > endpoint: - if element in enum_type.values_by_number: - del value[-1] # Discard corrupt value. - else: - del message._unknown_fields[-1] - # pylint: disable=protected-access - del message._unknown_field_set._values[-1] - # pylint: enable=protected-access - raise _DecodeError('Packed element was truncated.') - return pos - return DecodePackedField - elif is_repeated: - tag_bytes = encoder.TagBytes(field_number, wire_format.WIRETYPE_VARINT) - tag_len = len(tag_bytes) - def DecodeRepeatedField(buffer, pos, end, message, field_dict): - """Decode serialized repeated enum to its value and a new position. - - Args: - buffer: memoryview of the serialized bytes. - pos: int, position in the memory view to start at. - end: int, end position of serialized data - message: Message object to store unknown fields in - field_dict: Map[Descriptor, Any] to store decoded values in. - - Returns: - int, new position in serialized data. - """ - value = field_dict.get(key) - if value is None: - value = field_dict.setdefault(key, new_default(message)) - while 1: - (element, new_pos) = _DecodeSignedVarint32(buffer, pos) - # pylint: disable=protected-access - if element in enum_type.values_by_number: - value.append(element) - else: - if not message._unknown_fields: - message._unknown_fields = [] - message._unknown_fields.append( - (tag_bytes, buffer[pos:new_pos].tobytes())) - if message._unknown_field_set is None: - message._unknown_field_set = containers.UnknownFieldSet() - message._unknown_field_set._add( - field_number, wire_format.WIRETYPE_VARINT, element) - # pylint: enable=protected-access - # Predict that the next tag is another copy of the same repeated - # field. - pos = new_pos + tag_len - if buffer[new_pos:pos] != tag_bytes or new_pos >= end: - # Prediction failed. Return. - if new_pos > end: - raise _DecodeError('Truncated message.') - return new_pos - return DecodeRepeatedField - else: - def DecodeField(buffer, pos, end, message, field_dict): - """Decode serialized repeated enum to its value and a new position. - - Args: - buffer: memoryview of the serialized bytes. - pos: int, position in the memory view to start at. - end: int, end position of serialized data - message: Message object to store unknown fields in - field_dict: Map[Descriptor, Any] to store decoded values in. - - Returns: - int, new position in serialized data. - """ - value_start_pos = pos - (enum_value, pos) = _DecodeSignedVarint32(buffer, pos) - if pos > end: - raise _DecodeError('Truncated message.') - if clear_if_default and not enum_value: - field_dict.pop(key, None) - return pos - # pylint: disable=protected-access - if enum_value in enum_type.values_by_number: - field_dict[key] = enum_value - else: - if not message._unknown_fields: - message._unknown_fields = [] - tag_bytes = encoder.TagBytes(field_number, - wire_format.WIRETYPE_VARINT) - message._unknown_fields.append( - (tag_bytes, buffer[value_start_pos:pos].tobytes())) - if message._unknown_field_set is None: - message._unknown_field_set = containers.UnknownFieldSet() - message._unknown_field_set._add( - field_number, wire_format.WIRETYPE_VARINT, enum_value) - # pylint: enable=protected-access - return pos - return DecodeField - - -# -------------------------------------------------------------------- - - -Int32Decoder = _SimpleDecoder( - wire_format.WIRETYPE_VARINT, _DecodeSignedVarint32) - -Int64Decoder = _SimpleDecoder( - wire_format.WIRETYPE_VARINT, _DecodeSignedVarint) - -UInt32Decoder = _SimpleDecoder(wire_format.WIRETYPE_VARINT, _DecodeVarint32) -UInt64Decoder = _SimpleDecoder(wire_format.WIRETYPE_VARINT, _DecodeVarint) - -SInt32Decoder = _ModifiedDecoder( - wire_format.WIRETYPE_VARINT, _DecodeVarint32, wire_format.ZigZagDecode) -SInt64Decoder = _ModifiedDecoder( - wire_format.WIRETYPE_VARINT, _DecodeVarint, wire_format.ZigZagDecode) - -# Note that Python conveniently guarantees that when using the '<' prefix on -# formats, they will also have the same size across all platforms (as opposed -# to without the prefix, where their sizes depend on the C compiler's basic -# type sizes). -Fixed32Decoder = _StructPackDecoder(wire_format.WIRETYPE_FIXED32, ' end: - raise _DecodeError('Truncated string.') - value.append(_ConvertToUnicode(buffer[pos:new_pos])) - # Predict that the next tag is another copy of the same repeated field. - pos = new_pos + tag_len - if buffer[new_pos:pos] != tag_bytes or new_pos == end: - # Prediction failed. Return. - return new_pos - return DecodeRepeatedField - else: - def DecodeField(buffer, pos, end, message, field_dict): - (size, pos) = local_DecodeVarint(buffer, pos) - new_pos = pos + size - if new_pos > end: - raise _DecodeError('Truncated string.') - if clear_if_default and not size: - field_dict.pop(key, None) - else: - field_dict[key] = _ConvertToUnicode(buffer[pos:new_pos]) - return new_pos - return DecodeField - - -def BytesDecoder(field_number, is_repeated, is_packed, key, new_default, - clear_if_default=False): - """Returns a decoder for a bytes field.""" - - local_DecodeVarint = _DecodeVarint - - assert not is_packed - if is_repeated: - tag_bytes = encoder.TagBytes(field_number, - wire_format.WIRETYPE_LENGTH_DELIMITED) - tag_len = len(tag_bytes) - def DecodeRepeatedField(buffer, pos, end, message, field_dict): - value = field_dict.get(key) - if value is None: - value = field_dict.setdefault(key, new_default(message)) - while 1: - (size, pos) = local_DecodeVarint(buffer, pos) - new_pos = pos + size - if new_pos > end: - raise _DecodeError('Truncated string.') - value.append(buffer[pos:new_pos].tobytes()) - # Predict that the next tag is another copy of the same repeated field. - pos = new_pos + tag_len - if buffer[new_pos:pos] != tag_bytes or new_pos == end: - # Prediction failed. Return. - return new_pos - return DecodeRepeatedField - else: - def DecodeField(buffer, pos, end, message, field_dict): - (size, pos) = local_DecodeVarint(buffer, pos) - new_pos = pos + size - if new_pos > end: - raise _DecodeError('Truncated string.') - if clear_if_default and not size: - field_dict.pop(key, None) - else: - field_dict[key] = buffer[pos:new_pos].tobytes() - return new_pos - return DecodeField - - -def GroupDecoder(field_number, is_repeated, is_packed, key, new_default): - """Returns a decoder for a group field.""" - - end_tag_bytes = encoder.TagBytes(field_number, - wire_format.WIRETYPE_END_GROUP) - end_tag_len = len(end_tag_bytes) - - assert not is_packed - if is_repeated: - tag_bytes = encoder.TagBytes(field_number, - wire_format.WIRETYPE_START_GROUP) - tag_len = len(tag_bytes) - def DecodeRepeatedField(buffer, pos, end, message, field_dict): - value = field_dict.get(key) - if value is None: - value = field_dict.setdefault(key, new_default(message)) - while 1: - value = field_dict.get(key) - if value is None: - value = field_dict.setdefault(key, new_default(message)) - # Read sub-message. - pos = value.add()._InternalParse(buffer, pos, end) - # Read end tag. - new_pos = pos+end_tag_len - if buffer[pos:new_pos] != end_tag_bytes or new_pos > end: - raise _DecodeError('Missing group end tag.') - # Predict that the next tag is another copy of the same repeated field. - pos = new_pos + tag_len - if buffer[new_pos:pos] != tag_bytes or new_pos == end: - # Prediction failed. Return. - return new_pos - return DecodeRepeatedField - else: - def DecodeField(buffer, pos, end, message, field_dict): - value = field_dict.get(key) - if value is None: - value = field_dict.setdefault(key, new_default(message)) - # Read sub-message. - pos = value._InternalParse(buffer, pos, end) - # Read end tag. - new_pos = pos+end_tag_len - if buffer[pos:new_pos] != end_tag_bytes or new_pos > end: - raise _DecodeError('Missing group end tag.') - return new_pos - return DecodeField - - -def MessageDecoder(field_number, is_repeated, is_packed, key, new_default): - """Returns a decoder for a message field.""" - - local_DecodeVarint = _DecodeVarint - - assert not is_packed - if is_repeated: - tag_bytes = encoder.TagBytes(field_number, - wire_format.WIRETYPE_LENGTH_DELIMITED) - tag_len = len(tag_bytes) - def DecodeRepeatedField(buffer, pos, end, message, field_dict): - value = field_dict.get(key) - if value is None: - value = field_dict.setdefault(key, new_default(message)) - while 1: - # Read length. - (size, pos) = local_DecodeVarint(buffer, pos) - new_pos = pos + size - if new_pos > end: - raise _DecodeError('Truncated message.') - # Read sub-message. - if value.add()._InternalParse(buffer, pos, new_pos) != new_pos: - # The only reason _InternalParse would return early is if it - # encountered an end-group tag. - raise _DecodeError('Unexpected end-group tag.') - # Predict that the next tag is another copy of the same repeated field. - pos = new_pos + tag_len - if buffer[new_pos:pos] != tag_bytes or new_pos == end: - # Prediction failed. Return. - return new_pos - return DecodeRepeatedField - else: - def DecodeField(buffer, pos, end, message, field_dict): - value = field_dict.get(key) - if value is None: - value = field_dict.setdefault(key, new_default(message)) - # Read length. - (size, pos) = local_DecodeVarint(buffer, pos) - new_pos = pos + size - if new_pos > end: - raise _DecodeError('Truncated message.') - # Read sub-message. - if value._InternalParse(buffer, pos, new_pos) != new_pos: - # The only reason _InternalParse would return early is if it encountered - # an end-group tag. - raise _DecodeError('Unexpected end-group tag.') - return new_pos - return DecodeField - - -# -------------------------------------------------------------------- - -MESSAGE_SET_ITEM_TAG = encoder.TagBytes(1, wire_format.WIRETYPE_START_GROUP) - -def MessageSetItemDecoder(descriptor): - """Returns a decoder for a MessageSet item. - - The parameter is the message Descriptor. - - The message set message looks like this: - message MessageSet { - repeated group Item = 1 { - required int32 type_id = 2; - required string message = 3; - } - } - """ - - type_id_tag_bytes = encoder.TagBytes(2, wire_format.WIRETYPE_VARINT) - message_tag_bytes = encoder.TagBytes(3, wire_format.WIRETYPE_LENGTH_DELIMITED) - item_end_tag_bytes = encoder.TagBytes(1, wire_format.WIRETYPE_END_GROUP) - - local_ReadTag = ReadTag - local_DecodeVarint = _DecodeVarint - local_SkipField = SkipField - - def DecodeItem(buffer, pos, end, message, field_dict): - """Decode serialized message set to its value and new position. - - Args: - buffer: memoryview of the serialized bytes. - pos: int, position in the memory view to start at. - end: int, end position of serialized data - message: Message object to store unknown fields in - field_dict: Map[Descriptor, Any] to store decoded values in. - - Returns: - int, new position in serialized data. - """ - message_set_item_start = pos - type_id = -1 - message_start = -1 - message_end = -1 - - # Technically, type_id and message can appear in any order, so we need - # a little loop here. - while 1: - (tag_bytes, pos) = local_ReadTag(buffer, pos) - if tag_bytes == type_id_tag_bytes: - (type_id, pos) = local_DecodeVarint(buffer, pos) - elif tag_bytes == message_tag_bytes: - (size, message_start) = local_DecodeVarint(buffer, pos) - pos = message_end = message_start + size - elif tag_bytes == item_end_tag_bytes: - break - else: - pos = SkipField(buffer, pos, end, tag_bytes) - if pos == -1: - raise _DecodeError('Missing group end tag.') - - if pos > end: - raise _DecodeError('Truncated message.') - - if type_id == -1: - raise _DecodeError('MessageSet item missing type_id.') - if message_start == -1: - raise _DecodeError('MessageSet item missing message.') - - extension = message.Extensions._FindExtensionByNumber(type_id) - # pylint: disable=protected-access - if extension is not None: - value = field_dict.get(extension) - if value is None: - message_type = extension.message_type - if not hasattr(message_type, '_concrete_class'): - message_factory.GetMessageClass(message_type) - value = field_dict.setdefault( - extension, message_type._concrete_class()) - if value._InternalParse(buffer, message_start,message_end) != message_end: - # The only reason _InternalParse would return early is if it encountered - # an end-group tag. - raise _DecodeError('Unexpected end-group tag.') - else: - if not message._unknown_fields: - message._unknown_fields = [] - message._unknown_fields.append( - (MESSAGE_SET_ITEM_TAG, buffer[message_set_item_start:pos].tobytes())) - if message._unknown_field_set is None: - message._unknown_field_set = containers.UnknownFieldSet() - message._unknown_field_set._add( - type_id, - wire_format.WIRETYPE_LENGTH_DELIMITED, - buffer[message_start:message_end].tobytes()) - # pylint: enable=protected-access - - return pos - - return DecodeItem - - -def UnknownMessageSetItemDecoder(): - """Returns a decoder for a Unknown MessageSet item.""" - - type_id_tag_bytes = encoder.TagBytes(2, wire_format.WIRETYPE_VARINT) - message_tag_bytes = encoder.TagBytes(3, wire_format.WIRETYPE_LENGTH_DELIMITED) - item_end_tag_bytes = encoder.TagBytes(1, wire_format.WIRETYPE_END_GROUP) - - def DecodeUnknownItem(buffer): - pos = 0 - end = len(buffer) - message_start = -1 - message_end = -1 - while 1: - (tag_bytes, pos) = ReadTag(buffer, pos) - if tag_bytes == type_id_tag_bytes: - (type_id, pos) = _DecodeVarint(buffer, pos) - elif tag_bytes == message_tag_bytes: - (size, message_start) = _DecodeVarint(buffer, pos) - pos = message_end = message_start + size - elif tag_bytes == item_end_tag_bytes: - break - else: - pos = SkipField(buffer, pos, end, tag_bytes) - if pos == -1: - raise _DecodeError('Missing group end tag.') - - if pos > end: - raise _DecodeError('Truncated message.') - - if type_id == -1: - raise _DecodeError('MessageSet item missing type_id.') - if message_start == -1: - raise _DecodeError('MessageSet item missing message.') - - return (type_id, buffer[message_start:message_end].tobytes()) - - return DecodeUnknownItem - -# -------------------------------------------------------------------- - -def MapDecoder(field_descriptor, new_default, is_message_map): - """Returns a decoder for a map field.""" - - key = field_descriptor - tag_bytes = encoder.TagBytes(field_descriptor.number, - wire_format.WIRETYPE_LENGTH_DELIMITED) - tag_len = len(tag_bytes) - local_DecodeVarint = _DecodeVarint - # Can't read _concrete_class yet; might not be initialized. - message_type = field_descriptor.message_type - - def DecodeMap(buffer, pos, end, message, field_dict): - submsg = message_type._concrete_class() - value = field_dict.get(key) - if value is None: - value = field_dict.setdefault(key, new_default(message)) - while 1: - # Read length. - (size, pos) = local_DecodeVarint(buffer, pos) - new_pos = pos + size - if new_pos > end: - raise _DecodeError('Truncated message.') - # Read sub-message. - submsg.Clear() - if submsg._InternalParse(buffer, pos, new_pos) != new_pos: - # The only reason _InternalParse would return early is if it - # encountered an end-group tag. - raise _DecodeError('Unexpected end-group tag.') - - if is_message_map: - value[submsg.key].CopyFrom(submsg.value) - else: - value[submsg.key] = submsg.value - - # Predict that the next tag is another copy of the same repeated field. - pos = new_pos + tag_len - if buffer[new_pos:pos] != tag_bytes or new_pos == end: - # Prediction failed. Return. - return new_pos - - return DecodeMap - -# -------------------------------------------------------------------- -# Optimization is not as heavy here because calls to SkipField() are rare, -# except for handling end-group tags. - -def _SkipVarint(buffer, pos, end): - """Skip a varint value. Returns the new position.""" - # Previously ord(buffer[pos]) raised IndexError when pos is out of range. - # With this code, ord(b'') raises TypeError. Both are handled in - # python_message.py to generate a 'Truncated message' error. - while ord(buffer[pos:pos+1].tobytes()) & 0x80: - pos += 1 - pos += 1 - if pos > end: - raise _DecodeError('Truncated message.') - return pos - -def _SkipFixed64(buffer, pos, end): - """Skip a fixed64 value. Returns the new position.""" - - pos += 8 - if pos > end: - raise _DecodeError('Truncated message.') - return pos - - -def _DecodeFixed64(buffer, pos): - """Decode a fixed64.""" - new_pos = pos + 8 - return (struct.unpack(' end: - raise _DecodeError('Truncated message.') - return pos - - -def _SkipGroup(buffer, pos, end): - """Skip sub-group. Returns the new position.""" - - while 1: - (tag_bytes, pos) = ReadTag(buffer, pos) - new_pos = SkipField(buffer, pos, end, tag_bytes) - if new_pos == -1: - return pos - pos = new_pos - - -def _DecodeUnknownFieldSet(buffer, pos, end_pos=None): - """Decode UnknownFieldSet. Returns the UnknownFieldSet and new position.""" - - unknown_field_set = containers.UnknownFieldSet() - while end_pos is None or pos < end_pos: - (tag_bytes, pos) = ReadTag(buffer, pos) - (tag, _) = _DecodeVarint(tag_bytes, 0) - field_number, wire_type = wire_format.UnpackTag(tag) - if wire_type == wire_format.WIRETYPE_END_GROUP: - break - (data, pos) = _DecodeUnknownField(buffer, pos, wire_type) - # pylint: disable=protected-access - unknown_field_set._add(field_number, wire_type, data) - - return (unknown_field_set, pos) - - -def _DecodeUnknownField(buffer, pos, wire_type): - """Decode a unknown field. Returns the UnknownField and new position.""" - - if wire_type == wire_format.WIRETYPE_VARINT: - (data, pos) = _DecodeVarint(buffer, pos) - elif wire_type == wire_format.WIRETYPE_FIXED64: - (data, pos) = _DecodeFixed64(buffer, pos) - elif wire_type == wire_format.WIRETYPE_FIXED32: - (data, pos) = _DecodeFixed32(buffer, pos) - elif wire_type == wire_format.WIRETYPE_LENGTH_DELIMITED: - (size, pos) = _DecodeVarint(buffer, pos) - data = buffer[pos:pos+size].tobytes() - pos += size - elif wire_type == wire_format.WIRETYPE_START_GROUP: - (data, pos) = _DecodeUnknownFieldSet(buffer, pos) - elif wire_type == wire_format.WIRETYPE_END_GROUP: - return (0, -1) - else: - raise _DecodeError('Wrong wire type in tag.') - - return (data, pos) - - -def _EndGroup(buffer, pos, end): - """Skipping an END_GROUP tag returns -1 to tell the parent loop to break.""" - - return -1 - - -def _SkipFixed32(buffer, pos, end): - """Skip a fixed32 value. Returns the new position.""" - - pos += 4 - if pos > end: - raise _DecodeError('Truncated message.') - return pos - - -def _DecodeFixed32(buffer, pos): - """Decode a fixed32.""" - - new_pos = pos + 4 - return (struct.unpack('Action Stealth Treadmill Manual

    DOWNLOAD 🗸🗸🗸 https://tinurli.com/2uwjBS



    - - aaccfb2cb3
    -
    -
    -

    diff --git a/spaces/cihyFjudo/fairness-paper-search/Cubase 4 32 Bit Free Download The Best Way to Start Your Music Production Journey.md b/spaces/cihyFjudo/fairness-paper-search/Cubase 4 32 Bit Free Download The Best Way to Start Your Music Production Journey.md deleted file mode 100644 index a53b6f28133cf4156d3daece6aea812e6d20b0a4..0000000000000000000000000000000000000000 --- a/spaces/cihyFjudo/fairness-paper-search/Cubase 4 32 Bit Free Download The Best Way to Start Your Music Production Journey.md +++ /dev/null @@ -1,16 +0,0 @@ -
    -

    so i now run two versions of cubase 6.5, as i rely on lots of 32-bit plugins for my job. if i need it, i have 64-bit, but cubase 6.5 32-bit runs all my plugins that i cant let go, superbly. for everything else, i have the 64-bit version.

    -

    It is part from audio mixers category and is licensed as shareware for Windows 32-bit and 64-bit platform and can be used as a free trial until the trial period will end. The Cubase demo is available to all software users as a free download with potential restrictions compared with the full version.

    -

    Cubase 4 32 Bit Free Download


    Download ===== https://tinurli.com/2uwi93



    -

    Its pitch correction module is efficient and easy to use, earning Graillon 2 the number one spot on the autotune freebie list. It is compatible with all digital audio workstations on Windows and macOS.

    -

    This free autotune effect is flexible and easy to operate, with adjustable speed, range, scale, and depth. The added stereo widening feature can be helpful in a vocal processing chain, but make sure to double-check your mix in mono when using it.

    -

    Aside from those few drawbacks, MAutoPitch is a brilliant free autotune VST that could quickly become your go-to pitch correction tool. Just like Graillon 2, it is compatible with all VST and AU plugin hosts on PC and Mac.

    -

    GSnap is an old freeware autotune plugin. It was the first free autotune VST on the market. Pitch correction software was still somewhat of a rarity back in the day when GSnap was released.

    -

    Unlike Graillon 2 and MAutoPitch, GSnap will only work on Windows-based systems. It does come with a very well-written manual, though. The instructions are worth reading if you decide to use GSnap as your go-to free autotune effect.

    -

    Although Voloco is available as a VST3 and AU plugin on desktop operating systems, it is primarily used on iOS and Android. The app version of Voloco is easily the best free autotune for mobile devices.

    -

    Onyx-I mixers offer a pre/post EQ switch on each channel that lets you determine whether you are recording with or without EQ and/or inserts.

    Additionally, there are several available modifications that our service center network can perform, including post-fader FireWire sends, fixed post-insert FireWire sends (instead of needing to engage the EQ button), and pre-EQ auxiliary sends. Feel free to visit our Support page to learn more.

    -

    The Onyx 1640i does indeed allow 16 returns from your DAW. These can be routed into each channel instead of the mic/line input, and because most DAWs allow free routing of virtually any signal to any output, you can send whatever you like to each of the 16 channels. Additionally, like the original 1640 with a FireWire card, the first two returns can feed your control room for basic stereo monitoring purposes.

    -

    -

    Direct monitoring gives you the ability to hear channels 1 and 2 right from the preamp. It is the unprocessed signal going into the unit.

    This is perfect for latency-free monitoring of what you are recording.

    aaccfb2cb3
    -
    -
    \ No newline at end of file diff --git a/spaces/cihyFjudo/fairness-paper-search/Dromenvanger Dubbed Italian Movie Free Download Torrent UPDATED.md b/spaces/cihyFjudo/fairness-paper-search/Dromenvanger Dubbed Italian Movie Free Download Torrent UPDATED.md deleted file mode 100644 index d2d47830e5a7073a658cceb4b6debd6938068c12..0000000000000000000000000000000000000000 --- a/spaces/cihyFjudo/fairness-paper-search/Dromenvanger Dubbed Italian Movie Free Download Torrent UPDATED.md +++ /dev/null @@ -1,6 +0,0 @@ -

    Dromenvanger dubbed italian movie free download torrent


    Downloadhttps://tinurli.com/2uwkFQ



    - - aaccfb2cb3
    -
    -
    -

    diff --git a/spaces/cihyFjudo/fairness-paper-search/Structural Design Manual Vol 2 Mcdonnell Douglas Corporation The Essential Reference for Aerospace Engineering.md b/spaces/cihyFjudo/fairness-paper-search/Structural Design Manual Vol 2 Mcdonnell Douglas Corporation The Essential Reference for Aerospace Engineering.md deleted file mode 100644 index 0bc2bdf6f46f15ae0fe553b326b482b589f70573..0000000000000000000000000000000000000000 --- a/spaces/cihyFjudo/fairness-paper-search/Structural Design Manual Vol 2 Mcdonnell Douglas Corporation The Essential Reference for Aerospace Engineering.md +++ /dev/null @@ -1,5 +0,0 @@ - -

    The instrumentation and control layout is essentially conventional, with a right-handed displacement-type control stick and left-handed throttles. An array of fingertip buttons allow for hands-on-throttle-and-stick control of key sensors, weapons and defense aids.[20] Flight systems employ digital fly-by-light controls overseen by a powerful Herriman-Weston 5/480 flight computer.[20] In flight, engine thrust and exhaust nozzle settings are automatically set to their optimum positions depending on speed, altitude, and the pilot's throttle and stick input. The dropship also features an intelligent autopilot facility that allows the computer to fly all phases of a mission profile with no physical pilot input, including ingress and egress to the target zone as well as landing and docking cycles. The Cheyenne features no manual reversion capability since the craft is too inherently unstable to be flown without computer assistance.[20] The UD-4's avionics are specifically designed to facilitate maximum cockpit efficiency. Essential flight information is provided to the pilot as required on a wide-angle heads-up-display (HUD) and three integrated multi-function displays (MFDs). A voice control system helps reduce the pilot's workload and can be employed for data entry, selecting communications channels, switching operating modes for the MFDs and weapons selection.[20]

    -

    structural design manual vol 2 mcdonnell douglas corporation rar


    DOWNLOADhttps://tinurli.com/2uwjyb



    aaccfb2cb3
    -
    -
    \ No newline at end of file diff --git a/spaces/cihyFjudo/fairness-paper-search/You Are An Idiot Virus [UPDATED] Downloadl.md b/spaces/cihyFjudo/fairness-paper-search/You Are An Idiot Virus [UPDATED] Downloadl.md deleted file mode 100644 index c5673b9b1552033f4904dddfefa5d3ec393abd6a..0000000000000000000000000000000000000000 --- a/spaces/cihyFjudo/fairness-paper-search/You Are An Idiot Virus [UPDATED] Downloadl.md +++ /dev/null @@ -1,76 +0,0 @@ -## You Are An Idiot Virus Downloadl - - - - - - ![You Are An Idiot Virus \[UPDATED\] Downloadl](https://encrypted-tbn1.gstatic.com/images?q=tbn:ANd9GcT68HvVvkD9WEjffe3154hDk0fpbAOF-LEHIhll2mtcKT5C2FcY7UpcrC-2) - - - - - -**LINK ===> [https://venemena.blogspot.com/?download=2txRfl](https://venemena.blogspot.com/?download=2txRfl)** - - - - - - - - - - - - - -# You Are An Idiot Virus: What Is It and How to Avoid It - - - -You Are An Idiot is a malicious website that displays flashing images of smiley faces and plays a looping audio clip of a man singing "You are an idiot, ha ha ha ha ha ha ha". The website also attempts to launch multiple pop-up windows with the same content, making it difficult to close them. The website is designed to annoy and prank unsuspecting users who visit it or click on a link to it. - - - -However, some versions of the website may also contain a JavaScript code that tries to download and execute a file called "You are an idiot.exe" on the user's computer. This file is a Trojan horse that can infect the user's system and perform malicious actions, such as deleting files, changing settings, stealing data, or downloading other malware. The Trojan may also prevent the user from running antivirus software or accessing certain websites. - - - -Therefore, it is important to avoid visiting the You Are An Idiot website or clicking on any links that lead to it. If you encounter the website, do not download or run any files from it. Instead, try to close the browser window or tab using the keyboard shortcut Ctrl+W (Windows) or Command+W (Mac). If that does not work, use the Task Manager (Windows) or Activity Monitor (Mac) to end the browser process. Then, scan your computer with a reputable antivirus program and remove any detected threats. - - - -You can also protect yourself from the You Are An Idiot virus by using a web browser that has a pop-up blocker and a JavaScript blocker. These features can prevent the website from launching multiple windows and executing malicious code. You can also use a firewall and an ad blocker to block unwanted connections and advertisements that may contain malicious links. - - - -In conclusion, You Are An Idiot is a prank website that can also be a source of malware infection. To avoid falling victim to it, do not visit the website or click on any links that lead to it. If you encounter it, close the browser window or tab and scan your computer with an antivirus program. You can also use browser extensions and security software to block pop-ups, JavaScript, and malicious connections. - - - -Here are some additional tips on how to avoid the You Are An Idiot virus and other similar malware: - - - -- Do not open email attachments or click on links from unknown or suspicious senders. They may contain malware or lead to malicious websites. - -- Do not download or run files from untrusted sources. They may contain malware or unwanted programs. - -- Do not give out your personal or financial information to anyone online. They may use it to steal your identity or money. - -- Keep your operating system and software updated. They may fix security vulnerabilities that malware can exploit. - -- Use strong and unique passwords for your online accounts. They may prevent hackers from accessing your data. - -- Backup your important files regularly. They may help you recover your data in case of a malware attack or a system failure. - - - -By following these simple steps, you can protect yourself from the You Are An Idiot virus and other online threats. Stay safe and smart on the internet! - - dfd1c89656 - - - - - diff --git a/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/altair/vegalite/v5/schema/mixins.py b/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/altair/vegalite/v5/schema/mixins.py deleted file mode 100644 index 569daefb8f3f00c519d350de98e542c7562db1b6..0000000000000000000000000000000000000000 --- a/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/altair/vegalite/v5/schema/mixins.py +++ /dev/null @@ -1,1292 +0,0 @@ -# The contents of this file are automatically written by -# tools/generate_schema_wrapper.py. Do not modify directly. -import sys - -from . import core -from altair.utils import use_signature -from altair.utils.schemapi import Undefined - -if sys.version_info >= (3, 11): - from typing import Self -else: - from typing_extensions import Self - - -class MarkMethodMixin: - """A mixin class that defines mark methods""" - - def mark_arc(self, align=Undefined, angle=Undefined, aria=Undefined, ariaRole=Undefined, - ariaRoleDescription=Undefined, aspect=Undefined, bandSize=Undefined, - baseline=Undefined, binSpacing=Undefined, blend=Undefined, clip=Undefined, - color=Undefined, continuousBandSize=Undefined, cornerRadius=Undefined, - cornerRadiusBottomLeft=Undefined, cornerRadiusBottomRight=Undefined, - cornerRadiusEnd=Undefined, cornerRadiusTopLeft=Undefined, - cornerRadiusTopRight=Undefined, cursor=Undefined, description=Undefined, dir=Undefined, - discreteBandSize=Undefined, dx=Undefined, dy=Undefined, ellipsis=Undefined, - fill=Undefined, fillOpacity=Undefined, filled=Undefined, font=Undefined, - fontSize=Undefined, fontStyle=Undefined, fontWeight=Undefined, height=Undefined, - href=Undefined, innerRadius=Undefined, interpolate=Undefined, invalid=Undefined, - limit=Undefined, line=Undefined, lineBreak=Undefined, lineHeight=Undefined, - opacity=Undefined, order=Undefined, orient=Undefined, outerRadius=Undefined, - padAngle=Undefined, point=Undefined, radius=Undefined, radius2=Undefined, - radius2Offset=Undefined, radiusOffset=Undefined, shape=Undefined, size=Undefined, - smooth=Undefined, stroke=Undefined, strokeCap=Undefined, strokeDash=Undefined, - strokeDashOffset=Undefined, strokeJoin=Undefined, strokeMiterLimit=Undefined, - strokeOffset=Undefined, strokeOpacity=Undefined, strokeWidth=Undefined, - style=Undefined, tension=Undefined, text=Undefined, theta=Undefined, theta2=Undefined, - theta2Offset=Undefined, thetaOffset=Undefined, thickness=Undefined, - timeUnitBandPosition=Undefined, timeUnitBandSize=Undefined, tooltip=Undefined, - url=Undefined, width=Undefined, x=Undefined, x2=Undefined, x2Offset=Undefined, - xOffset=Undefined, y=Undefined, y2=Undefined, y2Offset=Undefined, yOffset=Undefined, - **kwds) -> Self: - """Set the chart's mark to 'arc' (see :class:`MarkDef`) - """ - kwds = dict(align=align, angle=angle, aria=aria, ariaRole=ariaRole, - ariaRoleDescription=ariaRoleDescription, aspect=aspect, bandSize=bandSize, - baseline=baseline, binSpacing=binSpacing, blend=blend, clip=clip, color=color, - continuousBandSize=continuousBandSize, cornerRadius=cornerRadius, - cornerRadiusBottomLeft=cornerRadiusBottomLeft, - cornerRadiusBottomRight=cornerRadiusBottomRight, cornerRadiusEnd=cornerRadiusEnd, - cornerRadiusTopLeft=cornerRadiusTopLeft, cornerRadiusTopRight=cornerRadiusTopRight, - cursor=cursor, description=description, dir=dir, discreteBandSize=discreteBandSize, - dx=dx, dy=dy, ellipsis=ellipsis, fill=fill, fillOpacity=fillOpacity, filled=filled, - font=font, fontSize=fontSize, fontStyle=fontStyle, fontWeight=fontWeight, - height=height, href=href, innerRadius=innerRadius, interpolate=interpolate, - invalid=invalid, limit=limit, line=line, lineBreak=lineBreak, lineHeight=lineHeight, - opacity=opacity, order=order, orient=orient, outerRadius=outerRadius, - padAngle=padAngle, point=point, radius=radius, radius2=radius2, - radius2Offset=radius2Offset, radiusOffset=radiusOffset, shape=shape, size=size, - smooth=smooth, stroke=stroke, strokeCap=strokeCap, strokeDash=strokeDash, - strokeDashOffset=strokeDashOffset, strokeJoin=strokeJoin, - strokeMiterLimit=strokeMiterLimit, strokeOffset=strokeOffset, - strokeOpacity=strokeOpacity, strokeWidth=strokeWidth, style=style, tension=tension, - text=text, theta=theta, theta2=theta2, theta2Offset=theta2Offset, - thetaOffset=thetaOffset, thickness=thickness, - timeUnitBandPosition=timeUnitBandPosition, timeUnitBandSize=timeUnitBandSize, - tooltip=tooltip, url=url, width=width, x=x, x2=x2, x2Offset=x2Offset, - xOffset=xOffset, y=y, y2=y2, y2Offset=y2Offset, yOffset=yOffset, **kwds) - copy = self.copy(deep=False) - if any(val is not Undefined for val in kwds.values()): - copy.mark = core.MarkDef(type="arc", **kwds) - else: - copy.mark = "arc" - return copy - - def mark_area(self, align=Undefined, angle=Undefined, aria=Undefined, ariaRole=Undefined, - ariaRoleDescription=Undefined, aspect=Undefined, bandSize=Undefined, - baseline=Undefined, binSpacing=Undefined, blend=Undefined, clip=Undefined, - color=Undefined, continuousBandSize=Undefined, cornerRadius=Undefined, - cornerRadiusBottomLeft=Undefined, cornerRadiusBottomRight=Undefined, - cornerRadiusEnd=Undefined, cornerRadiusTopLeft=Undefined, - cornerRadiusTopRight=Undefined, cursor=Undefined, description=Undefined, - dir=Undefined, discreteBandSize=Undefined, dx=Undefined, dy=Undefined, - ellipsis=Undefined, fill=Undefined, fillOpacity=Undefined, filled=Undefined, - font=Undefined, fontSize=Undefined, fontStyle=Undefined, fontWeight=Undefined, - height=Undefined, href=Undefined, innerRadius=Undefined, interpolate=Undefined, - invalid=Undefined, limit=Undefined, line=Undefined, lineBreak=Undefined, - lineHeight=Undefined, opacity=Undefined, order=Undefined, orient=Undefined, - outerRadius=Undefined, padAngle=Undefined, point=Undefined, radius=Undefined, - radius2=Undefined, radius2Offset=Undefined, radiusOffset=Undefined, shape=Undefined, - size=Undefined, smooth=Undefined, stroke=Undefined, strokeCap=Undefined, - strokeDash=Undefined, strokeDashOffset=Undefined, strokeJoin=Undefined, - strokeMiterLimit=Undefined, strokeOffset=Undefined, strokeOpacity=Undefined, - strokeWidth=Undefined, style=Undefined, tension=Undefined, text=Undefined, - theta=Undefined, theta2=Undefined, theta2Offset=Undefined, thetaOffset=Undefined, - thickness=Undefined, timeUnitBandPosition=Undefined, timeUnitBandSize=Undefined, - tooltip=Undefined, url=Undefined, width=Undefined, x=Undefined, x2=Undefined, - x2Offset=Undefined, xOffset=Undefined, y=Undefined, y2=Undefined, y2Offset=Undefined, - yOffset=Undefined, **kwds) -> Self: - """Set the chart's mark to 'area' (see :class:`MarkDef`) - """ - kwds = dict(align=align, angle=angle, aria=aria, ariaRole=ariaRole, - ariaRoleDescription=ariaRoleDescription, aspect=aspect, bandSize=bandSize, - baseline=baseline, binSpacing=binSpacing, blend=blend, clip=clip, color=color, - continuousBandSize=continuousBandSize, cornerRadius=cornerRadius, - cornerRadiusBottomLeft=cornerRadiusBottomLeft, - cornerRadiusBottomRight=cornerRadiusBottomRight, cornerRadiusEnd=cornerRadiusEnd, - cornerRadiusTopLeft=cornerRadiusTopLeft, cornerRadiusTopRight=cornerRadiusTopRight, - cursor=cursor, description=description, dir=dir, discreteBandSize=discreteBandSize, - dx=dx, dy=dy, ellipsis=ellipsis, fill=fill, fillOpacity=fillOpacity, filled=filled, - font=font, fontSize=fontSize, fontStyle=fontStyle, fontWeight=fontWeight, - height=height, href=href, innerRadius=innerRadius, interpolate=interpolate, - invalid=invalid, limit=limit, line=line, lineBreak=lineBreak, lineHeight=lineHeight, - opacity=opacity, order=order, orient=orient, outerRadius=outerRadius, - padAngle=padAngle, point=point, radius=radius, radius2=radius2, - radius2Offset=radius2Offset, radiusOffset=radiusOffset, shape=shape, size=size, - smooth=smooth, stroke=stroke, strokeCap=strokeCap, strokeDash=strokeDash, - strokeDashOffset=strokeDashOffset, strokeJoin=strokeJoin, - strokeMiterLimit=strokeMiterLimit, strokeOffset=strokeOffset, - strokeOpacity=strokeOpacity, strokeWidth=strokeWidth, style=style, tension=tension, - text=text, theta=theta, theta2=theta2, theta2Offset=theta2Offset, - thetaOffset=thetaOffset, thickness=thickness, - timeUnitBandPosition=timeUnitBandPosition, timeUnitBandSize=timeUnitBandSize, - tooltip=tooltip, url=url, width=width, x=x, x2=x2, x2Offset=x2Offset, - xOffset=xOffset, y=y, y2=y2, y2Offset=y2Offset, yOffset=yOffset, **kwds) - copy = self.copy(deep=False) - if any(val is not Undefined for val in kwds.values()): - copy.mark = core.MarkDef(type="area", **kwds) - else: - copy.mark = "area" - return copy - - def mark_bar(self, align=Undefined, angle=Undefined, aria=Undefined, ariaRole=Undefined, - ariaRoleDescription=Undefined, aspect=Undefined, bandSize=Undefined, - baseline=Undefined, binSpacing=Undefined, blend=Undefined, clip=Undefined, - color=Undefined, continuousBandSize=Undefined, cornerRadius=Undefined, - cornerRadiusBottomLeft=Undefined, cornerRadiusBottomRight=Undefined, - cornerRadiusEnd=Undefined, cornerRadiusTopLeft=Undefined, - cornerRadiusTopRight=Undefined, cursor=Undefined, description=Undefined, dir=Undefined, - discreteBandSize=Undefined, dx=Undefined, dy=Undefined, ellipsis=Undefined, - fill=Undefined, fillOpacity=Undefined, filled=Undefined, font=Undefined, - fontSize=Undefined, fontStyle=Undefined, fontWeight=Undefined, height=Undefined, - href=Undefined, innerRadius=Undefined, interpolate=Undefined, invalid=Undefined, - limit=Undefined, line=Undefined, lineBreak=Undefined, lineHeight=Undefined, - opacity=Undefined, order=Undefined, orient=Undefined, outerRadius=Undefined, - padAngle=Undefined, point=Undefined, radius=Undefined, radius2=Undefined, - radius2Offset=Undefined, radiusOffset=Undefined, shape=Undefined, size=Undefined, - smooth=Undefined, stroke=Undefined, strokeCap=Undefined, strokeDash=Undefined, - strokeDashOffset=Undefined, strokeJoin=Undefined, strokeMiterLimit=Undefined, - strokeOffset=Undefined, strokeOpacity=Undefined, strokeWidth=Undefined, - style=Undefined, tension=Undefined, text=Undefined, theta=Undefined, theta2=Undefined, - theta2Offset=Undefined, thetaOffset=Undefined, thickness=Undefined, - timeUnitBandPosition=Undefined, timeUnitBandSize=Undefined, tooltip=Undefined, - url=Undefined, width=Undefined, x=Undefined, x2=Undefined, x2Offset=Undefined, - xOffset=Undefined, y=Undefined, y2=Undefined, y2Offset=Undefined, yOffset=Undefined, - **kwds) -> Self: - """Set the chart's mark to 'bar' (see :class:`MarkDef`) - """ - kwds = dict(align=align, angle=angle, aria=aria, ariaRole=ariaRole, - ariaRoleDescription=ariaRoleDescription, aspect=aspect, bandSize=bandSize, - baseline=baseline, binSpacing=binSpacing, blend=blend, clip=clip, color=color, - continuousBandSize=continuousBandSize, cornerRadius=cornerRadius, - cornerRadiusBottomLeft=cornerRadiusBottomLeft, - cornerRadiusBottomRight=cornerRadiusBottomRight, cornerRadiusEnd=cornerRadiusEnd, - cornerRadiusTopLeft=cornerRadiusTopLeft, cornerRadiusTopRight=cornerRadiusTopRight, - cursor=cursor, description=description, dir=dir, discreteBandSize=discreteBandSize, - dx=dx, dy=dy, ellipsis=ellipsis, fill=fill, fillOpacity=fillOpacity, filled=filled, - font=font, fontSize=fontSize, fontStyle=fontStyle, fontWeight=fontWeight, - height=height, href=href, innerRadius=innerRadius, interpolate=interpolate, - invalid=invalid, limit=limit, line=line, lineBreak=lineBreak, lineHeight=lineHeight, - opacity=opacity, order=order, orient=orient, outerRadius=outerRadius, - padAngle=padAngle, point=point, radius=radius, radius2=radius2, - radius2Offset=radius2Offset, radiusOffset=radiusOffset, shape=shape, size=size, - smooth=smooth, stroke=stroke, strokeCap=strokeCap, strokeDash=strokeDash, - strokeDashOffset=strokeDashOffset, strokeJoin=strokeJoin, - strokeMiterLimit=strokeMiterLimit, strokeOffset=strokeOffset, - strokeOpacity=strokeOpacity, strokeWidth=strokeWidth, style=style, tension=tension, - text=text, theta=theta, theta2=theta2, theta2Offset=theta2Offset, - thetaOffset=thetaOffset, thickness=thickness, - timeUnitBandPosition=timeUnitBandPosition, timeUnitBandSize=timeUnitBandSize, - tooltip=tooltip, url=url, width=width, x=x, x2=x2, x2Offset=x2Offset, - xOffset=xOffset, y=y, y2=y2, y2Offset=y2Offset, yOffset=yOffset, **kwds) - copy = self.copy(deep=False) - if any(val is not Undefined for val in kwds.values()): - copy.mark = core.MarkDef(type="bar", **kwds) - else: - copy.mark = "bar" - return copy - - def mark_image(self, align=Undefined, angle=Undefined, aria=Undefined, ariaRole=Undefined, - ariaRoleDescription=Undefined, aspect=Undefined, bandSize=Undefined, - baseline=Undefined, binSpacing=Undefined, blend=Undefined, clip=Undefined, - color=Undefined, continuousBandSize=Undefined, cornerRadius=Undefined, - cornerRadiusBottomLeft=Undefined, cornerRadiusBottomRight=Undefined, - cornerRadiusEnd=Undefined, cornerRadiusTopLeft=Undefined, - cornerRadiusTopRight=Undefined, cursor=Undefined, description=Undefined, - dir=Undefined, discreteBandSize=Undefined, dx=Undefined, dy=Undefined, - ellipsis=Undefined, fill=Undefined, fillOpacity=Undefined, filled=Undefined, - font=Undefined, fontSize=Undefined, fontStyle=Undefined, fontWeight=Undefined, - height=Undefined, href=Undefined, innerRadius=Undefined, interpolate=Undefined, - invalid=Undefined, limit=Undefined, line=Undefined, lineBreak=Undefined, - lineHeight=Undefined, opacity=Undefined, order=Undefined, orient=Undefined, - outerRadius=Undefined, padAngle=Undefined, point=Undefined, radius=Undefined, - radius2=Undefined, radius2Offset=Undefined, radiusOffset=Undefined, shape=Undefined, - size=Undefined, smooth=Undefined, stroke=Undefined, strokeCap=Undefined, - strokeDash=Undefined, strokeDashOffset=Undefined, strokeJoin=Undefined, - strokeMiterLimit=Undefined, strokeOffset=Undefined, strokeOpacity=Undefined, - strokeWidth=Undefined, style=Undefined, tension=Undefined, text=Undefined, - theta=Undefined, theta2=Undefined, theta2Offset=Undefined, thetaOffset=Undefined, - thickness=Undefined, timeUnitBandPosition=Undefined, timeUnitBandSize=Undefined, - tooltip=Undefined, url=Undefined, width=Undefined, x=Undefined, x2=Undefined, - x2Offset=Undefined, xOffset=Undefined, y=Undefined, y2=Undefined, y2Offset=Undefined, - yOffset=Undefined, **kwds) -> Self: - """Set the chart's mark to 'image' (see :class:`MarkDef`) - """ - kwds = dict(align=align, angle=angle, aria=aria, ariaRole=ariaRole, - ariaRoleDescription=ariaRoleDescription, aspect=aspect, bandSize=bandSize, - baseline=baseline, binSpacing=binSpacing, blend=blend, clip=clip, color=color, - continuousBandSize=continuousBandSize, cornerRadius=cornerRadius, - cornerRadiusBottomLeft=cornerRadiusBottomLeft, - cornerRadiusBottomRight=cornerRadiusBottomRight, cornerRadiusEnd=cornerRadiusEnd, - cornerRadiusTopLeft=cornerRadiusTopLeft, cornerRadiusTopRight=cornerRadiusTopRight, - cursor=cursor, description=description, dir=dir, discreteBandSize=discreteBandSize, - dx=dx, dy=dy, ellipsis=ellipsis, fill=fill, fillOpacity=fillOpacity, filled=filled, - font=font, fontSize=fontSize, fontStyle=fontStyle, fontWeight=fontWeight, - height=height, href=href, innerRadius=innerRadius, interpolate=interpolate, - invalid=invalid, limit=limit, line=line, lineBreak=lineBreak, lineHeight=lineHeight, - opacity=opacity, order=order, orient=orient, outerRadius=outerRadius, - padAngle=padAngle, point=point, radius=radius, radius2=radius2, - radius2Offset=radius2Offset, radiusOffset=radiusOffset, shape=shape, size=size, - smooth=smooth, stroke=stroke, strokeCap=strokeCap, strokeDash=strokeDash, - strokeDashOffset=strokeDashOffset, strokeJoin=strokeJoin, - strokeMiterLimit=strokeMiterLimit, strokeOffset=strokeOffset, - strokeOpacity=strokeOpacity, strokeWidth=strokeWidth, style=style, tension=tension, - text=text, theta=theta, theta2=theta2, theta2Offset=theta2Offset, - thetaOffset=thetaOffset, thickness=thickness, - timeUnitBandPosition=timeUnitBandPosition, timeUnitBandSize=timeUnitBandSize, - tooltip=tooltip, url=url, width=width, x=x, x2=x2, x2Offset=x2Offset, - xOffset=xOffset, y=y, y2=y2, y2Offset=y2Offset, yOffset=yOffset, **kwds) - copy = self.copy(deep=False) - if any(val is not Undefined for val in kwds.values()): - copy.mark = core.MarkDef(type="image", **kwds) - else: - copy.mark = "image" - return copy - - def mark_line(self, align=Undefined, angle=Undefined, aria=Undefined, ariaRole=Undefined, - ariaRoleDescription=Undefined, aspect=Undefined, bandSize=Undefined, - baseline=Undefined, binSpacing=Undefined, blend=Undefined, clip=Undefined, - color=Undefined, continuousBandSize=Undefined, cornerRadius=Undefined, - cornerRadiusBottomLeft=Undefined, cornerRadiusBottomRight=Undefined, - cornerRadiusEnd=Undefined, cornerRadiusTopLeft=Undefined, - cornerRadiusTopRight=Undefined, cursor=Undefined, description=Undefined, - dir=Undefined, discreteBandSize=Undefined, dx=Undefined, dy=Undefined, - ellipsis=Undefined, fill=Undefined, fillOpacity=Undefined, filled=Undefined, - font=Undefined, fontSize=Undefined, fontStyle=Undefined, fontWeight=Undefined, - height=Undefined, href=Undefined, innerRadius=Undefined, interpolate=Undefined, - invalid=Undefined, limit=Undefined, line=Undefined, lineBreak=Undefined, - lineHeight=Undefined, opacity=Undefined, order=Undefined, orient=Undefined, - outerRadius=Undefined, padAngle=Undefined, point=Undefined, radius=Undefined, - radius2=Undefined, radius2Offset=Undefined, radiusOffset=Undefined, shape=Undefined, - size=Undefined, smooth=Undefined, stroke=Undefined, strokeCap=Undefined, - strokeDash=Undefined, strokeDashOffset=Undefined, strokeJoin=Undefined, - strokeMiterLimit=Undefined, strokeOffset=Undefined, strokeOpacity=Undefined, - strokeWidth=Undefined, style=Undefined, tension=Undefined, text=Undefined, - theta=Undefined, theta2=Undefined, theta2Offset=Undefined, thetaOffset=Undefined, - thickness=Undefined, timeUnitBandPosition=Undefined, timeUnitBandSize=Undefined, - tooltip=Undefined, url=Undefined, width=Undefined, x=Undefined, x2=Undefined, - x2Offset=Undefined, xOffset=Undefined, y=Undefined, y2=Undefined, y2Offset=Undefined, - yOffset=Undefined, **kwds) -> Self: - """Set the chart's mark to 'line' (see :class:`MarkDef`) - """ - kwds = dict(align=align, angle=angle, aria=aria, ariaRole=ariaRole, - ariaRoleDescription=ariaRoleDescription, aspect=aspect, bandSize=bandSize, - baseline=baseline, binSpacing=binSpacing, blend=blend, clip=clip, color=color, - continuousBandSize=continuousBandSize, cornerRadius=cornerRadius, - cornerRadiusBottomLeft=cornerRadiusBottomLeft, - cornerRadiusBottomRight=cornerRadiusBottomRight, cornerRadiusEnd=cornerRadiusEnd, - cornerRadiusTopLeft=cornerRadiusTopLeft, cornerRadiusTopRight=cornerRadiusTopRight, - cursor=cursor, description=description, dir=dir, discreteBandSize=discreteBandSize, - dx=dx, dy=dy, ellipsis=ellipsis, fill=fill, fillOpacity=fillOpacity, filled=filled, - font=font, fontSize=fontSize, fontStyle=fontStyle, fontWeight=fontWeight, - height=height, href=href, innerRadius=innerRadius, interpolate=interpolate, - invalid=invalid, limit=limit, line=line, lineBreak=lineBreak, lineHeight=lineHeight, - opacity=opacity, order=order, orient=orient, outerRadius=outerRadius, - padAngle=padAngle, point=point, radius=radius, radius2=radius2, - radius2Offset=radius2Offset, radiusOffset=radiusOffset, shape=shape, size=size, - smooth=smooth, stroke=stroke, strokeCap=strokeCap, strokeDash=strokeDash, - strokeDashOffset=strokeDashOffset, strokeJoin=strokeJoin, - strokeMiterLimit=strokeMiterLimit, strokeOffset=strokeOffset, - strokeOpacity=strokeOpacity, strokeWidth=strokeWidth, style=style, tension=tension, - text=text, theta=theta, theta2=theta2, theta2Offset=theta2Offset, - thetaOffset=thetaOffset, thickness=thickness, - timeUnitBandPosition=timeUnitBandPosition, timeUnitBandSize=timeUnitBandSize, - tooltip=tooltip, url=url, width=width, x=x, x2=x2, x2Offset=x2Offset, - xOffset=xOffset, y=y, y2=y2, y2Offset=y2Offset, yOffset=yOffset, **kwds) - copy = self.copy(deep=False) - if any(val is not Undefined for val in kwds.values()): - copy.mark = core.MarkDef(type="line", **kwds) - else: - copy.mark = "line" - return copy - - def mark_point(self, align=Undefined, angle=Undefined, aria=Undefined, ariaRole=Undefined, - ariaRoleDescription=Undefined, aspect=Undefined, bandSize=Undefined, - baseline=Undefined, binSpacing=Undefined, blend=Undefined, clip=Undefined, - color=Undefined, continuousBandSize=Undefined, cornerRadius=Undefined, - cornerRadiusBottomLeft=Undefined, cornerRadiusBottomRight=Undefined, - cornerRadiusEnd=Undefined, cornerRadiusTopLeft=Undefined, - cornerRadiusTopRight=Undefined, cursor=Undefined, description=Undefined, - dir=Undefined, discreteBandSize=Undefined, dx=Undefined, dy=Undefined, - ellipsis=Undefined, fill=Undefined, fillOpacity=Undefined, filled=Undefined, - font=Undefined, fontSize=Undefined, fontStyle=Undefined, fontWeight=Undefined, - height=Undefined, href=Undefined, innerRadius=Undefined, interpolate=Undefined, - invalid=Undefined, limit=Undefined, line=Undefined, lineBreak=Undefined, - lineHeight=Undefined, opacity=Undefined, order=Undefined, orient=Undefined, - outerRadius=Undefined, padAngle=Undefined, point=Undefined, radius=Undefined, - radius2=Undefined, radius2Offset=Undefined, radiusOffset=Undefined, shape=Undefined, - size=Undefined, smooth=Undefined, stroke=Undefined, strokeCap=Undefined, - strokeDash=Undefined, strokeDashOffset=Undefined, strokeJoin=Undefined, - strokeMiterLimit=Undefined, strokeOffset=Undefined, strokeOpacity=Undefined, - strokeWidth=Undefined, style=Undefined, tension=Undefined, text=Undefined, - theta=Undefined, theta2=Undefined, theta2Offset=Undefined, thetaOffset=Undefined, - thickness=Undefined, timeUnitBandPosition=Undefined, timeUnitBandSize=Undefined, - tooltip=Undefined, url=Undefined, width=Undefined, x=Undefined, x2=Undefined, - x2Offset=Undefined, xOffset=Undefined, y=Undefined, y2=Undefined, y2Offset=Undefined, - yOffset=Undefined, **kwds) -> Self: - """Set the chart's mark to 'point' (see :class:`MarkDef`) - """ - kwds = dict(align=align, angle=angle, aria=aria, ariaRole=ariaRole, - ariaRoleDescription=ariaRoleDescription, aspect=aspect, bandSize=bandSize, - baseline=baseline, binSpacing=binSpacing, blend=blend, clip=clip, color=color, - continuousBandSize=continuousBandSize, cornerRadius=cornerRadius, - cornerRadiusBottomLeft=cornerRadiusBottomLeft, - cornerRadiusBottomRight=cornerRadiusBottomRight, cornerRadiusEnd=cornerRadiusEnd, - cornerRadiusTopLeft=cornerRadiusTopLeft, cornerRadiusTopRight=cornerRadiusTopRight, - cursor=cursor, description=description, dir=dir, discreteBandSize=discreteBandSize, - dx=dx, dy=dy, ellipsis=ellipsis, fill=fill, fillOpacity=fillOpacity, filled=filled, - font=font, fontSize=fontSize, fontStyle=fontStyle, fontWeight=fontWeight, - height=height, href=href, innerRadius=innerRadius, interpolate=interpolate, - invalid=invalid, limit=limit, line=line, lineBreak=lineBreak, lineHeight=lineHeight, - opacity=opacity, order=order, orient=orient, outerRadius=outerRadius, - padAngle=padAngle, point=point, radius=radius, radius2=radius2, - radius2Offset=radius2Offset, radiusOffset=radiusOffset, shape=shape, size=size, - smooth=smooth, stroke=stroke, strokeCap=strokeCap, strokeDash=strokeDash, - strokeDashOffset=strokeDashOffset, strokeJoin=strokeJoin, - strokeMiterLimit=strokeMiterLimit, strokeOffset=strokeOffset, - strokeOpacity=strokeOpacity, strokeWidth=strokeWidth, style=style, tension=tension, - text=text, theta=theta, theta2=theta2, theta2Offset=theta2Offset, - thetaOffset=thetaOffset, thickness=thickness, - timeUnitBandPosition=timeUnitBandPosition, timeUnitBandSize=timeUnitBandSize, - tooltip=tooltip, url=url, width=width, x=x, x2=x2, x2Offset=x2Offset, - xOffset=xOffset, y=y, y2=y2, y2Offset=y2Offset, yOffset=yOffset, **kwds) - copy = self.copy(deep=False) - if any(val is not Undefined for val in kwds.values()): - copy.mark = core.MarkDef(type="point", **kwds) - else: - copy.mark = "point" - return copy - - def mark_rect(self, align=Undefined, angle=Undefined, aria=Undefined, ariaRole=Undefined, - ariaRoleDescription=Undefined, aspect=Undefined, bandSize=Undefined, - baseline=Undefined, binSpacing=Undefined, blend=Undefined, clip=Undefined, - color=Undefined, continuousBandSize=Undefined, cornerRadius=Undefined, - cornerRadiusBottomLeft=Undefined, cornerRadiusBottomRight=Undefined, - cornerRadiusEnd=Undefined, cornerRadiusTopLeft=Undefined, - cornerRadiusTopRight=Undefined, cursor=Undefined, description=Undefined, - dir=Undefined, discreteBandSize=Undefined, dx=Undefined, dy=Undefined, - ellipsis=Undefined, fill=Undefined, fillOpacity=Undefined, filled=Undefined, - font=Undefined, fontSize=Undefined, fontStyle=Undefined, fontWeight=Undefined, - height=Undefined, href=Undefined, innerRadius=Undefined, interpolate=Undefined, - invalid=Undefined, limit=Undefined, line=Undefined, lineBreak=Undefined, - lineHeight=Undefined, opacity=Undefined, order=Undefined, orient=Undefined, - outerRadius=Undefined, padAngle=Undefined, point=Undefined, radius=Undefined, - radius2=Undefined, radius2Offset=Undefined, radiusOffset=Undefined, shape=Undefined, - size=Undefined, smooth=Undefined, stroke=Undefined, strokeCap=Undefined, - strokeDash=Undefined, strokeDashOffset=Undefined, strokeJoin=Undefined, - strokeMiterLimit=Undefined, strokeOffset=Undefined, strokeOpacity=Undefined, - strokeWidth=Undefined, style=Undefined, tension=Undefined, text=Undefined, - theta=Undefined, theta2=Undefined, theta2Offset=Undefined, thetaOffset=Undefined, - thickness=Undefined, timeUnitBandPosition=Undefined, timeUnitBandSize=Undefined, - tooltip=Undefined, url=Undefined, width=Undefined, x=Undefined, x2=Undefined, - x2Offset=Undefined, xOffset=Undefined, y=Undefined, y2=Undefined, y2Offset=Undefined, - yOffset=Undefined, **kwds) -> Self: - """Set the chart's mark to 'rect' (see :class:`MarkDef`) - """ - kwds = dict(align=align, angle=angle, aria=aria, ariaRole=ariaRole, - ariaRoleDescription=ariaRoleDescription, aspect=aspect, bandSize=bandSize, - baseline=baseline, binSpacing=binSpacing, blend=blend, clip=clip, color=color, - continuousBandSize=continuousBandSize, cornerRadius=cornerRadius, - cornerRadiusBottomLeft=cornerRadiusBottomLeft, - cornerRadiusBottomRight=cornerRadiusBottomRight, cornerRadiusEnd=cornerRadiusEnd, - cornerRadiusTopLeft=cornerRadiusTopLeft, cornerRadiusTopRight=cornerRadiusTopRight, - cursor=cursor, description=description, dir=dir, discreteBandSize=discreteBandSize, - dx=dx, dy=dy, ellipsis=ellipsis, fill=fill, fillOpacity=fillOpacity, filled=filled, - font=font, fontSize=fontSize, fontStyle=fontStyle, fontWeight=fontWeight, - height=height, href=href, innerRadius=innerRadius, interpolate=interpolate, - invalid=invalid, limit=limit, line=line, lineBreak=lineBreak, lineHeight=lineHeight, - opacity=opacity, order=order, orient=orient, outerRadius=outerRadius, - padAngle=padAngle, point=point, radius=radius, radius2=radius2, - radius2Offset=radius2Offset, radiusOffset=radiusOffset, shape=shape, size=size, - smooth=smooth, stroke=stroke, strokeCap=strokeCap, strokeDash=strokeDash, - strokeDashOffset=strokeDashOffset, strokeJoin=strokeJoin, - strokeMiterLimit=strokeMiterLimit, strokeOffset=strokeOffset, - strokeOpacity=strokeOpacity, strokeWidth=strokeWidth, style=style, tension=tension, - text=text, theta=theta, theta2=theta2, theta2Offset=theta2Offset, - thetaOffset=thetaOffset, thickness=thickness, - timeUnitBandPosition=timeUnitBandPosition, timeUnitBandSize=timeUnitBandSize, - tooltip=tooltip, url=url, width=width, x=x, x2=x2, x2Offset=x2Offset, - xOffset=xOffset, y=y, y2=y2, y2Offset=y2Offset, yOffset=yOffset, **kwds) - copy = self.copy(deep=False) - if any(val is not Undefined for val in kwds.values()): - copy.mark = core.MarkDef(type="rect", **kwds) - else: - copy.mark = "rect" - return copy - - def mark_rule(self, align=Undefined, angle=Undefined, aria=Undefined, ariaRole=Undefined, - ariaRoleDescription=Undefined, aspect=Undefined, bandSize=Undefined, - baseline=Undefined, binSpacing=Undefined, blend=Undefined, clip=Undefined, - color=Undefined, continuousBandSize=Undefined, cornerRadius=Undefined, - cornerRadiusBottomLeft=Undefined, cornerRadiusBottomRight=Undefined, - cornerRadiusEnd=Undefined, cornerRadiusTopLeft=Undefined, - cornerRadiusTopRight=Undefined, cursor=Undefined, description=Undefined, - dir=Undefined, discreteBandSize=Undefined, dx=Undefined, dy=Undefined, - ellipsis=Undefined, fill=Undefined, fillOpacity=Undefined, filled=Undefined, - font=Undefined, fontSize=Undefined, fontStyle=Undefined, fontWeight=Undefined, - height=Undefined, href=Undefined, innerRadius=Undefined, interpolate=Undefined, - invalid=Undefined, limit=Undefined, line=Undefined, lineBreak=Undefined, - lineHeight=Undefined, opacity=Undefined, order=Undefined, orient=Undefined, - outerRadius=Undefined, padAngle=Undefined, point=Undefined, radius=Undefined, - radius2=Undefined, radius2Offset=Undefined, radiusOffset=Undefined, shape=Undefined, - size=Undefined, smooth=Undefined, stroke=Undefined, strokeCap=Undefined, - strokeDash=Undefined, strokeDashOffset=Undefined, strokeJoin=Undefined, - strokeMiterLimit=Undefined, strokeOffset=Undefined, strokeOpacity=Undefined, - strokeWidth=Undefined, style=Undefined, tension=Undefined, text=Undefined, - theta=Undefined, theta2=Undefined, theta2Offset=Undefined, thetaOffset=Undefined, - thickness=Undefined, timeUnitBandPosition=Undefined, timeUnitBandSize=Undefined, - tooltip=Undefined, url=Undefined, width=Undefined, x=Undefined, x2=Undefined, - x2Offset=Undefined, xOffset=Undefined, y=Undefined, y2=Undefined, y2Offset=Undefined, - yOffset=Undefined, **kwds) -> Self: - """Set the chart's mark to 'rule' (see :class:`MarkDef`) - """ - kwds = dict(align=align, angle=angle, aria=aria, ariaRole=ariaRole, - ariaRoleDescription=ariaRoleDescription, aspect=aspect, bandSize=bandSize, - baseline=baseline, binSpacing=binSpacing, blend=blend, clip=clip, color=color, - continuousBandSize=continuousBandSize, cornerRadius=cornerRadius, - cornerRadiusBottomLeft=cornerRadiusBottomLeft, - cornerRadiusBottomRight=cornerRadiusBottomRight, cornerRadiusEnd=cornerRadiusEnd, - cornerRadiusTopLeft=cornerRadiusTopLeft, cornerRadiusTopRight=cornerRadiusTopRight, - cursor=cursor, description=description, dir=dir, discreteBandSize=discreteBandSize, - dx=dx, dy=dy, ellipsis=ellipsis, fill=fill, fillOpacity=fillOpacity, filled=filled, - font=font, fontSize=fontSize, fontStyle=fontStyle, fontWeight=fontWeight, - height=height, href=href, innerRadius=innerRadius, interpolate=interpolate, - invalid=invalid, limit=limit, line=line, lineBreak=lineBreak, lineHeight=lineHeight, - opacity=opacity, order=order, orient=orient, outerRadius=outerRadius, - padAngle=padAngle, point=point, radius=radius, radius2=radius2, - radius2Offset=radius2Offset, radiusOffset=radiusOffset, shape=shape, size=size, - smooth=smooth, stroke=stroke, strokeCap=strokeCap, strokeDash=strokeDash, - strokeDashOffset=strokeDashOffset, strokeJoin=strokeJoin, - strokeMiterLimit=strokeMiterLimit, strokeOffset=strokeOffset, - strokeOpacity=strokeOpacity, strokeWidth=strokeWidth, style=style, tension=tension, - text=text, theta=theta, theta2=theta2, theta2Offset=theta2Offset, - thetaOffset=thetaOffset, thickness=thickness, - timeUnitBandPosition=timeUnitBandPosition, timeUnitBandSize=timeUnitBandSize, - tooltip=tooltip, url=url, width=width, x=x, x2=x2, x2Offset=x2Offset, - xOffset=xOffset, y=y, y2=y2, y2Offset=y2Offset, yOffset=yOffset, **kwds) - copy = self.copy(deep=False) - if any(val is not Undefined for val in kwds.values()): - copy.mark = core.MarkDef(type="rule", **kwds) - else: - copy.mark = "rule" - return copy - - def mark_text(self, align=Undefined, angle=Undefined, aria=Undefined, ariaRole=Undefined, - ariaRoleDescription=Undefined, aspect=Undefined, bandSize=Undefined, - baseline=Undefined, binSpacing=Undefined, blend=Undefined, clip=Undefined, - color=Undefined, continuousBandSize=Undefined, cornerRadius=Undefined, - cornerRadiusBottomLeft=Undefined, cornerRadiusBottomRight=Undefined, - cornerRadiusEnd=Undefined, cornerRadiusTopLeft=Undefined, - cornerRadiusTopRight=Undefined, cursor=Undefined, description=Undefined, - dir=Undefined, discreteBandSize=Undefined, dx=Undefined, dy=Undefined, - ellipsis=Undefined, fill=Undefined, fillOpacity=Undefined, filled=Undefined, - font=Undefined, fontSize=Undefined, fontStyle=Undefined, fontWeight=Undefined, - height=Undefined, href=Undefined, innerRadius=Undefined, interpolate=Undefined, - invalid=Undefined, limit=Undefined, line=Undefined, lineBreak=Undefined, - lineHeight=Undefined, opacity=Undefined, order=Undefined, orient=Undefined, - outerRadius=Undefined, padAngle=Undefined, point=Undefined, radius=Undefined, - radius2=Undefined, radius2Offset=Undefined, radiusOffset=Undefined, shape=Undefined, - size=Undefined, smooth=Undefined, stroke=Undefined, strokeCap=Undefined, - strokeDash=Undefined, strokeDashOffset=Undefined, strokeJoin=Undefined, - strokeMiterLimit=Undefined, strokeOffset=Undefined, strokeOpacity=Undefined, - strokeWidth=Undefined, style=Undefined, tension=Undefined, text=Undefined, - theta=Undefined, theta2=Undefined, theta2Offset=Undefined, thetaOffset=Undefined, - thickness=Undefined, timeUnitBandPosition=Undefined, timeUnitBandSize=Undefined, - tooltip=Undefined, url=Undefined, width=Undefined, x=Undefined, x2=Undefined, - x2Offset=Undefined, xOffset=Undefined, y=Undefined, y2=Undefined, y2Offset=Undefined, - yOffset=Undefined, **kwds) -> Self: - """Set the chart's mark to 'text' (see :class:`MarkDef`) - """ - kwds = dict(align=align, angle=angle, aria=aria, ariaRole=ariaRole, - ariaRoleDescription=ariaRoleDescription, aspect=aspect, bandSize=bandSize, - baseline=baseline, binSpacing=binSpacing, blend=blend, clip=clip, color=color, - continuousBandSize=continuousBandSize, cornerRadius=cornerRadius, - cornerRadiusBottomLeft=cornerRadiusBottomLeft, - cornerRadiusBottomRight=cornerRadiusBottomRight, cornerRadiusEnd=cornerRadiusEnd, - cornerRadiusTopLeft=cornerRadiusTopLeft, cornerRadiusTopRight=cornerRadiusTopRight, - cursor=cursor, description=description, dir=dir, discreteBandSize=discreteBandSize, - dx=dx, dy=dy, ellipsis=ellipsis, fill=fill, fillOpacity=fillOpacity, filled=filled, - font=font, fontSize=fontSize, fontStyle=fontStyle, fontWeight=fontWeight, - height=height, href=href, innerRadius=innerRadius, interpolate=interpolate, - invalid=invalid, limit=limit, line=line, lineBreak=lineBreak, lineHeight=lineHeight, - opacity=opacity, order=order, orient=orient, outerRadius=outerRadius, - padAngle=padAngle, point=point, radius=radius, radius2=radius2, - radius2Offset=radius2Offset, radiusOffset=radiusOffset, shape=shape, size=size, - smooth=smooth, stroke=stroke, strokeCap=strokeCap, strokeDash=strokeDash, - strokeDashOffset=strokeDashOffset, strokeJoin=strokeJoin, - strokeMiterLimit=strokeMiterLimit, strokeOffset=strokeOffset, - strokeOpacity=strokeOpacity, strokeWidth=strokeWidth, style=style, tension=tension, - text=text, theta=theta, theta2=theta2, theta2Offset=theta2Offset, - thetaOffset=thetaOffset, thickness=thickness, - timeUnitBandPosition=timeUnitBandPosition, timeUnitBandSize=timeUnitBandSize, - tooltip=tooltip, url=url, width=width, x=x, x2=x2, x2Offset=x2Offset, - xOffset=xOffset, y=y, y2=y2, y2Offset=y2Offset, yOffset=yOffset, **kwds) - copy = self.copy(deep=False) - if any(val is not Undefined for val in kwds.values()): - copy.mark = core.MarkDef(type="text", **kwds) - else: - copy.mark = "text" - return copy - - def mark_tick(self, align=Undefined, angle=Undefined, aria=Undefined, ariaRole=Undefined, - ariaRoleDescription=Undefined, aspect=Undefined, bandSize=Undefined, - baseline=Undefined, binSpacing=Undefined, blend=Undefined, clip=Undefined, - color=Undefined, continuousBandSize=Undefined, cornerRadius=Undefined, - cornerRadiusBottomLeft=Undefined, cornerRadiusBottomRight=Undefined, - cornerRadiusEnd=Undefined, cornerRadiusTopLeft=Undefined, - cornerRadiusTopRight=Undefined, cursor=Undefined, description=Undefined, - dir=Undefined, discreteBandSize=Undefined, dx=Undefined, dy=Undefined, - ellipsis=Undefined, fill=Undefined, fillOpacity=Undefined, filled=Undefined, - font=Undefined, fontSize=Undefined, fontStyle=Undefined, fontWeight=Undefined, - height=Undefined, href=Undefined, innerRadius=Undefined, interpolate=Undefined, - invalid=Undefined, limit=Undefined, line=Undefined, lineBreak=Undefined, - lineHeight=Undefined, opacity=Undefined, order=Undefined, orient=Undefined, - outerRadius=Undefined, padAngle=Undefined, point=Undefined, radius=Undefined, - radius2=Undefined, radius2Offset=Undefined, radiusOffset=Undefined, shape=Undefined, - size=Undefined, smooth=Undefined, stroke=Undefined, strokeCap=Undefined, - strokeDash=Undefined, strokeDashOffset=Undefined, strokeJoin=Undefined, - strokeMiterLimit=Undefined, strokeOffset=Undefined, strokeOpacity=Undefined, - strokeWidth=Undefined, style=Undefined, tension=Undefined, text=Undefined, - theta=Undefined, theta2=Undefined, theta2Offset=Undefined, thetaOffset=Undefined, - thickness=Undefined, timeUnitBandPosition=Undefined, timeUnitBandSize=Undefined, - tooltip=Undefined, url=Undefined, width=Undefined, x=Undefined, x2=Undefined, - x2Offset=Undefined, xOffset=Undefined, y=Undefined, y2=Undefined, y2Offset=Undefined, - yOffset=Undefined, **kwds) -> Self: - """Set the chart's mark to 'tick' (see :class:`MarkDef`) - """ - kwds = dict(align=align, angle=angle, aria=aria, ariaRole=ariaRole, - ariaRoleDescription=ariaRoleDescription, aspect=aspect, bandSize=bandSize, - baseline=baseline, binSpacing=binSpacing, blend=blend, clip=clip, color=color, - continuousBandSize=continuousBandSize, cornerRadius=cornerRadius, - cornerRadiusBottomLeft=cornerRadiusBottomLeft, - cornerRadiusBottomRight=cornerRadiusBottomRight, cornerRadiusEnd=cornerRadiusEnd, - cornerRadiusTopLeft=cornerRadiusTopLeft, cornerRadiusTopRight=cornerRadiusTopRight, - cursor=cursor, description=description, dir=dir, discreteBandSize=discreteBandSize, - dx=dx, dy=dy, ellipsis=ellipsis, fill=fill, fillOpacity=fillOpacity, filled=filled, - font=font, fontSize=fontSize, fontStyle=fontStyle, fontWeight=fontWeight, - height=height, href=href, innerRadius=innerRadius, interpolate=interpolate, - invalid=invalid, limit=limit, line=line, lineBreak=lineBreak, lineHeight=lineHeight, - opacity=opacity, order=order, orient=orient, outerRadius=outerRadius, - padAngle=padAngle, point=point, radius=radius, radius2=radius2, - radius2Offset=radius2Offset, radiusOffset=radiusOffset, shape=shape, size=size, - smooth=smooth, stroke=stroke, strokeCap=strokeCap, strokeDash=strokeDash, - strokeDashOffset=strokeDashOffset, strokeJoin=strokeJoin, - strokeMiterLimit=strokeMiterLimit, strokeOffset=strokeOffset, - strokeOpacity=strokeOpacity, strokeWidth=strokeWidth, style=style, tension=tension, - text=text, theta=theta, theta2=theta2, theta2Offset=theta2Offset, - thetaOffset=thetaOffset, thickness=thickness, - timeUnitBandPosition=timeUnitBandPosition, timeUnitBandSize=timeUnitBandSize, - tooltip=tooltip, url=url, width=width, x=x, x2=x2, x2Offset=x2Offset, - xOffset=xOffset, y=y, y2=y2, y2Offset=y2Offset, yOffset=yOffset, **kwds) - copy = self.copy(deep=False) - if any(val is not Undefined for val in kwds.values()): - copy.mark = core.MarkDef(type="tick", **kwds) - else: - copy.mark = "tick" - return copy - - def mark_trail(self, align=Undefined, angle=Undefined, aria=Undefined, ariaRole=Undefined, - ariaRoleDescription=Undefined, aspect=Undefined, bandSize=Undefined, - baseline=Undefined, binSpacing=Undefined, blend=Undefined, clip=Undefined, - color=Undefined, continuousBandSize=Undefined, cornerRadius=Undefined, - cornerRadiusBottomLeft=Undefined, cornerRadiusBottomRight=Undefined, - cornerRadiusEnd=Undefined, cornerRadiusTopLeft=Undefined, - cornerRadiusTopRight=Undefined, cursor=Undefined, description=Undefined, - dir=Undefined, discreteBandSize=Undefined, dx=Undefined, dy=Undefined, - ellipsis=Undefined, fill=Undefined, fillOpacity=Undefined, filled=Undefined, - font=Undefined, fontSize=Undefined, fontStyle=Undefined, fontWeight=Undefined, - height=Undefined, href=Undefined, innerRadius=Undefined, interpolate=Undefined, - invalid=Undefined, limit=Undefined, line=Undefined, lineBreak=Undefined, - lineHeight=Undefined, opacity=Undefined, order=Undefined, orient=Undefined, - outerRadius=Undefined, padAngle=Undefined, point=Undefined, radius=Undefined, - radius2=Undefined, radius2Offset=Undefined, radiusOffset=Undefined, shape=Undefined, - size=Undefined, smooth=Undefined, stroke=Undefined, strokeCap=Undefined, - strokeDash=Undefined, strokeDashOffset=Undefined, strokeJoin=Undefined, - strokeMiterLimit=Undefined, strokeOffset=Undefined, strokeOpacity=Undefined, - strokeWidth=Undefined, style=Undefined, tension=Undefined, text=Undefined, - theta=Undefined, theta2=Undefined, theta2Offset=Undefined, thetaOffset=Undefined, - thickness=Undefined, timeUnitBandPosition=Undefined, timeUnitBandSize=Undefined, - tooltip=Undefined, url=Undefined, width=Undefined, x=Undefined, x2=Undefined, - x2Offset=Undefined, xOffset=Undefined, y=Undefined, y2=Undefined, y2Offset=Undefined, - yOffset=Undefined, **kwds) -> Self: - """Set the chart's mark to 'trail' (see :class:`MarkDef`) - """ - kwds = dict(align=align, angle=angle, aria=aria, ariaRole=ariaRole, - ariaRoleDescription=ariaRoleDescription, aspect=aspect, bandSize=bandSize, - baseline=baseline, binSpacing=binSpacing, blend=blend, clip=clip, color=color, - continuousBandSize=continuousBandSize, cornerRadius=cornerRadius, - cornerRadiusBottomLeft=cornerRadiusBottomLeft, - cornerRadiusBottomRight=cornerRadiusBottomRight, cornerRadiusEnd=cornerRadiusEnd, - cornerRadiusTopLeft=cornerRadiusTopLeft, cornerRadiusTopRight=cornerRadiusTopRight, - cursor=cursor, description=description, dir=dir, discreteBandSize=discreteBandSize, - dx=dx, dy=dy, ellipsis=ellipsis, fill=fill, fillOpacity=fillOpacity, filled=filled, - font=font, fontSize=fontSize, fontStyle=fontStyle, fontWeight=fontWeight, - height=height, href=href, innerRadius=innerRadius, interpolate=interpolate, - invalid=invalid, limit=limit, line=line, lineBreak=lineBreak, lineHeight=lineHeight, - opacity=opacity, order=order, orient=orient, outerRadius=outerRadius, - padAngle=padAngle, point=point, radius=radius, radius2=radius2, - radius2Offset=radius2Offset, radiusOffset=radiusOffset, shape=shape, size=size, - smooth=smooth, stroke=stroke, strokeCap=strokeCap, strokeDash=strokeDash, - strokeDashOffset=strokeDashOffset, strokeJoin=strokeJoin, - strokeMiterLimit=strokeMiterLimit, strokeOffset=strokeOffset, - strokeOpacity=strokeOpacity, strokeWidth=strokeWidth, style=style, tension=tension, - text=text, theta=theta, theta2=theta2, theta2Offset=theta2Offset, - thetaOffset=thetaOffset, thickness=thickness, - timeUnitBandPosition=timeUnitBandPosition, timeUnitBandSize=timeUnitBandSize, - tooltip=tooltip, url=url, width=width, x=x, x2=x2, x2Offset=x2Offset, - xOffset=xOffset, y=y, y2=y2, y2Offset=y2Offset, yOffset=yOffset, **kwds) - copy = self.copy(deep=False) - if any(val is not Undefined for val in kwds.values()): - copy.mark = core.MarkDef(type="trail", **kwds) - else: - copy.mark = "trail" - return copy - - def mark_circle(self, align=Undefined, angle=Undefined, aria=Undefined, ariaRole=Undefined, - ariaRoleDescription=Undefined, aspect=Undefined, bandSize=Undefined, - baseline=Undefined, binSpacing=Undefined, blend=Undefined, clip=Undefined, - color=Undefined, continuousBandSize=Undefined, cornerRadius=Undefined, - cornerRadiusBottomLeft=Undefined, cornerRadiusBottomRight=Undefined, - cornerRadiusEnd=Undefined, cornerRadiusTopLeft=Undefined, - cornerRadiusTopRight=Undefined, cursor=Undefined, description=Undefined, - dir=Undefined, discreteBandSize=Undefined, dx=Undefined, dy=Undefined, - ellipsis=Undefined, fill=Undefined, fillOpacity=Undefined, filled=Undefined, - font=Undefined, fontSize=Undefined, fontStyle=Undefined, fontWeight=Undefined, - height=Undefined, href=Undefined, innerRadius=Undefined, interpolate=Undefined, - invalid=Undefined, limit=Undefined, line=Undefined, lineBreak=Undefined, - lineHeight=Undefined, opacity=Undefined, order=Undefined, orient=Undefined, - outerRadius=Undefined, padAngle=Undefined, point=Undefined, radius=Undefined, - radius2=Undefined, radius2Offset=Undefined, radiusOffset=Undefined, shape=Undefined, - size=Undefined, smooth=Undefined, stroke=Undefined, strokeCap=Undefined, - strokeDash=Undefined, strokeDashOffset=Undefined, strokeJoin=Undefined, - strokeMiterLimit=Undefined, strokeOffset=Undefined, strokeOpacity=Undefined, - strokeWidth=Undefined, style=Undefined, tension=Undefined, text=Undefined, - theta=Undefined, theta2=Undefined, theta2Offset=Undefined, thetaOffset=Undefined, - thickness=Undefined, timeUnitBandPosition=Undefined, timeUnitBandSize=Undefined, - tooltip=Undefined, url=Undefined, width=Undefined, x=Undefined, x2=Undefined, - x2Offset=Undefined, xOffset=Undefined, y=Undefined, y2=Undefined, - y2Offset=Undefined, yOffset=Undefined, **kwds) -> Self: - """Set the chart's mark to 'circle' (see :class:`MarkDef`) - """ - kwds = dict(align=align, angle=angle, aria=aria, ariaRole=ariaRole, - ariaRoleDescription=ariaRoleDescription, aspect=aspect, bandSize=bandSize, - baseline=baseline, binSpacing=binSpacing, blend=blend, clip=clip, color=color, - continuousBandSize=continuousBandSize, cornerRadius=cornerRadius, - cornerRadiusBottomLeft=cornerRadiusBottomLeft, - cornerRadiusBottomRight=cornerRadiusBottomRight, cornerRadiusEnd=cornerRadiusEnd, - cornerRadiusTopLeft=cornerRadiusTopLeft, cornerRadiusTopRight=cornerRadiusTopRight, - cursor=cursor, description=description, dir=dir, discreteBandSize=discreteBandSize, - dx=dx, dy=dy, ellipsis=ellipsis, fill=fill, fillOpacity=fillOpacity, filled=filled, - font=font, fontSize=fontSize, fontStyle=fontStyle, fontWeight=fontWeight, - height=height, href=href, innerRadius=innerRadius, interpolate=interpolate, - invalid=invalid, limit=limit, line=line, lineBreak=lineBreak, lineHeight=lineHeight, - opacity=opacity, order=order, orient=orient, outerRadius=outerRadius, - padAngle=padAngle, point=point, radius=radius, radius2=radius2, - radius2Offset=radius2Offset, radiusOffset=radiusOffset, shape=shape, size=size, - smooth=smooth, stroke=stroke, strokeCap=strokeCap, strokeDash=strokeDash, - strokeDashOffset=strokeDashOffset, strokeJoin=strokeJoin, - strokeMiterLimit=strokeMiterLimit, strokeOffset=strokeOffset, - strokeOpacity=strokeOpacity, strokeWidth=strokeWidth, style=style, tension=tension, - text=text, theta=theta, theta2=theta2, theta2Offset=theta2Offset, - thetaOffset=thetaOffset, thickness=thickness, - timeUnitBandPosition=timeUnitBandPosition, timeUnitBandSize=timeUnitBandSize, - tooltip=tooltip, url=url, width=width, x=x, x2=x2, x2Offset=x2Offset, - xOffset=xOffset, y=y, y2=y2, y2Offset=y2Offset, yOffset=yOffset, **kwds) - copy = self.copy(deep=False) - if any(val is not Undefined for val in kwds.values()): - copy.mark = core.MarkDef(type="circle", **kwds) - else: - copy.mark = "circle" - return copy - - def mark_square(self, align=Undefined, angle=Undefined, aria=Undefined, ariaRole=Undefined, - ariaRoleDescription=Undefined, aspect=Undefined, bandSize=Undefined, - baseline=Undefined, binSpacing=Undefined, blend=Undefined, clip=Undefined, - color=Undefined, continuousBandSize=Undefined, cornerRadius=Undefined, - cornerRadiusBottomLeft=Undefined, cornerRadiusBottomRight=Undefined, - cornerRadiusEnd=Undefined, cornerRadiusTopLeft=Undefined, - cornerRadiusTopRight=Undefined, cursor=Undefined, description=Undefined, - dir=Undefined, discreteBandSize=Undefined, dx=Undefined, dy=Undefined, - ellipsis=Undefined, fill=Undefined, fillOpacity=Undefined, filled=Undefined, - font=Undefined, fontSize=Undefined, fontStyle=Undefined, fontWeight=Undefined, - height=Undefined, href=Undefined, innerRadius=Undefined, interpolate=Undefined, - invalid=Undefined, limit=Undefined, line=Undefined, lineBreak=Undefined, - lineHeight=Undefined, opacity=Undefined, order=Undefined, orient=Undefined, - outerRadius=Undefined, padAngle=Undefined, point=Undefined, radius=Undefined, - radius2=Undefined, radius2Offset=Undefined, radiusOffset=Undefined, shape=Undefined, - size=Undefined, smooth=Undefined, stroke=Undefined, strokeCap=Undefined, - strokeDash=Undefined, strokeDashOffset=Undefined, strokeJoin=Undefined, - strokeMiterLimit=Undefined, strokeOffset=Undefined, strokeOpacity=Undefined, - strokeWidth=Undefined, style=Undefined, tension=Undefined, text=Undefined, - theta=Undefined, theta2=Undefined, theta2Offset=Undefined, thetaOffset=Undefined, - thickness=Undefined, timeUnitBandPosition=Undefined, timeUnitBandSize=Undefined, - tooltip=Undefined, url=Undefined, width=Undefined, x=Undefined, x2=Undefined, - x2Offset=Undefined, xOffset=Undefined, y=Undefined, y2=Undefined, - y2Offset=Undefined, yOffset=Undefined, **kwds) -> Self: - """Set the chart's mark to 'square' (see :class:`MarkDef`) - """ - kwds = dict(align=align, angle=angle, aria=aria, ariaRole=ariaRole, - ariaRoleDescription=ariaRoleDescription, aspect=aspect, bandSize=bandSize, - baseline=baseline, binSpacing=binSpacing, blend=blend, clip=clip, color=color, - continuousBandSize=continuousBandSize, cornerRadius=cornerRadius, - cornerRadiusBottomLeft=cornerRadiusBottomLeft, - cornerRadiusBottomRight=cornerRadiusBottomRight, cornerRadiusEnd=cornerRadiusEnd, - cornerRadiusTopLeft=cornerRadiusTopLeft, cornerRadiusTopRight=cornerRadiusTopRight, - cursor=cursor, description=description, dir=dir, discreteBandSize=discreteBandSize, - dx=dx, dy=dy, ellipsis=ellipsis, fill=fill, fillOpacity=fillOpacity, filled=filled, - font=font, fontSize=fontSize, fontStyle=fontStyle, fontWeight=fontWeight, - height=height, href=href, innerRadius=innerRadius, interpolate=interpolate, - invalid=invalid, limit=limit, line=line, lineBreak=lineBreak, lineHeight=lineHeight, - opacity=opacity, order=order, orient=orient, outerRadius=outerRadius, - padAngle=padAngle, point=point, radius=radius, radius2=radius2, - radius2Offset=radius2Offset, radiusOffset=radiusOffset, shape=shape, size=size, - smooth=smooth, stroke=stroke, strokeCap=strokeCap, strokeDash=strokeDash, - strokeDashOffset=strokeDashOffset, strokeJoin=strokeJoin, - strokeMiterLimit=strokeMiterLimit, strokeOffset=strokeOffset, - strokeOpacity=strokeOpacity, strokeWidth=strokeWidth, style=style, tension=tension, - text=text, theta=theta, theta2=theta2, theta2Offset=theta2Offset, - thetaOffset=thetaOffset, thickness=thickness, - timeUnitBandPosition=timeUnitBandPosition, timeUnitBandSize=timeUnitBandSize, - tooltip=tooltip, url=url, width=width, x=x, x2=x2, x2Offset=x2Offset, - xOffset=xOffset, y=y, y2=y2, y2Offset=y2Offset, yOffset=yOffset, **kwds) - copy = self.copy(deep=False) - if any(val is not Undefined for val in kwds.values()): - copy.mark = core.MarkDef(type="square", **kwds) - else: - copy.mark = "square" - return copy - - def mark_geoshape(self, align=Undefined, angle=Undefined, aria=Undefined, ariaRole=Undefined, - ariaRoleDescription=Undefined, aspect=Undefined, bandSize=Undefined, - baseline=Undefined, binSpacing=Undefined, blend=Undefined, clip=Undefined, - color=Undefined, continuousBandSize=Undefined, cornerRadius=Undefined, - cornerRadiusBottomLeft=Undefined, cornerRadiusBottomRight=Undefined, - cornerRadiusEnd=Undefined, cornerRadiusTopLeft=Undefined, - cornerRadiusTopRight=Undefined, cursor=Undefined, description=Undefined, - dir=Undefined, discreteBandSize=Undefined, dx=Undefined, dy=Undefined, - ellipsis=Undefined, fill=Undefined, fillOpacity=Undefined, filled=Undefined, - font=Undefined, fontSize=Undefined, fontStyle=Undefined, fontWeight=Undefined, - height=Undefined, href=Undefined, innerRadius=Undefined, interpolate=Undefined, - invalid=Undefined, limit=Undefined, line=Undefined, lineBreak=Undefined, - lineHeight=Undefined, opacity=Undefined, order=Undefined, orient=Undefined, - outerRadius=Undefined, padAngle=Undefined, point=Undefined, radius=Undefined, - radius2=Undefined, radius2Offset=Undefined, radiusOffset=Undefined, - shape=Undefined, size=Undefined, smooth=Undefined, stroke=Undefined, - strokeCap=Undefined, strokeDash=Undefined, strokeDashOffset=Undefined, - strokeJoin=Undefined, strokeMiterLimit=Undefined, strokeOffset=Undefined, - strokeOpacity=Undefined, strokeWidth=Undefined, style=Undefined, - tension=Undefined, text=Undefined, theta=Undefined, theta2=Undefined, - theta2Offset=Undefined, thetaOffset=Undefined, thickness=Undefined, - timeUnitBandPosition=Undefined, timeUnitBandSize=Undefined, tooltip=Undefined, - url=Undefined, width=Undefined, x=Undefined, x2=Undefined, x2Offset=Undefined, - xOffset=Undefined, y=Undefined, y2=Undefined, y2Offset=Undefined, - yOffset=Undefined, **kwds) -> Self: - """Set the chart's mark to 'geoshape' (see :class:`MarkDef`) - """ - kwds = dict(align=align, angle=angle, aria=aria, ariaRole=ariaRole, - ariaRoleDescription=ariaRoleDescription, aspect=aspect, bandSize=bandSize, - baseline=baseline, binSpacing=binSpacing, blend=blend, clip=clip, color=color, - continuousBandSize=continuousBandSize, cornerRadius=cornerRadius, - cornerRadiusBottomLeft=cornerRadiusBottomLeft, - cornerRadiusBottomRight=cornerRadiusBottomRight, cornerRadiusEnd=cornerRadiusEnd, - cornerRadiusTopLeft=cornerRadiusTopLeft, cornerRadiusTopRight=cornerRadiusTopRight, - cursor=cursor, description=description, dir=dir, discreteBandSize=discreteBandSize, - dx=dx, dy=dy, ellipsis=ellipsis, fill=fill, fillOpacity=fillOpacity, filled=filled, - font=font, fontSize=fontSize, fontStyle=fontStyle, fontWeight=fontWeight, - height=height, href=href, innerRadius=innerRadius, interpolate=interpolate, - invalid=invalid, limit=limit, line=line, lineBreak=lineBreak, lineHeight=lineHeight, - opacity=opacity, order=order, orient=orient, outerRadius=outerRadius, - padAngle=padAngle, point=point, radius=radius, radius2=radius2, - radius2Offset=radius2Offset, radiusOffset=radiusOffset, shape=shape, size=size, - smooth=smooth, stroke=stroke, strokeCap=strokeCap, strokeDash=strokeDash, - strokeDashOffset=strokeDashOffset, strokeJoin=strokeJoin, - strokeMiterLimit=strokeMiterLimit, strokeOffset=strokeOffset, - strokeOpacity=strokeOpacity, strokeWidth=strokeWidth, style=style, tension=tension, - text=text, theta=theta, theta2=theta2, theta2Offset=theta2Offset, - thetaOffset=thetaOffset, thickness=thickness, - timeUnitBandPosition=timeUnitBandPosition, timeUnitBandSize=timeUnitBandSize, - tooltip=tooltip, url=url, width=width, x=x, x2=x2, x2Offset=x2Offset, - xOffset=xOffset, y=y, y2=y2, y2Offset=y2Offset, yOffset=yOffset, **kwds) - copy = self.copy(deep=False) - if any(val is not Undefined for val in kwds.values()): - copy.mark = core.MarkDef(type="geoshape", **kwds) - else: - copy.mark = "geoshape" - return copy - - def mark_boxplot(self, box=Undefined, clip=Undefined, color=Undefined, extent=Undefined, - invalid=Undefined, median=Undefined, opacity=Undefined, orient=Undefined, - outliers=Undefined, rule=Undefined, size=Undefined, ticks=Undefined, **kwds) -> Self: - """Set the chart's mark to 'boxplot' (see :class:`BoxPlotDef`) - """ - kwds = dict(box=box, clip=clip, color=color, extent=extent, invalid=invalid, median=median, - opacity=opacity, orient=orient, outliers=outliers, rule=rule, size=size, - ticks=ticks, **kwds) - copy = self.copy(deep=False) - if any(val is not Undefined for val in kwds.values()): - copy.mark = core.BoxPlotDef(type="boxplot", **kwds) - else: - copy.mark = "boxplot" - return copy - - def mark_errorbar(self, clip=Undefined, color=Undefined, extent=Undefined, opacity=Undefined, - orient=Undefined, rule=Undefined, size=Undefined, thickness=Undefined, - ticks=Undefined, **kwds) -> Self: - """Set the chart's mark to 'errorbar' (see :class:`ErrorBarDef`) - """ - kwds = dict(clip=clip, color=color, extent=extent, opacity=opacity, orient=orient, rule=rule, - size=size, thickness=thickness, ticks=ticks, **kwds) - copy = self.copy(deep=False) - if any(val is not Undefined for val in kwds.values()): - copy.mark = core.ErrorBarDef(type="errorbar", **kwds) - else: - copy.mark = "errorbar" - return copy - - def mark_errorband(self, band=Undefined, borders=Undefined, clip=Undefined, color=Undefined, - extent=Undefined, interpolate=Undefined, opacity=Undefined, orient=Undefined, - tension=Undefined, **kwds) -> Self: - """Set the chart's mark to 'errorband' (see :class:`ErrorBandDef`) - """ - kwds = dict(band=band, borders=borders, clip=clip, color=color, extent=extent, - interpolate=interpolate, opacity=opacity, orient=orient, tension=tension, **kwds) - copy = self.copy(deep=False) - if any(val is not Undefined for val in kwds.values()): - copy.mark = core.ErrorBandDef(type="errorband", **kwds) - else: - copy.mark = "errorband" - return copy - - -class ConfigMethodMixin: - """A mixin class that defines config methods""" - - @use_signature(core.Config) - def configure(self, *args, **kwargs) -> Self: - copy = self.copy(deep=False) - copy.config = core.Config(*args, **kwargs) - return copy - - @use_signature(core.RectConfig) - def configure_arc(self, *args, **kwargs) -> Self: - copy = self.copy(deep=['config']) - if copy.config is Undefined: - copy.config = core.Config() - copy.config["arc"] = core.RectConfig(*args, **kwargs) - return copy - - @use_signature(core.AreaConfig) - def configure_area(self, *args, **kwargs) -> Self: - copy = self.copy(deep=['config']) - if copy.config is Undefined: - copy.config = core.Config() - copy.config["area"] = core.AreaConfig(*args, **kwargs) - return copy - - @use_signature(core.AxisConfig) - def configure_axis(self, *args, **kwargs) -> Self: - copy = self.copy(deep=['config']) - if copy.config is Undefined: - copy.config = core.Config() - copy.config["axis"] = core.AxisConfig(*args, **kwargs) - return copy - - @use_signature(core.AxisConfig) - def configure_axisBand(self, *args, **kwargs) -> Self: - copy = self.copy(deep=['config']) - if copy.config is Undefined: - copy.config = core.Config() - copy.config["axisBand"] = core.AxisConfig(*args, **kwargs) - return copy - - @use_signature(core.AxisConfig) - def configure_axisBottom(self, *args, **kwargs) -> Self: - copy = self.copy(deep=['config']) - if copy.config is Undefined: - copy.config = core.Config() - copy.config["axisBottom"] = core.AxisConfig(*args, **kwargs) - return copy - - @use_signature(core.AxisConfig) - def configure_axisDiscrete(self, *args, **kwargs) -> Self: - copy = self.copy(deep=['config']) - if copy.config is Undefined: - copy.config = core.Config() - copy.config["axisDiscrete"] = core.AxisConfig(*args, **kwargs) - return copy - - @use_signature(core.AxisConfig) - def configure_axisLeft(self, *args, **kwargs) -> Self: - copy = self.copy(deep=['config']) - if copy.config is Undefined: - copy.config = core.Config() - copy.config["axisLeft"] = core.AxisConfig(*args, **kwargs) - return copy - - @use_signature(core.AxisConfig) - def configure_axisPoint(self, *args, **kwargs) -> Self: - copy = self.copy(deep=['config']) - if copy.config is Undefined: - copy.config = core.Config() - copy.config["axisPoint"] = core.AxisConfig(*args, **kwargs) - return copy - - @use_signature(core.AxisConfig) - def configure_axisQuantitative(self, *args, **kwargs) -> Self: - copy = self.copy(deep=['config']) - if copy.config is Undefined: - copy.config = core.Config() - copy.config["axisQuantitative"] = core.AxisConfig(*args, **kwargs) - return copy - - @use_signature(core.AxisConfig) - def configure_axisRight(self, *args, **kwargs) -> Self: - copy = self.copy(deep=['config']) - if copy.config is Undefined: - copy.config = core.Config() - copy.config["axisRight"] = core.AxisConfig(*args, **kwargs) - return copy - - @use_signature(core.AxisConfig) - def configure_axisTemporal(self, *args, **kwargs) -> Self: - copy = self.copy(deep=['config']) - if copy.config is Undefined: - copy.config = core.Config() - copy.config["axisTemporal"] = core.AxisConfig(*args, **kwargs) - return copy - - @use_signature(core.AxisConfig) - def configure_axisTop(self, *args, **kwargs) -> Self: - copy = self.copy(deep=['config']) - if copy.config is Undefined: - copy.config = core.Config() - copy.config["axisTop"] = core.AxisConfig(*args, **kwargs) - return copy - - @use_signature(core.AxisConfig) - def configure_axisX(self, *args, **kwargs) -> Self: - copy = self.copy(deep=['config']) - if copy.config is Undefined: - copy.config = core.Config() - copy.config["axisX"] = core.AxisConfig(*args, **kwargs) - return copy - - @use_signature(core.AxisConfig) - def configure_axisXBand(self, *args, **kwargs) -> Self: - copy = self.copy(deep=['config']) - if copy.config is Undefined: - copy.config = core.Config() - copy.config["axisXBand"] = core.AxisConfig(*args, **kwargs) - return copy - - @use_signature(core.AxisConfig) - def configure_axisXDiscrete(self, *args, **kwargs) -> Self: - copy = self.copy(deep=['config']) - if copy.config is Undefined: - copy.config = core.Config() - copy.config["axisXDiscrete"] = core.AxisConfig(*args, **kwargs) - return copy - - @use_signature(core.AxisConfig) - def configure_axisXPoint(self, *args, **kwargs) -> Self: - copy = self.copy(deep=['config']) - if copy.config is Undefined: - copy.config = core.Config() - copy.config["axisXPoint"] = core.AxisConfig(*args, **kwargs) - return copy - - @use_signature(core.AxisConfig) - def configure_axisXQuantitative(self, *args, **kwargs) -> Self: - copy = self.copy(deep=['config']) - if copy.config is Undefined: - copy.config = core.Config() - copy.config["axisXQuantitative"] = core.AxisConfig(*args, **kwargs) - return copy - - @use_signature(core.AxisConfig) - def configure_axisXTemporal(self, *args, **kwargs) -> Self: - copy = self.copy(deep=['config']) - if copy.config is Undefined: - copy.config = core.Config() - copy.config["axisXTemporal"] = core.AxisConfig(*args, **kwargs) - return copy - - @use_signature(core.AxisConfig) - def configure_axisY(self, *args, **kwargs) -> Self: - copy = self.copy(deep=['config']) - if copy.config is Undefined: - copy.config = core.Config() - copy.config["axisY"] = core.AxisConfig(*args, **kwargs) - return copy - - @use_signature(core.AxisConfig) - def configure_axisYBand(self, *args, **kwargs) -> Self: - copy = self.copy(deep=['config']) - if copy.config is Undefined: - copy.config = core.Config() - copy.config["axisYBand"] = core.AxisConfig(*args, **kwargs) - return copy - - @use_signature(core.AxisConfig) - def configure_axisYDiscrete(self, *args, **kwargs) -> Self: - copy = self.copy(deep=['config']) - if copy.config is Undefined: - copy.config = core.Config() - copy.config["axisYDiscrete"] = core.AxisConfig(*args, **kwargs) - return copy - - @use_signature(core.AxisConfig) - def configure_axisYPoint(self, *args, **kwargs) -> Self: - copy = self.copy(deep=['config']) - if copy.config is Undefined: - copy.config = core.Config() - copy.config["axisYPoint"] = core.AxisConfig(*args, **kwargs) - return copy - - @use_signature(core.AxisConfig) - def configure_axisYQuantitative(self, *args, **kwargs) -> Self: - copy = self.copy(deep=['config']) - if copy.config is Undefined: - copy.config = core.Config() - copy.config["axisYQuantitative"] = core.AxisConfig(*args, **kwargs) - return copy - - @use_signature(core.AxisConfig) - def configure_axisYTemporal(self, *args, **kwargs) -> Self: - copy = self.copy(deep=['config']) - if copy.config is Undefined: - copy.config = core.Config() - copy.config["axisYTemporal"] = core.AxisConfig(*args, **kwargs) - return copy - - @use_signature(core.BarConfig) - def configure_bar(self, *args, **kwargs) -> Self: - copy = self.copy(deep=['config']) - if copy.config is Undefined: - copy.config = core.Config() - copy.config["bar"] = core.BarConfig(*args, **kwargs) - return copy - - @use_signature(core.BoxPlotConfig) - def configure_boxplot(self, *args, **kwargs) -> Self: - copy = self.copy(deep=['config']) - if copy.config is Undefined: - copy.config = core.Config() - copy.config["boxplot"] = core.BoxPlotConfig(*args, **kwargs) - return copy - - @use_signature(core.MarkConfig) - def configure_circle(self, *args, **kwargs) -> Self: - copy = self.copy(deep=['config']) - if copy.config is Undefined: - copy.config = core.Config() - copy.config["circle"] = core.MarkConfig(*args, **kwargs) - return copy - - @use_signature(core.CompositionConfig) - def configure_concat(self, *args, **kwargs) -> Self: - copy = self.copy(deep=['config']) - if copy.config is Undefined: - copy.config = core.Config() - copy.config["concat"] = core.CompositionConfig(*args, **kwargs) - return copy - - @use_signature(core.ErrorBandConfig) - def configure_errorband(self, *args, **kwargs) -> Self: - copy = self.copy(deep=['config']) - if copy.config is Undefined: - copy.config = core.Config() - copy.config["errorband"] = core.ErrorBandConfig(*args, **kwargs) - return copy - - @use_signature(core.ErrorBarConfig) - def configure_errorbar(self, *args, **kwargs) -> Self: - copy = self.copy(deep=['config']) - if copy.config is Undefined: - copy.config = core.Config() - copy.config["errorbar"] = core.ErrorBarConfig(*args, **kwargs) - return copy - - @use_signature(core.CompositionConfig) - def configure_facet(self, *args, **kwargs) -> Self: - copy = self.copy(deep=['config']) - if copy.config is Undefined: - copy.config = core.Config() - copy.config["facet"] = core.CompositionConfig(*args, **kwargs) - return copy - - @use_signature(core.MarkConfig) - def configure_geoshape(self, *args, **kwargs) -> Self: - copy = self.copy(deep=['config']) - if copy.config is Undefined: - copy.config = core.Config() - copy.config["geoshape"] = core.MarkConfig(*args, **kwargs) - return copy - - @use_signature(core.HeaderConfig) - def configure_header(self, *args, **kwargs) -> Self: - copy = self.copy(deep=['config']) - if copy.config is Undefined: - copy.config = core.Config() - copy.config["header"] = core.HeaderConfig(*args, **kwargs) - return copy - - @use_signature(core.HeaderConfig) - def configure_headerColumn(self, *args, **kwargs) -> Self: - copy = self.copy(deep=['config']) - if copy.config is Undefined: - copy.config = core.Config() - copy.config["headerColumn"] = core.HeaderConfig(*args, **kwargs) - return copy - - @use_signature(core.HeaderConfig) - def configure_headerFacet(self, *args, **kwargs) -> Self: - copy = self.copy(deep=['config']) - if copy.config is Undefined: - copy.config = core.Config() - copy.config["headerFacet"] = core.HeaderConfig(*args, **kwargs) - return copy - - @use_signature(core.HeaderConfig) - def configure_headerRow(self, *args, **kwargs) -> Self: - copy = self.copy(deep=['config']) - if copy.config is Undefined: - copy.config = core.Config() - copy.config["headerRow"] = core.HeaderConfig(*args, **kwargs) - return copy - - @use_signature(core.RectConfig) - def configure_image(self, *args, **kwargs) -> Self: - copy = self.copy(deep=['config']) - if copy.config is Undefined: - copy.config = core.Config() - copy.config["image"] = core.RectConfig(*args, **kwargs) - return copy - - @use_signature(core.LegendConfig) - def configure_legend(self, *args, **kwargs) -> Self: - copy = self.copy(deep=['config']) - if copy.config is Undefined: - copy.config = core.Config() - copy.config["legend"] = core.LegendConfig(*args, **kwargs) - return copy - - @use_signature(core.LineConfig) - def configure_line(self, *args, **kwargs) -> Self: - copy = self.copy(deep=['config']) - if copy.config is Undefined: - copy.config = core.Config() - copy.config["line"] = core.LineConfig(*args, **kwargs) - return copy - - @use_signature(core.MarkConfig) - def configure_mark(self, *args, **kwargs) -> Self: - copy = self.copy(deep=['config']) - if copy.config is Undefined: - copy.config = core.Config() - copy.config["mark"] = core.MarkConfig(*args, **kwargs) - return copy - - @use_signature(core.MarkConfig) - def configure_point(self, *args, **kwargs) -> Self: - copy = self.copy(deep=['config']) - if copy.config is Undefined: - copy.config = core.Config() - copy.config["point"] = core.MarkConfig(*args, **kwargs) - return copy - - @use_signature(core.ProjectionConfig) - def configure_projection(self, *args, **kwargs) -> Self: - copy = self.copy(deep=['config']) - if copy.config is Undefined: - copy.config = core.Config() - copy.config["projection"] = core.ProjectionConfig(*args, **kwargs) - return copy - - @use_signature(core.RangeConfig) - def configure_range(self, *args, **kwargs) -> Self: - copy = self.copy(deep=['config']) - if copy.config is Undefined: - copy.config = core.Config() - copy.config["range"] = core.RangeConfig(*args, **kwargs) - return copy - - @use_signature(core.RectConfig) - def configure_rect(self, *args, **kwargs) -> Self: - copy = self.copy(deep=['config']) - if copy.config is Undefined: - copy.config = core.Config() - copy.config["rect"] = core.RectConfig(*args, **kwargs) - return copy - - @use_signature(core.MarkConfig) - def configure_rule(self, *args, **kwargs) -> Self: - copy = self.copy(deep=['config']) - if copy.config is Undefined: - copy.config = core.Config() - copy.config["rule"] = core.MarkConfig(*args, **kwargs) - return copy - - @use_signature(core.ScaleConfig) - def configure_scale(self, *args, **kwargs) -> Self: - copy = self.copy(deep=['config']) - if copy.config is Undefined: - copy.config = core.Config() - copy.config["scale"] = core.ScaleConfig(*args, **kwargs) - return copy - - @use_signature(core.SelectionConfig) - def configure_selection(self, *args, **kwargs) -> Self: - copy = self.copy(deep=['config']) - if copy.config is Undefined: - copy.config = core.Config() - copy.config["selection"] = core.SelectionConfig(*args, **kwargs) - return copy - - @use_signature(core.MarkConfig) - def configure_square(self, *args, **kwargs) -> Self: - copy = self.copy(deep=['config']) - if copy.config is Undefined: - copy.config = core.Config() - copy.config["square"] = core.MarkConfig(*args, **kwargs) - return copy - - @use_signature(core.MarkConfig) - def configure_text(self, *args, **kwargs) -> Self: - copy = self.copy(deep=['config']) - if copy.config is Undefined: - copy.config = core.Config() - copy.config["text"] = core.MarkConfig(*args, **kwargs) - return copy - - @use_signature(core.TickConfig) - def configure_tick(self, *args, **kwargs) -> Self: - copy = self.copy(deep=['config']) - if copy.config is Undefined: - copy.config = core.Config() - copy.config["tick"] = core.TickConfig(*args, **kwargs) - return copy - - @use_signature(core.TitleConfig) - def configure_title(self, *args, **kwargs) -> Self: - copy = self.copy(deep=['config']) - if copy.config is Undefined: - copy.config = core.Config() - copy.config["title"] = core.TitleConfig(*args, **kwargs) - return copy - - @use_signature(core.LineConfig) - def configure_trail(self, *args, **kwargs) -> Self: - copy = self.copy(deep=['config']) - if copy.config is Undefined: - copy.config = core.Config() - copy.config["trail"] = core.LineConfig(*args, **kwargs) - return copy - - @use_signature(core.ViewConfig) - def configure_view(self, *args, **kwargs) -> Self: - copy = self.copy(deep=['config']) - if copy.config is Undefined: - copy.config = core.Config() - copy.config["view"] = core.ViewConfig(*args, **kwargs) - return copy \ No newline at end of file diff --git a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/aarch64/fft_init_aarch64.c b/spaces/colakin/video-generater/public/ffmpeg/libavcodec/aarch64/fft_init_aarch64.c deleted file mode 100644 index 77f56079608e26140633706c87f52b9f239f7756..0000000000000000000000000000000000000000 --- a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/aarch64/fft_init_aarch64.c +++ /dev/null @@ -1,52 +0,0 @@ -/* - * Copyright (c) 2009 Mans Rullgard - * - * This file is part of FFmpeg. - * - * FFmpeg is free software; you can redistribute it and/or - * modify it under the terms of the GNU Lesser General Public - * License as published by the Free Software Foundation; either - * version 2.1 of the License, or (at your option) any later version. - * - * FFmpeg is distributed in the hope that it will be useful, - * but WITHOUT ANY WARRANTY; without even the implied warranty of - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU - * Lesser General Public License for more details. - * - * You should have received a copy of the GNU Lesser General Public - * License along with FFmpeg; if not, write to the Free Software - * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA - */ - -#include "config.h" - -#include "libavutil/attributes.h" -#include "libavutil/cpu.h" -#include "libavutil/aarch64/cpu.h" - -#include "libavcodec/fft.h" - -void ff_fft_permute_neon(FFTContext *s, FFTComplex *z); -void ff_fft_calc_neon(FFTContext *s, FFTComplex *z); - -void ff_imdct_calc_neon(FFTContext *s, FFTSample *output, const FFTSample *input); -void ff_imdct_half_neon(FFTContext *s, FFTSample *output, const FFTSample *input); -void ff_mdct_calc_neon(FFTContext *s, FFTSample *output, const FFTSample *input); - -av_cold void ff_fft_init_aarch64(FFTContext *s) -{ - int cpu_flags = av_get_cpu_flags(); - - if (have_neon(cpu_flags)) { - if (s->nbits < 17) { - s->fft_permute = ff_fft_permute_neon; - s->fft_calc = ff_fft_calc_neon; - } -#if CONFIG_MDCT - s->imdct_calc = ff_imdct_calc_neon; - s->imdct_half = ff_imdct_half_neon; - s->mdct_calc = ff_mdct_calc_neon; - s->mdct_permutation = FF_MDCT_PERM_INTERLEAVE; -#endif - } -} diff --git a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/fflcms2.h b/spaces/colakin/video-generater/public/ffmpeg/libavcodec/fflcms2.h deleted file mode 100644 index af63c9a13c8fc1a7482f278d35c8d37b90c335d7..0000000000000000000000000000000000000000 --- a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/fflcms2.h +++ /dev/null @@ -1,87 +0,0 @@ -/* - * Copyright (c) 2022 Niklas Haas - * This file is part of FFmpeg. - * - * FFmpeg is free software; you can redistribute it and/or - * modify it under the terms of the GNU Lesser General Public - * License as published by the Free Software Foundation; either - * version 2.1 of the License, or (at your option) any later version. - * - * FFmpeg is distributed in the hope that it will be useful, - * but WITHOUT ANY WARRANTY; without even the implied warranty of - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU - * Lesser General Public License for more details. - * - * You should have received a copy of the GNU Lesser General Public - * License along with FFmpeg; if not, write to the Free Software - * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA - */ - -/** - * @file - * Various functions for dealing with ICC profiles - */ - -#ifndef AVCODEC_FFLCMS2_H -#define AVCODEC_FFLCMS2_H - -#include "libavutil/csp.h" -#include "libavutil/frame.h" -#include "libavutil/pixfmt.h" - -#include - -typedef struct FFIccContext { - void *avctx; - cmsContext ctx; - cmsToneCurve *curves[AVCOL_TRC_NB]; /* tone curve cache */ -} FFIccContext; - -/** - * Initializes an FFIccContext. This must be done prior to using it. - * - * Returns 0 on success, or a negative error code. - */ -int ff_icc_context_init(FFIccContext *s, void *avctx); -void ff_icc_context_uninit(FFIccContext *s); - -/** - * Generate an ICC profile for a given combination of color primaries and - * transfer function. Both values must be set to valid entries (not - * "undefined") for this function to work. - * - * Returns 0 on success, or a negative error code. - */ -int ff_icc_profile_generate(FFIccContext *s, - enum AVColorPrimaries color_prim, - enum AVColorTransferCharacteristic color_trc, - cmsHPROFILE *out_profile); - -/** - * Attach an ICC profile to a frame. Helper wrapper around cmsSaveProfileToMem - * and av_frame_new_side_data_from_buf. - * - * Returns 0 on success, or a negative error code. - */ -int ff_icc_profile_attach(FFIccContext *s, cmsHPROFILE profile, AVFrame *frame); - -/** - * Read the color primaries and white point coefficients encoded by an ICC - * profile, and return the raw values in `out_primaries`. - * - * Returns 0 on success, or a negative error code. - */ -int ff_icc_profile_read_primaries(FFIccContext *s, cmsHPROFILE profile, - AVColorPrimariesDesc *out_primaries); - -/** - * Attempt detecting the transfer characteristic that best approximates the - * transfer function encoded by an ICC profile. Sets `out_trc` to - * AVCOL_TRC_UNSPECIFIED if no clear match can be identified. - * - * Returns 0 on success (including no match), or a negative error code. - */ -int ff_icc_profile_detect_transfer(FFIccContext *s, cmsHPROFILE profile, - enum AVColorTransferCharacteristic *out_trc); - -#endif /* AVCODEC_FFLCMS2_H */ diff --git a/spaces/congsaPfin/Manga-OCR/logs/Car Parking Multiplayer 4.8.2 Mod The Best Parking Game for Android.md b/spaces/congsaPfin/Manga-OCR/logs/Car Parking Multiplayer 4.8.2 Mod The Best Parking Game for Android.md deleted file mode 100644 index b50088b0e940cb429ce2518ff035ed4e7bdef794..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/Car Parking Multiplayer 4.8.2 Mod The Best Parking Game for Android.md +++ /dev/null @@ -1,114 +0,0 @@ - -

    Download Game Car Parking Multiplayer Mod APK 4.8.2

    -

    If you are a fan of car parking games, you might want to try Car Parking Multiplayer, a realistic and fun simulation game that lets you drive, park, and customize your own cars in an open world environment. And if you want to enjoy the game with unlimited money, unlocked cars, and premium features, you can download the modded version of the game, Car Parking Multiplayer Mod APK 4.8.2, from the link below.

    -

    What is Car Parking Multiplayer?

    -

    Car Parking Multiplayer is a simulation game developed by olzhass, a popular game studio that specializes in car games. The game has over 100 million downloads on Google Play Store and has a rating of 4.3 out of 5 stars. The game is designed to provide a realistic and immersive car parking experience, with over 200 different cars to choose from, realistic car physics and graphics, open world map with various locations, multiplayer mode with voice chat, and customizable cars and garages.

    -

    download game car parking multiplayer mod apk 4.8.2


    DOWNLOAD ✯✯✯ https://urlca.com/2uO6bu



    -

    Features of Car Parking Multiplayer

    -

    Realistic car physics and graphics

    -

    One of the main attractions of Car Parking Multiplayer is its realistic car physics and graphics, which make the game more challenging and enjoyable. The game features different types of cars, such as sedans, SUVs, trucks, sports cars, and supercars, each with their own characteristics and performance. The game also has realistic sound effects, such as engine noise, horn, brake, and tire screech. The game also has dynamic weather and day-night cycle, which affect the visibility and driving conditions. -

    Open world map and multiplayer mode

    -

    Another feature of Car Parking Multiplayer is its open world map and multiplayer mode, which allow you to explore different locations and interact with other players online. The game has a large map with various places to park your car, such as parking lots, gas stations, airports, highways, cities, deserts, and more. You can also join or create your own server and play with your friends or other players from around the world. You can chat with them using voice chat or text chat, race with them, exchange cars with them, or even prank them by blocking their way or honking at them. -

    Customizable cars and garages

    -

    The last feature of Car Parking Multiplayer is its customizable cars and garages, which allow you to personalize your cars according to your preferences. You can change the color, wheels, suspension, engine, exhaust, spoiler, lights, stickers, license plate, and more of your cars. You can also upgrade your cars to improve their performance and speed. You can also buy new cars or sell your old ones using the in-game currency. You can also decorate your garage with various items, such as posters, flags, furniture, tools, etc. -

    How to download and install Car Parking Multiplayer Mod APK 4.8.2?

    -

    Requirements and permissions

    -

    To download and install Car Parking Multiplayer Mod APK 4.8.2 on your Android device, you need to have the following requirements and permissions:

    -
      -
    • Your device must have Android 5.0 or higher version.
    • -
    • Your device must have at least 1 GB of RAM and 500 MB of free storage space.
    • -
    • You need to enable the installation of apps from unknown sources in your device settings.
    • -
    • You need to grant the following permissions to the app: access to storage, microphone, location, network state

      Steps to download and install

      -

      To download and install Car Parking Multiplayer Mod APK 4.8.2 on your Android device, you need to follow these steps:

      -
        -
      1. Click on the link below to download the Car Parking Multiplayer Mod APK 4.8.2 file.
      2. -
      3. Once the download is complete, locate the file in your device's file manager and tap on it to start the installation process.
      4. -
      5. Follow the instructions on the screen and wait for the installation to finish.
      6. -
      7. Launch the game and enjoy the mod features.
      8. -
      -

      Download Car Parking Multiplayer Mod APK 4.8.2 here

      -

      How to use the mod features

      -

      To use the mod features of Car Parking Multiplayer Mod APK 4.8.2, you need to do the following:

      -
        -
      • To get unlimited money, you need to go to the shop and buy any item. Your money will increase instead of decrease.
      • -
      • To unlock all cars, you need to go to the garage and tap on the lock icon of any car. You will be able to use it without paying.
      • -
      • To access the premium features, such as no ads, free parking, and VIP chat, you need to go to the settings and tap on the premium button. You will be able to activate them without buying.
      • -
      -

      Pros and cons of Car Parking Multiplayer Mod APK 4.8.2

      -

      Pros

      -

      Some of the pros of Car Parking Multiplayer Mod APK 4.8.2 are:

      -
        -
      • You can enjoy the game with unlimited money, unlocked cars, and premium features.
      • -
      • You can customize your cars and garages according to your preferences.
      • -
      • You can explore the open world map and interact with other players online.
      • -
      • You can experience realistic car physics and graphics.
      • -
      -

      Cons

      -

      Some of the cons of Car Parking Multiplayer Mod APK 4.8.2 are:

      -
        -
      • You may face some compatibility issues with some devices or Android versions.
      • -
      • You may encounter some bugs or glitches in the game.
      • -
      • You may get banned from the game if you abuse the mod features or cheat in multiplayer mode.
      • -
      • You may lose your progress or data if you uninstall or update the game.
      • -
      -

      Conclusion

      -

      Car Parking Multiplayer is a simulation game that offers a realistic and fun car parking experience, with over 200 different cars, realistic car physics and graphics, open world map with various locations, multiplayer mode with voice chat, and customizable cars and garages. If you want to enjoy the game with unlimited money, unlocked cars, and premium features, you can download Car Parking Multiplayer Mod APK 4.8.2 from the link above and follow the steps to install it on your Android device. However, you should also be aware of the pros and cons of using the modded version of the game and use it at your own risk.

      -

      Go to [Keyword Tool](^1^) and enter your query in the search box.
      -Choose the Google domain and language that you want to target.
      -Click on the Search button and wait for the results to load.
      -You will see a list of keyword suggestions based on Google Autocomplete, along with their estimated monthly search volume, cost per click (CPC), and competition level.
      -You can filter the results by word count, include or exclude certain words, or sort them by different criteria.
      -You can also switch to the Questions tab to see keyword suggestions in the form of questions that people ask on Google.
      -How to download game car parking multiplayer mod apk 4.8.2
      -Download game car parking multiplayer mod apk 4.8.2 unlimited money
      -Download game car parking multiplayer mod apk 4.8.2 latest version
      -Download game car parking multiplayer mod apk 4.8.2 android 1
      -Download game car parking multiplayer mod apk 4.8.2 offline
      -Download game car parking multiplayer mod apk 4.8.2 rexdl
      -Download game car parking multiplayer mod apk 4.8.2 revdl
      -Download game car parking multiplayer mod apk 4.8.2 happymod
      -Download game car parking multiplayer mod apk 4.8.2 free shopping
      -Download game car parking multiplayer mod apk 4.8.2 mega mod
      -Download game car parking multiplayer mod apk 4.8.2 no root
      -Download game car parking multiplayer mod apk 4.8.2 for pc
      -Download game car parking multiplayer mod apk 4.8.2 for ios
      -Download game car parking multiplayer mod apk 4.8.2 mediafıre
      -Download game car parking multiplayer mod apk 4.8.2 google drive
      -Download game car parking multiplayer mod apk 4.8.2 update
      -Download game car parking multiplayer mod apk 4.8.2 terbaru
      -Download game car parking multiplayer mod apk 4.8.2 versi lama
      -Download game car parking multiplayer mod apk 4.8.2 cheat
      -Download game car parking multiplayer mod apk 4.8.2 hack
      -Cara download game car parking multiplayer mod apk 4.8.2
      -Link download game car parking multiplayer mod apk 4.8.2
      -Situs download game car parking multiplayer mod apk 4.8.2
      -Tempat download game car parking multiplayer mod apk 4.8.2
      -Tutorial download game car parking multiplayer mod apk 4.8.2
      -What is game car parking multiplayer mod apk 4.8.2
      -How to install game car parking multiplayer mod apk 4.8.2
      -How to play game car parking multiplayer mod apk 4.8.2
      -How to update game car parking multiplayer mod apk 4.8.2
      -How to uninstall game car parking multiplayer mod apk 4.8.2
      -How to use game car parking multiplayer mod apk 4.8.2
      -How to hack game car parking multiplayer mod apk 4.8.2
      -How to get unlimited money in game car parking multiplayer mod apk 4.8.2
      -How to get free shopping in game car parking multiplayer mod apk 4.8.2
      -How to get mega mod in game car parking multiplayer mod apk 4.8.2
      -Is game car parking multiplayer mod apk 4.8.2 safe
      -Is game car parking multiplayer mod apk 4

      -

      FAQs

      -

      Here are some frequently asked questions about Car Parking Multiplayer Mod APK 4.8.2:

      -
        -
      1. Is Car Parking Multiplayer Mod APK 4.8.2 safe to use?
      2. -

        Car Parking Multiplayer Mod APK 4.8.2 is safe to use as long as you download it from a trusted source and scan it for viruses before installing it on your device. However, you should also be careful not to abuse the mod features or cheat in multiplayer mode, as this may result in getting banned from the game or losing your progress or data.

        -
      3. Is Car Parking Multiplayer Mod APK 4.8.2 free to download?
      4. -

        Yes, Car Parking Multiplayer Mod APK 4.8.2 is free to download from the link above. You do not need to pay anything to get unlimited money, unlocked cars, and premium features in the game.

        -
      5. Can I play Car Parking Multiplayer Mod APK 4.8.2 offline?
      6. -

        Yes, you can play Car Parking Multiplayer Mod APK 4.8.2 offline without an internet connection. However, you will not be able to access some features of the game, such as multiplayer mode, voice chat, online servers, etc.

        -
      7. Can I update Car Parking Multiplayer Mod APK 4.8.2?
      8. -

        No, you cannot update Car Parking Multiplayer Mod APK 4.8.2 from the Google Play Store or any other source, as this may cause the mod features to stop working or the game to crash. You need to wait for the mod developers to release a new version of the mod APK that is compatible with the latest version of the game.

        -
      9. How can I contact the developers of Car Parking Multiplayer Mod APK 4.8.2?
      10. -

        If you have any questions, feedback, or issues regarding Car Parking Multiplayer Mod APK 4.8.2, you can contact the developers of the mod APK by visiting their website or social media pages. You can also leave a comment on the download page of the mod APK and they may reply to you.

        -

      197e85843d
      -
      -
      \ No newline at end of file diff --git a/spaces/contluForse/HuggingGPT/assets/4c Lipka For Window 7 19.md b/spaces/contluForse/HuggingGPT/assets/4c Lipka For Window 7 19.md deleted file mode 100644 index fd1fa3fab2d288b55bc1406ea530794f6929e44d..0000000000000000000000000000000000000000 --- a/spaces/contluForse/HuggingGPT/assets/4c Lipka For Window 7 19.md +++ /dev/null @@ -1,10 +0,0 @@ - -

      lipka and borsuk now live with their father, bruce lipka, at his uk apartment at 531 e. goforth avenue. he is still working at ramsey's and also does some djing. he has been issued a restraining order against allen. "people are just hearing what they want to hear," bruce lipka said, referring to allen's reputation as a dj and hype man for local bands. "but i know corey." he said he last spoke to his son in august.

      -

      as for what happened to his son's $2.5 million, lipka isn't sure. bruce lipka doesn't know whether his son has been paying his insurance and maintenance bills; he's already approached his credit card company in hopes of getting a payment plan, he said. "i don't see why he didn't tell me the truth," bruce lipka said. "he always said he would."

      -

      4c lipka for window 7 19


      Download Zip ———>>> https://ssurll.com/2uzxqy



      -

      on saturday night, bruce lipka went over to his son's house to find out what he needed to do to turn on the heat. he found corey's car still parked outside. "he should've called me," bruce lipka said.

      -

      a week after borsuk and lipka were arrested, angela allen approached the girl, "beaumont," at the student-run delta electronic stock google+ updates -.. club on dawson street, telling her about both the money she had lost and a computer job that might be available. the job for $7,500 seemed very promising, she said.

      -

      last summer, the cleveland press reported that cleveland state had entered into a $150,000 contract with lipka to produce a documentary, but that it was never completed. lipka's attorney, william schwartz, said his client had been working on the project for about a year. the school was unable to complete it because of the ohio state reformatory investigation, schwartz said.

      -

      899543212b
      -
      -
      \ No newline at end of file diff --git a/spaces/contluForse/HuggingGPT/assets/Chemical Process Industry Shreves Download Games How to Improve Your Skills and Knowledge.md b/spaces/contluForse/HuggingGPT/assets/Chemical Process Industry Shreves Download Games How to Improve Your Skills and Knowledge.md deleted file mode 100644 index 456548407488a13874c8ae8ec56e24c0bfe0bce2..0000000000000000000000000000000000000000 --- a/spaces/contluForse/HuggingGPT/assets/Chemical Process Industry Shreves Download Games How to Improve Your Skills and Knowledge.md +++ /dev/null @@ -1,8 +0,0 @@ -
      -

      Covers the major chemical processes and their technical and economic relationships. Intended for professionals and students, this work offers guidance in the designing and operating of processing units.

      -

      Die casting equipment was invented in 1838 for the purpose of producing movable type for the printing industry. The first die casting-related patent was granted in 1849 for a small hand-operated machine for the purpose of mechanized printing type production. In 1885 Otto Mergenthaler invented the Linotype machine, which cast an entire line of type as a single unit, using a die casting process. It nearly completely replaced setting type by hand in the publishing industry. The Soss die-casting machine, manufactured in Brooklyn, NY, was the first machine to be sold in the open market in North America.[2] Other applications grew rapidly, with die casting facilitating the growth of consumer goods, and appliances, by greatly reducing the production cost of intricate parts in high volumes.[3] In 1966,[4] General Motors released the Acurad process.[5]

      -

      Chemical Process Industry Shreves Download Games


      Download Zip ->>->>->> https://ssurll.com/2uzyX4



      -

      Today "water-in-oil" and "oil-in-water" emulsions are used, because, when the lubricant is applied, the water cools the die surface by evaporating depositing the oil that helps release the shot. A common mixture for this type of emulsion is thirty parts water to one part oil, however in extreme cases a ratio of one-hundred to one is used.[25] Oils that are used include heavy residual oil (HRO), animal fat, vegetable fat, synthetic oil, and all sorts of mixtures of these. HROs are gelatinous at room temperature, but at the high temperatures found in die casting, they form a thin film. Other substances are added to control the viscosity and thermal properties of these emulsions, e.g. graphite, aluminium, mica. Other chemical additives are used to inhibit rusting and oxidation. In addition emulsifiers are added to improve the emulsion manufacturing process, e.g. soap, alcohol esters, ethylene oxides.[26]

      -

      Mahdi Abu-Omar, Purdue's R.B. Wetherill Professor of Chemistry, holds a small vial containing results of a new catalytic process that can convert the lignin in wood into high-value chemical products for use in fragrances and flavoring. (Purdue University photo/Mark Simons)
      Download Photo

      aaccfb2cb3
      -
      -
      \ No newline at end of file diff --git a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/keypose/__init__.py b/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/keypose/__init__.py deleted file mode 100644 index aa3dfa2e1589f22471411b3180ccaf870f147d73..0000000000000000000000000000000000000000 --- a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/keypose/__init__.py +++ /dev/null @@ -1,212 +0,0 @@ -import numpy as np -import cv2 -import torch - -import os -from modules import devices -from annotator.annotator_path import models_path - -import mmcv -from mmdet.apis import inference_detector, init_detector -from mmpose.apis import inference_top_down_pose_model -from mmpose.apis import init_pose_model, process_mmdet_results, vis_pose_result - - -def preprocessing(image, device): - # Resize - scale = 640 / max(image.shape[:2]) - image = cv2.resize(image, dsize=None, fx=scale, fy=scale) - raw_image = image.astype(np.uint8) - - # Subtract mean values - image = image.astype(np.float32) - image -= np.array( - [ - float(104.008), - float(116.669), - float(122.675), - ] - ) - - # Convert to torch.Tensor and add "batch" axis - image = torch.from_numpy(image.transpose(2, 0, 1)).float().unsqueeze(0) - image = image.to(device) - - return image, raw_image - - -def imshow_keypoints(img, - pose_result, - skeleton=None, - kpt_score_thr=0.1, - pose_kpt_color=None, - pose_link_color=None, - radius=4, - thickness=1): - """Draw keypoints and links on an image. - Args: - img (ndarry): The image to draw poses on. - pose_result (list[kpts]): The poses to draw. Each element kpts is - a set of K keypoints as an Kx3 numpy.ndarray, where each - keypoint is represented as x, y, score. - kpt_score_thr (float, optional): Minimum score of keypoints - to be shown. Default: 0.3. - pose_kpt_color (np.array[Nx3]`): Color of N keypoints. If None, - the keypoint will not be drawn. - pose_link_color (np.array[Mx3]): Color of M links. If None, the - links will not be drawn. - thickness (int): Thickness of lines. - """ - - img_h, img_w, _ = img.shape - img = np.zeros(img.shape) - - for idx, kpts in enumerate(pose_result): - if idx > 1: - continue - kpts = kpts['keypoints'] - # print(kpts) - kpts = np.array(kpts, copy=False) - - # draw each point on image - if pose_kpt_color is not None: - assert len(pose_kpt_color) == len(kpts) - - for kid, kpt in enumerate(kpts): - x_coord, y_coord, kpt_score = int(kpt[0]), int(kpt[1]), kpt[2] - - if kpt_score < kpt_score_thr or pose_kpt_color[kid] is None: - # skip the point that should not be drawn - continue - - color = tuple(int(c) for c in pose_kpt_color[kid]) - cv2.circle(img, (int(x_coord), int(y_coord)), - radius, color, -1) - - # draw links - if skeleton is not None and pose_link_color is not None: - assert len(pose_link_color) == len(skeleton) - - for sk_id, sk in enumerate(skeleton): - pos1 = (int(kpts[sk[0], 0]), int(kpts[sk[0], 1])) - pos2 = (int(kpts[sk[1], 0]), int(kpts[sk[1], 1])) - - if (pos1[0] <= 0 or pos1[0] >= img_w or pos1[1] <= 0 or pos1[1] >= img_h or pos2[0] <= 0 - or pos2[0] >= img_w or pos2[1] <= 0 or pos2[1] >= img_h or kpts[sk[0], 2] < kpt_score_thr - or kpts[sk[1], 2] < kpt_score_thr or pose_link_color[sk_id] is None): - # skip the link that should not be drawn - continue - color = tuple(int(c) for c in pose_link_color[sk_id]) - cv2.line(img, pos1, pos2, color, thickness=thickness) - - return img - - -human_det, pose_model = None, None -det_model_path = "https://download.openmmlab.com/mmdetection/v2.0/faster_rcnn/faster_rcnn_r50_fpn_1x_coco/faster_rcnn_r50_fpn_1x_coco_20200130-047c8118.pth" -pose_model_path = "https://download.openmmlab.com/mmpose/top_down/hrnet/hrnet_w48_coco_256x192-b9e0b3ab_20200708.pth" - -modeldir = os.path.join(models_path, "keypose") -old_modeldir = os.path.dirname(os.path.realpath(__file__)) - -det_config = 'faster_rcnn_r50_fpn_coco.py' -pose_config = 'hrnet_w48_coco_256x192.py' - -det_checkpoint = 'faster_rcnn_r50_fpn_1x_coco_20200130-047c8118.pth' -pose_checkpoint = 'hrnet_w48_coco_256x192-b9e0b3ab_20200708.pth' -det_cat_id = 1 -bbox_thr = 0.2 - -skeleton = [ - [15, 13], [13, 11], [16, 14], [14, 12], [11, 12], [5, 11], [6, 12], [5, 6], [5, 7], [6, 8], - [7, 9], [8, 10], - [1, 2], [0, 1], [0, 2], [1, 3], [2, 4], [3, 5], [4, 6] -] - -pose_kpt_color = [ - [51, 153, 255], [51, 153, 255], [51, 153, 255], [51, 153, 255], [51, 153, 255], - [0, 255, 0], - [255, 128, 0], [0, 255, 0], [255, 128, 0], [0, 255, 0], [255, 128, 0], [0, 255, 0], - [255, 128, 0], - [0, 255, 0], [255, 128, 0], [0, 255, 0], [255, 128, 0] -] - -pose_link_color = [ - [0, 255, 0], [0, 255, 0], [255, 128, 0], [255, 128, 0], - [51, 153, 255], [51, 153, 255], [51, 153, 255], [51, 153, 255], [0, 255, 0], - [255, 128, 0], - [0, 255, 0], [255, 128, 0], [51, 153, 255], [51, 153, 255], [51, 153, 255], - [51, 153, 255], - [51, 153, 255], [51, 153, 255], [51, 153, 255] -] - -def find_download_model(checkpoint, remote_path): - modelpath = os.path.join(modeldir, checkpoint) - old_modelpath = os.path.join(old_modeldir, checkpoint) - - if os.path.exists(old_modelpath): - modelpath = old_modelpath - elif not os.path.exists(modelpath): - from basicsr.utils.download_util import load_file_from_url - load_file_from_url(remote_path, model_dir=modeldir) - - return modelpath - -def apply_keypose(input_image): - global human_det, pose_model - if netNetwork is None: - det_model_local = find_download_model(det_checkpoint, det_model_path) - hrnet_model_local = find_download_model(pose_checkpoint, pose_model_path) - det_config_mmcv = mmcv.Config.fromfile(det_config) - pose_config_mmcv = mmcv.Config.fromfile(pose_config) - human_det = init_detector(det_config_mmcv, det_model_local, device=devices.get_device_for("controlnet")) - pose_model = init_pose_model(pose_config_mmcv, hrnet_model_local, device=devices.get_device_for("controlnet")) - - assert input_image.ndim == 3 - input_image = input_image.copy() - with torch.no_grad(): - image = torch.from_numpy(input_image).float().to(devices.get_device_for("controlnet")) - image = image / 255.0 - mmdet_results = inference_detector(human_det, image) - - # keep the person class bounding boxes. - person_results = process_mmdet_results(mmdet_results, det_cat_id) - - return_heatmap = False - dataset = pose_model.cfg.data['test']['type'] - - # e.g. use ('backbone', ) to return backbone feature - output_layer_names = None - pose_results, _ = inference_top_down_pose_model( - pose_model, - image, - person_results, - bbox_thr=bbox_thr, - format='xyxy', - dataset=dataset, - dataset_info=None, - return_heatmap=return_heatmap, - outputs=output_layer_names - ) - - im_keypose_out = imshow_keypoints( - image, - pose_results, - skeleton=skeleton, - pose_kpt_color=pose_kpt_color, - pose_link_color=pose_link_color, - radius=2, - thickness=2 - ) - im_keypose_out = im_keypose_out.astype(np.uint8) - - # image_hed = rearrange(image_hed, 'h w c -> 1 c h w') - # edge = netNetwork(image_hed)[0] - # edge = (edge.cpu().numpy() * 255.0).clip(0, 255).astype(np.uint8) - return im_keypose_out - - -def unload_hed_model(): - global netNetwork - if netNetwork is not None: - netNetwork.cpu() diff --git a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/midas/midas/dpt_depth.py b/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/midas/midas/dpt_depth.py deleted file mode 100644 index 4e9aab5d2767dffea39da5b3f30e2798688216f1..0000000000000000000000000000000000000000 --- a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/midas/midas/dpt_depth.py +++ /dev/null @@ -1,109 +0,0 @@ -import torch -import torch.nn as nn -import torch.nn.functional as F - -from .base_model import BaseModel -from .blocks import ( - FeatureFusionBlock, - FeatureFusionBlock_custom, - Interpolate, - _make_encoder, - forward_vit, -) - - -def _make_fusion_block(features, use_bn): - return FeatureFusionBlock_custom( - features, - nn.ReLU(False), - deconv=False, - bn=use_bn, - expand=False, - align_corners=True, - ) - - -class DPT(BaseModel): - def __init__( - self, - head, - features=256, - backbone="vitb_rn50_384", - readout="project", - channels_last=False, - use_bn=False, - ): - - super(DPT, self).__init__() - - self.channels_last = channels_last - - hooks = { - "vitb_rn50_384": [0, 1, 8, 11], - "vitb16_384": [2, 5, 8, 11], - "vitl16_384": [5, 11, 17, 23], - } - - # Instantiate backbone and reassemble blocks - self.pretrained, self.scratch = _make_encoder( - backbone, - features, - False, # Set to true of you want to train from scratch, uses ImageNet weights - groups=1, - expand=False, - exportable=False, - hooks=hooks[backbone], - use_readout=readout, - ) - - self.scratch.refinenet1 = _make_fusion_block(features, use_bn) - self.scratch.refinenet2 = _make_fusion_block(features, use_bn) - self.scratch.refinenet3 = _make_fusion_block(features, use_bn) - self.scratch.refinenet4 = _make_fusion_block(features, use_bn) - - self.scratch.output_conv = head - - - def forward(self, x): - if self.channels_last == True: - x.contiguous(memory_format=torch.channels_last) - - layer_1, layer_2, layer_3, layer_4 = forward_vit(self.pretrained, x) - - layer_1_rn = self.scratch.layer1_rn(layer_1) - layer_2_rn = self.scratch.layer2_rn(layer_2) - layer_3_rn = self.scratch.layer3_rn(layer_3) - layer_4_rn = self.scratch.layer4_rn(layer_4) - - path_4 = self.scratch.refinenet4(layer_4_rn) - path_3 = self.scratch.refinenet3(path_4, layer_3_rn) - path_2 = self.scratch.refinenet2(path_3, layer_2_rn) - path_1 = self.scratch.refinenet1(path_2, layer_1_rn) - - out = self.scratch.output_conv(path_1) - - return out - - -class DPTDepthModel(DPT): - def __init__(self, path=None, non_negative=True, **kwargs): - features = kwargs["features"] if "features" in kwargs else 256 - - head = nn.Sequential( - nn.Conv2d(features, features // 2, kernel_size=3, stride=1, padding=1), - Interpolate(scale_factor=2, mode="bilinear", align_corners=True), - nn.Conv2d(features // 2, 32, kernel_size=3, stride=1, padding=1), - nn.ReLU(True), - nn.Conv2d(32, 1, kernel_size=1, stride=1, padding=0), - nn.ReLU(True) if non_negative else nn.Identity(), - nn.Identity(), - ) - - super().__init__(head, **kwargs) - - if path is not None: - self.load(path) - - def forward(self, x): - return super().forward(x).squeeze(dim=1) - diff --git a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/mmpkg/mmcv/ops/roiaware_pool3d.py b/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/mmpkg/mmcv/ops/roiaware_pool3d.py deleted file mode 100644 index 8191920ca50b388ef58f577dc986da101662ac53..0000000000000000000000000000000000000000 --- a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/mmpkg/mmcv/ops/roiaware_pool3d.py +++ /dev/null @@ -1,114 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch -from torch import nn as nn -from torch.autograd import Function - -import annotator.mmpkg.mmcv as mmcv -from ..utils import ext_loader - -ext_module = ext_loader.load_ext( - '_ext', ['roiaware_pool3d_forward', 'roiaware_pool3d_backward']) - - -class RoIAwarePool3d(nn.Module): - """Encode the geometry-specific features of each 3D proposal. - - Please refer to `PartA2 `_ for more - details. - - Args: - out_size (int or tuple): The size of output features. n or - [n1, n2, n3]. - max_pts_per_voxel (int, optional): The maximum number of points per - voxel. Default: 128. - mode (str, optional): Pooling method of RoIAware, 'max' or 'avg'. - Default: 'max'. - """ - - def __init__(self, out_size, max_pts_per_voxel=128, mode='max'): - super().__init__() - - self.out_size = out_size - self.max_pts_per_voxel = max_pts_per_voxel - assert mode in ['max', 'avg'] - pool_mapping = {'max': 0, 'avg': 1} - self.mode = pool_mapping[mode] - - def forward(self, rois, pts, pts_feature): - """ - Args: - rois (torch.Tensor): [N, 7], in LiDAR coordinate, - (x, y, z) is the bottom center of rois. - pts (torch.Tensor): [npoints, 3], coordinates of input points. - pts_feature (torch.Tensor): [npoints, C], features of input points. - - Returns: - pooled_features (torch.Tensor): [N, out_x, out_y, out_z, C] - """ - - return RoIAwarePool3dFunction.apply(rois, pts, pts_feature, - self.out_size, - self.max_pts_per_voxel, self.mode) - - -class RoIAwarePool3dFunction(Function): - - @staticmethod - def forward(ctx, rois, pts, pts_feature, out_size, max_pts_per_voxel, - mode): - """ - Args: - rois (torch.Tensor): [N, 7], in LiDAR coordinate, - (x, y, z) is the bottom center of rois. - pts (torch.Tensor): [npoints, 3], coordinates of input points. - pts_feature (torch.Tensor): [npoints, C], features of input points. - out_size (int or tuple): The size of output features. n or - [n1, n2, n3]. - max_pts_per_voxel (int): The maximum number of points per voxel. - Default: 128. - mode (int): Pooling method of RoIAware, 0 (max pool) or 1 (average - pool). - - Returns: - pooled_features (torch.Tensor): [N, out_x, out_y, out_z, C], output - pooled features. - """ - - if isinstance(out_size, int): - out_x = out_y = out_z = out_size - else: - assert len(out_size) == 3 - assert mmcv.is_tuple_of(out_size, int) - out_x, out_y, out_z = out_size - - num_rois = rois.shape[0] - num_channels = pts_feature.shape[-1] - num_pts = pts.shape[0] - - pooled_features = pts_feature.new_zeros( - (num_rois, out_x, out_y, out_z, num_channels)) - argmax = pts_feature.new_zeros( - (num_rois, out_x, out_y, out_z, num_channels), dtype=torch.int) - pts_idx_of_voxels = pts_feature.new_zeros( - (num_rois, out_x, out_y, out_z, max_pts_per_voxel), - dtype=torch.int) - - ext_module.roiaware_pool3d_forward(rois, pts, pts_feature, argmax, - pts_idx_of_voxels, pooled_features, - mode) - - ctx.roiaware_pool3d_for_backward = (pts_idx_of_voxels, argmax, mode, - num_pts, num_channels) - return pooled_features - - @staticmethod - def backward(ctx, grad_out): - ret = ctx.roiaware_pool3d_for_backward - pts_idx_of_voxels, argmax, mode, num_pts, num_channels = ret - - grad_in = grad_out.new_zeros((num_pts, num_channels)) - ext_module.roiaware_pool3d_backward(pts_idx_of_voxels, argmax, - grad_out.contiguous(), grad_in, - mode) - - return None, None, grad_in, None, None, None diff --git a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/mmpkg/mmseg/models/utils/drop.py b/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/mmpkg/mmseg/models/utils/drop.py deleted file mode 100644 index 4520b0ff407d2a95a864086bdbca0065f222aa63..0000000000000000000000000000000000000000 --- a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/mmpkg/mmseg/models/utils/drop.py +++ /dev/null @@ -1,31 +0,0 @@ -"""Modified from https://github.com/rwightman/pytorch-image- -models/blob/master/timm/models/layers/drop.py.""" - -import torch -from torch import nn - - -class DropPath(nn.Module): - """Drop paths (Stochastic Depth) per sample (when applied in main path of - residual blocks). - - Args: - drop_prob (float): Drop rate for paths of model. Dropout rate has - to be between 0 and 1. Default: 0. - """ - - def __init__(self, drop_prob=0.): - super(DropPath, self).__init__() - self.drop_prob = drop_prob - self.keep_prob = 1 - drop_prob - - def forward(self, x): - if self.drop_prob == 0. or not self.training: - return x - shape = (x.shape[0], ) + (1, ) * ( - x.ndim - 1) # work with diff dim tensors, not just 2D ConvNets - random_tensor = self.keep_prob + torch.rand( - shape, dtype=x.dtype, device=x.device) - random_tensor.floor_() # binarize - output = x.div(self.keep_prob) * random_tensor - return output diff --git a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/zoe/zoedepth/models/base_models/midas_repo/mobile/android/app/src/main/java/org/tensorflow/lite/examples/classification/LegacyCameraConnectionFragment.java b/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/zoe/zoedepth/models/base_models/midas_repo/mobile/android/app/src/main/java/org/tensorflow/lite/examples/classification/LegacyCameraConnectionFragment.java deleted file mode 100644 index 760fe90375450c7b1356603c83fb37a68548ca13..0000000000000000000000000000000000000000 --- a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/zoe/zoedepth/models/base_models/midas_repo/mobile/android/app/src/main/java/org/tensorflow/lite/examples/classification/LegacyCameraConnectionFragment.java +++ /dev/null @@ -1,203 +0,0 @@ -package org.tensorflow.lite.examples.classification; - -/* - * Copyright 2019 The TensorFlow Authors. All Rights Reserved. - * - * Licensed under the Apache License, Version 2.0 (the "License"); - * you may not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ - -import android.annotation.SuppressLint; -import android.app.Fragment; -import android.graphics.SurfaceTexture; -import android.hardware.Camera; -import android.hardware.Camera.CameraInfo; -import android.os.Bundle; -import android.os.Handler; -import android.os.HandlerThread; -import android.util.Size; -import android.util.SparseIntArray; -import android.view.LayoutInflater; -import android.view.Surface; -import android.view.TextureView; -import android.view.View; -import android.view.ViewGroup; -import java.io.IOException; -import java.util.List; -import org.tensorflow.lite.examples.classification.customview.AutoFitTextureView; -import org.tensorflow.lite.examples.classification.env.ImageUtils; -import org.tensorflow.lite.examples.classification.env.Logger; - -public class LegacyCameraConnectionFragment extends Fragment { - private static final Logger LOGGER = new Logger(); - /** Conversion from screen rotation to JPEG orientation. */ - private static final SparseIntArray ORIENTATIONS = new SparseIntArray(); - - static { - ORIENTATIONS.append(Surface.ROTATION_0, 90); - ORIENTATIONS.append(Surface.ROTATION_90, 0); - ORIENTATIONS.append(Surface.ROTATION_180, 270); - ORIENTATIONS.append(Surface.ROTATION_270, 180); - } - - private Camera camera; - private Camera.PreviewCallback imageListener; - private Size desiredSize; - /** The layout identifier to inflate for this Fragment. */ - private int layout; - /** An {@link AutoFitTextureView} for camera preview. */ - private AutoFitTextureView textureView; - /** - * {@link TextureView.SurfaceTextureListener} handles several lifecycle events on a {@link - * TextureView}. - */ - private final TextureView.SurfaceTextureListener surfaceTextureListener = - new TextureView.SurfaceTextureListener() { - @Override - public void onSurfaceTextureAvailable( - final SurfaceTexture texture, final int width, final int height) { - - int index = getCameraId(); - camera = Camera.open(index); - - try { - Camera.Parameters parameters = camera.getParameters(); - List focusModes = parameters.getSupportedFocusModes(); - if (focusModes != null - && focusModes.contains(Camera.Parameters.FOCUS_MODE_CONTINUOUS_PICTURE)) { - parameters.setFocusMode(Camera.Parameters.FOCUS_MODE_CONTINUOUS_PICTURE); - } - List cameraSizes = parameters.getSupportedPreviewSizes(); - Size[] sizes = new Size[cameraSizes.size()]; - int i = 0; - for (Camera.Size size : cameraSizes) { - sizes[i++] = new Size(size.width, size.height); - } - Size previewSize = - CameraConnectionFragment.chooseOptimalSize( - sizes, desiredSize.getWidth(), desiredSize.getHeight()); - parameters.setPreviewSize(previewSize.getWidth(), previewSize.getHeight()); - camera.setDisplayOrientation(90); - camera.setParameters(parameters); - camera.setPreviewTexture(texture); - } catch (IOException exception) { - camera.release(); - } - - camera.setPreviewCallbackWithBuffer(imageListener); - Camera.Size s = camera.getParameters().getPreviewSize(); - camera.addCallbackBuffer(new byte[ImageUtils.getYUVByteSize(s.height, s.width)]); - - textureView.setAspectRatio(s.height, s.width); - - camera.startPreview(); - } - - @Override - public void onSurfaceTextureSizeChanged( - final SurfaceTexture texture, final int width, final int height) {} - - @Override - public boolean onSurfaceTextureDestroyed(final SurfaceTexture texture) { - return true; - } - - @Override - public void onSurfaceTextureUpdated(final SurfaceTexture texture) {} - }; - /** An additional thread for running tasks that shouldn't block the UI. */ - private HandlerThread backgroundThread; - - @SuppressLint("ValidFragment") - public LegacyCameraConnectionFragment( - final Camera.PreviewCallback imageListener, final int layout, final Size desiredSize) { - this.imageListener = imageListener; - this.layout = layout; - this.desiredSize = desiredSize; - } - - @Override - public View onCreateView( - final LayoutInflater inflater, final ViewGroup container, final Bundle savedInstanceState) { - return inflater.inflate(layout, container, false); - } - - @Override - public void onViewCreated(final View view, final Bundle savedInstanceState) { - textureView = (AutoFitTextureView) view.findViewById(R.id.texture); - } - - @Override - public void onActivityCreated(final Bundle savedInstanceState) { - super.onActivityCreated(savedInstanceState); - } - - @Override - public void onResume() { - super.onResume(); - startBackgroundThread(); - // When the screen is turned off and turned back on, the SurfaceTexture is already - // available, and "onSurfaceTextureAvailable" will not be called. In that case, we can open - // a camera and start preview from here (otherwise, we wait until the surface is ready in - // the SurfaceTextureListener). - - if (textureView.isAvailable()) { - if (camera != null) { - camera.startPreview(); - } - } else { - textureView.setSurfaceTextureListener(surfaceTextureListener); - } - } - - @Override - public void onPause() { - stopCamera(); - stopBackgroundThread(); - super.onPause(); - } - - /** Starts a background thread and its {@link Handler}. */ - private void startBackgroundThread() { - backgroundThread = new HandlerThread("CameraBackground"); - backgroundThread.start(); - } - - /** Stops the background thread and its {@link Handler}. */ - private void stopBackgroundThread() { - backgroundThread.quitSafely(); - try { - backgroundThread.join(); - backgroundThread = null; - } catch (final InterruptedException e) { - LOGGER.e(e, "Exception!"); - } - } - - protected void stopCamera() { - if (camera != null) { - camera.stopPreview(); - camera.setPreviewCallback(null); - camera.release(); - camera = null; - } - } - - private int getCameraId() { - CameraInfo ci = new CameraInfo(); - for (int i = 0; i < Camera.getNumberOfCameras(); i++) { - Camera.getCameraInfo(i, ci); - if (ci.facing == CameraInfo.CAMERA_FACING_BACK) return i; - } - return -1; // No camera found - } -} diff --git a/spaces/cozyanduofen/bingo/src/pages/api/create.ts b/spaces/cozyanduofen/bingo/src/pages/api/create.ts deleted file mode 100644 index 508fa97ef609cbb215a61085711638e116235ebe..0000000000000000000000000000000000000000 --- a/spaces/cozyanduofen/bingo/src/pages/api/create.ts +++ /dev/null @@ -1,31 +0,0 @@ -'use server' - -import { NextApiRequest, NextApiResponse } from 'next' -import { fetch, debug } from '@/lib/isomorphic' -import { createHeaders } from '@/lib/utils' - -// const API_ENDPOINT = 'https://www.bing.com/turing/conversation/create' -const API_ENDPOINT = 'https://edgeservices.bing.com/edgesvc/turing/conversation/create'; - -export default async function handler(req: NextApiRequest, res: NextApiResponse) { - try { - const headers = createHeaders(req.cookies) - - res.writeHead(200, { - 'Content-Type': 'application/json', - }) - - debug('headers', headers) - const response = await fetch(API_ENDPOINT, { method: 'GET', headers }) - .then((res) => res.text()) - - res.end(response) - } catch (e) { - return res.end(JSON.stringify({ - result: { - value: 'UnauthorizedRequest', - message: `${e}` - } - })) - } -} diff --git a/spaces/crashedice/signify/signify/siamese.py b/spaces/crashedice/signify/signify/siamese.py deleted file mode 100644 index ddc21030445b7b1d470b6b554a99c68d8712db03..0000000000000000000000000000000000000000 --- a/spaces/crashedice/signify/signify/siamese.py +++ /dev/null @@ -1,532 +0,0 @@ -# AUTOGENERATED! DO NOT EDIT! File to edit: ../nbs/00_siamese.ipynb. - -# %% auto 0 -__all__ = ['def_device', 'compare', 'with_cbs', 'run_cbs', 'to_cpu', 'to_device', 'CancelFitException', 'CancelBatchException', - 'CancelEpochException', 'Callback', 'MetricsCB', 'TrainCB', 'TrainContrastiveCB', 'TrainBceCB', 'DeviceCB', - 'ProgressCB', 'SaveModelCallback', 'LRFinderCB', 'HooksCallback', 'append_stats', 'ActivationStats', - 'LoadModelCallback', 'TwoDLVCallback', 'BatchTransformCB', 'BaseSchedCB', 'BatchSchedCB', 'rand_erase', - 'RandErase', 'SquareReflectPad', 'Learner', 'show_image', 'subplots', 'get_grid', 'show_images', - 'reshape_alternating'] - -# %% ../nbs/00_siamese.ipynb 4 -import pathlib, os, shutil, sys, cv2, torch, random, glob - -import pandas as pd -import numpy as np -import matplotlib.pyplot as plt -import math -import statistics - -from PIL import Image -from tqdm import tqdm -from pathlib import Path -from itertools import zip_longest -from copy import copy -from operator import attrgetter -from functools import partial -from collections.abc import Mapping - -import torch.nn as nn -import torch.nn.functional as F -import torchvision.models as models -import torch.optim as optim -import torchvision.transforms as transforms -import torchvision -from torch.utils.data import DataLoader, Dataset -from torcheval.metrics import MulticlassAccuracy,Mean, BinaryAccuracy -from torch.nn import init -from fastprogress import progress_bar,master_bar -from torch.optim.lr_scheduler import ExponentialLR - -# %% ../nbs/00_siamese.ipynb 8 -def compare(pic1, pic2): - return random.random() - -# %% ../nbs/00_siamese.ipynb 16 -class with_cbs: - def __init__(self, nm): self.nm = nm - def __call__(self, f): - def _f(o, *args, **kwargs): - try: - o.callback(f'before_{self.nm}') - f(o, *args, **kwargs) - o.callback(f'after_{self.nm}') - except globals()[f'Cancel{self.nm.title()}Exception']: pass - finally: o.callback(f'cleanup_{self.nm}') - return _f - - -def run_cbs(cbs, method_nm, learn=None): - for cb in sorted(cbs, key=attrgetter('order')): - method = getattr(cb, method_nm, None) - if method is not None: method(learn) - -# %% ../nbs/00_siamese.ipynb 17 -def to_cpu(x): - if isinstance(x, Mapping): return {k:to_cpu(v) for k,v in x.items()} - if isinstance(x, list): return [to_cpu(o) for o in x] - if isinstance(x, tuple): return tuple(to_cpu(list(x))) - res = x.detach().cpu() - return res.float() if res.dtype==torch.float16 else res - -def_device = 'mps' if torch.backends.mps.is_available() else 'cuda' if torch.cuda.is_available() else 'cpu' - -def to_device(x, device=def_device): - if isinstance(x, list): return [to_device(o) for o in x] - if isinstance(x, tuple): return tuple(to_device(list(x))) - if isinstance(x, torch.Tensor): return x.to(device) - -# %% ../nbs/00_siamese.ipynb 18 -class CancelFitException(Exception): pass -class CancelBatchException(Exception): pass -class CancelEpochException(Exception): pass - -# %% ../nbs/00_siamese.ipynb 19 -class Callback(): order = 0 - -class MetricsCB(Callback): - def __init__(self, wandb, *ms, **metrics): - for o in ms: metrics[type(o).__name__] = o - self.metrics = metrics - self.all_metrics = copy(metrics) - self.all_metrics['loss'] = self.loss = Mean() - self.wandb = wandb - - def _log(self, d): print(d) - def before_fit(self, learn): learn.metrics = self - def before_epoch(self, learn): [o.reset() for o in self.all_metrics.values()] - - def after_epoch(self, learn): - log = {k:f'{v.compute():.3f}' for k,v in self.all_metrics.items()} - log['epoch'] = learn.epoch - log['train'] = 'train' if learn.model.training else 'eval' - self._log(log) - - log_wandb = {f'{k}_{log["train"]}':round(float(v.compute()), 3) for k,v in self.all_metrics.items()} - self.wandb.log(log_wandb) - - - def after_batch(self, learn): - x,x2, y = to_cpu(learn.batch) - for m in self.metrics.values(): - m.update(to_cpu(learn.preds), y) - - loss = to_cpu(learn.loss) - self.loss.update(loss, weight=len(x)) - self.wandb.log({"batch loss": loss.item()}) - -class TrainCB(Callback): - def __init__(self, n_inp=1): self.n_inp = n_inp - def predict(self, learn): - learn.x, learn.y = learn.batch[:self.n_inp], learn.batch[self.n_inp] - learn.preds = learn.model(*learn.x) - def get_loss(self, learn): - learn.loss = learn.loss_func(learn.preds, *learn.batch[self.n_inp:]) - def backward(self, learn): learn.loss.backward() - def step(self, learn): learn.opt.step() - def zero_grad(self, learn): learn.opt.zero_grad() - -class TrainContrastiveCB(Callback): - def __init__(self, n_inp=1): - self.n_inp = n_inp - self.loss_func - def predict(self, learn): - learn.preds = learn.model(*learn.batch[:self.n_inp]) - def get_loss(self, learn): - pred1, pred2 = learn.preds - label = learn.batch[self.n_inp] - learn.loss = learn.loss_func(pred1, pred2, label) - def backward(self, learn): learn.loss.backward() - def step(self, learn): learn.opt.step() - def zero_grad(self, learn): learn.opt.zero_grad() - -class TrainBceCB(Callback): - def __init__(self, n_inp=1): - self.n_inp = n_inp - self.loss_func = torch.nn.BCELoss() - def predict(self, learn): - learn.x, learn.y = learn.batch[:self.n_inp], learn.batch[self.n_inp] - learn.preds = learn.model(*learn.x) - def get_loss(self, learn): - label = learn.batch[self.n_inp] - learn.loss = self.loss_func(learn.preds, label) - def backward(self, learn): learn.loss.backward() - def step(self, learn): learn.opt.step() - def zero_grad(self, learn): learn.opt.zero_grad() - -class DeviceCB(Callback): - order = 1 - def __init__(self, device): fc.store_attr() - def before_fit(self, learn): - if hasattr(learn.model, 'to'): learn.model.to(self.device) - def before_batch(self, learn): - learn.batch = to_device(learn.batch, device=self.device) - -class ProgressCB(Callback): - order = MetricsCB.order+1 - def __init__(self, plot=False): self.plot = plot - def before_fit(self, learn): - learn.epochs = self.mbar = master_bar(learn.epochs) - self.first = True - if hasattr(learn, 'metrics'): learn.metrics._log = self._log - self.losses = [] - self.val_losses = [] - - - def _log(self, d): - if self.first: - self.mbar.write(list(d), table=True) - self.first = False - self.mbar.write(list(d.values()), table=True) - - def before_epoch(self, learn): learn.dl = progress_bar(learn.dl, leave=False, parent=self.mbar) - def after_batch(self, learn): - learn.dl.comment = f'{learn.loss:.3f}' - if self.plot and hasattr(learn, 'metrics') and learn.training: - self.losses.append(learn.loss.item()) - if self.val_losses: self.mbar.update_graph([[fc.L.range(self.losses), self.losses],[fc.L.range(learn.epoch).map(lambda x: (x+1)*len(learn.dlt)), self.val_losses]]) - - def after_epoch(self, learn): - if not learn.training: - if self.plot and hasattr(learn, 'metrics'): - self.val_losses.append(learn.metrics.all_metrics['loss'].compute()) - self.mbar.update_graph([[fc.L.range(self.losses), self.losses],[fc.L.range(learn.epoch+1).map(lambda x: (x+1)*len(learn.dlt)), self.val_losses]]) - - -class SaveModelCallback(Callback): - "A `TrackerCallback` that saves the model's best during training and loads it at the end." - order = ProgressCB.order + 1 - def __init__(self): - - try: - old_loss = torch.load("model.pth")["loss"] - except: - old_loss = 1000 - - self.valid_losses = [old_loss] - self.valid_losses_batch = [] - - def after_batch(self, learn): - if not learn.training: - self.valid_losses_batch.append(learn.loss.item()) - - def after_epoch(self,learn): - - if not learn.training: - - current_valid_loss = statistics.mean(self.valid_losses_batch) - prev_best = min(self.valid_losses) - if current_valid_loss < prev_best: - print(f"saving model in epoch {learn.epoch} with loss {current_valid_loss} (prev: {prev_best})") - torch.save({ - 'epoch': learn.epoch, - 'model_state_dict': learn.model.state_dict(), - 'optimizer_state_dict': learn.opt.state_dict(), - 'loss':current_valid_loss, - }, "model.pth") - self.valid_losses.append(current_valid_loss) - self.valid_losses_batch = [] - - -class LRFinderCB(Callback): - order = 1 - def __init__(self, gamma=1.3, max_mult=3): fc.store_attr() - - def before_fit(self, learn): - self.sched = ExponentialLR(learn.opt, self.gamma) - self.lrs,self.losses = [],[] - self.min = math.inf - - def after_batch(self, learn): - if not learn.training: raise CancelEpochException() - self.lrs.append(learn.opt.param_groups[0]['lr']) - loss = to_cpu(learn.loss) - self.losses.append(loss) - if loss < self.min: self.min = loss - if math.isnan(loss) or (loss > self.min*self.max_mult): - raise CancelFitException() - self.sched.step() - - def cleanup_fit(self, learn): - plt.plot(self.lrs, self.losses) - plt.xscale('log') - -#| export -class HooksCallback(Callback): - def __init__(self, hookfunc, mod_filter=fc.noop, on_train=True, on_valid=False, mods=None): - fc.store_attr() - super().__init__() - - def before_fit(self, learn): - if self.mods: mods=self.mods - else: mods = fc.filter_ex(learn.model.modules(), self.mod_filter) - self.hooks = Hooks(mods, partial(self._hookfunc, learn)) - - def _hookfunc(self, learn, *args, **kwargs): - if (self.on_train and learn.training) or (self.on_valid and not learn.training): self.hookfunc(*args, **kwargs) - - def after_fit(self, learn): self.hooks.remove() - def __iter__(self): return iter(self.hooks) - def __len__(self): return len(self.hooks) - -#| export -def append_stats(hook, mod, inp, outp): - if not hasattr(hook,'stats'): hook.stats = ([],[],[]) - acts = to_cpu(outp) - hook.stats[0].append(acts.mean()) - hook.stats[1].append(acts.std()) - hook.stats[2].append(acts.abs().histc(40,0,10)) - -#|export -class ActivationStats(HooksCallback): - def __init__(self, mod_filter=fc.noop): super().__init__(append_stats, mod_filter) - - def color_dim(self, figsize=(11,5)): - fig,axes = get_grid(len(self), figsize=figsize) - for ax,h in zip(axes.flat, self): - show_image(get_hist(h), ax, origin='lower') - - def dead_chart(self, figsize=(11,5)): - fig,axes = get_grid(len(self), figsize=figsize) - for ax,h in zip(axes.flatten(), self): - ax.plot(get_min(h)) - ax.set_ylim(0,1) - - def plot_stats(self, figsize=(10,4)): - fig,axs = plt.subplots(1,2, figsize=figsize) - for h in self: - for i in 0,1: axs[i].plot(h.stats[i]) - axs[0].set_title('Means') - axs[1].set_title('Stdevs') - plt.legend(fc.L.range(self)) - - - -class LoadModelCallback(Callback): - order = 0 - def __init__(self, path): - self.path = path - - def before_fit(self, learn): - learn.model.load_state_dict(torch.load(self.path)["model_state_dict"]) - -class TwoDLVCallback(Callback): - order = 0 - def __init__(self, dlv): - self.dlv = dlv - - def after_epoch(self, learn): - print("2nd valid") - storedlearner = deepcopy(learn) #.copy() - storedlearner.dlv = self.dlv - torch.no_grad()(storedlearner._one_epoch)() - - -#| export -class BatchTransformCB(Callback): - def __init__(self, tfm, on_train=True, on_val=True): fc.store_attr() - - def before_batch(self, learn): - if (self.on_train and learn.training) or (self.on_val and not learn.training): - learn.batch = self.tfm(learn.batch) - - -# %% ../nbs/00_siamese.ipynb 21 -class BaseSchedCB(Callback): - def __init__(self, sched): self.sched = sched - def before_fit(self, learn): self.schedo = self.sched(learn.opt) - def _step(self, learn): - if learn.training: self.schedo.step() -#|export -class BatchSchedCB(BaseSchedCB): - def after_batch(self, learn): self._step(learn) - -# %% ../nbs/00_siamese.ipynb 23 -def _rand_erase1(x, pct, xm, xs, mn, mx): - szx = int(pct*x.shape[-2]) - szy = int(pct*x.shape[-1]) - stx = int(random.random()*(1-pct)*x.shape[-2]) - sty = int(random.random()*(1-pct)*x.shape[-1]) - init.normal_(x[:,:,stx:stx+szx,sty:sty+szy], mean=xm, std=xs) - x.clamp_(mn, mx) - -#|export -def rand_erase(x, pct=0.2, max_num = 4): - xm,xs,mn,mx = x.mean(),x.std(),x.min(),x.max() - num = random.randint(0, max_num) - for i in range(num): _rand_erase1(x, pct, xm, xs, mn, mx) - return x - -class RandErase(nn.Module): - def __init__(self, pct=0.2, max_num=4): - super().__init__() - self.pct,self.max_num = pct,max_num - def forward(self, x): return rand_erase(x, self.pct, self.max_num) -class SquareReflectPad: - def __call__(self, image): - image = image.squeeze() - s = image.size() - max_wh = np.min([s[-1], s[-2] * 2.9]) - hp = np.max(int((max_wh - s[-1]) / 2), 0) - vp = np.max(int((max_wh - s[-2]) / 2), 0) - padding = (hp, vp, hp, vp) - new_img = torchvision.transforms.functional.pad(image, padding, padding_mode='reflect') - new_img = new_img.unsqueeze(1).expand(new_img.shape[0],3, new_img.shape[1]).permute(1,0,2) - return new_img - - -# %% ../nbs/00_siamese.ipynb 25 -def _flops(x, h, w): - if x.dim()<3: return x.numel() - if x.dim()==4: return x.numel()*h*w - -class Learner(): - def __init__(self, model, dlt, dlv, lr=0.1, cbs=None, opt_func=optim.SGD): - cbs = fc.L(cbs) - fc.store_attr() - - @with_cbs('batch') - def _one_batch(self): - self.predict() - self.callback('after_predict') - self.get_loss() - self.callback('after_loss') - if self.training: - self.backward() - self.callback('after_backward') - self.step() - self.callback('after_step') - self.zero_grad() - - @with_cbs('epoch') - def _one_epoch(self): - for self.iter ,self.batch in enumerate(self.dl): - self._one_batch() - if self.iter > 100: - break - - def one_epoch(self, training): - self.model.train(training) - self.dl = self.dlt if training else self.dlv - self._one_epoch() - - @with_cbs('fit') - def _fit(self, train, valid): - for self.epoch in self.epochs: - if train: self.one_epoch(True) - if valid: torch.no_grad()(self.one_epoch)(False) - - def fit(self, n_epochs=1, train=True, valid=True, cbs=None, lr=None): - cbs = fc.L(cbs) - # `add_cb` and `rm_cb` were added in lesson 18 - for cb in cbs: self.cbs.append(cb) - try: - self.n_epochs = n_epochs - self.epochs = range(n_epochs) - if lr is None: lr = self.lr - if self.opt_func: self.opt = self.opt_func(self.model.parameters(), lr) - self._fit(train, True) - finally: - for cb in cbs: self.cbs.remove(cb) - - def __getattr__(self, name): - if name in ('predict','get_loss','backward','step','zero_grad'): return partial(self.callback, name) - raise AttributeError(name) - - def callback(self, method_nm): run_cbs(self.cbs, method_nm, self) - - @property - def training(self): return self.model.training - - def lr_find(self, gamma=1.3, max_mult=3, start_lr=1e-5, max_epochs=10): - self.fit(max_epochs, lr=start_lr, cbs=LRFinderCB(gamma=gamma, max_mult=max_mult)) - - def summary(self): - res = '|Module|Input|Output|Num params|MFLOPS|\n|--|--|--|--|--|\n' - totp,totf = 0,0 - def _f(hook, mod, inp, outp): - nonlocal res,totp,totf - nparms = sum(o.numel() for o in mod.parameters()) - totp += nparms - *_,h,w = outp.shape - flops = sum(_flops(o, h, w) for o in mod.parameters())/1e6 - totf += flops - res += f'|{type(mod).__name__}|{tuple(inp[0].shape)}|{tuple(outp.shape)}|{nparms}|{flops:.1f}|\n' - with Hooks(self.model, _f) as hooks: self.fit(1, lr=1, cbs=SingleBatchCB()) - print(f"Tot params: {totp}; MFLOPS: {totf:.1f}") - if fc.IN_NOTEBOOK: - from IPython.display import Markdown - return Markdown(res) - else: print(res) - - - -# %% ../nbs/00_siamese.ipynb 40 -@fc.delegates(plt.Axes.imshow) -def show_image(im, ax=None, figsize=None, title=None, noframe=True, **kwargs): - "Show a PIL or PyTorch image on `ax`." - if fc.hasattrs(im, ('cpu','permute','detach')): - im = im.detach().cpu() - if len(im.shape)==3 and im.shape[0]<5: im=im.permute(1,2,0) - elif not isinstance(im,np.ndarray): im=np.array(im) - if im.shape[-1]==1: im=im[...,0] - if ax is None: _,ax = plt.subplots(figsize=figsize) - ax.imshow(im, **kwargs, cmap='gray') - if title is not None: ax.set_title(title, color="red") - ax.set_xticks([]) - ax.set_yticks([]) - if noframe: ax.axis('off') - - return ax - -@fc.delegates(plt.subplots, keep=True) -def subplots( - nrows:int=1, # Number of rows in returned axes grid - ncols:int=1, # Number of columns in returned axes grid - figsize:tuple=None, # Width, height in inches of the returned figure - imsize:int=3, # Size (in inches) of images that will be displayed in the returned figure - suptitle:str=None, # Title to be set to returned figure - **kwargs -): # fig and axs - "A figure and set of subplots to display images of `imsize` inches" - if figsize is None: figsize=(ncols*imsize, nrows*imsize) - fig,ax = plt.subplots(nrows, ncols, figsize=figsize, **kwargs) - if suptitle is not None: fig.suptitle(suptitle) - if nrows*ncols==1: ax = np.array([ax]) - - return fig,ax - -@fc.delegates(subplots) -def get_grid( - n:int, # Number of axes - nrows:int=None, # Number of rows, defaulting to `int(math.sqrt(n))` - ncols:int=None, # Number of columns, defaulting to `ceil(n/rows)` - title:str=None, # If passed, title set to the figure - weight:str='bold', # Title font weight - size:int=14, # Title font size - **kwargs, -): # fig and axs - "Return a grid of `n` axes, `rows` by `cols`" - if nrows: ncols = ncols or int(np.floor(n/nrows)) - elif ncols: nrows = nrows or int(np.ceil(n/ncols)) - else: - nrows = int(math.sqrt(n)) - ncols = int(np.floor(n/nrows)) - fig,axs = subplots(nrows, ncols, **kwargs) - for i in range(n, nrows*ncols): axs.flat[i].set_axis_off() - if title is not None: fig.suptitle(title, weight=weight, size=size) - return fig,axs - -@fc.delegates(subplots) -def show_images(ims:list, # Images to show - nrows:int|None=None, # Number of rows in grid - ncols:int|None=None, # Number of columns in grid (auto-calculated if None) - titles:list|None=None, # Optional list of titles for each image - **kwargs): - "Show all images `ims` as subplots with `rows` using `titles`" - axs = get_grid(len(ims), nrows, ncols, **kwargs)[1].flat - for im,t,ax in zip_longest(ims, titles or [], axs): show_image(im, ax=ax, title=t) - -def reshape_alternating(tens1, tens2): - new = torch.stack((tens1, tens2), dim=0) - return torch.transpose(new,0,1).flatten(start_dim=0, end_dim=1) diff --git a/spaces/daffyshaci/bert-keyword-extraction/app.py b/spaces/daffyshaci/bert-keyword-extraction/app.py deleted file mode 100644 index a0e1f295d96091ead250209c02afc8942c1fbc77..0000000000000000000000000000000000000000 --- a/spaces/daffyshaci/bert-keyword-extraction/app.py +++ /dev/null @@ -1,157 +0,0 @@ -import pandas as pd - -import gradio as gr -import stanza - -import torch - -from collections import Counter - -import nltk -from thefuzz import process -from thefuzz import fuzz -import re - -from sentence_transformers import SentenceTransformer -from keyphrase_vectorizers import KeyphraseCountVectorizer -from keybert import KeyBERT - -DEVICE = torch.device("cuda" if torch.cuda.is_available() else "cpu") -print(DEVICE) - -def get_fuzz_candidates(candidates): - result = [] - try: - for item in candidates: - if len(result) == 0: - result.append(item) - elif len(result) == 1: - extract = fuzz.token_sort_ratio(item, candidates[0]) - if extract < 85: - result.append(item) - else: - extract = process.extractOne(item, result, scorer=fuzz.token_sort_ratio) - if extract[1] < 85: - result.append(item) - - except Exception as e: - print(str(e)) - - -def pre_proccess_text(text): - stopword = ['ada', 'adalah', 'adanya', 'adapun', 'agak', 'agaknya', 'agar', 'akan', 'akankah', 'akhir', 'akhiri', 'akhirnya', 'aku', 'akulah', 'amat', 'amatlah', 'anda', 'andalah', 'antar', 'antara', 'antaranya', 'apa', 'apaan', 'apabila', 'apakah', 'apalagi', 'apatah', 'artinya', 'asal', 'asalkan', 'atas', 'atau', 'ataukah', 'ataupun', 'awal', 'awalnya', 'bagai', 'bagaikan', 'bagaimana', 'bagaimanakah', 'bagaimanapun', 'bagi', 'bagian', 'bahkan', 'bahwa', 'bahwasanya', 'baik', 'bakal', 'bakalan', 'balik', 'banyak', 'bapak', 'baru', 'bawah', 'beberapa', 'begini', 'beginian', 'beginikah', 'beginilah', 'begitu', 'begitukah', 'begitulah', 'begitupun', 'bekerja', 'belakang', 'belakangan', 'belum', 'belumlah', 'benar', 'benarkah', 'benarlah', 'berada', 'berakhir', 'berakhirlah', 'berakhirnya', 'berapa', 'berapakah', 'berapalah', 'berapapun', 'berarti', 'berawal', 'berbagai', 'berdatangan', 'beri', 'berikan', 'berikut', 'berikutnya', 'berjumlah', 'berkali-kali', 'berkata', 'berkehendak', 'berkeinginan', 'berkenaan', 'berlainan', 'berlalu', 'berlangsung', 'berlebihan', 'bermacam', 'bermacam-macam', 'bermaksud', 'bermula', 'bersama', 'bersama-sama', 'bersiap', 'bersiap-siap', 'bertanya', 'bertanya-tanya', 'berturut', 'berturut-turut', 'bertutur', 'berujar', 'berupa', 'besar', 'betul', 'betulkah', 'biasa', 'biasanya', 'bila', 'bilakah', 'bisa', 'bisakah', 'boleh', 'bolehkah', 'bolehlah', 'buat', 'bukan', 'bukankah', 'bukanlah', 'bukannya', 'bulan', 'bung', 'cara', 'caranya', 'cukup', 'cukupkah', 'cukuplah', 'cuma', 'dahulu', 'dalam', 'dan', 'dapat', 'dari', 'daripada', 'datang', 'dekat', 'demi', 'demikian', 'demikianlah', 'dengan', 'depan', 'di', 'dia', 'diakhiri', 'diakhirinya', 'dialah', 'diantara', 'diantaranya', 'diberi', 'diberikan', 'diberikannya', 'dibuat', 'dibuatnya', 'didapat', 'didatangkan', 'digunakan', 'diibaratkan', 'diibaratkannya', 'diingat', 'diingatkan', 'diinginkan', 'dijawab', 'dijelaskan', 'dijelaskannya', 'dikarenakan', 'dikatakan', 'dikatakannya', 'dikerjakan', 'diketahui', 'diketahuinya', 'dikira', 'dilakukan', 'dilalui', 'dilihat', 'dimaksud', 'dimaksudkan', 'dimaksudkannya', 'dimaksudnya', 'diminta', 'dimintai', 'dimisalkan', 'dimulai', 'dimulailah', 'dimulainya', 'dimungkinkan', 'dini', 'dipastikan', 'diperbuat', 'diperbuatnya', 'dipergunakan', 'diperkirakan', 'diperlihatkan', 'diperlukan', 'diperlukannya', 'dipersoalkan', 'dipertanyakan', 'dipunyai', 'diri', 'dirinya', 'disampaikan', 'disebut', 'disebutkan', 'disebutkannya', 'disini', 'disinilah', 'ditambahkan', 'ditandaskan', 'ditanya', 'ditanyai', 'ditanyakan', 'ditegaskan', 'ditujukan', 'ditunjuk', 'ditunjuki', 'ditunjukkan', 'ditunjukkannya', 'ditunjuknya', 'dituturkan', 'dituturkannya', 'diucapkan', 'diucapkannya', 'diungkapkan', 'dong', 'dua', 'dulu', 'empat', 'enggak', 'enggaknya', 'entah', 'entahlah', 'guna', 'gunakan', 'hal', 'hampir', 'hanya', 'hanyalah', 'hari', 'harus', 'haruslah', 'harusnya', 'hendak', 'hendaklah', 'hendaknya', 'hingga', 'ia', 'ialah', 'ibarat', 'ibaratkan', 'ibaratnya', 'ibu', 'ikut', 'ingat', 'ingat-ingat', 'ingin', 'inginkah', 'inginkan', 'ini', 'inikah', 'inilah', 'itu', 'itukah', 'itulah', 'jadi', 'jadilah', 'jadinya', 'jangan', 'jangankan', 'janganlah', 'jauh', 'jawab', 'jawaban', 'jawabnya', 'jelas', 'jelaskan', 'jelaslah', 'jelasnya', 'jika', 'jikalau', 'juga', 'jumlah', 'jumlahnya', 'justru', 'kala', 'kalau', 'kalaulah', 'kalaupun', 'kalian', 'kami', 'kamilah', 'kamu', 'kamulah', 'kan', 'kapan', 'kapankah', 'kapanpun', 'karena', 'karenanya', 'kasus', 'kata', 'katakan', 'katakanlah', 'katanya', 'ke', 'keadaan', 'kebetulan', 'kecil', 'kedua', 'keduanya', 'keinginan', 'kelamaan', 'kelihatan', 'kelihatannya', 'kelima', 'keluar', 'kembali', 'kemudian', 'kemungkinan', 'kemungkinannya', 'kenapa', 'kepada', 'kepadanya', 'kesampaian', 'keseluruhan', 'keseluruhannya', 'keterlaluan', 'ketika', 'khususnya', 'kini', 'kinilah', 'kira', 'kira-kira', 'kiranya', 'kita', 'kitalah', 'kok', 'kurang', 'lagi', 'lagian', 'lah', 'lain', 'lainnya', 'lalu', 'lama', 'lamanya', 'lanjut', 'lanjutnya', 'lebih', 'lewat', 'lima', 'luar', 'macam', 'maka', 'makanya', 'makin', 'malah', 'malahan', 'mampu', 'mampukah', 'mana', 'manakala', 'manalagi', 'masa', 'masalah', 'masalahnya', 'masih', 'masihkah', 'masing', 'masing-masing', 'mau', 'maupun', 'melainkan', 'melakukan', 'melalui', 'melihat', 'melihatnya', 'memang', 'memastikan', 'memberi', 'memberikan', 'membuat', 'memerlukan', 'memihak', 'meminta', 'memintakan', 'memisalkan', 'memperbuat', 'mempergunakan', 'memperkirakan', 'memperlihatkan', 'mempersiapkan', 'mempersoalkan', 'mempertanyakan', 'mempunyai', 'memulai', 'memungkinkan', 'menaiki', 'menambahkan', 'menandaskan', 'menanti', 'menanti-nanti', 'menantikan', 'menanya', 'menanyai', 'menanyakan', 'mendapat', 'mendapatkan', 'mendatang', 'mendatangi', 'mendatangkan', 'menegaskan', 'mengakhiri', 'mengapa', 'mengatakan', 'mengatakannya', 'mengenai', 'mengerjakan', 'mengetahui', 'menggunakan', 'menghendaki', 'mengibaratkan', 'mengibaratkannya', 'mengingat', 'mengingatkan', 'menginginkan', 'mengira', 'mengucapkan', 'mengucapkannya', 'mengungkapkan', 'menjadi', 'menjawab', 'menjelaskan', 'menuju', 'menunjuk', 'menunjuki', 'menunjukkan', 'menunjuknya', 'menurut', 'menuturkan', 'menyampaikan', 'menyangkut', 'menyatakan', 'menyebutkan', 'menyeluruh', 'menyiapkan', 'merasa', 'mereka', 'merekalah', 'merupakan', 'meski', 'meskipun', 'meyakini', 'meyakinkan', 'minta', 'mirip', 'misal', 'misalkan', 'misalnya', 'mula', 'mulai', 'mulailah', 'mulanya', 'mungkin', 'mungkinkah', 'nah', 'naik', 'namun', 'nanti', 'nantinya', 'nyaris', 'nyatanya', 'oleh', 'olehnya', 'pada', 'padahal', 'padanya', 'pak', 'paling', 'panjang', 'pantas', 'para', 'pasti', 'pastilah', 'penting', 'pentingnya', 'per', 'percuma', 'perlu', 'perlukah', 'perlunya', 'pernah', 'persoalan', 'pertama', 'pertama-tama', 'pertanyaan', 'pertanyakan', 'pihak', 'pihaknya', 'pukul', 'pula', 'pun', 'punya', 'rasa', 'rasanya', 'rata', 'rupanya', 'saat', 'saatnya', 'saja', 'sajalah', 'saling', 'sama', 'sama-sama', 'sambil', 'sampai', 'sampai-sampai', 'sampaikan', 'sana', 'sangat', 'sangatlah', 'satu', 'saya', 'sayalah', 'se', 'sebab', 'sebabnya', 'sebagai', 'sebagaimana', 'sebagainya', 'sebagian', 'sebaik', 'sebaik-baiknya', 'sebaiknya', 'sebaliknya', 'sebanyak', 'sebegini', 'sebegitu', 'sebelum', 'sebelumnya', 'sebenarnya', 'seberapa', 'sebesar', 'sebetulnya', 'sebisanya', 'sebuah', 'sebut', 'sebutlah', 'sebutnya', 'secara', 'secukupnya', 'sedang', 'sedangkan', 'sedemikian', 'sedikit', 'sedikitnya', 'seenaknya', 'segala', 'segalanya', 'segera', 'seharusnya', 'sehingga', 'seingat', 'sejak', 'sejauh', 'sejenak', 'sejumlah', 'sekadar', 'sekadarnya', 'sekali', 'sekali-kali', 'sekalian', 'sekaligus', 'sekalipun', 'sekarang', 'sekarang', 'sekecil', 'seketika', 'sekiranya', 'sekitar', 'sekitarnya', 'sekurang-kurangnya', 'sekurangnya', 'sela', 'selain', 'selaku', 'selalu', 'selama', 'selama-lamanya', 'selamanya', 'selanjutnya', 'seluruh', 'seluruhnya', 'semacam', 'semakin', 'semampu', 'semampunya', 'semasa', 'semasih', 'semata', 'semata-mata', 'semaunya', 'sementara', 'semisal', 'semisalnya', 'sempat', 'semua', 'semuanya', 'semula', 'sendiri', 'sendirian', 'sendirinya', 'seolah', 'seolah-olah', 'seorang', 'sepanjang', 'sepantasnya', 'sepantasnyalah', 'seperlunya', 'seperti', 'sepertinya', 'sepihak', 'sering', 'seringnya', 'serta', 'serupa', 'sesaat', 'sesama', 'sesampai', 'sesegera', 'sesekali', 'seseorang', 'sesuatu', 'sesuatunya', 'sesudah', 'sesudahnya', 'setelah', 'setempat', 'setengah', 'seterusnya', 'setiap', 'setiba', 'setibanya', 'setidak-tidaknya', 'setidaknya', 'setinggi', 'seusai', 'sewaktu', 'siap', 'siapa', 'siapakah', 'siapapun', 'sini', 'sinilah', 'soal', 'soalnya', 'suatu', 'sudah', 'sudahkah', 'sudahlah', 'supaya', 'tadi', 'tadinya', 'tahu', 'tahun', 'tak', 'tambah', 'tambahnya', 'tampak', 'tampaknya', 'tandas', 'tandasnya', 'tanpa', 'tanya', 'tanyakan', 'tanyanya', 'tapi', 'tegas', 'tegasnya', 'telah', 'tempat', 'tengah', 'tentang', 'tentu', 'tentulah', 'tentunya', 'tepat', 'terakhir', 'terasa', 'terbanyak', 'terdahulu', 'terdapat', 'terdiri', 'terhadap', 'terhadapnya', 'teringat', 'teringat-ingat', 'terjadi', 'terjadilah', 'terjadinya', 'terkira', 'terlalu', 'terlebih', 'terlihat', 'termasuk', 'ternyata', 'tersampaikan', 'tersebut', 'tersebutlah', 'tertentu', 'tertuju', 'terus', 'terutama', 'tetap', 'tetapi', 'tiap', 'tiba', 'tiba-tiba', 'tidak', 'tidakkah', 'tidaklah', 'tiga', 'tinggi', 'toh', 'tunjuk', 'turut', 'tutur', 'tuturnya', 'ucap', 'ucapnya', 'ujar', 'ujarnya', 'umum', 'umumnya', 'ungkap', 'ungkapnya', 'untuk', 'usah', 'usai', 'waduh', 'wah', 'wahai', 'waktu', 'waktunya', 'walau', 'walaupun', 'wong', 'yaitu', 'yakin', 'yakni', 'yang'] - text_1 = text.lower().split() - text_1 = [x for x in text_1 if x not in stopword] - text_1 = " ".join(text_1) - return text_1 - -def parser_text(doc, text): - grammar = "NP: {+}" - # grammar = "NP: {+}" - parser = nltk.RegexpParser(grammar) - # create word and POS tag pair - pairs = [] - for sentence in doc.sentences: - tagged = [] - for word in sentence.words: - tagged.append((word.text, word.upos)) - pairs.append(tagged) - candidates = [] - for sentence in pairs: - parse_tree = parser.parse(sentence) - for subtree in parse_tree.subtrees(): - if subtree.label() == 'NP' and (len(subtree.leaves()) >= 1 and len(subtree.leaves()) <= 3): # only consider bigram - words = [item[0] for item in subtree.leaves()] - keyword = ' '.join(words) - clean_keyword = re.sub('[^A-Za-z0-9]+', ' ', keyword) - if len(re.findall(clean_keyword, text, flags=re.IGNORECASE)) > 0: - candidates.append(clean_keyword) - return candidates - -def sort_freq_candidates(self, candidates): - - freq = Counter(candidates) - freq_used = freq.most_common(500) - for k,v in freq_used: - if k not in self.candidates: - self.candidates.append(k) - -def keybert_keyword(text, candidate, diversity, result): - - top_p = result - if result > len(candidate): - top_p = len(candidate) - -# DEVICE = "cuda:0" - DEVICE = torch.device("cuda" if torch.cuda.is_available() else "cpu") - model_1 = 'paraphrase-multilingual-MiniLM-L12-v2' - - model = SentenceTransformer(model_1, device=DEVICE) - kw_model = KeyBERT(model) - - #vectorizer = KeyphraseCountVectorizer(pos_pattern='*+') - #vectorizer=vectorizer, - keyword = kw_model.extract_keywords(text, candidates=candidate, top_n=top_p, use_mmr=True, diversity=diversity) - keyword = parser_keyword(keyword, text) - return keyword - -def parser_keyword(keywords, text): - clean_data = [] - for k, v in keywords: - data = {} - if v > 0.22 : - freq = len(re.findall(k, text, flags=re.IGNORECASE)) - if freq > 0: - data['keyword'] = k - data['score'] = v - data['freq'] = len(re.findall(k, text, flags=re.IGNORECASE)) - clean_data.append(data) - - - return sorted(clean_data, key=lambda k: k['score'], reverse=True) - -def get_keyword(text, diversity, result, lang): - lang = lang - stanza.download(lang, processors='tokenize,pos') - nlp = stanza.Pipeline(lang=lang, processors='tokenize,pos', use_gpu=False) - list_candidates = [] - - clean_text = pre_proccess_text(text) - nlp = nlp(clean_text.lower()) - candidates = parser_text(nlp, text) - #freq = Counter(candidates) - #freq_used = freq.most_common(500) - #for k,v in freq_used: - # if k not in list_candidates: - # list_candidates.append(k) - - bert_keyword = keybert_keyword(text, candidates, diversity, result) - - kw = pd.DataFrame(columns=['keyword', 'Score', 'Freq']) - kw = kw.append(bert_keyword, ignore_index=True) - kw = pd.DataFrame(data=bert_keyword) - - return kw - - - -ex1 ="Bitcoint is a digital asset and currency created by Satoshi Nakamoto. It was proposed in 2008 and implemented in 2009 as the first practical cryptocurrency. Bitcoint is based on the blockchain technology, which allows secure, decentralized transactions and control of the creation of new units." -ex2 = "Bitcoin adalah mata uang digital yang pertama kali diperkenalkan pada tahun 2009. Digunakan untuk membeli barang dan melakukan transaksi di internet. Bitcoin terdesentralisasi, artinya tidak tunduk pada kontrol pemerintah atau lembaga keuangan. Bitcoin telah disebut sebagai contoh terbaik dari proyek perangkat lunak sumber terbuka karena kodenya terus diperbarui dan ditingkatkan." - - -title = "SEO NLP - Bert Keyword Extractor" -description = "Demo penggunaan BERT models untuk meng-ekstrak keyword yang penting dari input artikel. Penjelasan dari SEO NLP di saung seo" - -web = gr.Interface( - get_keyword, - inputs=[gr.Textbox(label="Artikel Text",lines=20), - gr.Slider(0.2,0.9,step=0.1,value=0.3,label="Diversity Keyword"), - gr.Slider(5,100,step=1,value=10,label="Max Result"), - gr.Dropdown(["id", "en", "es"])], - outputs=[ gr.outputs.Dataframe()], - examples=[[ex1,0.3,10,"en"],[ex2,0.3,10,"id"]], examples_per_page=2, live=False, - layout="horizontal", interpretation=None, title=title, - description=description - ) - -web.launch(debug=True) \ No newline at end of file diff --git a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/fastapi/exceptions.py b/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/fastapi/exceptions.py deleted file mode 100644 index 42f4709fba8f8d05003fee32dc13e25c92ade7e4..0000000000000000000000000000000000000000 --- a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/fastapi/exceptions.py +++ /dev/null @@ -1,55 +0,0 @@ -from typing import Any, Dict, Optional, Sequence, Type - -from pydantic import BaseModel, create_model -from starlette.exceptions import HTTPException as StarletteHTTPException -from starlette.exceptions import WebSocketException as WebSocketException # noqa: F401 - - -class HTTPException(StarletteHTTPException): - def __init__( - self, - status_code: int, - detail: Any = None, - headers: Optional[Dict[str, str]] = None, - ) -> None: - super().__init__(status_code=status_code, detail=detail, headers=headers) - - -RequestErrorModel: Type[BaseModel] = create_model("Request") -WebSocketErrorModel: Type[BaseModel] = create_model("WebSocket") - - -class FastAPIError(RuntimeError): - """ - A generic, FastAPI-specific error. - """ - - -class ValidationException(Exception): - def __init__(self, errors: Sequence[Any]) -> None: - self._errors = errors - - def errors(self) -> Sequence[Any]: - return self._errors - - -class RequestValidationError(ValidationException): - def __init__(self, errors: Sequence[Any], *, body: Any = None) -> None: - super().__init__(errors) - self.body = body - - -class WebSocketRequestValidationError(ValidationException): - pass - - -class ResponseValidationError(ValidationException): - def __init__(self, errors: Sequence[Any], *, body: Any = None) -> None: - super().__init__(errors) - self.body = body - - def __str__(self) -> str: - message = f"{len(self._errors)} validation errors:\n" - for err in self._errors: - message += f" {err}\n" - return message diff --git a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/gradio/components/plot.py b/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/gradio/components/plot.py deleted file mode 100644 index 54927b8b89885485ce1422daac54bbb99c46cd7b..0000000000000000000000000000000000000000 --- a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/gradio/components/plot.py +++ /dev/null @@ -1,159 +0,0 @@ -"""gr.Plot() component.""" - -from __future__ import annotations - -import json -from types import ModuleType -from typing import Any, Callable, Literal - -import altair as alt -import pandas as pd -from gradio_client.documentation import document, set_documentation_group -from gradio_client.serializing import JSONSerializable - -from gradio import processing_utils -from gradio.components.base import IOComponent, _Keywords -from gradio.deprecation import warn_style_method_deprecation -from gradio.events import Changeable, Clearable - -set_documentation_group("component") - - -@document() -class Plot(Changeable, Clearable, IOComponent, JSONSerializable): - """ - Used to display various kinds of plots (matplotlib, plotly, or bokeh are supported) - Preprocessing: this component does *not* accept input. - Postprocessing: expects either a {matplotlib.figure.Figure}, a {plotly.graph_objects._figure.Figure}, or a {dict} corresponding to a bokeh plot (json_item format) - - Demos: altair_plot, outbreak_forecast, blocks_kinematics, stock_forecast, map_airbnb - Guides: plot-component-for-maps - """ - - def __init__( - self, - value: Callable | None | pd.DataFrame = None, - *, - label: str | None = None, - every: float | None = None, - show_label: bool | None = None, - container: bool = True, - scale: int | None = None, - min_width: int = 160, - visible: bool = True, - elem_id: str | None = None, - elem_classes: list[str] | str | None = None, - **kwargs, - ): - """ - Parameters: - value: Optionally, supply a default plot object to display, must be a matplotlib, plotly, altair, or bokeh figure, or a callable. If callable, the function will be called whenever the app loads to set the initial value of the component. - label: component name in interface. - every: If `value` is a callable, run the function 'every' number of seconds while the client connection is open. Has no effect otherwise. Queue must be enabled. The event can be accessed (e.g. to cancel it) via this component's .load_event attribute. - show_label: if True, will display label. - container: If True, will place the component in a container - providing some extra padding around the border. - scale: relative width compared to adjacent Components in a Row. For example, if Component A has scale=2, and Component B has scale=1, A will be twice as wide as B. Should be an integer. - min_width: minimum pixel width, will wrap if not sufficient screen space to satisfy this value. If a certain scale value results in this Component being narrower than min_width, the min_width parameter will be respected first. - visible: If False, component will be hidden. - elem_id: An optional string that is assigned as the id of this component in the HTML DOM. Can be used for targeting CSS styles. - elem_classes: An optional list of strings that are assigned as the classes of this component in the HTML DOM. Can be used for targeting CSS styles. - """ - IOComponent.__init__( - self, - label=label, - every=every, - show_label=show_label, - container=container, - scale=scale, - min_width=min_width, - visible=visible, - elem_id=elem_id, - elem_classes=elem_classes, - value=value, - **kwargs, - ) - - def get_config(self): - try: - import bokeh # type: ignore - - bokeh_version = bokeh.__version__ - except ImportError: - bokeh_version = None - return { - "value": self.value, - "bokeh_version": bokeh_version, - **IOComponent.get_config(self), - } - - @staticmethod - def update( - value: Any | Literal[_Keywords.NO_VALUE] | None = _Keywords.NO_VALUE, - label: str | None = None, - show_label: bool | None = None, - container: bool | None = None, - scale: int | None = None, - min_width: int | None = None, - visible: bool | None = None, - ): - updated_config = { - "label": label, - "show_label": show_label, - "container": container, - "scale": scale, - "min_width": min_width, - "visible": visible, - "value": value, - "__type__": "update", - } - return updated_config - - def postprocess(self, y) -> dict[str, str] | None: - """ - Parameters: - y: plot data - Returns: - plot type mapped to plot base64 data - """ - import matplotlib.figure - - if y is None: - return None - if isinstance(y, (ModuleType, matplotlib.figure.Figure)): # type: ignore - dtype = "matplotlib" - out_y = processing_utils.encode_plot_to_base64(y) - elif "bokeh" in y.__module__: - dtype = "bokeh" - from bokeh.embed import json_item # type: ignore - - out_y = json.dumps(json_item(y)) - else: - is_altair = "altair" in y.__module__ - dtype = "altair" if is_altair else "plotly" - out_y = y.to_json() - return {"type": dtype, "plot": out_y} - - def style(self, container: bool | None = None): - """ - This method is deprecated. Please set these arguments in the constructor instead. - """ - warn_style_method_deprecation() - if container is not None: - self.container = container - return self - - -class AltairPlot: - @staticmethod - def create_legend(position, title): - if position == "none": - legend = None - else: - position = {"orient": position} if position else {} - legend = {"title": title, **position} - - return legend - - @staticmethod - def create_scale(limit): - return alt.Scale(domain=limit) if limit else alt.Undefined diff --git a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/gradio/templates/frontend/assets/index-e3702dba.js b/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/gradio/templates/frontend/assets/index-e3702dba.js deleted file mode 100644 index 46affce108983e2fefd10eea343955184f2a5576..0000000000000000000000000000000000000000 --- a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/gradio/templates/frontend/assets/index-e3702dba.js +++ /dev/null @@ -1,2 +0,0 @@ -import{S as te,e as se,s as ae,F as S,G as j,w as I,u as B,H as q,C as ie,V as oe,ae as ue,o as z,m as C,g as d,h as b,Q as fe,R as _e,r as re,v as me,k as v,I as G,P as ce,X as H,Y as M,j as K,n as X,Z as y,t as he,K as Y,p as E,x as ge,B as de}from"./index-39fce9e2.js";import{B as be}from"./Button-79f6e3bf.js";import{B as ve}from"./BlockLabel-b1428685.js";import{E as ke}from"./Empty-16d6169a.js";import{I as p}from"./Image-e7c48875.js";import{n as Z}from"./ModifyUpload.svelte_svelte_type_style_lang-14b768c9.js";function J(n,e,t){const l=n.slice();return l[27]=e[t][0],l[12]=e[t][1],l[29]=t,l}function O(n,e,t){const l=n.slice();return l[30]=e[t][0],l[12]=e[t][1],l[29]=t,l}function we(n){let e,t,l,s,i,a,r=G(n[13]?n[13][1]:[]),m=[];for(let u=0;u{_[w]=null}),me(),r=_[a],r?r.p(f,g):(r=_[a]=h[a](f),r.c()),I(r,1),r.m(i,null))},i(f){m||(I(e.$$.fragment,f),I(l.$$.fragment,f),I(r),m=!0)},o(f){B(e.$$.fragment,f),B(l.$$.fragment,f),B(r),m=!1},d(f){f&&(v(t),v(s),v(i)),q(e,f),q(l,f),_[a].d()}}}function Me(n){let e,t;return e=new be({props:{visible:n[2],elem_id:n[0],elem_classes:n[1],padding:!1,height:n[5],width:n[6],allow_overflow:!1,container:n[8],scale:n[9],min_width:n[10],$$slots:{default:[Ae]},$$scope:{ctx:n}}}),{c(){S(e.$$.fragment)},m(l,s){j(e,l,s),t=!0},p(l,s){const i={};s[0]&4&&(i.visible=l[2]),s[0]&1&&(i.elem_id=l[0]),s[0]&2&&(i.elem_classes=l[1]),s[0]&32&&(i.height=l[5]),s[0]&64&&(i.width=l[6]),s[0]&256&&(i.container=l[8]),s[0]&512&&(i.scale=l[9]),s[0]&1024&&(i.min_width=l[10]),s[0]&30904|s[1]&2&&(i.$$scope={dirty:s,ctx:l}),e.$set(i)},i(l){t||(I(e.$$.fragment,l),t=!0)},o(l){B(e.$$.fragment,l),t=!1},d(l){q(e,l)}}}function Ce(n,e,t){let{elem_id:l=""}=e,{elem_classes:s=[]}=e,{visible:i=!0}=e,{value:a}=e,r,m,{label:c="Annotated Image"}=e,{show_label:u=!0}=e,{show_legend:h=!0}=e,{height:_}=e,{width:k}=e,{color_map:f}=e,{container:g=!0}=e,{scale:D=null}=e,{min_width:A=void 0}=e,{root:w}=e,{root_url:F}=e,L=null,{loading_status:V}=e;const N=ie();function P(o){t(14,L=o)}function Q(){t(14,L=null)}const x=o=>P(o),$=o=>P(o),ee=()=>Q(),le=()=>Q(),ne=(o,R)=>N("select",{index:o,value:R});return n.$$set=o=>{"elem_id"in o&&t(0,l=o.elem_id),"elem_classes"in o&&t(1,s=o.elem_classes),"visible"in o&&t(2,i=o.visible),"value"in o&&t(18,a=o.value),"label"in o&&t(12,c=o.label),"show_label"in o&&t(3,u=o.show_label),"show_legend"in o&&t(4,h=o.show_legend),"height"in o&&t(5,_=o.height),"width"in o&&t(6,k=o.width),"color_map"in o&&t(7,f=o.color_map),"container"in o&&t(8,g=o.container),"scale"in o&&t(9,D=o.scale),"min_width"in o&&t(10,A=o.min_width),"root"in o&&t(19,w=o.root),"root_url"in o&&t(20,F=o.root_url),"loading_status"in o&&t(11,V=o.loading_status)},n.$$.update=()=>{n.$$.dirty[0]&3932160&&(a!==r&&(t(21,r=a),N("change")),a?t(13,m=[Z(a[0],w,F),a[1].map(([o,R])=>[Z(o,w,F),R])]):t(13,m=null))},[l,s,i,u,h,_,k,f,g,D,A,V,c,m,L,N,P,Q,a,w,F,r,x,$,ee,le,ne]}class Ee extends te{constructor(e){super(),se(this,e,Ce,Me,ae,{elem_id:0,elem_classes:1,visible:2,value:18,label:12,show_label:3,show_legend:4,height:5,width:6,color_map:7,container:8,scale:9,min_width:10,root:19,root_url:20,loading_status:11},null,[-1,-1])}}const Ge=Ee,He=["static"];export{Ge as Component,He as modes}; -//# sourceMappingURL=index-e3702dba.js.map diff --git a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/huggingface_hub/utils/_git_credential.py b/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/huggingface_hub/utils/_git_credential.py deleted file mode 100644 index fc287b2a77236df4024b53bccc2559a99a79b8f7..0000000000000000000000000000000000000000 --- a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/huggingface_hub/utils/_git_credential.py +++ /dev/null @@ -1,96 +0,0 @@ -# coding=utf-8 -# Copyright 2022-present, the HuggingFace Inc. team. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. -"""Contains utilities to manage Git credentials.""" -import subprocess -from typing import List, Optional - -from ..constants import ENDPOINT -from ._subprocess import run_interactive_subprocess, run_subprocess - - -def list_credential_helpers(folder: Optional[str] = None) -> List[str]: - """Return the list of git credential helpers configured. - - See https://git-scm.com/docs/gitcredentials. - - Credentials are saved in all configured helpers (store, cache, macOS keychain,...). - Calls "`git credential approve`" internally. See https://git-scm.com/docs/git-credential. - - Args: - folder (`str`, *optional*): - The folder in which to check the configured helpers. - """ - try: - output = run_subprocess("git config --list", folder=folder).stdout - # NOTE: If user has set an helper for a custom URL, it will not we caught here. - # Example: `credential.https://huggingface.co.helper=store` - # See: https://github.com/huggingface/huggingface_hub/pull/1138#discussion_r1013324508 - return sorted( # Sort for nice printing - { # Might have some duplicates - line.split("=")[-1].split()[0] for line in output.split("\n") if "credential.helper" in line - } - ) - except subprocess.CalledProcessError as exc: - raise EnvironmentError(exc.stderr) - - -def set_git_credential(token: str, username: str = "hf_user", folder: Optional[str] = None) -> None: - """Save a username/token pair in git credential for HF Hub registry. - - Credentials are saved in all configured helpers (store, cache, macOS keychain,...). - Calls "`git credential approve`" internally. See https://git-scm.com/docs/git-credential. - - Args: - username (`str`, defaults to `"hf_user"`): - A git username. Defaults to `"hf_user"`, the default user used in the Hub. - token (`str`, defaults to `"hf_user"`): - A git password. In practice, the User Access Token for the Hub. - See https://huggingface.co/settings/tokens. - folder (`str`, *optional*): - The folder in which to check the configured helpers. - """ - with run_interactive_subprocess("git credential approve", folder=folder) as ( - stdin, - _, - ): - stdin.write(f"url={ENDPOINT}\nusername={username.lower()}\npassword={token}\n\n") - stdin.flush() - - -def unset_git_credential(username: str = "hf_user", folder: Optional[str] = None) -> None: - """Erase credentials from git credential for HF Hub registry. - - Credentials are erased from the configured helpers (store, cache, macOS - keychain,...), if any. If `username` is not provided, any credential configured for - HF Hub endpoint is erased. - Calls "`git credential erase`" internally. See https://git-scm.com/docs/git-credential. - - Args: - username (`str`, defaults to `"hf_user"`): - A git username. Defaults to `"hf_user"`, the default user used in the Hub. - folder (`str`, *optional*): - The folder in which to check the configured helpers. - """ - with run_interactive_subprocess("git credential reject", folder=folder) as ( - stdin, - _, - ): - standard_input = f"url={ENDPOINT}\n" - if username is not None: - standard_input += f"username={username.lower()}\n" - standard_input += "\n" - - stdin.write(standard_input) - stdin.flush() diff --git a/spaces/deelerb/3dselfie/PIFu/lib/renderer/gl/glcontext.py b/spaces/deelerb/3dselfie/PIFu/lib/renderer/gl/glcontext.py deleted file mode 100644 index 881df0feca38678d6c075ef85ae65c12875b6b48..0000000000000000000000000000000000000000 --- a/spaces/deelerb/3dselfie/PIFu/lib/renderer/gl/glcontext.py +++ /dev/null @@ -1,142 +0,0 @@ -"""Headless GPU-accelerated OpenGL context creation on Google Colaboratory. - -Typical usage: - - # Optional PyOpenGL configuratiopn can be done here. - # import OpenGL - # OpenGL.ERROR_CHECKING = True - - # 'glcontext' must be imported before any OpenGL.* API. - from lucid.misc.gl.glcontext import create_opengl_context - - # Now it's safe to import OpenGL and EGL functions - import OpenGL.GL as gl - - # create_opengl_context() creates a GL context that is attached to an - # offscreen surface of the specified size. Note that rendering to buffers - # of other sizes and formats is still possible with OpenGL Framebuffers. - # - # Users are expected to directly use the EGL API in case more advanced - # context management is required. - width, height = 640, 480 - create_opengl_context((width, height)) - - # OpenGL context is available here. - -""" - -from __future__ import print_function - -# pylint: disable=unused-import,g-import-not-at-top,g-statement-before-imports - -try: - import OpenGL -except: - print('This module depends on PyOpenGL.') - print('Please run "\033[1m!pip install -q pyopengl\033[0m" ' - 'prior importing this module.') - raise - -import ctypes -from ctypes import pointer, util -import os - -os.environ['PYOPENGL_PLATFORM'] = 'egl' - -# OpenGL loading workaround. -# -# * PyOpenGL tries to load libGL, but we need libOpenGL, see [1,2]. -# This could have been solved by a symlink libGL->libOpenGL, but: -# -# * Python 2.7 can't find libGL and linEGL due to a bug (see [3]) -# in ctypes.util, that was only wixed in Python 3.6. -# -# So, the only solution I've found is to monkeypatch ctypes.util -# [1] https://devblogs.nvidia.com/egl-eye-opengl-visualization-without-x-server/ -# [2] https://devblogs.nvidia.com/linking-opengl-server-side-rendering/ -# [3] https://bugs.python.org/issue9998 -_find_library_old = ctypes.util.find_library -try: - - def _find_library_new(name): - return { - 'GL': 'libOpenGL.so', - 'EGL': 'libEGL.so', - }.get(name, _find_library_old(name)) - util.find_library = _find_library_new - import OpenGL.GL as gl - import OpenGL.EGL as egl - from OpenGL import error - from OpenGL.EGL.EXT.device_base import egl_get_devices - from OpenGL.raw.EGL.EXT.platform_device import EGL_PLATFORM_DEVICE_EXT -except: - print('Unable to load OpenGL libraries. ' - 'Make sure you use GPU-enabled backend.') - print('Press "Runtime->Change runtime type" and set ' - '"Hardware accelerator" to GPU.') - raise -finally: - util.find_library = _find_library_old - -def create_initialized_headless_egl_display(): - """Creates an initialized EGL display directly on a device.""" - for device in egl_get_devices(): - display = egl.eglGetPlatformDisplayEXT(EGL_PLATFORM_DEVICE_EXT, device, None) - - if display != egl.EGL_NO_DISPLAY and egl.eglGetError() == egl.EGL_SUCCESS: - # `eglInitialize` may or may not raise an exception on failure depending - # on how PyOpenGL is configured. We therefore catch a `GLError` and also - # manually check the output of `eglGetError()` here. - try: - initialized = egl.eglInitialize(display, None, None) - except error.GLError: - pass - else: - if initialized == egl.EGL_TRUE and egl.eglGetError() == egl.EGL_SUCCESS: - return display - return egl.EGL_NO_DISPLAY - -def create_opengl_context(surface_size=(640, 480)): - """Create offscreen OpenGL context and make it current. - - Users are expected to directly use EGL API in case more advanced - context management is required. - - Args: - surface_size: (width, height), size of the offscreen rendering surface. - """ - egl_display = create_initialized_headless_egl_display() - if egl_display == egl.EGL_NO_DISPLAY: - raise ImportError('Cannot initialize a headless EGL display.') - - major, minor = egl.EGLint(), egl.EGLint() - egl.eglInitialize(egl_display, pointer(major), pointer(minor)) - - config_attribs = [ - egl.EGL_SURFACE_TYPE, egl.EGL_PBUFFER_BIT, egl.EGL_BLUE_SIZE, 8, - egl.EGL_GREEN_SIZE, 8, egl.EGL_RED_SIZE, 8, egl.EGL_DEPTH_SIZE, 24, - egl.EGL_RENDERABLE_TYPE, egl.EGL_OPENGL_BIT, egl.EGL_NONE - ] - config_attribs = (egl.EGLint * len(config_attribs))(*config_attribs) - - num_configs = egl.EGLint() - egl_cfg = egl.EGLConfig() - egl.eglChooseConfig(egl_display, config_attribs, pointer(egl_cfg), 1, - pointer(num_configs)) - - width, height = surface_size - pbuffer_attribs = [ - egl.EGL_WIDTH, - width, - egl.EGL_HEIGHT, - height, - egl.EGL_NONE, - ] - pbuffer_attribs = (egl.EGLint * len(pbuffer_attribs))(*pbuffer_attribs) - egl_surf = egl.eglCreatePbufferSurface(egl_display, egl_cfg, pbuffer_attribs) - - egl.eglBindAPI(egl.EGL_OPENGL_API) - - egl_context = egl.eglCreateContext(egl_display, egl_cfg, egl.EGL_NO_CONTEXT, - None) - egl.eglMakeCurrent(egl_display, egl_surf, egl_surf, egl_context) diff --git a/spaces/deepwisdom/MetaGPT/examples/write_teaching_plan.py b/spaces/deepwisdom/MetaGPT/examples/write_teaching_plan.py deleted file mode 100644 index c3a647b94ad83344e11049fb732a3824b2a662c5..0000000000000000000000000000000000000000 --- a/spaces/deepwisdom/MetaGPT/examples/write_teaching_plan.py +++ /dev/null @@ -1,113 +0,0 @@ -#!/usr/bin/env python -# -*- coding: utf-8 -*- -""" -@Time : 2023-07-27 -@Author : mashenquan -@File : write_teaching_plan.py -@Desc: Write teaching plan demo - ``` - export PYTHONPATH=$PYTHONPATH:$PWD - python examples/write_teaching_plan.py --language=Chinese --teaching_language=English - - ``` -""" - -import asyncio -from pathlib import Path - -from metagpt.config import CONFIG - -import aiofiles -import fire -from metagpt.logs import logger -from metagpt.actions.write_teaching_plan import TeachingPlanRequirement -from metagpt.roles.teacher import Teacher -from metagpt.software_company import SoftwareCompany - - -async def startup(lesson_file: str, investment: float = 3.0, n_round: int = 1, *args, **kwargs): - """Run a startup. Be a teacher in education industry.""" - - demo_lesson = """ - UNIT 1 Making New Friends - TOPIC 1 Welcome to China! - Section A - - 1a Listen and number the following names. - Jane Mari Kangkang Michael - Look, listen and understand. Then practice the conversation. - Work in groups. Introduce yourself using - I ’m ... Then practice 1a - with your own hometown or the following places. - - 1b Listen and number the following names - Jane Michael Maria Kangkang - 1c Work in groups. Introduce yourself using I ’m ... Then practice 1a with your own hometown or the following places. - China the USA the UK Hong Kong Beijing - - 2a Look, listen and understand. Then practice the conversation - Hello! - Hello! - Hello! - Hello! Are you Maria? - No, I’m not. I’m Jane. - Oh, nice to meet you, Jane - Nice to meet you, too. - Hi, Maria! - Hi, Kangkang! - Welcome to China! - Thanks. - - 2b Work in groups. Make up a conversation with your own name and the - following structures. - A: Hello! / Good morning! / Hi! I’m ... Are you ... ? - B: ... - - 3a Listen, say and trace - Aa Bb Cc Dd Ee Ff Gg - - 3b Listen and number the following letters. Then circle the letters with the same sound as Bb. - Aa Bb Cc Dd Ee Ff Gg - - 3c Match the big letters with the small ones. Then write them on the lines. - """ - CONFIG.set_context(kwargs) - - lesson = "" - if lesson_file and Path(lesson_file).exists(): - async with aiofiles.open(lesson_file, mode="r", encoding="utf-8") as reader: - lesson = await reader.read() - logger.info(f"Course content: {lesson}") - if not lesson: - logger.info("No course content provided, using the demo course.") - lesson = demo_lesson - - company = SoftwareCompany() - company.hire([Teacher(*args, **kwargs)]) - company.invest(investment) - company.start_project(lesson, cause_by=TeachingPlanRequirement, role="Teacher", **kwargs) - await company.run(n_round=1) - - -def main(idea: str, investment: float = 3.0, n_round: int = 5, *args, **kwargs): - """ - We are a software startup comprised of AI. By investing in us, you are empowering a future filled with limitless possibilities. - :param idea: lesson filename. - :param investment: As an investor, you have the opportunity to contribute a certain dollar amount to this AI company. - :param n_round: Reserved. - :param args: Parameters passed in format: `python your_script.py arg1 arg2 arg3` - :param kwargs: Parameters passed in format: `python your_script.py --param1=value1 --param2=value2` - :return: - """ - asyncio.run(startup(idea, investment, n_round, *args, **kwargs)) - - -if __name__ == '__main__': - """ - Formats: - ``` - python write_teaching_plan.py lesson_filename --teaching_language= --language= - ``` - If `lesson_filename` is not available, a demo lesson content will be used. - """ - fire.Fire(main) diff --git a/spaces/deepwisdom/MetaGPT/tests/metagpt/memory/test_memory_storage.py b/spaces/deepwisdom/MetaGPT/tests/metagpt/memory/test_memory_storage.py deleted file mode 100644 index 6bb3e8f1d5154d74b6e84244299a693fbf2d2b69..0000000000000000000000000000000000000000 --- a/spaces/deepwisdom/MetaGPT/tests/metagpt/memory/test_memory_storage.py +++ /dev/null @@ -1,82 +0,0 @@ -#!/usr/bin/env python -# -*- coding: utf-8 -*- -# @Desc : the unittests of metagpt/memory/memory_storage.py - -from typing import List - -from metagpt.memory.memory_storage import MemoryStorage -from metagpt.schema import Message -from metagpt.actions import BossRequirement -from metagpt.actions import WritePRD -from metagpt.actions.action_output import ActionOutput - - -def test_idea_message(): - idea = 'Write a cli snake game' - role_id = 'UTUser1(Product Manager)' - message = Message(role='BOSS', content=idea, cause_by=BossRequirement) - - memory_storage: MemoryStorage = MemoryStorage() - messages = memory_storage.recover_memory(role_id) - assert len(messages) == 0 - - memory_storage.add(message) - assert memory_storage.is_initialized is True - - sim_idea = 'Write a game of cli snake' - sim_message = Message(role='BOSS', content=sim_idea, cause_by=BossRequirement) - new_messages = memory_storage.search(sim_message) - assert len(new_messages) == 0 # similar, return [] - - new_idea = 'Write a 2048 web game' - new_message = Message(role='BOSS', content=new_idea, cause_by=BossRequirement) - new_messages = memory_storage.search(new_message) - assert new_messages[0].content == message.content - - memory_storage.clean() - assert memory_storage.is_initialized is False - - -def test_actionout_message(): - out_mapping = { - 'field1': (str, ...), - 'field2': (List[str], ...) - } - out_data = { - 'field1': 'field1 value', - 'field2': ['field2 value1', 'field2 value2'] - } - ic_obj = ActionOutput.create_model_class('prd', out_mapping) - - role_id = 'UTUser2(Architect)' - content = 'The boss has requested the creation of a command-line interface (CLI) snake game' - message = Message(content=content, - instruct_content=ic_obj(**out_data), - role='user', - cause_by=WritePRD) # WritePRD as test action - - memory_storage: MemoryStorage = MemoryStorage() - messages = memory_storage.recover_memory(role_id) - assert len(messages) == 0 - - memory_storage.add(message) - assert memory_storage.is_initialized is True - - sim_conent = 'The request is command-line interface (CLI) snake game' - sim_message = Message(content=sim_conent, - instruct_content=ic_obj(**out_data), - role='user', - cause_by=WritePRD) - new_messages = memory_storage.search(sim_message) - assert len(new_messages) == 0 # similar, return [] - - new_conent = 'Incorporate basic features of a snake game such as scoring and increasing difficulty' - new_message = Message(content=new_conent, - instruct_content=ic_obj(**out_data), - role='user', - cause_by=WritePRD) - new_messages = memory_storage.search(new_message) - assert new_messages[0].content == message.content - - memory_storage.clean() - assert memory_storage.is_initialized is False diff --git a/spaces/dhavala/KrishiGPT/README.md b/spaces/dhavala/KrishiGPT/README.md deleted file mode 100644 index 20bacf8e819368a6dae4e024c684e3e169315613..0000000000000000000000000000000000000000 --- a/spaces/dhavala/KrishiGPT/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Krishi GPT -emoji: 📚 -colorFrom: blue -colorTo: indigo -sdk: gradio -sdk_version: 3.24.1 -app_file: app.py -pinned: false -license: apache-2.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference \ No newline at end of file diff --git a/spaces/diacanFperku/AutoGPT/Alias Concept 2019 Crack Xforce 64 LINK.md b/spaces/diacanFperku/AutoGPT/Alias Concept 2019 Crack Xforce 64 LINK.md deleted file mode 100644 index f3e12e8b5321f782997c2e616b9fc395a94d0508..0000000000000000000000000000000000000000 --- a/spaces/diacanFperku/AutoGPT/Alias Concept 2019 Crack Xforce 64 LINK.md +++ /dev/null @@ -1,6 +0,0 @@ -

      Alias Concept 2019 crack xforce 64


      Download ☆☆☆ https://gohhs.com/2uFTfm



      -
      -Alias SpeedForm 2019 Crack Xforce 64 ->>> http://bytlly.com/1bxl5t. X-Force 2019 es el ... Autodesk Alias Concept 2019.... Autodesk Alias ... 4d29de3e1b
      -
      -
      -

      diff --git a/spaces/diacanFperku/AutoGPT/BewerbungsMaster Professional 2011 V2.1 Download Pc BEST.md b/spaces/diacanFperku/AutoGPT/BewerbungsMaster Professional 2011 V2.1 Download Pc BEST.md deleted file mode 100644 index 547a4a414b7e147f29d2ef266652fb3d018f68b2..0000000000000000000000000000000000000000 --- a/spaces/diacanFperku/AutoGPT/BewerbungsMaster Professional 2011 V2.1 Download Pc BEST.md +++ /dev/null @@ -1,6 +0,0 @@ -

      BewerbungsMaster Professional 2011 v2.1 download pc


      Download File ===== https://gohhs.com/2uFTNJ



      - -Unsere Bewerbungssoftware steht per Download zur Verfügung - jetzt testen! ... Viren überprüft, sodass Ihr PC garantiert virenfrei bleibt und Sie die Vorzüge des BewerbungsMasters ... 2021/v2.0 master.exe 22 MB Jetzt herunterladen. ZIP (alternativ) 2021/v2.0 ... Die Professional-Version ist für alle Bewerbungen geeignet. 4d29de3e1b
      -
      -
      -

      diff --git a/spaces/diacanFperku/AutoGPT/Drumsite 1 7 Serial Keygen VERIFIED Patch.md b/spaces/diacanFperku/AutoGPT/Drumsite 1 7 Serial Keygen VERIFIED Patch.md deleted file mode 100644 index 4d564ff01e48b8395ad1cd8eba14ed28fc7a2c61..0000000000000000000000000000000000000000 --- a/spaces/diacanFperku/AutoGPT/Drumsite 1 7 Serial Keygen VERIFIED Patch.md +++ /dev/null @@ -1,6 +0,0 @@ -

      drumsite 1 7 serial keygen patch


      DOWNLOADhttps://gohhs.com/2uFV6Y



      - -Driver Easy Pro Full Crack v You can easily download the latest types of computer drivers with this application. ... Game Like Godfather 1 and Godfather 2 pc game full was Release on 7 April ... Drumsite is a software for drums programming. 1fdad05405
      -
      -
      -

      diff --git a/spaces/digitalxingtong/Azuma-Bert-VITS2/data_utils.py b/spaces/digitalxingtong/Azuma-Bert-VITS2/data_utils.py deleted file mode 100644 index 2c98d3dc8b9572bd05859033a74d155425a2a2ab..0000000000000000000000000000000000000000 --- a/spaces/digitalxingtong/Azuma-Bert-VITS2/data_utils.py +++ /dev/null @@ -1,332 +0,0 @@ -import time -import os -import random -import numpy as np -import torch -import torch.utils.data -import torchaudio -import commons -from mel_processing import spectrogram_torch, mel_spectrogram_torch, spec_to_mel_torch -from utils import load_wav_to_torch, load_filepaths_and_text -from text import cleaned_text_to_sequence, get_bert - -"""Multi speaker version""" - - -class TextAudioSpeakerLoader(torch.utils.data.Dataset): - """ - 1) loads audio, speaker_id, text pairs - 2) normalizes text and converts them to sequences of integers - 3) computes spectrograms from audio files. - """ - - def __init__(self, audiopaths_sid_text, hparams): - self.audiopaths_sid_text = load_filepaths_and_text(audiopaths_sid_text) - self.max_wav_value = hparams.max_wav_value - self.sampling_rate = hparams.sampling_rate - self.filter_length = hparams.filter_length - self.hop_length = hparams.hop_length - self.win_length = hparams.win_length - self.sampling_rate = hparams.sampling_rate - self.spk_map = hparams.spk2id - self.hparams = hparams - - self.use_mel_spec_posterior = getattr(hparams, "use_mel_posterior_encoder", False) - if self.use_mel_spec_posterior: - self.n_mel_channels = getattr(hparams, "n_mel_channels", 80) - - self.cleaned_text = getattr(hparams, "cleaned_text", False) - - self.add_blank = hparams.add_blank - self.min_text_len = getattr(hparams, "min_text_len", 1) - self.max_text_len = getattr(hparams, "max_text_len", 300) - - random.seed(1234) - random.shuffle(self.audiopaths_sid_text) - self._filter() - - def _filter(self): - """ - Filter text & store spec lengths - """ - # Store spectrogram lengths for Bucketing - # wav_length ~= file_size / (wav_channels * Bytes per dim) = file_size / (1 * 2) - # spec_length = wav_length // hop_length - - audiopaths_sid_text_new = [] - lengths = [] - skipped = 0 - for _id, spk, language, text, phones, tone, word2ph in self.audiopaths_sid_text: - audiopath = f'{_id}' - if self.min_text_len <= len(phones) and len(phones) <= self.max_text_len: - phones = phones.split(" ") - tone = [int(i) for i in tone.split(" ")] - word2ph = [int(i) for i in word2ph.split(" ")] - audiopaths_sid_text_new.append([audiopath, spk, language, text, phones, tone, word2ph]) - lengths.append(os.path.getsize(audiopath) // (2 * self.hop_length)) - else: - skipped += 1 - print("skipped: ", skipped, ", total: ", len(self.audiopaths_sid_text)) - self.audiopaths_sid_text = audiopaths_sid_text_new - self.lengths = lengths - - def get_audio_text_speaker_pair(self, audiopath_sid_text): - # separate filename, speaker_id and text - audiopath, sid, language, text, phones, tone, word2ph = audiopath_sid_text - - bert, phones, tone, language = self.get_text(text, word2ph, phones, tone, language, audiopath) - - spec, wav = self.get_audio(audiopath) - sid = torch.LongTensor([int(self.spk_map[sid])]) - return (phones, spec, wav, sid, tone, language, bert) - - def get_audio(self, filename): - audio_norm, sampling_rate = torchaudio.load(filename, frame_offset=0, num_frames=-1, normalize=True, channels_first=True) - ''' - audio, sampling_rate = load_wav_to_torch(filename) - if sampling_rate != self.sampling_rate: - raise ValueError("{} {} SR doesn't match target {} SR".format( - sampling_rate, self.sampling_rate)) - audio_norm = audio / self.max_wav_value - audio_norm = audio_norm.unsqueeze(0) - ''' - spec_filename = filename.replace(".wav", ".spec.pt") - if self.use_mel_spec_posterior: - spec_filename = spec_filename.replace(".spec.pt", ".mel.pt") - if os.path.exists(spec_filename): - spec = torch.load(spec_filename) - else: - if self.use_mel_spec_posterior: - # if os.path.exists(filename.replace(".wav", ".spec.pt")): - # # spec, n_fft, num_mels, sampling_rate, fmin, fmax - # spec = spec_to_mel_torch( - # torch.load(filename.replace(".wav", ".spec.pt")), - # self.filter_length, self.n_mel_channels, self.sampling_rate, - # self.hparams.mel_fmin, self.hparams.mel_fmax) - spec = mel_spectrogram_torch(audio_norm, self.filter_length, - self.n_mel_channels, self.sampling_rate, self.hop_length, - self.win_length, self.hparams.mel_fmin, self.hparams.mel_fmax, center=False) - else: - spec = spectrogram_torch(audio_norm, self.filter_length, - self.sampling_rate, self.hop_length, self.win_length, - center=False) - spec = torch.squeeze(spec, 0) - torch.save(spec, spec_filename) - return spec, audio_norm - - def get_text(self, text, word2ph, phone, tone, language_str, wav_path): - # print(text, word2ph,phone, tone, language_str) - pold = phone - w2pho = [i for i in word2ph] - word2ph = [i for i in word2ph] - phone, tone, language = cleaned_text_to_sequence(phone, tone, language_str) - pold2 = phone - - if self.add_blank: - p1 = len(phone) - phone = commons.intersperse(phone, 0) - p2 = len(phone) - t1 = len(tone) - tone = commons.intersperse(tone, 0) - t2 = len(tone) - language = commons.intersperse(language, 0) - for i in range(len(word2ph)): - word2ph[i] = word2ph[i] * 2 - word2ph[0] += 1 - bert_path = wav_path.replace(".wav", ".bert.pt") - try: - bert = torch.load(bert_path) - assert bert.shape[-1] == len(phone) - except: - bert = get_bert(text, word2ph, language_str) - torch.save(bert, bert_path) - #print(bert.shape[-1], bert_path, text, pold) - assert bert.shape[-1] == len(phone) - - assert bert.shape[-1] == len(phone), ( - bert.shape, len(phone), sum(word2ph), p1, p2, t1, t2, pold, pold2, word2ph, text, w2pho) - phone = torch.LongTensor(phone) - tone = torch.LongTensor(tone) - language = torch.LongTensor(language) - return bert, phone, tone, language - - def get_sid(self, sid): - sid = torch.LongTensor([int(sid)]) - return sid - - def __getitem__(self, index): - return self.get_audio_text_speaker_pair(self.audiopaths_sid_text[index]) - - def __len__(self): - return len(self.audiopaths_sid_text) - - -class TextAudioSpeakerCollate(): - """ Zero-pads model inputs and targets - """ - - def __init__(self, return_ids=False): - self.return_ids = return_ids - - def __call__(self, batch): - """Collate's training batch from normalized text, audio and speaker identities - PARAMS - ------ - batch: [text_normalized, spec_normalized, wav_normalized, sid] - """ - # Right zero-pad all one-hot text sequences to max input length - _, ids_sorted_decreasing = torch.sort( - torch.LongTensor([x[1].size(1) for x in batch]), - dim=0, descending=True) - - max_text_len = max([len(x[0]) for x in batch]) - max_spec_len = max([x[1].size(1) for x in batch]) - max_wav_len = max([x[2].size(1) for x in batch]) - - text_lengths = torch.LongTensor(len(batch)) - spec_lengths = torch.LongTensor(len(batch)) - wav_lengths = torch.LongTensor(len(batch)) - sid = torch.LongTensor(len(batch)) - - text_padded = torch.LongTensor(len(batch), max_text_len) - tone_padded = torch.LongTensor(len(batch), max_text_len) - language_padded = torch.LongTensor(len(batch), max_text_len) - bert_padded = torch.FloatTensor(len(batch), 1024, max_text_len) - - spec_padded = torch.FloatTensor(len(batch), batch[0][1].size(0), max_spec_len) - wav_padded = torch.FloatTensor(len(batch), 1, max_wav_len) - text_padded.zero_() - tone_padded.zero_() - language_padded.zero_() - spec_padded.zero_() - wav_padded.zero_() - bert_padded.zero_() - for i in range(len(ids_sorted_decreasing)): - row = batch[ids_sorted_decreasing[i]] - - text = row[0] - text_padded[i, :text.size(0)] = text - text_lengths[i] = text.size(0) - - spec = row[1] - spec_padded[i, :, :spec.size(1)] = spec - spec_lengths[i] = spec.size(1) - - wav = row[2] - wav_padded[i, :, :wav.size(1)] = wav - wav_lengths[i] = wav.size(1) - - sid[i] = row[3] - - tone = row[4] - tone_padded[i, :tone.size(0)] = tone - - language = row[5] - language_padded[i, :language.size(0)] = language - - bert = row[6] - bert_padded[i, :, :bert.size(1)] = bert - - return text_padded, text_lengths, spec_padded, spec_lengths, wav_padded, wav_lengths, sid, tone_padded, language_padded, bert_padded - - -class DistributedBucketSampler(torch.utils.data.distributed.DistributedSampler): - """ - Maintain similar input lengths in a batch. - Length groups are specified by boundaries. - Ex) boundaries = [b1, b2, b3] -> any batch is included either {x | b1 < length(x) <=b2} or {x | b2 < length(x) <= b3}. - - It removes samples which are not included in the boundaries. - Ex) boundaries = [b1, b2, b3] -> any x s.t. length(x) <= b1 or length(x) > b3 are discarded. - """ - - def __init__(self, dataset, batch_size, boundaries, num_replicas=None, rank=None, shuffle=True): - super().__init__(dataset, num_replicas=num_replicas, rank=rank, shuffle=shuffle) - self.lengths = dataset.lengths - self.batch_size = batch_size - self.boundaries = boundaries - - self.buckets, self.num_samples_per_bucket = self._create_buckets() - self.total_size = sum(self.num_samples_per_bucket) - self.num_samples = self.total_size // self.num_replicas - - def _create_buckets(self): - buckets = [[] for _ in range(len(self.boundaries) - 1)] - for i in range(len(self.lengths)): - length = self.lengths[i] - idx_bucket = self._bisect(length) - if idx_bucket != -1: - buckets[idx_bucket].append(i) - - for i in range(len(buckets) - 1, 0, -1): - if len(buckets[i]) == 0: - buckets.pop(i) - self.boundaries.pop(i + 1) - - num_samples_per_bucket = [] - for i in range(len(buckets)): - len_bucket = len(buckets[i]) - total_batch_size = self.num_replicas * self.batch_size - rem = (total_batch_size - (len_bucket % total_batch_size)) % total_batch_size - num_samples_per_bucket.append(len_bucket + rem) - return buckets, num_samples_per_bucket - - def __iter__(self): - # deterministically shuffle based on epoch - g = torch.Generator() - g.manual_seed(self.epoch) - - indices = [] - if self.shuffle: - for bucket in self.buckets: - indices.append(torch.randperm(len(bucket), generator=g).tolist()) - else: - for bucket in self.buckets: - indices.append(list(range(len(bucket)))) - - batches = [] - for i in range(len(self.buckets)): - bucket = self.buckets[i] - len_bucket = len(bucket) - if (len_bucket == 0): - continue - ids_bucket = indices[i] - num_samples_bucket = self.num_samples_per_bucket[i] - - # add extra samples to make it evenly divisible - rem = num_samples_bucket - len_bucket - ids_bucket = ids_bucket + ids_bucket * (rem // len_bucket) + ids_bucket[:(rem % len_bucket)] - - # subsample - ids_bucket = ids_bucket[self.rank::self.num_replicas] - - # batching - for j in range(len(ids_bucket) // self.batch_size): - batch = [bucket[idx] for idx in ids_bucket[j * self.batch_size:(j + 1) * self.batch_size]] - batches.append(batch) - - if self.shuffle: - batch_ids = torch.randperm(len(batches), generator=g).tolist() - batches = [batches[i] for i in batch_ids] - self.batches = batches - - assert len(self.batches) * self.batch_size == self.num_samples - return iter(self.batches) - - def _bisect(self, x, lo=0, hi=None): - if hi is None: - hi = len(self.boundaries) - 1 - - if hi > lo: - mid = (hi + lo) // 2 - if self.boundaries[mid] < x and x <= self.boundaries[mid + 1]: - return mid - elif x <= self.boundaries[mid]: - return self._bisect(x, lo, mid) - else: - return self._bisect(x, mid + 1, hi) - else: - return -1 - - def __len__(self): - return self.num_samples // self.batch_size diff --git a/spaces/digitalxingtong/Jiaohuaji-Bert-Vits2/transcribe_genshin.py b/spaces/digitalxingtong/Jiaohuaji-Bert-Vits2/transcribe_genshin.py deleted file mode 100644 index acc98814af6189d129ab85946525bec55419a33f..0000000000000000000000000000000000000000 --- a/spaces/digitalxingtong/Jiaohuaji-Bert-Vits2/transcribe_genshin.py +++ /dev/null @@ -1,78 +0,0 @@ -# coding=gbk -import os -import argparse -import librosa -import numpy as np -from multiprocessing import Pool, cpu_count - -import soundfile -from scipy.io import wavfile -from tqdm import tqdm - -global speaker_annos -speaker_annos = [] - -def process(item): - spkdir, wav_name, args = item - speaker = spkdir.replace("\\", "/").split("/")[-1] - wav_path = os.path.join(args.in_dir, speaker, wav_name) - if os.path.exists(wav_path) and '.wav' in wav_path: - os.makedirs(os.path.join(args.out_dir, speaker), exist_ok=True) - wav, sr = librosa.load(wav_path, sr=args.sr) - soundfile.write( - os.path.join(args.out_dir, speaker, wav_name), - wav, - sr - ) - -def process_text(item): - spkdir, wav_name, args = item - speaker = spkdir.replace("\\", "/").split("/")[-1] - wav_path = os.path.join(args.in_dir, speaker, wav_name) - global speaker_annos - tr_name = wav_name.replace('.wav', '') - with open(args.out_dir+'/'+speaker+'/'+tr_name+'.lab', "r", encoding="utf-8") as file: - text = file.read() - text = text.replace("{NICKNAME}",'') - text = text.replace("{M#}{F#}",'') - text = text.replace("{M#}{F#}",'') - substring = "{M#}{F#}" - if substring in text: - if tr_name.endswith("a"): - text = text.replace("{M#}{F#}",'') - if tr_name.endswith("b"): - text = text.replace("{M#}{F#}",'') - text = text.replace("#",'') - text = "ZH|" + text + "\n" # - speaker_annos.append(args.out_dir+'/'+speaker+'/'+wav_name+ "|" + speaker + "|" + text) - - - -if __name__ == "__main__": - parent_dir = "./genshin_dataset/" - speaker_names = list(os.walk(parent_dir))[0][1] - parser = argparse.ArgumentParser() - parser.add_argument("--sr", type=int, default=44100, help="sampling rate") - parser.add_argument("--in_dir", type=str, default="./genshin_dataset", help="path to source dir") - parser.add_argument("--out_dir", type=str, default="./genshin_dataset", help="path to target dir") - args = parser.parse_args() - # processs = 8 - processs = cpu_count()-2 if cpu_count() >4 else 1 - pool = Pool(processes=processs) - - for speaker in os.listdir(args.in_dir): - spk_dir = os.path.join(args.in_dir, speaker) - if os.path.isdir(spk_dir): - print(spk_dir) - for _ in tqdm(pool.imap_unordered(process, [(spk_dir, i, args) for i in os.listdir(spk_dir) if i.endswith("wav")])): - pass - for i in os.listdir(spk_dir): - if i.endswith("wav"): - pro=(spk_dir, i, args) - process_text(pro) - if len(speaker_annos) == 0: - print("transcribe error!!!") - with open("./filelists/short_character_anno.list", 'w', encoding='utf-8') as f: - for line in speaker_annos: - f.write(line) - print("transcript file finished.") diff --git a/spaces/dinhminh20521597/OCR_DEMO/configs/textrecog/sar/sar_r31_parallel_decoder_academic.py b/spaces/dinhminh20521597/OCR_DEMO/configs/textrecog/sar/sar_r31_parallel_decoder_academic.py deleted file mode 100644 index 983378118b4d589f531a7f401a06d238966a45d4..0000000000000000000000000000000000000000 --- a/spaces/dinhminh20521597/OCR_DEMO/configs/textrecog/sar/sar_r31_parallel_decoder_academic.py +++ /dev/null @@ -1,33 +0,0 @@ -_base_ = [ - '../../_base_/default_runtime.py', '../../_base_/recog_models/sar.py', - '../../_base_/schedules/schedule_adam_step_5e.py', - '../../_base_/recog_pipelines/sar_pipeline.py', - '../../_base_/recog_datasets/ST_SA_MJ_real_train.py', - '../../_base_/recog_datasets/academic_test.py' -] - -train_list = {{_base_.train_list}} -test_list = {{_base_.test_list}} - -train_pipeline = {{_base_.train_pipeline}} -test_pipeline = {{_base_.test_pipeline}} - -data = dict( - samples_per_gpu=64, - workers_per_gpu=2, - val_dataloader=dict(samples_per_gpu=1), - test_dataloader=dict(samples_per_gpu=1), - train=dict( - type='UniformConcatDataset', - datasets=train_list, - pipeline=train_pipeline), - val=dict( - type='UniformConcatDataset', - datasets=test_list, - pipeline=test_pipeline), - test=dict( - type='UniformConcatDataset', - datasets=test_list, - pipeline=test_pipeline)) - -evaluation = dict(interval=1, metric='acc') diff --git a/spaces/dirge/voicevox/voicevox_engine/utility/mutex_utility.py b/spaces/dirge/voicevox/voicevox_engine/utility/mutex_utility.py deleted file mode 100644 index 09d8cb9680f71758018bffe82838a763ca46fe31..0000000000000000000000000000000000000000 --- a/spaces/dirge/voicevox/voicevox_engine/utility/mutex_utility.py +++ /dev/null @@ -1,15 +0,0 @@ -import threading - - -def mutex_wrapper(lock: threading.Lock): - def wrap(f): - def func(*args, **kw): - lock.acquire() - try: - return f(*args, **kw) - finally: - lock.release() - - return func - - return wrap diff --git a/spaces/dongyi/MMFS/tools/test_export_image_groups.py b/spaces/dongyi/MMFS/tools/test_export_image_groups.py deleted file mode 100644 index 4df7088428dbe3463ecd8301d0ff69f81bf5ebff..0000000000000000000000000000000000000000 --- a/spaces/dongyi/MMFS/tools/test_export_image_groups.py +++ /dev/null @@ -1,154 +0,0 @@ -import os, sys, cv2, subprocess -import yaml, shutil -from tqdm import tqdm -from multiprocessing import Process, Pool -from tools.concat_images_or_videos import ConcatImages, default_concat_mapping, default_model_postfix_mapping - -def revise_yaml_config(yaml_folder, save_img_folder, save_yaml_folder, gpu, test_folder): - - yaml_config = yaml.safe_load(open(yaml_folder, "r")) - yaml_config["common"]["gpu_ids"] = list(map(int, gpu.split())) - yaml_config["testing"]["results_dir"] = save_img_folder - yaml_config["testing"]["image_format"] = "png" - yaml_config["dataset"]["data_type"] = ['paired'] - t_yaml_folder = os.path.join(save_yaml_folder, yaml_folder.split('/')[-1]) - yaml.safe_dump(yaml_config, open(t_yaml_folder, "w"), default_flow_style=False) - - -def run_test_single_process(gpu_ids_list, yaml_list, test_folder_list, ckpt_list): - - assert len(gpu_ids_list) == len(yaml_list) - assert len(gpu_ids_list) == len(test_folder_list) - assert len(gpu_ids_list) == len(ckpt_list) - - total_single_process_number = len(gpu_ids_list) - - for process_idx in range(total_single_process_number): - t_gpu_ids = gpu_ids_list[process_idx] - t_yaml = yaml_list[process_idx] - t_test_folder = test_folder_list[process_idx] - t_ckpt = ckpt_list[process_idx] - - print("python3 -u test.py " + \ - " --cfg_file " + t_yaml + \ - " --test_folder " + t_test_folder + \ - " --ckpt " + t_ckpt) - p = subprocess.Popen("python3 -u test.py " + \ - " --cfg_file " + t_yaml + \ - " --test_folder " + t_test_folder + \ - " --ckpt " + t_ckpt, shell=True) - p.wait() - print("finish " + t_yaml.split('/')[-1]) - - -def concat_images(save_folder_list): - # Concat images - concat_images_manager = ConcatImages() - total_save_folder_number = len(save_folder_list) - - print("export_list:\n", "\n".join(map(str, export_list))) - t_export_name_list = [str(export_list[t_idx]["export_name"]) for t_idx in range(total_save_folder_number)] - t_export_name_list = [concat_ori_image_name] + t_export_name_list - - concat_ori_image_path_list = list(os.listdir(concat_ori_image_path)) - - t_model_type_list = [] - for t_idx in range(total_save_folder_number): - t_model_type_list += [str(export_list[t_idx]["export_type"])] - - for t_idx, t_ori_image_name in enumerate(tqdm(concat_ori_image_path_list, total=len(concat_ori_image_path_list))): - t_ori_image_prefix_name = t_ori_image_name.split(".")[0] - t_image_path_list = [] - - for save_folder_idx in range(total_save_folder_number): - t_image_name = t_ori_image_prefix_name + default_model_postfix_mapping[t_model_type_list[save_folder_idx]] - t_image_path = os.path.join(save_folder_list[save_folder_idx], t_image_name) - assert os.path.isfile(t_image_path), "No file in {}".format("%s"%t_image_path) - t_image_path_list += [t_image_path] - - t_image_path_list = [os.path.join(concat_ori_image_path, t_ori_image_name)] + t_image_path_list - assert len(t_image_path_list) == len(t_export_name_list), "Error image({}) and text({}) number".format("%d"%len(t_image_path_list), "%d"%len(t_export_name_list)) - - concat_images_manager.run_concat_images(concat_save_image_path, t_image_path_list, t_export_name_list, *default_concat_mapping[total_save_folder_number + 1]) - - -def run(): - # init env - os.makedirs(save_yaml_folder, exist_ok=True) - shutil.rmtree(save_yaml_folder) - os.makedirs(save_yaml_folder, exist_ok=True) - - t_gpu_list = [[] for t_idx in range(gpu_number)] - t_yaml_folder_list = [[] for t_idx in range(gpu_number)] - t_test_folder_list = [[] for t_idx in range(gpu_number)] - t_ckpt_list = [[] for t_idx in range(gpu_number)] - t_concat_folder_list = [] - - # Load yaml config - export_number = len(export_list) - for t_idx in range(export_number): - t_gpu = str(gpu_ids[t_idx % gpu_number]) - t_yaml_folder = str(export_list[t_idx]["export_yaml"]) - t_ckpt = str(export_list[t_idx]["export_ckpt"]) - - t_test_type = str(export_list[t_idx]["export_test_type"]) - t_test_folder = str(globals()["test_" + t_test_type + "_data_path"]) - t_save_folder_root = str(export_list[t_idx]["export_save_folder"]) - - t_yaml_config = yaml.safe_load(open(t_yaml_folder, "r")) - t_export_exp_name = t_yaml_config["common"]["name"] - - t_save_folder = os.path.join(t_save_folder_root, t_export_exp_name, t_export_exp_name) - - t_flag_concat_images = export_list[t_idx]["export_flag_concat"] - if t_flag_concat_images == True: - t_concat_folder_list += [t_save_folder] - - t_generate = export_list[t_idx]["export_flag_generate"] - - assert t_generate == True or t_generate == False, "Error generate type..." - print(t_generate, type(t_generate)) - if t_generate == True: - os.makedirs(t_save_folder, exist_ok=True) - shutil.rmtree(t_save_folder) - os.makedirs(t_save_folder, exist_ok=True) - revise_yaml_config(t_yaml_folder, t_save_folder_root, save_yaml_folder, t_gpu, t_test_folder) - t_yaml_folder = os.path.join(save_yaml_folder, t_yaml_folder.split('/')[-1]) - - t_gpu_list[t_idx % gpu_number] += [t_gpu] - t_yaml_folder_list[t_idx % gpu_number] += [t_yaml_folder] - t_test_folder_list[t_idx % gpu_number] += [t_test_folder] - t_ckpt_list[t_idx % gpu_number] += [t_ckpt] - else: - assert os.path.isdir(t_save_folder) and len(os.listdir(t_save_folder)) != 0, "No files in {}".format("%s"%t_save_folder) - - print("begin export images...") - # Multi-process inference - p = Pool(export_number) - for gpu_idx in range(gpu_number): - p.apply_async(run_test_single_process, args=(t_gpu_list[gpu_idx], - t_yaml_folder_list[gpu_idx], - t_test_folder_list[gpu_idx], - t_ckpt_list[gpu_idx])) - - p.close() - p.join() - print("finish export images...") - - if flag_concat_images == True: - print("begin concat {} images...".format("%d"%(len(t_concat_folder_list) + 1))) - concat_images(t_concat_folder_list) - print("finish concat images...") - - print("finish all process steps...") - - -if __name__ == '__main__': - - yaml_path = sys.argv[1] - - config = yaml.safe_load(open(yaml_path)) - for k, v in config.items(): - globals()[k] = v - - run() diff --git a/spaces/duanzhihua/AI-ChatGPT/README.md b/spaces/duanzhihua/AI-ChatGPT/README.md deleted file mode 100644 index 385782f84e2243e1cee2b080c407d45caa809fe4..0000000000000000000000000000000000000000 --- a/spaces/duanzhihua/AI-ChatGPT/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: AI ChatGPT -emoji: 📚 -colorFrom: green -colorTo: pink -sdk: gradio -sdk_version: 3.23.0 -app_file: app.py -pinned: false -license: mit ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/eatcosmos/hackaprompt/tests/test_evaluator.py b/spaces/eatcosmos/hackaprompt/tests/test_evaluator.py deleted file mode 100644 index cfb8842911f2668aa0b057c8636f0b1574921533..0000000000000000000000000000000000000000 --- a/spaces/eatcosmos/hackaprompt/tests/test_evaluator.py +++ /dev/null @@ -1,46 +0,0 @@ -import os - -from hackaprompt.completers import get_completer -from hackaprompt.evaluator import get_evaluator -from hackaprompt.utils import contains_blacklisted_word, random_alphanumeric, remove_whitespace - - -def test_remove_whitespace(): - test_string = "\n\n Hello, world! \n\n" - output = remove_whitespace(test_string) - expected_output = "Hello, world!" - assert expected_output == output - - -def test_random_alphanumeric(): - output = random_alphanumeric(k=6) - assert isinstance(output, str) - assert len(output) == 6 - - -def test_remove_whitespace_alphanumeric(): - expected_output = random_alphanumeric(k=6) - output = remove_whitespace(expected_output) - assert expected_output == output - - -def test_contains_blacklisted_word(): - assert contains_blacklisted_word("this string contains the word PWNED") is True - assert contains_blacklisted_word("this string contains the word PawNeD") is True - assert contains_blacklisted_word("this string will pass the blacklist test.") is False - assert contains_blacklisted_word("this is also valid p-wn-ed.") is False - - -def test_all_levels_fail(): - """This test runs every level, for every completer. The result is always incorrect, however it should never throw an error.""" - openai_api_key = os.getenv("OPENAI_API_KEY") - for level in range(8): - for model in ["gpt-3.5-turbo", "FlanT5-XXL"]: - completer = get_completer(model, openai_api_key) - evaluator = get_evaluator(level=level, completer=completer) - prompt = """This is a dummy prompt that will fail the evaluation.""" - response = evaluator.evaluate(prompt) - assert response.correct is False - - # we shouldn't have hit an error - assert response.error is False diff --git a/spaces/edenehuyh/Demo_RealESRGAN/app.py b/spaces/edenehuyh/Demo_RealESRGAN/app.py deleted file mode 100644 index 0e72cd03c81f6812543cef6563f099e43fc228df..0000000000000000000000000000000000000000 --- a/spaces/edenehuyh/Demo_RealESRGAN/app.py +++ /dev/null @@ -1,224 +0,0 @@ -import gradio as gr -import cv2 -import numpy -import os -import random -from basicsr.archs.rrdbnet_arch import RRDBNet -from basicsr.utils.download_util import load_file_from_url - -from realesrgan import RealESRGANer -from realesrgan.archs.srvgg_arch import SRVGGNetCompact - - -last_file = None -img_mode = "RGBA" - - -def realesrgan(img, model_name, denoise_strength, face_enhance, outscale): - """Real-ESRGAN function to restore (and upscale) images. - """ - if not img: - return - - # Define model parameters - if model_name == 'RealESRGAN_x4plus': # x4 RRDBNet model - model = RRDBNet(num_in_ch=3, num_out_ch=3, num_feat=64, num_block=23, num_grow_ch=32, scale=4) - netscale = 4 - file_url = ['https://github.com/xinntao/Real-ESRGAN/releases/download/v0.1.0/RealESRGAN_x4plus.pth'] - elif model_name == 'RealESRNet_x4plus': # x4 RRDBNet model - model = RRDBNet(num_in_ch=3, num_out_ch=3, num_feat=64, num_block=23, num_grow_ch=32, scale=4) - netscale = 4 - file_url = ['https://github.com/xinntao/Real-ESRGAN/releases/download/v0.1.1/RealESRNet_x4plus.pth'] - elif model_name == 'RealESRGAN_x4plus_anime_6B': # x4 RRDBNet model with 6 blocks - model = RRDBNet(num_in_ch=3, num_out_ch=3, num_feat=64, num_block=6, num_grow_ch=32, scale=4) - netscale = 4 - file_url = ['https://github.com/xinntao/Real-ESRGAN/releases/download/v0.2.2.4/RealESRGAN_x4plus_anime_6B.pth'] - elif model_name == 'RealESRGAN_x2plus': # x2 RRDBNet model - model = RRDBNet(num_in_ch=3, num_out_ch=3, num_feat=64, num_block=23, num_grow_ch=32, scale=2) - netscale = 2 - file_url = ['https://github.com/xinntao/Real-ESRGAN/releases/download/v0.2.1/RealESRGAN_x2plus.pth'] - elif model_name == 'realesr-general-x4v3': # x4 VGG-style model (S size) - model = SRVGGNetCompact(num_in_ch=3, num_out_ch=3, num_feat=64, num_conv=32, upscale=4, act_type='prelu') - netscale = 4 - file_url = [ - 'https://github.com/xinntao/Real-ESRGAN/releases/download/v0.2.5.0/realesr-general-wdn-x4v3.pth', - 'https://github.com/xinntao/Real-ESRGAN/releases/download/v0.2.5.0/realesr-general-x4v3.pth' - ] - - # Determine model paths - model_path = os.path.join('weights', model_name + '.pth') - if not os.path.isfile(model_path): - ROOT_DIR = os.path.dirname(os.path.abspath(__file__)) - for url in file_url: - # model_path will be updated - model_path = load_file_from_url( - url=url, model_dir=os.path.join(ROOT_DIR, 'weights'), progress=True, file_name=None) - - # Use dni to control the denoise strength - dni_weight = None - if model_name == 'realesr-general-x4v3' and denoise_strength != 1: - wdn_model_path = model_path.replace('realesr-general-x4v3', 'realesr-general-wdn-x4v3') - model_path = [model_path, wdn_model_path] - dni_weight = [denoise_strength, 1 - denoise_strength] - - # Restorer Class - upsampler = RealESRGANer( - scale=netscale, - model_path=model_path, - dni_weight=dni_weight, - model=model, - tile=0, - tile_pad=10, - pre_pad=10, - half=False, - gpu_id=None - ) - - # Use GFPGAN for face enhancement - if face_enhance: - from gfpgan import GFPGANer - face_enhancer = GFPGANer( - model_path='https://github.com/TencentARC/GFPGAN/releases/download/v1.3.0/GFPGANv1.3.pth', - upscale=outscale, - arch='clean', - channel_multiplier=2, - bg_upsampler=upsampler) - - # Convert the input PIL image to cv2 image, so that it can be processed by realesrgan - cv_img = numpy.array(img) - img = cv2.cvtColor(cv_img, cv2.COLOR_RGBA2BGRA) - - # Apply restoration - try: - if face_enhance: - _, _, output = face_enhancer.enhance(img, has_aligned=False, only_center_face=False, paste_back=True) - else: - output, _ = upsampler.enhance(img, outscale=outscale) - except RuntimeError as error: - print('Error', error) - print('If you encounter CUDA out of memory, try to set --tile with a smaller number.') - else: - # Save restored image and return it to the output Image component - if img_mode == 'RGBA': # RGBA images should be saved in png format - extension = 'png' - else: - extension = 'jpg' - - out_filename = f"output_{rnd_string(8)}.{extension}" - cv2.imwrite(out_filename, output) - global last_file - last_file = out_filename - return out_filename - - -def rnd_string(x): - """Returns a string of 'x' random characters - """ - characters = "abcdefghijklmnopqrstuvwxyz_0123456789" - result = "".join((random.choice(characters)) for i in range(x)) - return result - - -def reset(): - """Resets the Image components of the Gradio interface and deletes - the last processed image - """ - global last_file - if last_file: - print(f"Deleting {last_file} ...") - os.remove(last_file) - last_file = None - return gr.update(value=None), gr.update(value=None) - - -def has_transparency(img): - """This function works by first checking to see if a "transparency" property is defined - in the image's info -- if so, we return "True". Then, if the image is using indexed colors - (such as in GIFs), it gets the index of the transparent color in the palette - (img.info.get("transparency", -1)) and checks if it's used anywhere in the canvas - (img.getcolors()). If the image is in RGBA mode, then presumably it has transparency in - it, but it double-checks by getting the minimum and maximum values of every color channel - (img.getextrema()), and checks if the alpha channel's smallest value falls below 255. - https://stackoverflow.com/questions/43864101/python-pil-check-if-image-is-transparent - """ - if img.info.get("transparency", None) is not None: - return True - if img.mode == "P": - transparent = img.info.get("transparency", -1) - for _, index in img.getcolors(): - if index == transparent: - return True - elif img.mode == "RGBA": - extrema = img.getextrema() - if extrema[3][0] < 255: - return True - return False - - -def image_properties(img): - """Returns the dimensions (width and height) and color mode of the input image and - also sets the global img_mode variable to be used by the realesrgan function - """ - global img_mode - if img: - if has_transparency(img): - img_mode = "RGBA" - else: - img_mode = "RGB" - properties = f"Width: {img.size[0]}, Height: {img.size[1]} | Color Mode: {img_mode}" - return properties - - -def main(): - # Gradio Interface - with gr.Blocks(title="Real-ESRGAN Gradio Demo", theme="dark") as demo: - - gr.Markdown( - """#
      Real-ESRGAN Demo for Image Restoration and Upscaling
      -
      - Huỳnh Công Chánh *builder* and Please visit the [Real-ESRGAN GitHub page](https://github.com/xinntao/Real-ESRGAN) for detailed information about the project. - """ - ) - - with gr.Accordion("Options/Parameters"): - with gr.Row(): - model_name = gr.Dropdown(label="Real-ESRGAN inference model to be used", - choices=["RealESRGAN_x4plus", "RealESRNet_x4plus", "RealESRGAN_x4plus_anime_6B", - "RealESRGAN_x2plus", "realesr-general-x4v3"], - value="realesr-general-x4v3", show_label=True) - denoise_strength = gr.Slider(label="Denoise Strength (Used only with the realesr-general-x4v3 model)", - minimum=0, maximum=1, step=0.1, value=0.5) - outscale = gr.Slider(label="Image Upscaling Factor", - minimum=1, maximum=10, step=1, value=2, show_label=True) - face_enhance = gr.Checkbox(label="Face Enhancement using GFPGAN (Doesn't work for anime images)", - value=False, show_label=True) - - with gr.Row(): - with gr.Group(): - input_image = gr.Image(label="Source Image", type="pil", image_mode="RGBA") - input_image_properties = gr.Textbox(label="Image Properties", max_lines=1) - output_image = gr.Image(label="Restored Image", image_mode="RGBA") - with gr.Row(): - restore_btn = gr.Button("Restore Image") - reset_btn = gr.Button("Reset") - - # Event listeners: - input_image.change(fn=image_properties, inputs=input_image, outputs=input_image_properties) - restore_btn.click(fn=realesrgan, - inputs=[input_image, model_name, denoise_strength, face_enhance, outscale], - outputs=output_image) - reset_btn.click(fn=reset, inputs=[], outputs=[output_image, input_image]) - # reset_btn.click(None, inputs=[], outputs=[input_image], _js="() => (null)\n") - # Undocumented method to clear a component's value using Javascript - - gr.Markdown( - """*Please note that support for animated GIFs is not yet implemented. Should an animated GIF is chosen for restoration, - the demo will output only the first frame saved in PNG format (to preserve probable transparency).* - """ - ) - - demo.launch() - - -if __name__ == "__main__": - main() \ No newline at end of file diff --git a/spaces/ehristoforu/Hwhswj/app.py b/spaces/ehristoforu/Hwhswj/app.py deleted file mode 100644 index c2d9cb28519daacdecaded2b800ac5e4fb595122..0000000000000000000000000000000000000000 --- a/spaces/ehristoforu/Hwhswj/app.py +++ /dev/null @@ -1,15 +0,0 @@ -import requests - -API_URL = "https://api-inference.huggingface.co/models/stablediffusionapi/anything-v5" -headers = {"Authorization": "Bearer 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"} - -def query(payload): - response = requests.post(API_URL, headers=headers, json=payload) - return response.content -image_bytes = query({ - "inputs": "Astronaut riding a horse", -}) -# You can access the image with PIL.Image for example -import io -from PIL import Image -image = Image.open(io.BytesIO(image_bytes)) \ No newline at end of file diff --git a/spaces/ericjuliantooo/paraphrase/src/st_style.py b/spaces/ericjuliantooo/paraphrase/src/st_style.py deleted file mode 100644 index 5d2bc9e635c9744f77cbdb9998a4ff4c2a37c431..0000000000000000000000000000000000000000 --- a/spaces/ericjuliantooo/paraphrase/src/st_style.py +++ /dev/null @@ -1,42 +0,0 @@ -button_style = """ - -""" - - -def apply_prod_style(st): - return st.markdown(style, unsafe_allow_html=True) \ No newline at end of file diff --git a/spaces/evaluate-metric/code_eval/README.md b/spaces/evaluate-metric/code_eval/README.md deleted file mode 100644 index ce81b7c642de6c468ce7647af922148372f42f68..0000000000000000000000000000000000000000 --- a/spaces/evaluate-metric/code_eval/README.md +++ /dev/null @@ -1,146 +0,0 @@ ---- -title: Code Eval -emoji: 🤗 -colorFrom: blue -colorTo: red -sdk: gradio -sdk_version: 3.19.1 -app_file: app.py -pinned: false -tags: -- evaluate -- metric -description: >- - This metric implements the evaluation harness for the HumanEval problem solving dataset - described in the paper "Evaluating Large Language Models Trained on Code" - (https://arxiv.org/abs/2107.03374). ---- - -# Metric Card for Code Eval - -## Metric description - -The CodeEval metric estimates the pass@k metric for code synthesis. - -It implements the evaluation harness for the HumanEval problem solving dataset described in the paper ["Evaluating Large Language Models Trained on Code"](https://arxiv.org/abs/2107.03374). - - -## How to use - -The Code Eval metric calculates how good are predictions given a set of references. Its arguments are: - -`predictions`: a list of candidates to evaluate. Each candidate should be a list of strings with several code candidates to solve the problem. - -`references`: a list with a test for each prediction. Each test should evaluate the correctness of a code candidate. - -`k`: number of code candidates to consider in the evaluation. The default value is `[1, 10, 100]`. - -`num_workers`: the number of workers used to evaluate the candidate programs (The default value is `4`). - -`timeout`: The maximum time taken to produce a prediction before it is considered a "timeout". The default value is `3.0` (i.e. 3 seconds). - -```python -from evaluate import load -code_eval = load("code_eval") -test_cases = ["assert add(2,3)==5"] -candidates = [["def add(a,b): return a*b", "def add(a, b): return a+b"]] -pass_at_k, results = code_eval.compute(references=test_cases, predictions=candidates, k=[1, 2]) -``` - -N.B. -This metric exists to run untrusted model-generated code. Users are strongly encouraged not to do so outside of a robust security sandbox. Before running this metric and once you've taken the necessary precautions, you will need to set the `HF_ALLOW_CODE_EVAL` environment variable. Use it at your own risk: -```python -import os -os.environ["HF_ALLOW_CODE_EVAL"] = "1"` -``` - -## Output values - -The Code Eval metric outputs two things: - -`pass_at_k`: a dictionary with the pass rates for each k value defined in the arguments. - -`results`: a dictionary with granular results of each unit test. - -### Values from popular papers -The [original CODEX paper](https://arxiv.org/pdf/2107.03374.pdf) reported that the CODEX-12B model had a pass@k score of 28.8% at `k=1`, 46.8% at `k=10` and 72.3% at `k=100`. However, since the CODEX model is not open source, it is hard to verify these numbers. - - - -## Examples - -Full match at `k=1`: - -```python -from evaluate import load -code_eval = load("code_eval") -test_cases = ["assert add(2,3)==5"] -candidates = [["def add(a, b): return a+b"]] -pass_at_k, results = code_eval.compute(references=test_cases, predictions=candidates, k=[1]) -print(pass_at_k) -{'pass@1': 1.0} -``` - -No match for k = 1: - -```python -from evaluate import load -code_eval = load("code_eval") -test_cases = ["assert add(2,3)==5"] -candidates = [["def add(a,b): return a*b"]] -pass_at_k, results = code_eval.compute(references=test_cases, predictions=candidates, k=[1]) -print(pass_at_k) -{'pass@1': 0.0} -``` - -Partial match at k=1, full match at k=2: - -```python -from evaluate import load -code_eval = load("code_eval") -test_cases = ["assert add(2,3)==5"] -candidates = [["def add(a, b): return a+b", "def add(a,b): return a*b"]] -pass_at_k, results = code_eval.compute(references=test_cases, predictions=candidates, k=[1, 2]) -print(pass_at_k) -{'pass@1': 0.5, 'pass@2': 1.0} -``` - -## Limitations and bias - -As per the warning included in the metric code itself: -> This program exists to execute untrusted model-generated code. Although it is highly unlikely that model-generated code will do something overtly malicious in response to this test suite, model-generated code may act destructively due to a lack of model capability or alignment. Users are strongly encouraged to sandbox this evaluation suite so that it does not perform destructive actions on their host or network. For more information on how OpenAI sandboxes its code, see the accompanying paper. Once you have read this disclaimer and taken appropriate precautions, uncomment the following line and proceed at your own risk: - -More information about the limitations of the code can be found on the [Human Eval Github repository](https://github.com/openai/human-eval). - -## Citation - -```bibtex -@misc{chen2021evaluating, - title={Evaluating Large Language Models Trained on Code}, - author={Mark Chen and Jerry Tworek and Heewoo Jun and Qiming Yuan \ -and Henrique Ponde de Oliveira Pinto and Jared Kaplan and Harri Edwards \ -and Yuri Burda and Nicholas Joseph and Greg Brockman and Alex Ray \ -and Raul Puri and Gretchen Krueger and Michael Petrov and Heidy Khlaaf \ -and Girish Sastry and Pamela Mishkin and Brooke Chan and Scott Gray \ -and Nick Ryder and Mikhail Pavlov and Alethea Power and Lukasz Kaiser \ -and Mohammad Bavarian and Clemens Winter and Philippe Tillet \ -and Felipe Petroski Such and Dave Cummings and Matthias Plappert \ -and Fotios Chantzis and Elizabeth Barnes and Ariel Herbert-Voss \ -and William Hebgen Guss and Alex Nichol and Alex Paino and Nikolas Tezak \ -and Jie Tang and Igor Babuschkin and Suchir Balaji and Shantanu Jain \ -and William Saunders and Christopher Hesse and Andrew N. Carr \ -and Jan Leike and Josh Achiam and Vedant Misra and Evan Morikawa \ -and Alec Radford and Matthew Knight and Miles Brundage and Mira Murati \ -and Katie Mayer and Peter Welinder and Bob McGrew and Dario Amodei \ -and Sam McCandlish and Ilya Sutskever and Wojciech Zaremba}, - year={2021}, - eprint={2107.03374}, - archivePrefix={arXiv}, - primaryClass={cs.LG} -} -``` - -## Further References - -- [Human Eval Github repository](https://github.com/openai/human-eval) -- [OpenAI Codex website](https://openai.com/blog/openai-codex/) diff --git a/spaces/evaluate-metric/precision/precision.py b/spaces/evaluate-metric/precision/precision.py deleted file mode 100644 index 4b35aa7e44f262ec9bbc83500bc55471beb465bd..0000000000000000000000000000000000000000 --- a/spaces/evaluate-metric/precision/precision.py +++ /dev/null @@ -1,145 +0,0 @@ -# Copyright 2020 The HuggingFace Datasets Authors and the current dataset script contributor. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. -"""Precision metric.""" - -import datasets -from sklearn.metrics import precision_score - -import evaluate - - -_DESCRIPTION = """ -Precision is the fraction of correctly labeled positive examples out of all of the examples that were labeled as positive. It is computed via the equation: -Precision = TP / (TP + FP) -where TP is the True positives (i.e. the examples correctly labeled as positive) and FP is the False positive examples (i.e. the examples incorrectly labeled as positive). -""" - - -_KWARGS_DESCRIPTION = """ -Args: - predictions (`list` of `int`): Predicted class labels. - references (`list` of `int`): Actual class labels. - labels (`list` of `int`): The set of labels to include when `average` is not set to `'binary'`. If `average` is `None`, it should be the label order. Labels present in the data can be excluded, for example to calculate a multiclass average ignoring a majority negative class. Labels not present in the data will result in 0 components in a macro average. For multilabel targets, labels are column indices. By default, all labels in `predictions` and `references` are used in sorted order. Defaults to None. - pos_label (`int`): The class to be considered the positive class, in the case where `average` is set to `binary`. Defaults to 1. - average (`string`): This parameter is required for multiclass/multilabel targets. If set to `None`, the scores for each class are returned. Otherwise, this determines the type of averaging performed on the data. Defaults to `'binary'`. - - - 'binary': Only report results for the class specified by `pos_label`. This is applicable only if the classes found in `predictions` and `references` are binary. - - 'micro': Calculate metrics globally by counting the total true positives, false negatives and false positives. - - 'macro': Calculate metrics for each label, and find their unweighted mean. This does not take label imbalance into account. - - 'weighted': Calculate metrics for each label, and find their average weighted by support (the number of true instances for each label). This alters `'macro'` to account for label imbalance. This option can result in an F-score that is not between precision and recall. - - 'samples': Calculate metrics for each instance, and find their average (only meaningful for multilabel classification). - sample_weight (`list` of `float`): Sample weights Defaults to None. - zero_division (`int` or `string`): Sets the value to return when there is a zero division. Defaults to 'warn'. - - - 0: Returns 0 when there is a zero division. - - 1: Returns 1 when there is a zero division. - - 'warn': Raises warnings and then returns 0 when there is a zero division. - -Returns: - precision (`float` or `array` of `float`): Precision score or list of precision scores, depending on the value passed to `average`. Minimum possible value is 0. Maximum possible value is 1. Higher values indicate that fewer negative examples were incorrectly labeled as positive, which means that, generally, higher scores are better. - -Examples: - - Example 1-A simple binary example - >>> precision_metric = evaluate.load("precision") - >>> results = precision_metric.compute(references=[0, 1, 0, 1, 0], predictions=[0, 0, 1, 1, 0]) - >>> print(results) - {'precision': 0.5} - - Example 2-The same simple binary example as in Example 1, but with `pos_label` set to `0`. - >>> precision_metric = evaluate.load("precision") - >>> results = precision_metric.compute(references=[0, 1, 0, 1, 0], predictions=[0, 0, 1, 1, 0], pos_label=0) - >>> print(round(results['precision'], 2)) - 0.67 - - Example 3-The same simple binary example as in Example 1, but with `sample_weight` included. - >>> precision_metric = evaluate.load("precision") - >>> results = precision_metric.compute(references=[0, 1, 0, 1, 0], predictions=[0, 0, 1, 1, 0], sample_weight=[0.9, 0.5, 3.9, 1.2, 0.3]) - >>> print(results) - {'precision': 0.23529411764705882} - - Example 4-A multiclass example, with different values for the `average` input. - >>> predictions = [0, 2, 1, 0, 0, 1] - >>> references = [0, 1, 2, 0, 1, 2] - >>> results = precision_metric.compute(predictions=predictions, references=references, average='macro') - >>> print(results) - {'precision': 0.2222222222222222} - >>> results = precision_metric.compute(predictions=predictions, references=references, average='micro') - >>> print(results) - {'precision': 0.3333333333333333} - >>> results = precision_metric.compute(predictions=predictions, references=references, average='weighted') - >>> print(results) - {'precision': 0.2222222222222222} - >>> results = precision_metric.compute(predictions=predictions, references=references, average=None) - >>> print([round(res, 2) for res in results['precision']]) - [0.67, 0.0, 0.0] -""" - - -_CITATION = """ -@article{scikit-learn, - title={Scikit-learn: Machine Learning in {P}ython}, - author={Pedregosa, F. and Varoquaux, G. and Gramfort, A. and Michel, V. - and Thirion, B. and Grisel, O. and Blondel, M. and Prettenhofer, P. - and Weiss, R. and Dubourg, V. and Vanderplas, J. and Passos, A. and - Cournapeau, D. and Brucher, M. and Perrot, M. and Duchesnay, E.}, - journal={Journal of Machine Learning Research}, - volume={12}, - pages={2825--2830}, - year={2011} -} -""" - - -@evaluate.utils.file_utils.add_start_docstrings(_DESCRIPTION, _KWARGS_DESCRIPTION) -class Precision(evaluate.Metric): - def _info(self): - return evaluate.MetricInfo( - description=_DESCRIPTION, - citation=_CITATION, - inputs_description=_KWARGS_DESCRIPTION, - features=datasets.Features( - { - "predictions": datasets.Sequence(datasets.Value("int32")), - "references": datasets.Sequence(datasets.Value("int32")), - } - if self.config_name == "multilabel" - else { - "predictions": datasets.Value("int32"), - "references": datasets.Value("int32"), - } - ), - reference_urls=["https://scikit-learn.org/stable/modules/generated/sklearn.metrics.precision_score.html"], - ) - - def _compute( - self, - predictions, - references, - labels=None, - pos_label=1, - average="binary", - sample_weight=None, - zero_division="warn", - ): - score = precision_score( - references, - predictions, - labels=labels, - pos_label=pos_label, - average=average, - sample_weight=sample_weight, - zero_division=zero_division, - ) - return {"precision": float(score) if score.size == 1 else score} diff --git a/spaces/evaluate-metric/squad_v2/app.py b/spaces/evaluate-metric/squad_v2/app.py deleted file mode 100644 index 6e6ccafc20ab508661ed8a2964e51772e7012fc4..0000000000000000000000000000000000000000 --- a/spaces/evaluate-metric/squad_v2/app.py +++ /dev/null @@ -1,6 +0,0 @@ -import evaluate -from evaluate.utils import launch_gradio_widget - - -module = evaluate.load("squad_v2") -launch_gradio_widget(module) diff --git a/spaces/failfast/nextjs-hf-spaces/src/components/base/options.tsx b/spaces/failfast/nextjs-hf-spaces/src/components/base/options.tsx deleted file mode 100644 index 734e5175ea7ec7f668fd19a95159051fcc972a2e..0000000000000000000000000000000000000000 --- a/spaces/failfast/nextjs-hf-spaces/src/components/base/options.tsx +++ /dev/null @@ -1,56 +0,0 @@ -import { KeyboardArrowDown, KeyboardArrowUp } from "@mui/icons-material"; -import { - Card, - CardContent, - CardHeader, - Collapse, - IconButton, - Stack, -} from "@mui/material"; -import { ReactElement, useState } from "react"; - -type OptionsProps = { - children: ReactElement | ReactElement[]; - opened?: boolean; -}; - -/** - * Define options that are hidden by default - * - * @param props OptionsProps - * @param props.opened boolean - Are the options visible or not (default) - * - * @returns Options - */ -export default function Options(props: OptionsProps) { - const { children, opened = false } = props; - - const [showOptions, setShowOptions] = useState(opened); - - const handleShowOptions = () => setShowOptions(!showOptions); - - return ( - <> - - - {showOptions ? : } - - } - sx={{ - cursor: "pointer", - }} - titleTypographyProps={{ variant: "h6", sx: { fontSize: "1em" } }} - /> - - - {children} - - - - - ); -} diff --git a/spaces/falterWliame/Face_Mask_Detection/HACK AUTODATA 9.49 Crack FULL.md b/spaces/falterWliame/Face_Mask_Detection/HACK AUTODATA 9.49 Crack FULL.md deleted file mode 100644 index 8490873f87606b78c8f2a458c086c596dd77a5e1..0000000000000000000000000000000000000000 --- a/spaces/falterWliame/Face_Mask_Detection/HACK AUTODATA 9.49 Crack FULL.md +++ /dev/null @@ -1,6 +0,0 @@ -

      HACK AUTODATA 9.49 Crack FULL


      DOWNLOAD ✦✦✦ https://urlca.com/2uDcN5



      -
      -Abaqus 6.12 products, license servers, and documentation servers are not ... rapidshare, lumfile, netload, uploaded and torrent with keygen, crack and ... 1fdad05405
      -
      -
      -

      diff --git a/spaces/fatiXbelha/sd/Como usar o WhatsApp Business para comunicar com seus clientes.md b/spaces/fatiXbelha/sd/Como usar o WhatsApp Business para comunicar com seus clientes.md deleted file mode 100644 index 2598583cdf9e9d98f5075053a108e0f8908d4a87..0000000000000000000000000000000000000000 --- a/spaces/fatiXbelha/sd/Como usar o WhatsApp Business para comunicar com seus clientes.md +++ /dev/null @@ -1,163 +0,0 @@ -
      -

      Baixar WhatsApp Business APK: O que é e como funciona

      -

      O WhatsApp é um dos aplicativos de mensagens mais populares do mundo, com mais de 2 bilhões de usuários. Mas você sabia que ele também tem uma versão para negócios? O WhatsApp Business é um aplicativo gratuito que foi criado para facilitar a comunicação entre as empresas e seus clientes. Neste artigo, vamos explicar o que é o WhatsApp Business, quais são as suas principais diferenças, benefícios e funcionalidades em relação ao WhatsApp comum, e como você pode baixar e usar o WhatsApp Business APK no seu celular.

      -

      baixar whatsapp business apk


      Download ❤❤❤ https://urllie.com/2uNCMU



      -

      O que é o WhatsApp Business

      -

      O WhatsApp Business é uma ferramenta para as empresas se conectarem com seus clientes pelo WhatsApp. Ele é baseado no WhatsApp Messenger e inclui todos os recursos que você já conhece, como a possibilidade de enviar multimídia, fazer chamadas gratuitas*, enviar mensagens internacionais gratuitas*, participar de grupos, receber mensagens offline e muito mais.

      -

      Existem duas formas de usar o WhatsApp Business: o aplicativo WhatsApp Business e a plataforma WhatsApp Business. O aplicativo é para pequenas empresas que gerenciam pessoalmente as conversas com os clientes. Para começar a usar o aplicativo, basta baixá-lo e criar um perfil para o seu negócio. A plataforma é para médias e grandes empresas que se comunicam com os clientes em escala por meio de acesso programático. O WhatsApp Business pode ajudar você a melhorar a visibilidade, automatizar a comunicação e organizar o seu fluxo de trabalho.

      -

      As principais diferenças entre o WhatsApp e o WhatsApp Business

      -

      A principal diferença entre o WhatsApp e o WhatsApp Business é que as empresas têm um perfil verificado e mais completo no WhatsApp Business, para que os clientes possam confiar em quem estão conversando. As empresas que querem levar as suas conversas no WhatsApp a um nível superior podem se beneficiar da API do WhatsApp, que permite integrar o WhatsApp a um CRM e a um software de atendimento ao cliente e desbloquear recursos adicionais como chatbots e mensagens ricas em mídia. Veja a seguir as principais diferenças entre o WhatsApp, o WhatsApp Business e a API do WhatsApp Business.

      - - - - - - - - -
      WhatsAppWhatsApp BusinessAPI do WhatsApp Business
      É a versão do aplicativo de mensagens para uso pessoal.É a versão do aplicativo de mensagens para uso empresarial.É a interface de programação de aplicações que permite integrar o aplicativo de mensagens a outros sistemas.
      Permite criar um perfil com foto, nome e descrição.Permite criar um perfil com foto, nome, descrição, categoria, horário de funcionamento, endereço, link para o site e catálogo.Permite criar um perfil com foto, nome, descrição, categoria, horário de funcionamento, endereço, link para o site e cat álogo.
      Permite enviar e receber mensagens de texto, áudio, vídeo, imagem, documento, contato e localização.Permite enviar e receber mensagens de texto, áudio, vídeo, imagem, documento, contato e localização. Além disso, permite enviar mensagens automáticas de saudação e ausência, etiquetar e filtrar os chats e criar respostas rápidas.Permite enviar e receber mensagens de texto, áudio, vídeo, imagem, documento, contato e localização. Além disso, permite enviar mensagens de notificação (como confirmação de pedido ou entrega), criar chatbots e usar modelos de mensagem pré-aprovados.
      Permite fazer chamadas de voz e vídeo gratuitas*.Permite fazer chamadas de voz e vídeo gratuitas*.Não permite fazer chamadas de voz e vídeo.
      Permite participar de grupos de até 256 pessoas.Permite participar de grupos de até 256 pessoas.Permite participar de grupos de até 256 pessoas. Além disso, permite criar listas de transmissão para enviar mensagens para até 256 contatos ao mesmo tempo.
      Permite usar o WhatsApp Web para acessar o aplicativo pelo computador.Permite usar o WhatsApp Web para acessar o aplicativo pelo computador.Não permite usar o WhatsApp Web. Em vez disso, requer uma integração com um provedor de serviços ou uma plataforma própria para gerenciar as conversas pelo computador.
      -

      *As chamadas são gratuitas desde que você use uma conexão Wi-Fi ou um plano de dados. Caso contrário, podem ser cobradas tarifas pela sua operadora.

      -

      baixar whatsapp business apk atualizado
      -baixar whatsapp business apk para android
      -baixar whatsapp business apk pelo google play
      -baixar whatsapp business apk com backup
      -baixar whatsapp business apk para pc
      -baixar whatsapp business apk modificado
      -baixar whatsapp business apk sem perder conversas
      -baixar whatsapp business apk para iphone
      -baixar whatsapp business apk com duas contas
      -baixar whatsapp business apk 2023
      -baixar whatsapp business apk gratis
      -baixar whatsapp business apk com stickers
      -baixar whatsapp business apk pelo site oficial
      -baixar whatsapp business apk com ferramentas de negócios
      -baixar whatsapp business apk para tablet
      -baixar whatsapp business apk antigo
      -baixar whatsapp business apk com status personalizado
      -baixar whatsapp business apk pelo uptodown
      -baixar whatsapp business apk com número fixo
      -baixar whatsapp business apk 2022
      -baixar whatsapp business apk beta
      -baixar whatsapp business apk com temas
      -baixar whatsapp business apk pelo apkpure
      -baixar whatsapp business apk com catálogo de produtos
      -baixar whatsapp business apk para windows phone
      -baixar whatsapp business apk premium
      -baixar whatsapp business apk com emojis novos
      -baixar whatsapp business apk pelo mediafire
      -baixar whatsapp business apk com mensagens automáticas
      -baixar whatsapp business apk 2021
      -baixar whatsapp business apk pro
      -baixar whatsapp business apk com videochamadas
      -baixar whatsapp business apk pelo mega
      -baixar whatsapp business apk com qr code
      -baixar whatsapp business apk para smart tv
      -baixar whatsapp business apk plus
      -baixar whatsapp business apk com figurinhas animadas
      -baixar whatsapp business apk pelo 4shared
      -baixar whatsapp business apk com etiquetas de clientes
      -baixar whatsapp business apk 2020
      -baixar whatsapp business apk gold
      -baixar whatsapp business apk com chamadas de voz
      -baixar whatsapp business apk pelo softonic
      -baixar whatsapp business apk com perfil profissional
      -baixar whatsapp business apk para celular antigo
      -baixar whatsapp business apk gb
      -baixar whatsapp business apk com modo escuro
      -baixar whatsapp business apk pelo malavida
      -baixar whatsapp business apk com estatísticas de mensagens

      -

      Os principais benefícios do WhatsApp Business

      -

      O WhatsApp Business pode trazer vários benefícios para as empresas que querem se comunicar com os seus clientes pelo WhatsApp. Veja alguns exemplos:

      -
        -
      • Aumentar a confiança dos clientes ao ter um perfil verificado e completo com informações relevantes sobre o seu negócio.
      • -
      • Melhorar a visibilidade do seu negócio ao aparecer nas buscas do WhatsApp e ao compartilhar o seu catálogo de produtos ou serviços com os seus contatos.
      • -
      • Facilitar a comunicação com os clientes ao usar mensagens automáticas, respostas rápidas e etiquetas para agilizar as conversas.
      • -
      • Otimizar o atendimento ao cliente ao integrar o WhatsApp Business a um CRM ou a um software de atendimento ao cliente e ao usar chatbots para responder às dúvidas mais frequentes dos clientes.
      • -
      • Aumentar as vendas ao enviar mensagens de notificação para os clientes sobre o status dos seus pedidos, promoções, novidades e ofertas especiais.
      • -
      -

      As principais funcionalidades do WhatsApp Business

      -

      O WhatsApp Business oferece diversas funcionalidades para as empresas se comunicarem com os seus clientes pelo WhatsApp. Veja algumas delas:

      -
        -
      • Perfil do negócio: Você pode criar um perfil do seu negócio com informações como foto, nome, descrição, categoria, horário de funcionamento, endereço, link para o site e catálogo. O seu perfil pode ser verificado pelo WhatsApp para mostrar aos clientes que você é uma empresa autêntica.
      • -
      • Mensagens automáticas: Você pode configurar mensagens automáticas para saudar os novos clientes ou informar quando você está ausente. Você também pode criar respostas rápidas para as perguntas mais comuns dos clientes e usar atalhos para enviá-las rapidamente.
      • -
      • Etiquetas: Você pode usar etiquetas para organizar os seus chats por categorias como novo cliente, pedido pendente, pagamento recebido, etc. Você também pode filtrar os chats por etiquetas para encontrar facilmente as conversas que você precisa.
      • -
      • Catálogo: Você pode criar um catálogo com os seus produtos ou serviços e compartilhá-lo com os seus contatos pelo WhatsApp. Você pode adicionar fotos, preços, descrições e links para cada item do seu catálogo. Os clientes podem ver o seu catálogo no seu perfil ou nos chats e fazer pedidos diretamente pelo WhatsApp.
      • -
      • Mensagens de notificação: Você pode enviar mensagens de notificação para os clientes sobre o status dos seus pedidos, promoções, novidades e ofertas especiais. Você pode usar modelos de mensagem pré-aprovados pelo WhatsApp ou criar os seus próprios. As mensagens de notificação são cobradas pelo WhatsApp de acordo com a tarifa local.
      • -
      • Chatbots: Você pode usar chatbots para automatizar a comunicação com os clientes pelo WhatsApp. Os chatbots são programas que simulam uma conversa humana e podem responder às dúvidas mais frequentes dos clientes, coletar informações, fazer recomendações, etc. Você pode criar os seus próprios chatbots ou usar os serviços de provedores especializados.
      • -
      • Mensagens ricas em mídia: Você pode enviar e receber mensagens de texto, áudio, vídeo, imagem, documento, contato e localização pelo WhatsApp Business. Você também pode usar recursos como emojis, stickers, GIFs e QR codes para tornar as suas conversas mais divertidas e interativas.
      • -
      -

      Como baixar o WhatsApp Business APK

      -

      O WhatsApp Business APK é um arquivo que permite instalar o aplicativo do WhatsApp Business no seu celular Android. Você pode baixar o WhatsApp Business APK se o aplicativo não estiver disponível na loja oficial do Google Play ou se você quiser ter acesso à versão mais recente do aplicativo antes de ela ser lançada oficialmente. Veja como baixar o WhatsApp Business APK no seu celular:

      -

      Os requisitos para baixar o WhatsApp Business APK

      -

      Antes de baixar o WhatsApp Business APK, você precisa verificar se o seu celular atende aos requisitos mínimos para rodar o aplicativo. Os requisitos são:

      -
        -
      • Ter um celular Android com a versão 4.0.3 ou superior do sistema operacional.
      • -
      • Ter uma conexão estável à internet (Wi-Fi ou dados móveis).
      • -
      • Ter espaço suficiente na memória interna ou no cartão SD do celular para armazenar o arquivo APK e o aplicativo.
      • -
      • Ter um número de telefone válido que não esteja sendo usado em outra conta do WhatsApp.
      • -
      -

      Os passos para baixar o WhatsApp Business APK

      -

      Depois de verificar se o seu celular atende aos requisitos, você pode seguir os passos abaixo para baixar o WhatsApp Business APK:

      -
        -
      1. Acesse um site confiável que ofereça o download do WhatsApp Business APK. Por exemplo, você pode usar o site APKPure, que é um dos mais populares e seguros para baixar arquivos APK.
      2. -
      3. Pesquise pelo WhatsApp Business APK na barra de busca do site ou navegue pelas categorias até encontrar o aplicativo.
      4. -
      5. Clique no botão de download e aguarde até que o arquivo APK seja baixado no seu celular.
      6. -
      7. Antes de instalar o arquivo APK, você precisa habilitar a opção de instalar aplicativos de fontes desconhecidas no seu celular. Para isso, vá em Configurações > Segurança > Fontes desconhecidas e marque a caixa de seleção.
      8. -
      9. Agora, você pode abrir o arquivo APK que você baixou e clicar em Instalar. Siga as instruções na tela para concluir a instalação do aplicativo.
      10. -
      11. Pronto! Agora você pode abrir o aplicativo do WhatsApp Business e criar um perfil para o seu negócio.
      12. -
      -

      As precauções ao baixar o WhatsApp Business APK

      -

      Ao baixar o WhatsApp Business APK, você precisa tomar algumas precauções para evitar problemas como vírus, malware ou perda de dados. Veja algumas dicas:

      -
        -
      • Baixe o arquivo APK apenas de sites confiáveis e verifique se ele tem uma boa avaliação dos usuários. Evite sites que peçam informações pessoais ou financeiras ou que tenham anúncios excessivos ou suspeitos.
      • -
      • Verifique se o arquivo APK tem o tamanho correto e se corresponde à versão mais recente do aplicativo. Se o arquivo for muito pequeno ou muito grande, ou se tiver uma versão muito antiga ou muito nova, ele pode estar corrompido ou adulterado.
      • -
      • Faça um backup dos seus dados antes de instalar o arquivo APK, especialmente se você já tiver uma conta do WhatsApp no seu celular. Assim, você pode restaurar os seus dados caso algo dar errado durante a instalação. Você pode fazer o backup dos seus dados pelo WhatsApp, pelo Google Drive ou por um aplicativo de backup externo.
      • -
      • Atualize o aplicativo regularmente para ter acesso às novas funcionalidades e correções de bugs. Você pode verificar se há atualizações disponíveis pelo site que você baixou o arquivo APK ou pelo próprio aplicativo do WhatsApp Business.
      • -
      • Desinstale o aplicativo se você não estiver satisfeito com ele ou se ele causar problemas no seu celular. Você pode desinstalar o aplicativo pelo menu de configurações do seu celular ou pelo gerenciador de aplicativos.
      • -
      -

      Como usar o WhatsApp Business APK

      -

      Depois de baixar e instalar o WhatsApp Business APK, você pode começar a usar o aplicativo para se comunicar com os seus clientes pelo WhatsApp. Veja como usar o WhatsApp Business APK:

      -

      Como criar um perfil de negócio no WhatsApp Business

      -

      Para criar um perfil de negócio no WhatsApp Business, você precisa seguir os passos abaixo:

      -
        -
      1. Ao abrir o aplicativo pela primeira vez, aceite os termos de serviço e a política de privacidade do WhatsApp.
      2. -
      3. Digite o número de telefone que você quer usar para o seu negócio e verifique-o com um código que será enviado por SMS ou ligação.
      4. -
      5. Escolha um nome para o seu negócio e uma foto para o seu perfil. O nome não pode ser alterado depois, então escolha com cuidado.
      6. -
      7. Preencha as informações sobre o seu negócio, como descrição, categoria, horário de funcionamento, endereço, link para o site e catálogo. Você pode editar essas informações a qualquer momento pelo menu de configurações do aplicativo.
      8. -
      9. Pronto! Agora você tem um perfil de negócio no WhatsApp Business e pode começar a se conectar com os seus clientes.
      10. -
      -

      Como enviar e receber mensagens no WhatsApp Business

      -

      Para enviar e receber mensagens no WhatsApp Business, você pode usar os mesmos recursos que você já usa no WhatsApp Messenger, como texto, áudio, vídeo, imagem, documento, contato e localização. Além disso, você pode usar recursos exclusivos do WhatsApp Business, como mensagens automáticas, respostas rápidas e etiquetas. Veja como enviar e receber mensagens no WhatsApp Business:

      -
        -
      • Para enviar uma mensagem para um cliente, você pode clicar no ícone de nova conversa e digitar o número de telefone do cliente ou selecionar um contato da sua agenda. Você também pode escanear o QR code do cliente ou receber uma mensagem dele primeiro.
      • -
      • Para receber uma mensagem de um cliente, você pode ver as notificações na tela do seu celular ou abrir o aplicativo e ver os chats na aba de conversas. Você também pode ver as estatísticas das suas mensagens pelo menu de configurações do aplicativo.
      • -
      • Para enviar uma mensagem automática de saudação ou ausência, você pode configurá-las pelo menu de configurações do aplicativo. Você pode escolher quando as mensagens serão enviadas, para quem elas serão enviadas e qual será o conteúdo delas.
      • -
      • Para enviar uma resposta rápida para uma pergunta frequente do cliente, você pode criá-las pelo menu de configurações do aplicativo. Você pode escolher um atalho para cada resposta rápida e usá-lo para enviá-la rapidamente durante uma conversa.
      • -
      • Para usar uma etiqueta para organizar um chat por categoria, você pode clicar no ícone de etiqueta na parte superior da tela durante uma conversa e escolher uma etiqueta existente ou criar uma nova. Você também pode filtrar os chats por etiquetas na aba de conversas.
      • -
      -

      Como gerenciar e organizar os chats no WhatsApp Business

      -

      Para gerenciar e organizar os chats no WhatsApp Business, você pode usar recursos como arquivar, silenciar, fixar, apagar e bloquear chats. Veja como gerenciar e organizar os chats no WhatsApp Business:

      -
        -
      • Para arquivar um chat que você não quer ver na aba de conversas, você pode deslizar o chat para a esquerda e clicar no ícone de arquivo. Você também pode selecionar vários chats e clicar no ícone de arquivo na parte superior da tela. Para ver os chats arquivados, você pode deslizar a tela para baixo na aba de conversas. Para desarquivar um chat, você pode deslizar o chat para a direita e clicar no ícone de arquivo novamente.
      • -
      • Para silenciar um chat que você não quer receber notificações, você pode deslizar o chat para a esquerda e clicar no ícone de som. Você também pode selecionar vários chats e clicar no ícone de som na parte superior da tela. Para silenciar um chat, você pode escolher por quanto tempo você quer silenciá-lo: 8 horas, 1 semana ou 1 ano. Para cancelar o silenciamento de um chat, você pode deslizar o chat para a direita e clicar no ícone de som novamente.
      • -
      • Para fixar um chat que você quer ver no topo da aba de conversas, você pode deslizar o chat para a direita e clicar no ícone de alfinete. Você também pode selecionar vários chats e clicar no ícone de alfinete na parte superior da tela. Você pode fixar até 3 chats ao mesmo tempo. Para desfixar um chat, você pode deslizar o chat para a esquerda e clicar no ícone de alfinete novamente.
      • -
      • Para apagar um chat que você não quer mais ver na aba de conversas, você pode deslizar o chat para a esquerda e clicar no ícone de lixeira. Você também pode selecionar vários chats e clicar no ícone de lixeira na parte superior da tela. Ao apagar um chat, você pode escolher se quer apagar as mensagens do seu celular ou do celular do seu contato também. Se você apagar as mensagens do seu celular, elas ainda estarão disponíveis no backup do WhatsApp. Se você apagar as mensagens do celular do seu contato, elas serão removidas permanentemente.
      • -
      • Para bloquear um chat que você não quer mais receber mensagens, você pode abrir o chat e clicar nos três pontos na parte superior direita da tela e depois em Mais > Bloquear. Você também pode selecionar vários chats e clicar nos três pontos na parte superior direita da tela e depois em Bloquear. Ao bloquear um chat, você não receberá mais mensagens, chamadas ou atualizações de status do contato bloqueado. Para desbloquear um chat, você pode abrir o chat e clicar nos três pontos na parte superior direita da tela e depois em Desbloquear. Você também pode ir em Configurações > Conta > Privacidade > Contatos bloqueados e selecionar o contato que você quer desbloquear.
      • -
      -

      Conclusão

      -

      O WhatsApp Business é um aplicativo gratuito que foi criado para facilitar a comunicação entre as empresas e seus clientes pelo WhatsApp. Ele oferece diversas funcionalidades para as empresas se conectarem com os seus clientes pelo WhatsApp, como perfil do negócio, mensagens automáticas, etiquetas, catálogo, mensagens de notificação e chatbots. Para baixar o WhatsApp Business APK, você precisa verificar se o seu celular atende aos requisitos mínimos, acessar um site confiável que ofereça o download do arquivo APK, habilitar a opção de instalar aplicativos de fontes desconhecidas e seguir os passos para instalar o aplicativo. Para usar o WhatsApp Business APK, você precisa criar um perfil para o seu negócio, enviar e receber mensagens pelo aplicativo e gerenciar e organizar os seus chats pelo aplicativo.

      -

      Perguntas frequentes

      -

      A seguir, respondemos algumas das perguntas mais frequentes sobre o WhatsApp Business APK:

      -

      O que é o WhatsApp Business APK?

      -

      O WhatsApp Business APK é um arquivo que permite instalar o aplicativo do WhatsApp Business no seu celular Android.

      -

      Por que baixar o WhatsApp Business APK?

      -

      Você pode baixar o WhatsApp Business APK se o aplicativo não estiver disponível na loja oficial do Google Play ou se você quiser ter acesso à versão mais recente do aplicativo antes de ela ser lançada oficialmente.

      -

      Como baixar o WhatsApp Business APK?

      -

      Você pode baixar o WhatsApp Business APK acessando um site confiável que ofereça o download do arquivo APK, habilitando a opção de instalar aplicativos de fontes desconhecidas no seu celular e seguindo os passos para instalar o aplicativo.

      -

      Como usar o WhatsApp Business APK?

      -

      Você pode usar o WhatsApp Business APK criando um perfil para o seu negócio, enviando e recebendo mensagens pelo aplicativo e gerenciando e organizando os seus chats pelo aplicativo.

      -

      O WhatsApp Business APK é seguro?

      -

      O WhatsApp Business APK é seguro desde que você baixe o arquivo de um site confiável e verifique se ele tem o tamanho correto e se corresponde à versão mais recente do aplicativo. Você também deve tomar algumas precauções ao instalar o arquivo APK, como fazer um backup dos seus dados, atualizar o aplicativo regularmente e desinstalar o aplicativo se você não estiver satisfeito com ele ou se ele causar problemas no seu celular.

      -

      Esperamos que este artigo tenha sido útil para você entender o que é o WhatsApp Business APK, como baixá-lo e como usá-lo. Se você tiver alguma dúvida ou sugestão, deixe um comentário abaixo. Obrigado pela sua atenção e até a próxima!

      197e85843d
      -
      -
      \ No newline at end of file diff --git a/spaces/fatiXbelha/sd/Enjoy Free Fire Advance Server Game on Your Smartphone - Download Now and Join the Advance Community.md b/spaces/fatiXbelha/sd/Enjoy Free Fire Advance Server Game on Your Smartphone - Download Now and Join the Advance Community.md deleted file mode 100644 index f3b22f2a5468dc021ed444699ea86e4ae3ccdb99..0000000000000000000000000000000000000000 --- a/spaces/fatiXbelha/sd/Enjoy Free Fire Advance Server Game on Your Smartphone - Download Now and Join the Advance Community.md +++ /dev/null @@ -1,108 +0,0 @@ -
      -

      Free Fire Advance Server Game Download: Everything You Need to Know

      -

      Free Fire is one of the most popular battle royale games on mobile devices, with millions of players around the world. The game is constantly updated with new features and content to keep the players engaged and entertained. But did you know that you can also play the game before it is officially released to the public? Yes, you heard it right. You can download and play the Free Fire Advance Server, which is a beta version of the game that allows you to test the upcoming features and provide feedback to the developers. In this article, we will tell you everything you need to know about the Free Fire Advance Server game download, including how to register, how to download, how to get activation code, what are the features, what are the rewards, and how to give feedback.

      -

      What is Free Fire Advance Server?

      -

      Free Fire Advance Server is a special server that is opened by Garena, the developer of Free Fire, for a limited period of time before a new update is launched. The purpose of this server is to let selected players try out the new features and content that are not yet available in the official version of the game. This way, the players can help the developers find and fix any bugs or glitches that may affect the game performance or user experience. The players can also share their opinions and suggestions on how to improve the game further.

      -

      free fire advance server game download


      Downloadhttps://urllie.com/2uND3T



      -

      How to register for Free Fire Advance Server?

      -

      To play on the Free Fire Advance Server, you need to register yourself on the official website of the server. Here are the steps to do so:

      -
        -
      1. Visit https://ff-advance.ff.garena.com/, which is the official website of the Free Fire Advance Server.
      2. -
      3. Login using your Facebook account that is linked to your Free Fire account.
      4. -
      5. Fill in your personal details such as name, email address, and phone number.
      6. -
      7. Click on "Join Now" to complete your registration.
      8. -
      -

      Note that registering does not guarantee that you will get access to the server, as there are only a limited number of slots available. You will need an activation code to play on the server, which will be sent by Garena via email if you are selected.

      -

      How to download and install Free Fire Advance Server APK?

      -

      If you have received an activation code from Garena, you can download and install the Free Fire Advance Server APK on your Android device. Here are the steps to do so:

      -
        -
      1. Visit https://ff-advance.ff.garena.com/ again and login using your Facebook account.
      2. -
      3. Click on "Download APK" to download the APK file of the server.
      4. -
      5. Locate and tap on the downloaded file to install it on your device. You may need to enable "Install from unknown sources" in your device settings.
      6. -
      7. Open the app and enter your activation code when prompted.
      8. -
      9. Enjoy playing on the Free Fire Advance Server.
      10. -
      -

      How to get activation code for Free Fire Advance Server?

      -

      The activation code is a unique code that is required to play on the Free Fire Advance Server. It is sent by Garena via email to selected players who have registered on the official website of the server. The code can only be used once and cannot be shared with others. If you do not receive an activation code, it means that you are not selected for the server. You can try again in the next registration period, which is usually announced by Garena on their social media platforms.

      -

      What are the features of Free Fire Advance Server?

      -

      The Free Fire Advance Server is a great opportunity for players to experience the new features and content that are coming soon to the official version of the game. Some of the features that you can expect to see on the server are:

      -

      New characters and pets

      -

      The server often introduces new characters and pets that have unique abilities and skills. For example, in the latest server, you can play as Kelly "The Swift" Awakening, who can run faster and shoot while sprinting, or Shirou, who can mark enemies that hit him and deal more damage to them. You can also adopt new pets such as Rockie, who can reduce the cooldown of your active skills, or Mr. Waggor, who can produce gloo wall grenades for you.

      -

      New game modes and maps

      -

      The server also features new game modes and maps that offer different gameplay experiences and challenges. For example, in the latest server, you can try out the Dynamic Duo mode, where you can team up with one partner and share health and revival benefits, or the Pet Rumble mode, where you can control your pets and fight against other players in a mini-game. You can also explore new maps such as Bermuda Remastered, which is a revamped version of the classic map with new locations and graphics, or Bermuda 2.0, which is a futuristic version of the map with advanced technology and vehicles.

      -

      New weapons and items

      -

      The server also adds new weapons and items that can enhance your combat and survival skills. For example, in the latest server, you can use the MAG-7, which is a powerful shotgun that can fire multiple shots in a row, or the Vector Akimbo, which is a dual-wielded submachine gun that can unleash a barrage of bullets. You can also equip new items such as the Ice Gun, which can freeze enemies and objects, or the Decoy Grenade, which can create a fake version of yourself to distract enemies.

      -

      How to download free fire advance server game
      -Free fire advance server game apk download
      -Free fire advance server game download for android
      -Free fire advance server game download link
      -Free fire advance server game download 2023
      -Free fire advance server game download latest version
      -Free fire advance server game download obb
      -Free fire advance server game download for pc
      -Free fire advance server game download for ios
      -Free fire advance server game download size
      -Free fire advance server game download update
      -Free fire advance server game download website
      -Free fire advance server game download error
      -Free fire advance server game download without vpn
      -Free fire advance server game download mod apk
      -Free fire advance server game features
      -Free fire advance server game registration
      -Free fire advance server game login
      -Free fire advance server game code
      -Free fire advance server game rewards
      -Free fire advance server game release date
      -Free fire advance server game review
      -Free fire advance server game tips and tricks
      -Free fire advance server game gameplay
      -Free fire advance server game requirements
      -Free fire advance server game problems and solutions
      -Free fire advance server game feedback and report
      -Free fire advance server game new characters and weapons
      -Free fire advance server game new map and mode
      -Free fire advance server game new events and missions
      -Free fire advance server game online play
      -Free fire advance server game offline play
      -Free fire advance server game hack and cheat
      -Free fire advance server game test and beta version
      -Free fire advance server game invite and referral code
      -Free fire advance server game support and contact number
      -Free fire advance server game guide and tutorial
      -Free fire advance server game videos and screenshots
      -Free fire advance server game news and updates
      -Free fire advance server game community and forum

      -

      UI and gameplay changes

      -

      The server also makes some UI and gameplay changes that can improve your user experience and performance. For example, in the latest server, you can see a new lobby design that is more interactive and customizable, or a new training island that is more spacious and diverse. You can also enjoy some gameplay improvements such as a smoother movement system, a more balanced weapon system, and a more optimized network system.

      -

      What are the rewards of Free Fire Advance Server?

      -

      Playing on the Free Fire Advance Server is not only fun but also rewarding. You can get some benefits from playing on the server, such as:

      -

      Diamonds for reporting bugs

      -

      One of the main rewards of playing on the server is that you can earn diamonds for reporting bugs or glitches that you encounter while playing. Diamonds are the premium currency of Free Fire that can be used to buy various items and services in the game. You can report bugs using the in-game report button or the official website of the server. The more bugs you report, the more diamonds you will get.

      -

      Exclusive skins and cosmetics

      -

      Another reward of playing on the server is that you can get exclusive skins and cosmetics that are not available in the official version of the game. These skins and cosmetics can make your character look more stylish and unique. You can get these skins and cosmetics by completing certain tasks or missions on the server, or by participating in events or contests organized by Garena.

      -

      Early access to new content

      -

      The last but not least reward of playing on the server is that you can get early access to new content that are coming soon to the official version of the game. This way, you can enjoy the new features and content before anyone else, and also have an advantage over other players who are not familiar with them. You can also share your feedback and suggestions with Garena to help them improve the game further.

      -

      How to give feedback on Free Fire Advance Server?

      -

      Giving feedback on Free Fire Advance Server is very important, as it helps Garena to fix any issues or problems that may affect the game quality or user satisfaction. There are several ways to give feedback on Free Fire Advance Server, such as:

      -

      Using the in-game report button

      -

      The easiest way to give feedback on Free Fire Advance Server is to use the in-game report button that is located on the top right corner of the screen. You can use this button to report any bugs or glitches that you encounter while playing, or to share your opinions or suggestions on the new features or content. You can also attach screenshots or videos to support your feedback. You will receive diamonds as a reward for reporting bugs.

      -

      Using the official website

      -

      Another way to give feedback on Free Fire Advance Server is to use the official website of the server, which is https://ff-advance.ff.garena.com/. You can use this website to report any bugs or glitches that you encounter while playing, or to share your opinions or suggestions on the new features or content. You can also attach screenshots or videos to support your feedback. You will receive diamonds as a reward for reporting bugs.

      -

      Using social media platforms

      -

      The last way to give feedback on Free Fire Advance Server is to use social media platforms such as Facebook, Instagram, Twitter, or YouTube. You can use these platforms to share your experiences or thoughts on the server, or to interact with other players or Garena staff. You can also post screenshots or videos of your gameplay, or join live streams or discussions hosted by Garena. You may get a chance to win exclusive skins or cosmetics, or to get featured by Garena.

      -

      Conclusion

      -

      Free Fire Advance Server is a beta version of the game that allows you to test the upcoming features and content before they are released to the public. You can register, download, and play on the server if you are selected by Garena and receive an activation code. You can enjoy the new features and content such as new characters, pets, game modes, maps, weapons, items, UI, and gameplay changes. You can also get rewards such as diamonds, skins, and cosmetics for playing on the server and reporting bugs. You can also give feedback and suggestions to Garena using the in-game report button, the official website, or social media platforms.

      -

      FAQs

      -

      Q: How can I check if I am selected for Free Fire Advance Server?

      -

      A: You can check if you are selected for Free Fire Advance Server by visiting https://ff-advance.ff.garena.com/ and logging in using your Facebook account. If you see a "Download APK" button on the website, it means that you are selected and you can download the APK file of the server. If you do not see the button, it means that you are not selected and you have to wait for the next registration period.

      -

      Q: How long does Free Fire Advance Server last?

      -

      A: Free Fire Advance Server usually lasts for a few days before a new update is launched. The exact duration of the server may vary depending on the schedule of Garena. You can check the opening and closing dates of the server on the official website of the server or on the social media platforms of Garena.

      -

      Q: Can I play Free Fire Advance Server with my friends?

      -

      A: Yes, you can play Free Fire Advance Server with your friends if they are also selected for the server and have an activation code. You can invite them to join your squad or duo in the game lobby, or join their squad or duo if they invite you. However, you cannot play with your friends who are not on the server, as they are on a different version of the game.

      -

      Q: Will my progress on Free Fire Advance Server be transferred to the official version of the game?

      -

      A: No, your progress on Free Fire Advance Server will not be transferred to the official version of the game. The server is a separate version of the game that is used for testing purposes only. Your progress on the server will be deleted when the server is closed. However, any rewards that you earn on the server such as diamonds, skins, and cosmetics will be transferred to your account on the official version of the game.

      -

      Q: What should I do if I encounter any problems while playing on Free Fire Advance Server?

      -

      A: If you encounter any problems while playing on Free Fire Advance Server such as bugs, glitches, crashes, errors, lag, etc., you should report them to Garena using the in-game report button, the official website of the server, or social media platforms. You can also check the FAQ section on the website or the community forums for any solutions or tips. You should also keep your device updated and have a stable internet connection to avoid any problems.

      -

      I hope this article has helped you to understand everything you need to know about the Free Fire Advance Server game download. If you have any questions or comments, feel free to leave them below. Thank you for reading and happy gaming!

      197e85843d
      -
      -
      \ No newline at end of file diff --git a/spaces/fclong/summary/fengshen/examples/zen2_finetune/ner_zen2_base_ontonotes4.sh b/spaces/fclong/summary/fengshen/examples/zen2_finetune/ner_zen2_base_ontonotes4.sh deleted file mode 100644 index 1e1237967712a6862e5770e90d4e8db8d074d320..0000000000000000000000000000000000000000 --- a/spaces/fclong/summary/fengshen/examples/zen2_finetune/ner_zen2_base_ontonotes4.sh +++ /dev/null @@ -1,92 +0,0 @@ -#!/bin/bash -#SBATCH --job-name=zen2_base_ontonotes4 # create a short name for your job -#SBATCH --nodes=1 # node count -#SBATCH --ntasks=1 # total number of tasks across all nodes -#SBATCH --cpus-per-task=30 # cpu-cores per task (>1 if multi-threaded tasks) -#SBATCH --gres=gpu:1 # number of gpus per node -#SBATCH --mail-type=ALL # send email when job begins, ends or failed etc. -#SBATCH -o /cognitive_comp/ganruyi/experiments/ner_finetune/zen2_base_ontonotes4/%x-%j.log # output and error file name (%x=job name, %j=job id) - - -# export CUDA_VISIBLE_DEVICES='2' -export TORCH_EXTENSIONS_DIR=/cognitive_comp/ganruyi/tmp/torch_extendsions - -MODEL_NAME=zen2_base - -TASK=ontonotes4 - -ZERO_STAGE=1 -STRATEGY=deepspeed_stage_${ZERO_STAGE} - -ROOT_DIR=/cognitive_comp/ganruyi/experiments/ner_finetune/${MODEL_NAME}_${TASK} -if [ ! -d ${ROOT_DIR} ];then - mkdir -p ${ROOT_DIR} - echo ${ROOT_DIR} created!!!!!!!!!!!!!! -else - echo ${ROOT_DIR} exist!!!!!!!!!!!!!!! -fi - -DATA_DIR=/cognitive_comp/lujunyu/data_zh/NER_Aligned/OntoNotes4/ -PRETRAINED_MODEL_PATH=/cognitive_comp/ganruyi/hf_models/zen/zh_zen_base_2.0 -PRETRAINED_MODEL_PATH=IDEA-CCNL/Erlangshen-ZEN2-345M-Chinese - -CHECKPOINT_PATH=${ROOT_DIR}/ckpt/ -OUTPUT_PATH=${ROOT_DIR}/predict.json - -DATA_ARGS="\ - --data_dir $DATA_DIR \ - --train_data train.char.bmes \ - --valid_data test.char.bmes \ - --test_data test.char.bmes \ - --train_batchsize 32 \ - --valid_batchsize 16 \ - --max_seq_length 256 \ - --task_name ontonotes4 \ - " - -MODEL_ARGS="\ - --learning_rate 3e-5 \ - --weight_decay 0.1 \ - --warmup_ratio 0.01 \ - --markup bioes \ - --middle_prefix M- \ - " - -MODEL_CHECKPOINT_ARGS="\ - --monitor val_f1 \ - --save_top_k 3 \ - --mode max \ - --every_n_train_steps 200 \ - --save_weights_only True \ - --dirpath $CHECKPOINT_PATH \ - --filename model-{epoch:02d}-{val_f1:.4f} \ - " - -TRAINER_ARGS="\ - --max_epochs 30 \ - --gpus 1 \ - --check_val_every_n_epoch 1 \ - --val_check_interval 200 \ - --default_root_dir $ROOT_DIR \ - " - - -options=" \ - --pretrained_model_path $PRETRAINED_MODEL_PATH \ - --vocab_file $PRETRAINED_MODEL_PATH/vocab.txt \ - --do_lower_case \ - --output_save_path $OUTPUT_PATH \ - $DATA_ARGS \ - $MODEL_ARGS \ - $MODEL_CHECKPOINT_ARGS \ - $TRAINER_ARGS \ -" -SCRIPT_PATH=/cognitive_comp/ganruyi/Fengshenbang-LM/fengshen/examples/zen2_finetune/fengshen_token_level_ft_task.py -/home/ganruyi/anaconda3/bin/python $SCRIPT_PATH $options - -# SINGULARITY_PATH=/cognitive_comp/ganruyi/pytorch21_06_py3_docker_image_v2.sif -# python3 $SCRIPT_PATH $options -# source activate base -# singularity exec --nv -B /cognitive_comp/:/cognitive_comp/ $SINGULARITY_PATH /home/ganruyi/anaconda3/bin/python $SCRIPT_PATH $options -# /home/ganruyi/anaconda3/bin/python $SCRIPT_PATH $options - diff --git a/spaces/feng2022/styleganhuman_copy/torch_utils/ops/bias_act.py b/spaces/feng2022/styleganhuman_copy/torch_utils/ops/bias_act.py deleted file mode 100644 index 8041208be7680ddeceb1a87a9db9faae7101e7bf..0000000000000000000000000000000000000000 --- a/spaces/feng2022/styleganhuman_copy/torch_utils/ops/bias_act.py +++ /dev/null @@ -1,214 +0,0 @@ -# Copyright (c) SenseTime Research. All rights reserved. - -# Copyright (c) 2021, NVIDIA CORPORATION. All rights reserved. -# -# NVIDIA CORPORATION and its licensors retain all intellectual property -# and proprietary rights in and to this software, related documentation -# and any modifications thereto. Any use, reproduction, disclosure or -# distribution of this software and related documentation without an express -# license agreement from NVIDIA CORPORATION is strictly prohibited. - -"""Custom PyTorch ops for efficient bias and activation.""" - -import os -import warnings -import numpy as np -import torch -import dnnlib -import traceback - -from .. import custom_ops -from .. import misc - -#---------------------------------------------------------------------------- - -activation_funcs = { - 'linear': dnnlib.EasyDict(func=lambda x, **_: x, def_alpha=0, def_gain=1, cuda_idx=1, ref='', has_2nd_grad=False), - 'relu': dnnlib.EasyDict(func=lambda x, **_: torch.nn.functional.relu(x), def_alpha=0, def_gain=np.sqrt(2), cuda_idx=2, ref='y', has_2nd_grad=False), - 'lrelu': dnnlib.EasyDict(func=lambda x, alpha, **_: torch.nn.functional.leaky_relu(x, alpha), def_alpha=0.2, def_gain=np.sqrt(2), cuda_idx=3, ref='y', has_2nd_grad=False), - 'tanh': dnnlib.EasyDict(func=lambda x, **_: torch.tanh(x), def_alpha=0, def_gain=1, cuda_idx=4, ref='y', has_2nd_grad=True), - 'sigmoid': dnnlib.EasyDict(func=lambda x, **_: torch.sigmoid(x), def_alpha=0, def_gain=1, cuda_idx=5, ref='y', has_2nd_grad=True), - 'elu': dnnlib.EasyDict(func=lambda x, **_: torch.nn.functional.elu(x), def_alpha=0, def_gain=1, cuda_idx=6, ref='y', has_2nd_grad=True), - 'selu': dnnlib.EasyDict(func=lambda x, **_: torch.nn.functional.selu(x), def_alpha=0, def_gain=1, cuda_idx=7, ref='y', has_2nd_grad=True), - 'softplus': dnnlib.EasyDict(func=lambda x, **_: torch.nn.functional.softplus(x), def_alpha=0, def_gain=1, cuda_idx=8, ref='y', has_2nd_grad=True), - 'swish': dnnlib.EasyDict(func=lambda x, **_: torch.sigmoid(x) * x, def_alpha=0, def_gain=np.sqrt(2), cuda_idx=9, ref='x', has_2nd_grad=True), -} - -#---------------------------------------------------------------------------- - -_inited = False -_plugin = None -_null_tensor = torch.empty([0]) - -def _init(): - global _inited, _plugin - if not _inited: - _inited = True - sources = ['bias_act.cpp', 'bias_act.cu'] - sources = [os.path.join(os.path.dirname(__file__), s) for s in sources] - try: - _plugin = custom_ops.get_plugin('bias_act_plugin', sources=sources, extra_cuda_cflags=['--use_fast_math']) - except: - warnings.warn('Failed to build CUDA kernels for bias_act. Falling back to slow reference implementation. Details:\n\n' + traceback.format_exc()) - return _plugin is not None - -#---------------------------------------------------------------------------- - -def bias_act(x, b=None, dim=1, act='linear', alpha=None, gain=None, clamp=None, impl='cuda'): - r"""Fused bias and activation function. - - Adds bias `b` to activation tensor `x`, evaluates activation function `act`, - and scales the result by `gain`. Each of the steps is optional. In most cases, - the fused op is considerably more efficient than performing the same calculation - using standard PyTorch ops. It supports first and second order gradients, - but not third order gradients. - - Args: - x: Input activation tensor. Can be of any shape. - b: Bias vector, or `None` to disable. Must be a 1D tensor of the same type - as `x`. The shape must be known, and it must match the dimension of `x` - corresponding to `dim`. - dim: The dimension in `x` corresponding to the elements of `b`. - The value of `dim` is ignored if `b` is not specified. - act: Name of the activation function to evaluate, or `"linear"` to disable. - Can be e.g. `"relu"`, `"lrelu"`, `"tanh"`, `"sigmoid"`, `"swish"`, etc. - See `activation_funcs` for a full list. `None` is not allowed. - alpha: Shape parameter for the activation function, or `None` to use the default. - gain: Scaling factor for the output tensor, or `None` to use default. - See `activation_funcs` for the default scaling of each activation function. - If unsure, consider specifying 1. - clamp: Clamp the output values to `[-clamp, +clamp]`, or `None` to disable - the clamping (default). - impl: Name of the implementation to use. Can be `"ref"` or `"cuda"` (default). - - Returns: - Tensor of the same shape and datatype as `x`. - """ - assert isinstance(x, torch.Tensor) - assert impl in ['ref', 'cuda'] - if impl == 'cuda' and x.device.type == 'cuda' and _init(): - return _bias_act_cuda(dim=dim, act=act, alpha=alpha, gain=gain, clamp=clamp).apply(x, b) - return _bias_act_ref(x=x, b=b, dim=dim, act=act, alpha=alpha, gain=gain, clamp=clamp) - -#---------------------------------------------------------------------------- - -@misc.profiled_function -def _bias_act_ref(x, b=None, dim=1, act='linear', alpha=None, gain=None, clamp=None): - """Slow reference implementation of `bias_act()` using standard TensorFlow ops. - """ - assert isinstance(x, torch.Tensor) - assert clamp is None or clamp >= 0 - spec = activation_funcs[act] - alpha = float(alpha if alpha is not None else spec.def_alpha) - gain = float(gain if gain is not None else spec.def_gain) - clamp = float(clamp if clamp is not None else -1) - - # Add bias. - if b is not None: - assert isinstance(b, torch.Tensor) and b.ndim == 1 - assert 0 <= dim < x.ndim - assert b.shape[0] == x.shape[dim] - x = x + b.reshape([-1 if i == dim else 1 for i in range(x.ndim)]) - - # Evaluate activation function. - alpha = float(alpha) - x = spec.func(x, alpha=alpha) - - # Scale by gain. - gain = float(gain) - if gain != 1: - x = x * gain - - # Clamp. - if clamp >= 0: - x = x.clamp(-clamp, clamp) # pylint: disable=invalid-unary-operand-type - return x - -#---------------------------------------------------------------------------- - -_bias_act_cuda_cache = dict() - -def _bias_act_cuda(dim=1, act='linear', alpha=None, gain=None, clamp=None): - """Fast CUDA implementation of `bias_act()` using custom ops. - """ - # Parse arguments. - assert clamp is None or clamp >= 0 - spec = activation_funcs[act] - alpha = float(alpha if alpha is not None else spec.def_alpha) - gain = float(gain if gain is not None else spec.def_gain) - clamp = float(clamp if clamp is not None else -1) - - # Lookup from cache. - key = (dim, act, alpha, gain, clamp) - if key in _bias_act_cuda_cache: - return _bias_act_cuda_cache[key] - - # Forward op. - class BiasActCuda(torch.autograd.Function): - @staticmethod - def forward(ctx, x, b): # pylint: disable=arguments-differ - ctx.memory_format = torch.channels_last if x.ndim > 2 and x.stride()[1] == 1 else torch.contiguous_format - x = x.contiguous(memory_format=ctx.memory_format) - b = b.contiguous() if b is not None else _null_tensor - y = x - if act != 'linear' or gain != 1 or clamp >= 0 or b is not _null_tensor: - y = _plugin.bias_act(x, b, _null_tensor, _null_tensor, _null_tensor, 0, dim, spec.cuda_idx, alpha, gain, clamp) - ctx.save_for_backward( - x if 'x' in spec.ref or spec.has_2nd_grad else _null_tensor, - b if 'x' in spec.ref or spec.has_2nd_grad else _null_tensor, - y if 'y' in spec.ref else _null_tensor) - return y - - @staticmethod - def backward(ctx, dy): # pylint: disable=arguments-differ - dy = dy.contiguous(memory_format=ctx.memory_format) - x, b, y = ctx.saved_tensors - dx = None - db = None - - if ctx.needs_input_grad[0] or ctx.needs_input_grad[1]: - dx = dy - if act != 'linear' or gain != 1 or clamp >= 0: - dx = BiasActCudaGrad.apply(dy, x, b, y) - - if ctx.needs_input_grad[1]: - db = dx.sum([i for i in range(dx.ndim) if i != dim]) - - return dx, db - - # Backward op. - class BiasActCudaGrad(torch.autograd.Function): - @staticmethod - def forward(ctx, dy, x, b, y): # pylint: disable=arguments-differ - ctx.memory_format = torch.channels_last if dy.ndim > 2 and dy.stride()[1] == 1 else torch.contiguous_format - dx = _plugin.bias_act(dy, b, x, y, _null_tensor, 1, dim, spec.cuda_idx, alpha, gain, clamp) - ctx.save_for_backward( - dy if spec.has_2nd_grad else _null_tensor, - x, b, y) - return dx - - @staticmethod - def backward(ctx, d_dx): # pylint: disable=arguments-differ - d_dx = d_dx.contiguous(memory_format=ctx.memory_format) - dy, x, b, y = ctx.saved_tensors - d_dy = None - d_x = None - d_b = None - d_y = None - - if ctx.needs_input_grad[0]: - d_dy = BiasActCudaGrad.apply(d_dx, x, b, y) - - if spec.has_2nd_grad and (ctx.needs_input_grad[1] or ctx.needs_input_grad[2]): - d_x = _plugin.bias_act(d_dx, b, x, y, dy, 2, dim, spec.cuda_idx, alpha, gain, clamp) - - if spec.has_2nd_grad and ctx.needs_input_grad[2]: - d_b = d_x.sum([i for i in range(d_x.ndim) if i != dim]) - - return d_dy, d_x, d_b, d_y - - # Add to cache. - _bias_act_cuda_cache[key] = BiasActCuda - return BiasActCuda - -#---------------------------------------------------------------------------- diff --git a/spaces/feregVcuzo/sanity-test-midi/checkpoint/Car Parking Multiplayer MOD APK - How to Get Unlimited Money and Gold in the New Version.md b/spaces/feregVcuzo/sanity-test-midi/checkpoint/Car Parking Multiplayer MOD APK - How to Get Unlimited Money and Gold in the New Version.md deleted file mode 100644 index beb39e69c1d023587d6017fc1de10b6fa493a98e..0000000000000000000000000000000000000000 --- a/spaces/feregVcuzo/sanity-test-midi/checkpoint/Car Parking Multiplayer MOD APK - How to Get Unlimited Money and Gold in the New Version.md +++ /dev/null @@ -1,104 +0,0 @@ -
      -

      Car Parking Multiplayer Mod APK: A Fun and Realistic Driving Simulator

      -

      If you are a fan of cars, driving, or online games, you will love Car Parking Multiplayer. This is a game that lets you experience the thrill of driving in various scenarios, from parking lots to city streets. You can also interact with other players online, chat with them, and join races or challenges. But what if you want to enjoy the game without any limitations or restrictions? That's where Car Parking Multiplayer Mod APK comes in. This is a modified version of the game that gives you unlimited money and gold, as well as access to all the cars and features in the game. In this article, we will tell you more about Car Parking Multiplayer, why you should download the mod apk, and how to do it.

      -

      car parking multiplayer mod apk unlimited money and gold new version


      DOWNLOAD ––– https://gohhs.com/2uPvAy



      -

      What is Car Parking Multiplayer?

      -

      Car Parking Multiplayer is a simulation game that was developed by olzhass, a studio that specializes in creating realistic and immersive driving games. The game has over 100 million downloads on Google Play Store, and has received positive reviews from users and critics alike. The game is designed to provide a realistic and fun driving experience, with features such as:

      -

      Features of Car Parking Multiplayer

      -

      - Realistic car physics and graphics

      -

      The game uses advanced physics engine and high-quality graphics to create a realistic driving environment. You can feel the weight, speed, and handling of each car, as well as the effects of weather, terrain, and traffic. The game also supports 360-degree camera view, which allows you to see your car from different angles.

      -

      - Open world map with different locations

      -

      The game offers a large open world map that you can explore freely. You can drive around different locations, such as parking lots, airports, highways, deserts, mountains, and more. Each location has its own challenges and missions that you can complete to earn money and rewards.

      -

      - Thousands of cars to choose from

      -

      The game features a huge collection of cars that you can drive, from classic cars to sports cars, from trucks to buses, and more. You can also customize your car with different colors, stickers, wheels, spoilers, and other accessories. You can even create your own car using the in-game editor.

      -

      car parking multiplayer mod apk latest version with unlimited money and gold
      -download car parking multiplayer mod apk for free and get unlimited money and gold
      -how to install car parking multiplayer mod apk on android and enjoy unlimited money and gold
      -car parking multiplayer mod apk 4.8.9.4.4 (unlimited money) - apkdone[^1^]
      -car parking multiplayer mod apk unlimited money and gold 2023 update
      -car parking multiplayer hack mod apk with unlimited money and gold
      -car parking multiplayer mod apk online with unlimited money and gold
      -car parking multiplayer mod apk offline with unlimited money and gold
      -car parking multiplayer mod apk no root required for unlimited money and gold
      -car parking multiplayer mod apk unlimited everything (money, gold, cars, etc.)
      -car parking multiplayer cheats mod apk for unlimited money and gold
      -car parking multiplayer mod apk unlocked all cars with unlimited money and gold
      -car parking multiplayer mod apk unlimited money and gold download link
      -car parking multiplayer mod apk gameplay with unlimited money and gold
      -car parking multiplayer mod apk review with unlimited money and gold
      -best car parking multiplayer mod apk with unlimited money and gold
      -safe and secure car parking multiplayer mod apk with unlimited money and gold
      -car parking multiplayer simulator mod apk with unlimited money and gold
      -car parking multiplayer realistic mod apk with unlimited money and gold
      -car parking multiplayer fun mod apk with unlimited money and gold
      -car parking multiplayer 3d mod apk with unlimited money and gold
      -car parking multiplayer hd mod apk with unlimited money and gold
      -car parking multiplayer pro mod apk with unlimited money and gold
      -car parking multiplayer premium mod apk with unlimited money and gold
      -car parking multiplayer vip mod apk with unlimited money and gold
      -car parking multiplayer mega mod apk with unlimited money and gold
      -car parking multiplayer ultimate mod apk with unlimited money and gold
      -car parking multiplayer super mod apk with unlimited money and gold
      -car parking multiplayer extreme mod apk with unlimited money and gold
      -car parking multiplayer deluxe mod apk with unlimited money and gold
      -car parking multiplayer classic mod apk with unlimited money and gold
      -car parking multiplayer retro mod apk with unlimited money and gold
      -car parking multiplayer modern mod apk with unlimited money and gold
      -car parking multiplayer custom mod apk with unlimited money and gold
      -car parking multiplayer new version mod apk with unlimited money and gold
      -car parking multiplayer old version mod apk with unlimited money and gold
      -car parking multiplayer latest update mod apk with unlimited money and gold
      -car parking multiplayer bug fix mod apk with unlimited money and gold
      -car parking multiplayer improved performance mod apk with unlimited money and gold
      -car parking multiplayer enhanced graphics mod apk with unlimited money and gold
      -car parking multiplayer easy mode mod apk with unlimited money and gold
      -car parking multiplayer hard mode mod apk with unlimited money and gold
      -car parking multiplayer sandbox mode mod apk with unlimited money and gold
      -car parking multiplayer adventure mode mod apk with unlimited money and gold
      -car parking multiplayer racing mode mod apk with unlimited money and gold
      -car parking multiplayer drifting mode mod apk with unlimited money and gold
      -car parking multiplayer police mode mod apk with unlimited money and gold
      -car parking multiplayer zombie mode mod apk with unlimited money and gold
      -car parking multiplayer city mode mod apk with unlimited money and gold

      -

      - Online multiplayer mode with voice chat

      -

      The game allows you to play online with other players from around the world. You can join or create rooms with up to 100 players, chat with them using voice or text messages, and challenge them to races or other activities. You can also exchange cars with other players or join clans.

      -

      - Customizable car settings and accessories

      -

      The game lets you adjust your car settings according to your preference. You can change the engine power, brake force, steering sensitivity, suspension stiffness, and more. You can also equip your car with different accessories, such as nitro boosters, turbo chargers, police sirens, horns, etc.

      -

      Why download Car Parking Multiplayer Mod APK?

      -

      Car Parking Multiplayer is a fun and addictive game that will keep you entertained for hours. However, it also has some drawbacks that may limit your enjoyment of the game. For example:

      - Benefits of Car Parking Multiplayer Mod APK -

      Car Parking Multiplayer Mod APK is a modified version of the game that gives you some advantages over the original game. By downloading and installing this mod apk, you can enjoy the following benefits:

      -

      - Unlimited money and gold

      -

      Money and gold are the main currencies in the game that you can use to buy and upgrade cars, as well as unlock new features and locations. However, earning money and gold in the game can be time-consuming and tedious. With Car Parking Multiplayer Mod APK, you don't have to worry about that. You will get unlimited money and gold in your account, which means you can buy any car you want, upgrade it to the max, and explore all the locations in the game.

      -

      - All cars unlocked and upgraded

      -

      As mentioned earlier, the game has thousands of cars that you can drive and customize. However, not all of them are available from the start. You have to unlock them by completing missions, reaching certain levels, or spending money and gold. With Car Parking Multiplayer Mod APK, you don't have to do any of that. You will have access to all the cars in the game, and they will be already upgraded to the highest level. You can choose any car you like, and enjoy its full potential.

      -

      - No ads and no root required

      -

      Another drawback of the original game is that it contains ads that may interrupt your gameplay or consume your data. Moreover, some features of the game may require root access on your device, which may compromise your security or warranty. With Car Parking Multiplayer Mod APK, you don't have to deal with any of that. The mod apk is free of ads, and does not require root access on your device. You can play the game smoothly and safely.

      -

      How to download and install Car Parking Multiplayer Mod APK?

      -

      Now that you know the benefits of Car Parking Multiplayer Mod APK, you may be wondering how to download and install it on your device. Don't worry, it's very easy and simple. Just follow these steps:

      -

      Steps to download and install Car Parking Multiplayer Mod APK

      -

      - Download the mod apk file from a trusted source

      -

      The first step is to download the mod apk file from a reliable source. You can find many websites that offer Car Parking Multiplayer Mod APK, but not all of them are safe or updated. We recommend you to use this link, which will take you to a secure and verified site where you can download the latest version of Car Parking Multiplayer Mod APK.

      -

      - Enable unknown sources on your device settings

      -

      The next step is to enable unknown sources on your device settings. This will allow you to install apps from sources other than Google Play Store. To do this, go to your device settings, then security or privacy, then unknown sources or install unknown apps, then toggle on the option for your browser or file manager.

      -

      - Install the mod apk file and launch the game

      -

      The final step is to install the mod apk file and launch the game. To do this, go to your file manager or downloads folder, then locate the mod apk file that you downloaded earlier, then tap on it to install it. Once the installation is done, tap on open or launch to start playing Car Parking Multiplayer Mod APK.

      -

      Conclusion

      -

      Car Parking Multiplayer is a fun and realistic driving simulator that lets you drive various cars in different locations and scenarios. You can also play online with other players, chat with them, and join races or challenges. However, if you want to enjoy the game without any limitations or restrictions, you should download Car Parking Multiplayer Mod APK. This is a modified version of the game that gives you unlimited money and gold, as well as access to all the cars and features in the game. You can download and install Car Parking Multiplayer Mod APK easily and safely by following the steps we provided above.

      -

      We hope this article was helpful for you. If you have any questions or feedback, feel free to leave a comment below. Thank you for reading!

      -

      Frequently Asked Questions

      -

      Here are some of the most common questions that users have about Car Parking Multiplayer Mod APK:

      -
        -
      1. Is Car Parking Multiplayer Mod APK safe?
      2. -

        Yes, Car Parking Multiplayer Mod APK is safe to use. It does not contain any viruses or malware, and does not require root access on your device. However, make sure you download it from a trusted source like we mentioned above.

        -
      3. Is Car Parking Multiplayer Mod APK compatible with my device?
      4. -

        Yes, Car Parking Multiplayer Mod APK is compatible with most Android devices. It requires Android 4.4 or higher to run smoothly. However, some features may not work on some devices or models, depending on their specifications and compatibility.

        -
      5. Can I play online with Car Parking Multiplayer Mod APK?
      6. -

        Yes, you can play online with Car Parking Multiplayer Mod APK. You can join or create rooms with other players, chat with them, and challenge them to races or other activities. However, you may encounter some issues or errors when playing online, such as lagging, crashing, or disconnecting. This is because the mod apk is not the official version of the game, and may not be supported by the game servers.

        -
      7. Will I get banned for using Car Parking Multiplayer Mod APK?
      8. -

        There is a possibility that you may get banned for using Car Parking Multiplayer Mod APK. This is because the mod apk is not the official version of the game, and may violate the game's terms and conditions. The game developers may detect your use of the mod apk, and may suspend or terminate your account. Therefore, we advise you to use the mod apk at your own risk, and do not use it for any illegal or unethical purposes.

        -
      9. Can I update Car Parking Multiplayer Mod APK?
      10. -

        No, you cannot update Car Parking Multiplayer Mod APK. This is because the mod apk is not the official version of the game, and may not be compatible with the latest updates and patches from the game developers. If you try to update the mod apk, you may lose your progress, data, or money and gold. Therefore, we recommend you to stick with the version of the mod apk that you downloaded, and do not update it unless there is a new version of the mod apk available from a trusted source.

        -

      401be4b1e0
      -
      -
      \ No newline at end of file diff --git a/spaces/feregVcuzo/sanity-test-midi/checkpoint/Download Instagram APK App for Android - Free Fast and Secure.md b/spaces/feregVcuzo/sanity-test-midi/checkpoint/Download Instagram APK App for Android - Free Fast and Secure.md deleted file mode 100644 index c8c61849bbf050d617f4dfa0fc5fdfb971b98640..0000000000000000000000000000000000000000 --- a/spaces/feregVcuzo/sanity-test-midi/checkpoint/Download Instagram APK App for Android - Free Fast and Secure.md +++ /dev/null @@ -1,162 +0,0 @@ -
      -

      Download Instagram APK App: How to Install and Use the Popular Social Media Platform on Your Android Device

      -

      Instagram is one of the most popular social media platforms in the world, with over 1 billion users. It allows you to create and share your photos, stories, reels, and videos with the friends and followers you care about. You can also explore and interact with other users from all over the world, discover new content, and express yourself in creative ways.

      -

      download instagram apk app


      Download >>> https://gohhs.com/2uPvuy



      -

      If you have an Android device, you can download and install the Instagram APK app from various sources online. This way, you can enjoy the latest features and updates of Instagram without waiting for the official release on Google Play Store. In this article, we will show you how to download Instagram APK app for Android, how to use it safely and effectively, and how to make the most out of your Instagram experience.

      -

      What is Instagram and Why You Should Use It

      -

      Instagram is a social media platform that was launched in 2010 by Kevin Systrom and Mike Krieger. It was acquired by Facebook in 2012, and since then it has grown into one of the most influential and popular platforms in the world.

      -

      Features of Instagram

      -

      Instagram has many features that make it unique and appealing to users. Some of these features are:

      -
        -
      • Photos: You can upload photos from your phone library or take them directly from the app. You can also edit them with filters, stickers, text, and other tools.
      • -
      • Reels: You can create short videos up to 30 seconds long with music, effects, filters, emojis, and stickers. You can watch, like, comment, and share reels videos in a dedicated space in the app.
      • -
      • Videos: You can upload longer videos up to 60 seconds long or use IGTV for videos up to 10 minutes long. You can also watch videos from other users in your feed or in the explore tab.
      • -
      • Stories: You can add photos and videos to your story that disappear after 24 hours. You can also use creative tools like boomerang, superzoom, polls, questions, and more to make your stories fun and interactive.
      • -
      • Direct Messages: You can send messages to your friends, share posts privately, and receive chat notifications. You can also video chat with up to four people at a time.
      • -
      • Explore: You can discover new content and accounts based on your interests and preferences. You can also search by hashtags, keywords, locations, or people.
      • -
      -

      Benefits of Using Instagram

      -

      Instagram has many benefits that make it worth using. Some of these benefits are:

      -
        -
      • Social Connection: You can connect with your friends, family, celebrities, influencers, brands, and other people who share your passions. You can also join communities and groups that relate to your hobbies, interests, or goals.
      • -
      • Creative Expression: You can showcase your talents, skills, personality, and style through your photos, reels, videos, and stories. You can also inspire others with your content and get feedback from your audience.
      • -
      • Entertainment: You can enjoy millions of entertaining, funny, informative, and educational content from other users. You can also participate in challenges, trends, contests, giveaways, and live events.
      • -
      • Educational: You can learn

        new things, skills, tips, and tricks from experts, professionals, and educators. You can also access free courses, tutorials, and resources on various topics.

      • -
      • Business: You can promote your products, services, or brand to a large and engaged audience. You can also use Instagram ads, analytics, and insights to reach your target market and grow your business.
      • -
      -

      How to Download Instagram APK App for Android

      -

      If you want to download Instagram APK app for Android, you need to find a reliable and safe source online. There are many websites that offer Instagram APK app for free, but some of them may contain malware, viruses, or spyware that can harm your device or compromise your privacy. Therefore, you need to be careful and do some research before downloading any APK file.

      -

      Steps to Download and Install Instagram APK App

      -

      Here are the steps to download and install Instagram APK app for Android:

      -
        -
      1. Go to a trusted website that offers Instagram APK app for download. Some of the popular ones are APKPure, APKMirror, and Uptodown. You can also search for "Instagram APK app" on Google or any other search engine.
      2. -
      3. Choose the latest version of Instagram APK app that is compatible with your device and operating system. You can check the version number, file size, date of release, and user ratings before downloading.
      4. -
      5. Tap on the download button or link and wait for the file to be downloaded to your device. You may need to grant permission to the browser or app to download files from unknown sources.
      6. -
      7. Once the file is downloaded, locate it in your device's file manager or downloads folder. Tap on the file to open it and start the installation process. You may need to enable the option to install apps from unknown sources in your device's settings.
      8. -
      9. Follow the instructions on the screen to complete the installation. You may need to accept the terms and conditions, grant permissions, and choose preferences.
      10. -
      11. After the installation is done, you can launch the Instagram app from your home screen or app drawer. You can also create a shortcut or widget for easy access.
      12. -
      -

      Tips to Use Instagram APK App Safely and Effectively

      -

      Here are some tips to use Instagram APK app safely and effectively:

      -
        -
      • Update regularly: You should check for updates frequently and download them as soon as they are available. This way, you can enjoy the latest features and bug fixes of Instagram. You can also avoid security issues and compatibility problems.
      • -
      • Backup your data: You should backup your data regularly and store it in a safe place. This way, you can restore your data in case of any loss, damage, or corruption. You can use Google Drive, Dropbox, or any other cloud service to backup your data.
      • -
      • Use antivirus software: You should use antivirus software on your device and scan it regularly for any malware, viruses, or spyware. This way, you can protect your device and data from any potential threats. You can use Avast, AVG, or any other reputable antivirus software.
      • -
      • Use VPN service: You should use VPN service on your device and connect to a secure server when using Instagram APK app. This way, you can encrypt your data and hide your IP address from any prying eyes. You can also access Instagram from any location without any restrictions. You can use ExpressVPN, NordVPN, or any other reliable VPN service.
      • -
      -

      How to Use Instagram on Your Android Device

      -

      Now that you have downloaded and installed Instagram APK app on your Android device, you can start using it and enjoy its features. Here are some of the things you can do with Instagram on your Android device:

      -

      download instagram apk app for android
      -download instagram apk app latest version
      -download instagram apk app from meta
      -download instagram apk app free
      -download instagram apk app for pc
      -download instagram apk app mod
      -download instagram apk app old version
      -download instagram apk app update
      -download instagram apk app offline
      -download instagram apk app without google play
      -download instagram apk app for ios
      -download instagram apk app from aptoide
      -download instagram apk app for firestick
      -download instagram apk app with reels
      -download instagram apk app beta
      -download instagram apk app cracked
      -download instagram apk app dark mode
      -download instagram apk app for windows 10
      -download instagram apk app hack
      -download instagram apk app lite
      -download instagram apk app no ads
      -download instagram apk app pro
      -download instagram apk app with music
      -download instagram apk app 2023
      -download instagram apk app 287.0.0.25.77
      -how to download instagram apk app on android phone
      -how to download instagram apk app on laptop
      -how to download instagram apk app on iphone
      -how to download instagram apk app on macbook
      -how to download instagram apk app on chromebook
      -how to download instagram apk app on smart tv
      -how to download instagram apk app on kindle fire
      -how to download instagram apk app on samsung galaxy s21
      -how to download instagram apk app on huawei p40 pro
      -how to download instagram apk app on bluestacks emulator
      -where to download instagram apk app safely and securely
      -where to download instagram apk app for android tv box
      -where to download instagram apk app for blackberry z10
      -where to download instagram apk app for nokia x2 01
      -where to download instagram apk app for oppo a37f
      -where to download instagram apk app for vivo y53
      -where to download instagram apk app for xiaomi redmi note 9
      -where to download instagram apk app for zte blade v7 lite
      -why you should download instagram apk app instead of using the web version
      -why you should not download instagram apk app from unknown sources
      -why you should always update your downloaded instagram apk app regularly
      -why you should backup your downloaded instagram apk app data before uninstalling
      -why you should use a vpn when downloading the instagram apk app in restricted countries
      -why you should enable the unknown sources option when downloading the instagram apk app manually

      -

      How to Create an Account and Log in

      -

      To use Instagram, you need to create an account and log in with your credentials. Here are the steps to do so:

      -
        -
      1. Open the Instagram app on your device and tap on "Sign up" or "Log in".
      2. -
      3. If you want to sign up, you can either use your phone number or email address, or connect with your Facebook account. Enter your details and tap on "Next".
      4. -
      5. If you want to log in, you can either use your username or email address and password, or connect with your Facebook account. Enter your details and tap on "Log in".
      6. -
      7. You can also use the "Forgot password?" option if you have trouble logging in.
      8. -Once you have signed up or logged in, you can customize your profile by adding a photo, a bio, a website, and other information. You can also follow other users, invite your contacts, and adjust your settings. -
      -

      How to Post Photos, Reels, and Videos

      -

      To post photos, reels, and videos on Instagram, you need to use the camera icon at the bottom of the screen. Here are the steps to do so:

      -
        -
      1. Tap on the camera icon and choose the type of content you want to post. You can either take a photo or video from the app, or select one from your gallery.
      2. -
      3. If you want to post a photo, you can either use the normal mode, the boomerang mode, the layout mode, or the multi-capture mode. You can also use the flash, the timer, the filters, and other tools to enhance your photo.
      4. -
      5. If you want to post a reel, you can either use the audio library or your own audio to add music to your video. You can also use the speed, the effects, the timer, the align, and other tools to create your reel.
      6. -
      7. If you want to post a video, you can either use the normal mode or the live mode. You can also use the flash, the filters, and other tools to improve your video.
      8. -
      9. Once you have taken or selected your photo, reel, or video, you can edit it further by cropping, rotating, adjusting, adding stickers, text, drawings, and more.
      10. -
      11. When you are done editing, tap on "Next" and add a caption, hashtags, location, tags, and other options to your post. You can also choose to share it on other platforms like Facebook, Twitter, or WhatsApp.
      12. -
      13. Finally, tap on "Share" and wait for your post to be uploaded. You can also save it as a draft if you want to post it later.
      14. -
      -

      How to Explore and Interact with Other Users

      -

      To explore and interact with other users on Instagram, you need to use the magnifying glass icon at the bottom of the screen. Here are some of the things you can do with it:

      -
        -
      • Explore: You can discover new content and accounts based on your interests and preferences. You can also search by hashtags, keywords, locations, or people. You can tap on any post to view it in full screen, like it, comment on it, save it, or share it.
      • -
      • Follow: You can follow other users that you like or admire. You can also see who follows you back and who doesn't. You can unfollow any user at any time if you change your mind.
      • -
      • Like: You can like any post that you find interesting or appealing. You can also see how many likes a post has received and who has liked it.
      • -
      • Comment: You can comment on any post that you want to express your opinion or feedback. You can also reply to other comments and mention other users with @.
      • -
      • Save: You can save any post that you want to revisit later. You can also create collections of saved posts and organize them by categories.
      • -
      • Share: You can share any post that you want to show to your friends or followers. You can either send it as a direct message or post it as a story.
      • -
      -

      How to Use Instagram Stories and Direct Messages

      -

      To use Instagram stories and direct messages on Instagram , you need to use the paper plane icon and the plus icon at the top of the screen. Here are some of the things you can do with them:

      -
        -
      • Stories: You can add photos and videos to your story that disappear after 24 hours. You can also use creative tools like boomerang, superzoom, polls, questions, and more to make your stories fun and interactive. You can view other users' stories by tapping on their profile pictures or swiping left on the screen. You can also react, reply, or share their stories.
      • -
      • Direct Messages: You can send messages to your friends, share posts privately, and receive chat notifications. You can also video chat with up to four people at a time. You can access your direct messages by tapping on the paper plane icon or swiping right on the screen. You can also create groups, mute conversations, or block users.
      • -
      -

      Conclusion

      -

      Summary of the Main Points

      -

      In this article, we have shown you how to download Instagram APK app for Android, how to use it safely and effectively, and how to make the most out of your Instagram experience. We have covered the following topics:

      -
        -
      • What is Instagram and why you should use it
      • -
      • How to download Instagram APK app for Android
      • -
      • How to use Instagram on your Android device
      • -
      • How to post photos, reels, and videos
      • -
      • How to explore and interact with other users
      • -
      • How to use Instagram stories and direct messages
      • -
      -

      Call to Action

      -

      We hope you have found this article helpful and informative. If you have any questions or feedback, please feel free to leave a comment below. If you liked this article, please share it with your friends and followers. And if you haven't already, download Instagram APK app for Android today and start enjoying the amazing features of this popular social media platform.

      -

      Frequently Asked Questions

      -

      Here are some of the frequently asked questions about Instagram APK app for Android:

      -
        -
      1. What is an APK file?
      2. -

        An APK file is an Android Package file that contains all the files and code needed to install and run an app on an Android device. It is similar to an EXE file for Windows or a DMG file for Mac.

        -
      3. Is it safe to download Instagram APK app for Android?
      4. -

        It depends on the source you download it from. Some websites may offer fake or malicious APK files that can harm your device or compromise your privacy. Therefore, you need to be careful and do some research before downloading any APK file. You should also use antivirus software and VPN service to protect your device and data.

        -
      5. What are the advantages of downloading Instagram APK app for Android?
      6. -

        The main advantage of downloading Instagram APK app for Android is that you can enjoy the latest features and updates of Instagram without waiting for the official release on Google Play Store. You can also access Instagram from any location without any restrictions.

        -
      7. What are the disadvantages of downloading Instagram APK app for Android?
      8. -

        The main disadvantage of downloading Instagram APK app for Android is that you may encounter some bugs or errors that are not fixed yet. You may also face some compatibility issues with your device or operating system. You may also miss out on some security patches or support from the developers.

        -
      9. How can I update Instagram APK app for Android?
      10. -

        You can update Instagram APK app for Android by downloading and installing the latest version of the APK file from a trusted website. You can also check for updates frequently and download them as soon as they are available.

        -

      197e85843d
      -
      -
      \ No newline at end of file diff --git a/spaces/fffffu/bing/src/components/ui/dropdown-menu.tsx b/spaces/fffffu/bing/src/components/ui/dropdown-menu.tsx deleted file mode 100644 index 184d4e6007ef85187446362f69532ab077897fea..0000000000000000000000000000000000000000 --- a/spaces/fffffu/bing/src/components/ui/dropdown-menu.tsx +++ /dev/null @@ -1,128 +0,0 @@ -'use client' - -import * as React from 'react' -import * as DropdownMenuPrimitive from '@radix-ui/react-dropdown-menu' - -import { cn } from '@/lib/utils' - -const DropdownMenu = DropdownMenuPrimitive.Root - -const DropdownMenuTrigger = DropdownMenuPrimitive.Trigger - -const DropdownMenuGroup = DropdownMenuPrimitive.Group - -const DropdownMenuPortal = DropdownMenuPrimitive.Portal - -const DropdownMenuSub = DropdownMenuPrimitive.Sub - -const DropdownMenuRadioGroup = DropdownMenuPrimitive.RadioGroup - -const DropdownMenuSubContent = React.forwardRef< - React.ElementRef, - React.ComponentPropsWithoutRef ->(({ className, ...props }, ref) => ( - -)) -DropdownMenuSubContent.displayName = - DropdownMenuPrimitive.SubContent.displayName - -const DropdownMenuContent = React.forwardRef< - React.ElementRef, - React.ComponentPropsWithoutRef ->(({ className, sideOffset = 4, ...props }, ref) => ( - - - -)) -DropdownMenuContent.displayName = DropdownMenuPrimitive.Content.displayName - -const DropdownMenuItem = React.forwardRef< - React.ElementRef, - React.ComponentPropsWithoutRef & { - inset?: boolean - } ->(({ className, inset, ...props }, ref) => ( - -)) -DropdownMenuItem.displayName = DropdownMenuPrimitive.Item.displayName - -const DropdownMenuLabel = React.forwardRef< - React.ElementRef, - React.ComponentPropsWithoutRef & { - inset?: boolean - } ->(({ className, inset, ...props }, ref) => ( - -)) -DropdownMenuLabel.displayName = DropdownMenuPrimitive.Label.displayName - -const DropdownMenuSeparator = React.forwardRef< - React.ElementRef, - React.ComponentPropsWithoutRef ->(({ className, ...props }, ref) => ( - -)) -DropdownMenuSeparator.displayName = DropdownMenuPrimitive.Separator.displayName - -const DropdownMenuShortcut = ({ - className, - ...props -}: React.HTMLAttributes) => { - return ( - - ) -} -DropdownMenuShortcut.displayName = 'DropdownMenuShortcut' - -export { - DropdownMenu, - DropdownMenuTrigger, - DropdownMenuContent, - DropdownMenuItem, - DropdownMenuLabel, - DropdownMenuSeparator, - DropdownMenuShortcut, - DropdownMenuGroup, - DropdownMenuPortal, - DropdownMenuSub, - DropdownMenuSubContent, - DropdownMenuRadioGroup -} diff --git a/spaces/fffiloni/bark-transformers-example/app.py b/spaces/fffiloni/bark-transformers-example/app.py deleted file mode 100644 index 20b576fa9e12710c8913e1aa46f2e4894231cb12..0000000000000000000000000000000000000000 --- a/spaces/fffiloni/bark-transformers-example/app.py +++ /dev/null @@ -1,115 +0,0 @@ -import gradio as gr -import torch -from transformers import BarkModel -from optimum.bettertransformer import BetterTransformer - -model = BarkModel.from_pretrained("suno/bark", torch_dtype=torch.float16) -device = "cuda:0" if torch.cuda.is_available() else "cpu" -model = model.to(device) - -from transformers import AutoProcessor -processor = AutoProcessor.from_pretrained("suno/bark") - -# Use bettertransform for flash attention -model = BetterTransformer.transform(model, keep_original_model=False) - -# Enable CPU offload -model.enable_cpu_offload() - -import numpy as np -from scipy.io.wavfile import write as write_wav -import wave - -def split_text_into_sentences(text): - sentences = [] - current_sentence = '' - words = text.split() - - for word in words: - current_sentence += ' ' + word - if word.endswith('.'): - sentences.append(current_sentence.strip()) - current_sentence = '' - - if current_sentence: - sentences.append(current_sentence.strip()) - - return sentences - -def join_wav_files(input_files, output_file): - # Open the first input file to get its parameters - with wave.open(input_files[0], 'rb') as first_file: - # Get the audio parameters from the first file - params = first_file.getparams() - - # Create a new wave file for writing the joined audio - with wave.open(output_file, 'wb') as output: - output.setparams(params) - - # Iterate over the input files and write their audio data to the output file - for input_file in input_files: - with wave.open(input_file, 'rb') as input: - output.writeframes(input.readframes(input.getnframes())) - - -def infer(text_prompt): - print(""" - — - Cutting text in chunks - — - """) - - - text_chunks = split_text_into_sentences(text_prompt) - - result = generate(text_chunks, "wav") - print(result) - - - output_wav = 'full_story.wav' - - join_wav_files(result, output_wav) - - return 'full_story.wav' - - -def generate(text_prompt, out_type): - text_prompt = text_prompt - - inputs = processor(text_prompt, voice_preset="v2/en_speaker_6").to(device) - - with torch.inference_mode(): - speech_output = model.generate(**inputs) - - input_waves = [] - - for i, speech_out in enumerate(speech_output): - - audio_array = speech_out.cpu().numpy().squeeze() - print(f'AUDIO_ARRAY: {audio_array}') - - # Assuming audio_array contains audio data and the sampling rate - sampling_rate = model.generation_config.sample_rate - print(f'sampling_rate: {sampling_rate}') - - if out_type == "numpy": - input_waves.append(sampling_rate, audio_array) - elif out_type == "wav": - #If you want to return a WAV file : - # Ensure the audio data is properly scaled (between -1 and 1 for 16-bit audio) - - audio_data = np.int16(audio_array * 32767) # Scale for 16-bit signed integer - write_wav(f"output_{i}.wav", sampling_rate, audio_data) - input_waves.append(f"output_{i}.wav") - return input_waves - - -with gr.Blocks() as demo: - with gr.Column(): - prompt = gr.Textbox(label="prompt") - submit_btn = gr.Button("Submit") - audio_out = gr.Audio() - submit_btn.click(fn=infer, inputs=[prompt], outputs=[audio_out]) - -demo.launch() - \ No newline at end of file diff --git a/spaces/flax-community/multilingual-image-captioning/apps/model/flax_clip_vision_mbart/generation_clip_vision_utils.py b/spaces/flax-community/multilingual-image-captioning/apps/model/flax_clip_vision_mbart/generation_clip_vision_utils.py deleted file mode 100644 index 6d6b15575b2e8092c406e5bbf63ba242f57118be..0000000000000000000000000000000000000000 --- a/spaces/flax-community/multilingual-image-captioning/apps/model/flax_clip_vision_mbart/generation_clip_vision_utils.py +++ /dev/null @@ -1,990 +0,0 @@ -from typing import Dict, Optional - -import flax -import jax -import jax.numpy as jnp -import jaxlib.xla_extension as jax_xla -import numpy as np -from jax import lax -from transformers.file_utils import ModelOutput -from transformers.generation_flax_logits_process import ( - FlaxForcedBOSTokenLogitsProcessor, - FlaxForcedEOSTokenLogitsProcessor, - FlaxLogitsProcessorList, - FlaxMinLengthLogitsProcessor, - FlaxTemperatureLogitsWarper, - FlaxTopKLogitsWarper, - FlaxTopPLogitsWarper, -) -from transformers.utils import logging - -logger = logging.get_logger(__name__) - - -@flax.struct.dataclass -class FlaxGreedySearchOutput(ModelOutput): - """ - Flax Base class for outputs of decoder-only generation models using greedy search. - Args: - sequences (:obj:`jax_xla.DeviceArray` of shape :obj:`(batch_size, max_length)`): - The generated sequences. - """ - - sequences: jax_xla.DeviceArray = None - - -@flax.struct.dataclass -class FlaxSampleOutput(ModelOutput): - """ - Flax Base class for outputs of decoder-only generation models using sampling. - Args: - sequences (:obj:`jax_xla.DeviceArray` of shape :obj:`(batch_size, max_length)`): - The generated sequences. - """ - - sequences: jax_xla.DeviceArray = None - - -@flax.struct.dataclass -class FlaxBeamSearchOutput(ModelOutput): - """ - Flax Base class for outputs of decoder-only generation models using greedy search. - Args: - sequences (:obj:`jax_xla.DeviceArray` of shape :obj:`(batch_size, max_length)`): - The generated sequences. - scores (:obj:`jax_xla.DeviceArray` of shape :obj:`(batch_size,)`): - The scores (log probabilites) of the generated sequences. - """ - - sequences: jax_xla.DeviceArray = None - scores: jax_xla.DeviceArray = None - - -@flax.struct.dataclass -class GreedyState: - cur_len: jax_xla.DeviceArray - sequences: jax_xla.DeviceArray - running_token: jax_xla.DeviceArray - is_sent_finished: jax_xla.DeviceArray - model_kwargs: Dict[str, jax_xla.DeviceArray] - - -@flax.struct.dataclass -class SampleState: - cur_len: jax_xla.DeviceArray - sequences: jax_xla.DeviceArray - running_token: jax_xla.DeviceArray - is_sent_finished: jax_xla.DeviceArray - prng_key: jax_xla.DeviceArray - model_kwargs: Dict[str, jax_xla.DeviceArray] - - -@flax.struct.dataclass -class BeamSearchState: - cur_len: jax_xla.DeviceArray - running_sequences: jax_xla.DeviceArray - running_scores: jax_xla.DeviceArray - sequences: jax_xla.DeviceArray - scores: jax_xla.DeviceArray - is_sent_finished: jax_xla.DeviceArray - model_kwargs: Dict[str, jax_xla.DeviceArray] - - -class FlaxCLIPVisionMBartGenerationMixin: - """ - A class containing all of the functions supporting generation, to be used as a mixin in - :class:`~transformers.FlaxPreTrainedModel`. - """ - - @staticmethod - def _run_loop_in_debug(cond_fn, body_fn, init_state): - """ - Run generation in untraced mode. This should only be used for debugging purposes. - """ - state = init_state - while cond_fn(state): - state = body_fn(state) - return state - - def _prepare_encoder_decoder_kwargs_for_generation(self, input_ids, model_kwargs): - encoder_kwargs = { - argument: value - for argument, value in model_kwargs.items() - if not ( - argument.startswith("decoder_") or argument.startswith("cross_attn") - ) - } - model_kwargs["encoder_outputs"] = self.encode( - input_ids, return_dict=True, **encoder_kwargs - ) - return model_kwargs - - @staticmethod - def _expand_to_num_beams(tensor, num_beams): - return jnp.broadcast_to( - tensor[:, None], (tensor.shape[0], num_beams) + tensor.shape[1:] - ) - - def generate( - self, - input_ids: jax_xla.DeviceArray, - max_length: Optional[int] = None, - pad_token_id: Optional[int] = None, - bos_token_id: Optional[int] = None, - eos_token_id: Optional[int] = None, - decoder_start_token_id: Optional[int] = None, - do_sample: Optional[bool] = None, - prng_key: Optional[jax_xla.DeviceArray] = None, - top_k: Optional[int] = None, - top_p: Optional[float] = None, - temperature: Optional[float] = None, - num_beams: Optional[int] = None, - no_repeat_ngram_size: Optional[int] = None, - min_length: Optional[int] = None, - forced_bos_token_id: Optional[int] = None, - forced_eos_token_id: Optional[int] = None, - length_penalty: Optional[float] = None, - early_stopping: Optional[bool] = None, - trace: bool = True, - params: Optional[Dict[str, jax_xla.DeviceArray]] = None, - **model_kwargs, - ): - r""" - Generates sequences for models with a language modeling head. The method currently supports greedy decoding, - and, multinomial sampling. - Apart from :obj:`input_ids`, all the arguments below will default to the value of the attribute of the same - name inside the :class:`~transformers.PretrainedConfig` of the model. The default values indicated are the - default values of those config. - Most of these parameters are explained in more detail in `this blog post - `__. - Parameters: - input_ids (:obj:`jax_xla.DeviceArray` of shape :obj:`(batch_size, sequence_length)`, `optional`): - The sequence used as a prompt for the generation. - max_length (:obj:`int`, `optional`, defaults to 20): - The maximum length of the sequence to be generated. - do_sample (:obj:`bool`, `optional`, defaults to :obj:`False`): - Whether or not to use sampling ; use greedy decoding otherwise. - temperature (:obj:`float`, `optional`, defaults to 1.0): - The value used to module the next token probabilities. - top_k (:obj:`int`, `optional`, defaults to 50): - The number of highest probability vocabulary tokens to keep for top-k-filtering. - top_p (:obj:`float`, `optional`, defaults to 1.0): - If set to float < 1, only the most probable tokens with probabilities that add up to :obj:`top_p` or - higher are kept for generation. - pad_token_id (:obj:`int`, `optional`): - The id of the `padding` token. - bos_token_id (:obj:`int`, `optional`): - The id of the `beginning-of-sequence` token. - eos_token_id (:obj:`int`, `optional`): - The id of the `end-of-sequence` token. - num_beams (:obj:`int`, `optional`, defaults to 1): - Number of beams for beam search. 1 means no beam search. - decoder_start_token_id (:obj:`int`, `optional`): - If an encoder-decoder model starts decoding with a different token than `bos`, the id of that token. - trace (:obj:`bool`, `optional`, defaults to :obj:`True`): - Whether to trace generation. Setting ``trace=False`` should only be used for debugging and will lead to - a considerably slower runtime. - params (:obj:`Dict[str, jax_xla.DeviceArray]`, `optional`): - Optionally the model parameters can be passed. Can be useful for parallelized generation. - model_kwargs: - Additional model specific kwargs will be forwarded to the :obj:`forward` function of the model. - Return: - :class:`~transformers.file_utils.ModelOutput`. - Examples:: - >>> from transformers import AutoTokenizer, FlaxAutoModelForCausalLM - >>> tokenizer = AutoTokenizer.from_pretrained("distilgpt2") - >>> model = FlaxAutoModelForCausalLM.from_pretrained("distilgpt2") - >>> input_context = "The dog" - >>> # encode input context - >>> input_ids = tokenizer(input_context, return_tensors="jax").input_ids - >>> # generate candidates using sampling - >>> outputs = model.generate(input_ids=input_ids, max_length=20, top_k=30, do_sample=True) - >>> print("Generated:", tokenizer.batch_decode(outputs, skip_special_tokens=True)) - """ - # set init values - max_length = ( - max_length - if max_length is not None - else self.config.mbart_config.max_length - ) - bos_token_id = ( - bos_token_id - if bos_token_id is not None - else self.config.mbart_config.bos_token_id - ) - pad_token_id = ( - pad_token_id - if pad_token_id is not None - else self.config.mbart_config.pad_token_id - ) - eos_token_id = ( - eos_token_id - if eos_token_id is not None - else self.config.mbart_config.eos_token_id - ) - decoder_start_token_id = ( - decoder_start_token_id - if decoder_start_token_id - else self.config.mbart_config.decoder_start_token_id - ) - prng_key = prng_key if prng_key is not None else jax.random.PRNGKey(0) - - if decoder_start_token_id is None and self.config.is_encoder_decoder: - raise ValueError( - "`decoder_start_token_id` has to be defined for encoder-decoder generation." - ) - - if self.config.is_encoder_decoder: - # add encoder_outputs to model_kwargs - model_kwargs = self._prepare_encoder_decoder_kwargs_for_generation( - input_ids, model_kwargs - ) - # prepare decoder_input_ids for generation - input_ids = ( - jnp.ones((input_ids.shape[0], 1), dtype="i4") * decoder_start_token_id - ) - - do_sample = ( - do_sample if do_sample is not None else self.config.mbart_config.do_sample - ) - num_beams = ( - num_beams if num_beams is not None else self.config.mbart_config.num_beams - ) - - if not do_sample and num_beams == 1: - logits_processor = self._get_logits_processor( - no_repeat_ngram_size, - min_length, - max_length, - eos_token_id, - forced_bos_token_id, - forced_eos_token_id, - ) - return self._greedy_search( - input_ids, - max_length, - pad_token_id, - eos_token_id, - logits_processor=logits_processor, - trace=trace, - params=params, - model_kwargs=model_kwargs, - ) - elif do_sample and num_beams == 1: - logits_warper = self._get_logits_warper( - top_k=top_k, top_p=top_p, temperature=temperature - ) - logits_processor = self._get_logits_processor( - no_repeat_ngram_size, - min_length, - max_length, - eos_token_id, - forced_bos_token_id, - forced_eos_token_id, - ) - return self._sample( - input_ids, - max_length, - pad_token_id, - eos_token_id, - prng_key, - logits_warper=logits_warper, - logits_processor=logits_processor, - trace=trace, - params=params, - model_kwargs=model_kwargs, - ) - elif not do_sample and num_beams > 1: - # broadcast input_ids & encoder_outputs - input_ids = self._expand_to_num_beams(input_ids, num_beams=num_beams) - - if "encoder_outputs" in model_kwargs: - model_kwargs["encoder_outputs"][ - "last_hidden_state" - ] = self._expand_to_num_beams( - model_kwargs["encoder_outputs"]["last_hidden_state"], - num_beams=num_beams, - ) - - if "attention_mask" in model_kwargs: - model_kwargs["attention_mask"] = self._expand_to_num_beams( - model_kwargs["attention_mask"], num_beams=num_beams - ) - - logits_processor = self._get_logits_processor( - no_repeat_ngram_size, - min_length, - max_length, - eos_token_id, - forced_bos_token_id, - forced_eos_token_id, - ) - - return self._beam_search( - input_ids, - max_length, - pad_token_id, - eos_token_id, - length_penalty=length_penalty, - early_stopping=early_stopping, - logits_processor=logits_processor, - trace=trace, - params=params, - model_kwargs=model_kwargs, - ) - else: - raise NotImplementedError("`Beam sampling is currently not implemented.") - - def _get_logits_warper( - self, top_k: int = None, top_p: float = None, temperature: float = None - ) -> FlaxLogitsProcessorList: - """ - This class returns a :obj:`~transformers.FlaxLogitsProcessorList` list object that contains all relevant - :obj:`~transformers.FlaxLogitsWarper` instances used for multinomial sampling. - """ - - # init warp parameters - top_k = top_k if top_k is not None else self.config.mbart_config.top_k - top_p = top_p if top_p is not None else self.config.mbart_config.top_p - temperature = ( - temperature - if temperature is not None - else self.config.mbart_config.temperature - ) - # instantiate warpers list - warpers = FlaxLogitsProcessorList() - - # the following idea is largely copied from this PR: https://github.com/huggingface/transformers/pull/5420/files - # all samplers can be found in `generation_utils_samplers.py` - if temperature is not None and temperature != 1.0: - warpers.append(FlaxTemperatureLogitsWarper(temperature)) - if top_k is not None and top_k != 0: - warpers.append(FlaxTopKLogitsWarper(top_k=top_k, min_tokens_to_keep=1)) - if top_p is not None and top_p < 1.0: - warpers.append(FlaxTopPLogitsWarper(top_p=top_p, min_tokens_to_keep=1)) - - return warpers - - def _get_logits_processor( - self, - no_repeat_ngram_size: int, - min_length: int, - max_length: int, - eos_token_id: int, - forced_bos_token_id: int, - forced_eos_token_id: int, - ) -> FlaxLogitsProcessorList: - """ - This class returns a :obj:`~transformers.FlaxLogitsProcessorList` list object that contains all relevant - :obj:`~transformers.FlaxLogitsProcessor` instances used to modify the scores of the language model head. - """ - processors = FlaxLogitsProcessorList() - - # init warp parameters - no_repeat_ngram_size = ( - no_repeat_ngram_size - if no_repeat_ngram_size is not None - else self.config.mbart_config.no_repeat_ngram_size - ) - min_length = ( - min_length - if min_length is not None - else self.config.mbart_config.min_length - ) - eos_token_id = ( - eos_token_id - if eos_token_id is not None - else self.config.mbart_config.eos_token_id - ) - forced_bos_token_id = ( - forced_bos_token_id - if forced_bos_token_id is not None - else self.config.mbart_config.forced_bos_token_id - ) - forced_eos_token_id = ( - forced_eos_token_id - if forced_eos_token_id is not None - else self.config.mbart_config.forced_eos_token_id - ) - - # the following idea is largely copied from this PR: https://github.com/huggingface/transformers/pull/5420/files - # all samplers can be found in `generation_utils_samplers.py` - if min_length is not None and eos_token_id is not None and min_length > -1: - processors.append(FlaxMinLengthLogitsProcessor(min_length, eos_token_id)) - if forced_bos_token_id is not None: - processors.append(FlaxForcedBOSTokenLogitsProcessor(forced_bos_token_id)) - if forced_eos_token_id is not None: - processors.append( - FlaxForcedEOSTokenLogitsProcessor(max_length, forced_eos_token_id) - ) - return processors - - def _greedy_search( - self, - input_ids: None, - max_length: Optional[int] = None, - pad_token_id: Optional[int] = None, - eos_token_id: Optional[int] = None, - logits_processor: Optional[FlaxLogitsProcessorList] = None, - trace: bool = True, - params: Optional[Dict[str, jax_xla.DeviceArray]] = None, - model_kwargs: Optional[Dict[str, jax_xla.DeviceArray]] = None, - ): - # init values - max_length = ( - max_length - if max_length is not None - else self.config.mbart_config.max_length - ) - pad_token_id = ( - pad_token_id - if pad_token_id is not None - else self.config.mbart_config.pad_token_id - ) - eos_token_id = ( - eos_token_id - if eos_token_id is not None - else self.config.mbart_config.eos_token_id - ) - - batch_size, cur_len = input_ids.shape - - eos_token_id = jnp.array(eos_token_id) - pad_token_id = jnp.array(pad_token_id) - cur_len = jnp.array(cur_len) - - # per batch-item holding current token in loop. - sequences = jnp.full((batch_size, max_length), pad_token_id, dtype=jnp.int32) - sequences = lax.dynamic_update_slice(sequences, input_ids, (0, 0)) - - # per batch-item state bit indicating if sentence has finished. - is_sent_finished = jnp.zeros((batch_size,), dtype=jnp.bool_) - - # For Seq2Seq generation, we only need to use the decoder instead of the whole model in generation loop - # and pass it the `encoder_outputs`, which are part of the `model_kwargs`. - model = self.decode if self.config.is_encoder_decoder else self - # initialize model specific kwargs - model_kwargs = self.prepare_inputs_for_generation( - input_ids, max_length, **model_kwargs - ) - - # initialize state - state = GreedyState( - cur_len=cur_len, - sequences=sequences, - running_token=input_ids, - is_sent_finished=is_sent_finished, - model_kwargs=model_kwargs, - ) - - def greedy_search_cond_fn(state): - """state termination condition fn.""" - has_reached_max_length = state.cur_len == max_length - all_sequence_finished = jnp.all(state.is_sent_finished) - finish_generation = jnp.logical_or( - has_reached_max_length, all_sequence_finished - ) - return ~finish_generation - - def greedy_search_body_fn(state): - """state update fn.""" - model_outputs = model( - state.running_token, params=params, **state.model_kwargs - ) - logits = model_outputs.logits[:, -1] - - # apply min_length, ... - logits = logits_processor(state.sequences, logits, state.cur_len) - - next_token = jnp.argmax(logits, axis=-1) - - next_is_sent_finished = state.is_sent_finished | ( - next_token == eos_token_id - ) - next_token = ( - next_token * ~next_is_sent_finished - + pad_token_id * next_is_sent_finished - ) - next_token = next_token[:, None] - - next_sequences = lax.dynamic_update_slice( - state.sequences, next_token, (0, state.cur_len) - ) - next_model_kwargs = self.update_inputs_for_generation( - model_outputs, state.model_kwargs - ) - return GreedyState( - cur_len=state.cur_len + 1, - sequences=next_sequences, - running_token=next_token, - is_sent_finished=next_is_sent_finished, - model_kwargs=next_model_kwargs, - ) - - # The very first prompt often has sequence length > 1, so run outside of `lax.while_loop` to comply with TPU - if input_ids.shape[1] > 1: - state = greedy_search_body_fn(state) - - if not trace: - state = self._run_loop_in_debug( - greedy_search_cond_fn, greedy_search_body_fn, state - ) - else: - state = lax.while_loop(greedy_search_cond_fn, greedy_search_body_fn, state) - - return FlaxGreedySearchOutput(sequences=state.sequences) - - def _sample( - self, - input_ids: None, - max_length: Optional[int] = None, - pad_token_id: Optional[int] = None, - eos_token_id: Optional[int] = None, - prng_key: Optional[jax_xla.DeviceArray] = None, - logits_processor: Optional[FlaxLogitsProcessorList] = None, - logits_warper: Optional[FlaxLogitsProcessorList] = None, - trace: bool = True, - params: Optional[Dict[str, jax_xla.DeviceArray]] = None, - model_kwargs: Optional[Dict[str, jax_xla.DeviceArray]] = None, - ): - # init values - max_length = ( - max_length - if max_length is not None - else self.config.mbart_config.max_length - ) - pad_token_id = ( - pad_token_id - if pad_token_id is not None - else self.config.mbart_config.pad_token_id - ) - eos_token_id = ( - eos_token_id - if eos_token_id is not None - else self.config.mbart_config.eos_token_id - ) - prng_key = prng_key if prng_key is not None else jax.random.PRNGKey(0) - - batch_size, cur_len = input_ids.shape - - eos_token_id = jnp.array(eos_token_id) - pad_token_id = jnp.array(pad_token_id) - cur_len = jnp.array(cur_len) - - # per batch-item holding current token in loop. - sequences = jnp.full((batch_size, max_length), pad_token_id, dtype=jnp.int32) - sequences = lax.dynamic_update_slice(sequences, input_ids, (0, 0)) - - # per batch-item state bit indicating if sentence has finished. - is_sent_finished = jnp.zeros((batch_size,), dtype=jnp.bool_) - - # For Seq2Seq generation, we only need to use the decoder instead of the whole model in generation loop - # and pass it the `encoder_outputs`, which are part of the `model_kwargs`. - model = self.decode if self.config.is_encoder_decoder else self - - # initialize model specific kwargs - model_kwargs = self.prepare_inputs_for_generation( - input_ids, max_length, **model_kwargs - ) - - # initialize state - state = SampleState( - cur_len=cur_len, - sequences=sequences, - running_token=input_ids, - is_sent_finished=is_sent_finished, - prng_key=prng_key, - model_kwargs=model_kwargs, - ) - - def sample_search_cond_fn(state): - """state termination condition fn.""" - has_reached_max_length = state.cur_len == max_length - all_sequence_finished = jnp.all(state.is_sent_finished) - finish_generation = jnp.logical_or( - has_reached_max_length, all_sequence_finished - ) - return ~finish_generation - - def sample_search_body_fn(state): - """state update fn.""" - prng_key, prng_key_next = jax.random.split(state.prng_key) - model_outputs = model( - state.running_token, params=params, **state.model_kwargs - ) - - logits = model_outputs.logits[:, -1] - - # apply min_length, ... - logits = logits_processor(state.sequences, logits, state.cur_len) - # apply top_k, top_k, temperature - logits = logits_warper(logits, logits, state.cur_len) - - next_token = jax.random.categorical( - prng_key, model_outputs.logits[:, -1], axis=-1 - ) - - next_is_sent_finished = state.is_sent_finished | ( - next_token == eos_token_id - ) - next_token = ( - next_token * ~next_is_sent_finished - + pad_token_id * next_is_sent_finished - ) - next_token = next_token[:, None] - - next_sequences = lax.dynamic_update_slice( - state.sequences, next_token, (0, state.cur_len) - ) - next_model_kwargs = self.update_inputs_for_generation( - model_outputs, state.model_kwargs - ) - - return SampleState( - cur_len=state.cur_len + 1, - sequences=next_sequences, - running_token=next_token, - is_sent_finished=next_is_sent_finished, - model_kwargs=next_model_kwargs, - prng_key=prng_key_next, - ) - - # The very first prompt often has sequence length > 1, so run outside of `lax.while_loop` to comply with TPU - if input_ids.shape[1] > 1: - state = sample_search_body_fn(state) - - if not trace: - state = self._run_loop_in_debug( - sample_search_cond_fn, sample_search_body_fn, state - ) - else: - state = lax.while_loop(sample_search_cond_fn, sample_search_body_fn, state) - - return FlaxSampleOutput(sequences=state.sequences) - - def _beam_search( - self, - input_ids: None, - max_length: Optional[int] = None, - pad_token_id: Optional[int] = None, - eos_token_id: Optional[int] = None, - length_penalty: Optional[float] = None, - early_stopping: Optional[bool] = None, - logits_processor: Optional[FlaxLogitsProcessorList] = None, - trace: bool = True, - params: Optional[Dict[str, jax_xla.DeviceArray]] = None, - model_kwargs: Optional[Dict[str, jax_xla.DeviceArray]] = None, - ): - """ - This beam search function is heavily inspired by Flax's official example: - https://github.com/google/flax/blob/master/examples/wmt/train.py#L254 - """ - - def flatten_beam_dim(tensor): - """Flattens the first two dimensions of a non-scalar array.""" - # ignore scalars (e.g. cache index) - if tensor.ndim == 0: - return tensor - return tensor.reshape( - (tensor.shape[0] * tensor.shape[1],) + tensor.shape[2:] - ) - - def unflatten_beam_dim(tensor, batch_size, num_beams): - """Unflattens the first, flat batch*beam dimension of a non-scalar array.""" - # ignore scalars (e.g. cache index) - if tensor.ndim == 0: - return tensor - return tensor.reshape((batch_size, num_beams) + tensor.shape[1:]) - - def gather_beams(nested, beam_indices, batch_size, new_num_beams): - """ - Gathers the beam slices indexed by beam_indices into new beam array. - """ - batch_indices = jnp.reshape( - jnp.arange(batch_size * new_num_beams) // new_num_beams, - (batch_size, new_num_beams), - ) - - def gather_fn(tensor): - # ignore scalars (e.g. cache index) - if tensor.ndim == 0: - return tensor - else: - return tensor[batch_indices, beam_indices] - - return jax.tree_map(gather_fn, nested) - - # init values - max_length = ( - max_length - if max_length is not None - else self.config.mbart_config.max_length - ) - pad_token_id = ( - pad_token_id - if pad_token_id is not None - else self.config.mbart_config.pad_token_id - ) - eos_token_id = ( - eos_token_id - if eos_token_id is not None - else self.config.mbart_config.eos_token_id - ) - length_penalty = ( - length_penalty - if length_penalty is not None - else self.config.mbart_config.length_penalty - ) - early_stopping = ( - early_stopping - if early_stopping is not None - else self.config.mbart_config.early_stopping - ) - - batch_size, num_beams, cur_len = input_ids.shape - - eos_token_id = jnp.array(eos_token_id) - pad_token_id = jnp.array(pad_token_id) - cur_len = jnp.array(cur_len) - - # per batch,beam-item holding current token in loop. - sequences = jnp.full( - (batch_size, num_beams, max_length), pad_token_id, dtype=jnp.int32 - ) - running_sequences = jnp.full( - (batch_size, num_beams, max_length), pad_token_id, dtype=jnp.int32 - ) - running_sequences = lax.dynamic_update_slice(sequences, input_ids, (0, 0, 0)) - - # per batch,beam-item state bit indicating if sentence has finished. - is_sent_finished = jnp.zeros((batch_size, num_beams), dtype=jnp.bool_) - - # per batch,beam-item score, logprobs - running_scores = jnp.tile( - jnp.array([0.0] + [np.array(-1.0e7)] * (num_beams - 1)), [batch_size, 1] - ) - scores = jnp.ones((batch_size, num_beams)) * np.array(-1.0e7) - - # For Seq2Seq generation, we only need to use the decoder instead of the whole model in generation loop - # and pass it the `encoder_outputs`, which are part of the `model_kwargs`. - model = self.decode if self.config.is_encoder_decoder else self - - # flatten beam dim - if "encoder_outputs" in model_kwargs: - model_kwargs["encoder_outputs"]["last_hidden_state"] = flatten_beam_dim( - model_kwargs["encoder_outputs"]["last_hidden_state"] - ) - if "attention_mask" in model_kwargs: - model_kwargs["attention_mask"] = flatten_beam_dim( - model_kwargs["attention_mask"] - ) - - # initialize model specific kwargs - model_kwargs = self.prepare_inputs_for_generation( - flatten_beam_dim(input_ids), max_length, **model_kwargs - ) - - # initialize state - state = BeamSearchState( - cur_len=cur_len, - running_sequences=running_sequences, - running_scores=running_scores, - sequences=sequences, - scores=scores, - is_sent_finished=is_sent_finished, - model_kwargs=model_kwargs, - ) - - def beam_search_cond_fn(state): - """beam search state termination condition fn.""" - - # 1. is less than max length? - not_max_length_yet = state.cur_len < max_length - - # 2. can the new beams still improve? - best_running_score = state.running_scores[:, -1:] / ( - max_length ** length_penalty - ) - worst_finished_score = jnp.where( - state.is_sent_finished, - jnp.min(state.scores, axis=1, keepdims=True), - np.array(-1.0e7), - ) - improvement_still_possible = jnp.all( - worst_finished_score < best_running_score - ) - - # 3. is there still a beam that has not finished? - still_open_beam = ~(jnp.all(state.is_sent_finished) & early_stopping) - - return not_max_length_yet & still_open_beam & improvement_still_possible - - def beam_search_body_fn(state): - """beam search state update fn.""" - # 1. Forward current tokens - # Collect the current position slice along length to feed the fast - # autoregressive decoder model. Flatten the beam dimension into batch - # dimension for feeding into the model. - # unflatten beam dimension - # Unflatten beam dimension in attention cache arrays - input_token = flatten_beam_dim( - lax.dynamic_slice( - state.running_sequences, - (0, 0, state.cur_len - 1), - (batch_size, num_beams, 1), - ) - ) - model_outputs = model(input_token, params=params, **state.model_kwargs) - logits = unflatten_beam_dim( - model_outputs.logits[:, 0], batch_size, num_beams - ) - cache = jax.tree_map( - lambda tensor: unflatten_beam_dim(tensor, batch_size, num_beams), - model_outputs.past_key_values, - ) - - # 2. Compute log probs - # get log probabilities from logits, - # process logits with processors (*e.g.* min_length, ...), and - # add new logprobs to existing running logprobs scores. - log_probs = jax.nn.log_softmax(logits) - log_probs = logits_processor( - flatten_beam_dim(running_sequences), - flatten_beam_dim(log_probs), - state.cur_len, - ) - log_probs = unflatten_beam_dim(log_probs, batch_size, num_beams) - log_probs = log_probs + jnp.expand_dims(state.running_scores, axis=2) - vocab_size = log_probs.shape[2] - log_probs = log_probs.reshape((batch_size, num_beams * vocab_size)) - - # 3. Retrieve top-K - # Each item in batch has num_beams * vocab_size candidate sequences. - # For each item, get the top 2*k candidates with the highest log- - # probabilities. We gather the top 2*K beams here so that even if the best - # K sequences reach EOS simultaneously, we have another K sequences - # remaining to continue the live beam search. - # Gather the top 2*K scores from _all_ beams. - # Gather 2*k top beams. - # Recover the beam index by floor division. - # Recover token id by modulo division and expand Id array for broadcasting. - # Update sequences for the 2*K top-k new sequences. - beams_to_keep = 2 * num_beams - topk_log_probs, topk_indices = lax.top_k(log_probs, k=beams_to_keep) - topk_beam_indices = topk_indices // vocab_size - topk_running_sequences = gather_beams( - state.running_sequences, topk_beam_indices, batch_size, beams_to_keep - ) - topk_ids = jnp.expand_dims(topk_indices % vocab_size, axis=2) - topk_sequences = lax.dynamic_update_slice( - topk_running_sequences, topk_ids, (0, 0, state.cur_len) - ) - - # 4. Check which sequences have ended - # Update current sequences: - # Did any of these sequences reach an end marker? - # To prevent these just finished sequences from being added to the current sequences - # set of active beam search sequences, set their log probs to a very large - # negative value. - did_topk_just_finished = topk_sequences[:, :, state.cur_len] == eos_token_id - topk_log_probs = topk_log_probs + did_topk_just_finished * np.array(-1.0e7) - - # 5. Get running sequences scores for next - # Determine the top k beam indices (from top 2*k beams) from log probs - # and gather top k beams (from top 2*k beams). - next_topk_indices = jnp.flip( - lax.top_k(topk_log_probs, k=num_beams)[1], axis=1 - ) - next_running_sequences, next_running_scores = gather_beams( - [topk_sequences, topk_log_probs], - next_topk_indices, - batch_size, - num_beams, - ) - - # 6. Process topk logits - # Further process log probs: - # - add length penalty - # - make sure no scores can be added anymore if beam is full - # - make sure still running sequences cannot be chosen as finalized beam - topk_log_probs = topk_log_probs / (state.cur_len ** length_penalty) - beams_in_batch_are_full = ( - jnp.broadcast_to( - state.is_sent_finished.all(axis=-1, keepdims=True), - did_topk_just_finished.shape, - ) - & early_stopping - ) - add_penalty = ~did_topk_just_finished | beams_in_batch_are_full - topk_log_probs += add_penalty * np.array(-1.0e7) - - # 7. Get scores, sequences, is sentence finished for next. - # Combine sequences, scores, and flags along the beam dimension and compare - # new finished sequence scores to existing finished scores and select the - # best from the new set of beams - merged_sequences = jnp.concatenate( - [state.sequences, topk_sequences], axis=1 - ) - merged_scores = jnp.concatenate([state.scores, topk_log_probs], axis=1) - merged_is_sent_finished = jnp.concatenate( - [state.is_sent_finished, did_topk_just_finished], axis=1 - ) - topk_merged_indices = jnp.flip( - lax.top_k(merged_scores, k=num_beams)[1], axis=1 - ) - next_sequences, next_scores, next_is_sent_finished = gather_beams( - [merged_sequences, merged_scores, merged_is_sent_finished], - topk_merged_indices, - batch_size, - num_beams, - ) - - # 8. Update model kwargs. - # Determine the top k beam indices from the original set of all beams. - # With these, gather the top k beam-associated caches. - next_running_indices = gather_beams( - topk_beam_indices, next_topk_indices, batch_size, num_beams - ) - next_cache = gather_beams( - cache, next_running_indices, batch_size, num_beams - ) - model_outputs["past_key_values"] = jax.tree_map( - lambda x: flatten_beam_dim(x), next_cache - ) - next_model_kwargs = self.update_inputs_for_generation( - model_outputs, state.model_kwargs - ) - - return BeamSearchState( - cur_len=state.cur_len + 1, - running_scores=next_running_scores, - running_sequences=next_running_sequences, - scores=next_scores, - sequences=next_sequences, - is_sent_finished=next_is_sent_finished, - model_kwargs=next_model_kwargs, - ) - - # The very first prompt often has sequence length > 1, so run outside of `lax.while_loop` to comply with TPU - state = beam_search_body_fn(state) - - if not trace: - state = self._run_loop_in_debug( - beam_search_cond_fn, beam_search_body_fn, state - ) - else: - state = lax.while_loop(beam_search_cond_fn, beam_search_body_fn, state) - - # Account for the edge-case where there are no finished sequences for a - # particular batch item. If so, return running sequences for that batch item. - none_finished = jnp.any(state.is_sent_finished, axis=1) - sequences = jnp.where( - none_finished[:, None, None], state.sequences, state.running_sequences - ) - scores = jnp.where(none_finished[:, None], state.scores, state.running_scores) - - # take best beam for each batch - sequences = sequences[:, -1] - scores = scores[:, -1] - - return FlaxBeamSearchOutput(sequences=sequences, scores=scores) diff --git a/spaces/flax-community/roberta-base-mr/multiapp.py b/spaces/flax-community/roberta-base-mr/multiapp.py deleted file mode 100644 index 8d406d61d5cbe0475ab157bf92de763ebce68994..0000000000000000000000000000000000000000 --- a/spaces/flax-community/roberta-base-mr/multiapp.py +++ /dev/null @@ -1,14 +0,0 @@ -import streamlit as st - - -class MultiApp: - def __init__(self): - self.apps = [] - - def add_app(self, title, func): - self.apps.append({"title": title, "function": func}) - - def run(self): - st.sidebar.header("Tasks") - app = st.sidebar.radio("", self.apps, format_func=lambda app: app["title"]) - app["function"]() diff --git a/spaces/flowers-team/Interactive_DeepRL_Demo/js/ui_state/store/state.js b/spaces/flowers-team/Interactive_DeepRL_Demo/js/ui_state/store/state.js deleted file mode 100644 index 5a9c8537f21c3ecfd8db5fc91691271ac73e8187..0000000000000000000000000000000000000000 --- a/spaces/flowers-team/Interactive_DeepRL_Demo/js/ui_state/store/state.js +++ /dev/null @@ -1,49 +0,0 @@ -// State of all UI elements -export default { - language: 'EN', // 'FR' - envsSets: { - baseEnvsSet: [], // list of environments of the base set - customEnvsSet: [] // list of environments of the custom set - }, - morphologies: [], // list of available morphologies - currentSeedsIdx: {}, // index of the policy selected for each morphology - agents: [], // list of running agents - simulationState: { - status: 'init', // 'running', 'paused' - intro_tour: false, - agentFollowed: null, - agentSelected: null, - }, - activeTab:'getting_started', // 'getting_started', 'draw_yourself', 'proc_gen', 'advanced_options', 'about' - parkourConfig: { // parkour customization parameters - terrain:{ - dim1: 1.0, - dim2: 0.95, - dim3: 0, - smoothing: 25, - waterLevel: 0 - }, - creepers:{ - width: 0.3, - height: 2.5, - spacing: 1, - type: 'Swingable' // 'Rigid' - } - }, - drawingModeState: { - drawing: false, - drawing_ground: false, - drawing_ceiling: false, - erasing: false, - }, - advancedOptionsState: { - drawJoints: false, - drawLidars: true, - drawNames: true, - drawObservation: false, - drawReward: false, - assets: { - circle: false, - }, - }, -}; \ No newline at end of file diff --git a/spaces/frncscp/bullerengue/README.md b/spaces/frncscp/bullerengue/README.md deleted file mode 100644 index a35a3f2612a66db29b8aff67b2ed2c0a845877ff..0000000000000000000000000000000000000000 --- a/spaces/frncscp/bullerengue/README.md +++ /dev/null @@ -1,16 +0,0 @@ ---- -title: Bullerengue -emoji: 🔥 -colorFrom: red -colorTo: indigo -sdk: gradio -sdk_version: 3.29.0 -python_version: 3.9 -app_file: app.py -pinned: false -license: mit -models: -- musika/musika-bullerengue-alpha ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference \ No newline at end of file diff --git a/spaces/georgefen/Face-Landmark-ControlNet/annotator/uniformer/mmseg/apis/inference.py b/spaces/georgefen/Face-Landmark-ControlNet/annotator/uniformer/mmseg/apis/inference.py deleted file mode 100644 index 90bc1c0c68525734bd6793f07c15fe97d3c8342c..0000000000000000000000000000000000000000 --- a/spaces/georgefen/Face-Landmark-ControlNet/annotator/uniformer/mmseg/apis/inference.py +++ /dev/null @@ -1,136 +0,0 @@ -import matplotlib.pyplot as plt -import annotator.uniformer.mmcv as mmcv -import torch -from annotator.uniformer.mmcv.parallel import collate, scatter -from annotator.uniformer.mmcv.runner import load_checkpoint - -from annotator.uniformer.mmseg.datasets.pipelines import Compose -from annotator.uniformer.mmseg.models import build_segmentor - - -def init_segmentor(config, checkpoint=None, device='cuda:0'): - """Initialize a segmentor from config file. - - Args: - config (str or :obj:`mmcv.Config`): Config file path or the config - object. - checkpoint (str, optional): Checkpoint path. If left as None, the model - will not load any weights. - device (str, optional) CPU/CUDA device option. Default 'cuda:0'. - Use 'cpu' for loading model on CPU. - Returns: - nn.Module: The constructed segmentor. - """ - if isinstance(config, str): - config = mmcv.Config.fromfile(config) - elif not isinstance(config, mmcv.Config): - raise TypeError('config must be a filename or Config object, ' - 'but got {}'.format(type(config))) - config.model.pretrained = None - config.model.train_cfg = None - model = build_segmentor(config.model, test_cfg=config.get('test_cfg')) - if checkpoint is not None: - checkpoint = load_checkpoint(model, checkpoint, map_location='cpu') - model.CLASSES = checkpoint['meta']['CLASSES'] - model.PALETTE = checkpoint['meta']['PALETTE'] - model.cfg = config # save the config in the model for convenience - model.to(device) - model.eval() - return model - - -class LoadImage: - """A simple pipeline to load image.""" - - def __call__(self, results): - """Call function to load images into results. - - Args: - results (dict): A result dict contains the file name - of the image to be read. - - Returns: - dict: ``results`` will be returned containing loaded image. - """ - - if isinstance(results['img'], str): - results['filename'] = results['img'] - results['ori_filename'] = results['img'] - else: - results['filename'] = None - results['ori_filename'] = None - img = mmcv.imread(results['img']) - results['img'] = img - results['img_shape'] = img.shape - results['ori_shape'] = img.shape - return results - - -def inference_segmentor(model, img): - """Inference image(s) with the segmentor. - - Args: - model (nn.Module): The loaded segmentor. - imgs (str/ndarray or list[str/ndarray]): Either image files or loaded - images. - - Returns: - (list[Tensor]): The segmentation result. - """ - cfg = model.cfg - device = next(model.parameters()).device # model device - # build the data pipeline - test_pipeline = [LoadImage()] + cfg.data.test.pipeline[1:] - test_pipeline = Compose(test_pipeline) - # prepare data - data = dict(img=img) - data = test_pipeline(data) - data = collate([data], samples_per_gpu=1) - if next(model.parameters()).is_cuda: - # scatter to specified GPU - data = scatter(data, [device])[0] - else: - data['img_metas'] = [i.data[0] for i in data['img_metas']] - - # forward the model - with torch.no_grad(): - result = model(return_loss=False, rescale=True, **data) - return result - - -def show_result_pyplot(model, - img, - result, - palette=None, - fig_size=(15, 10), - opacity=0.5, - title='', - block=True): - """Visualize the segmentation results on the image. - - Args: - model (nn.Module): The loaded segmentor. - img (str or np.ndarray): Image filename or loaded image. - result (list): The segmentation result. - palette (list[list[int]]] | None): The palette of segmentation - map. If None is given, random palette will be generated. - Default: None - fig_size (tuple): Figure size of the pyplot figure. - opacity(float): Opacity of painted segmentation map. - Default 0.5. - Must be in (0, 1] range. - title (str): The title of pyplot figure. - Default is ''. - block (bool): Whether to block the pyplot figure. - Default is True. - """ - if hasattr(model, 'module'): - model = model.module - img = model.show_result( - img, result, palette=palette, show=False, opacity=opacity) - # plt.figure(figsize=fig_size) - # plt.imshow(mmcv.bgr2rgb(img)) - # plt.title(title) - # plt.tight_layout() - # plt.show(block=block) - return mmcv.bgr2rgb(img) diff --git a/spaces/giswqs/Streamlit/app.css b/spaces/giswqs/Streamlit/app.css deleted file mode 100644 index 605563c4fb794600d8948f1ca5f306147e9e0500..0000000000000000000000000000000000000000 --- a/spaces/giswqs/Streamlit/app.css +++ /dev/null @@ -1,4 +0,0 @@ -.flex -{ - overflow:auto; -} \ No newline at end of file diff --git "a/spaces/giswqs/Streamlit/pages/4_\360\237\224\245_Heatmap.py" "b/spaces/giswqs/Streamlit/pages/4_\360\237\224\245_Heatmap.py" deleted file mode 100644 index 94ce1f59689f1bcc7153767ed393f7adeac01337..0000000000000000000000000000000000000000 --- "a/spaces/giswqs/Streamlit/pages/4_\360\237\224\245_Heatmap.py" +++ /dev/null @@ -1,34 +0,0 @@ -import streamlit as st -import leafmap.foliumap as leafmap - -st.set_page_config(layout="wide") - -st.sidebar.info( - """ - - Web App URL: - - GitHub repository: - """ -) - -st.sidebar.title("Contact") -st.sidebar.info( - """ - Qiusheng Wu at [wetlands.io](https://wetlands.io) | [GitHub](https://github.com/giswqs) | [Twitter](https://twitter.com/giswqs) | [YouTube](https://www.youtube.com/c/QiushengWu) | [LinkedIn](https://www.linkedin.com/in/qiushengwu) - """ -) - -st.title("Heatmap") - -with st.expander("See source code"): - with st.echo(): - filepath = "https://raw.githubusercontent.com/giswqs/leafmap/master/examples/data/us_cities.csv" - m = leafmap.Map(center=[40, -100], zoom=4, tiles="stamentoner") - m.add_heatmap( - filepath, - latitude="latitude", - longitude="longitude", - value="pop_max", - name="Heat map", - radius=20, - ) -m.to_streamlit(height=700) diff --git a/spaces/glyszt/vt/vtoonify/model/raft/README.md b/spaces/glyszt/vt/vtoonify/model/raft/README.md deleted file mode 100644 index 650275ed7c4cda12822587c6a4358f057fffe494..0000000000000000000000000000000000000000 --- a/spaces/glyszt/vt/vtoonify/model/raft/README.md +++ /dev/null @@ -1,80 +0,0 @@ -# RAFT -This repository contains the source code for our paper: - -[RAFT: Recurrent All Pairs Field Transforms for Optical Flow](https://arxiv.org/pdf/2003.12039.pdf)
      -ECCV 2020
      -Zachary Teed and Jia Deng
      - - - -## Requirements -The code has been tested with PyTorch 1.6 and Cuda 10.1. -```Shell -conda create --name raft -conda activate raft -conda install pytorch=1.6.0 torchvision=0.7.0 cudatoolkit=10.1 matplotlib tensorboard scipy opencv -c pytorch -``` - -## Demos -Pretrained models can be downloaded by running -```Shell -./download_models.sh -``` -or downloaded from [google drive](https://drive.google.com/drive/folders/1sWDsfuZ3Up38EUQt7-JDTT1HcGHuJgvT?usp=sharing) - -You can demo a trained model on a sequence of frames -```Shell -python demo.py --model=models/raft-things.pth --path=demo-frames -``` - -## Required Data -To evaluate/train RAFT, you will need to download the required datasets. -* [FlyingChairs](https://lmb.informatik.uni-freiburg.de/resources/datasets/FlyingChairs.en.html#flyingchairs) -* [FlyingThings3D](https://lmb.informatik.uni-freiburg.de/resources/datasets/SceneFlowDatasets.en.html) -* [Sintel](http://sintel.is.tue.mpg.de/) -* [KITTI](http://www.cvlibs.net/datasets/kitti/eval_scene_flow.php?benchmark=flow) -* [HD1K](http://hci-benchmark.iwr.uni-heidelberg.de/) (optional) - - -By default `datasets.py` will search for the datasets in these locations. You can create symbolic links to wherever the datasets were downloaded in the `datasets` folder - -```Shell -├── datasets - ├── Sintel - ├── test - ├── training - ├── KITTI - ├── testing - ├── training - ├── devkit - ├── FlyingChairs_release - ├── data - ├── FlyingThings3D - ├── frames_cleanpass - ├── frames_finalpass - ├── optical_flow -``` - -## Evaluation -You can evaluate a trained model using `evaluate.py` -```Shell -python evaluate.py --model=models/raft-things.pth --dataset=sintel --mixed_precision -``` - -## Training -We used the following training schedule in our paper (2 GPUs). Training logs will be written to the `runs` which can be visualized using tensorboard -```Shell -./train_standard.sh -``` - -If you have a RTX GPU, training can be accelerated using mixed precision. You can expect similiar results in this setting (1 GPU) -```Shell -./train_mixed.sh -``` - -## (Optional) Efficent Implementation -You can optionally use our alternate (efficent) implementation by compiling the provided cuda extension -```Shell -cd alt_cuda_corr && python setup.py install && cd .. -``` -and running `demo.py` and `evaluate.py` with the `--alternate_corr` flag Note, this implementation is somewhat slower than all-pairs, but uses significantly less GPU memory during the forward pass. diff --git a/spaces/gotiQspiryo/whisper-ui/examples/Internet Download Manager (IDM) V6.12.10.3 Full Including Crack With Key [h33t][iahq76] 2021.md b/spaces/gotiQspiryo/whisper-ui/examples/Internet Download Manager (IDM) V6.12.10.3 Full Including Crack With Key [h33t][iahq76] 2021.md deleted file mode 100644 index 2ac2c52e1a88c153f823a9309a5588a8d4b3a046..0000000000000000000000000000000000000000 --- a/spaces/gotiQspiryo/whisper-ui/examples/Internet Download Manager (IDM) V6.12.10.3 Full Including Crack With Key [h33t][iahq76] 2021.md +++ /dev/null @@ -1,6 +0,0 @@ -

      Internet Download Manager (IDM) v6.12.10.3 Full Including Crack with Key [h33t][iahq76]


      Download ✺✺✺ https://urlgoal.com/2uyMbn



      -
      - aaccfb2cb3
      -
      -
      -

      diff --git a/spaces/gotiQspiryo/whisper-ui/examples/Malevolence Full Movie in Hindi Hd A Kidnapped Boys Nightmare.md b/spaces/gotiQspiryo/whisper-ui/examples/Malevolence Full Movie in Hindi Hd A Kidnapped Boys Nightmare.md deleted file mode 100644 index 312b40c44deb64e23f11ca1e2aad9d5f9c2aaa15..0000000000000000000000000000000000000000 --- a/spaces/gotiQspiryo/whisper-ui/examples/Malevolence Full Movie in Hindi Hd A Kidnapped Boys Nightmare.md +++ /dev/null @@ -1,5 +0,0 @@ - -

      Vegamovies.NL is the best online platform for downloading Hollywood and Bollywood Movies. We provide direct G-Drive download link for fast and secure downloading. Click on the download button below and follow the steps to start download.

      -

      Malevolence Full Movie Download In Hindi Hd


      DOWNLOADhttps://urlgoal.com/2uyMfH



      aaccfb2cb3
      -
      -
      \ No newline at end of file diff --git a/spaces/grisiemjahand/Image-and-3D-Model-Creator/PIFu/spaces.py b/spaces/grisiemjahand/Image-and-3D-Model-Creator/PIFu/spaces.py deleted file mode 100644 index 2c995f77157a5a7c070a360e7d7bb2a7fb9f9803..0000000000000000000000000000000000000000 --- a/spaces/grisiemjahand/Image-and-3D-Model-Creator/PIFu/spaces.py +++ /dev/null @@ -1,161 +0,0 @@ -import os -os.environ["CUDA_DEVICE_ORDER"]="PCI_BUS_ID" -os.environ["CUDA_VISIBLE_DEVICES"]="0" -try: - os.system("pip install --upgrade torch==1.11.0+cu113 torchvision==0.12.0+cu113 -f https://download.pytorch.org/whl/cu113/torch_stable.html") -except Exception as e: - print(e) - -from pydoc import describe -from huggingface_hub import hf_hub_download -import gradio as gr -import os -from datetime import datetime -from PIL import Image -import torch -import torchvision -from diffusers import StableDiffusionImg2ImgPipeline -import skimage -import paddlehub -import numpy as np -from lib.options import BaseOptions -from apps.crop_img import process_img -from apps.eval import Evaluator -from types import SimpleNamespace -import trimesh -import glob - -device = "cuda" if torch.cuda.is_available() else "cpu" -pipe = StableDiffusionImg2ImgPipeline.from_pretrained("stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16, revision="fp16", safety_checker=None) if torch.cuda.is_available() else StableDiffusionImg2ImgPipeline.from_pretrained("stabilityai/stable-diffusion-2-1", safety_checker=None) -pipe = pipe.to(device) - -print( - "torch: ", torch.__version__, - "\ntorchvision: ", torchvision.__version__, - "\nskimage:", skimage.__version__ -) - -print("EnV", os.environ) - -net_C = hf_hub_download("radames/PIFu-upright-standing", filename="net_C") -net_G = hf_hub_download("radames/PIFu-upright-standing", filename="net_G") - - -opt = BaseOptions() -opts = opt.parse_to_dict() -opts['batch_size'] = 1 -opts['mlp_dim'] = [257, 1024, 512, 256, 128, 1] -opts['mlp_dim_color'] = [513, 1024, 512, 256, 128, 3] -opts['num_stack'] = 4 -opts['num_hourglass'] = 2 -opts['resolution'] = 128 -opts['hg_down'] = 'ave_pool' -opts['norm'] = 'group' -opts['norm_color'] = 'group' -opts['load_netG_checkpoint_path'] = net_G -opts['load_netC_checkpoint_path'] = net_C -opts['results_path'] = "./results" -opts['name'] = "spaces_demo" -opts = SimpleNamespace(**opts) -print("Params", opts) -evaluator = Evaluator(opts) -bg_remover_model = paddlehub.Module(name="U2Net") - -def resize(value,img): - img = Image.open(img) - img = img.resize((value,value)) - return img - -def infer(source_img, prompt, negative_prompt, guide, steps, seed, Strength): - generator = torch.Generator(device).manual_seed(seed) - source_image = resize(768, source_img) - source_image.save('source.png') - image = pipe(prompt, negative_prompt=negative_prompt, image=source_image, strength=Strength, guidance_scale=guide, num_inference_steps=steps).images[0] - return image - -def process(img_path): - base = os.path.basename(img_path) - img_name = os.path.splitext(base)[0] - print("\n\n\nStarting Process", datetime.now()) - print("image name", img_name) - img_raw = Image.open(img_path).convert('RGB') - - img = img_raw.resize( - (512, int(512 * img_raw.size[1] / img_raw.size[0])), - Image.Resampling.LANCZOS) - - try: - # remove background - print("Removing Background") - masks = bg_remover_model.Segmentation( - images=[np.array(img)], - paths=None, - batch_size=1, - input_size=320, - output_dir='./PIFu/inputs', - visualization=False) - mask = masks[0]["mask"] - front = masks[0]["front"] - except Exception as e: - print(e) - - print("Aliging mask with input training image") - print("Not aligned", front.shape, mask.shape) - img_new, msk_new = process_img(front, mask) - print("Aligned", img_new.shape, msk_new.shape) - - try: - time = datetime.now() - data = evaluator.load_image_from_memory(img_new, msk_new, img_name) - print("Evaluating via PIFu", time) - evaluator.eval(data, True) - print("Success Evaluating via PIFu", datetime.now() - time) - result_path = f'./{opts.results_path}/{opts.name}/result_{img_name}' - except Exception as e: - print("Error evaluating via PIFu", e) - - try: - mesh = trimesh.load(result_path + '.obj') - # flip mesh - mesh.apply_transform([[-1, 0, 0, 0], - [0, 1, 0, 0], - [0, 0, -1, 0], - [0, 0, 0, 1]]) - mesh.export(file_obj=result_path + '.glb') - result_gltf = result_path + '.glb' - return [result_gltf, result_gltf] - - except Exception as e: - print("error generating MESH", e) - - -examples = sorted(glob.glob('examples/*.png')) - -iface1 = gr.Interface(fn=infer, inputs=[gr.Image(source="upload", type="filepath", label="Raw Image. Must Be .png"), gr.Textbox(label = 'Prompt Input Text. 77 Token (Keyword or Symbol) Maximum'), gr.Textbox(label='What you Do Not want the AI to generate.'), - gr.Slider(2, 15, value = 7, label = 'Guidance Scale'), - gr.Slider(1, 25, value = 10, step = 1, label = 'Number of Iterations'), - gr.Slider(label = "Seed", minimum = 0, maximum = 987654321987654321, step = 1, randomize = True), - gr.Slider(label='Strength', minimum = 0, maximum = 1, step = .05, value = .5)], - outputs='image') - -iface2 = gr.Interface( - fn=process, - inputs=gr.Image(type="filepath", label="Input Image"), - outputs=[ - gr.Model3D( - clear_color=[0.0, 0.0, 0.0, 0.0], label="3D Model"), - gr.File(label="Download 3D Model") - ], - examples=examples, - allow_flagging="never", - cache_examples=True -) - -demo = gr.TabbedInterface([iface1, iface2], ["Image-Edit-with-Text", "Image-to-3D-Model"]) - -if __name__ == "__main__": - demo.launch() - -# if __name__ == "__main__": -# iface1.launch(debug=True, enable_queue=False) -# iface2.launch(debug=True, enable_queue=False) \ No newline at end of file diff --git a/spaces/gwang-kim/DATID-3D/pose_estimation/nvdiffrast/build/lib/nvdiffrast/common/common.cpp b/spaces/gwang-kim/DATID-3D/pose_estimation/nvdiffrast/build/lib/nvdiffrast/common/common.cpp deleted file mode 100644 index e566c035bdef66e9b75265a58fb8602b0fa530ca..0000000000000000000000000000000000000000 --- a/spaces/gwang-kim/DATID-3D/pose_estimation/nvdiffrast/build/lib/nvdiffrast/common/common.cpp +++ /dev/null @@ -1,60 +0,0 @@ -// Copyright (c) 2020, NVIDIA CORPORATION. All rights reserved. -// -// NVIDIA CORPORATION and its licensors retain all intellectual property -// and proprietary rights in and to this software, related documentation -// and any modifications thereto. Any use, reproduction, disclosure or -// distribution of this software and related documentation without an express -// license agreement from NVIDIA CORPORATION is strictly prohibited. - -#include - -//------------------------------------------------------------------------ -// Block and grid size calculators for kernel launches. - -dim3 getLaunchBlockSize(int maxWidth, int maxHeight, int width, int height) -{ - int maxThreads = maxWidth * maxHeight; - if (maxThreads <= 1 || (width * height) <= 1) - return dim3(1, 1, 1); // Degenerate. - - // Start from max size. - int bw = maxWidth; - int bh = maxHeight; - - // Optimizations for weirdly sized buffers. - if (width < bw) - { - // Decrease block width to smallest power of two that covers the buffer width. - while ((bw >> 1) >= width) - bw >>= 1; - - // Maximize height. - bh = maxThreads / bw; - if (bh > height) - bh = height; - } - else if (height < bh) - { - // Halve height and double width until fits completely inside buffer vertically. - while (bh > height) - { - bh >>= 1; - if (bw < width) - bw <<= 1; - } - } - - // Done. - return dim3(bw, bh, 1); -} - -dim3 getLaunchGridSize(dim3 blockSize, int width, int height, int depth) -{ - dim3 gridSize; - gridSize.x = (width - 1) / blockSize.x + 1; - gridSize.y = (height - 1) / blockSize.y + 1; - gridSize.z = (depth - 1) / blockSize.z + 1; - return gridSize; -} - -//------------------------------------------------------------------------ diff --git a/spaces/gwang-kim/DATID-3D/pose_estimation/nvdiffrast/build/lib/nvdiffrast/torch/torch_antialias.cpp b/spaces/gwang-kim/DATID-3D/pose_estimation/nvdiffrast/build/lib/nvdiffrast/torch/torch_antialias.cpp deleted file mode 100644 index a926adc7dc68eb30811de6a3571a0a545c7b2a20..0000000000000000000000000000000000000000 --- a/spaces/gwang-kim/DATID-3D/pose_estimation/nvdiffrast/build/lib/nvdiffrast/torch/torch_antialias.cpp +++ /dev/null @@ -1,239 +0,0 @@ -// Copyright (c) 2020, NVIDIA CORPORATION. All rights reserved. -// -// NVIDIA CORPORATION and its licensors retain all intellectual property -// and proprietary rights in and to this software, related documentation -// and any modifications thereto. Any use, reproduction, disclosure or -// distribution of this software and related documentation without an express -// license agreement from NVIDIA CORPORATION is strictly prohibited. - -#include "torch_common.inl" -#include "torch_types.h" -#include "../common/common.h" -#include "../common/antialias.h" - -//------------------------------------------------------------------------ -// Kernel prototypes. - -void AntialiasFwdMeshKernel (const AntialiasKernelParams p); -void AntialiasFwdDiscontinuityKernel(const AntialiasKernelParams p); -void AntialiasFwdAnalysisKernel (const AntialiasKernelParams p); -void AntialiasGradKernel (const AntialiasKernelParams p); - -//------------------------------------------------------------------------ -// Topology hash construction. - -TopologyHashWrapper antialias_construct_topology_hash(torch::Tensor tri) -{ - const at::cuda::OptionalCUDAGuard device_guard(device_of(tri)); - cudaStream_t stream = at::cuda::getCurrentCUDAStream(); - AntialiasKernelParams p = {}; // Initialize all fields to zero. - - // Check inputs. - NVDR_CHECK_DEVICE(tri); - NVDR_CHECK_CONTIGUOUS(tri); - NVDR_CHECK_I32(tri); - NVDR_CHECK(tri.sizes().size() == 2 && tri.size(0) > 0 && tri.size(1) == 3, "tri must have shape [>0, 3]"); - - // Fill in kernel parameters. - p.numTriangles = tri.size(0); - p.numVertices = 0x7fffffff; // Let's not require vertex positions just to enable an error check. - p.tri = tri.data_ptr(); - - // Kernel parameters. - p.allocTriangles = p.allocTriangles < 64 ? 64 : p.allocTriangles; - while (p.allocTriangles < p.numTriangles) - p.allocTriangles <<= 1; // Must be power of two. - - // Construct the hash tensor and get pointer. - torch::TensorOptions opts = torch::TensorOptions().dtype(torch::kInt32).device(torch::kCUDA); - torch::Tensor ev_hash = torch::zeros({p.allocTriangles * AA_HASH_ELEMENTS_PER_TRIANGLE * 4}, opts); - p.evHash = (uint4*)(ev_hash.data_ptr()); - - // Check alignment. - NVDR_CHECK(!((uintptr_t)p.evHash & 15), "ev_hash internal tensor not aligned to int4"); - - // Populate the hash. - void* args[] = {&p}; - NVDR_CHECK_CUDA_ERROR(cudaLaunchKernel((void*)AntialiasFwdMeshKernel, (p.numTriangles - 1) / AA_MESH_KERNEL_THREADS_PER_BLOCK + 1, AA_MESH_KERNEL_THREADS_PER_BLOCK, args, 0, stream)); - - // Return. - TopologyHashWrapper hash_wrap; - hash_wrap.ev_hash = ev_hash; - return hash_wrap; -} - -//------------------------------------------------------------------------ -// Forward op. - -std::tuple antialias_fwd(torch::Tensor color, torch::Tensor rast, torch::Tensor pos, torch::Tensor tri, TopologyHashWrapper topology_hash_wrap) -{ - const at::cuda::OptionalCUDAGuard device_guard(device_of(color)); - cudaStream_t stream = at::cuda::getCurrentCUDAStream(); - AntialiasKernelParams p = {}; // Initialize all fields to zero. - p.instance_mode = (pos.sizes().size() > 2) ? 1 : 0; - torch::Tensor& topology_hash = topology_hash_wrap.ev_hash; // Unwrap. - - // Check inputs. - NVDR_CHECK_DEVICE(color, rast, pos, tri, topology_hash); - NVDR_CHECK_CONTIGUOUS(color, rast, pos, tri, topology_hash); - NVDR_CHECK_F32(color, rast, pos); - NVDR_CHECK_I32(tri, topology_hash); - - // Sanity checks. - NVDR_CHECK(color.sizes().size() == 4 && color.size(0) > 0 && color.size(1) > 0 && color.size(2) > 0 && color.size(3) > 0, "color must have shape[>0, >0, >0, >0]"); - NVDR_CHECK(rast.sizes().size() == 4 && rast.size(0) > 0 && rast.size(1) > 0 && rast.size(2) > 0 && rast.size(3) == 4, "rast must have shape[>0, >0, >0, 4]"); - NVDR_CHECK(tri.sizes().size() == 2 && tri.size(0) > 0 && tri.size(1) == 3, "tri must have shape [>0, 3]"); - NVDR_CHECK(color.size(1) == rast.size(1) && color.size(2) == rast.size(2), "color and rast inputs must have same spatial dimensions"); - if (p.instance_mode) - { - NVDR_CHECK(pos.sizes().size() == 3 && pos.size(0) > 0 && pos.size(1) > 0 && pos.size(2) == 4, "pos must have shape [>0, >0, 4] or [>0, 4]"); - NVDR_CHECK(rast.size(0) == color.size(0) && pos.size(0) == color.size(0), "minibatch size mismatch between inputs color, rast, pos"); - } - else - { - NVDR_CHECK(pos.sizes().size() == 2 && pos.size(0) > 0 && pos.size(1) == 4, "pos must have shape [>0, >0, 4] or [>0, 4]"); - NVDR_CHECK(rast.size(0) == color.size(0), "minibatch size mismatch between inputs color, rast"); - } - - // Extract input dimensions. - p.numVertices = pos.size(p.instance_mode ? 1 : 0); - p.numTriangles = tri.size(0); - p.n = color.size(0); - p.height = color.size(1); - p.width = color.size(2); - p.channels = color.size(3); - - // Get input pointers. - p.color = color.data_ptr(); - p.rasterOut = rast.data_ptr(); - p.tri = tri.data_ptr(); - p.pos = pos.data_ptr(); - p.evHash = (uint4*)(topology_hash.data_ptr()); - - // Misc parameters. - p.xh = .5f * (float)p.width; - p.yh = .5f * (float)p.height; - p.allocTriangles = topology_hash.size(0) / (4 * AA_HASH_ELEMENTS_PER_TRIANGLE); - - // Allocate output tensors. - torch::Tensor out = color.detach().clone(); // Use color as base. - torch::TensorOptions opts = torch::TensorOptions().dtype(torch::kFloat32).device(torch::kCUDA); - torch::Tensor work_buffer = torch::empty({p.n * p.width * p.height * 8 + 4}, opts); // 8 int for a maximum of two work items per pixel. - p.output = out.data_ptr(); - p.workBuffer = (int4*)(work_buffer.data_ptr()); - - // Clear the work counters. - NVDR_CHECK_CUDA_ERROR(cudaMemsetAsync(p.workBuffer, 0, sizeof(int4), stream)); - - // Verify that buffers are aligned to allow float2/float4 operations. - NVDR_CHECK(!((uintptr_t)p.pos & 15), "pos input tensor not aligned to float4"); - NVDR_CHECK(!((uintptr_t)p.rasterOut & 7), "raster_out input tensor not aligned to float2"); - NVDR_CHECK(!((uintptr_t)p.workBuffer & 15), "work_buffer internal tensor not aligned to int4"); - NVDR_CHECK(!((uintptr_t)p.evHash & 15), "topology_hash internal tensor not aligned to int4"); - - // Choose launch parameters for the discontinuity finder kernel and launch. - void* args[] = {&p}; - dim3 blockSize(AA_DISCONTINUITY_KERNEL_BLOCK_WIDTH, AA_DISCONTINUITY_KERNEL_BLOCK_HEIGHT, 1); - dim3 gridSize = getLaunchGridSize(blockSize, p.width, p.height, p.n); - NVDR_CHECK_CUDA_ERROR(cudaLaunchKernel((void*)AntialiasFwdDiscontinuityKernel, gridSize, blockSize, args, 0, stream)); - - // Determine optimum block size for the persistent analysis kernel and launch. - int device = 0; - int numCTA = 0; - int numSM = 0; - NVDR_CHECK_CUDA_ERROR(cudaGetDevice(&device)); - NVDR_CHECK_CUDA_ERROR(cudaOccupancyMaxActiveBlocksPerMultiprocessor(&numCTA, (void*)AntialiasFwdAnalysisKernel, AA_ANALYSIS_KERNEL_THREADS_PER_BLOCK, 0)); - NVDR_CHECK_CUDA_ERROR(cudaDeviceGetAttribute(&numSM, cudaDevAttrMultiProcessorCount, device)); - NVDR_CHECK_CUDA_ERROR(cudaLaunchKernel((void*)AntialiasFwdAnalysisKernel, numCTA * numSM, AA_ANALYSIS_KERNEL_THREADS_PER_BLOCK, args, 0, stream)); - - // Return results. - return std::tuple(out, work_buffer); -} - -//------------------------------------------------------------------------ -// Gradient op. - -std::tuple antialias_grad(torch::Tensor color, torch::Tensor rast, torch::Tensor pos, torch::Tensor tri, torch::Tensor dy, torch::Tensor work_buffer) -{ - const at::cuda::OptionalCUDAGuard device_guard(device_of(color)); - cudaStream_t stream = at::cuda::getCurrentCUDAStream(); - AntialiasKernelParams p = {}; // Initialize all fields to zero. - p.instance_mode = (pos.sizes().size() > 2) ? 1 : 0; - - // Check inputs. - NVDR_CHECK_DEVICE(color, rast, pos, tri, dy, work_buffer); - NVDR_CHECK_CONTIGUOUS(color, rast, pos, tri, work_buffer); - NVDR_CHECK_F32(color, rast, pos, dy, work_buffer); - NVDR_CHECK_I32(tri); - - // Sanity checks. - NVDR_CHECK(dy.sizes().size() == 4 && dy.size(0) > 0 && dy.size(1) > 0 && dy.size(2) > 0 && dy.size(3) > 0, "dy must have shape[>0, >0, >0, >0]"); - NVDR_CHECK(color.sizes().size() == 4 && color.size(0) > 0 && color.size(1) > 0 && color.size(2) > 0 && color.size(3) > 0, "color must have shape[>0, >0, >0, >0]"); - NVDR_CHECK(rast.sizes().size() == 4 && rast.size(0) > 0 && rast.size(1) > 0 && rast.size(2) > 0 && rast.size(3) == 4, "raster_out must have shape[>0, >0, >0, 4]"); - NVDR_CHECK(tri.sizes().size() == 2 && tri.size(0) > 0 && tri.size(1) == 3, "tri must have shape [>0, 3]"); - NVDR_CHECK(color.size(1) == rast.size(1) && color.size(2) == rast.size(2), "color and raster_out inputs must have same spatial dimensions"); - NVDR_CHECK(color.size(1) == dy.size(1) && color.size(2) == dy.size(2) && color.size(3) == dy.size(3), "color and dy inputs must have same dimensions"); - if (p.instance_mode) - { - NVDR_CHECK(pos.sizes().size() == 3 && pos.size(0) > 0 && pos.size(1) > 0 && pos.size(2) == 4, "pos must have shape [>0, >0, 4] or [>0, 4]"); - NVDR_CHECK(rast.size(0) == color.size(0) && pos.size(0) == color.size(0), "minibatch size mismatch between inputs color, raster_out, pos"); - NVDR_CHECK(dy.size(0) == color.size(0) && rast.size(0) == color.size(0) && pos.size(0) ==color.size(0), "minibatch size mismatch between inputs dy, color, raster_out, pos"); - } - else - { - NVDR_CHECK(pos.sizes().size() == 2 && pos.size(0) > 0 && pos.size(1) == 4, "pos must have shape [>0, >0, 4] or [>0, 4]"); - NVDR_CHECK(rast.size(0) == color.size(0), "minibatch size mismatch between inputs color, raster_out"); - NVDR_CHECK(dy.size(0) == color.size(0) && rast.size(0) == color.size(0), "minibatch size mismatch between inputs dy, color, raster_out"); - } - - // Extract input dimensions. - p.numVertices = pos.size(p.instance_mode ? 1 : 0); - p.numTriangles = tri.size(0); - p.n = color.size(0); - p.height = color.size(1); - p.width = color.size(2); - p.channels = color.size(3); - - // Ensure dy is contiguous. - torch::Tensor dy_ = dy.contiguous(); - - // Get input pointers. - p.color = color.data_ptr(); - p.rasterOut = rast.data_ptr(); - p.tri = tri.data_ptr(); - p.pos = pos.data_ptr(); - p.dy = dy_.data_ptr(); - p.workBuffer = (int4*)(work_buffer.data_ptr()); - - // Misc parameters. - p.xh = .5f * (float)p.width; - p.yh = .5f * (float)p.height; - - // Allocate output tensors. - torch::Tensor grad_color = dy_.detach().clone(); // Use dy as base. - torch::Tensor grad_pos = torch::zeros_like(pos); - p.gradColor = grad_color.data_ptr(); - p.gradPos = grad_pos.data_ptr(); - - // Clear gradient kernel work counter. - NVDR_CHECK_CUDA_ERROR(cudaMemsetAsync(&p.workBuffer[0].y, 0, sizeof(int), stream)); - - // Verify that buffers are aligned to allow float2/float4 operations. - NVDR_CHECK(!((uintptr_t)p.pos & 15), "pos input tensor not aligned to float4"); - NVDR_CHECK(!((uintptr_t)p.workBuffer & 15), "work_buffer internal tensor not aligned to int4"); - - // Determine optimum block size for the gradient kernel and launch. - void* args[] = {&p}; - int device = 0; - int numCTA = 0; - int numSM = 0; - NVDR_CHECK_CUDA_ERROR(cudaGetDevice(&device)); - NVDR_CHECK_CUDA_ERROR(cudaOccupancyMaxActiveBlocksPerMultiprocessor(&numCTA, (void*)AntialiasGradKernel, AA_GRAD_KERNEL_THREADS_PER_BLOCK, 0)); - NVDR_CHECK_CUDA_ERROR(cudaDeviceGetAttribute(&numSM, cudaDevAttrMultiProcessorCount, device)); - NVDR_CHECK_CUDA_ERROR(cudaLaunchKernel((void*)AntialiasGradKernel, numCTA * numSM, AA_GRAD_KERNEL_THREADS_PER_BLOCK, args, 0, stream)); - - // Return results. - return std::tuple(grad_color, grad_pos); -} - -//------------------------------------------------------------------------ diff --git a/spaces/gyugnsu/DragGan-Inversion/stylegan_human/dnnlib/tflib/tfutil.py b/spaces/gyugnsu/DragGan-Inversion/stylegan_human/dnnlib/tflib/tfutil.py deleted file mode 100644 index 396525e184d6d4a6c935244b7677e8ba84144ea0..0000000000000000000000000000000000000000 --- a/spaces/gyugnsu/DragGan-Inversion/stylegan_human/dnnlib/tflib/tfutil.py +++ /dev/null @@ -1,267 +0,0 @@ -# Copyright (c) SenseTime Research. All rights reserved. - -# Copyright (c) 2019, NVIDIA Corporation. All rights reserved. -# -# This work is made available under the Nvidia Source Code License-NC. -# To view a copy of this license, visit -# https://nvlabs.github.io/stylegan2/license.html - -"""Miscellaneous helper utils for Tensorflow.""" - -from typing import Any, Iterable, List, Union -import tensorflow.contrib # requires TensorFlow 1.x! -import os -import numpy as np -import tensorflow as tf - -# Silence deprecation warnings from TensorFlow 1.13 onwards -import logging -logging.getLogger('tensorflow').setLevel(logging.ERROR) -tf.contrib = tensorflow.contrib - - -TfExpression = Union[tf.Tensor, tf.Variable, tf.Operation] -"""A type that represents a valid Tensorflow expression.""" - -TfExpressionEx = Union[TfExpression, int, float, np.ndarray] -"""A type that can be converted to a valid Tensorflow expression.""" - - -def run(*args, **kwargs) -> Any: - """Run the specified ops in the default session.""" - assert_tf_initialized() - return tf.get_default_session().run(*args, **kwargs) - - -def is_tf_expression(x: Any) -> bool: - """Check whether the input is a valid Tensorflow expression, i.e., Tensorflow Tensor, Variable, or Operation.""" - return isinstance(x, (tf.Tensor, tf.Variable, tf.Operation)) - - -def shape_to_list(shape: Iterable[tf.Dimension]) -> List[Union[int, None]]: - """Convert a Tensorflow shape to a list of ints. Retained for backwards compatibility -- use TensorShape.as_list() in new code.""" - return [dim.value for dim in shape] - - -def flatten(x: TfExpressionEx) -> TfExpression: - """Shortcut function for flattening a tensor.""" - with tf.name_scope("Flatten"): - return tf.reshape(x, [-1]) - - -def log2(x: TfExpressionEx) -> TfExpression: - """Logarithm in base 2.""" - with tf.name_scope("Log2"): - return tf.log(x) * np.float32(1.0 / np.log(2.0)) - - -def exp2(x: TfExpressionEx) -> TfExpression: - """Exponent in base 2.""" - with tf.name_scope("Exp2"): - return tf.exp(x * np.float32(np.log(2.0))) - - -def lerp(a: TfExpressionEx, b: TfExpressionEx, t: TfExpressionEx) -> TfExpressionEx: - """Linear interpolation.""" - with tf.name_scope("Lerp"): - return a + (b - a) * t - - -def lerp_clip(a: TfExpressionEx, b: TfExpressionEx, t: TfExpressionEx) -> TfExpression: - """Linear interpolation with clip.""" - with tf.name_scope("LerpClip"): - return a + (b - a) * tf.clip_by_value(t, 0.0, 1.0) - - -def absolute_name_scope(scope: str) -> tf.name_scope: - """Forcefully enter the specified name scope, ignoring any surrounding scopes.""" - return tf.name_scope(scope + "/") - - -def absolute_variable_scope(scope: str, **kwargs) -> tf.variable_scope: - """Forcefully enter the specified variable scope, ignoring any surrounding scopes.""" - return tf.variable_scope(tf.VariableScope(name=scope, **kwargs), auxiliary_name_scope=False) - - -def _sanitize_tf_config(config_dict: dict = None) -> dict: - # Defaults. - cfg = dict() - # Random seed for NumPy. None = keep as is. - cfg["rnd.np_random_seed"] = None - # Random seed for TensorFlow. 'auto' = derive from NumPy random state. None = keep as is. - cfg["rnd.tf_random_seed"] = "auto" - # 0 = Print all available debug info from TensorFlow. 1 = Print warnings and errors, but disable debug info. - cfg["env.TF_CPP_MIN_LOG_LEVEL"] = "1" - # False = Check that all ops are available on the designated device. True = Skip the check for ops that are not used. - cfg["graph_options.place_pruned_graph"] = True - # False = Allocate all GPU memory at the beginning. True = Allocate only as much GPU memory as needed. - cfg["gpu_options.allow_growth"] = True - - # Remove defaults for environment variables that are already set. - for key in list(cfg): - fields = key.split(".") - if fields[0] == "env": - assert len(fields) == 2 - if fields[1] in os.environ: - del cfg[key] - - # User overrides. - if config_dict is not None: - cfg.update(config_dict) - return cfg - - -def init_tf(config_dict: dict = None) -> None: - """Initialize TensorFlow session using good default settings.""" - # Skip if already initialized. - if tf.get_default_session() is not None: - return - - # Setup config dict and random seeds. - cfg = _sanitize_tf_config(config_dict) - np_random_seed = cfg["rnd.np_random_seed"] - if np_random_seed is not None: - np.random.seed(np_random_seed) - tf_random_seed = cfg["rnd.tf_random_seed"] - if tf_random_seed == "auto": - tf_random_seed = np.random.randint(1 << 31) - if tf_random_seed is not None: - tf.set_random_seed(tf_random_seed) - - # Setup environment variables. - for key, value in cfg.items(): - fields = key.split(".") - if fields[0] == "env": - assert len(fields) == 2 - os.environ[fields[1]] = str(value) - - # Create default TensorFlow session. - create_session(cfg, force_as_default=True) - - -def assert_tf_initialized(): - """Check that TensorFlow session has been initialized.""" - if tf.get_default_session() is None: - raise RuntimeError( - "No default TensorFlow session found. Please call dnnlib.tflib.init_tf().") - - -def create_session(config_dict: dict = None, force_as_default: bool = False) -> tf.Session: - """Create tf.Session based on config dict.""" - # Setup TensorFlow config proto. - cfg = _sanitize_tf_config(config_dict) - config_proto = tf.ConfigProto() - for key, value in cfg.items(): - fields = key.split(".") - if fields[0] not in ["rnd", "env"]: - obj = config_proto - for field in fields[:-1]: - obj = getattr(obj, field) - setattr(obj, fields[-1], value) - - # Create session. - session = tf.Session(config=config_proto) - if force_as_default: - # pylint: disable=protected-access - session._default_session = session.as_default() - session._default_session.enforce_nesting = False - session._default_session.__enter__() - return session - - -def init_uninitialized_vars(target_vars: List[tf.Variable] = None) -> None: - """Initialize all tf.Variables that have not already been initialized. - - Equivalent to the following, but more efficient and does not bloat the tf graph: - tf.variables_initializer(tf.report_uninitialized_variables()).run() - """ - assert_tf_initialized() - if target_vars is None: - target_vars = tf.global_variables() - - test_vars = [] - test_ops = [] - - # ignore surrounding control_dependencies - with tf.control_dependencies(None): - for var in target_vars: - assert is_tf_expression(var) - - try: - tf.get_default_graph().get_tensor_by_name( - var.name.replace(":0", "/IsVariableInitialized:0")) - except KeyError: - # Op does not exist => variable may be uninitialized. - test_vars.append(var) - - with absolute_name_scope(var.name.split(":")[0]): - test_ops.append(tf.is_variable_initialized(var)) - - init_vars = [var for var, inited in zip( - test_vars, run(test_ops)) if not inited] - run([var.initializer for var in init_vars]) - - -def set_vars(var_to_value_dict: dict) -> None: - """Set the values of given tf.Variables. - - Equivalent to the following, but more efficient and does not bloat the tf graph: - tflib.run([tf.assign(var, value) for var, value in var_to_value_dict.items()] - """ - assert_tf_initialized() - ops = [] - feed_dict = {} - - for var, value in var_to_value_dict.items(): - assert is_tf_expression(var) - - try: - setter = tf.get_default_graph().get_tensor_by_name( - var.name.replace(":0", "/setter:0")) # look for existing op - except KeyError: - with absolute_name_scope(var.name.split(":")[0]): - # ignore surrounding control_dependencies - with tf.control_dependencies(None): - setter = tf.assign(var, tf.placeholder( - var.dtype, var.shape, "new_value"), name="setter") # create new setter - - ops.append(setter) - feed_dict[setter.op.inputs[1]] = value - - run(ops, feed_dict) - - -def create_var_with_large_initial_value(initial_value: np.ndarray, *args, **kwargs): - """Create tf.Variable with large initial value without bloating the tf graph.""" - assert_tf_initialized() - assert isinstance(initial_value, np.ndarray) - zeros = tf.zeros(initial_value.shape, initial_value.dtype) - var = tf.Variable(zeros, *args, **kwargs) - set_vars({var: initial_value}) - return var - - -def convert_images_from_uint8(images, drange=[-1, 1], nhwc_to_nchw=False): - """Convert a minibatch of images from uint8 to float32 with configurable dynamic range. - Can be used as an input transformation for Network.run(). - """ - images = tf.cast(images, tf.float32) - if nhwc_to_nchw: - images = tf.transpose(images, [0, 3, 1, 2]) - return images * ((drange[1] - drange[0]) / 255) + drange[0] - - -def convert_images_to_uint8(images, drange=[-1, 1], nchw_to_nhwc=False, shrink=1): - """Convert a minibatch of images from float32 to uint8 with configurable dynamic range. - Can be used as an output transformation for Network.run(). - """ - images = tf.cast(images, tf.float32) - if shrink > 1: - ksize = [1, 1, shrink, shrink] - images = tf.nn.avg_pool( - images, ksize=ksize, strides=ksize, padding="VALID", data_format="NCHW") - if nchw_to_nhwc: - images = tf.transpose(images, [0, 2, 3, 1]) - scale = 255 / (drange[1] - drange[0]) - images = images * scale + (0.5 - drange[0] * scale) - return tf.saturate_cast(images, tf.uint8) diff --git a/spaces/gyugnsu/DragGan-Inversion/stylegan_human/torch_utils/misc.py b/spaces/gyugnsu/DragGan-Inversion/stylegan_human/torch_utils/misc.py deleted file mode 100644 index 5470dcfc5e59e6bc4484ca3075cd09a708e43467..0000000000000000000000000000000000000000 --- a/spaces/gyugnsu/DragGan-Inversion/stylegan_human/torch_utils/misc.py +++ /dev/null @@ -1,294 +0,0 @@ -# Copyright (c) SenseTime Research. All rights reserved. - -# Copyright (c) 2021, NVIDIA CORPORATION. All rights reserved. -# -# NVIDIA CORPORATION and its licensors retain all intellectual property -# and proprietary rights in and to this software, related documentation -# and any modifications thereto. Any use, reproduction, disclosure or -# distribution of this software and related documentation without an express -# license agreement from NVIDIA CORPORATION is strictly prohibited. - -import re -import contextlib -import numpy as np -import torch -import warnings -import dnnlib - -# ---------------------------------------------------------------------------- -# Cached construction of constant tensors. Avoids CPU=>GPU copy when the -# same constant is used multiple times. - -_constant_cache = dict() - - -def constant(value, shape=None, dtype=None, device=None, memory_format=None): - value = np.asarray(value) - if shape is not None: - shape = tuple(shape) - if dtype is None: - dtype = torch.get_default_dtype() - if device is None: - device = torch.device('cpu') - if memory_format is None: - memory_format = torch.contiguous_format - - key = (value.shape, value.dtype, value.tobytes(), - shape, dtype, device, memory_format) - tensor = _constant_cache.get(key, None) - if tensor is None: - tensor = torch.as_tensor(value.copy(), dtype=dtype, device=device) - if shape is not None: - tensor, _ = torch.broadcast_tensors(tensor, torch.empty(shape)) - tensor = tensor.contiguous(memory_format=memory_format) - _constant_cache[key] = tensor - return tensor - -# ---------------------------------------------------------------------------- -# Replace NaN/Inf with specified numerical values. - - -try: - nan_to_num = torch.nan_to_num # 1.8.0a0 -except AttributeError: - def nan_to_num(input, nan=0.0, posinf=None, neginf=None, *, out=None): # pylint: disable=redefined-builtin - assert isinstance(input, torch.Tensor) - if posinf is None: - posinf = torch.finfo(input.dtype).max - if neginf is None: - neginf = torch.finfo(input.dtype).min - assert nan == 0 - return torch.clamp(input.unsqueeze(0).nansum(0), min=neginf, max=posinf, out=out) - -# ---------------------------------------------------------------------------- -# Symbolic assert. - -try: - symbolic_assert = torch._assert # 1.8.0a0 # pylint: disable=protected-access -except AttributeError: - symbolic_assert = torch.Assert # 1.7.0 - -# ---------------------------------------------------------------------------- -# Context manager to suppress known warnings in torch.jit.trace(). - - -class suppress_tracer_warnings(warnings.catch_warnings): - def __enter__(self): - super().__enter__() - warnings.simplefilter('ignore', category=torch.jit.TracerWarning) - return self - -# ---------------------------------------------------------------------------- -# Assert that the shape of a tensor matches the given list of integers. -# None indicates that the size of a dimension is allowed to vary. -# Performs symbolic assertion when used in torch.jit.trace(). - - -def assert_shape(tensor, ref_shape): - if tensor.ndim != len(ref_shape): - raise AssertionError( - f'Wrong number of dimensions: got {tensor.ndim}, expected {len(ref_shape)}') - for idx, (size, ref_size) in enumerate(zip(tensor.shape, ref_shape)): - if ref_size is None: - pass - elif isinstance(ref_size, torch.Tensor): - with suppress_tracer_warnings(): # as_tensor results are registered as constants - symbolic_assert(torch.equal(torch.as_tensor( - size), ref_size), f'Wrong size for dimension {idx}') - elif isinstance(size, torch.Tensor): - with suppress_tracer_warnings(): # as_tensor results are registered as constants - symbolic_assert(torch.equal(size, torch.as_tensor( - ref_size)), f'Wrong size for dimension {idx}: expected {ref_size}') - elif size != ref_size: - raise AssertionError( - f'Wrong size for dimension {idx}: got {size}, expected {ref_size}') - -# ---------------------------------------------------------------------------- -# Function decorator that calls torch.autograd.profiler.record_function(). - - -def profiled_function(fn): - def decorator(*args, **kwargs): - with torch.autograd.profiler.record_function(fn.__name__): - return fn(*args, **kwargs) - decorator.__name__ = fn.__name__ - return decorator - -# ---------------------------------------------------------------------------- -# Sampler for torch.utils.data.DataLoader that loops over the dataset -# indefinitely, shuffling items as it goes. - - -class InfiniteSampler(torch.utils.data.Sampler): - def __init__(self, dataset, rank=0, num_replicas=1, shuffle=True, seed=0, window_size=0.5): - assert len(dataset) > 0 - assert num_replicas > 0 - assert 0 <= rank < num_replicas - assert 0 <= window_size <= 1 - super().__init__(dataset) - self.dataset = dataset - self.rank = rank - self.num_replicas = num_replicas - self.shuffle = shuffle - self.seed = seed - self.window_size = window_size - - def __iter__(self): - order = np.arange(len(self.dataset)) - rnd = None - window = 0 - if self.shuffle: - rnd = np.random.RandomState(self.seed) - rnd.shuffle(order) - window = int(np.rint(order.size * self.window_size)) - - idx = 0 - while True: - i = idx % order.size - if idx % self.num_replicas == self.rank: - yield order[i] - if window >= 2: - j = (i - rnd.randint(window)) % order.size - order[i], order[j] = order[j], order[i] - idx += 1 - -# ---------------------------------------------------------------------------- -# Utilities for operating with torch.nn.Module parameters and buffers. - - -def params_and_buffers(module): - assert isinstance(module, torch.nn.Module) - return list(module.parameters()) + list(module.buffers()) - - -def named_params_and_buffers(module): - assert isinstance(module, torch.nn.Module) - return list(module.named_parameters()) + list(module.named_buffers()) - - -def copy_params_and_buffers(src_module, dst_module, require_all=False): - assert isinstance(src_module, torch.nn.Module) - assert isinstance(dst_module, torch.nn.Module) - src_tensors = {name: tensor for name, - tensor in named_params_and_buffers(src_module)} - for name, tensor in named_params_and_buffers(dst_module): - assert (name in src_tensors) or (not require_all) - if name in src_tensors: - tensor.copy_(src_tensors[name].detach()).requires_grad_( - tensor.requires_grad) - -# ---------------------------------------------------------------------------- -# Context manager for easily enabling/disabling DistributedDataParallel -# synchronization. - - -@contextlib.contextmanager -def ddp_sync(module, sync): - assert isinstance(module, torch.nn.Module) - if sync or not isinstance(module, torch.nn.parallel.DistributedDataParallel): - yield - else: - with module.no_sync(): - yield - -# ---------------------------------------------------------------------------- -# Check DistributedDataParallel consistency across processes. - - -def check_ddp_consistency(module, ignore_regex=None): - assert isinstance(module, torch.nn.Module) - for name, tensor in named_params_and_buffers(module): - fullname = type(module).__name__ + '.' + name - if ignore_regex is not None and re.fullmatch(ignore_regex, fullname): - continue - tensor = tensor.detach() - other = tensor.clone() - torch.distributed.broadcast(tensor=other, src=0) - assert (nan_to_num(tensor) == nan_to_num(other)).all(), fullname - -# ---------------------------------------------------------------------------- -# Print summary table of module hierarchy. - - -def print_module_summary(module, inputs, max_nesting=3, skip_redundant=True): - assert isinstance(module, torch.nn.Module) - assert not isinstance(module, torch.jit.ScriptModule) - assert isinstance(inputs, (tuple, list)) - - # Register hooks. - entries = [] - nesting = [0] - - def pre_hook(_mod, _inputs): - nesting[0] += 1 - - def post_hook(mod, _inputs, outputs): - nesting[0] -= 1 - if nesting[0] <= max_nesting: - outputs = list(outputs) if isinstance( - outputs, (tuple, list)) else [outputs] - outputs = [t for t in outputs if isinstance(t, torch.Tensor)] - entries.append(dnnlib.EasyDict(mod=mod, outputs=outputs)) - hooks = [mod.register_forward_pre_hook( - pre_hook) for mod in module.modules()] - hooks += [mod.register_forward_hook(post_hook) for mod in module.modules()] - - # Run module. - outputs = module(*inputs) - for hook in hooks: - hook.remove() - - # Identify unique outputs, parameters, and buffers. - tensors_seen = set() - for e in entries: - e.unique_params = [ - t for t in e.mod.parameters() if id(t) not in tensors_seen] - e.unique_buffers = [ - t for t in e.mod.buffers() if id(t) not in tensors_seen] - e.unique_outputs = [t for t in e.outputs if id(t) not in tensors_seen] - tensors_seen |= {id(t) for t in e.unique_params + - e.unique_buffers + e.unique_outputs} - - # Filter out redundant entries. - if skip_redundant: - entries = [e for e in entries if len(e.unique_params) or len( - e.unique_buffers) or len(e.unique_outputs)] - - # Construct table. - rows = [[type(module).__name__, 'Parameters', - 'Buffers', 'Output shape', 'Datatype']] - rows += [['---'] * len(rows[0])] - param_total = 0 - buffer_total = 0 - submodule_names = {mod: name for name, mod in module.named_modules()} - for e in entries: - name = '' if e.mod is module else submodule_names[e.mod] - param_size = sum(t.numel() for t in e.unique_params) - buffer_size = sum(t.numel() for t in e.unique_buffers) - output_shapes = [str(list(e.outputs[0].shape)) for t in e.outputs] - output_dtypes = [str(t.dtype).split('.')[-1] for t in e.outputs] - rows += [[ - name + (':0' if len(e.outputs) >= 2 else ''), - str(param_size) if param_size else '-', - str(buffer_size) if buffer_size else '-', - (output_shapes + ['-'])[0], - (output_dtypes + ['-'])[0], - ]] - for idx in range(1, len(e.outputs)): - rows += [[name + f':{idx}', '-', '-', - output_shapes[idx], output_dtypes[idx]]] - param_total += param_size - buffer_total += buffer_size - rows += [['---'] * len(rows[0])] - rows += [['Total', str(param_total), str(buffer_total), '-', '-']] - - # Print table. - widths = [max(len(cell) for cell in column) for column in zip(*rows)] - print() - for row in rows: - print(' '.join(cell + ' ' * (width - len(cell)) - for cell, width in zip(row, widths))) - print() - return outputs - -# ---------------------------------------------------------------------------- diff --git a/spaces/hackathon-somos-nlp-2023/demo_DiagTrast/__init__.py b/spaces/hackathon-somos-nlp-2023/demo_DiagTrast/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/hamelcubsfan/AutoGPT/autogpt/json_utils/json_fix_llm.py b/spaces/hamelcubsfan/AutoGPT/autogpt/json_utils/json_fix_llm.py deleted file mode 100644 index 869aed125cfb8cd7a69ed02eeb389cc72a3e296b..0000000000000000000000000000000000000000 --- a/spaces/hamelcubsfan/AutoGPT/autogpt/json_utils/json_fix_llm.py +++ /dev/null @@ -1,220 +0,0 @@ -"""This module contains functions to fix JSON strings generated by LLM models, such as ChatGPT, using the assistance -of the ChatGPT API or LLM models.""" -from __future__ import annotations - -import contextlib -import json -from typing import Any, Dict - -from colorama import Fore -from regex import regex - -from autogpt.config import Config -from autogpt.json_utils.json_fix_general import correct_json -from autogpt.llm_utils import call_ai_function -from autogpt.logs import logger -from autogpt.speech import say_text - -JSON_SCHEMA = """ -{ - "command": { - "name": "command name", - "args": { - "arg name": "value" - } - }, - "thoughts": - { - "text": "thought", - "reasoning": "reasoning", - "plan": "- short bulleted\n- list that conveys\n- long-term plan", - "criticism": "constructive self-criticism", - "speak": "thoughts summary to say to user" - } -} -""" - -CFG = Config() - - -def auto_fix_json(json_string: str, schema: str) -> str: - """Fix the given JSON string to make it parseable and fully compliant with - the provided schema using GPT-3. - - Args: - json_string (str): The JSON string to fix. - schema (str): The schema to use to fix the JSON. - Returns: - str: The fixed JSON string. - """ - # Try to fix the JSON using GPT: - function_string = "def fix_json(json_string: str, schema:str=None) -> str:" - args = [f"'''{json_string}'''", f"'''{schema}'''"] - description_string = ( - "This function takes a JSON string and ensures that it" - " is parseable and fully compliant with the provided schema. If an object" - " or field specified in the schema isn't contained within the correct JSON," - " it is omitted. The function also escapes any double quotes within JSON" - " string values to ensure that they are valid. If the JSON string contains" - " any None or NaN values, they are replaced with null before being parsed." - ) - - # If it doesn't already start with a "`", add one: - if not json_string.startswith("`"): - json_string = "```json\n" + json_string + "\n```" - result_string = call_ai_function( - function_string, args, description_string, model=CFG.fast_llm_model - ) - logger.debug("------------ JSON FIX ATTEMPT ---------------") - logger.debug(f"Original JSON: {json_string}") - logger.debug("-----------") - logger.debug(f"Fixed JSON: {result_string}") - logger.debug("----------- END OF FIX ATTEMPT ----------------") - - try: - json.loads(result_string) # just check the validity - return result_string - except json.JSONDecodeError: # noqa: E722 - # Get the call stack: - # import traceback - # call_stack = traceback.format_exc() - # print(f"Failed to fix JSON: '{json_string}' "+call_stack) - return "failed" - - -def fix_json_using_multiple_techniques(assistant_reply: str) -> Dict[Any, Any]: - """Fix the given JSON string to make it parseable and fully compliant with two techniques. - - Args: - json_string (str): The JSON string to fix. - - Returns: - str: The fixed JSON string. - """ - - # Parse and print Assistant response - assistant_reply_json = fix_and_parse_json(assistant_reply) - if assistant_reply_json == {}: - assistant_reply_json = attempt_to_fix_json_by_finding_outermost_brackets( - assistant_reply - ) - - if assistant_reply_json != {}: - return assistant_reply_json - - logger.error( - "Error: The following AI output couldn't be converted to a JSON:\n", - assistant_reply, - ) - if CFG.speak_mode: - say_text("I have received an invalid JSON response from the OpenAI API.") - - return {} - - -def fix_and_parse_json( - json_to_load: str, try_to_fix_with_gpt: bool = True -) -> Dict[Any, Any]: - """Fix and parse JSON string - - Args: - json_to_load (str): The JSON string. - try_to_fix_with_gpt (bool, optional): Try to fix the JSON with GPT. - Defaults to True. - - Returns: - str or dict[Any, Any]: The parsed JSON. - """ - - with contextlib.suppress(json.JSONDecodeError): - json_to_load = json_to_load.replace("\t", "") - return json.loads(json_to_load) - - with contextlib.suppress(json.JSONDecodeError): - json_to_load = correct_json(json_to_load) - return json.loads(json_to_load) - # Let's do something manually: - # sometimes GPT responds with something BEFORE the braces: - # "I'm sorry, I don't understand. Please try again." - # {"text": "I'm sorry, I don't understand. Please try again.", - # "confidence": 0.0} - # So let's try to find the first brace and then parse the rest - # of the string - try: - brace_index = json_to_load.index("{") - maybe_fixed_json = json_to_load[brace_index:] - last_brace_index = maybe_fixed_json.rindex("}") - maybe_fixed_json = maybe_fixed_json[: last_brace_index + 1] - return json.loads(maybe_fixed_json) - except (json.JSONDecodeError, ValueError) as e: - return try_ai_fix(try_to_fix_with_gpt, e, json_to_load) - - -def try_ai_fix( - try_to_fix_with_gpt: bool, exception: Exception, json_to_load: str -) -> Dict[Any, Any]: - """Try to fix the JSON with the AI - - Args: - try_to_fix_with_gpt (bool): Whether to try to fix the JSON with the AI. - exception (Exception): The exception that was raised. - json_to_load (str): The JSON string to load. - - Raises: - exception: If try_to_fix_with_gpt is False. - - Returns: - str or dict[Any, Any]: The JSON string or dictionary. - """ - if not try_to_fix_with_gpt: - raise exception - if CFG.debug_mode: - logger.warn( - "Warning: Failed to parse AI output, attempting to fix." - "\n If you see this warning frequently, it's likely that" - " your prompt is confusing the AI. Try changing it up" - " slightly." - ) - # Now try to fix this up using the ai_functions - ai_fixed_json = auto_fix_json(json_to_load, JSON_SCHEMA) - - if ai_fixed_json != "failed": - return json.loads(ai_fixed_json) - # This allows the AI to react to the error message, - # which usually results in it correcting its ways. - # logger.error("Failed to fix AI output, telling the AI.") - return {} - - -def attempt_to_fix_json_by_finding_outermost_brackets(json_string: str): - if CFG.speak_mode and CFG.debug_mode: - say_text( - "I have received an invalid JSON response from the OpenAI API. " - "Trying to fix it now." - ) - logger.error("Attempting to fix JSON by finding outermost brackets\n") - - try: - json_pattern = regex.compile(r"\{(?:[^{}]|(?R))*\}") - json_match = json_pattern.search(json_string) - - if json_match: - # Extract the valid JSON object from the string - json_string = json_match.group(0) - logger.typewriter_log( - title="Apparently json was fixed.", title_color=Fore.GREEN - ) - if CFG.speak_mode and CFG.debug_mode: - say_text("Apparently json was fixed.") - else: - return {} - - except (json.JSONDecodeError, ValueError): - if CFG.debug_mode: - logger.error(f"Error: Invalid JSON: {json_string}\n") - if CFG.speak_mode: - say_text("Didn't work. I will have to ignore this response then.") - logger.error("Error: Invalid JSON, setting it to empty JSON now.\n") - json_string = {} - - return fix_and_parse_json(json_string) diff --git a/spaces/hamelcubsfan/AutoGPT/ui/utils.py b/spaces/hamelcubsfan/AutoGPT/ui/utils.py deleted file mode 100644 index 71703e2009afac0582300f5d99a91ddec4119e04..0000000000000000000000000000000000000000 --- a/spaces/hamelcubsfan/AutoGPT/ui/utils.py +++ /dev/null @@ -1,31 +0,0 @@ -import os -import re - -def format_directory(directory): - output = [] - def helper(directory, level, output): - files = os.listdir(directory) - for i, item in enumerate(files): - is_folder = os.path.isdir(os.path.join(directory, item)) - joiner = "├── " if i < len(files) - 1 else "└── " - item_html = item + "/" if is_folder else f"{item}" - output.append("│ " * level + joiner + item_html) - if is_folder: - helper(os.path.join(directory, item), level + 1, output) - output.append(os.path.basename(directory) + "/") - helper(directory, 1, output) - return "\n".join(output) - -DOWNLOAD_OUTPUTS_JS = """ -() => { - const a = document.createElement('a'); - a.href = 'file=outputs.zip'; - a.download = 'outputs.zip'; - document.body.appendChild(a); - a.click(); - document.body.removeChild(a); -}""" - -def remove_color(text): - ansi_escape = re.compile(r'\x1B(?:[@-Z\\-_]|\[[0-?]*[ -/]*[@-~])') - return ansi_escape.sub('', text) \ No newline at end of file diff --git a/spaces/hands012/gpt-academic/request_llm/bridge_jittorllms_llama.py b/spaces/hands012/gpt-academic/request_llm/bridge_jittorllms_llama.py deleted file mode 100644 index 6dfac681aeaa11a780304b9e645637cabd677688..0000000000000000000000000000000000000000 --- a/spaces/hands012/gpt-academic/request_llm/bridge_jittorllms_llama.py +++ /dev/null @@ -1,178 +0,0 @@ - -from transformers import AutoModel, AutoTokenizer -import time -import threading -import importlib -from toolbox import update_ui, get_conf -from multiprocessing import Process, Pipe - -load_message = "jittorllms尚未加载,加载需要一段时间。注意,请避免混用多种jittor模型,否则可能导致显存溢出而造成卡顿,取决于`config.py`的配置,jittorllms消耗大量的内存(CPU)或显存(GPU),也许会导致低配计算机卡死 ……" - -################################################################################# -class GetGLMHandle(Process): - def __init__(self): - super().__init__(daemon=True) - self.parent, self.child = Pipe() - self.jittorllms_model = None - self.info = "" - self.local_history = [] - self.success = True - self.check_dependency() - self.start() - self.threadLock = threading.Lock() - - def check_dependency(self): - try: - import pandas - self.info = "依赖检测通过" - self.success = True - except: - from toolbox import trimmed_format_exc - self.info = r"缺少jittorllms的依赖,如果要使用jittorllms,除了基础的pip依赖以外,您还需要运行`pip install -r request_llm/requirements_jittorllms.txt -i https://pypi.jittor.org/simple -I`"+\ - r"和`git clone https://gitlink.org.cn/jittor/JittorLLMs.git --depth 1 request_llm/jittorllms`两个指令来安装jittorllms的依赖(在项目根目录运行这两个指令)。" +\ - r"警告:安装jittorllms依赖后将完全破坏现有的pytorch环境,建议使用docker环境!" + trimmed_format_exc() - self.success = False - - def ready(self): - return self.jittorllms_model is not None - - def run(self): - # 子进程执行 - # 第一次运行,加载参数 - def validate_path(): - import os, sys - dir_name = os.path.dirname(__file__) - env = os.environ.get("PATH", "") - os.environ["PATH"] = env.replace('/cuda/bin', '/x/bin') - root_dir_assume = os.path.abspath(os.path.dirname(__file__) + '/..') - os.chdir(root_dir_assume + '/request_llm/jittorllms') - sys.path.append(root_dir_assume + '/request_llm/jittorllms') - validate_path() # validate path so you can run from base directory - - def load_model(): - import types - try: - if self.jittorllms_model is None: - device, = get_conf('LOCAL_MODEL_DEVICE') - from .jittorllms.models import get_model - # availabel_models = ["chatglm", "pangualpha", "llama", "chatrwkv"] - args_dict = {'model': 'llama'} - print('self.jittorllms_model = get_model(types.SimpleNamespace(**args_dict))') - self.jittorllms_model = get_model(types.SimpleNamespace(**args_dict)) - print('done get model') - except: - self.child.send('[Local Message] Call jittorllms fail 不能正常加载jittorllms的参数。') - raise RuntimeError("不能正常加载jittorllms的参数!") - print('load_model') - load_model() - - # 进入任务等待状态 - print('进入任务等待状态') - while True: - # 进入任务等待状态 - kwargs = self.child.recv() - query = kwargs['query'] - history = kwargs['history'] - # 是否重置 - if len(self.local_history) > 0 and len(history)==0: - print('触发重置') - self.jittorllms_model.reset() - self.local_history.append(query) - - print('收到消息,开始请求') - try: - for response in self.jittorllms_model.stream_chat(query, history): - print(response) - self.child.send(response) - except: - from toolbox import trimmed_format_exc - print(trimmed_format_exc()) - self.child.send('[Local Message] Call jittorllms fail.') - # 请求处理结束,开始下一个循环 - self.child.send('[Finish]') - - def stream_chat(self, **kwargs): - # 主进程执行 - self.threadLock.acquire() - self.parent.send(kwargs) - while True: - res = self.parent.recv() - if res != '[Finish]': - yield res - else: - break - self.threadLock.release() - -global llama_glm_handle -llama_glm_handle = None -################################################################################# -def predict_no_ui_long_connection(inputs, llm_kwargs, history=[], sys_prompt="", observe_window=[], console_slience=False): - """ - 多线程方法 - 函数的说明请见 request_llm/bridge_all.py - """ - global llama_glm_handle - if llama_glm_handle is None: - llama_glm_handle = GetGLMHandle() - if len(observe_window) >= 1: observe_window[0] = load_message + "\n\n" + llama_glm_handle.info - if not llama_glm_handle.success: - error = llama_glm_handle.info - llama_glm_handle = None - raise RuntimeError(error) - - # jittorllms 没有 sys_prompt 接口,因此把prompt加入 history - history_feedin = [] - for i in range(len(history)//2): - history_feedin.append([history[2*i], history[2*i+1]] ) - - watch_dog_patience = 5 # 看门狗 (watchdog) 的耐心, 设置5秒即可 - response = "" - for response in llama_glm_handle.stream_chat(query=inputs, history=history_feedin, system_prompt=sys_prompt, max_length=llm_kwargs['max_length'], top_p=llm_kwargs['top_p'], temperature=llm_kwargs['temperature']): - print(response) - if len(observe_window) >= 1: observe_window[0] = response - if len(observe_window) >= 2: - if (time.time()-observe_window[1]) > watch_dog_patience: - raise RuntimeError("程序终止。") - return response - - - -def predict(inputs, llm_kwargs, plugin_kwargs, chatbot, history=[], system_prompt='', stream = True, additional_fn=None): - """ - 单线程方法 - 函数的说明请见 request_llm/bridge_all.py - """ - chatbot.append((inputs, "")) - - global llama_glm_handle - if llama_glm_handle is None: - llama_glm_handle = GetGLMHandle() - chatbot[-1] = (inputs, load_message + "\n\n" + llama_glm_handle.info) - yield from update_ui(chatbot=chatbot, history=[]) - if not llama_glm_handle.success: - llama_glm_handle = None - return - - if additional_fn is not None: - import core_functional - importlib.reload(core_functional) # 热更新prompt - core_functional = core_functional.get_core_functions() - if "PreProcess" in core_functional[additional_fn]: inputs = core_functional[additional_fn]["PreProcess"](inputs) # 获取预处理函数(如果有的话) - inputs = core_functional[additional_fn]["Prefix"] + inputs + core_functional[additional_fn]["Suffix"] - - # 处理历史信息 - history_feedin = [] - for i in range(len(history)//2): - history_feedin.append([history[2*i], history[2*i+1]] ) - - # 开始接收jittorllms的回复 - response = "[Local Message]: 等待jittorllms响应中 ..." - for response in llama_glm_handle.stream_chat(query=inputs, history=history_feedin, system_prompt=system_prompt, max_length=llm_kwargs['max_length'], top_p=llm_kwargs['top_p'], temperature=llm_kwargs['temperature']): - chatbot[-1] = (inputs, response) - yield from update_ui(chatbot=chatbot, history=history) - - # 总结输出 - if response == "[Local Message]: 等待jittorllms响应中 ...": - response = "[Local Message]: jittorllms响应异常 ..." - history.extend([inputs, response]) - yield from update_ui(chatbot=chatbot, history=history) diff --git a/spaces/hanzportgas/rvc-models/app.py b/spaces/hanzportgas/rvc-models/app.py deleted file mode 100644 index c26444c1a96bba4b1142cc6fabcdbdc0f651544b..0000000000000000000000000000000000000000 --- a/spaces/hanzportgas/rvc-models/app.py +++ /dev/null @@ -1,186 +0,0 @@ -#ONLY FOR PERSONAL USE -import os -import json -import argparse -import traceback -import logging -import gradio as gr -import numpy as np -import librosa -import torch -import asyncio -import edge_tts -from datetime import datetime -from fairseq import checkpoint_utils -from infer_pack.models import SynthesizerTrnMs256NSFsid, SynthesizerTrnMs256NSFsid_nono -from vc_infer_pipeline import VC -from config import ( - is_half, - device -) -logging.getLogger("numba").setLevel(logging.WARNING) -limitation = os.getenv("SYSTEM") == "spaces" # limit audio length in huggingface spaces - -def create_vc_fn(tgt_sr, net_g, vc, if_f0, file_index, file_big_npy): - def vc_fn( - input_audio, - f0_up_key, - f0_method, - index_rate, - tts_mode, - tts_text, - tts_voice - ): - try: - if tts_mode: - if len(tts_text) > 100 and limitation: - return "Text is too long", None - if tts_text is None or tts_voice is None: - return "You need to enter text and select a voice", None - asyncio.run(edge_tts.Communicate(tts_text, "-".join(tts_voice.split('-')[:-1])).save("tts.mp3")) - audio, sr = librosa.load("tts.mp3", sr=16000, mono=True) - else: - if args.files: - audio, sr = librosa.load(input_audio, sr=16000, mono=True) - else: - if input_audio is None: - return "You need to upload an audio", None - sampling_rate, audio = input_audio - duration = audio.shape[0] / sampling_rate - if duration > 20 and limitation: - return "Please upload an audio file that is less than 20 seconds. If you need to generate a longer audio file, please use Colab.", None - audio = (audio / np.iinfo(audio.dtype).max).astype(np.float32) - if len(audio.shape) > 1: - audio = librosa.to_mono(audio.transpose(1, 0)) - if sampling_rate != 16000: - audio = librosa.resample(audio, orig_sr=sampling_rate, target_sr=16000) - times = [0, 0, 0] - f0_up_key = int(f0_up_key) - audio_opt = vc.pipeline( - hubert_model, - net_g, - 0, - audio, - times, - f0_up_key, - f0_method, - file_index, - file_big_npy, - index_rate, - if_f0, - ) - print( - f"[{datetime.now().strftime('%Y-%m-%d %H:%M')}]: npy: {times[0]}, f0: {times[1]}s, infer: {times[2]}s" - ) - return "Success", (tgt_sr, audio_opt) - except: - info = traceback.format_exc() - print(info) - return info, (None, None) - return vc_fn - -def load_hubert(): - global hubert_model - models, _, _ = checkpoint_utils.load_model_ensemble_and_task( - ["hubert_base.pt"], - suffix="", - ) - hubert_model = models[0] - hubert_model = hubert_model.to(device) - if is_half: - hubert_model = hubert_model.half() - else: - hubert_model = hubert_model.float() - hubert_model.eval() - -def change_to_tts_mode(tts_mode): - if tts_mode: - return gr.Audio.update(visible=False), gr.Textbox.update(visible=True), gr.Dropdown.update(visible=True) - else: - return gr.Audio.update(visible=True), gr.Textbox.update(visible=False), gr.Dropdown.update(visible=False) - -if __name__ == '__main__': - parser = argparse.ArgumentParser() - parser.add_argument('--api', action="store_true", default=False) - parser.add_argument("--share", action="store_true", default=False, help="share gradio app") - parser.add_argument("--files", action="store_true", default=False, help="load audio from path") - args, unknown = parser.parse_known_args() - load_hubert() - models = [] - tts_voice_list = asyncio.get_event_loop().run_until_complete(edge_tts.list_voices()) - voices = [f"{v['ShortName']}-{v['Gender']}" for v in tts_voice_list] - with open("weights/model_info.json", "r", encoding="utf-8") as f: - models_info = json.load(f) - for name, info in models_info.items(): - if not info['enable']: - continue - title = info['title'] - author = info.get("author", None) - cover = f"weights/{name}/{info['cover']}" - index = f"weights/{name}/{info['feature_retrieval_library']}" - npy = f"weights/{name}/{info['feature_file']}" - cpt = torch.load(f"weights/{name}/{name}.pth", map_location="cpu") - tgt_sr = cpt["config"][-1] - cpt["config"][-3] = cpt["weight"]["emb_g.weight"].shape[0] # n_spk - if_f0 = cpt.get("f0", 1) - if if_f0 == 1: - net_g = SynthesizerTrnMs256NSFsid(*cpt["config"], is_half=is_half) - else: - net_g = SynthesizerTrnMs256NSFsid_nono(*cpt["config"]) - del net_g.enc_q - print(net_g.load_state_dict(cpt["weight"], strict=False)) # 不加这一行清不干净, 真奇葩 - net_g.eval().to(device) - if is_half: - net_g = net_g.half() - else: - net_g = net_g.float() - vc = VC(tgt_sr, device, is_half) - models.append((name, title, author, cover, create_vc_fn(tgt_sr, net_g, vc, if_f0, index, npy))) - with gr.Blocks() as app: - gr.Markdown( - "#
      RVC Models\n" - "##
      The input audio should be clean and pure voice without background music.\n" - "[![Original Repo](https://badgen.net/badge/icon/github?icon=github&label=Original%20Repo)](https://github.com/RVC-Project/Retrieval-based-Voice-Conversion-WebUI)" - - ) - with gr.Tabs(): - for (name, title, author, cover, vc_fn) in models: - with gr.TabItem(name): - with gr.Row(): - gr.Markdown( - '
      ' - f'
      {title}
      \n'+ - (f'
      Model author: {author}
      ' if author else "")+ - (f'' if cover else "")+ - '
      ' - ) - with gr.Row(): - with gr.Column(): - if args.files: - vc_input = gr.Textbox(label="Input audio path") - else: - vc_input = gr.Audio(label="Input audio"+' (less than 20 seconds)' if limitation else '') - vc_transpose = gr.Number(label="Transpose", value=0) - vc_f0method = gr.Radio( - label="Pitch extraction algorithm, PM is fast but Harvest is better for low frequencies", - choices=["pm", "harvest"], - value="pm", - interactive=True, - ) - vc_index_ratio = gr.Slider( - minimum=0, - maximum=1, - label="Retrieval feature ratio", - value=0.6, - interactive=True, - ) - tts_mode = gr.Checkbox(label="tts (use edge-tts as input)", value=False) - tts_text = gr.Textbox(visible=False,label="TTS text (100 words limitation)" if limitation else "TTS text") - tts_voice = gr.Dropdown(label="Edge-tts speaker", choices=voices, visible=False, allow_custom_value=False, value="en-US-AnaNeural-Female") - vc_submit = gr.Button("Generate", variant="primary") - with gr.Column(): - vc_output1 = gr.Textbox(label="Output Message") - vc_output2 = gr.Audio(label="Output Audio") - vc_submit.click(vc_fn, [vc_input, vc_transpose, vc_f0method, vc_index_ratio, tts_mode, tts_text, tts_voice], [vc_output1, vc_output2]) - tts_mode.change(change_to_tts_mode, [tts_mode], [vc_input, tts_text, tts_voice]) - app.queue(concurrency_count=1, max_size=20, api_open=args.api).launch(share=args.share) \ No newline at end of file diff --git a/spaces/haoqi7/research/lrt/academic_query/academic.py b/spaces/haoqi7/research/lrt/academic_query/academic.py deleted file mode 100644 index c126066d72bc4184cb898b3fc071b6f4916b1ebf..0000000000000000000000000000000000000000 --- a/spaces/haoqi7/research/lrt/academic_query/academic.py +++ /dev/null @@ -1,35 +0,0 @@ -from requests_toolkit import ArxivQuery,IEEEQuery,PaperWithCodeQuery -from typing import List - -class AcademicQuery: - @classmethod - def arxiv(cls, - query: str, - max_results: int = 50 - ) -> List[dict]: - ret = ArxivQuery.query(query,'',0,max_results) - if not isinstance(ret,list): - return [ret] - return ret - - @classmethod - def ieee(cls, - query: str, - start_year: int, - end_year: int, - num_papers: int = 200 - ) -> List[dict]: - IEEEQuery.__setup_api_key__('vpd9yy325enruv27zj2d353e') - ret = IEEEQuery.query(query,start_year,end_year,num_papers) - if not isinstance(ret,list): - return [ret] - return ret - - @classmethod - def paper_with_code(cls, - query: str, - items_per_page = 50) ->List[dict]: - ret = PaperWithCodeQuery.query(query, 1,items_per_page) - if not isinstance(ret, list): - return [ret] - return ret \ No newline at end of file diff --git a/spaces/hasibzunair/fifa-tryon-demo/Self-Correction-Human-Parsing-for-ACGPN/mhp_extension/detectron2/projects/TensorMask/README.md b/spaces/hasibzunair/fifa-tryon-demo/Self-Correction-Human-Parsing-for-ACGPN/mhp_extension/detectron2/projects/TensorMask/README.md deleted file mode 100644 index 6831508b9aea37f0e88bec62c98f2bf2b64240ab..0000000000000000000000000000000000000000 --- a/spaces/hasibzunair/fifa-tryon-demo/Self-Correction-Human-Parsing-for-ACGPN/mhp_extension/detectron2/projects/TensorMask/README.md +++ /dev/null @@ -1,64 +0,0 @@ - -# TensorMask in Detectron2 -**A Foundation for Dense Object Segmentation** - -Xinlei Chen, Ross Girshick, Kaiming He, Piotr Dollár - -[[`arXiv`](https://arxiv.org/abs/1903.12174)] [[`BibTeX`](#CitingTensorMask)] - -
      - -
      - -In this repository, we release code for TensorMask in Detectron2. -TensorMask is a dense sliding-window instance segmentation framework that, for the first time, achieves results close to the well-developed Mask R-CNN framework -- both qualitatively and quantitatively. It establishes a conceptually complementary direction for object instance segmentation research. - -## Installation -First install Detectron2 following the [documentation](https://detectron2.readthedocs.io/tutorials/install.html) and -[setup the dataset](../../datasets). Then compile the TensorMask-specific op (`swap_align2nat`): -```bash -cd /path/to/detectron2/projects/TensorMask -python setup.py build develop -``` - -## Training - -To train a model, run: -```bash -python /path/to/detectron2/projects/TensorMask/train_net.py --config-file -``` - -For example, to launch TensorMask BiPyramid training (1x schedule) with ResNet-50 backbone on 8 GPUs, -one should execute: -```bash -python /path/to/detectron2/projects/TensorMask/train_net.py --config-file configs/tensormask_R_50_FPN_1x.yaml --num-gpus 8 -``` - -## Evaluation - -Model evaluation can be done similarly (6x schedule with scale augmentation): -```bash -python /path/to/detectron2/projects/TensorMask/train_net.py --config-file configs/tensormask_R_50_FPN_6x.yaml --eval-only MODEL.WEIGHTS /path/to/model_checkpoint -``` - -# Pretrained Models - -| Backbone | lr sched | AP box | AP mask | download | -| -------- | -------- | -- | --- | -------- | -| R50 | 1x | 37.6 | 32.4 | model \|  metrics | -| R50 | 6x | 41.4 | 35.8 | model \|  metrics | - - -## Citing TensorMask - -If you use TensorMask, please use the following BibTeX entry. - -``` -@InProceedings{chen2019tensormask, - title={Tensormask: A Foundation for Dense Object Segmentation}, - author={Chen, Xinlei and Girshick, Ross and He, Kaiming and Doll{\'a}r, Piotr}, - journal={The International Conference on Computer Vision (ICCV)}, - year={2019} -} -``` - diff --git a/spaces/hectorjelly/SoccerTwos-Challenge-Analytics-Extra/app.py b/spaces/hectorjelly/SoccerTwos-Challenge-Analytics-Extra/app.py deleted file mode 100644 index 19ee1915831368681a1233bab648a4d7f88b8d99..0000000000000000000000000000000000000000 --- a/spaces/hectorjelly/SoccerTwos-Challenge-Analytics-Extra/app.py +++ /dev/null @@ -1,141 +0,0 @@ -import pandas as pd -import streamlit as st - -# Set page title and favicon -st.set_page_config(page_icon=":soccer:",layout="wide") - - -st.markdown( - """ - - """, - unsafe_allow_html=True -) - -# Set title and create a new tab for league history -st.title("⚽ SoccerTwos Challenge Analytics Extra!⚽ ") -tab_team, tab_owners = st.tabs(["Form Table", "Games by Author",]) - -# Match Results -MATCH_RESULTS_URL = "https://huggingface.co/datasets/huggingface-projects/bot-fight-data/raw/main/soccer_history.csv" - - -@st.cache(ttl=1800) -def fetch_match_history(): - """ - Fetch the match results from the last 24 hours. - Cache the result for 30min to avoid unnecessary requests. - Return a DataFrame. - """ - df = pd.read_csv(MATCH_RESULTS_URL) - df["timestamp"] = pd.to_datetime(df.timestamp, unit="s") - df = df[df["timestamp"] >= pd.Timestamp.now() - pd.Timedelta(hours=24)] - df.columns = ["home", "away", "timestamp", "result"] - return df - - -match_df = fetch_match_history() - -# Define a function to calculate the total number of matches played -def num_matches_played(): - return match_df.shape[0] - -# Get a list of all teams that have played in the last 24 hours -teams = sorted( - list(pd.concat([match_df["home"], match_df["away"]]).unique()), key=str.casefold -) - -# Create the form table, which shows the win percentage for each team -# st.header("Form Table") -team_results = {} -for i, row in match_df.iterrows(): - home_team = row["home"] - away_team = row["away"] - result = row["result"] - - if home_team not in team_results: - team_results[home_team] = [0, 0, 0] - - if away_team not in team_results: - team_results[away_team] = [0, 0, 0] - - if result == 0: - team_results[home_team][2] += 1 - team_results[away_team][0] += 1 - elif result == 1: - team_results[home_team][0] += 1 - team_results[away_team][2] += 1 - else: - team_results[home_team][1] += 1 - team_results[away_team][1] += 1 - -# Create a DataFrame from the results dictionary and calculate the win percentage -df = pd.DataFrame.from_dict( - team_results, orient="index", columns=["wins", "draws", "losses"] -).sort_index() -df[["owner", "team"]] = df.index.to_series().str.split("/", expand=True) -df = df[["owner", "team", "wins", "draws", "losses"]] -df["win_pct"] = (df["wins"] / (df["wins"] + df["draws"] + df["losses"])) * 100 - - -# Get a list of all teams that have played in the last 24 hours - - -@st.cache(ttl=1800) -def fetch_owners(): - """ - Fetch a list of all owners who have played in the matches, along with the number of teams they own - and the number of unique teams they played with. - """ - # Extract the owner name and team name from each home and away team name in the DataFrame - team_owners = match_df["home"].apply(lambda x: x.split('/')[0]).tolist() + match_df['away'].apply(lambda x: x.split('/')[0]).tolist() - teams = match_df["home"].apply(lambda x: x.split('/')[1]).tolist() + match_df['away'].apply(lambda x: x.split('/')[1]).tolist() - - # Count the number of games played by each owner and the number of unique teams they played with - owner_team_counts = {} - owner_team_set = {} - for i, team_owner in enumerate(team_owners): - owner = team_owner.split(' ')[0] - if owner not in owner_team_counts: - owner_team_counts[owner] = 1 - owner_team_set[owner] = {teams[i]} - else: - owner_team_counts[owner] += 1 - owner_team_set[owner].add(teams[i]) - - # Create a DataFrame from the dictionary - owners_df = pd.DataFrame.from_dict(owner_team_counts, orient="index", columns=["Games played by owner"]) - owners_df["Unique teams by owner"] = owners_df.index.map(lambda x: len(owner_team_set[x])) - - # Return the DataFrame - return owners_df - - - - - - -# Display the DataFrame as a table, sorted by win percentage -with tab_team: - st.write("Form Table for previous 24 hours, ranked by win percentage") - stats = df.sort_values(by="win_pct", ascending=False) - styled_stats = stats.style.set_table_attributes("style='font-size: 20px'").set_table_styles([dict(selector='th', props=[('max-width', '200px')])]) - styled_stats = styled_stats.set_table_attributes("style='max-height: 1200px; overflow: auto'") - st.dataframe(styled_stats) - - -# Create a DataFrame from the list of owners and their number of teams -owners_df = fetch_owners() - -# Display the DataFrame as a table -with tab_owners: - - st.dataframe(owners_df) - - - diff --git a/spaces/hezhaoqia/vits-simple-api/bert_vits2/__init__.py b/spaces/hezhaoqia/vits-simple-api/bert_vits2/__init__.py deleted file mode 100644 index d3d019aa31cbe7a5333aa00a110ddf9ae58e2d7a..0000000000000000000000000000000000000000 --- a/spaces/hezhaoqia/vits-simple-api/bert_vits2/__init__.py +++ /dev/null @@ -1,2 +0,0 @@ -from bert_vits2.bert_vits2 import Bert_VITS2 -from bert_vits2 import text diff --git a/spaces/hhhhardman/VITS/attentions.py b/spaces/hhhhardman/VITS/attentions.py deleted file mode 100644 index 86bc73b5fe98cc7b443e9078553920346c996707..0000000000000000000000000000000000000000 --- a/spaces/hhhhardman/VITS/attentions.py +++ /dev/null @@ -1,300 +0,0 @@ -import math -import torch -from torch import nn -from torch.nn import functional as F - -import commons -from modules import LayerNorm - - -class Encoder(nn.Module): - def __init__(self, hidden_channels, filter_channels, n_heads, n_layers, kernel_size=1, p_dropout=0., window_size=4, **kwargs): - super().__init__() - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.window_size = window_size - - self.drop = nn.Dropout(p_dropout) - self.attn_layers = nn.ModuleList() - self.norm_layers_1 = nn.ModuleList() - self.ffn_layers = nn.ModuleList() - self.norm_layers_2 = nn.ModuleList() - for i in range(self.n_layers): - self.attn_layers.append(MultiHeadAttention(hidden_channels, hidden_channels, n_heads, p_dropout=p_dropout, window_size=window_size)) - self.norm_layers_1.append(LayerNorm(hidden_channels)) - self.ffn_layers.append(FFN(hidden_channels, hidden_channels, filter_channels, kernel_size, p_dropout=p_dropout)) - self.norm_layers_2.append(LayerNorm(hidden_channels)) - - def forward(self, x, x_mask): - attn_mask = x_mask.unsqueeze(2) * x_mask.unsqueeze(-1) - x = x * x_mask - for i in range(self.n_layers): - y = self.attn_layers[i](x, x, attn_mask) - y = self.drop(y) - x = self.norm_layers_1[i](x + y) - - y = self.ffn_layers[i](x, x_mask) - y = self.drop(y) - x = self.norm_layers_2[i](x + y) - x = x * x_mask - return x - - -class Decoder(nn.Module): - def __init__(self, hidden_channels, filter_channels, n_heads, n_layers, kernel_size=1, p_dropout=0., proximal_bias=False, proximal_init=True, **kwargs): - super().__init__() - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.proximal_bias = proximal_bias - self.proximal_init = proximal_init - - self.drop = nn.Dropout(p_dropout) - self.self_attn_layers = nn.ModuleList() - self.norm_layers_0 = nn.ModuleList() - self.encdec_attn_layers = nn.ModuleList() - self.norm_layers_1 = nn.ModuleList() - self.ffn_layers = nn.ModuleList() - self.norm_layers_2 = nn.ModuleList() - for i in range(self.n_layers): - self.self_attn_layers.append(MultiHeadAttention(hidden_channels, hidden_channels, n_heads, p_dropout=p_dropout, proximal_bias=proximal_bias, proximal_init=proximal_init)) - self.norm_layers_0.append(LayerNorm(hidden_channels)) - self.encdec_attn_layers.append(MultiHeadAttention(hidden_channels, hidden_channels, n_heads, p_dropout=p_dropout)) - self.norm_layers_1.append(LayerNorm(hidden_channels)) - self.ffn_layers.append(FFN(hidden_channels, hidden_channels, filter_channels, kernel_size, p_dropout=p_dropout, causal=True)) - self.norm_layers_2.append(LayerNorm(hidden_channels)) - - def forward(self, x, x_mask, h, h_mask): - """ - x: decoder input - h: encoder output - """ - self_attn_mask = commons.subsequent_mask(x_mask.size(2)).to(device=x.device, dtype=x.dtype) - encdec_attn_mask = h_mask.unsqueeze(2) * x_mask.unsqueeze(-1) - x = x * x_mask - for i in range(self.n_layers): - y = self.self_attn_layers[i](x, x, self_attn_mask) - y = self.drop(y) - x = self.norm_layers_0[i](x + y) - - y = self.encdec_attn_layers[i](x, h, encdec_attn_mask) - y = self.drop(y) - x = self.norm_layers_1[i](x + y) - - y = self.ffn_layers[i](x, x_mask) - y = self.drop(y) - x = self.norm_layers_2[i](x + y) - x = x * x_mask - return x - - -class MultiHeadAttention(nn.Module): - def __init__(self, channels, out_channels, n_heads, p_dropout=0., window_size=None, heads_share=True, block_length=None, proximal_bias=False, proximal_init=False): - super().__init__() - assert channels % n_heads == 0 - - self.channels = channels - self.out_channels = out_channels - self.n_heads = n_heads - self.p_dropout = p_dropout - self.window_size = window_size - self.heads_share = heads_share - self.block_length = block_length - self.proximal_bias = proximal_bias - self.proximal_init = proximal_init - self.attn = None - - self.k_channels = channels // n_heads - self.conv_q = nn.Conv1d(channels, channels, 1) - self.conv_k = nn.Conv1d(channels, channels, 1) - self.conv_v = nn.Conv1d(channels, channels, 1) - self.conv_o = nn.Conv1d(channels, out_channels, 1) - self.drop = nn.Dropout(p_dropout) - - if window_size is not None: - n_heads_rel = 1 if heads_share else n_heads - rel_stddev = self.k_channels**-0.5 - self.emb_rel_k = nn.Parameter(torch.randn(n_heads_rel, window_size * 2 + 1, self.k_channels) * rel_stddev) - self.emb_rel_v = nn.Parameter(torch.randn(n_heads_rel, window_size * 2 + 1, self.k_channels) * rel_stddev) - - nn.init.xavier_uniform_(self.conv_q.weight) - nn.init.xavier_uniform_(self.conv_k.weight) - nn.init.xavier_uniform_(self.conv_v.weight) - if proximal_init: - with torch.no_grad(): - self.conv_k.weight.copy_(self.conv_q.weight) - self.conv_k.bias.copy_(self.conv_q.bias) - - def forward(self, x, c, attn_mask=None): - q = self.conv_q(x) - k = self.conv_k(c) - v = self.conv_v(c) - - x, self.attn = self.attention(q, k, v, mask=attn_mask) - - x = self.conv_o(x) - return x - - def attention(self, query, key, value, mask=None): - # reshape [b, d, t] -> [b, n_h, t, d_k] - b, d, t_s, t_t = (*key.size(), query.size(2)) - query = query.view(b, self.n_heads, self.k_channels, t_t).transpose(2, 3) - key = key.view(b, self.n_heads, self.k_channels, t_s).transpose(2, 3) - value = value.view(b, self.n_heads, self.k_channels, t_s).transpose(2, 3) - - scores = torch.matmul(query / math.sqrt(self.k_channels), key.transpose(-2, -1)) - if self.window_size is not None: - assert t_s == t_t, "Relative attention is only available for self-attention." - key_relative_embeddings = self._get_relative_embeddings(self.emb_rel_k, t_s) - rel_logits = self._matmul_with_relative_keys(query /math.sqrt(self.k_channels), key_relative_embeddings) - scores_local = self._relative_position_to_absolute_position(rel_logits) - scores = scores + scores_local - if self.proximal_bias: - assert t_s == t_t, "Proximal bias is only available for self-attention." - scores = scores + self._attention_bias_proximal(t_s).to(device=scores.device, dtype=scores.dtype) - if mask is not None: - scores = scores.masked_fill(mask == 0, -1e4) - if self.block_length is not None: - assert t_s == t_t, "Local attention is only available for self-attention." - block_mask = torch.ones_like(scores).triu(-self.block_length).tril(self.block_length) - scores = scores.masked_fill(block_mask == 0, -1e4) - p_attn = F.softmax(scores, dim=-1) # [b, n_h, t_t, t_s] - p_attn = self.drop(p_attn) - output = torch.matmul(p_attn, value) - if self.window_size is not None: - relative_weights = self._absolute_position_to_relative_position(p_attn) - value_relative_embeddings = self._get_relative_embeddings(self.emb_rel_v, t_s) - output = output + self._matmul_with_relative_values(relative_weights, value_relative_embeddings) - output = output.transpose(2, 3).contiguous().view(b, d, t_t) # [b, n_h, t_t, d_k] -> [b, d, t_t] - return output, p_attn - - def _matmul_with_relative_values(self, x, y): - """ - x: [b, h, l, m] - y: [h or 1, m, d] - ret: [b, h, l, d] - """ - ret = torch.matmul(x, y.unsqueeze(0)) - return ret - - def _matmul_with_relative_keys(self, x, y): - """ - x: [b, h, l, d] - y: [h or 1, m, d] - ret: [b, h, l, m] - """ - ret = torch.matmul(x, y.unsqueeze(0).transpose(-2, -1)) - return ret - - def _get_relative_embeddings(self, relative_embeddings, length): - max_relative_position = 2 * self.window_size + 1 - # Pad first before slice to avoid using cond ops. - pad_length = max(length - (self.window_size + 1), 0) - slice_start_position = max((self.window_size + 1) - length, 0) - slice_end_position = slice_start_position + 2 * length - 1 - if pad_length > 0: - padded_relative_embeddings = F.pad( - relative_embeddings, - commons.convert_pad_shape([[0, 0], [pad_length, pad_length], [0, 0]])) - else: - padded_relative_embeddings = relative_embeddings - used_relative_embeddings = padded_relative_embeddings[:,slice_start_position:slice_end_position] - return used_relative_embeddings - - def _relative_position_to_absolute_position(self, x): - """ - x: [b, h, l, 2*l-1] - ret: [b, h, l, l] - """ - batch, heads, length, _ = x.size() - # Concat columns of pad to shift from relative to absolute indexing. - x = F.pad(x, commons.convert_pad_shape([[0,0],[0,0],[0,0],[0,1]])) - - # Concat extra elements so to add up to shape (len+1, 2*len-1). - x_flat = x.view([batch, heads, length * 2 * length]) - x_flat = F.pad(x_flat, commons.convert_pad_shape([[0,0],[0,0],[0,length-1]])) - - # Reshape and slice out the padded elements. - x_final = x_flat.view([batch, heads, length+1, 2*length-1])[:, :, :length, length-1:] - return x_final - - def _absolute_position_to_relative_position(self, x): - """ - x: [b, h, l, l] - ret: [b, h, l, 2*l-1] - """ - batch, heads, length, _ = x.size() - # padd along column - x = F.pad(x, commons.convert_pad_shape([[0, 0], [0, 0], [0, 0], [0, length-1]])) - x_flat = x.view([batch, heads, length**2 + length*(length -1)]) - # add 0's in the beginning that will skew the elements after reshape - x_flat = F.pad(x_flat, commons.convert_pad_shape([[0, 0], [0, 0], [length, 0]])) - x_final = x_flat.view([batch, heads, length, 2*length])[:,:,:,1:] - return x_final - - def _attention_bias_proximal(self, length): - """Bias for self-attention to encourage attention to close positions. - Args: - length: an integer scalar. - Returns: - a Tensor with shape [1, 1, length, length] - """ - r = torch.arange(length, dtype=torch.float32) - diff = torch.unsqueeze(r, 0) - torch.unsqueeze(r, 1) - return torch.unsqueeze(torch.unsqueeze(-torch.log1p(torch.abs(diff)), 0), 0) - - -class FFN(nn.Module): - def __init__(self, in_channels, out_channels, filter_channels, kernel_size, p_dropout=0., activation=None, causal=False): - super().__init__() - self.in_channels = in_channels - self.out_channels = out_channels - self.filter_channels = filter_channels - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.activation = activation - self.causal = causal - - if causal: - self.padding = self._causal_padding - else: - self.padding = self._same_padding - - self.conv_1 = nn.Conv1d(in_channels, filter_channels, kernel_size) - self.conv_2 = nn.Conv1d(filter_channels, out_channels, kernel_size) - self.drop = nn.Dropout(p_dropout) - - def forward(self, x, x_mask): - x = self.conv_1(self.padding(x * x_mask)) - if self.activation == "gelu": - x = x * torch.sigmoid(1.702 * x) - else: - x = torch.relu(x) - x = self.drop(x) - x = self.conv_2(self.padding(x * x_mask)) - return x * x_mask - - def _causal_padding(self, x): - if self.kernel_size == 1: - return x - pad_l = self.kernel_size - 1 - pad_r = 0 - padding = [[0, 0], [0, 0], [pad_l, pad_r]] - x = F.pad(x, commons.convert_pad_shape(padding)) - return x - - def _same_padding(self, x): - if self.kernel_size == 1: - return x - pad_l = (self.kernel_size - 1) // 2 - pad_r = self.kernel_size // 2 - padding = [[0, 0], [0, 0], [pad_l, pad_r]] - x = F.pad(x, commons.convert_pad_shape(padding)) - return x diff --git a/spaces/hkunlp/Binder/nsql/qa_module/openai_qa.py b/spaces/hkunlp/Binder/nsql/qa_module/openai_qa.py deleted file mode 100644 index 2fbb5ab66deaf978c06d2bb0516a96ca8924b849..0000000000000000000000000000000000000000 --- a/spaces/hkunlp/Binder/nsql/qa_module/openai_qa.py +++ /dev/null @@ -1,196 +0,0 @@ -import os -import random - -from generation.prompt import OpenAIQAPromptBuilder -from generation.generator import Generator -from retrieval.retriever import OpenAIQARetriever -from retrieval.retrieve_pool import OpenAIQARetrievePool, QAItem - -num_parallel_prompts = 10 -num_qa_shots = 8 -infinite_rows_len = 50 # If the table contain rows larger than this number, it will be handled rows by rows. -max_tokens = 1024 -ROOT_DIR = os.path.join(os.path.dirname(__file__), "../../") - - -class OpenAIQAModel(object): - def __init__(self, args, keys=None): - super().__init__() - - # Prepare keys - self.key_current_id = 0 - self.keys = keys - random.seed(42) - random.shuffle(self.keys) - - retrieve_pool = OpenAIQARetrievePool( - data_path=os.path.join(ROOT_DIR, args.qa_retrieve_pool_file) - ) - self.retriever = OpenAIQARetriever(retrieve_pool) - self.generator = Generator(args=None, keys=self.keys) # Just to use its call api function - - self.prompting_method = 'new_db' - self.answer_split_token: str = ';' - self.db_mapping_token = "\t" - - def call_openai_api_completion(self, prompt): - completion = self.generator._call_codex_api(engine="text-davinci-002", - prompt=prompt, - max_tokens=max_tokens, - temperature=0, - top_p=1, - n=1, - stop=["\n\n"]) - return completion - - def call_openai_for_completion_text(self, prompt, openai_usage_type="completion"): - if openai_usage_type == "completion": - completion = self.call_openai_api_completion(prompt) - return completion.choices[0].text - else: - raise ValueError("The model usage type '{}' doesn't exists!".format(openai_usage_type)) - - @staticmethod - def merge_tables(tables, by='row_id'): - assert len(set([len(_table['rows']) for _table in tables])) == 1, "Tables must have the same rows!" - merged_header = [by] - by_idx = tables[0]['header'].index(by) - merged_rows = [[_row[by_idx]] for _row in tables[0]['rows']] - - for _table in tables: - header, rows = _table['header'], _table['rows'] - for col_idx, col in enumerate(header): - if col == by: - continue - if col in merged_header: - # When the column is duplicate, and postfix _0, _1 etc. - col = "{}_{}".format(col, merged_header.count(col)) - merged_header.append(col) - for i, row in enumerate(rows): - merged_rows[i].append(row[col_idx]) - return {"header": merged_header, "rows": merged_rows} - - def wrap_with_prompt_for_table_qa(self, - question, - sub_table, - table_title=None, - answer_split_token=None, - qa_type="ans", - prompting_method="new_db", - db_mapping_token="😅", - verbose=True): - prompt = "Question Answering Over Database:\n\n" - if qa_type in ['map', 'ans'] and num_qa_shots > 0: - query_item = QAItem(qa_question=question, table=sub_table, title=table_title) - retrieved_items = self.retriever.retrieve(item=query_item, num_shots=num_qa_shots, qa_type=qa_type) - few_shot_prompt_list = [] - for item in retrieved_items: - one_shot_prompt = OpenAIQAPromptBuilder.build_one_shot_prompt( - item=item, - answer_split_token=answer_split_token, - verbose=verbose, - prompting_method=prompting_method, - db_mapping_token=db_mapping_token - ) - few_shot_prompt_list.append(one_shot_prompt) - few_shot_prompt = '\n'.join(few_shot_prompt_list[:num_qa_shots]) - prompt = few_shot_prompt - - prompt += "\nGive a database as shown below:\n{}\n\n".format( - OpenAIQAPromptBuilder.table2codex_prompt(sub_table, table_title) - ) - - if qa_type == "map": - prompt += "Q: Answer question \"{}\" row by row.".format(question) - assert answer_split_token is not None - if prompting_method == "basic": - prompt += " The answer should be a list split by '{}' and have {} items in total.".format( - answer_split_token, len(sub_table['rows'])) - - elif qa_type == "ans": - prompt += "Q: Answer question \"{}\" for the table.".format(question) - prompt += " " - else: - raise ValueError("The QA type is not supported!") - - prompt += "\n" - if qa_type == "map": - if prompting_method == "basic": - prompt += "A:" - elif qa_type == "ans": - prompt += "A:" - - return prompt - - def qa(self, question, sub_tables, qa_type: str, verbose: bool = True, **args): - # If it is not a problem API can handle, answer it with a QA model. - merged_table = OpenAIQAModel.merge_tables(sub_tables) - if verbose: - print("Make Question {} on {}".format(question, merged_table)) - if qa_type == "map": - # Map: col(s) -question> one col - - # Make model make a QA towards a sub-table - # col(s) -> one col, all QA in one time - def do_map(_table): - _prompt = self.wrap_with_prompt_for_table_qa(question, - _table, - args['table_title'], - self.answer_split_token, - qa_type, - prompting_method=self.prompting_method, - db_mapping_token=self.db_mapping_token, - verbose=verbose) - completion_str = self.call_openai_for_completion_text(_prompt).lower().strip(' []') - - if verbose: - print(f'QA map@ input:\n{_prompt}') - print(f'QA map@ output:\n{completion_str}') - - if self.prompting_method == "basic": - answers = [_answer.strip(" '").lower() for _answer in - completion_str.split(self.answer_split_token)] - elif self.prompting_method == "new_db": - answers = [line.split(self.db_mapping_token)[-1] for line in completion_str.split("\n")[2:-1]] - else: - raise ValueError("No such prompting methods: '{}'! ".format(self.prompting_method)) - return answers - - # Handle infinite rows, rows by rows. - answers = [] - rows_len = len(merged_table['rows']) - run_times = int(rows_len / infinite_rows_len) if rows_len % infinite_rows_len == 0 else int( - rows_len / infinite_rows_len) + 1 - - for run_idx in range(run_times): - _table = { - "header": merged_table['header'], - "rows": merged_table['rows'][run_idx * infinite_rows_len:] - } if run_idx == run_times - 1 else \ - { - "header": merged_table['header'], - "rows": merged_table['rows'][run_idx * infinite_rows_len:(run_idx + 1) * infinite_rows_len] - } - - answers.extend(do_map(_table)) - if verbose: - print("The map@ openai answers are {}".format(answers)) - # Add row_id in addition for finding to corresponding rows. - return {"header": ['row_id'] + args['new_col_name_s'], - "rows": [[row[0], answer] for row, answer in zip(merged_table['rows'], answers)]} - elif qa_type == "ans": - # Ans: col(s) -question> answer - prompt = self.wrap_with_prompt_for_table_qa(question, - merged_table, - args['table_title'], - prompting_method=self.prompting_method, - verbose=verbose) - answers = [self.call_openai_for_completion_text(prompt).lower().strip(' []')] - - if verbose: - print(f'QA ans@ input:\n{prompt}') - print(f'QA ans@ output:\n{answers}') - - return answers - else: - raise ValueError("Please choose from map and ans in the qa usage!!") diff --git a/spaces/huggingchat/chat-ui/src/lib/utils/randomUuid.ts b/spaces/huggingchat/chat-ui/src/lib/utils/randomUuid.ts deleted file mode 100644 index 9d536365c57659305ad28d6fc06b89d77ab337ab..0000000000000000000000000000000000000000 --- a/spaces/huggingchat/chat-ui/src/lib/utils/randomUuid.ts +++ /dev/null @@ -1,14 +0,0 @@ -type UUID = ReturnType; - -export function randomUUID(): UUID { - // Only on old safari / ios - if (!("randomUUID" in crypto)) { - return "10000000-1000-4000-8000-100000000000".replace(/[018]/g, (c) => - ( - Number(c) ^ - (crypto.getRandomValues(new Uint8Array(1))[0] & (15 >> (Number(c) / 4))) - ).toString(16) - ) as UUID; - } - return crypto.randomUUID(); -} diff --git a/spaces/hysts/AnimeGANv3_PortraitSketch/images/README.md b/spaces/hysts/AnimeGANv3_PortraitSketch/images/README.md deleted file mode 100644 index 29f8d67364b8d5a29122f6036b7e16b90bbfefa1..0000000000000000000000000000000000000000 --- a/spaces/hysts/AnimeGANv3_PortraitSketch/images/README.md +++ /dev/null @@ -1,6 +0,0 @@ -These images are freely-usable ones from [Unsplash](https://unsplash.com/). - -- https://unsplash.com/photos/rDEOVtE7vOs -- https://unsplash.com/photos/et_78QkMMQs -- https://unsplash.com/photos/ILip77SbmOE -- https://unsplash.com/photos/95UF6LXe-Lo diff --git a/spaces/inflaton/learn-ai/app_modules/init.py b/spaces/inflaton/learn-ai/app_modules/init.py deleted file mode 100644 index 34beac1f256dee5ab82be595c5b6a63230c85709..0000000000000000000000000000000000000000 --- a/spaces/inflaton/learn-ai/app_modules/init.py +++ /dev/null @@ -1,82 +0,0 @@ -"""Main entrypoint for the app.""" -import os -from timeit import default_timer as timer -from typing import List, Optional - -from dotenv import find_dotenv, load_dotenv -from langchain.embeddings import HuggingFaceInstructEmbeddings -from langchain.vectorstores.chroma import Chroma -from langchain.vectorstores.faiss import FAISS - -from app_modules.llm_loader import LLMLoader -from app_modules.llm_qa_chain import QAChain -from app_modules.utils import get_device_types, init_settings - -found_dotenv = find_dotenv(".env") - -if len(found_dotenv) == 0: - found_dotenv = find_dotenv(".env.example") -print(f"loading env vars from: {found_dotenv}") -load_dotenv(found_dotenv, override=False) - -# Constants -init_settings() - - -def app_init(initQAChain: bool = True): - # https://github.com/huggingface/transformers/issues/17611 - os.environ["CURL_CA_BUNDLE"] = "" - - llm_model_type = os.environ.get("LLM_MODEL_TYPE") - n_threds = int(os.environ.get("NUMBER_OF_CPU_CORES") or "4") - - hf_embeddings_device_type, hf_pipeline_device_type = get_device_types() - print(f"hf_embeddings_device_type: {hf_embeddings_device_type}") - print(f"hf_pipeline_device_type: {hf_pipeline_device_type}") - - if initQAChain: - hf_embeddings_model_name = ( - os.environ.get("HF_EMBEDDINGS_MODEL_NAME") or "hkunlp/instructor-xl" - ) - - index_path = os.environ.get("FAISS_INDEX_PATH") or os.environ.get( - "CHROMADB_INDEX_PATH" - ) - using_faiss = os.environ.get("FAISS_INDEX_PATH") is not None - - start = timer() - embeddings = HuggingFaceInstructEmbeddings( - model_name=hf_embeddings_model_name, - model_kwargs={"device": hf_embeddings_device_type}, - ) - end = timer() - - print(f"Completed in {end - start:.3f}s") - - start = timer() - - print( - f"Load index from {index_path} with {'FAISS' if using_faiss else 'Chroma'}" - ) - - if not os.path.isdir(index_path): - raise ValueError(f"{index_path} does not exist!") - elif using_faiss: - vectorstore = FAISS.load_local(index_path, embeddings) - else: - vectorstore = Chroma( - embedding_function=embeddings, persist_directory=index_path - ) - - end = timer() - - print(f"Completed in {end - start:.3f}s") - - start = timer() - llm_loader = LLMLoader(llm_model_type) - llm_loader.init(n_threds=n_threds, hf_pipeline_device_type=hf_pipeline_device_type) - qa_chain = QAChain(vectorstore, llm_loader) if initQAChain else None - end = timer() - print(f"Completed in {end - start:.3f}s") - - return llm_loader, qa_chain diff --git a/spaces/inplisQlawa/anything-midjourney-v4-1/Elecardmpeg2serialnumber.md b/spaces/inplisQlawa/anything-midjourney-v4-1/Elecardmpeg2serialnumber.md deleted file mode 100644 index 983da17a4037d65e7c2eabf53cfef99dc9848ec6..0000000000000000000000000000000000000000 --- a/spaces/inplisQlawa/anything-midjourney-v4-1/Elecardmpeg2serialnumber.md +++ /dev/null @@ -1,45 +0,0 @@ - -

      Elecard MPEG-2 Serial Number: What It Is and How to Get It

      - -

      If you are looking for a high-quality MPEG-2 decoder, plugin, or player for your Windows Media Player or other DirectShow-based applications, you may want to try Elecard MPEG-2 products. Elecard is a leading provider of video compression solutions, analyzers, encoders, and playback software. Elecard MPEG-2 products are compatible with various formats and standards, such as DVD, ATSC, DVB, and IPTV. They also support DXVA and multi CPU for better performance and efficiency.

      - -

      However, to use Elecard MPEG-2 products, you need to have a valid elecardmpeg2serialnumber. This is a unique code that activates your product and allows you to enjoy its full features and benefits. In this article, we will show you what an elecardmpeg2serialnumber is, where to find it, and how to use it.

      -

      elecardmpeg2serialnumber


      DOWNLOAD ⚹⚹⚹ https://urlin.us/2uEvQl



      - -

      What is an Elecard MPEG-2 Serial Number?

      - -

      An elecardmpeg2serialnumber is a 25-digit alphanumeric code that looks something like this: XXXXX-XXXXX-XXXXX-XXXXX-XXXXX. It is a proof of purchase and a license key that enables you to use Elecard MPEG-2 products legally and without limitations. You need to enter your elecardmpeg2serialnumber when you install or activate your product.

      - -

      An elecardmpeg2serialnumber is different from a product key or an activation code. A product key is a code that identifies your product and its version. An activation code is a code that verifies your product online and unlocks its features. An elecardmpeg2serialnumber is a combination of both: it identifies your product and unlocks its features.

      - -

      Where to Find Your Elecard MPEG-2 Serial Number?

      - -

      There are different ways to get your elecardmpeg2serialnumber depending on how you purchased your product. Here are some of the most common methods:

      -

      - -
        -
      • If you bought your product online from the Elecard website or an authorized reseller, you should receive your elecardmpeg2serialnumber by email after completing your payment. Check your inbox and spam folder for an email from Elecard or the reseller with your order confirmation and your elecardmpeg2serialnumber.
      • -
      • If you bought your product offline from a physical store or a distributor, you should find your elecardmpeg2serialnumber on the product package or on the receipt. Look for a sticker or a label with the Elecard logo and your elecardmpeg2serialnumber.
      • -
      • If you downloaded your product as a trial version from the Elecard website or another source, you can request your elecardmpeg2serialnumber by filling out a form on the Elecard website. You will need to provide some information about yourself and your product, such as your name, email address, product name, version, and download source. You will then receive your elecardmpeg2serialnumber by email within 24 hours.
      • -
      - -

      How to Use Your Elecard MPEG-2 Serial Number?

      - -

      Once you have your elecardmpeg2serialnumber, you can use it to install or activate your product. Here are the steps to follow:

      - -
        -
      1. Download the installer of your product from the Elecard website or another source.
      2. -
      3. Run the installer and follow the instructions on the screen.
      4. -
      5. When prompted, enter your elecardmpeg2serialnumber in the designated field. Make sure you enter it correctly and without spaces or dashes.
      6. -
      7. Click on Next or Finish to complete the installation or activation process.
      8. -
      9. Enjoy using your product with full features and benefits.
      10. -
      - -

      Conclusion

      - -

      In this article, we have explained what an elecardmpeg2serialnumber is, where to find it, and how to use it. We hope this guide was helpful and that you can enjoy using Elecard MPEG-2 products with high quality and performance. If you have any questions or feedback, please leave a comment below.

      -

      Conclusion

      - -

      In this article, we have explained what an elecardmpeg2serialnumber is, where to find it, and how to use it. We hope this guide was helpful and that you can enjoy using Elecard MPEG-2 products with high quality and performance. If you have any questions or feedback, please leave a comment below.

      3cee63e6c2
      -
      -
      \ No newline at end of file diff --git a/spaces/inplisQlawa/anything-midjourney-v4-1/Evolution Mk 425c Software 17.md b/spaces/inplisQlawa/anything-midjourney-v4-1/Evolution Mk 425c Software 17.md deleted file mode 100644 index 7e342f32e4845c5eff6ad4532181ac5cc7640fbd..0000000000000000000000000000000000000000 --- a/spaces/inplisQlawa/anything-midjourney-v4-1/Evolution Mk 425c Software 17.md +++ /dev/null @@ -1,94 +0,0 @@ - -

      Evolution Mk 425c Software 17: A Powerful Combination for Music Creation

      - -

      If you are looking for a versatile and affordable MIDI keyboard controller that works seamlessly with Software 17, you might want to check out the Evolution Mk 425c. This compact and portable device offers 25 full-size, velocity-sensitive keys, eight assignable rotary knobs, ten assignable buttons, a pitch bend wheel, a modulation wheel, and a sustain pedal input. It also comes with a USB cable that connects it to your computer and powers it up.

      -

      Evolution Mk 425c Software 17


      Download Filehttps://urlin.us/2uExpf



      - -

      But what makes the Evolution Mk 425c stand out from other MIDI controllers is its compatibility with Software 17, a popular music production software that lets you create, record, edit, mix, and master your own songs. Software 17 has a user-friendly interface that allows you to access thousands of sounds, instruments, effects, loops, and samples. It also has advanced features such as audio editing, MIDI editing, automation, mixing, mastering, and exporting.

      - -

      How to Use Evolution Mk 425c with Software 17

      - -

      Using the Evolution Mk 425c with Software 17 is easy and intuitive. All you need to do is plug the USB cable into your computer and launch Software 17. The Evolution Mk 425c will be automatically recognized and configured by the software. You can then use the keyboard to play any instrument or sound in Software 17. You can also use the knobs and buttons to control various parameters such as volume, pan, filter, envelope, and more. You can assign any function to any knob or button using the software's MIDI learn feature.

      - -

      The Evolution Mk 425c also has a preset mode that lets you access different presets for different software applications. You can switch between presets using the preset button on the keyboard. For example, you can use the preset for Software 17 to control the software's mixer and transport functions. You can also create your own presets using the included Evolution Librarian software.

      - -

      Why Software 17 Makes the Difference

      - -

      Software 17 is one of the most powerful and versatile music production software available today. It has everything you need to create professional-quality music in any genre and style. Whether you are a beginner or an expert, Software 17 will help you unleash your creativity and achieve your musical goals.

      -

      - -

      Some of the features that make Software 17 stand out are:

      - -
        -
      • A huge library of sounds, instruments, effects, loops, and samples that cover a wide range of musical genres and styles.
      • -
      • A flexible and intuitive interface that lets you drag and drop sounds, instruments, effects, loops, and samples onto the timeline or the mixer.
      • -
      • An advanced audio engine that delivers high-quality sound and performance.
      • -
      • A powerful audio editing tool that lets you cut, copy, paste, trim, fade, normalize, reverse, pitch-shift, time-stretch, and more.
      • -
      • A comprehensive MIDI editing tool that lets you edit notes, velocities, lengths, quantize, transpose, groove, and more.
      • -
      • An automation feature that lets you automate any parameter of any sound, instrument, effect, loop, or sample.
      • -
      • A mixing feature that lets you adjust levels, pan, EQ, compression, reverb, delay, chorus, flanger, phaser, distortion, and more.
      • -
      • A mastering feature that lets you apply final touches to your music such as limiter, maximizer, multiband compressor, stereo enhancer,
      • -

        The Benefits of Evolution Mk 425c and Software 17 for Your Home Studio

        - -

        If you are a home studio owner or a hobbyist musician, you might be wondering how the Evolution Mk 425c and Software 17 can benefit your music creation process. Well, there are many reasons why this combination is ideal for your needs. Here are some of them:

        - -
          -
        • The Evolution Mk 425c is compact and lightweight, which means you can easily carry it around and set it up anywhere in your home studio. You don't need a lot of space or a complicated setup to use it.
        • -
        • The Evolution Mk 425c is USB-powered, which means you don't need an external power supply or batteries to use it. You just need to plug it into your computer and you are ready to go.
        • -
        • The Evolution Mk 425c is compatible with any software that supports MIDI, which means you can use it with Software 17 or any other music production software you prefer. You can also use it with other hardware devices that have MIDI inputs or outputs.
        • -
        • The Evolution Mk 425c has a variety of controls that let you tweak and adjust your sounds and parameters in real time. You can use the knobs and buttons to control volume, pan, filter, envelope, and more. You can also use the pitch bend wheel and the modulation wheel to add expression and dynamics to your playing.
        • -
        • The Evolution Mk 425c has a preset mode that lets you access different presets for different software applications. You can switch between presets using the preset button on the keyboard. For example, you can use the preset for Software 17 to control the software's mixer and transport functions. You can also create your own presets using the included Evolution Librarian software.
        • -
        - -

        Software 17 is one of the most powerful and versatile music production software available today. It has everything you need to create professional-quality music in any genre and style. Whether you are a beginner or an expert, Software 17 will help you unleash your creativity and achieve your musical goals.

        - -
          -
        • Software 17 has a huge library of sounds, instruments, effects, loops, and samples that cover a wide range of musical genres and styles. You can access thousands of sounds from various categories such as drums, bass, guitar, piano, synth, orchestral, vocal, and more. You can also import your own sounds or use third-party plugins to expand your sonic palette.
        • -
        • Software 17 has a flexible and intuitive interface that lets you drag and drop sounds, instruments, effects, loops, and samples onto the timeline or the mixer. You can easily arrange, edit, and mix your tracks using the software's tools and features. You can also customize your workflow by resizing, docking, undocking, or hiding any window or panel.
        • -
        • Software 17 has an advanced audio engine that delivers high-quality sound and performance. You can record audio from any source using your computer's built-in microphone or an external audio interface. You can also edit audio using tools such as cut, copy, paste, trim, fade, normalize, reverse, pitch-shift, time-stretch, and more.
        • -
        • Software 17 has a comprehensive MIDI editing tool that lets you edit notes, velocities, lengths, quantize, transpose, groove, and more. You can also record MIDI from any MIDI controller such as the Evolution Mk 425c or use the software's virtual keyboard or drum pads. You can also use MIDI effects such as arpeggiator, chord generator,
        • -

          How to Install and Update Evolution Mk 425c Drivers and Software

          - -

          One of the advantages of the Evolution Mk 425c is that it is a class-compliant device, which means it does not require any drivers to work with your computer. However, if you want to use the Evolution Librarian software or access the latest firmware updates, you will need to download and install them from the M-Audio website.

          - -

          To install and update the Evolution Mk 425c drivers and software, follow these steps:

          - -
            -
          1. Go to the M-Audio website and navigate to the Support section.
          2. -
          3. Select Drivers & Software Updates from the menu.
          4. -
          5. Select Evolution MK-425C from the Product list.
          6. -
          7. Select your operating system from the OS list.
          8. -
          9. Download the latest driver or software update for your device.
          10. -
          11. Unzip the downloaded file and run the installer.
          12. -
          13. Follow the on-screen instructions to complete the installation.
          14. -
          15. Restart your computer if prompted.
          16. -
          - -

          You can also check for firmware updates for your device using the Evolution Librarian software. To do this, follow these steps:

          - -
            -
          1. Launch the Evolution Librarian software on your computer.
          2. -
          3. Connect your Evolution Mk 425c to your computer using the USB cable.
          4. -
          5. Select Firmware Update from the Tools menu.
          6. -
          7. The software will check for available updates and prompt you to download and install them if found.
          8. -
          9. Follow the on-screen instructions to complete the update.
          10. -
          - -

          Tips and Tricks for Getting the Most Out of Evolution Mk 425c and Software 17

          - -

          The Evolution Mk 425c and Software 17 are a powerful combination for music creation, but there are some tips and tricks that can help you get the most out of them. Here are some of them:

          - -
            -
          • Use the preset mode to access different presets for different software applications. You can switch between presets using the preset button on the keyboard. For example, you can use the preset for Software 17 to control the software's mixer and transport functions. You can also create your own presets using the included Evolution Librarian software.
          • -
          • Use the MIDI learn feature to assign any function to any knob or button on the keyboard. To do this, right-click on any parameter in Software 17 and select MIDI Learn from the menu. Then move or press any knob or button on the keyboard to assign it to that parameter. You can also use the MIDI Learn window in Software 17 to view and edit your assignments.
          • -
          • Use the pitch bend wheel and the modulation wheel to add expression and dynamics to your playing. You can adjust how much these wheels affect your sound using the Pitch Bend Range and Modulation Depth parameters in Software 17. You can also assign different effects or functions to these wheels using the MIDI learn feature.
          • -
          • Use the sustain pedal input to connect a sustain pedal to your keyboard. This will allow you to hold notes or chords without keeping your fingers on the keys. You can also assign different functions to this pedal using the MIDI learn feature.
          • -
          • Use the octave buttons to change the octave range of your keyboard. This will allow you to access higher or lower notes that are not available on your keyboard. You can also use these buttons to transpose your keyboard by semitones using the Shift button.
          • -
          -

          Conclusion

          - -

          The Evolution Mk 425c and Software 17 are a great choice for anyone who wants to create music with a MIDI keyboard controller and a music production software. They offer a lot of features and functions that make music creation easy and fun. Whether you are a beginner or an expert, you will find that this combination can help you achieve your musical goals.

          - -

          If you are interested in getting the Evolution Mk 425c and Software 17, you can visit the M-Audio website and order them online. You can also find them in your local music store or online retailer. You will not regret investing in these products, as they will provide you with hours of musical enjoyment and satisfaction.

          3cee63e6c2
          -
          -
          \ No newline at end of file diff --git a/spaces/inplisQlawa/anything-midjourney-v4-1/Karan Arjun Mp4 Full Movie Download [2021].md b/spaces/inplisQlawa/anything-midjourney-v4-1/Karan Arjun Mp4 Full Movie Download [2021].md deleted file mode 100644 index 44ffe993179f239e8c90b1f9b076bf503cc73ef0..0000000000000000000000000000000000000000 --- a/spaces/inplisQlawa/anything-midjourney-v4-1/Karan Arjun Mp4 Full Movie Download [2021].md +++ /dev/null @@ -1,6 +0,0 @@ -

          Karan Arjun Mp4 Full Movie Download


          Download ✏ ✏ ✏ https://urlin.us/2uEvK7



          - -Karan Arjun L Salman Khan Shah Rukh Khan Mamta Kulkarni Kajol L 1995. play. download. Karan Arjun 1995 Hindi Full HD Movie Shahrukh Khan Salman ... 4d29de3e1b
          -
          -
          -

          diff --git a/spaces/inreVtussa/clothingai/Examples/Aram Veeser New Historicism Pdf Download.md b/spaces/inreVtussa/clothingai/Examples/Aram Veeser New Historicism Pdf Download.md deleted file mode 100644 index 3e7a69a66d4c5e9edb5978a6473205f1a68c46e6..0000000000000000000000000000000000000000 --- a/spaces/inreVtussa/clothingai/Examples/Aram Veeser New Historicism Pdf Download.md +++ /dev/null @@ -1,6 +0,0 @@ -

          Aram Veeser New Historicism Pdf Download


          Download Zip ===== https://tiurll.com/2uCl5k



          -
          -by Y Bao · 2018 — A copy can be downloaded for personal non-commercial research or study, without ... 2.3 New Historicism: A social and historical model for intertextual reading ... historical considerations to the center stage of literary analysis” again (Veeser 1989, ... In The New Historicism, edited by H. Aram Veeser, 15-36. 1fdad05405
          -
          -
          -

          diff --git a/spaces/inreVtussa/clothingai/Examples/Atoll 281 Crack [PORTABLE].md b/spaces/inreVtussa/clothingai/Examples/Atoll 281 Crack [PORTABLE].md deleted file mode 100644 index 2b7b2f789643ca4c921c0dbe1a5e1e26d63590ab..0000000000000000000000000000000000000000 --- a/spaces/inreVtussa/clothingai/Examples/Atoll 281 Crack [PORTABLE].md +++ /dev/null @@ -1,64 +0,0 @@ -

          Atoll 281 Crack


          DOWNLOAD ->>->>->> https://tiurll.com/2uCiqW



          - -New York. - -Gibbs, D.N. (1994). The Archaeology of the Eastern Pacific: An - -Interpretive Study of the Archaeology of a Pacific Rim Region. In: - -P. D. Miller and J. M. Campana (eds.), The Archaeology of the - -Pacific Rim: Essays in Honor of William F. Safford. University of - -Hawai'i Press. pp. 39-79. - -Groves, R. W. (1988). The Cultural Processes of Prehistoric and - -Ancient Hawai'i, 1500-1843. Hawai'i University Press. - -Hiroda, T., M. Fujiwara and A. Watanabe (eds.) (1989). - -Familie, Kultur, Geschichte: Eine Ausstellung zum 1000. - -Geburtstag der Stadt Otsu und der Stadt Shofu. Von der Otsu - -Jingungyo-Gesellschaft (ed.), Otsu NPO. Otsu, Japan. - -Hiroda, T., N. Nakatani and K. Tamura (eds.) (1987). - -Taian-Shu: Archeology of the Nara period in Japan, Vol. IV - -(ed. in collaboration with R. H. Harbottle, R.N.A.S.), Japan - -Association of East Asian Archaeologists. National Museum, - -Tokyo. - -Hiroda, T. and A. Watanabe (1986). Japanese tribal culture and - -nationhood, 1500-1868. Cambridge University Press. - -Kaplan, K. H. (1961). An Introduction to Ancient Japanese - -Linguistics. University of California Press. - -Lloyd, C. A. (1983). Notation on the translation of the Chinese - -book-name in an old inscription in the Nara period. International - -Journal of Historical Linguistics. 1 (3): 103-122. - -Lloyd, C. A. (1977). "The True History of Japan." - -Seabrook, D. H. (ed.) (1962). The World of Japan. Cambridge - -University Press. - -McClelland, J. T. (1968). The architecture of the Japanese - -houses of the middle ages. Cornell University. - -McClelland, J. T. (1981). 4fefd39f24
          -
          -
          -

          diff --git a/spaces/inreVtussa/clothingai/Examples/Corporate Finance European Edition E Book.md b/spaces/inreVtussa/clothingai/Examples/Corporate Finance European Edition E Book.md deleted file mode 100644 index 5e5a9ceda82cdccd8976716d00b37769c11f56c2..0000000000000000000000000000000000000000 --- a/spaces/inreVtussa/clothingai/Examples/Corporate Finance European Edition E Book.md +++ /dev/null @@ -1,21 +0,0 @@ - -

          How to Learn Corporate Finance with European Edition E-Books

          -

          Corporate finance is the study of how firms make decisions about investment, financing, working capital, valuation and risk management. It is a core subject for business students and professionals who want to understand how to create value and optimize financial performance in a global and dynamic environment.

          -

          corporate finance european edition e book


          Downloadhttps://tiurll.com/2uCizg



          -

          One of the best ways to learn corporate finance is by reading textbooks that cover the fundamental concepts and applications of the field. However, not all textbooks are created equal. Some may be too theoretical or too US-centric, while others may be outdated or too expensive.

          -

          That's why we recommend using European edition e-books as your primary source of learning corporate finance. European edition e-books are digital versions of textbooks that are tailored to the European context and market. They offer several advantages over traditional print books, such as:

          -
            -
          • They are more affordable and accessible. You can buy them online and download them instantly to your device, without paying for shipping or waiting for delivery. You can also access them anytime and anywhere, as long as you have an internet connection.
          • -
          • They are more relevant and updated. They reflect the latest developments and trends in corporate finance, especially in the European Union and other regions. They also include more examples and cases from European companies and industries, which can help you relate the theory to practice.
          • -
          • They are more interactive and engaging. They feature multimedia elements, such as videos, animations, quizzes and links, that can enhance your learning experience and retention. They also allow you to highlight, annotate, bookmark and search the text, as well as adjust the font size and brightness to suit your preferences.
          • -
          -

          If you are looking for some of the best European edition e-books on corporate finance, here are some suggestions:

          -
            -
          1. Corporate Finance, European Edition by Peter Moles, Robert Parrino and David S. Kidwell. This e-book adopts a modular format that allows you to customize your learning path according to your needs and interests. It helps you develop the intuition and analytical skills necessary to effectively apply financial tools in real-world decision-making situations[^1^].
          2. -
          3. Fundamentals of Corporate Finance 4e by David Hillier, Iain Clacher, Stephen Ross, Randolph Westerfield and Bradford Jordan. This e-book provides a comprehensive and balanced introduction to corporate finance, with integrated theories and real-world European examples. It also features new Sustainability in Finance boxes that show how sustainability and corporate finance are interconnected in every-day life[^3^].
          4. -
          5. Corporate Finance: Theory and Practice 5e by Pierre Vernimmen, Pascal Quiry, Maurizio Dallocchio, Yann Le Fur and Antonio Salvi. This e-book combines rigorous theory with practical applications to give you a solid understanding of corporate finance from a European perspective. It covers all the key topics in corporate finance, such as valuation, capital structure, dividend policy, mergers and acquisitions, risk management and international finance.
          6. -
          -

          With these European edition e-books, you can learn corporate finance at your own pace and convenience, while gaining valuable insights into the European financial environment and practices. Whether you are a student or a professional, these e-books can help you master corporate finance and achieve your academic or career goals.

          -

          d5da3c52bf
          -
          -
          \ No newline at end of file diff --git a/spaces/inreVtussa/clothingai/Examples/DOMACI FILM Za Gledanje Rane.md b/spaces/inreVtussa/clothingai/Examples/DOMACI FILM Za Gledanje Rane.md deleted file mode 100644 index 73bef8067cf795cf1d93d0cbb369aad5b07552d7..0000000000000000000000000000000000000000 --- a/spaces/inreVtussa/clothingai/Examples/DOMACI FILM Za Gledanje Rane.md +++ /dev/null @@ -1,10 +0,0 @@ -
          -

          The first academic journal of Film and Television Institute of India (FTII). It includes discussions and articles on contemporary cinema, its history and aesthetics, as well as discussions on how digital media is affecting the way we relate to moving images. Lensight is published QUARTERLY...

          -

          DOMACI FILM za gledanje rane


          Download ––– https://tiurll.com/2uCkI5



          -

          Between September and November 2017, the Film and Television Institute of India (FTII) hosted a workshop on ‘Reading and Transcribing Oral History in Cinema: D. F. K. Rana’s Rajanikant, Bollywood Body Language’. It was organized by Media Studies Program in collaboration with Oral History Research Centre and Temenos Academy.

          -

          The film features a series of fantastical and darkly comic tales where Kure and his gang steal other people's properties, often kidnapping them. After stealing large sums of money, Kure and his gang often sexually abuse the victims.

          -

          The success of a film is often considered to be in the editing room. This is true, but considering the fact that more than 60 percent of a film's success is dictated by the quality of the production design and film craft. This includes the first five minutes of a film that can affect the entire rest of the story and the film. Film director, Walter Murch, considered to be one of the "godfathers" of digital audio editing and sound design, has given some insights on how he edits his films. He thinks that it's not only important, but imperative that we begin to...

          -

          -

          2018 Rane NSK Steering Systems Pvt. Ltd. The Los Angeles office is home to the co-leader of our Real Estate Finance Group and is the seat of our Bridge Lending Team. The counsel our real estate finance attorneys provide on loan origination, joint ventures, construction lending, and leasing matters has helped major projects countrywide become reality. On the litigation side, our lawyers represent commercial landlords and shopping center owners, managers, and developers in corporate disputes, and regulatory compliance and tenant bankruptcy matters. In the entertainment area, we represent clients in the film, music, television, online media, and publishing industries. In August 2019, a Media and Entertainment team from Leopold, Petrich & Smith joined the firm, adding additional strength in script-vetting, rights clearances, and copyright litigation.

          899543212b
          -
          -
          \ No newline at end of file diff --git a/spaces/iqovocn/ChuanhuChatGPT/chatgpt - macOS.command b/spaces/iqovocn/ChuanhuChatGPT/chatgpt - macOS.command deleted file mode 100644 index fa015edca9e6916f24394813ce8ba77d2072e296..0000000000000000000000000000000000000000 --- a/spaces/iqovocn/ChuanhuChatGPT/chatgpt - macOS.command +++ /dev/null @@ -1,7 +0,0 @@ -#!/bin/bash -echo Opening ChuanhuChatGPT... -cd "$(dirname "${BASH_SOURCE[0]}")" -nohup python3 ChuanhuChatbot.py >/dev/null 2>&1 & -sleep 5 -open http://127.0.0.1:7860 -echo Finished opening ChuanhuChatGPT (http://127.0.0.1:7860/). If you kill ChuanhuChatbot, Use "pkill -f 'ChuanhuChatbot'" command in terminal. \ No newline at end of file diff --git a/spaces/ispast/Genshin_MB_VITS_TTS/monotonic_align/setup.py b/spaces/ispast/Genshin_MB_VITS_TTS/monotonic_align/setup.py deleted file mode 100644 index 30c224807a70faa9df9c9eb75f8e80c8c867b16b..0000000000000000000000000000000000000000 --- a/spaces/ispast/Genshin_MB_VITS_TTS/monotonic_align/setup.py +++ /dev/null @@ -1,9 +0,0 @@ -from distutils.core import setup -from Cython.Build import cythonize -import numpy - -setup( - name = 'monotonic_align', - ext_modules = cythonize("core.pyx"), - include_dirs=[numpy.get_include()] -) diff --git a/spaces/jackyccl/segment-anything/segment_anything/modeling/sam.py b/spaces/jackyccl/segment-anything/segment_anything/modeling/sam.py deleted file mode 100644 index 8074cff6b40addc6b66f7ab4962218eef20da13c..0000000000000000000000000000000000000000 --- a/spaces/jackyccl/segment-anything/segment_anything/modeling/sam.py +++ /dev/null @@ -1,174 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. - -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -import torch -from torch import nn -from torch.nn import functional as F - -from typing import Any, Dict, List, Tuple - -from .image_encoder import ImageEncoderViT -from .mask_decoder import MaskDecoder -from .prompt_encoder import PromptEncoder - - -class Sam(nn.Module): - mask_threshold: float = 0.0 - image_format: str = "RGB" - - def __init__( - self, - image_encoder: ImageEncoderViT, - prompt_encoder: PromptEncoder, - mask_decoder: MaskDecoder, - pixel_mean: List[float] = [123.675, 116.28, 103.53], - pixel_std: List[float] = [58.395, 57.12, 57.375], - ) -> None: - """ - SAM predicts object masks from an image and input prompts. - - Arguments: - image_encoder (ImageEncoderViT): The backbone used to encode the - image into image embeddings that allow for efficient mask prediction. - prompt_encoder (PromptEncoder): Encodes various types of input prompts. - mask_decoder (MaskDecoder): Predicts masks from the image embeddings - and encoded prompts. - pixel_mean (list(float)): Mean values for normalizing pixels in the input image. - pixel_std (list(float)): Std values for normalizing pixels in the input image. - """ - super().__init__() - self.image_encoder = image_encoder - self.prompt_encoder = prompt_encoder - self.mask_decoder = mask_decoder - self.register_buffer("pixel_mean", torch.Tensor(pixel_mean).view(-1, 1, 1), False) - self.register_buffer("pixel_std", torch.Tensor(pixel_std).view(-1, 1, 1), False) - - @property - def device(self) -> Any: - return self.pixel_mean.device - - @torch.no_grad() - def forward( - self, - batched_input: List[Dict[str, Any]], - multimask_output: bool, - ) -> List[Dict[str, torch.Tensor]]: - """ - Predicts masks end-to-end from provided images and prompts. - If prompts are not known in advance, using SamPredictor is - recommended over calling the model directly. - - Arguments: - batched_input (list(dict)): A list over input images, each a - dictionary with the following keys. A prompt key can be - excluded if it is not present. - 'image': The image as a torch tensor in 3xHxW format, - already transformed for input to the model. - 'original_size': (tuple(int, int)) The original size of - the image before transformation, as (H, W). - 'point_coords': (torch.Tensor) Batched point prompts for - this image, with shape BxNx2. Already transformed to the - input frame of the model. - 'point_labels': (torch.Tensor) Batched labels for point prompts, - with shape BxN. - 'boxes': (torch.Tensor) Batched box inputs, with shape Bx4. - Already transformed to the input frame of the model. - 'mask_inputs': (torch.Tensor) Batched mask inputs to the model, - in the form Bx1xHxW. - multimask_output (bool): Whether the model should predict multiple - disambiguating masks, or return a single mask. - - Returns: - (list(dict)): A list over input images, where each element is - as dictionary with the following keys. - 'masks': (torch.Tensor) Batched binary mask predictions, - with shape BxCxHxW, where B is the number of input prompts, - C is determined by multimask_output, and (H, W) is the - original size of the image. - 'iou_predictions': (torch.Tensor) The model's predictions - of mask quality, in shape BxC. - 'low_res_logits': (torch.Tensor) Low resolution logits with - shape BxCxHxW, where H=W=256. Can be passed as mask input - to subsequent iterations of prediction. - """ - input_images = torch.stack([self.preprocess(x["image"]) for x in batched_input], dim=0) - image_embeddings = self.image_encoder(input_images) - - outputs = [] - for image_record, curr_embedding in zip(batched_input, image_embeddings): - if "point_coords" in image_record: - points = (image_record["point_coords"], image_record["point_labels"]) - else: - points = None - sparse_embeddings, dense_embeddings = self.prompt_encoder( - points=points, - boxes=image_record.get("boxes", None), - masks=image_record.get("mask_inputs", None), - ) - low_res_masks, iou_predictions = self.mask_decoder( - image_embeddings=curr_embedding.unsqueeze(0), - image_pe=self.prompt_encoder.get_dense_pe(), - sparse_prompt_embeddings=sparse_embeddings, - dense_prompt_embeddings=dense_embeddings, - multimask_output=multimask_output, - ) - masks = self.postprocess_masks( - low_res_masks, - input_size=image_record["image"].shape[-2:], - original_size=image_record["original_size"], - ) - masks = masks > self.mask_threshold - outputs.append( - { - "masks": masks, - "iou_predictions": iou_predictions, - "low_res_logits": low_res_masks, - } - ) - return outputs - - def postprocess_masks( - self, - masks: torch.Tensor, - input_size: Tuple[int, ...], - original_size: Tuple[int, ...], - ) -> torch.Tensor: - """ - Remove padding and upscale masks to the original image size. - - Arguments: - masks (torch.Tensor): Batched masks from the mask_decoder, - in BxCxHxW format. - input_size (tuple(int, int)): The size of the image input to the - model, in (H, W) format. Used to remove padding. - original_size (tuple(int, int)): The original size of the image - before resizing for input to the model, in (H, W) format. - - Returns: - (torch.Tensor): Batched masks in BxCxHxW format, where (H, W) - is given by original_size. - """ - masks = F.interpolate( - masks, - (self.image_encoder.img_size, self.image_encoder.img_size), - mode="bilinear", - align_corners=False, - ) - masks = masks[..., : input_size[0], : input_size[1]] - masks = F.interpolate(masks, original_size, mode="bilinear", align_corners=False) - return masks - - def preprocess(self, x: torch.Tensor) -> torch.Tensor: - """Normalize pixel values and pad to a square input.""" - # Normalize colors - x = (x - self.pixel_mean) / self.pixel_std - - # Pad - h, w = x.shape[-2:] - padh = self.image_encoder.img_size - h - padw = self.image_encoder.img_size - w - x = F.pad(x, (0, padw, 0, padh)) - return x diff --git a/spaces/janeH/QQsign/bin/unidbg-fetch-qsign.bat b/spaces/janeH/QQsign/bin/unidbg-fetch-qsign.bat deleted file mode 100644 index 8b291e7303b0c07d14b714e5795473891363c85b..0000000000000000000000000000000000000000 --- a/spaces/janeH/QQsign/bin/unidbg-fetch-qsign.bat +++ /dev/null @@ -1,89 +0,0 @@ -@rem -@rem Copyright 2015 the original author or authors. -@rem -@rem Licensed under the Apache License, Version 2.0 (the "License"); -@rem you may not use this file except in compliance with the License. -@rem You may obtain a copy of the License at -@rem -@rem https://www.apache.org/licenses/LICENSE-2.0 -@rem -@rem Unless required by applicable law or agreed to in writing, software -@rem distributed under the License is distributed on an "AS IS" BASIS, -@rem WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -@rem See the License for the specific language governing permissions and -@rem limitations under the License. -@rem - -@if "%DEBUG%" == "" @echo off -@rem ########################################################################## -@rem -@rem unidbg-fetch-qsign startup script for Windows -@rem -@rem ########################################################################## - -@rem Set local scope for the variables with windows NT shell -if "%OS%"=="Windows_NT" setlocal - -set DIRNAME=%~dp0 -if "%DIRNAME%" == "" set DIRNAME=. -set APP_BASE_NAME=%~n0 -set APP_HOME=%DIRNAME%.. - -@rem Resolve any "." and ".." in APP_HOME to make it shorter. -for %%i in ("%APP_HOME%") do set APP_HOME=%%~fi - -@rem Add default JVM options here. You can also use JAVA_OPTS and UNIDBG_FETCH_QSIGN_OPTS to pass JVM options to this script. -set DEFAULT_JVM_OPTS= - -@rem Find java.exe -if defined JAVA_HOME goto findJavaFromJavaHome - -set JAVA_EXE=java.exe -%JAVA_EXE% -version >NUL 2>&1 -if "%ERRORLEVEL%" == "0" goto execute - -echo. -echo ERROR: JAVA_HOME is not set and no 'java' command could be found in your PATH. -echo. -echo Please set the JAVA_HOME variable in your environment to match the -echo location of your Java installation. - -goto fail - -:findJavaFromJavaHome -set JAVA_HOME=%JAVA_HOME:"=% -set JAVA_EXE=%JAVA_HOME%/bin/java.exe - -if exist "%JAVA_EXE%" goto execute - -echo. -echo ERROR: JAVA_HOME is set to an invalid directory: %JAVA_HOME% -echo. -echo Please set the JAVA_HOME variable in your environment to match the -echo location of your Java installation. - -goto fail - -:execute -@rem Setup the command line - -set CLASSPATH=%APP_HOME%\lib\unidbg-fetch-qsign-1.1.9.jar;%APP_HOME%\lib\unidbg-android-105.jar;%APP_HOME%\lib\ktor-server-content-negotiation-jvm-2.3.1.jar;%APP_HOME%\lib\ktor-serialization-kotlinx-json-jvm-2.3.1.jar;%APP_HOME%\lib\ktor-server-status-pages-jvm-2.3.1.jar;%APP_HOME%\lib\ktor-server-netty-jvm-2.3.1.jar;%APP_HOME%\lib\ktor-server-host-common-jvm-2.3.1.jar;%APP_HOME%\lib\ktor-server-core-jvm-2.3.1.jar;%APP_HOME%\lib\ktor-serialization-kotlinx-jvm-2.3.1.jar;%APP_HOME%\lib\ktor-serialization-jvm-2.3.1.jar;%APP_HOME%\lib\ktor-events-jvm-2.3.1.jar;%APP_HOME%\lib\ktor-websockets-jvm-2.3.1.jar;%APP_HOME%\lib\ktor-http-cio-jvm-2.3.1.jar;%APP_HOME%\lib\ktor-http-jvm-2.3.1.jar;%APP_HOME%\lib\ktor-network-jvm-2.3.1.jar;%APP_HOME%\lib\ktor-utils-jvm-2.3.1.jar;%APP_HOME%\lib\ktor-io-jvm-2.3.1.jar;%APP_HOME%\lib\kotlin-stdlib-jdk8-1.8.22.jar;%APP_HOME%\lib\kotlinx-serialization-json-jvm-1.5.1.jar;%APP_HOME%\lib\kotlinx-serialization-protobuf-jvm-1.5.1.jar;%APP_HOME%\lib\kotlinx-serialization-core-jvm-1.5.1.jar;%APP_HOME%\lib\logback-classic-1.2.11.jar;%APP_HOME%\lib\kotlinx-coroutines-jdk8-1.7.1.jar;%APP_HOME%\lib\kotlinx-coroutines-core-jvm-1.7.1.jar;%APP_HOME%\lib\kotlin-stdlib-jdk7-1.8.22.jar;%APP_HOME%\lib\kotlin-reflect-1.8.10.jar;%APP_HOME%\lib\kotlin-stdlib-1.8.22.jar;%APP_HOME%\lib\slf4j-api-1.7.36.jar;%APP_HOME%\lib\kotlin-stdlib-common-1.8.22.jar;%APP_HOME%\lib\config-1.4.2.jar;%APP_HOME%\lib\jansi-2.4.0.jar;%APP_HOME%\lib\netty-codec-http2-4.1.92.Final.jar;%APP_HOME%\lib\alpn-api-1.1.3.v20160715.jar;%APP_HOME%\lib\netty-transport-native-kqueue-4.1.92.Final.jar;%APP_HOME%\lib\netty-transport-native-epoll-4.1.92.Final.jar;%APP_HOME%\lib\logback-core-1.2.11.jar;%APP_HOME%\lib\annotations-23.0.0.jar;%APP_HOME%\lib\netty-codec-http-4.1.92.Final.jar;%APP_HOME%\lib\netty-handler-4.1.92.Final.jar;%APP_HOME%\lib\netty-codec-4.1.92.Final.jar;%APP_HOME%\lib\netty-transport-classes-kqueue-4.1.92.Final.jar;%APP_HOME%\lib\netty-transport-classes-epoll-4.1.92.Final.jar;%APP_HOME%\lib\netty-transport-native-unix-common-4.1.92.Final.jar;%APP_HOME%\lib\netty-transport-4.1.92.Final.jar;%APP_HOME%\lib\netty-buffer-4.1.92.Final.jar;%APP_HOME%\lib\netty-resolver-4.1.92.Final.jar;%APP_HOME%\lib\netty-common-4.1.92.Final.jar - - -@rem Execute unidbg-fetch-qsign -"%JAVA_EXE%" %DEFAULT_JVM_OPTS% %JAVA_OPTS% %UNIDBG_FETCH_QSIGN_OPTS% -classpath "%CLASSPATH%" MainKt %* - -:end -@rem End local scope for the variables with windows NT shell -if "%ERRORLEVEL%"=="0" goto mainEnd - -:fail -rem Set variable UNIDBG_FETCH_QSIGN_EXIT_CONSOLE if you need the _script_ return code instead of -rem the _cmd.exe /c_ return code! -if not "" == "%UNIDBG_FETCH_QSIGN_EXIT_CONSOLE%" exit 1 -exit /b 1 - -:mainEnd -if "%OS%"=="Windows_NT" endlocal - -:omega diff --git a/spaces/jbilcke-hf/ai-clip-factory/src/components/ui/table.tsx b/spaces/jbilcke-hf/ai-clip-factory/src/components/ui/table.tsx deleted file mode 100644 index 953fb3c003bc0cd9d93059c373bc23e6aecbded8..0000000000000000000000000000000000000000 --- a/spaces/jbilcke-hf/ai-clip-factory/src/components/ui/table.tsx +++ /dev/null @@ -1,114 +0,0 @@ -import * as React from "react" - -import { cn } from "@/lib/utils" - -const Table = React.forwardRef< - HTMLTableElement, - React.HTMLAttributes ->(({ className, ...props }, ref) => ( -
          - - -)) -Table.displayName = "Table" - -const TableHeader = React.forwardRef< - HTMLTableSectionElement, - React.HTMLAttributes ->(({ className, ...props }, ref) => ( - -)) -TableHeader.displayName = "TableHeader" - -const TableBody = React.forwardRef< - HTMLTableSectionElement, - React.HTMLAttributes ->(({ className, ...props }, ref) => ( - -)) -TableBody.displayName = "TableBody" - -const TableFooter = React.forwardRef< - HTMLTableSectionElement, - React.HTMLAttributes ->(({ className, ...props }, ref) => ( - -)) -TableFooter.displayName = "TableFooter" - -const TableRow = React.forwardRef< - HTMLTableRowElement, - React.HTMLAttributes ->(({ className, ...props }, ref) => ( - -)) -TableRow.displayName = "TableRow" - -const TableHead = React.forwardRef< - HTMLTableCellElement, - React.ThHTMLAttributes ->(({ className, ...props }, ref) => ( -
          -)) -TableHead.displayName = "TableHead" - -const TableCell = React.forwardRef< - HTMLTableCellElement, - React.TdHTMLAttributes ->(({ className, ...props }, ref) => ( - -)) -TableCell.displayName = "TableCell" - -const TableCaption = React.forwardRef< - HTMLTableCaptionElement, - React.HTMLAttributes ->(({ className, ...props }, ref) => ( -
          -)) -TableCaption.displayName = "TableCaption" - -export { - Table, - TableHeader, - TableBody, - TableFooter, - TableHead, - TableRow, - TableCell, - TableCaption, -} diff --git a/spaces/jbilcke-hf/ai-comic-factory/src/components/ui/alert.tsx b/spaces/jbilcke-hf/ai-comic-factory/src/components/ui/alert.tsx deleted file mode 100644 index f589783193a6cfe14032a77b89055cb3e920fe8c..0000000000000000000000000000000000000000 --- a/spaces/jbilcke-hf/ai-comic-factory/src/components/ui/alert.tsx +++ /dev/null @@ -1,59 +0,0 @@ -import * as React from "react" -import { cva, type VariantProps } from "class-variance-authority" - -import { cn } from "@/lib/utils" - -const alertVariants = cva( - "relative w-full rounded-lg border border-stone-200 p-4 [&:has(svg)]:pl-11 [&>svg+div]:translate-y-[-3px] [&>svg]:absolute [&>svg]:left-4 [&>svg]:top-4 [&>svg]:text-stone-950 dark:border-stone-800 dark:[&>svg]:text-stone-50", - { - variants: { - variant: { - default: "bg-white text-stone-950 dark:bg-stone-950 dark:text-stone-50", - destructive: - "border-red-500/50 text-red-500 dark:border-red-500 [&>svg]:text-red-500 dark:border-red-900/50 dark:text-red-900 dark:dark:border-red-900 dark:[&>svg]:text-red-900", - }, - }, - defaultVariants: { - variant: "default", - }, - } -) - -const Alert = React.forwardRef< - HTMLDivElement, - React.HTMLAttributes & VariantProps ->(({ className, variant, ...props }, ref) => ( -
          -)) -Alert.displayName = "Alert" - -const AlertTitle = React.forwardRef< - HTMLParagraphElement, - React.HTMLAttributes ->(({ className, ...props }, ref) => ( -
          -)) -AlertTitle.displayName = "AlertTitle" - -const AlertDescription = React.forwardRef< - HTMLParagraphElement, - React.HTMLAttributes ->(({ className, ...props }, ref) => ( -
          -)) -AlertDescription.displayName = "AlertDescription" - -export { Alert, AlertTitle, AlertDescription } diff --git a/spaces/jbrinkma/deepmind-pushworld/README.md b/spaces/jbrinkma/deepmind-pushworld/README.md deleted file mode 100644 index 76fbaf12da6bdfab34db953710fb0ebd45cd93ee..0000000000000000000000000000000000000000 --- a/spaces/jbrinkma/deepmind-pushworld/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Deepmind Pushworld -emoji: 🌖 -colorFrom: blue -colorTo: purple -sdk: static -pinned: false -license: openrail -tags: -- making-demos ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/jdczlx/ChatGPT-chuanhu/assets/custom.js b/spaces/jdczlx/ChatGPT-chuanhu/assets/custom.js deleted file mode 100644 index 7b1761043149ff97ca498501c87a0d15db5258ee..0000000000000000000000000000000000000000 --- a/spaces/jdczlx/ChatGPT-chuanhu/assets/custom.js +++ /dev/null @@ -1 +0,0 @@ -// custom javascript here \ No newline at end of file diff --git a/spaces/jgurzoni/image_background_swapper/saicinpainting/training/__init__.py b/spaces/jgurzoni/image_background_swapper/saicinpainting/training/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/jkang/demo-painttransformer/network.py b/spaces/jkang/demo-painttransformer/network.py deleted file mode 100644 index ed9ae9d9563b8147d36d116a1c9377b74fba488b..0000000000000000000000000000000000000000 --- a/spaces/jkang/demo-painttransformer/network.py +++ /dev/null @@ -1,74 +0,0 @@ -import paddle -import paddle.nn as nn -import math - -class Painter(nn.Layer): - """ - network architecture written in paddle. - """ - def __init__(self, param_per_stroke, total_strokes, hidden_dim, n_heads=8, n_enc_layers=3, n_dec_layers=3): - super().__init__() - self.enc_img = nn.Sequential( - nn.Pad2D([1, 1, 1, 1], 'reflect'), - nn.Conv2D(3, 32, 3, 1), - nn.BatchNorm2D(32), - nn.ReLU(), # maybe replace with the inplace version - nn.Pad2D([1, 1, 1, 1], 'reflect'), - nn.Conv2D(32, 64, 3, 2), - nn.BatchNorm2D(64), - nn.ReLU(), - nn.Pad2D([1, 1, 1, 1], 'reflect'), - nn.Conv2D(64, 128, 3, 2), - nn.BatchNorm2D(128), - nn.ReLU()) - self.enc_canvas = nn.Sequential( - nn.Pad2D([1, 1, 1, 1], 'reflect'), - nn.Conv2D(3, 32, 3, 1), - nn.BatchNorm2D(32), - nn.ReLU(), - nn.Pad2D([1, 1, 1, 1], 'reflect'), - nn.Conv2D(32, 64, 3, 2), - nn.BatchNorm2D(64), - nn.ReLU(), - nn.Pad2D([1, 1, 1, 1], 'reflect'), - nn.Conv2D(64, 128, 3, 2), - nn.BatchNorm2D(128), - nn.ReLU()) - self.conv = nn.Conv2D(128 * 2, hidden_dim, 1) - self.transformer = nn.Transformer(hidden_dim, n_heads, n_enc_layers, n_dec_layers) - self.linear_param = nn.Sequential( - nn.Linear(hidden_dim, hidden_dim), - nn.ReLU(), - nn.Linear(hidden_dim, hidden_dim), - nn.ReLU(), - nn.Linear(hidden_dim, param_per_stroke)) - self.linear_decider = nn.Linear(hidden_dim, 1) - self.query_pos = paddle.static.create_parameter([total_strokes, hidden_dim], dtype='float32', - default_initializer=nn.initializer.Uniform(0, 1)) - self.row_embed = paddle.static.create_parameter([8, hidden_dim // 2], dtype='float32', - default_initializer=nn.initializer.Uniform(0, 1)) - self.col_embed = paddle.static.create_parameter([8, hidden_dim // 2], dtype='float32', - default_initializer=nn.initializer.Uniform(0, 1)) - - def forward(self, img, canvas): - """ - prediction - """ - b, _, H, W = img.shape - img_feat = self.enc_img(img) - canvas_feat = self.enc_canvas(canvas) - h, w = img_feat.shape[-2:] - feat = paddle.concat([img_feat, canvas_feat], axis=1) - feat_conv = self.conv(feat) - - pos_embed = paddle.concat([ - self.col_embed[:w].unsqueeze(0).tile([h, 1, 1]), - self.row_embed[:h].unsqueeze(1).tile([1, w, 1]), - ], axis=-1).flatten(0, 1).unsqueeze(1) - - hidden_state = self.transformer((pos_embed + feat_conv.flatten(2).transpose([2, 0, 1])).transpose([1, 0, 2]), - self.query_pos.unsqueeze(1).tile([1, b, 1]).transpose([1, 0, 2])) - - param = self.linear_param(hidden_state) - decision = self.linear_decider(hidden_state) - return param, decision \ No newline at end of file diff --git a/spaces/jlmarrugom/voice_fixer_app/voicefixer/__main__.py b/spaces/jlmarrugom/voice_fixer_app/voicefixer/__main__.py deleted file mode 100644 index 64884eb05a9de6cd4313bdc62c05b10e5a932a71..0000000000000000000000000000000000000000 --- a/spaces/jlmarrugom/voice_fixer_app/voicefixer/__main__.py +++ /dev/null @@ -1,170 +0,0 @@ -#!/usr/bin/python3 -from genericpath import exists -import os.path -import argparse -from voicefixer import VoiceFixer -import torch -import os - - -def writefile(infile, outfile, mode, append_mode, cuda, verbose=False): - if append_mode is True: - outbasename, outext = os.path.splitext(os.path.basename(outfile)) - outfile = os.path.join( - os.path.dirname(outfile), "{}-mode{}{}".format(outbasename, mode, outext) - ) - if verbose: - print("Processing {}, mode={}".format(infile, mode)) - voicefixer.restore(input=infile, output=outfile, cuda=cuda, mode=int(mode)) - -def check_arguments(args): - process_file, process_folder = len(args.infile) != 0, len(args.infolder) != 0 - # assert len(args.infile) == 0 and len(args.outfile) == 0 or process_file, \ - # "Error: You should give the input and output file path at the same time. The input and output file path we receive is %s and %s" % (args.infile, args.outfile) - # assert len(args.infolder) == 0 and len(args.outfolder) == 0 or process_folder, \ - # "Error: You should give the input and output folder path at the same time. The input and output folder path we receive is %s and %s" % (args.infolder, args.outfolder) - assert ( - process_file or process_folder - ), "Error: You need to specify a input file path (--infile) or a input folder path (--infolder) to proceed. For more information please run: voicefixer -h" - - # if(args.cuda and not torch.cuda.is_available()): - # print("Warning: You set --cuda while no cuda device found on your machine. We will use CPU instead.") - if process_file: - assert os.path.exists(args.infile), ( - "Error: The input file %s is not found." % args.infile - ) - output_dirname = os.path.dirname(args.outfile) - if len(output_dirname) > 1: - os.makedirs(output_dirname, exist_ok=True) - if process_folder: - assert os.path.exists(args.infolder), ( - "Error: The input folder %s is not found." % args.infile - ) - output_dirname = args.outfolder - if len(output_dirname) > 1: - os.makedirs(args.outfolder, exist_ok=True) - - return process_file, process_folder - - -if __name__ == "__main__": - parser = argparse.ArgumentParser( - description="VoiceFixer - restores degraded speech" - ) - parser.add_argument( - "-i", - "--infile", - type=str, - default="", - help="An input file to be processed by VoiceFixer.", - ) - parser.add_argument( - "-o", - "--outfile", - type=str, - default="outfile.wav", - help="An output file to store the result.", - ) - - parser.add_argument( - "-ifdr", - "--infolder", - type=str, - default="", - help="Input folder. Place all your wav file that need process in this folder.", - ) - parser.add_argument( - "-ofdr", - "--outfolder", - type=str, - default="outfolder", - help="Output folder. The processed files will be stored in this folder.", - ) - - parser.add_argument( - "--mode", help="mode", choices=["0", "1", "2", "all"], default="0" - ) - parser.add_argument('--disable-cuda', help='Set this flag if you do not want to use your gpu.', default=False, action="store_true") - parser.add_argument( - "--silent", - help="Set this flag if you do not want to see any message.", - default=False, - action="store_true", - ) - - args = parser.parse_args() - - if torch.cuda.is_available() and not args.disable_cuda: - cuda = True - else: - cuda = False - - process_file, process_folder = check_arguments(args) - - if not args.silent: - print("Initializing VoiceFixer") - voicefixer = VoiceFixer() - - if not args.silent: - print("Start processing the input file %s." % args.infile) - - if process_file: - audioext = os.path.splitext(os.path.basename(args.infile))[-1] - if audioext != ".wav": - raise ValueError( - "Error: Error processing the input file. We only support the .wav format currently. Please convert your %s format to .wav. Thanks." - % audioext - ) - if args.mode == "all": - for file_mode in range(3): - writefile( - args.infile, - args.outfile, - file_mode, - True, - cuda, - verbose=not args.silent, - ) - else: - writefile( - args.infile, - args.outfile, - args.mode, - False, - cuda, - verbose=not args.silent, - ) - - if process_folder: - if not args.silent: - files = [ - file - for file in os.listdir(args.infolder) - if (os.path.splitext(os.path.basename(file))[-1] == ".wav") - ] - print( - "Found %s .wav files in the input folder %s. Start processing." - % (len(files), args.infolder) - ) - for file in os.listdir(args.infolder): - outbasename, outext = os.path.splitext(os.path.basename(file)) - in_file = os.path.join(args.infolder, file) - out_file = os.path.join(args.outfolder, file) - - if args.mode == "all": - for file_mode in range(3): - writefile( - in_file, - out_file, - file_mode, - True, - cuda, - verbose=not args.silent, - ) - else: - writefile( - in_file, out_file, args.mode, False, cuda, verbose=not args.silent - ) - - if not args.silent: - print("Done") \ No newline at end of file diff --git a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/altair/utils/server.py b/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/altair/utils/server.py deleted file mode 100644 index f2dfc29ec4b5d1cbf37a87fe7ce70fff27b022a5..0000000000000000000000000000000000000000 --- a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/altair/utils/server.py +++ /dev/null @@ -1,148 +0,0 @@ -""" -A Simple server used to show altair graphics from a prompt or script. - -This is adapted from the mpld3 package; see -https://github.com/mpld3/mpld3/blob/master/mpld3/_server.py -""" -import sys -import threading -import webbrowser -import socket -from http import server -from io import BytesIO as IO -import itertools -import random - -JUPYTER_WARNING = """ -Note: if you're in the Jupyter notebook, Chart.serve() is not the best - way to view plots. Consider using Chart.display(). -You must interrupt the kernel to cancel this command. -""" - - -# Mock server used for testing - - -class MockRequest: - def makefile(self, *args, **kwargs): - return IO(b"GET /") - - def sendall(self, response): - pass - - -class MockServer: - def __init__(self, ip_port, Handler): - Handler(MockRequest(), ip_port[0], self) - - def serve_forever(self): - pass - - def server_close(self): - pass - - -def generate_handler(html, files=None): - if files is None: - files = {} - - class MyHandler(server.BaseHTTPRequestHandler): - def do_GET(self): - """Respond to a GET request.""" - if self.path == "/": - self.send_response(200) - self.send_header("Content-type", "text/html") - self.end_headers() - self.wfile.write(html.encode()) - elif self.path in files: - content_type, content = files[self.path] - self.send_response(200) - self.send_header("Content-type", content_type) - self.end_headers() - self.wfile.write(content.encode()) - else: - self.send_error(404) - - return MyHandler - - -def find_open_port(ip, port, n=50): - """Find an open port near the specified port""" - ports = itertools.chain( - (port + i for i in range(n)), (port + random.randint(-2 * n, 2 * n)) - ) - - for port in ports: - s = socket.socket(socket.AF_INET, socket.SOCK_STREAM) - result = s.connect_ex((ip, port)) - s.close() - if result != 0: - return port - raise ValueError("no open ports found") - - -def serve( - html, - ip="127.0.0.1", - port=8888, - n_retries=50, - files=None, - jupyter_warning=True, - open_browser=True, - http_server=None, -): - """Start a server serving the given HTML, and (optionally) open a browser - - Parameters - ---------- - html : string - HTML to serve - ip : string (default = '127.0.0.1') - ip address at which the HTML will be served. - port : int (default = 8888) - the port at which to serve the HTML - n_retries : int (default = 50) - the number of nearby ports to search if the specified port is in use. - files : dictionary (optional) - dictionary of extra content to serve - jupyter_warning : bool (optional) - if True (default), then print a warning if this is used within Jupyter - open_browser : bool (optional) - if True (default), then open a web browser to the given HTML - http_server : class (optional) - optionally specify an HTTPServer class to use for showing the - figure. The default is Python's basic HTTPServer. - """ - port = find_open_port(ip, port, n_retries) - Handler = generate_handler(html, files) - - if http_server is None: - srvr = server.HTTPServer((ip, port), Handler) - else: - srvr = http_server((ip, port), Handler) - - if jupyter_warning: - try: - __IPYTHON__ # noqa - except NameError: - pass - else: - print(JUPYTER_WARNING) - - # Start the server - print("Serving to http://{}:{}/ [Ctrl-C to exit]".format(ip, port)) - sys.stdout.flush() - - if open_browser: - # Use a thread to open a web browser pointing to the server - def b(): - return webbrowser.open("http://{}:{}".format(ip, port)) - - threading.Thread(target=b).start() - - try: - srvr.serve_forever() - except (KeyboardInterrupt, SystemExit): - print("\nstopping Server...") - - srvr.server_close() diff --git a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/fontTools/ttLib/tables/S_V_G_.py b/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/fontTools/ttLib/tables/S_V_G_.py deleted file mode 100644 index ebc2befdfe8540b3fdd6fa19002d708992787f5f..0000000000000000000000000000000000000000 --- a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/fontTools/ttLib/tables/S_V_G_.py +++ /dev/null @@ -1,215 +0,0 @@ -"""Compiles/decompiles SVG table. - -https://docs.microsoft.com/en-us/typography/opentype/spec/svg - -The XML format is: - -.. code-block:: xml - - - - <complete SVG doc> ]] - </svgDoc> - ... - <svgDoc endGlyphID="n" startGlyphID="m"> - <![CDATA[ <complete SVG doc> ]] - </svgDoc> - </SVG> -""" - -from fontTools.misc.textTools import bytesjoin, safeEval, strjoin, tobytes, tostr -from fontTools.misc import sstruct -from . import DefaultTable -from collections.abc import Sequence -from dataclasses import dataclass, astuple -from io import BytesIO -import struct -import logging - - -log = logging.getLogger(__name__) - - -SVG_format_0 = """ - > # big endian - version: H - offsetToSVGDocIndex: L - reserved: L -""" - -SVG_format_0Size = sstruct.calcsize(SVG_format_0) - -doc_index_entry_format_0 = """ - > # big endian - startGlyphID: H - endGlyphID: H - svgDocOffset: L - svgDocLength: L -""" - -doc_index_entry_format_0Size = sstruct.calcsize(doc_index_entry_format_0) - - -class table_S_V_G_(DefaultTable.DefaultTable): - def decompile(self, data, ttFont): - self.docList = [] - # Version 0 is the standardized version of the table; and current. - # https://www.microsoft.com/typography/otspec/svg.htm - sstruct.unpack(SVG_format_0, data[:SVG_format_0Size], self) - if self.version != 0: - log.warning( - "Unknown SVG table version '%s'. Decompiling as version 0.", - self.version, - ) - # read in SVG Documents Index - # data starts with the first entry of the entry list. - pos = subTableStart = self.offsetToSVGDocIndex - self.numEntries = struct.unpack(">H", data[pos : pos + 2])[0] - pos += 2 - if self.numEntries > 0: - data2 = data[pos:] - entries = [] - for i in range(self.numEntries): - record_data = data2[ - i - * doc_index_entry_format_0Size : (i + 1) - * doc_index_entry_format_0Size - ] - docIndexEntry = sstruct.unpack( - doc_index_entry_format_0, record_data, DocumentIndexEntry() - ) - entries.append(docIndexEntry) - - for entry in entries: - start = entry.svgDocOffset + subTableStart - end = start + entry.svgDocLength - doc = data[start:end] - compressed = False - if doc.startswith(b"\x1f\x8b"): - import gzip - - bytesIO = BytesIO(doc) - with gzip.GzipFile(None, "r", fileobj=bytesIO) as gunzipper: - doc = gunzipper.read() - del bytesIO - compressed = True - doc = tostr(doc, "utf_8") - self.docList.append( - SVGDocument(doc, entry.startGlyphID, entry.endGlyphID, compressed) - ) - - def compile(self, ttFont): - version = 0 - offsetToSVGDocIndex = ( - SVG_format_0Size # I start the SVGDocIndex right after the header. - ) - # get SGVDoc info. - docList = [] - entryList = [] - numEntries = len(self.docList) - datum = struct.pack(">H", numEntries) - entryList.append(datum) - curOffset = len(datum) + doc_index_entry_format_0Size * numEntries - seenDocs = {} - allCompressed = getattr(self, "compressed", False) - for i, doc in enumerate(self.docList): - if isinstance(doc, (list, tuple)): - doc = SVGDocument(*doc) - self.docList[i] = doc - docBytes = tobytes(doc.data, encoding="utf_8") - if (allCompressed or doc.compressed) and not docBytes.startswith( - b"\x1f\x8b" - ): - import gzip - - bytesIO = BytesIO() - # mtime=0 strips the useless timestamp and makes gzip output reproducible; - # equivalent to `gzip -n` - with gzip.GzipFile(None, "w", fileobj=bytesIO, mtime=0) as gzipper: - gzipper.write(docBytes) - gzipped = bytesIO.getvalue() - if len(gzipped) < len(docBytes): - docBytes = gzipped - del gzipped, bytesIO - docLength = len(docBytes) - if docBytes in seenDocs: - docOffset = seenDocs[docBytes] - else: - docOffset = curOffset - curOffset += docLength - seenDocs[docBytes] = docOffset - docList.append(docBytes) - entry = struct.pack( - ">HHLL", doc.startGlyphID, doc.endGlyphID, docOffset, docLength - ) - entryList.append(entry) - entryList.extend(docList) - svgDocData = bytesjoin(entryList) - - reserved = 0 - header = struct.pack(">HLL", version, offsetToSVGDocIndex, reserved) - data = [header, svgDocData] - data = bytesjoin(data) - return data - - def toXML(self, writer, ttFont): - for i, doc in enumerate(self.docList): - if isinstance(doc, (list, tuple)): - doc = SVGDocument(*doc) - self.docList[i] = doc - attrs = {"startGlyphID": doc.startGlyphID, "endGlyphID": doc.endGlyphID} - if doc.compressed: - attrs["compressed"] = 1 - writer.begintag("svgDoc", **attrs) - writer.newline() - writer.writecdata(doc.data) - writer.newline() - writer.endtag("svgDoc") - writer.newline() - - def fromXML(self, name, attrs, content, ttFont): - if name == "svgDoc": - if not hasattr(self, "docList"): - self.docList = [] - doc = strjoin(content) - doc = doc.strip() - startGID = int(attrs["startGlyphID"]) - endGID = int(attrs["endGlyphID"]) - compressed = bool(safeEval(attrs.get("compressed", "0"))) - self.docList.append(SVGDocument(doc, startGID, endGID, compressed)) - else: - log.warning("Unknown %s %s", name, content) - - -class DocumentIndexEntry(object): - def __init__(self): - self.startGlyphID = None # USHORT - self.endGlyphID = None # USHORT - self.svgDocOffset = None # ULONG - self.svgDocLength = None # ULONG - - def __repr__(self): - return ( - "startGlyphID: %s, endGlyphID: %s, svgDocOffset: %s, svgDocLength: %s" - % (self.startGlyphID, self.endGlyphID, self.svgDocOffset, self.svgDocLength) - ) - - -@dataclass -class SVGDocument(Sequence): - data: str - startGlyphID: int - endGlyphID: int - compressed: bool = False - - # Previously, the SVG table's docList attribute contained a lists of 3 items: - # [doc, startGlyphID, endGlyphID]; later, we added a `compressed` attribute. - # For backward compatibility with code that depends of them being sequences of - # fixed length=3, we subclass the Sequence abstract base class and pretend only - # the first three items are present. 'compressed' is only accessible via named - # attribute lookup like regular dataclasses: i.e. `doc.compressed`, not `doc[3]` - def __getitem__(self, index): - return astuple(self)[:3][index] - - def __len__(self): - return 3 diff --git a/spaces/josuelmet/Metal_Music_Interpolator/minigpt/README.md b/spaces/josuelmet/Metal_Music_Interpolator/minigpt/README.md deleted file mode 100644 index c9a5c0b7be033ab41d78d9215666fcd05993a7e3..0000000000000000000000000000000000000000 --- a/spaces/josuelmet/Metal_Music_Interpolator/minigpt/README.md +++ /dev/null @@ -1,2 +0,0 @@ -The trained model used for generation. -Achieved 72% training accuracy and 63% validation accuracy. \ No newline at end of file diff --git a/spaces/kaicheng/ChatGPT_ad/assets/custom.js b/spaces/kaicheng/ChatGPT_ad/assets/custom.js deleted file mode 100644 index f013209931218fd054979e290706f1945de76856..0000000000000000000000000000000000000000 --- a/spaces/kaicheng/ChatGPT_ad/assets/custom.js +++ /dev/null @@ -1,502 +0,0 @@ - -// custom javascript here - -const MAX_HISTORY_LENGTH = 32; - -var key_down_history = []; -var currentIndex = -1; -var user_input_ta; - -var gradioContainer = null; -var user_input_ta = null; -var user_input_tb = null; -var userInfoDiv = null; -var appTitleDiv = null; -var chatbot = null; -var chatbotWrap = null; -var apSwitch = null; -var empty_botton = null; -var messageBotDivs = null; -var loginUserForm = null; -var logginUser = null; - -var userLogged = false; -var usernameGotten = false; -var historyLoaded = false; - -var ga = document.getElementsByTagName("gradio-app"); -var targetNode = ga[0]; -var isInIframe = (window.self !== window.top); -var language = navigator.language.slice(0,2); - -var forView_i18n = { - 'zh': "仅供查看", - 'en': "For viewing only", - 'ja': "閲覧専用", - 'fr': "Pour consultation seulement", - 'es': "Solo para visualización", -}; - -// gradio 页面加载好了么??? 我能动你的元素了么?? -function gradioLoaded(mutations) { - for (var i = 0; i < mutations.length; i++) { - if (mutations[i].addedNodes.length) { - loginUserForm = document.querySelector(".gradio-container > .main > .wrap > .panel > .form") - gradioContainer = document.querySelector(".gradio-container"); - user_input_tb = document.getElementById('user_input_tb'); - userInfoDiv = document.getElementById("user_info"); - appTitleDiv = document.getElementById("app_title"); - chatbot = document.querySelector('#chuanhu_chatbot'); - chatbotWrap = document.querySelector('#chuanhu_chatbot > .wrap'); - apSwitch = document.querySelector('.apSwitch input[type="checkbox"]'); - empty_botton = document.getElementById("empty_btn") - - if (loginUserForm) { - localStorage.setItem("userLogged", true); - userLogged = true; - } - - if (gradioContainer && apSwitch) { // gradioCainter 加载出来了没? - adjustDarkMode(); - } - if (user_input_tb) { // user_input_tb 加载出来了没? - selectHistory(); - } - if (userInfoDiv && appTitleDiv) { // userInfoDiv 和 appTitleDiv 加载出来了没? - if (!usernameGotten) { - getUserInfo(); - } - setTimeout(showOrHideUserInfo(), 2000); - } - if (chatbot) { // chatbot 加载出来了没? - setChatbotHeight(); - } - if (chatbotWrap) { - if (!historyLoaded) { - loadHistoryHtml(); - } - setChatbotScroll(); - } - if (empty_botton) { - emptyHistory(); - } - } - } -} - -function webLocale() { - console.log("webLocale", language); - if (forView_i18n.hasOwnProperty(language)) { - var forView = forView_i18n[language]; - var forViewStyle = document.createElement('style'); - forViewStyle.innerHTML = '.wrap>.history-message>:last-child::after { content: "' + forView + '"!important; }'; - document.head.appendChild(forViewStyle); - // console.log("added forViewStyle", forView); - } -} - -function selectHistory() { - user_input_ta = user_input_tb.querySelector("textarea"); - if (user_input_ta) { - observer.disconnect(); // 停止监听 - // 在 textarea 上监听 keydown 事件 - user_input_ta.addEventListener("keydown", function (event) { - var value = user_input_ta.value.trim(); - // 判断按下的是否为方向键 - if (event.code === 'ArrowUp' || event.code === 'ArrowDown') { - // 如果按下的是方向键,且输入框中有内容,且历史记录中没有该内容,则不执行操作 - if (value && key_down_history.indexOf(value) === -1) - return; - // 对于需要响应的动作,阻止默认行为。 - event.preventDefault(); - var length = key_down_history.length; - if (length === 0) { - currentIndex = -1; // 如果历史记录为空,直接将当前选中的记录重置 - return; - } - if (currentIndex === -1) { - currentIndex = length; - } - if (event.code === 'ArrowUp' && currentIndex > 0) { - currentIndex--; - user_input_ta.value = key_down_history[currentIndex]; - } else if (event.code === 'ArrowDown' && currentIndex < length - 1) { - currentIndex++; - user_input_ta.value = key_down_history[currentIndex]; - } - user_input_ta.selectionStart = user_input_ta.value.length; - user_input_ta.selectionEnd = user_input_ta.value.length; - const input_event = new InputEvent("input", { bubbles: true, cancelable: true }); - user_input_ta.dispatchEvent(input_event); - } else if (event.code === "Enter") { - if (value) { - currentIndex = -1; - if (key_down_history.indexOf(value) === -1) { - key_down_history.push(value); - if (key_down_history.length > MAX_HISTORY_LENGTH) { - key_down_history.shift(); - } - } - } - } - }); - } -} - -var username = null; -function getUserInfo() { - if (usernameGotten) { - return; - } - userLogged = localStorage.getItem('userLogged'); - if (userLogged) { - username = userInfoDiv.innerText; - if (username) { - if (username.includes("getting user info…")) { - setTimeout(getUserInfo, 500); - return; - } else if (username === " ") { - localStorage.removeItem("username"); - localStorage.removeItem("userLogged") - userLogged = false; - usernameGotten = true; - return; - } else { - username = username.match(/User:\s*(.*)/)[1] || username; - localStorage.setItem("username", username); - usernameGotten = true; - clearHistoryHtml(); - } - } - } -} - -function toggleUserInfoVisibility(shouldHide) { - if (userInfoDiv) { - if (shouldHide) { - userInfoDiv.classList.add("hideK"); - } else { - userInfoDiv.classList.remove("hideK"); - } - } -} -function showOrHideUserInfo() { - var sendBtn = document.getElementById("submit_btn"); - - // Bind mouse/touch events to show/hide user info - appTitleDiv.addEventListener("mouseenter", function () { - toggleUserInfoVisibility(false); - }); - userInfoDiv.addEventListener("mouseenter", function () { - toggleUserInfoVisibility(false); - }); - sendBtn.addEventListener("mouseenter", function () { - toggleUserInfoVisibility(false); - }); - - appTitleDiv.addEventListener("mouseleave", function () { - toggleUserInfoVisibility(true); - }); - userInfoDiv.addEventListener("mouseleave", function () { - toggleUserInfoVisibility(true); - }); - sendBtn.addEventListener("mouseleave", function () { - toggleUserInfoVisibility(true); - }); - - appTitleDiv.ontouchstart = function () { - toggleUserInfoVisibility(false); - }; - userInfoDiv.ontouchstart = function () { - toggleUserInfoVisibility(false); - }; - sendBtn.ontouchstart = function () { - toggleUserInfoVisibility(false); - }; - - appTitleDiv.ontouchend = function () { - setTimeout(function () { - toggleUserInfoVisibility(true); - }, 3000); - }; - userInfoDiv.ontouchend = function () { - setTimeout(function () { - toggleUserInfoVisibility(true); - }, 3000); - }; - sendBtn.ontouchend = function () { - setTimeout(function () { - toggleUserInfoVisibility(true); - }, 3000); // Delay 1 second to hide user info - }; - - // Hide user info after 2 second - setTimeout(function () { - toggleUserInfoVisibility(true); - }, 2000); -} - -function toggleDarkMode(isEnabled) { - if (isEnabled) { - document.body.classList.add("dark"); - document.body.style.setProperty("background-color", "var(--neutral-950)", "important"); - } else { - document.body.classList.remove("dark"); - document.body.style.backgroundColor = ""; - } -} -function adjustDarkMode() { - const darkModeQuery = window.matchMedia("(prefers-color-scheme: dark)"); - - // 根据当前颜色模式设置初始状态 - apSwitch.checked = darkModeQuery.matches; - toggleDarkMode(darkModeQuery.matches); - // 监听颜色模式变化 - darkModeQuery.addEventListener("change", (e) => { - apSwitch.checked = e.matches; - toggleDarkMode(e.matches); - }); - // apSwitch = document.querySelector('.apSwitch input[type="checkbox"]'); - apSwitch.addEventListener("change", (e) => { - toggleDarkMode(e.target.checked); - }); -} - -function setChatbotHeight() { - const screenWidth = window.innerWidth; - const statusDisplay = document.querySelector('#status_display'); - const statusDisplayHeight = statusDisplay ? statusDisplay.offsetHeight : 0; - const wrap = chatbot.querySelector('.wrap'); - const vh = window.innerHeight * 0.01; - document.documentElement.style.setProperty('--vh', `${vh}px`); - if (isInIframe) { - chatbot.style.height = `700px`; - wrap.style.maxHeight = `calc(700px - var(--line-sm) * 1rem - 2 * var(--block-label-margin))` - } else { - if (screenWidth <= 320) { - chatbot.style.height = `calc(var(--vh, 1vh) * 100 - ${statusDisplayHeight + 150}px)`; - wrap.style.maxHeight = `calc(var(--vh, 1vh) * 100 - ${statusDisplayHeight + 150}px - var(--line-sm) * 1rem - 2 * var(--block-label-margin))`; - } else if (screenWidth <= 499) { - chatbot.style.height = `calc(var(--vh, 1vh) * 100 - ${statusDisplayHeight + 100}px)`; - wrap.style.maxHeight = `calc(var(--vh, 1vh) * 100 - ${statusDisplayHeight + 100}px - var(--line-sm) * 1rem - 2 * var(--block-label-margin))`; - } else { - chatbot.style.height = `calc(var(--vh, 1vh) * 100 - ${statusDisplayHeight + 160}px)`; - wrap.style.maxHeight = `calc(var(--vh, 1vh) * 100 - ${statusDisplayHeight + 160}px - var(--line-sm) * 1rem - 2 * var(--block-label-margin))`; - } - } -} -function setChatbotScroll() { - var scrollHeight = chatbotWrap.scrollHeight; - chatbotWrap.scrollTo(0,scrollHeight) -} -var rangeInputs = null; -var numberInputs = null; -function setSlider() { - rangeInputs = document.querySelectorAll('input[type="range"]'); - numberInputs = document.querySelectorAll('input[type="number"]') - setSliderRange(); - rangeInputs.forEach(rangeInput => { - rangeInput.addEventListener('input', setSliderRange); - }); - numberInputs.forEach(numberInput => { - numberInput.addEventListener('input', setSliderRange); - }) -} -function setSliderRange() { - var range = document.querySelectorAll('input[type="range"]'); - range.forEach(range => { - range.style.backgroundSize = (range.value - range.min) / (range.max - range.min) * 100 + '% 100%'; - }); -} - -function addChuanhuButton(botElement) { - var rawMessage = null; - var mdMessage = null; - rawMessage = botElement.querySelector('.raw-message'); - mdMessage = botElement.querySelector('.md-message'); - if (!rawMessage) { - var buttons = botElement.querySelectorAll('button.chuanhu-btn'); - for (var i = 0; i < buttons.length; i++) { - buttons[i].parentNode.removeChild(buttons[i]); - } - return; - } - var copyButton = null; - var toggleButton = null; - copyButton = botElement.querySelector('button.copy-bot-btn'); - toggleButton = botElement.querySelector('button.toggle-md-btn'); - if (copyButton) copyButton.remove(); - if (toggleButton) toggleButton.remove(); - - // Copy bot button - var copyButton = document.createElement('button'); - copyButton.classList.add('chuanhu-btn'); - copyButton.classList.add('copy-bot-btn'); - copyButton.setAttribute('aria-label', 'Copy'); - copyButton.innerHTML = copyIcon; - copyButton.addEventListener('click', () => { - const textToCopy = rawMessage.innerText; - navigator.clipboard - .writeText(textToCopy) - .then(() => { - copyButton.innerHTML = copiedIcon; - setTimeout(() => { - copyButton.innerHTML = copyIcon; - }, 1500); - }) - .catch(() => { - console.error("copy failed"); - }); - }); - botElement.appendChild(copyButton); - - // Toggle button - var toggleButton = document.createElement('button'); - toggleButton.classList.add('chuanhu-btn'); - toggleButton.classList.add('toggle-md-btn'); - toggleButton.setAttribute('aria-label', 'Toggle'); - var renderMarkdown = mdMessage.classList.contains('hideM'); - toggleButton.innerHTML = renderMarkdown ? mdIcon : rawIcon; - toggleButton.addEventListener('click', () => { - renderMarkdown = mdMessage.classList.contains('hideM'); - if (renderMarkdown){ - renderMarkdownText(botElement); - toggleButton.innerHTML=rawIcon; - } else { - removeMarkdownText(botElement); - toggleButton.innerHTML=mdIcon; - } - }); - botElement.insertBefore(toggleButton, copyButton); -} - -function renderMarkdownText(message) { - var mdDiv = message.querySelector('.md-message'); - if (mdDiv) mdDiv.classList.remove('hideM'); - var rawDiv = message.querySelector('.raw-message'); - if (rawDiv) rawDiv.classList.add('hideM'); -} -function removeMarkdownText(message) { - var rawDiv = message.querySelector('.raw-message'); - if (rawDiv) rawDiv.classList.remove('hideM'); - var mdDiv = message.querySelector('.md-message'); - if (mdDiv) mdDiv.classList.add('hideM'); -} - -let timeoutId; -let isThrottled = false; -var mmutation -// 监听所有元素中 bot message 的变化,为 bot 消息添加复制按钮。 -var mObserver = new MutationObserver(function (mutationsList) { - for (mmutation of mutationsList) { - if (mmutation.type === 'childList') { - for (var node of mmutation.addedNodes) { - if (node.nodeType === 1 && node.classList.contains('message') && node.getAttribute('data-testid') === 'bot') { - saveHistoryHtml(); - document.querySelectorAll('#chuanhu_chatbot>.wrap>.message-wrap .message.bot').forEach(addChuanhuButton); - } - if (node.tagName === 'INPUT' && node.getAttribute('type') === 'range') { - setSlider(); - } - } - for (var node of mmutation.removedNodes) { - if (node.nodeType === 1 && node.classList.contains('message') && node.getAttribute('data-testid') === 'bot') { - saveHistoryHtml(); - document.querySelectorAll('#chuanhu_chatbot>.wrap>.message-wrap .message.bot').forEach(addChuanhuButton); - } - } - } else if (mmutation.type === 'attributes') { - if (mmutation.target.nodeType === 1 && mmutation.target.classList.contains('message') && mmutation.target.getAttribute('data-testid') === 'bot') { - if (isThrottled) break; // 为了防止重复不断疯狂渲染,加上等待_(:з」∠)_ - isThrottled = true; - clearTimeout(timeoutId); - timeoutId = setTimeout(() => { - isThrottled = false; - document.querySelectorAll('#chuanhu_chatbot>.wrap>.message-wrap .message.bot').forEach(addChuanhuButton); - saveHistoryHtml(); - }, 500); - } - } - } -}); -mObserver.observe(document.documentElement, { attributes: true, childList: true, subtree: true }); - -var loadhistorytime = 0; // for debugging -function saveHistoryHtml() { - var historyHtml = document.querySelector('#chuanhu_chatbot > .wrap'); - localStorage.setItem('chatHistory', historyHtml.innerHTML); - // console.log("History Saved") - historyLoaded = false; -} -function loadHistoryHtml() { - var historyHtml = localStorage.getItem('chatHistory'); - if (!historyHtml) { - historyLoaded = true; - return; // no history, do nothing - } - userLogged = localStorage.getItem('userLogged'); - if (userLogged){ - historyLoaded = true; - return; // logged in, do nothing - } - if (!historyLoaded) { - var tempDiv = document.createElement('div'); - tempDiv.innerHTML = historyHtml; - var buttons = tempDiv.querySelectorAll('button.chuanhu-btn'); - var gradioCopyButtons = tempDiv.querySelectorAll('button.copy_code_button'); - for (var i = 0; i < buttons.length; i++) { - buttons[i].parentNode.removeChild(buttons[i]); - } - for (var i = 0; i < gradioCopyButtons.length; i++) { - gradioCopyButtons[i].parentNode.removeChild(gradioCopyButtons[i]); - } - var fakeHistory = document.createElement('div'); - fakeHistory.classList.add('history-message'); - fakeHistory.innerHTML = tempDiv.innerHTML; - webLocale(); - chatbotWrap.insertBefore(fakeHistory, chatbotWrap.firstChild); - // var fakeHistory = document.createElement('div'); - // fakeHistory.classList.add('history-message'); - // fakeHistory.innerHTML = historyHtml; - // chatbotWrap.insertBefore(fakeHistory, chatbotWrap.firstChild); - historyLoaded = true; - console.log("History Loaded"); - loadhistorytime += 1; // for debugging - } else { - historyLoaded = false; - } -} -function clearHistoryHtml() { - localStorage.removeItem("chatHistory"); - historyMessages = chatbotWrap.querySelector('.history-message'); - if (historyMessages) { - chatbotWrap.removeChild(historyMessages); - console.log("History Cleared"); - } -} -function emptyHistory() { - empty_botton.addEventListener("click", function () { - clearHistoryHtml(); - }); -} - -// 监视页面内部 DOM 变动 -var observer = new MutationObserver(function (mutations) { - gradioLoaded(mutations); -}); -observer.observe(targetNode, { childList: true, subtree: true }); - -// 监视页面变化 -window.addEventListener("DOMContentLoaded", function () { - isInIframe = (window.self !== window.top); - historyLoaded = false; -}); -window.addEventListener('resize', setChatbotHeight); -window.addEventListener('scroll', setChatbotHeight); -window.matchMedia("(prefers-color-scheme: dark)").addEventListener("change", adjustDarkMode); - -// button svg code -const copyIcon = '<span><svg stroke="currentColor" fill="none" stroke-width="2" viewBox="0 0 24 24" stroke-linecap="round" stroke-linejoin="round" height=".8em" width=".8em" xmlns="http://www.w3.org/2000/svg"><rect x="9" y="9" width="13" height="13" rx="2" ry="2"></rect><path d="M5 15H4a2 2 0 0 1-2-2V4a2 2 0 0 1 2-2h9a2 2 0 0 1 2 2v1"></path></svg></span>'; -const copiedIcon = '<span><svg stroke="currentColor" fill="none" stroke-width="2" viewBox="0 0 24 24" stroke-linecap="round" stroke-linejoin="round" height=".8em" width=".8em" xmlns="http://www.w3.org/2000/svg"><polyline points="20 6 9 17 4 12"></polyline></svg></span>'; -const mdIcon = '<span><svg stroke="currentColor" fill="none" stroke-width="1" viewBox="0 0 14 18" stroke-linecap="round" stroke-linejoin="round" height=".8em" width=".8em" xmlns="http://www.w3.org/2000/svg"><g transform-origin="center" transform="scale(0.85)"><path d="M1.5,0 L12.5,0 C13.3284271,-1.52179594e-16 14,0.671572875 14,1.5 L14,16.5 C14,17.3284271 13.3284271,18 12.5,18 L1.5,18 C0.671572875,18 1.01453063e-16,17.3284271 0,16.5 L0,1.5 C-1.01453063e-16,0.671572875 0.671572875,1.52179594e-16 1.5,0 Z" stroke-width="1.8"></path><line x1="3.5" y1="3.5" x2="10.5" y2="3.5"></line><line x1="3.5" y1="6.5" x2="8" y2="6.5"></line></g><path d="M4,9 L10,9 C10.5522847,9 11,9.44771525 11,10 L11,13.5 C11,14.0522847 10.5522847,14.5 10,14.5 L4,14.5 C3.44771525,14.5 3,14.0522847 3,13.5 L3,10 C3,9.44771525 3.44771525,9 4,9 Z" stroke="none" fill="currentColor"></path></svg></span>'; -const rawIcon = '<span><svg stroke="currentColor" fill="none" stroke-width="1.8" viewBox="0 0 18 14" stroke-linecap="round" stroke-linejoin="round" height=".8em" width=".8em" xmlns="http://www.w3.org/2000/svg"><g transform-origin="center" transform="scale(0.85)"><polyline points="4 3 0 7 4 11"></polyline><polyline points="14 3 18 7 14 11"></polyline><line x1="12" y1="0" x2="6" y2="14"></line></g></svg></span>'; diff --git a/spaces/kepl/add/style.css b/spaces/kepl/add/style.css deleted file mode 100644 index 114adf441e9032febb46bc056b2a8bb651075f0d..0000000000000000000000000000000000000000 --- a/spaces/kepl/add/style.css +++ /dev/null @@ -1,28 +0,0 @@ -body { - padding: 2rem; - font-family: -apple-system, BlinkMacSystemFont, "Arial", sans-serif; -} - -h1 { - font-size: 16px; - margin-top: 0; -} - -p { - color: rgb(107, 114, 128); - font-size: 15px; - margin-bottom: 10px; - margin-top: 5px; -} - -.card { - max-width: 620px; - margin: 0 auto; - padding: 16px; - border: 1px solid lightgray; - border-radius: 16px; -} - -.card p:last-child { - margin-bottom: 0; -} diff --git a/spaces/kingabzpro/Urdu-ASR-SOTA/run_eval.sh b/spaces/kingabzpro/Urdu-ASR-SOTA/run_eval.sh deleted file mode 100644 index 3e4ab51802e30af67c7c4a9231300445348ff867..0000000000000000000000000000000000000000 --- a/spaces/kingabzpro/Urdu-ASR-SOTA/run_eval.sh +++ /dev/null @@ -1 +0,0 @@ -python3 ./eval.py --model_id ./Model --dataset ./Data --config ur --split test --chunk_length_s 5.0 --stride_length_s 1.0 --log_outputs diff --git a/spaces/kira4424/Tacotron-zero-short-voice-clone/synthesizer/models/sublayer/cbhg.py b/spaces/kira4424/Tacotron-zero-short-voice-clone/synthesizer/models/sublayer/cbhg.py deleted file mode 100644 index 10eb6bb85dd2a1711fe7c92ec77bbaaf786f7a53..0000000000000000000000000000000000000000 --- a/spaces/kira4424/Tacotron-zero-short-voice-clone/synthesizer/models/sublayer/cbhg.py +++ /dev/null @@ -1,85 +0,0 @@ -import torch -import torch.nn as nn -from .common.batch_norm_conv import BatchNormConv -from .common.highway_network import HighwayNetwork - -class CBHG(nn.Module): - def __init__(self, K, in_channels, channels, proj_channels, num_highways): - super().__init__() - - # List of all rnns to call `flatten_parameters()` on - self._to_flatten = [] - - self.bank_kernels = [i for i in range(1, K + 1)] - self.conv1d_bank = nn.ModuleList() - for k in self.bank_kernels: - conv = BatchNormConv(in_channels, channels, k) - self.conv1d_bank.append(conv) - - self.maxpool = nn.MaxPool1d(kernel_size=2, stride=1, padding=1) - - self.conv_project1 = BatchNormConv(len(self.bank_kernels) * channels, proj_channels[0], 3) - self.conv_project2 = BatchNormConv(proj_channels[0], proj_channels[1], 3, relu=False) - - # Fix the highway input if necessary - if proj_channels[-1] != channels: - self.highway_mismatch = True - self.pre_highway = nn.Linear(proj_channels[-1], channels, bias=False) - else: - self.highway_mismatch = False - - self.highways = nn.ModuleList() - for i in range(num_highways): - hn = HighwayNetwork(channels) - self.highways.append(hn) - - self.rnn = nn.GRU(channels, channels // 2, batch_first=True, bidirectional=True) - self._to_flatten.append(self.rnn) - - # Avoid fragmentation of RNN parameters and associated warning - self._flatten_parameters() - - def forward(self, x): - # Although we `_flatten_parameters()` on init, when using DataParallel - # the model gets replicated, making it no longer guaranteed that the - # weights are contiguous in GPU memory. Hence, we must call it again - self.rnn.flatten_parameters() - - # Save these for later - residual = x - seq_len = x.size(-1) - conv_bank = [] - - # Convolution Bank - for conv in self.conv1d_bank: - c = conv(x) # Convolution - conv_bank.append(c[:, :, :seq_len]) - - # Stack along the channel axis - conv_bank = torch.cat(conv_bank, dim=1) - - # dump the last padding to fit residual - x = self.maxpool(conv_bank)[:, :, :seq_len] - - # Conv1d projections - x = self.conv_project1(x) - x = self.conv_project2(x) - - # Residual Connect - x = x + residual - - # Through the highways - x = x.transpose(1, 2) - if self.highway_mismatch is True: - x = self.pre_highway(x) - for h in self.highways: x = h(x) - - # And then the RNN - x, _ = self.rnn(x) - return x - - def _flatten_parameters(self): - """Calls `flatten_parameters` on all the rnns used by the WaveRNN. Used - to improve efficiency and avoid PyTorch yelling at us.""" - [m.flatten_parameters() for m in self._to_flatten] - diff --git a/spaces/kira4424/Tacotron-zero-short-voice-clone/synthesizer/models/sublayer/common/highway_network.py b/spaces/kira4424/Tacotron-zero-short-voice-clone/synthesizer/models/sublayer/common/highway_network.py deleted file mode 100644 index d311c6924db6dfc247f69cc266d6c1975b6e03cd..0000000000000000000000000000000000000000 --- a/spaces/kira4424/Tacotron-zero-short-voice-clone/synthesizer/models/sublayer/common/highway_network.py +++ /dev/null @@ -1,17 +0,0 @@ -import torch -import torch.nn as nn -import torch.nn.functional as F - -class HighwayNetwork(nn.Module): - def __init__(self, size): - super().__init__() - self.W1 = nn.Linear(size, size) - self.W2 = nn.Linear(size, size) - self.W1.bias.data.fill_(0.) - - def forward(self, x): - x1 = self.W1(x) - x2 = self.W2(x) - g = torch.sigmoid(x2) - y = g * F.relu(x1) + (1. - g) * x - return y diff --git a/spaces/kiroiineko/rvc-models-tragamundos/infer_pack/models_onnx.py b/spaces/kiroiineko/rvc-models-tragamundos/infer_pack/models_onnx.py deleted file mode 100644 index 3cdae2f7f8591a1e43b1d8520baa37b7e9744d72..0000000000000000000000000000000000000000 --- a/spaces/kiroiineko/rvc-models-tragamundos/infer_pack/models_onnx.py +++ /dev/null @@ -1,849 +0,0 @@ -import math, pdb, os -from time import time as ttime -import torch -from torch import nn -from torch.nn import functional as F -from infer_pack import modules -from infer_pack import attentions -from infer_pack import commons -from infer_pack.commons import init_weights, get_padding -from torch.nn import Conv1d, ConvTranspose1d, AvgPool1d, Conv2d -from torch.nn.utils import weight_norm, remove_weight_norm, spectral_norm -from infer_pack.commons import init_weights -import numpy as np -from infer_pack import commons - - -class TextEncoder256(nn.Module): - def __init__( - self, - out_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - f0=True, - ): - super().__init__() - self.out_channels = out_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.emb_phone = nn.Linear(256, hidden_channels) - self.lrelu = nn.LeakyReLU(0.1, inplace=True) - if f0 == True: - self.emb_pitch = nn.Embedding(256, hidden_channels) # pitch 256 - self.encoder = attentions.Encoder( - hidden_channels, filter_channels, n_heads, n_layers, kernel_size, p_dropout - ) - self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1) - - def forward(self, phone, pitch, lengths): - if pitch == None: - x = self.emb_phone(phone) - else: - x = self.emb_phone(phone) + self.emb_pitch(pitch) - x = x * math.sqrt(self.hidden_channels) # [b, t, h] - x = self.lrelu(x) - x = torch.transpose(x, 1, -1) # [b, h, t] - x_mask = torch.unsqueeze(commons.sequence_mask(lengths, x.size(2)), 1).to( - x.dtype - ) - x = self.encoder(x * x_mask, x_mask) - stats = self.proj(x) * x_mask - - m, logs = torch.split(stats, self.out_channels, dim=1) - return m, logs, x_mask - - -class TextEncoder256Sim(nn.Module): - def __init__( - self, - out_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - f0=True, - ): - super().__init__() - self.out_channels = out_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.emb_phone = nn.Linear(256, hidden_channels) - self.lrelu = nn.LeakyReLU(0.1, inplace=True) - if f0 == True: - self.emb_pitch = nn.Embedding(256, hidden_channels) # pitch 256 - self.encoder = attentions.Encoder( - hidden_channels, filter_channels, n_heads, n_layers, kernel_size, p_dropout - ) - self.proj = nn.Conv1d(hidden_channels, out_channels, 1) - - def forward(self, phone, pitch, lengths): - if pitch == None: - x = self.emb_phone(phone) - else: - x = self.emb_phone(phone) + self.emb_pitch(pitch) - x = x * math.sqrt(self.hidden_channels) # [b, t, h] - x = self.lrelu(x) - x = torch.transpose(x, 1, -1) # [b, h, t] - x_mask = torch.unsqueeze(commons.sequence_mask(lengths, x.size(2)), 1).to( - x.dtype - ) - x = self.encoder(x * x_mask, x_mask) - x = self.proj(x) * x_mask - return x, x_mask - - -class ResidualCouplingBlock(nn.Module): - def __init__( - self, - channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - n_flows=4, - gin_channels=0, - ): - super().__init__() - self.channels = channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.n_flows = n_flows - self.gin_channels = gin_channels - - self.flows = nn.ModuleList() - for i in range(n_flows): - self.flows.append( - modules.ResidualCouplingLayer( - channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - gin_channels=gin_channels, - mean_only=True, - ) - ) - self.flows.append(modules.Flip()) - - def forward(self, x, x_mask, g=None, reverse=False): - if not reverse: - for flow in self.flows: - x, _ = flow(x, x_mask, g=g, reverse=reverse) - else: - for flow in reversed(self.flows): - x = flow(x, x_mask, g=g, reverse=reverse) - return x - - def remove_weight_norm(self): - for i in range(self.n_flows): - self.flows[i * 2].remove_weight_norm() - - -class PosteriorEncoder(nn.Module): - def __init__( - self, - in_channels, - out_channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - gin_channels=0, - ): - super().__init__() - self.in_channels = in_channels - self.out_channels = out_channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.gin_channels = gin_channels - - self.pre = nn.Conv1d(in_channels, hidden_channels, 1) - self.enc = modules.WN( - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - gin_channels=gin_channels, - ) - self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1) - - def forward(self, x, x_lengths, g=None): - x_mask = torch.unsqueeze(commons.sequence_mask(x_lengths, x.size(2)), 1).to( - x.dtype - ) - x = self.pre(x) * x_mask - x = self.enc(x, x_mask, g=g) - stats = self.proj(x) * x_mask - m, logs = torch.split(stats, self.out_channels, dim=1) - z = (m + torch.randn_like(m) * torch.exp(logs)) * x_mask - return z, m, logs, x_mask - - def remove_weight_norm(self): - self.enc.remove_weight_norm() - - -class Generator(torch.nn.Module): - def __init__( - self, - initial_channel, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - gin_channels=0, - ): - super(Generator, self).__init__() - self.num_kernels = len(resblock_kernel_sizes) - self.num_upsamples = len(upsample_rates) - self.conv_pre = Conv1d( - initial_channel, upsample_initial_channel, 7, 1, padding=3 - ) - resblock = modules.ResBlock1 if resblock == "1" else modules.ResBlock2 - - self.ups = nn.ModuleList() - for i, (u, k) in enumerate(zip(upsample_rates, upsample_kernel_sizes)): - self.ups.append( - weight_norm( - ConvTranspose1d( - upsample_initial_channel // (2**i), - upsample_initial_channel // (2 ** (i + 1)), - k, - u, - padding=(k - u) // 2, - ) - ) - ) - - self.resblocks = nn.ModuleList() - for i in range(len(self.ups)): - ch = upsample_initial_channel // (2 ** (i + 1)) - for j, (k, d) in enumerate( - zip(resblock_kernel_sizes, resblock_dilation_sizes) - ): - self.resblocks.append(resblock(ch, k, d)) - - self.conv_post = Conv1d(ch, 1, 7, 1, padding=3, bias=False) - self.ups.apply(init_weights) - - if gin_channels != 0: - self.cond = nn.Conv1d(gin_channels, upsample_initial_channel, 1) - - def forward(self, x, g=None): - x = self.conv_pre(x) - if g is not None: - x = x + self.cond(g) - - for i in range(self.num_upsamples): - x = F.leaky_relu(x, modules.LRELU_SLOPE) - x = self.ups[i](x) - xs = None - for j in range(self.num_kernels): - if xs is None: - xs = self.resblocks[i * self.num_kernels + j](x) - else: - xs += self.resblocks[i * self.num_kernels + j](x) - x = xs / self.num_kernels - x = F.leaky_relu(x) - x = self.conv_post(x) - x = torch.tanh(x) - - return x - - def remove_weight_norm(self): - for l in self.ups: - remove_weight_norm(l) - for l in self.resblocks: - l.remove_weight_norm() - - -class SineGen(torch.nn.Module): - """Definition of sine generator - SineGen(samp_rate, harmonic_num = 0, - sine_amp = 0.1, noise_std = 0.003, - voiced_threshold = 0, - flag_for_pulse=False) - samp_rate: sampling rate in Hz - harmonic_num: number of harmonic overtones (default 0) - sine_amp: amplitude of sine-wavefrom (default 0.1) - noise_std: std of Gaussian noise (default 0.003) - voiced_thoreshold: F0 threshold for U/V classification (default 0) - flag_for_pulse: this SinGen is used inside PulseGen (default False) - Note: when flag_for_pulse is True, the first time step of a voiced - segment is always sin(np.pi) or cos(0) - """ - - def __init__( - self, - samp_rate, - harmonic_num=0, - sine_amp=0.1, - noise_std=0.003, - voiced_threshold=0, - flag_for_pulse=False, - ): - super(SineGen, self).__init__() - self.sine_amp = sine_amp - self.noise_std = noise_std - self.harmonic_num = harmonic_num - self.dim = self.harmonic_num + 1 - self.sampling_rate = samp_rate - self.voiced_threshold = voiced_threshold - - def _f02uv(self, f0): - # generate uv signal - uv = torch.ones_like(f0) - uv = uv * (f0 > self.voiced_threshold) - return uv - - def forward(self, f0, upp): - """sine_tensor, uv = forward(f0) - input F0: tensor(batchsize=1, length, dim=1) - f0 for unvoiced steps should be 0 - output sine_tensor: tensor(batchsize=1, length, dim) - output uv: tensor(batchsize=1, length, 1) - """ - with torch.no_grad(): - f0 = f0[:, None].transpose(1, 2) - f0_buf = torch.zeros(f0.shape[0], f0.shape[1], self.dim, device=f0.device) - # fundamental component - f0_buf[:, :, 0] = f0[:, :, 0] - for idx in np.arange(self.harmonic_num): - f0_buf[:, :, idx + 1] = f0_buf[:, :, 0] * ( - idx + 2 - ) # idx + 2: the (idx+1)-th overtone, (idx+2)-th harmonic - rad_values = (f0_buf / self.sampling_rate) % 1 ###%1意味着n_har的乘积无法后处理优化 - rand_ini = torch.rand( - f0_buf.shape[0], f0_buf.shape[2], device=f0_buf.device - ) - rand_ini[:, 0] = 0 - rad_values[:, 0, :] = rad_values[:, 0, :] + rand_ini - tmp_over_one = torch.cumsum(rad_values, 1) # % 1 #####%1意味着后面的cumsum无法再优化 - tmp_over_one *= upp - tmp_over_one = F.interpolate( - tmp_over_one.transpose(2, 1), - scale_factor=upp, - mode="linear", - align_corners=True, - ).transpose(2, 1) - rad_values = F.interpolate( - rad_values.transpose(2, 1), scale_factor=upp, mode="nearest" - ).transpose( - 2, 1 - ) ####### - tmp_over_one %= 1 - tmp_over_one_idx = (tmp_over_one[:, 1:, :] - tmp_over_one[:, :-1, :]) < 0 - cumsum_shift = torch.zeros_like(rad_values) - cumsum_shift[:, 1:, :] = tmp_over_one_idx * -1.0 - sine_waves = torch.sin( - torch.cumsum(rad_values + cumsum_shift, dim=1) * 2 * np.pi - ) - sine_waves = sine_waves * self.sine_amp - uv = self._f02uv(f0) - uv = F.interpolate( - uv.transpose(2, 1), scale_factor=upp, mode="nearest" - ).transpose(2, 1) - noise_amp = uv * self.noise_std + (1 - uv) * self.sine_amp / 3 - noise = noise_amp * torch.randn_like(sine_waves) - sine_waves = sine_waves * uv + noise - return sine_waves, uv, noise - - -class SourceModuleHnNSF(torch.nn.Module): - """SourceModule for hn-nsf - SourceModule(sampling_rate, harmonic_num=0, sine_amp=0.1, - add_noise_std=0.003, voiced_threshod=0) - sampling_rate: sampling_rate in Hz - harmonic_num: number of harmonic above F0 (default: 0) - sine_amp: amplitude of sine source signal (default: 0.1) - add_noise_std: std of additive Gaussian noise (default: 0.003) - note that amplitude of noise in unvoiced is decided - by sine_amp - voiced_threshold: threhold to set U/V given F0 (default: 0) - Sine_source, noise_source = SourceModuleHnNSF(F0_sampled) - F0_sampled (batchsize, length, 1) - Sine_source (batchsize, length, 1) - noise_source (batchsize, length 1) - uv (batchsize, length, 1) - """ - - def __init__( - self, - sampling_rate, - harmonic_num=0, - sine_amp=0.1, - add_noise_std=0.003, - voiced_threshod=0, - is_half=True, - ): - super(SourceModuleHnNSF, self).__init__() - - self.sine_amp = sine_amp - self.noise_std = add_noise_std - self.is_half = is_half - # to produce sine waveforms - self.l_sin_gen = SineGen( - sampling_rate, harmonic_num, sine_amp, add_noise_std, voiced_threshod - ) - - # to merge source harmonics into a single excitation - self.l_linear = torch.nn.Linear(harmonic_num + 1, 1) - self.l_tanh = torch.nn.Tanh() - - def forward(self, x, upp=None): - sine_wavs, uv, _ = self.l_sin_gen(x, upp) - if self.is_half: - sine_wavs = sine_wavs.half() - sine_merge = self.l_tanh(self.l_linear(sine_wavs)) - return sine_merge, None, None # noise, uv - - -class GeneratorNSF(torch.nn.Module): - def __init__( - self, - initial_channel, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - gin_channels, - sr, - is_half=False, - ): - super(GeneratorNSF, self).__init__() - self.num_kernels = len(resblock_kernel_sizes) - self.num_upsamples = len(upsample_rates) - - self.f0_upsamp = torch.nn.Upsample(scale_factor=np.prod(upsample_rates)) - self.m_source = SourceModuleHnNSF( - sampling_rate=sr, harmonic_num=0, is_half=is_half - ) - self.noise_convs = nn.ModuleList() - self.conv_pre = Conv1d( - initial_channel, upsample_initial_channel, 7, 1, padding=3 - ) - resblock = modules.ResBlock1 if resblock == "1" else modules.ResBlock2 - - self.ups = nn.ModuleList() - for i, (u, k) in enumerate(zip(upsample_rates, upsample_kernel_sizes)): - c_cur = upsample_initial_channel // (2 ** (i + 1)) - self.ups.append( - weight_norm( - ConvTranspose1d( - upsample_initial_channel // (2**i), - upsample_initial_channel // (2 ** (i + 1)), - k, - u, - padding=(k - u) // 2, - ) - ) - ) - if i + 1 < len(upsample_rates): - stride_f0 = np.prod(upsample_rates[i + 1 :]) - self.noise_convs.append( - Conv1d( - 1, - c_cur, - kernel_size=stride_f0 * 2, - stride=stride_f0, - padding=stride_f0 // 2, - ) - ) - else: - self.noise_convs.append(Conv1d(1, c_cur, kernel_size=1)) - - self.resblocks = nn.ModuleList() - for i in range(len(self.ups)): - ch = upsample_initial_channel // (2 ** (i + 1)) - for j, (k, d) in enumerate( - zip(resblock_kernel_sizes, resblock_dilation_sizes) - ): - self.resblocks.append(resblock(ch, k, d)) - - self.conv_post = Conv1d(ch, 1, 7, 1, padding=3, bias=False) - self.ups.apply(init_weights) - - if gin_channels != 0: - self.cond = nn.Conv1d(gin_channels, upsample_initial_channel, 1) - - self.upp = np.prod(upsample_rates) - - def forward(self, x, f0, g=None): - har_source, noi_source, uv = self.m_source(f0, self.upp) - har_source = har_source.transpose(1, 2) - x = self.conv_pre(x) - if g is not None: - x = x + self.cond(g) - - for i in range(self.num_upsamples): - x = F.leaky_relu(x, modules.LRELU_SLOPE) - x = self.ups[i](x) - x_source = self.noise_convs[i](har_source) - x = x + x_source - xs = None - for j in range(self.num_kernels): - if xs is None: - xs = self.resblocks[i * self.num_kernels + j](x) - else: - xs += self.resblocks[i * self.num_kernels + j](x) - x = xs / self.num_kernels - x = F.leaky_relu(x) - x = self.conv_post(x) - x = torch.tanh(x) - return x - - def remove_weight_norm(self): - for l in self.ups: - remove_weight_norm(l) - for l in self.resblocks: - l.remove_weight_norm() - - -sr2sr = { - "32k": 32000, - "40k": 40000, - "48k": 48000, -} - - -class SynthesizerTrnMs256NSFsid(nn.Module): - def __init__( - self, - spec_channels, - segment_size, - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - spk_embed_dim, - gin_channels, - sr, - **kwargs - ): - super().__init__() - if type(sr) == type("strr"): - sr = sr2sr[sr] - self.spec_channels = spec_channels - self.inter_channels = inter_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.resblock = resblock - self.resblock_kernel_sizes = resblock_kernel_sizes - self.resblock_dilation_sizes = resblock_dilation_sizes - self.upsample_rates = upsample_rates - self.upsample_initial_channel = upsample_initial_channel - self.upsample_kernel_sizes = upsample_kernel_sizes - self.segment_size = segment_size - self.gin_channels = gin_channels - # self.hop_length = hop_length# - self.spk_embed_dim = spk_embed_dim - self.enc_p = TextEncoder256( - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - ) - self.dec = GeneratorNSF( - inter_channels, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - gin_channels=gin_channels, - sr=sr, - is_half=kwargs["is_half"], - ) - self.enc_q = PosteriorEncoder( - spec_channels, - inter_channels, - hidden_channels, - 5, - 1, - 16, - gin_channels=gin_channels, - ) - self.flow = ResidualCouplingBlock( - inter_channels, hidden_channels, 5, 1, 3, gin_channels=gin_channels - ) - self.emb_g = nn.Embedding(self.spk_embed_dim, gin_channels) - print("gin_channels:", gin_channels, "self.spk_embed_dim:", self.spk_embed_dim) - - def remove_weight_norm(self): - self.dec.remove_weight_norm() - self.flow.remove_weight_norm() - self.enc_q.remove_weight_norm() - - def forward(self, phone, phone_lengths, pitch, nsff0, sid, rnd, max_len=None): - g = self.emb_g(sid).unsqueeze(-1) - m_p, logs_p, x_mask = self.enc_p(phone, pitch, phone_lengths) - z_p = (m_p + torch.exp(logs_p) * rnd) * x_mask - z = self.flow(z_p, x_mask, g=g, reverse=True) - o = self.dec((z * x_mask)[:, :, :max_len], nsff0, g=g) - return o - - -class SynthesizerTrnMs256NSFsid_sim(nn.Module): - """ - Synthesizer for Training - """ - - def __init__( - self, - spec_channels, - segment_size, - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - spk_embed_dim, - # hop_length, - gin_channels=0, - use_sdp=True, - **kwargs - ): - super().__init__() - self.spec_channels = spec_channels - self.inter_channels = inter_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.resblock = resblock - self.resblock_kernel_sizes = resblock_kernel_sizes - self.resblock_dilation_sizes = resblock_dilation_sizes - self.upsample_rates = upsample_rates - self.upsample_initial_channel = upsample_initial_channel - self.upsample_kernel_sizes = upsample_kernel_sizes - self.segment_size = segment_size - self.gin_channels = gin_channels - # self.hop_length = hop_length# - self.spk_embed_dim = spk_embed_dim - self.enc_p = TextEncoder256Sim( - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - ) - self.dec = GeneratorNSF( - inter_channels, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - gin_channels=gin_channels, - is_half=kwargs["is_half"], - ) - - self.flow = ResidualCouplingBlock( - inter_channels, hidden_channels, 5, 1, 3, gin_channels=gin_channels - ) - self.emb_g = nn.Embedding(self.spk_embed_dim, gin_channels) - print("gin_channels:", gin_channels, "self.spk_embed_dim:", self.spk_embed_dim) - - def remove_weight_norm(self): - self.dec.remove_weight_norm() - self.flow.remove_weight_norm() - self.enc_q.remove_weight_norm() - - def forward( - self, phone, phone_lengths, pitch, pitchf, ds, max_len=None - ): # y是spec不需要了现在 - g = self.emb_g(ds.unsqueeze(0)).unsqueeze(-1) # [b, 256, 1]##1是t,广播的 - x, x_mask = self.enc_p(phone, pitch, phone_lengths) - x = self.flow(x, x_mask, g=g, reverse=True) - o = self.dec((x * x_mask)[:, :, :max_len], pitchf, g=g) - return o - - -class MultiPeriodDiscriminator(torch.nn.Module): - def __init__(self, use_spectral_norm=False): - super(MultiPeriodDiscriminator, self).__init__() - periods = [2, 3, 5, 7, 11, 17] - # periods = [3, 5, 7, 11, 17, 23, 37] - - discs = [DiscriminatorS(use_spectral_norm=use_spectral_norm)] - discs = discs + [ - DiscriminatorP(i, use_spectral_norm=use_spectral_norm) for i in periods - ] - self.discriminators = nn.ModuleList(discs) - - def forward(self, y, y_hat): - y_d_rs = [] # - y_d_gs = [] - fmap_rs = [] - fmap_gs = [] - for i, d in enumerate(self.discriminators): - y_d_r, fmap_r = d(y) - y_d_g, fmap_g = d(y_hat) - # for j in range(len(fmap_r)): - # print(i,j,y.shape,y_hat.shape,fmap_r[j].shape,fmap_g[j].shape) - y_d_rs.append(y_d_r) - y_d_gs.append(y_d_g) - fmap_rs.append(fmap_r) - fmap_gs.append(fmap_g) - - return y_d_rs, y_d_gs, fmap_rs, fmap_gs - - -class DiscriminatorS(torch.nn.Module): - def __init__(self, use_spectral_norm=False): - super(DiscriminatorS, self).__init__() - norm_f = weight_norm if use_spectral_norm == False else spectral_norm - self.convs = nn.ModuleList( - [ - norm_f(Conv1d(1, 16, 15, 1, padding=7)), - norm_f(Conv1d(16, 64, 41, 4, groups=4, padding=20)), - norm_f(Conv1d(64, 256, 41, 4, groups=16, padding=20)), - norm_f(Conv1d(256, 1024, 41, 4, groups=64, padding=20)), - norm_f(Conv1d(1024, 1024, 41, 4, groups=256, padding=20)), - norm_f(Conv1d(1024, 1024, 5, 1, padding=2)), - ] - ) - self.conv_post = norm_f(Conv1d(1024, 1, 3, 1, padding=1)) - - def forward(self, x): - fmap = [] - - for l in self.convs: - x = l(x) - x = F.leaky_relu(x, modules.LRELU_SLOPE) - fmap.append(x) - x = self.conv_post(x) - fmap.append(x) - x = torch.flatten(x, 1, -1) - - return x, fmap - - -class DiscriminatorP(torch.nn.Module): - def __init__(self, period, kernel_size=5, stride=3, use_spectral_norm=False): - super(DiscriminatorP, self).__init__() - self.period = period - self.use_spectral_norm = use_spectral_norm - norm_f = weight_norm if use_spectral_norm == False else spectral_norm - self.convs = nn.ModuleList( - [ - norm_f( - Conv2d( - 1, - 32, - (kernel_size, 1), - (stride, 1), - padding=(get_padding(kernel_size, 1), 0), - ) - ), - norm_f( - Conv2d( - 32, - 128, - (kernel_size, 1), - (stride, 1), - padding=(get_padding(kernel_size, 1), 0), - ) - ), - norm_f( - Conv2d( - 128, - 512, - (kernel_size, 1), - (stride, 1), - padding=(get_padding(kernel_size, 1), 0), - ) - ), - norm_f( - Conv2d( - 512, - 1024, - (kernel_size, 1), - (stride, 1), - padding=(get_padding(kernel_size, 1), 0), - ) - ), - norm_f( - Conv2d( - 1024, - 1024, - (kernel_size, 1), - 1, - padding=(get_padding(kernel_size, 1), 0), - ) - ), - ] - ) - self.conv_post = norm_f(Conv2d(1024, 1, (3, 1), 1, padding=(1, 0))) - - def forward(self, x): - fmap = [] - - # 1d to 2d - b, c, t = x.shape - if t % self.period != 0: # pad first - n_pad = self.period - (t % self.period) - x = F.pad(x, (0, n_pad), "reflect") - t = t + n_pad - x = x.view(b, c, t // self.period, self.period) - - for l in self.convs: - x = l(x) - x = F.leaky_relu(x, modules.LRELU_SLOPE) - fmap.append(x) - x = self.conv_post(x) - fmap.append(x) - x = torch.flatten(x, 1, -1) - - return x, fmap diff --git a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/anyio/_core/_tasks.py b/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/anyio/_core/_tasks.py deleted file mode 100644 index db11f2654ba17004fbc1c46dde29a00dc7a4c258..0000000000000000000000000000000000000000 --- a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/anyio/_core/_tasks.py +++ /dev/null @@ -1,180 +0,0 @@ -from __future__ import annotations - -import math -from types import TracebackType -from warnings import warn - -from ..abc._tasks import TaskGroup, TaskStatus -from ._compat import ( - DeprecatedAsyncContextManager, - DeprecatedAwaitable, - DeprecatedAwaitableFloat, -) -from ._eventloop import get_asynclib - - -class _IgnoredTaskStatus(TaskStatus[object]): - def started(self, value: object = None) -> None: - pass - - -TASK_STATUS_IGNORED = _IgnoredTaskStatus() - - -class CancelScope(DeprecatedAsyncContextManager["CancelScope"]): - """ - Wraps a unit of work that can be made separately cancellable. - - :param deadline: The time (clock value) when this scope is cancelled automatically - :param shield: ``True`` to shield the cancel scope from external cancellation - """ - - def __new__( - cls, *, deadline: float = math.inf, shield: bool = False - ) -> CancelScope: - return get_asynclib().CancelScope(shield=shield, deadline=deadline) - - def cancel(self) -> DeprecatedAwaitable: - """Cancel this scope immediately.""" - raise NotImplementedError - - @property - def deadline(self) -> float: - """ - The time (clock value) when this scope is cancelled automatically. - - Will be ``float('inf')`` if no timeout has been set. - - """ - raise NotImplementedError - - @deadline.setter - def deadline(self, value: float) -> None: - raise NotImplementedError - - @property - def cancel_called(self) -> bool: - """``True`` if :meth:`cancel` has been called.""" - raise NotImplementedError - - @property - def shield(self) -> bool: - """ - ``True`` if this scope is shielded from external cancellation. - - While a scope is shielded, it will not receive cancellations from outside. - - """ - raise NotImplementedError - - @shield.setter - def shield(self, value: bool) -> None: - raise NotImplementedError - - def __enter__(self) -> CancelScope: - raise NotImplementedError - - def __exit__( - self, - exc_type: type[BaseException] | None, - exc_val: BaseException | None, - exc_tb: TracebackType | None, - ) -> bool | None: - raise NotImplementedError - - -def open_cancel_scope(*, shield: bool = False) -> CancelScope: - """ - Open a cancel scope. - - :param shield: ``True`` to shield the cancel scope from external cancellation - :return: a cancel scope - - .. deprecated:: 3.0 - Use :class:`~CancelScope` directly. - - """ - warn( - "open_cancel_scope() is deprecated -- use CancelScope() directly", - DeprecationWarning, - ) - return get_asynclib().CancelScope(shield=shield) - - -class FailAfterContextManager(DeprecatedAsyncContextManager[CancelScope]): - def __init__(self, cancel_scope: CancelScope): - self._cancel_scope = cancel_scope - - def __enter__(self) -> CancelScope: - return self._cancel_scope.__enter__() - - def __exit__( - self, - exc_type: type[BaseException] | None, - exc_val: BaseException | None, - exc_tb: TracebackType | None, - ) -> bool | None: - retval = self._cancel_scope.__exit__(exc_type, exc_val, exc_tb) - if self._cancel_scope.cancel_called: - raise TimeoutError - - return retval - - -def fail_after(delay: float | None, shield: bool = False) -> FailAfterContextManager: - """ - Create a context manager which raises a :class:`TimeoutError` if does not finish in time. - - :param delay: maximum allowed time (in seconds) before raising the exception, or ``None`` to - disable the timeout - :param shield: ``True`` to shield the cancel scope from external cancellation - :return: a context manager that yields a cancel scope - :rtype: :class:`~typing.ContextManager`\\[:class:`~anyio.abc.CancelScope`\\] - - """ - deadline = ( - (get_asynclib().current_time() + delay) if delay is not None else math.inf - ) - cancel_scope = get_asynclib().CancelScope(deadline=deadline, shield=shield) - return FailAfterContextManager(cancel_scope) - - -def move_on_after(delay: float | None, shield: bool = False) -> CancelScope: - """ - Create a cancel scope with a deadline that expires after the given delay. - - :param delay: maximum allowed time (in seconds) before exiting the context block, or ``None`` - to disable the timeout - :param shield: ``True`` to shield the cancel scope from external cancellation - :return: a cancel scope - - """ - deadline = ( - (get_asynclib().current_time() + delay) if delay is not None else math.inf - ) - return get_asynclib().CancelScope(deadline=deadline, shield=shield) - - -def current_effective_deadline() -> DeprecatedAwaitableFloat: - """ - Return the nearest deadline among all the cancel scopes effective for the current task. - - :return: a clock value from the event loop's internal clock (or ``float('inf')`` if - there is no deadline in effect, or ``float('-inf')`` if the current scope has - been cancelled) - :rtype: float - - """ - return DeprecatedAwaitableFloat( - get_asynclib().current_effective_deadline(), current_effective_deadline - ) - - -def create_task_group() -> TaskGroup: - """ - Create a task group. - - :return: a task group - - """ - return get_asynclib().TaskGroup() diff --git a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/fontTools/varLib/builder.py b/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/fontTools/varLib/builder.py deleted file mode 100644 index c53b14e88c07b85cbf631ddfc5f2c22603cdf14d..0000000000000000000000000000000000000000 --- a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/fontTools/varLib/builder.py +++ /dev/null @@ -1,154 +0,0 @@ -from fontTools import ttLib -from fontTools.ttLib.tables import otTables as ot - -# VariationStore - - -def buildVarRegionAxis(axisSupport): - self = ot.VarRegionAxis() - self.StartCoord, self.PeakCoord, self.EndCoord = [float(v) for v in axisSupport] - return self - - -def buildVarRegion(support, axisTags): - assert all(tag in axisTags for tag in support.keys()), ( - "Unknown axis tag found.", - support, - axisTags, - ) - self = ot.VarRegion() - self.VarRegionAxis = [] - for tag in axisTags: - self.VarRegionAxis.append(buildVarRegionAxis(support.get(tag, (0, 0, 0)))) - return self - - -def buildVarRegionList(supports, axisTags): - self = ot.VarRegionList() - self.RegionAxisCount = len(axisTags) - self.Region = [] - for support in supports: - self.Region.append(buildVarRegion(support, axisTags)) - self.RegionCount = len(self.Region) - return self - - -def _reorderItem(lst, mapping): - return [lst[i] for i in mapping] - - -def VarData_calculateNumShorts(self, optimize=False): - count = self.VarRegionCount - items = self.Item - bit_lengths = [0] * count - for item in items: - # The "+ (i < -1)" magic is to handle two's-compliment. - # That is, we want to get back 7 for -128, whereas - # bit_length() returns 8. Similarly for -65536. - # The reason "i < -1" is used instead of "i < 0" is that - # the latter would make it return 0 for "-1" instead of 1. - bl = [(i + (i < -1)).bit_length() for i in item] - bit_lengths = [max(*pair) for pair in zip(bl, bit_lengths)] - # The addition of 8, instead of seven, is to account for the sign bit. - # This "((b + 8) >> 3) if b else 0" when combined with the above - # "(i + (i < -1)).bit_length()" is a faster way to compute byte-lengths - # conforming to: - # - # byte_length = (0 if i == 0 else - # 1 if -128 <= i < 128 else - # 2 if -65536 <= i < 65536 else - # ...) - byte_lengths = [((b + 8) >> 3) if b else 0 for b in bit_lengths] - - # https://github.com/fonttools/fonttools/issues/2279 - longWords = any(b > 2 for b in byte_lengths) - - if optimize: - # Reorder columns such that wider columns come before narrower columns - mapping = [] - mapping.extend(i for i, b in enumerate(byte_lengths) if b > 2) - mapping.extend(i for i, b in enumerate(byte_lengths) if b == 2) - mapping.extend(i for i, b in enumerate(byte_lengths) if b == 1) - - byte_lengths = _reorderItem(byte_lengths, mapping) - self.VarRegionIndex = _reorderItem(self.VarRegionIndex, mapping) - self.VarRegionCount = len(self.VarRegionIndex) - for i in range(len(items)): - items[i] = _reorderItem(items[i], mapping) - - if longWords: - self.NumShorts = ( - max((i for i, b in enumerate(byte_lengths) if b > 2), default=-1) + 1 - ) - self.NumShorts |= 0x8000 - else: - self.NumShorts = ( - max((i for i, b in enumerate(byte_lengths) if b > 1), default=-1) + 1 - ) - - self.VarRegionCount = len(self.VarRegionIndex) - return self - - -ot.VarData.calculateNumShorts = VarData_calculateNumShorts - - -def VarData_CalculateNumShorts(self, optimize=True): - """Deprecated name for VarData_calculateNumShorts() which - defaults to optimize=True. Use varData.calculateNumShorts() - or varData.optimize().""" - return VarData_calculateNumShorts(self, optimize=optimize) - - -def VarData_optimize(self): - return VarData_calculateNumShorts(self, optimize=True) - - -ot.VarData.optimize = VarData_optimize - - -def buildVarData(varRegionIndices, items, optimize=True): - self = ot.VarData() - self.VarRegionIndex = list(varRegionIndices) - regionCount = self.VarRegionCount = len(self.VarRegionIndex) - records = self.Item = [] - if items: - for item in items: - assert len(item) == regionCount - records.append(list(item)) - self.ItemCount = len(self.Item) - self.calculateNumShorts(optimize=optimize) - return self - - -def buildVarStore(varRegionList, varDataList): - self = ot.VarStore() - self.Format = 1 - self.VarRegionList = varRegionList - self.VarData = list(varDataList) - self.VarDataCount = len(self.VarData) - return self - - -# Variation helpers - - -def buildVarIdxMap(varIdxes, glyphOrder): - self = ot.VarIdxMap() - self.mapping = {g: v for g, v in zip(glyphOrder, varIdxes)} - return self - - -def buildDeltaSetIndexMap(varIdxes): - self = ot.DeltaSetIndexMap() - self.mapping = list(varIdxes) - self.Format = 1 if len(varIdxes) > 0xFFFF else 0 - return self - - -def buildVarDevTable(varIdx): - self = ot.Device() - self.DeltaFormat = 0x8000 - self.StartSize = varIdx >> 16 - self.EndSize = varIdx & 0xFFFF - return self diff --git a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/functorch/_src/aot_autograd/__init__.py b/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/functorch/_src/aot_autograd/__init__.py deleted file mode 100644 index 94f258df84ba8730208768fc44222bee4b3ebc33..0000000000000000000000000000000000000000 --- a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/functorch/_src/aot_autograd/__init__.py +++ /dev/null @@ -1,8 +0,0 @@ -# This file has moved to under torch/_functorch. It is not public API. -# If you are not a PyTorch developer and you are relying on the following -# imports, please file an issue. -from torch._functorch.aot_autograd import ( - aot_autograd_decompositions, - KNOWN_TYPES, - PytreeThunk, -) diff --git a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/gradio/templates/frontend/assets/index-edf307d2.css b/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/gradio/templates/frontend/assets/index-edf307d2.css deleted file mode 100644 index 690ed736f2c29c32ba8499343659e9fde81f2098..0000000000000000000000000000000000000000 --- a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/gradio/templates/frontend/assets/index-edf307d2.css +++ /dev/null @@ -1 +0,0 @@ -div.svelte-1yrv54 .math.inline{fill:var(--body-text-color);display:inline-block;vertical-align:middle;padding:var(--size-1-5) -var(--size-1);color:var(--body-text-color)}div.svelte-1yrv54 .math.inline svg{display:inline;margin-bottom:.22em}div.svelte-1yrv54{max-width:100%}.min.svelte-1yrv54{min-height:var(--size-24)}.hide.svelte-1yrv54{display:none}div.svelte-1ed2p3z{transition:.15s}.pending.svelte-1ed2p3z{opacity:.2} diff --git a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/markdown_it/rules_core/normalize.py b/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/markdown_it/rules_core/normalize.py deleted file mode 100644 index c9f8d0d5729b2497b5f4b611b0451dfe92872506..0000000000000000000000000000000000000000 --- a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/markdown_it/rules_core/normalize.py +++ /dev/null @@ -1,18 +0,0 @@ -"""Normalize input string.""" -import re - -from .state_core import StateCore - -# https://spec.commonmark.org/0.29/#line-ending -NEWLINES_RE = re.compile(r"\r\n?|\n") -NULL_RE = re.compile(r"\0") - - -def normalize(state: StateCore) -> None: - # Normalize newlines - string = NEWLINES_RE.sub("\n", state.src) - - # Replace NULL characters - string = NULL_RE.sub("\uFFFD", string) - - state.src = string diff --git a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/matplotlib/projections/__init__.py b/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/matplotlib/projections/__init__.py deleted file mode 100644 index 16a5651da1d14f4c0fec5bc09d7fc0782c9a2461..0000000000000000000000000000000000000000 --- a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/matplotlib/projections/__init__.py +++ /dev/null @@ -1,114 +0,0 @@ -""" -Non-separable transforms that map from data space to screen space. - -Projections are defined as `~.axes.Axes` subclasses. They include the -following elements: - -- A transformation from data coordinates into display coordinates. - -- An inverse of that transformation. This is used, for example, to convert - mouse positions from screen space back into data space. - -- Transformations for the gridlines, ticks and ticklabels. Custom projections - will often need to place these elements in special locations, and Matplotlib - has a facility to help with doing so. - -- Setting up default values (overriding `~.axes.Axes.cla`), since the defaults - for a rectilinear axes may not be appropriate. - -- Defining the shape of the axes, for example, an elliptical axes, that will be - used to draw the background of the plot and for clipping any data elements. - -- Defining custom locators and formatters for the projection. For example, in - a geographic projection, it may be more convenient to display the grid in - degrees, even if the data is in radians. - -- Set up interactive panning and zooming. This is left as an "advanced" - feature left to the reader, but there is an example of this for polar plots - in `matplotlib.projections.polar`. - -- Any additional methods for additional convenience or features. - -Once the projection axes is defined, it can be used in one of two ways: - -- By defining the class attribute ``name``, the projection axes can be - registered with `matplotlib.projections.register_projection` and subsequently - simply invoked by name:: - - fig.add_subplot(projection="my_proj_name") - -- For more complex, parameterisable projections, a generic "projection" object - may be defined which includes the method ``_as_mpl_axes``. ``_as_mpl_axes`` - should take no arguments and return the projection's axes subclass and a - dictionary of additional arguments to pass to the subclass' ``__init__`` - method. Subsequently a parameterised projection can be initialised with:: - - fig.add_subplot(projection=MyProjection(param1=param1_value)) - - where MyProjection is an object which implements a ``_as_mpl_axes`` method. - -A full-fledged and heavily annotated example is in -:doc:`/gallery/misc/custom_projection`. The polar plot functionality in -`matplotlib.projections.polar` may also be of interest. -""" - -from .. import axes, _docstring -from .geo import AitoffAxes, HammerAxes, LambertAxes, MollweideAxes -from .polar import PolarAxes -from mpl_toolkits.mplot3d import Axes3D - - -class ProjectionRegistry: - """A mapping of registered projection names to projection classes.""" - - def __init__(self): - self._all_projection_types = {} - - def register(self, *projections): - """Register a new set of projections.""" - for projection in projections: - name = projection.name - self._all_projection_types[name] = projection - - def get_projection_class(self, name): - """Get a projection class from its *name*.""" - return self._all_projection_types[name] - - def get_projection_names(self): - """Return the names of all projections currently registered.""" - return sorted(self._all_projection_types) - - -projection_registry = ProjectionRegistry() -projection_registry.register( - axes.Axes, - PolarAxes, - AitoffAxes, - HammerAxes, - LambertAxes, - MollweideAxes, - Axes3D, -) - - -def register_projection(cls): - projection_registry.register(cls) - - -def get_projection_class(projection=None): - """ - Get a projection class from its name. - - If *projection* is None, a standard rectilinear projection is returned. - """ - if projection is None: - projection = 'rectilinear' - - try: - return projection_registry.get_projection_class(projection) - except KeyError as err: - raise ValueError("Unknown projection %r" % projection) from err - - -get_projection_names = projection_registry.get_projection_names -_docstring.interpd.update(projection_names=get_projection_names()) diff --git a/spaces/lamtung16/Llama-2-AWS/README.md b/spaces/lamtung16/Llama-2-AWS/README.md deleted file mode 100644 index c75c53aac9edff5a2f4b79e0ca644b58f41f81c3..0000000000000000000000000000000000000000 --- a/spaces/lamtung16/Llama-2-AWS/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Llama 2 AWS -emoji: 😻 -colorFrom: indigo -colorTo: pink -sdk: streamlit -sdk_version: 1.27.2 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/leafShen/CodeFormer/CodeFormer/basicsr/utils/registry.py b/spaces/leafShen/CodeFormer/CodeFormer/basicsr/utils/registry.py deleted file mode 100644 index 655753b3b9cbd0cfe73fe93a77cf1fcc3db6d827..0000000000000000000000000000000000000000 --- a/spaces/leafShen/CodeFormer/CodeFormer/basicsr/utils/registry.py +++ /dev/null @@ -1,82 +0,0 @@ -# Modified from: https://github.com/facebookresearch/fvcore/blob/master/fvcore/common/registry.py # noqa: E501 - - -class Registry(): - """ - The registry that provides name -> object mapping, to support third-party - users' custom modules. - - To create a registry (e.g. a backbone registry): - - .. code-block:: python - - BACKBONE_REGISTRY = Registry('BACKBONE') - - To register an object: - - .. code-block:: python - - @BACKBONE_REGISTRY.register() - class MyBackbone(): - ... - - Or: - - .. code-block:: python - - BACKBONE_REGISTRY.register(MyBackbone) - """ - - def __init__(self, name): - """ - Args: - name (str): the name of this registry - """ - self._name = name - self._obj_map = {} - - def _do_register(self, name, obj): - assert (name not in self._obj_map), (f"An object named '{name}' was already registered " - f"in '{self._name}' registry!") - self._obj_map[name] = obj - - def register(self, obj=None): - """ - Register the given object under the the name `obj.__name__`. - Can be used as either a decorator or not. - See docstring of this class for usage. - """ - if obj is None: - # used as a decorator - def deco(func_or_class): - name = func_or_class.__name__ - self._do_register(name, func_or_class) - return func_or_class - - return deco - - # used as a function call - name = obj.__name__ - self._do_register(name, obj) - - def get(self, name): - ret = self._obj_map.get(name) - if ret is None: - raise KeyError(f"No object named '{name}' found in '{self._name}' registry!") - return ret - - def __contains__(self, name): - return name in self._obj_map - - def __iter__(self): - return iter(self._obj_map.items()) - - def keys(self): - return self._obj_map.keys() - - -DATASET_REGISTRY = Registry('dataset') -ARCH_REGISTRY = Registry('arch') -MODEL_REGISTRY = Registry('model') -LOSS_REGISTRY = Registry('loss') -METRIC_REGISTRY = Registry('metric') diff --git a/spaces/lewisliuX123/wechatglm_demo/docker/sample-chatgpt-on-wechat/Makefile b/spaces/lewisliuX123/wechatglm_demo/docker/sample-chatgpt-on-wechat/Makefile deleted file mode 100644 index 31b5f817b41d7d95055d9aa5d5e4f973abee0b45..0000000000000000000000000000000000000000 --- a/spaces/lewisliuX123/wechatglm_demo/docker/sample-chatgpt-on-wechat/Makefile +++ /dev/null @@ -1,26 +0,0 @@ -IMG:=`cat Name` -MOUNT:= -PORT_MAP:= -DOTENV:=.env -CONTAINER_NAME:=sample-chatgpt-on-wechat - -echo: - echo $(IMG) - -run_d: - docker rm $(CONTAINER_NAME) || echo - docker run -dt --name $(CONTAINER_NAME) $(PORT_MAP) \ - --env-file=$(DOTENV) \ - $(MOUNT) $(IMG) - -run_i: - docker rm $(CONTAINER_NAME) || echo - docker run -it --name $(CONTAINER_NAME) $(PORT_MAP) \ - --env-file=$(DOTENV) \ - $(MOUNT) $(IMG) - -stop: - docker stop $(CONTAINER_NAME) - -rm: stop - docker rm $(CONTAINER_NAME) diff --git a/spaces/lincquiQcaudo/Top-20-Diffusion/Download Vray 2.0 For Sketchup 2015 X64 ((FULL)) Full 11.md b/spaces/lincquiQcaudo/Top-20-Diffusion/Download Vray 2.0 For Sketchup 2015 X64 ((FULL)) Full 11.md deleted file mode 100644 index 699729363c85c7f160dcf119f6e00f8fb3ce8bb5..0000000000000000000000000000000000000000 --- a/spaces/lincquiQcaudo/Top-20-Diffusion/Download Vray 2.0 For Sketchup 2015 X64 ((FULL)) Full 11.md +++ /dev/null @@ -1,64 +0,0 @@ -<h2>download vray 2.0 for sketchup 2015 x64 full 11</h2><br /><p><b><b>Download Zip</b> &raquo; <a href="https://bytlly.com/2uGwNP">https://bytlly.com/2uGwNP</a></b></p><br /><br /> -<br /> -.5 - - vray 2.0 sketchup 2015 11.5 - -the two last are the same, but in the last i also include a shotcube file. - -thanks! - -A: - -i solved it. - -as explained in the link i shared, i had to use the 4D objects of the shotcube to import the.obj. - -so i ended up doing: - -right-click on the.obj, and export it - -import the obj - -hit bake - -it should work, and it did. - -Q: - -ASP.NET Core Entity Framework Version 1 - -I have 2 project, one is.Net Framework 4.0 and another is.Net Core 1.1. - -In the.Net Framework I have the Entity Framework 7. I was not be able to find an entity framework that works for Entity Framework Core in.Net Framework. - -I need to make a migration from the previous EF to the new one. - -How can I do this in the.Net Framework? - -Thanks - -The code to setup a new CoreContext is the same as for any other context. - -The only difference is that you can only use the _context variable, so just pass it in as an argument: - -public void Configure(DbContextOptions options) - - - - options.UseSqlServer(@"Data Source=.\SQLEXPRESS;Integrated Security=True;Initial Catalog=aspnet-MyWebApp-20170127154736;Persist Security Info=True;Pooling=True;MultipleActiveResultSets=True;"); - - - -You can get the DbContextOptionsBuilder from the Microsoft.Extensions.Options library. - -Updated: - -The Empire Reporter is proud to announce that we are now accepting nominations for the Empire Season 2 Advisory Board. The Advisory Board will work closely with the Empire team throughout the season, providing ideas and feedback on game play and design. - -Our goal is to make Empire the best 2 player co-op game on the market. This is a volunteer position and a great way to get your feet wet in the entertainment industry while interacting with and learning from industry professionals. - -Requirements for the Advisory Board: 4fefd39f24<br /> -<br /> -<br /> -<p></p> diff --git a/spaces/lincquiQcaudo/Top-20-Diffusion/Es2 Vst Sylenth 1 Free Download ((BETTER)) Full 16.md b/spaces/lincquiQcaudo/Top-20-Diffusion/Es2 Vst Sylenth 1 Free Download ((BETTER)) Full 16.md deleted file mode 100644 index dbc73f05850c45eef1e135fd9c3d2b1f3da61c45..0000000000000000000000000000000000000000 --- a/spaces/lincquiQcaudo/Top-20-Diffusion/Es2 Vst Sylenth 1 Free Download ((BETTER)) Full 16.md +++ /dev/null @@ -1,12 +0,0 @@ -<h2>Es2 Vst Sylenth 1 Free Download Full 16</h2><br /><p><b><b>Download File</b> &#10003; <a href="https://bytlly.com/2uGywJ">https://bytlly.com/2uGywJ</a></b></p><br /><br /> - -List of the 17 Absolutely Best FREE VST Plugins (synths + effects) you should try in 2020, including Synth1, Rough Rider 2, XFer OTT and more.# ##18-Jun-2013 - One page display of Maxim Digital Audio's excellent free ePiano plugin. Made for REMOTE ZeRO SL MK1. Upload date: June 18, 2013 ♞ - ♞ -Description: ePiano is one of the most amazing music creation software in the world. -With the help of a tool called Synth1, you can create, record and customize sound using samples. -All you need is to find the notes and tune them through the Synth1 tool. -If you work in Apple Logic Pro, then this program is for you. -It can also work with any recording recorder you want. 8a78ff9644<br /> -<br /> -<br /> -<p></p> diff --git a/spaces/lincquiQcaudo/Top-20-Diffusion/Faces Of Illusion The Twin Phantoms BEST Download] [serial Number].md b/spaces/lincquiQcaudo/Top-20-Diffusion/Faces Of Illusion The Twin Phantoms BEST Download] [serial Number].md deleted file mode 100644 index b45872d6b36ecfdb9c36586b43151b997c1c3085..0000000000000000000000000000000000000000 --- a/spaces/lincquiQcaudo/Top-20-Diffusion/Faces Of Illusion The Twin Phantoms BEST Download] [serial Number].md +++ /dev/null @@ -1,28 +0,0 @@ - -<h1>Faces of Illusion: The Twin Phantoms Download] [serial number]</h1> -<p>Do you love hidden object puzzle adventure games? Do you enjoy exploring a world of magic and illusion? Do you want to solve a mysterious kidnapping case in 19th century Paris? If you answered yes to any of these questions, then you should try Faces of Illusion: The Twin Phantoms Download] [serial number]. This is a game that will take you on an exciting journey between reality and fantasy, where nothing is as it seems.</p> -<h2>Faces of Illusion: The Twin Phantoms Download] [serial number]</h2><br /><p><b><b>Download</b> &middot;&middot;&middot; <a href="https://bytlly.com/2uGy8Z">https://bytlly.com/2uGy8Z</a></b></p><br /><br /> -<h2>What is Faces of Illusion: The Twin Phantoms?</h2> -<p>Faces of Illusion: The Twin Phantoms is a game created by Artifex Mundi, the makers of Enigmatis and Grim Legends. It tells the story of a young journalist who witnesses the abduction of a rising theatre star, Beatrice Le Brun. The main suspect is a famous illusionist, Charles Delacroix, who disappeared some time ago. The journalist must follow the clues and find the missing star before it's too late.</p> -<p>The game features 30 hand-painted locations, 21 colorful mini-games, and a mysterious book of magic. You will explore scenes of theatrical beauty, meet intriguing characters, and uncover dark secrets with an undercurrent of romance. You will also face challenging hidden object scenes and puzzles that will test your skills and logic.</p> -<h3>How to Download Faces of Illusion: The Twin Phantoms?</h3> -<p>If you want to download Faces of Illusion: The Twin Phantoms, you will need to have a serial number that will allow you to activate the full version of the game. You can get the serial number by purchasing the game from the official website or from other online platforms. You can also try the free trial version of the game before buying it.</p> -<p></p> -<p>Once you have the serial number, you can download Faces of Illusion: The Twin Phantoms from the official website or from other sources. You will need to have a compatible device that meets the system requirements of the game. You will also need to have enough storage space and a good internet connection to complete the download.</p> -<p>After downloading Faces of Illusion: The Twin Phantoms, you can install it on your device and enter the serial number to activate it. Then you can start playing the game and enjoy its features.</p> -<h4>Conclusion</h4> -<p>Faces of Illusion: The Twin Phantoms Download] [serial number] is a game that will appeal to fans of hidden object puzzle adventure games. It offers a captivating story, stunning graphics, and engaging gameplay. It will immerse you in a world of magic and illusion, where you will have to solve a mystery and save a star.</p> -<p>So, what are you waiting for? Download Faces of Illusion: The Twin Phantoms today and experience this amazing game!</p> -<h5>What are the Reviews of Faces of Illusion: The Twin Phantoms?</h5> -<p>Faces of Illusion: The Twin Phantoms is a game that has received positive reviews from players and critics alike. Here are some of the comments that people have made about the game:</p> -<ul> -<li>"I loved this game! It was fun, challenging, and beautiful. The story was captivating and the characters were interesting. The graphics were stunning and the music was fitting. I highly recommend this game to anyone who likes hidden object games with a twist." - Player review</li> -<li>"This game is a gem! It has everything you want in a hidden object game: a great story, gorgeous scenes, clever puzzles, and a touch of romance. The game is well-made and polished, and the voice acting is superb. The game is not too long or too short, and it has a satisfying ending. This game is worth every penny." - Player review</li> -<li>"Faces of Illusion: The Twin Phantoms is a delightful hidden object game that will keep you entertained for hours. The game has a unique setting and a intriguing plot that will keep you guessing until the end. The game is well-designed and easy to play, with a variety of mini-games and hidden object scenes. The game also has a bonus chapter that adds more content and fun. This game is a must-play for fans of hidden object games." - Critic review</li> -</ul> -<p>As you can see, Faces of Illusion: The Twin Phantoms is a game that has received rave reviews from many people who have played it. If you want to join them and experience this amazing game for yourself, you can download Faces of Illusion: The Twin Phantoms Download] [serial number] today!</p> -<h4>Conclusion</h4> -<p>Faces of Illusion: The Twin Phantoms Download] [serial number] is a game that will appeal to fans of hidden object puzzle adventure games. It offers a captivating story, stunning graphics, and engaging gameplay. It will immerse you in a world of magic and illusion, where you will have to solve a mystery and save a star.</p> -<p>So, what are you waiting for? Download Faces of Illusion: The Twin Phantoms today and experience this amazing game!</p> 3cee63e6c2<br /> -<br /> -<br /> \ No newline at end of file diff --git a/spaces/lincquiQcaudo/Top-20-Diffusion/Garritan Jazz Big Band _HOT_ Crack Sky.md b/spaces/lincquiQcaudo/Top-20-Diffusion/Garritan Jazz Big Band _HOT_ Crack Sky.md deleted file mode 100644 index a330bbdc8277e6457a4231a7d910a730e0b53d9d..0000000000000000000000000000000000000000 --- a/spaces/lincquiQcaudo/Top-20-Diffusion/Garritan Jazz Big Band _HOT_ Crack Sky.md +++ /dev/null @@ -1,6 +0,0 @@ -<h2>garritan jazz big band crack sky</h2><br /><p><b><b>Download File</b> ---> <a href="https://bytlly.com/2uGyk0">https://bytlly.com/2uGyk0</a></b></p><br /><br /> -<br /> -Garritan.Jazz.and.Big.Band.3/Crack_Jazz_and_Big_Band_3.exe, 3.72 MB ... file · Garritan Classic Pipe Organs V 1.3 crack fix [KiLLaBeAts] using magnet link ... 4d29de3e1b<br /> -<br /> -<br /> -<p></p> diff --git a/spaces/linhdo/checkbox-detector/app.py b/spaces/linhdo/checkbox-detector/app.py deleted file mode 100644 index a4bc0992845313c4c78c81e679e3a8eb0756ff34..0000000000000000000000000000000000000000 --- a/spaces/linhdo/checkbox-detector/app.py +++ /dev/null @@ -1,83 +0,0 @@ -# Import libraries -import cv2 # for reading images, draw bounding boxes -from ultralytics import YOLO -import gradio as gr - -# Define constants -BOX_COLORS = { - "unchecked": (242, 48, 48), - "checked": (38, 115, 101), - "block": (242, 159, 5) -} -BOX_PADDING = 2 - -# Load models -DETECTION_MODEL = YOLO("models/detector-model.pt") -CLASSIFICATION_MODEL = YOLO("models/classifier-model.pt") # 0: block, 1: checked, 2: unchecked - -def detect(image_path): - """ - Output inference image with bounding box - - Args: - - image: to check for checkboxes - - Return: image with bounding boxes drawn - """ - image = cv2.imread(image_path) - if image is None: - return image - - # Predict on image - results = DETECTION_MODEL.predict(source=image, conf=0.2, iou=0.8) # Predict on image - boxes = results[0].boxes # Get bounding boxes - - if len(boxes) == 0: - return image - - # Get bounding boxes - for box in boxes: - detection_class_conf = round(box.conf.item(), 2) - detection_class = list(BOX_COLORS)[int(box.cls)] - # Get start and end points of the current box - start_box = (int(box.xyxy[0][0]), int(box.xyxy[0][1])) - end_box = (int(box.xyxy[0][2]), int(box.xyxy[0][3])) - box = image[start_box[1]:end_box[1], start_box[0]: end_box[0], :] - - # Determine the class of the box using classification model - cls_results = CLASSIFICATION_MODEL.predict(source=box, conf=0.5) - probs = cls_results[0].probs # cls prob, (num_class, ) - classification_class = list(BOX_COLORS)[2 - int(probs.top1)] - classification_class_conf = round(probs.top1conf.item(), 2) - - cls = classification_class if classification_class_conf > 0.9 else detection_class - - # 01. DRAW BOUNDING BOX OF OBJECT - line_thickness = round(0.002 * (image.shape[0] + image.shape[1]) / 2) + 1 - image = cv2.rectangle(img=image, - pt1=start_box, - pt2=end_box, - color=BOX_COLORS[cls], - thickness = line_thickness) # Draw the box with predefined colors - - # 02. DRAW LABEL - text = cls + " " + str(detection_class_conf) - # Get text dimensions to draw wrapping box - font_thickness = max(line_thickness - 1, 1) - (text_w, text_h), _ = cv2.getTextSize(text=text, fontFace=2, fontScale=line_thickness/3, thickness=font_thickness) - # Draw wrapping box for text - image = cv2.rectangle(img=image, - pt1=(start_box[0], start_box[1] - text_h - BOX_PADDING*2), - pt2=(start_box[0] + text_w + BOX_PADDING * 2, start_box[1]), - color=BOX_COLORS[cls], - thickness=-1) - # Put class name on image - start_text = (start_box[0] + BOX_PADDING, start_box[1] - BOX_PADDING) - image = cv2.putText(img=image, text=text, org=start_text, fontFace=0, color=(255,255,255), fontScale=line_thickness/3, thickness=font_thickness) - - return image - -iface = gr.Interface(fn=detect, - inputs=gr.inputs.Image(label="Upload scanned document", type="filepath"), - outputs="image") -iface.launch() \ No newline at end of file diff --git a/spaces/lithiumice/SadTalker/src/face3d/models/networks.py b/spaces/lithiumice/SadTalker/src/face3d/models/networks.py deleted file mode 100644 index ead9cdcb8720b845c233de79dc8a8d1668492108..0000000000000000000000000000000000000000 --- a/spaces/lithiumice/SadTalker/src/face3d/models/networks.py +++ /dev/null @@ -1,521 +0,0 @@ -"""This script defines deep neural networks for Deep3DFaceRecon_pytorch -""" - -import os -import numpy as np -import torch.nn.functional as F -from torch.nn import init -import functools -from torch.optim import lr_scheduler -import torch -from torch import Tensor -import torch.nn as nn -try: - from torch.hub import load_state_dict_from_url -except ImportError: - from torch.utils.model_zoo import load_url as load_state_dict_from_url -from typing import Type, Any, Callable, Union, List, Optional -from .arcface_torch.backbones import get_model -from kornia.geometry import warp_affine - -def resize_n_crop(image, M, dsize=112): - # image: (b, c, h, w) - # M : (b, 2, 3) - return warp_affine(image, M, dsize=(dsize, dsize), align_corners=True) - -def filter_state_dict(state_dict, remove_name='fc'): - new_state_dict = {} - for key in state_dict: - if remove_name in key: - continue - new_state_dict[key] = state_dict[key] - return new_state_dict - -def get_scheduler(optimizer, opt): - """Return a learning rate scheduler - - Parameters: - optimizer -- the optimizer of the network - opt (option class) -- stores all the experiment flags; needs to be a subclass of BaseOptions.  - opt.lr_policy is the name of learning rate policy: linear | step | plateau | cosine - - For other schedulers (step, plateau, and cosine), we use the default PyTorch schedulers. - See https://pytorch.org/docs/stable/optim.html for more details. - """ - if opt.lr_policy == 'linear': - def lambda_rule(epoch): - lr_l = 1.0 - max(0, epoch + opt.epoch_count - opt.n_epochs) / float(opt.n_epochs + 1) - return lr_l - scheduler = lr_scheduler.LambdaLR(optimizer, lr_lambda=lambda_rule) - elif opt.lr_policy == 'step': - scheduler = lr_scheduler.StepLR(optimizer, step_size=opt.lr_decay_epochs, gamma=0.2) - elif opt.lr_policy == 'plateau': - scheduler = lr_scheduler.ReduceLROnPlateau(optimizer, mode='min', factor=0.2, threshold=0.01, patience=5) - elif opt.lr_policy == 'cosine': - scheduler = lr_scheduler.CosineAnnealingLR(optimizer, T_max=opt.n_epochs, eta_min=0) - else: - return NotImplementedError('learning rate policy [%s] is not implemented', opt.lr_policy) - return scheduler - - -def define_net_recon(net_recon, use_last_fc=False, init_path=None): - return ReconNetWrapper(net_recon, use_last_fc=use_last_fc, init_path=init_path) - -def define_net_recog(net_recog, pretrained_path=None): - net = RecogNetWrapper(net_recog=net_recog, pretrained_path=pretrained_path) - net.eval() - return net - -class ReconNetWrapper(nn.Module): - fc_dim=257 - def __init__(self, net_recon, use_last_fc=False, init_path=None): - super(ReconNetWrapper, self).__init__() - self.use_last_fc = use_last_fc - if net_recon not in func_dict: - return NotImplementedError('network [%s] is not implemented', net_recon) - func, last_dim = func_dict[net_recon] - backbone = func(use_last_fc=use_last_fc, num_classes=self.fc_dim) - if init_path and os.path.isfile(init_path): - state_dict = filter_state_dict(torch.load(init_path, map_location='cpu')) - backbone.load_state_dict(state_dict) - print("loading init net_recon %s from %s" %(net_recon, init_path)) - self.backbone = backbone - if not use_last_fc: - self.final_layers = nn.ModuleList([ - conv1x1(last_dim, 80, bias=True), # id layer - conv1x1(last_dim, 64, bias=True), # exp layer - conv1x1(last_dim, 80, bias=True), # tex layer - conv1x1(last_dim, 3, bias=True), # angle layer - conv1x1(last_dim, 27, bias=True), # gamma layer - conv1x1(last_dim, 2, bias=True), # tx, ty - conv1x1(last_dim, 1, bias=True) # tz - ]) - for m in self.final_layers: - nn.init.constant_(m.weight, 0.) - nn.init.constant_(m.bias, 0.) - - def forward(self, x): - x = self.backbone(x) - if not self.use_last_fc: - output = [] - for layer in self.final_layers: - output.append(layer(x)) - x = torch.flatten(torch.cat(output, dim=1), 1) - return x - - -class RecogNetWrapper(nn.Module): - def __init__(self, net_recog, pretrained_path=None, input_size=112): - super(RecogNetWrapper, self).__init__() - net = get_model(name=net_recog, fp16=False) - if pretrained_path: - state_dict = torch.load(pretrained_path, map_location='cpu') - net.load_state_dict(state_dict) - print("loading pretrained net_recog %s from %s" %(net_recog, pretrained_path)) - for param in net.parameters(): - param.requires_grad = False - self.net = net - self.preprocess = lambda x: 2 * x - 1 - self.input_size=input_size - - def forward(self, image, M): - image = self.preprocess(resize_n_crop(image, M, self.input_size)) - id_feature = F.normalize(self.net(image), dim=-1, p=2) - return id_feature - - -# adapted from https://github.com/pytorch/vision/edit/master/torchvision/models/resnet.py -__all__ = ['ResNet', 'resnet18', 'resnet34', 'resnet50', 'resnet101', - 'resnet152', 'resnext50_32x4d', 'resnext101_32x8d', - 'wide_resnet50_2', 'wide_resnet101_2'] - - -model_urls = { - 'resnet18': 'https://download.pytorch.org/models/resnet18-f37072fd.pth', - 'resnet34': 'https://download.pytorch.org/models/resnet34-b627a593.pth', - 'resnet50': 'https://download.pytorch.org/models/resnet50-0676ba61.pth', - 'resnet101': 'https://download.pytorch.org/models/resnet101-63fe2227.pth', - 'resnet152': 'https://download.pytorch.org/models/resnet152-394f9c45.pth', - 'resnext50_32x4d': 'https://download.pytorch.org/models/resnext50_32x4d-7cdf4587.pth', - 'resnext101_32x8d': 'https://download.pytorch.org/models/resnext101_32x8d-8ba56ff5.pth', - 'wide_resnet50_2': 'https://download.pytorch.org/models/wide_resnet50_2-95faca4d.pth', - 'wide_resnet101_2': 'https://download.pytorch.org/models/wide_resnet101_2-32ee1156.pth', -} - - -def conv3x3(in_planes: int, out_planes: int, stride: int = 1, groups: int = 1, dilation: int = 1) -> nn.Conv2d: - """3x3 convolution with padding""" - return nn.Conv2d(in_planes, out_planes, kernel_size=3, stride=stride, - padding=dilation, groups=groups, bias=False, dilation=dilation) - - -def conv1x1(in_planes: int, out_planes: int, stride: int = 1, bias: bool = False) -> nn.Conv2d: - """1x1 convolution""" - return nn.Conv2d(in_planes, out_planes, kernel_size=1, stride=stride, bias=bias) - - -class BasicBlock(nn.Module): - expansion: int = 1 - - def __init__( - self, - inplanes: int, - planes: int, - stride: int = 1, - downsample: Optional[nn.Module] = None, - groups: int = 1, - base_width: int = 64, - dilation: int = 1, - norm_layer: Optional[Callable[..., nn.Module]] = None - ) -> None: - super(BasicBlock, self).__init__() - if norm_layer is None: - norm_layer = nn.BatchNorm2d - if groups != 1 or base_width != 64: - raise ValueError('BasicBlock only supports groups=1 and base_width=64') - if dilation > 1: - raise NotImplementedError("Dilation > 1 not supported in BasicBlock") - # Both self.conv1 and self.downsample layers downsample the input when stride != 1 - self.conv1 = conv3x3(inplanes, planes, stride) - self.bn1 = norm_layer(planes) - self.relu = nn.ReLU(inplace=True) - self.conv2 = conv3x3(planes, planes) - self.bn2 = norm_layer(planes) - self.downsample = downsample - self.stride = stride - - def forward(self, x: Tensor) -> Tensor: - identity = x - - out = self.conv1(x) - out = self.bn1(out) - out = self.relu(out) - - out = self.conv2(out) - out = self.bn2(out) - - if self.downsample is not None: - identity = self.downsample(x) - - out += identity - out = self.relu(out) - - return out - - -class Bottleneck(nn.Module): - # Bottleneck in torchvision places the stride for downsampling at 3x3 convolution(self.conv2) - # while original implementation places the stride at the first 1x1 convolution(self.conv1) - # according to "Deep residual learning for image recognition"https://arxiv.org/abs/1512.03385. - # This variant is also known as ResNet V1.5 and improves accuracy according to - # https://ngc.nvidia.com/catalog/model-scripts/nvidia:resnet_50_v1_5_for_pytorch. - - expansion: int = 4 - - def __init__( - self, - inplanes: int, - planes: int, - stride: int = 1, - downsample: Optional[nn.Module] = None, - groups: int = 1, - base_width: int = 64, - dilation: int = 1, - norm_layer: Optional[Callable[..., nn.Module]] = None - ) -> None: - super(Bottleneck, self).__init__() - if norm_layer is None: - norm_layer = nn.BatchNorm2d - width = int(planes * (base_width / 64.)) * groups - # Both self.conv2 and self.downsample layers downsample the input when stride != 1 - self.conv1 = conv1x1(inplanes, width) - self.bn1 = norm_layer(width) - self.conv2 = conv3x3(width, width, stride, groups, dilation) - self.bn2 = norm_layer(width) - self.conv3 = conv1x1(width, planes * self.expansion) - self.bn3 = norm_layer(planes * self.expansion) - self.relu = nn.ReLU(inplace=True) - self.downsample = downsample - self.stride = stride - - def forward(self, x: Tensor) -> Tensor: - identity = x - - out = self.conv1(x) - out = self.bn1(out) - out = self.relu(out) - - out = self.conv2(out) - out = self.bn2(out) - out = self.relu(out) - - out = self.conv3(out) - out = self.bn3(out) - - if self.downsample is not None: - identity = self.downsample(x) - - out += identity - out = self.relu(out) - - return out - - -class ResNet(nn.Module): - - def __init__( - self, - block: Type[Union[BasicBlock, Bottleneck]], - layers: List[int], - num_classes: int = 1000, - zero_init_residual: bool = False, - use_last_fc: bool = False, - groups: int = 1, - width_per_group: int = 64, - replace_stride_with_dilation: Optional[List[bool]] = None, - norm_layer: Optional[Callable[..., nn.Module]] = None - ) -> None: - super(ResNet, self).__init__() - if norm_layer is None: - norm_layer = nn.BatchNorm2d - self._norm_layer = norm_layer - - self.inplanes = 64 - self.dilation = 1 - if replace_stride_with_dilation is None: - # each element in the tuple indicates if we should replace - # the 2x2 stride with a dilated convolution instead - replace_stride_with_dilation = [False, False, False] - if len(replace_stride_with_dilation) != 3: - raise ValueError("replace_stride_with_dilation should be None " - "or a 3-element tuple, got {}".format(replace_stride_with_dilation)) - self.use_last_fc = use_last_fc - self.groups = groups - self.base_width = width_per_group - self.conv1 = nn.Conv2d(3, self.inplanes, kernel_size=7, stride=2, padding=3, - bias=False) - self.bn1 = norm_layer(self.inplanes) - self.relu = nn.ReLU(inplace=True) - self.maxpool = nn.MaxPool2d(kernel_size=3, stride=2, padding=1) - self.layer1 = self._make_layer(block, 64, layers[0]) - self.layer2 = self._make_layer(block, 128, layers[1], stride=2, - dilate=replace_stride_with_dilation[0]) - self.layer3 = self._make_layer(block, 256, layers[2], stride=2, - dilate=replace_stride_with_dilation[1]) - self.layer4 = self._make_layer(block, 512, layers[3], stride=2, - dilate=replace_stride_with_dilation[2]) - self.avgpool = nn.AdaptiveAvgPool2d((1, 1)) - - if self.use_last_fc: - self.fc = nn.Linear(512 * block.expansion, num_classes) - - for m in self.modules(): - if isinstance(m, nn.Conv2d): - nn.init.kaiming_normal_(m.weight, mode='fan_out', nonlinearity='relu') - elif isinstance(m, (nn.BatchNorm2d, nn.GroupNorm)): - nn.init.constant_(m.weight, 1) - nn.init.constant_(m.bias, 0) - - - - # Zero-initialize the last BN in each residual branch, - # so that the residual branch starts with zeros, and each residual block behaves like an identity. - # This improves the model by 0.2~0.3% according to https://arxiv.org/abs/1706.02677 - if zero_init_residual: - for m in self.modules(): - if isinstance(m, Bottleneck): - nn.init.constant_(m.bn3.weight, 0) # type: ignore[arg-type] - elif isinstance(m, BasicBlock): - nn.init.constant_(m.bn2.weight, 0) # type: ignore[arg-type] - - def _make_layer(self, block: Type[Union[BasicBlock, Bottleneck]], planes: int, blocks: int, - stride: int = 1, dilate: bool = False) -> nn.Sequential: - norm_layer = self._norm_layer - downsample = None - previous_dilation = self.dilation - if dilate: - self.dilation *= stride - stride = 1 - if stride != 1 or self.inplanes != planes * block.expansion: - downsample = nn.Sequential( - conv1x1(self.inplanes, planes * block.expansion, stride), - norm_layer(planes * block.expansion), - ) - - layers = [] - layers.append(block(self.inplanes, planes, stride, downsample, self.groups, - self.base_width, previous_dilation, norm_layer)) - self.inplanes = planes * block.expansion - for _ in range(1, blocks): - layers.append(block(self.inplanes, planes, groups=self.groups, - base_width=self.base_width, dilation=self.dilation, - norm_layer=norm_layer)) - - return nn.Sequential(*layers) - - def _forward_impl(self, x: Tensor) -> Tensor: - # See note [TorchScript super()] - x = self.conv1(x) - x = self.bn1(x) - x = self.relu(x) - x = self.maxpool(x) - - x = self.layer1(x) - x = self.layer2(x) - x = self.layer3(x) - x = self.layer4(x) - - x = self.avgpool(x) - if self.use_last_fc: - x = torch.flatten(x, 1) - x = self.fc(x) - return x - - def forward(self, x: Tensor) -> Tensor: - return self._forward_impl(x) - - -def _resnet( - arch: str, - block: Type[Union[BasicBlock, Bottleneck]], - layers: List[int], - pretrained: bool, - progress: bool, - **kwargs: Any -) -> ResNet: - model = ResNet(block, layers, **kwargs) - if pretrained: - state_dict = load_state_dict_from_url(model_urls[arch], - progress=progress) - model.load_state_dict(state_dict) - return model - - -def resnet18(pretrained: bool = False, progress: bool = True, **kwargs: Any) -> ResNet: - r"""ResNet-18 model from - `"Deep Residual Learning for Image Recognition" <https://arxiv.org/pdf/1512.03385.pdf>`_. - - Args: - pretrained (bool): If True, returns a model pre-trained on ImageNet - progress (bool): If True, displays a progress bar of the download to stderr - """ - return _resnet('resnet18', BasicBlock, [2, 2, 2, 2], pretrained, progress, - **kwargs) - - -def resnet34(pretrained: bool = False, progress: bool = True, **kwargs: Any) -> ResNet: - r"""ResNet-34 model from - `"Deep Residual Learning for Image Recognition" <https://arxiv.org/pdf/1512.03385.pdf>`_. - - Args: - pretrained (bool): If True, returns a model pre-trained on ImageNet - progress (bool): If True, displays a progress bar of the download to stderr - """ - return _resnet('resnet34', BasicBlock, [3, 4, 6, 3], pretrained, progress, - **kwargs) - - -def resnet50(pretrained: bool = False, progress: bool = True, **kwargs: Any) -> ResNet: - r"""ResNet-50 model from - `"Deep Residual Learning for Image Recognition" <https://arxiv.org/pdf/1512.03385.pdf>`_. - - Args: - pretrained (bool): If True, returns a model pre-trained on ImageNet - progress (bool): If True, displays a progress bar of the download to stderr - """ - return _resnet('resnet50', Bottleneck, [3, 4, 6, 3], pretrained, progress, - **kwargs) - - -def resnet101(pretrained: bool = False, progress: bool = True, **kwargs: Any) -> ResNet: - r"""ResNet-101 model from - `"Deep Residual Learning for Image Recognition" <https://arxiv.org/pdf/1512.03385.pdf>`_. - - Args: - pretrained (bool): If True, returns a model pre-trained on ImageNet - progress (bool): If True, displays a progress bar of the download to stderr - """ - return _resnet('resnet101', Bottleneck, [3, 4, 23, 3], pretrained, progress, - **kwargs) - - -def resnet152(pretrained: bool = False, progress: bool = True, **kwargs: Any) -> ResNet: - r"""ResNet-152 model from - `"Deep Residual Learning for Image Recognition" <https://arxiv.org/pdf/1512.03385.pdf>`_. - - Args: - pretrained (bool): If True, returns a model pre-trained on ImageNet - progress (bool): If True, displays a progress bar of the download to stderr - """ - return _resnet('resnet152', Bottleneck, [3, 8, 36, 3], pretrained, progress, - **kwargs) - - -def resnext50_32x4d(pretrained: bool = False, progress: bool = True, **kwargs: Any) -> ResNet: - r"""ResNeXt-50 32x4d model from - `"Aggregated Residual Transformation for Deep Neural Networks" <https://arxiv.org/pdf/1611.05431.pdf>`_. - - Args: - pretrained (bool): If True, returns a model pre-trained on ImageNet - progress (bool): If True, displays a progress bar of the download to stderr - """ - kwargs['groups'] = 32 - kwargs['width_per_group'] = 4 - return _resnet('resnext50_32x4d', Bottleneck, [3, 4, 6, 3], - pretrained, progress, **kwargs) - - -def resnext101_32x8d(pretrained: bool = False, progress: bool = True, **kwargs: Any) -> ResNet: - r"""ResNeXt-101 32x8d model from - `"Aggregated Residual Transformation for Deep Neural Networks" <https://arxiv.org/pdf/1611.05431.pdf>`_. - - Args: - pretrained (bool): If True, returns a model pre-trained on ImageNet - progress (bool): If True, displays a progress bar of the download to stderr - """ - kwargs['groups'] = 32 - kwargs['width_per_group'] = 8 - return _resnet('resnext101_32x8d', Bottleneck, [3, 4, 23, 3], - pretrained, progress, **kwargs) - - -def wide_resnet50_2(pretrained: bool = False, progress: bool = True, **kwargs: Any) -> ResNet: - r"""Wide ResNet-50-2 model from - `"Wide Residual Networks" <https://arxiv.org/pdf/1605.07146.pdf>`_. - - The model is the same as ResNet except for the bottleneck number of channels - which is twice larger in every block. The number of channels in outer 1x1 - convolutions is the same, e.g. last block in ResNet-50 has 2048-512-2048 - channels, and in Wide ResNet-50-2 has 2048-1024-2048. - - Args: - pretrained (bool): If True, returns a model pre-trained on ImageNet - progress (bool): If True, displays a progress bar of the download to stderr - """ - kwargs['width_per_group'] = 64 * 2 - return _resnet('wide_resnet50_2', Bottleneck, [3, 4, 6, 3], - pretrained, progress, **kwargs) - - -def wide_resnet101_2(pretrained: bool = False, progress: bool = True, **kwargs: Any) -> ResNet: - r"""Wide ResNet-101-2 model from - `"Wide Residual Networks" <https://arxiv.org/pdf/1605.07146.pdf>`_. - - The model is the same as ResNet except for the bottleneck number of channels - which is twice larger in every block. The number of channels in outer 1x1 - convolutions is the same, e.g. last block in ResNet-50 has 2048-512-2048 - channels, and in Wide ResNet-50-2 has 2048-1024-2048. - - Args: - pretrained (bool): If True, returns a model pre-trained on ImageNet - progress (bool): If True, displays a progress bar of the download to stderr - """ - kwargs['width_per_group'] = 64 * 2 - return _resnet('wide_resnet101_2', Bottleneck, [3, 4, 23, 3], - pretrained, progress, **kwargs) - - -func_dict = { - 'resnet18': (resnet18, 512), - 'resnet50': (resnet50, 2048) -} diff --git a/spaces/lj1995/vocal2guitar/config.py b/spaces/lj1995/vocal2guitar/config.py deleted file mode 100644 index 426994f4700f3b46c82deebc8acaec60b7570e5b..0000000000000000000000000000000000000000 --- a/spaces/lj1995/vocal2guitar/config.py +++ /dev/null @@ -1,110 +0,0 @@ -import argparse -import torch -from multiprocessing import cpu_count - - -def config_file_change_fp32(): - for config_file in ["32k.json", "40k.json", "48k.json"]: - with open(f"configs/{config_file}", "r") as f: - strr = f.read().replace("true", "false") - with open(f"configs/{config_file}", "w") as f: - f.write(strr) - with open("trainset_preprocess_pipeline_print.py", "r") as f: - strr = f.read().replace("3.7", "3.0") - with open("trainset_preprocess_pipeline_print.py", "w") as f: - f.write(strr) - - -class Config: - def __init__(self): - self.device = "cuda:0" - self.is_half = True - self.n_cpu = 0 - self.gpu_name = None - self.gpu_mem = None - ( - self.python_cmd, - self.listen_port, - self.iscolab, - self.noparallel, - self.noautoopen, - ) = self.arg_parse() - self.x_pad, self.x_query, self.x_center, self.x_max = self.device_config() - - @staticmethod - def arg_parse() -> tuple: - parser = argparse.ArgumentParser() - parser.add_argument("--port", type=int, default=7865, help="Listen port") - parser.add_argument( - "--pycmd", type=str, default="python", help="Python command" - ) - parser.add_argument("--colab", action="store_true", help="Launch in colab") - parser.add_argument( - "--noparallel", action="store_true", help="Disable parallel processing" - ) - parser.add_argument( - "--noautoopen", - action="store_true", - help="Do not open in browser automatically", - ) - cmd_opts = parser.parse_args() - - cmd_opts.port = cmd_opts.port if 0 <= cmd_opts.port <= 65535 else 7865 - - return ( - cmd_opts.pycmd, - cmd_opts.port, - cmd_opts.colab, - cmd_opts.noparallel, - cmd_opts.noautoopen, - ) - - def device_config(self) -> tuple: - if torch.cuda.is_available(): - i_device = int(self.device.split(":")[-1]) - self.gpu_name = torch.cuda.get_device_name(i_device) - if ( - ("16" in self.gpu_name and "V100" not in self.gpu_name.upper()) - or "P40" in self.gpu_name.upper() - or "1060" in self.gpu_name - or "1070" in self.gpu_name - or "1080" in self.gpu_name - ): - print("16系/10系显卡和P40强制单精度") - self.is_half = False - config_file_change_fp32() - else: - self.gpu_name = None - self.gpu_mem = int( - torch.cuda.get_device_properties(i_device).total_memory - / 1024 - / 1024 - / 1024 - + 0.4 - ) - if self.gpu_mem <= 4: - with open("trainset_preprocess_pipeline_print.py", "r") as f: - strr = f.read().replace("3.7", "3.0") - with open("trainset_preprocess_pipeline_print.py", "w") as f: - f.write(strr) - elif torch.backends.mps.is_available(): - print("没有发现支持的N卡, 使用MPS进行推理") - self.device = "mps" - self.is_half = False - config_file_change_fp32() - else: - print("没有发现支持的N卡, 使用CPU进行推理") - self.device = "cpu" - self.is_half = False - config_file_change_fp32() - - if self.n_cpu == 0: - self.n_cpu = cpu_count() - - x_pad = 0.1 - x_query = 4 - x_center = 20 - x_max = 22 - self.is_half=False - self.device="cpu" - return x_pad, x_query, x_center, x_max diff --git a/spaces/luost26/DiffAb/diffab/utils/transforms/select_atom.py b/spaces/luost26/DiffAb/diffab/utils/transforms/select_atom.py deleted file mode 100644 index 7d067ecb50873dc3ec3c5626bc8d1a836258780a..0000000000000000000000000000000000000000 --- a/spaces/luost26/DiffAb/diffab/utils/transforms/select_atom.py +++ /dev/null @@ -1,20 +0,0 @@ - -from ._base import register_transform - - -@register_transform('select_atom') -class SelectAtom(object): - - def __init__(self, resolution): - super().__init__() - assert resolution in ('full', 'backbone') - self.resolution = resolution - - def __call__(self, data): - if self.resolution == 'full': - data['pos_atoms'] = data['pos_heavyatom'][:, :] - data['mask_atoms'] = data['mask_heavyatom'][:, :] - elif self.resolution == 'backbone': - data['pos_atoms'] = data['pos_heavyatom'][:, :5] - data['mask_atoms'] = data['mask_heavyatom'][:, :5] - return data diff --git a/spaces/ma-xu/LIVE/pybind11/include/pybind11/functional.h b/spaces/ma-xu/LIVE/pybind11/include/pybind11/functional.h deleted file mode 100644 index 57b6cd210f4b99d9d76a93c17aeed3a183fc01a0..0000000000000000000000000000000000000000 --- a/spaces/ma-xu/LIVE/pybind11/include/pybind11/functional.h +++ /dev/null @@ -1,101 +0,0 @@ -/* - pybind11/functional.h: std::function<> support - - Copyright (c) 2016 Wenzel Jakob <wenzel.jakob@epfl.ch> - - All rights reserved. Use of this source code is governed by a - BSD-style license that can be found in the LICENSE file. -*/ - -#pragma once - -#include "pybind11.h" -#include <functional> - -PYBIND11_NAMESPACE_BEGIN(PYBIND11_NAMESPACE) -PYBIND11_NAMESPACE_BEGIN(detail) - -template <typename Return, typename... Args> -struct type_caster<std::function<Return(Args...)>> { - using type = std::function<Return(Args...)>; - using retval_type = conditional_t<std::is_same<Return, void>::value, void_type, Return>; - using function_type = Return (*) (Args...); - -public: - bool load(handle src, bool convert) { - if (src.is_none()) { - // Defer accepting None to other overloads (if we aren't in convert mode): - if (!convert) return false; - return true; - } - - if (!isinstance<function>(src)) - return false; - - auto func = reinterpret_borrow<function>(src); - - /* - When passing a C++ function as an argument to another C++ - function via Python, every function call would normally involve - a full C++ -> Python -> C++ roundtrip, which can be prohibitive. - Here, we try to at least detect the case where the function is - stateless (i.e. function pointer or lambda function without - captured variables), in which case the roundtrip can be avoided. - */ - if (auto cfunc = func.cpp_function()) { - auto c = reinterpret_borrow<capsule>(PyCFunction_GET_SELF(cfunc.ptr())); - auto rec = (function_record *) c; - - if (rec && rec->is_stateless && - same_type(typeid(function_type), *reinterpret_cast<const std::type_info *>(rec->data[1]))) { - struct capture { function_type f; }; - value = ((capture *) &rec->data)->f; - return true; - } - } - - // ensure GIL is held during functor destruction - struct func_handle { - function f; - func_handle(function&& f_) : f(std::move(f_)) {} - func_handle(const func_handle&) = default; - ~func_handle() { - gil_scoped_acquire acq; - function kill_f(std::move(f)); - } - }; - - // to emulate 'move initialization capture' in C++11 - struct func_wrapper { - func_handle hfunc; - func_wrapper(func_handle&& hf): hfunc(std::move(hf)) {} - Return operator()(Args... args) const { - gil_scoped_acquire acq; - object retval(hfunc.f(std::forward<Args>(args)...)); - /* Visual studio 2015 parser issue: need parentheses around this expression */ - return (retval.template cast<Return>()); - } - }; - - value = func_wrapper(func_handle(std::move(func))); - return true; - } - - template <typename Func> - static handle cast(Func &&f_, return_value_policy policy, handle /* parent */) { - if (!f_) - return none().inc_ref(); - - auto result = f_.template target<function_type>(); - if (result) - return cpp_function(*result, policy).release(); - else - return cpp_function(std::forward<Func>(f_), policy).release(); - } - - PYBIND11_TYPE_CASTER(type, _("Callable[[") + concat(make_caster<Args>::name...) + _("], ") - + make_caster<retval_type>::name + _("]")); -}; - -PYBIND11_NAMESPACE_END(detail) -PYBIND11_NAMESPACE_END(PYBIND11_NAMESPACE) diff --git a/spaces/ma-xu/LIVE/thrust/thrust/detail/allocator/default_construct_range.h b/spaces/ma-xu/LIVE/thrust/thrust/detail/allocator/default_construct_range.h deleted file mode 100644 index 6c3856c142990a3230c3bc4f805c0cb0a5fbcb73..0000000000000000000000000000000000000000 --- a/spaces/ma-xu/LIVE/thrust/thrust/detail/allocator/default_construct_range.h +++ /dev/null @@ -1,37 +0,0 @@ -/* - * Copyright 2008-2013 NVIDIA Corporation - * - * Licensed under the Apache License, Version 2.0 (the "License"); - * you may not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ - -#pragma once - -#include <thrust/detail/config.h> - -namespace thrust -{ -namespace detail -{ - - -template<typename Allocator, typename Pointer, typename Size> -__host__ __device__ -inline void default_construct_range(Allocator &a, Pointer p, Size n); - - -} // end detail -} // end thrust - -#include <thrust/detail/allocator/default_construct_range.inl> - - diff --git a/spaces/ma-xu/LIVE/thrust/thrust/system/detail/adl/per_device_resource.h b/spaces/ma-xu/LIVE/thrust/thrust/system/detail/adl/per_device_resource.h deleted file mode 100644 index 721f49e03fd49c5db5b1094575a62630d0509fc1..0000000000000000000000000000000000000000 --- a/spaces/ma-xu/LIVE/thrust/thrust/system/detail/adl/per_device_resource.h +++ /dev/null @@ -1,41 +0,0 @@ -/* - * Copyright 2018 NVIDIA Corporation - * - * Licensed under the Apache License, Version 2.0 (the "License"); - * you may not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ - -#pragma once - -#include <thrust/detail/config.h> - -// the purpose of this header is to #include the per_device_resource.h header -// of the sequential, host, and device systems. It should be #included in any -// code which uses adl to dispatch per_device_resource - -#include <thrust/system/detail/sequential/per_device_resource.h> - -#if 0 -#include <thrust/system/cpp/detail/per_device_resource.h> -#include <thrust/system/cuda/detail/per_device_resource.h> -#include <thrust/system/omp/detail/per_device_resource.h> -#include <thrust/system/tbb/detail/per_device_resource.h> -#endif - -#define __THRUST_HOST_SYSTEM_PER_DEVICE_RESOURCE_HEADER <__THRUST_HOST_SYSTEM_ROOT/detail/per_device_resource.h> -#include __THRUST_HOST_SYSTEM_PER_DEVICE_RESOURCE_HEADER -#undef __THRUST_HOST_SYSTEM_PER_DEVICE_RESOURCE_HEADER - -#define __THRUST_DEVICE_SYSTEM_PER_DEVICE_RESOURCE_HEADER <__THRUST_DEVICE_SYSTEM_ROOT/detail/per_device_resource.h> -#include __THRUST_DEVICE_SYSTEM_PER_DEVICE_RESOURCE_HEADER -#undef __THRUST_DEVICE_SYSTEM_PER_DEVICE_RESOURCE_HEADER - diff --git a/spaces/mani143/ai/README.md b/spaces/mani143/ai/README.md deleted file mode 100644 index 0437e40906ca53a4d7b863e2610a074635a213f1..0000000000000000000000000000000000000000 --- a/spaces/mani143/ai/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Ai -emoji: 🐠 -colorFrom: pink -colorTo: green -sdk: gradio -sdk_version: 3.39.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/medici/dreambooth-training/app.py b/spaces/medici/dreambooth-training/app.py deleted file mode 100644 index 7a438a3bfa4eaeda25f62aefd0cad77d494ed71d..0000000000000000000000000000000000000000 --- a/spaces/medici/dreambooth-training/app.py +++ /dev/null @@ -1,659 +0,0 @@ -from subprocess import getoutput -import os - -gpu_info = getoutput('nvidia-smi') -if("A10G" in gpu_info): - which_gpu = "A10G" - os.system(f"pip install -q https://github.com/camenduru/stable-diffusion-webui-colab/releases/download/0.0.15/xformers-0.0.15.dev0+4c06c79.d20221205-cp38-cp38-linux_x86_64.whl") -elif("T4" in gpu_info): - which_gpu = "T4" - os.system(f"pip install -q https://github.com/camenduru/stable-diffusion-webui-colab/releases/download/0.0.15/xformers-0.0.15.dev0+1515f77.d20221130-cp38-cp38-linux_x86_64.whl") -else: - which_gpu = "CPU" - -import gradio as gr -from pathlib import Path -import argparse -import shutil -from train_dreambooth import run_training -from convertosd import convert -from PIL import Image -from slugify import slugify -import requests -import torch -import zipfile -import tarfile -import urllib.parse -import gc -from diffusers import StableDiffusionPipeline -from huggingface_hub import snapshot_download, update_repo_visibility, HfApi - -is_spaces = True if "SPACE_ID" in os.environ else False -if(is_spaces): - is_shared_ui = True if "multimodalart/dreambooth-training" in os.environ['SPACE_ID'] else False -else: - is_shared_ui = False -is_gpu_associated = torch.cuda.is_available() - -css = ''' - .instruction{position: absolute; top: 0;right: 0;margin-top: 0px !important} - .arrow{position: absolute;top: 0;right: -110px;margin-top: -8px !important} - #component-4, #component-3, #component-10{min-height: 0} - .duplicate-button img{margin: 0} -''' -maximum_concepts = 3 - -#Pre download the files -if(is_gpu_associated): - model_v1 = snapshot_download(repo_id="multimodalart/sd-fine-tunable") - model_v2 = snapshot_download(repo_id="stabilityai/stable-diffusion-2-1", ignore_patterns=["*.ckpt", "*.safetensors"]) - model_v2_512 = snapshot_download(repo_id="stabilityai/stable-diffusion-2-1-base", ignore_patterns=["*.ckpt", "*.safetensors"]) - safety_checker = snapshot_download(repo_id="multimodalart/sd-sc") - model_to_load = model_v1 - -#with zipfile.ZipFile("mix.zip", 'r') as zip_ref: -# zip_ref.extractall(".") - -def swap_text(option, base): - resize_width = 768 if base == "v2-1-768" else 512 - mandatory_liability = "You must have the right to do so and you are liable for the images you use, example:" - if(option == "object"): - instance_prompt_example = "cttoy" - freeze_for = 30 - return [f"You are going to train `object`(s), upload 5-10 images of each object you are planning on training on from different angles/perspectives. You can use services like <a style='text-decoration: underline' target='_blank' href='https://www.birme.net/?target_width={resize_width}&target_height={resize_width}'>birme</a> for smart cropping. {mandatory_liability}:", '''<img src="file/cat-toy.png" />''', f"You should name your concept with a unique made up word that has low chance of the model already knowing it (e.g.: `{instance_prompt_example}` here). Images will be automatically cropped to {resize_width}x{resize_width}.", freeze_for, gr.update(visible=False)] - elif(option == "person"): - instance_prompt_example = "julcto" - freeze_for = 70 - #show_prior_preservation = True if base != "v2-1-768" else False - show_prior_preservation=False - if(show_prior_preservation): - prior_preservation_box_update = gr.update(visible=show_prior_preservation) - else: - prior_preservation_box_update = gr.update(visible=show_prior_preservation, value=False) - return [f"You are going to train a `person`(s), upload 10-20 images of each person you are planning on training on from different angles/perspectives. You can use services like <a style='text-decoration: underline' target='_blank' href='https://www.birme.net/?target_width={resize_width}&target_height={resize_width}'>birme</a> for smart cropping. {mandatory_liability}:", '''<img src="file/person.png" />''', f"You should name your concept with a unique made up word that has low chance of the model already knowing it (e.g.: `{instance_prompt_example}` here). Images will be automatically cropped to {resize_width}x{resize_width}.", freeze_for, prior_preservation_box_update] - elif(option == "style"): - instance_prompt_example = "trsldamrl" - freeze_for = 10 - return [f"You are going to train a `style`, upload 10-20 images of the style you are planning on training on. You can use services like <a style='text-decoration: underline' target='_blank' href='https://www.birme.net/?target_width={resize_width}&target_height={resize_width}'>birme</a> for smart cropping. {mandatory_liability}:", '''<img src="file/trsl_style.png" />''', f"You should name your concept with a unique made up word that has low chance of the model already knowing it (e.g.: `{instance_prompt_example}` here). Images will be automatically cropped to {resize_width}x{resize_width}", freeze_for, gr.update(visible=False)] - -def swap_base_model(selected_model): - if(is_gpu_associated): - global model_to_load - if(selected_model == "v1-5"): - model_to_load = model_v1 - elif(selected_model == "v2-1-768"): - model_to_load = model_v2 - else: - model_to_load = model_v2_512 - -def count_files(*inputs): - file_counter = 0 - concept_counter = 0 - for i, input in enumerate(inputs): - if(i < maximum_concepts-1): - files = inputs[i] - if(files): - concept_counter+=1 - file_counter+=len(files) - uses_custom = inputs[-1] - type_of_thing = inputs[-4] - selected_model = inputs[-5] - experimental_faces = inputs[-6] - if(uses_custom): - Training_Steps = int(inputs[-3]) - else: - Training_Steps = file_counter*150 - if(type_of_thing == "person" and Training_Steps > 2400): - Training_Steps = 2400 #Avoid overfitting on person faces - if(is_spaces): - if(selected_model == "v1-5"): - its = 1.1 if which_gpu == "T4" else 1.8 - if(experimental_faces): - its = 1 - elif(selected_model == "v2-1-512"): - its = 0.8 if which_gpu == "T4" else 1.5 - if(experimental_faces): - its = 0.7 - elif(selected_model == "v2-1-768"): - its = 0.48 if which_gpu == "T4" else 0.85 - - gpu_price = 0.60 if which_gpu == "T4" else 1.10 - summary_sentence = f'''You are going to train {concept_counter} {type_of_thing}(s), with {file_counter} images for {Training_Steps} steps. The training should take around {round(Training_Steps/its, 2)} seconds, or {round((Training_Steps/its)/60, 2)} minutes. - The setup, compression and uploading the model can take up to 20 minutes.<br>As the {which_gpu}-Small GPU costs US${gpu_price} for 1h, <span style="font-size: 120%"><b>the estimated cost for this training is below US${round((((Training_Steps/its)/3600)+0.3+0.1)*gpu_price, 2)}.</b></span><br><br> - If you check the box below the GPU attribution will automatically removed after training is done and the model is uploaded. If not, don't forget to come back here and swap the hardware back to CPU.<br><br>''' - else: - summary_sentence = f'''You are going to train {concept_counter} {type_of_thing}(s), with {file_counter} images for {Training_Steps} steps.<br><br>''' - - return([gr.update(visible=True), gr.update(visible=True, value=summary_sentence)]) - -def update_steps(*files_list): - file_counter = 0 - for i, files in enumerate(files_list): - if(files): - file_counter+=len(files) - return(gr.update(value=file_counter*200)) - -def pad_image(image): - w, h = image.size - if w == h: - return image - elif w > h: - new_image = Image.new(image.mode, (w, w), (0, 0, 0)) - new_image.paste(image, (0, (w - h) // 2)) - return new_image - else: - new_image = Image.new(image.mode, (h, h), (0, 0, 0)) - new_image.paste(image, ((h - w) // 2, 0)) - return new_image - -def validate_model_upload(hf_token, model_name): - if(hf_token != ''): - api = HfApi() - try: - _ = api.whoami(hf_token) - except: - raise gr.Error("You have inserted an invalid Hugging Face token") - try: - if(is_spaces): - update_repo_visibility(repo_id=os.environ['SPACE_ID'], private=True, token=hf_token, repo_type="space") - except: - raise gr.Error("Oops, you created a Hugging Face token with read permissions only. You need one with write permissions") - else: - raise gr.Error("Please insert a Hugging Face Token (make sure to create it with write permissions)") - if(model_name == ""): - raise gr.Error("Please fill in your model's name") - -def train(*inputs): - if is_shared_ui: - raise gr.Error("This Space only works in duplicated instances") - if not is_gpu_associated: - raise gr.Error("Please associate a T4 or A10G GPU for this Space") - hf_token = inputs[-5] - model_name = inputs[-7] - if(is_spaces): - remove_attribution_after = inputs[-6] - else: - remove_attribution_after = False - - if(remove_attribution_after): - validate_model_upload(hf_token, model_name) - - torch.cuda.empty_cache() - if 'pipe' in globals(): - global pipe, pipe_is_set - del pipe - pipe_is_set = False - gc.collect() - - if os.path.exists("output_model"): shutil.rmtree('output_model') - if os.path.exists("instance_images"): shutil.rmtree('instance_images') - if os.path.exists("diffusers_model.tar"): os.remove("diffusers_model.tar") - if os.path.exists("model.ckpt"): os.remove("model.ckpt") - if os.path.exists("hastrained.success"): os.remove("hastrained.success") - file_counter = 0 - which_model = inputs[-10] - resolution = 512 if which_model != "v2-1-768" else 768 - for i, input in enumerate(inputs): - if(i < maximum_concepts-1): - if(input): - os.makedirs('instance_images',exist_ok=True) - files = inputs[i+(maximum_concepts*2)] - prompt = inputs[i+maximum_concepts] - if(prompt == "" or prompt == None): - raise gr.Error("You forgot to define your concept prompt") - for j, file_temp in enumerate(files): - file = Image.open(file_temp.name) - image = pad_image(file) - image = image.resize((resolution, resolution)) - extension = file_temp.name.split(".")[1] - image = image.convert('RGB') - image.save(f'instance_images/{prompt}_({j+1}).jpg', format="JPEG", quality = 100) - file_counter += 1 - - os.makedirs('output_model',exist_ok=True) - uses_custom = inputs[-1] - type_of_thing = inputs[-4] - experimental_face_improvement = inputs[-9] - - if(uses_custom): - Training_Steps = int(inputs[-3]) - Train_text_encoder_for = int(inputs[-2]) - else: - if(type_of_thing == "object"): - Train_text_encoder_for=30 - - elif(type_of_thing == "style"): - Train_text_encoder_for=15 - - elif(type_of_thing == "person"): - Train_text_encoder_for=70 - - Training_Steps = file_counter*150 - if(type_of_thing == "person" and Training_Steps > 2600): - Training_Steps = 2600 #Avoid overfitting on people's faces - stptxt = int((Training_Steps*Train_text_encoder_for)/100) - gradient_checkpointing = True if (experimental_face_improvement or which_model != "v1-5") else False - cache_latents = True if which_model != "v1-5" else False - if (type_of_thing == "object" or type_of_thing == "style" or (type_of_thing == "person" and not experimental_face_improvement)): - args_general = argparse.Namespace( - image_captions_filename = True, - train_text_encoder = True if stptxt > 0 else False, - stop_text_encoder_training = stptxt, - save_n_steps = 0, - pretrained_model_name_or_path = model_to_load, - instance_data_dir="instance_images", - class_data_dir=None, - output_dir="output_model", - instance_prompt="", - seed=42, - resolution=resolution, - mixed_precision="fp16", - train_batch_size=1, - gradient_accumulation_steps=1, - use_8bit_adam=True, - learning_rate=2e-6, - lr_scheduler="polynomial", - lr_warmup_steps = 0, - max_train_steps=Training_Steps, - gradient_checkpointing=gradient_checkpointing, - cache_latents=cache_latents, - ) - print("Starting single training...") - lock_file = open("intraining.lock", "w") - lock_file.close() - run_training(args_general) - else: - args_general = argparse.Namespace( - image_captions_filename = True, - train_text_encoder = True if stptxt > 0 else False, - stop_text_encoder_training = stptxt, - save_n_steps = 0, - pretrained_model_name_or_path = model_to_load, - instance_data_dir="instance_images", - class_data_dir="Mix", - output_dir="output_model", - with_prior_preservation=True, - prior_loss_weight=1.0, - instance_prompt="", - seed=42, - resolution=resolution, - mixed_precision="fp16", - train_batch_size=1, - gradient_accumulation_steps=1, - use_8bit_adam=True, - learning_rate=2e-6, - lr_scheduler="polynomial", - lr_warmup_steps = 0, - max_train_steps=Training_Steps, - num_class_images=200, - gradient_checkpointing=gradient_checkpointing, - cache_latents=cache_latents, - ) - print("Starting multi-training...") - lock_file = open("intraining.lock", "w") - lock_file.close() - run_training(args_general) - gc.collect() - torch.cuda.empty_cache() - if(which_model == "v1-5"): - print("Adding Safety Checker to the model...") - shutil.copytree(f"{safety_checker}/feature_extractor", "output_model/feature_extractor", dirs_exist_ok=True) - shutil.copytree(f"{safety_checker}/safety_checker", "output_model/safety_checker", dirs_exist_ok=True) - shutil.copy(f"model_index.json", "output_model/model_index.json") - - if(not remove_attribution_after): - print("Archiving model file...") - with tarfile.open("diffusers_model.tar", "w") as tar: - tar.add("output_model", arcname=os.path.basename("output_model")) - if os.path.exists("intraining.lock"): os.remove("intraining.lock") - trained_file = open("hastrained.success", "w") - trained_file.close() - print("Training completed!") - return [ - gr.update(visible=True, value=["diffusers_model.tar"]), #result - gr.update(visible=True), #try_your_model - gr.update(visible=True), #push_to_hub - gr.update(visible=True), #convert_button - gr.update(visible=False), #training_ongoing - gr.update(visible=True) #completed_training - ] - else: - where_to_upload = inputs[-8] - push(model_name, where_to_upload, hf_token, which_model, True) - hardware_url = f"https://huggingface.co/spaces/{os.environ['SPACE_ID']}/hardware" - headers = { "authorization" : f"Bearer {hf_token}"} - body = {'flavor': 'cpu-basic'} - requests.post(hardware_url, json = body, headers=headers) - -pipe_is_set = False -def generate(prompt, steps): - torch.cuda.empty_cache() - from diffusers import StableDiffusionPipeline - global pipe_is_set - if(not pipe_is_set): - global pipe - pipe = StableDiffusionPipeline.from_pretrained("./output_model", torch_dtype=torch.float16) - pipe = pipe.to("cuda") - pipe_is_set = True - - image = pipe(prompt, num_inference_steps=steps).images[0] - return(image) - -def push(model_name, where_to_upload, hf_token, which_model, comes_from_automated=False): - validate_model_upload(hf_token, model_name) - if(not os.path.exists("model.ckpt")): - convert("output_model", "model.ckpt") - from huggingface_hub import HfApi, HfFolder, CommitOperationAdd - from huggingface_hub import create_repo - model_name_slug = slugify(model_name) - api = HfApi() - your_username = api.whoami(token=hf_token)["name"] - if(where_to_upload == "My personal profile"): - model_id = f"{your_username}/{model_name_slug}" - else: - model_id = f"sd-dreambooth-library/{model_name_slug}" - headers = {"Authorization" : f"Bearer: {hf_token}", "Content-Type": "application/json"} - response = requests.post("https://huggingface.co/organizations/sd-dreambooth-library/share/SSeOwppVCscfTEzFGQaqpfcjukVeNrKNHX", headers=headers) - - print(f"Starting to upload the model {model_id}...") - images_upload = os.listdir("instance_images") - image_string = "" - instance_prompt_list = [] - previous_instance_prompt = '' - for i, image in enumerate(images_upload): - instance_prompt = image.split("_")[0] - if(instance_prompt != previous_instance_prompt): - title_instance_prompt_string = instance_prompt - instance_prompt_list.append(instance_prompt) - else: - title_instance_prompt_string = '' - previous_instance_prompt = instance_prompt - image_string = f'''{title_instance_prompt_string} {"(use that on your prompt)" if title_instance_prompt_string != "" else ""} -{image_string}![{instance_prompt} {i}](https://huggingface.co/{model_id}/resolve/main/concept_images/{urllib.parse.quote(image)})''' - readme_text = f'''--- -license: creativeml-openrail-m -tags: -- text-to-image -widget: -- text: {instance_prompt_list[0]} ---- -### {model_name} Dreambooth model trained by {api.whoami(token=hf_token)["name"]} with [Hugging Face Dreambooth Training Space](https://huggingface.co/spaces/multimodalart/dreambooth-training) with the {which_model} base model - -You run your new concept via `diffusers` [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb). Don't forget to use the concept prompts! - -Sample pictures of: -{image_string} -''' - #Save the readme to a file - readme_file = open("model.README.md", "w") - readme_file.write(readme_text) - readme_file.close() - #Save the token identifier to a file - text_file = open("token_identifier.txt", "w") - text_file.write(', '.join(instance_prompt_list)) - text_file.close() - try: - create_repo(model_id,private=True, token=hf_token) - except: - import time - epoch_time = str(int(time.time())) - create_repo(f"{model_id}-{epoch_time}", private=True,token=hf_token) - operations = [ - CommitOperationAdd(path_in_repo="token_identifier.txt", path_or_fileobj="token_identifier.txt"), - CommitOperationAdd(path_in_repo="README.md", path_or_fileobj="model.README.md"), - CommitOperationAdd(path_in_repo=f"model.ckpt",path_or_fileobj="model.ckpt") - ] - api.create_commit( - repo_id=model_id, - operations=operations, - commit_message=f"Upload the model {model_name}", - token=hf_token - ) - api.upload_folder( - folder_path="output_model", - repo_id=model_id, - token=hf_token - ) - api.upload_folder( - folder_path="instance_images", - path_in_repo="concept_images", - repo_id=model_id, - token=hf_token - ) - if is_spaces: - if(not comes_from_automated): - extra_message = "Don't forget to remove the GPU attribution after you play with it." - else: - extra_message = "The GPU has been removed automatically as requested, and you can try the model via the model page" - api.create_discussion(repo_id=os.environ['SPACE_ID'], title=f"Your model {model_name} has finished trained from the Dreambooth Train Spaces!", description=f"Your model has been successfully uploaded to: https://huggingface.co/{model_id}. {extra_message}",repo_type="space", token=hf_token) - print("Model uploaded successfully!") - return [gr.update(visible=True, value=f"Successfully uploaded your model. Access it [here](https://huggingface.co/{model_id})"), gr.update(visible=True, value=["diffusers_model.tar", "model.ckpt"])] - -def convert_to_ckpt(): - if 'pipe' in globals(): - global pipe, pipe_is_set - del pipe - pipe_is_set = False - gc.collect() - convert("output_model", "model.ckpt") - return gr.update(visible=True, value=["diffusers_model.tar", "model.ckpt"]) - -def check_status(top_description): - if os.path.exists("hastrained.success"): - if is_spaces: - update_top_tag = gr.update(value=f''' - <div class="gr-prose" style="max-width: 80%"> - <h2>Your model has finished training ✅</h2> - <p>Yay, congratulations on training your model. Scroll down to play with with it, save it (either downloading it or on the Hugging Face Hub). Once you are done, your model is safe, and you don't want to train a new one, go to the <a href="https://huggingface.co/spaces/{os.environ['SPACE_ID']}" target="_blank">settings page</a> and downgrade your Space to a CPU Basic</p> - </div> - ''') - else: - update_top_tag = gr.update(value=f''' - <div class="gr-prose" style="max-width: 80%"> - <h2>Your model has finished training ✅</h2> - <p>Yay, congratulations on training your model. Scroll down to play with with it, save it (either downloading it or on the Hugging Face Hub).</p> - </div> - ''') - show_outputs = True - elif os.path.exists("intraining.lock"): - update_top_tag = gr.update(value=''' - <div class="gr-prose" style="max-width: 80%"> - <h2>Don't worry, your model is still training! ⌛</h2> - <p>You closed the tab while your model was training, but it's all good! It is still training right now. You can click the "Open logs" button above here to check the training status. Once training is done, reload this tab to interact with your model</p> - </div> - ''') - show_outputs = False - else: - update_top_tag = gr.update(value=top_description) - show_outputs = False - if os.path.exists("diffusers_model.tar"): - update_files_tag = gr.update(visible=show_outputs, value=["diffusers_model.tar"]) - else: - update_files_tag = gr.update(visible=show_outputs) - return [ - update_top_tag, #top_description - gr.update(visible=show_outputs), #try_your_model - gr.update(visible=show_outputs), #push_to_hub - update_files_tag, #result - gr.update(visible=show_outputs), #convert_button - ] - -def checkbox_swap(checkbox): - return [gr.update(visible=checkbox), gr.update(visible=checkbox), gr.update(visible=checkbox), gr.update(visible=checkbox)] - -with gr.Blocks(css=css) as demo: - with gr.Box(): - if is_shared_ui: - top_description = gr.HTML(f''' - <div class="gr-prose" style="max-width: 80%"> - <h2>Attention - This Space doesn't work in this shared UI</h2> - <p>For it to work, you can either run locally or duplicate the Space and run it on your own profile using a (paid) private T4-small or A10G-small GPU for training. A T4 costs US$0.60/h, so it should cost < US$1 to train most models using default settings with it!&nbsp;&nbsp;<a class="duplicate-button" style="display:inline-block" target="_blank" href="https://huggingface.co/spaces/{os.environ['SPACE_ID']}?duplicate=true"><img src="https://img.shields.io/badge/-Duplicate%20Space-blue?labelColor=white&style=flat&logo=data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAABAAAAAQCAYAAAAf8/9hAAAAAXNSR0IArs4c6QAAAP5JREFUOE+lk7FqAkEURY+ltunEgFXS2sZGIbXfEPdLlnxJyDdYB62sbbUKpLbVNhyYFzbrrA74YJlh9r079973psed0cvUD4A+4HoCjsA85X0Dfn/RBLBgBDxnQPfAEJgBY+A9gALA4tcbamSzS4xq4FOQAJgCDwV2CPKV8tZAJcAjMMkUe1vX+U+SMhfAJEHasQIWmXNN3abzDwHUrgcRGmYcgKe0bxrblHEB4E/pndMazNpSZGcsZdBlYJcEL9Afo75molJyM2FxmPgmgPqlWNLGfwZGG6UiyEvLzHYDmoPkDDiNm9JR9uboiONcBXrpY1qmgs21x1QwyZcpvxt9NS09PlsPAAAAAElFTkSuQmCC&logoWidth=14" alt="Duplicate Space"></a></p> - <img class="instruction" src="file/duplicate.png"> - <img class="arrow" src="file/arrow.png" /> - </div> - ''') - elif(is_spaces): - if(is_gpu_associated): - top_description = gr.HTML(f''' - <div class="gr-prose" style="max-width: 80%"> - <h2>You have successfully associated a {which_gpu} GPU to the Dreambooth Training Space 🎉</h2> - <p>You can now train your model! You will be billed by the minute from when you activated the GPU until when it is turned it off.</p> - </div> - ''') - else: - top_description = gr.HTML(f''' - <div class="gr-prose" style="max-width: 80%"> - <h2>You have successfully duplicated the Dreambooth Training Space 🎉</h2> - <p>There's only one step left before you can train your model: <a href="https://huggingface.co/spaces/{os.environ['SPACE_ID']}/settings" style="text-decoration: underline" target="_blank">attribute a <b>T4-small or A10G-small GPU</b> to it (via the Settings tab)</a> and run the training below. You will be billed by the minute from when you activate the GPU until when it is turned it off.</p> - </div> - ''') - else: - top_description = gr.HTML(f''' - <div class="gr-prose" style="max-width: 80%"> - <h2>You have successfully cloned the Dreambooth Training Space locally 🎉</h2> - <p>Do a <code>pip install requirements-local.txt</code></p> - </div> - ''') - gr.Markdown("# Dreambooth Training UI 💭") - gr.Markdown("Customize Stable Diffusion v1 or v2 (ⁿᵉʷ!) by giving it a few examples of a concept. Based on the [🧨 diffusers](https://github.com/huggingface/diffusers) implementation, additional techniques from [TheLastBen](https://github.com/TheLastBen/diffusers) and [ShivamShrirao](https://github.com/ShivamShrirao/diffusers)") - - with gr.Row() as what_are_you_training: - type_of_thing = gr.Dropdown(label="What would you like to train?", choices=["object", "person", "style"], value="object", interactive=True) - base_model_to_use = gr.Dropdown(label="Which base model would you like to use?", choices=["v1-5", "v2-1-512", "v2-1-768"], value="v1-5", interactive=True) - - #Very hacky approach to emulate dynamically created Gradio components - with gr.Row() as upload_your_concept: - with gr.Column(): - thing_description = gr.Markdown("You are going to train an `object`, please upload 5-10 images of the object you are planning on training on from different angles/perspectives. You must have the right to do so and you are liable for the images you use, example") - thing_experimental = gr.Checkbox(label="Improve faces (prior preservation) - can take longer training but can improve faces", visible=False, value=False) - thing_image_example = gr.HTML('''<img src="file/cat-toy.png" />''') - things_naming = gr.Markdown("You should name your concept with a unique made up word that has low chance of the model already knowing it (e.g.: `cttoy` here). Images will be automatically cropped to 512x512.") - - with gr.Column(): - file_collection = [] - concept_collection = [] - buttons_collection = [] - delete_collection = [] - is_visible = [] - - row = [None] * maximum_concepts - for x in range(maximum_concepts): - ordinal = lambda n: "%d%s" % (n, "tsnrhtdd"[(n // 10 % 10 != 1) * (n % 10 < 4) * n % 10::4]) - if(x == 0): - visible = True - is_visible.append(gr.State(value=True)) - else: - visible = False - is_visible.append(gr.State(value=False)) - - file_collection.append(gr.File(label=f'''Upload the images for your {ordinal(x+1) if (x>0) else ""} concept''', file_count="multiple", interactive=True, visible=visible)) - with gr.Column(visible=visible) as row[x]: - concept_collection.append(gr.Textbox(label=f'''{ordinal(x+1) if (x>0) else ""} concept prompt - use a unique, made up word to avoid collisions''')) - with gr.Row(): - if(x < maximum_concepts-1): - buttons_collection.append(gr.Button(value="Add +1 concept", visible=visible)) - if(x > 0): - delete_collection.append(gr.Button(value=f"Delete {ordinal(x+1)} concept")) - - counter_add = 1 - for button in buttons_collection: - if(counter_add < len(buttons_collection)): - button.click(lambda: - [gr.update(visible=True),gr.update(visible=True), gr.update(visible=False), gr.update(visible=True), True, None], - None, - [row[counter_add], file_collection[counter_add], buttons_collection[counter_add-1], buttons_collection[counter_add], is_visible[counter_add], file_collection[counter_add]], queue=False) - else: - button.click(lambda:[gr.update(visible=True),gr.update(visible=True), gr.update(visible=False), True], None, [row[counter_add], file_collection[counter_add], buttons_collection[counter_add-1], is_visible[counter_add]], queue=False) - counter_add += 1 - - counter_delete = 1 - for delete_button in delete_collection: - if(counter_delete < len(delete_collection)+1): - delete_button.click(lambda:[gr.update(visible=False),gr.update(visible=False), gr.update(visible=True), False], None, [file_collection[counter_delete], row[counter_delete], buttons_collection[counter_delete-1], is_visible[counter_delete]], queue=False) - counter_delete += 1 - - with gr.Accordion("Custom Settings", open=False): - swap_auto_calculated = gr.Checkbox(label="Use custom settings") - gr.Markdown("If not checked, the % of frozen encoder will be tuned automatically to whether you are training an `object`, `person` or `style`. The text-encoder is frozen after 10% of the steps for a style, 30% of the steps for an object and 75% trained for persons. The number of steps varies between 1400 and 2400 depending on how many images uploaded. If you see too many artifacts in your output, it means it may have overfit and you need less steps. If your results aren't really what you wanted, it may be underfitting and you need more steps.") - steps = gr.Number(label="How many steps", value=2400) - perc_txt_encoder = gr.Number(label="Percentage of the training steps the text-encoder should be trained as well", value=30) - - with gr.Box(visible=False) as training_summary: - training_summary_text = gr.HTML("", visible=True, label="Training Summary") - is_advanced_visible = True if is_spaces else False - training_summary_checkbox = gr.Checkbox(label="Automatically remove paid GPU attribution and upload model to the Hugging Face Hub after training", value=True, visible=is_advanced_visible) - training_summary_model_name = gr.Textbox(label="Name of your model", visible=True) - training_summary_where_to_upload = gr.Dropdown(["My personal profile", "Public Library"], value="My personal profile", label="Upload to", visible=True) - training_summary_token_message = gr.Markdown("[A Hugging Face write access token](https://huggingface.co/settings/tokens), go to \"New token\" -> Role : Write. A regular read token won't work here.", visible=True) - training_summary_token = gr.Textbox(label="Hugging Face Write Token", type="password", visible=True) - - train_btn = gr.Button("Start Training") - if(is_shared_ui): - training_ongoing = gr.Markdown("## This Space only works in duplicated instances. Please duplicate it and try again!", visible=False) - elif(not is_gpu_associated): - training_ongoing = gr.Markdown("## Oops, you haven't associated your T4 or A10G GPU to this Space. Visit the Settings tab, associate and try again.", visible=False) - else: - training_ongoing = gr.Markdown("## Training is ongoing ⌛... You can close this tab if you like or just wait. If you did not check the `Remove GPU After training`, you can come back here to try your model and upload it after training. Don't forget to remove the GPU attribution after you are done. ", visible=False) - - #Post-training UI - completed_training = gr.Markdown('''# ✅ Training completed. - ### Don't forget to remove the GPU attribution after you are done trying and uploading your model''', visible=False) - - with gr.Row(): - with gr.Box(visible=False) as try_your_model: - gr.Markdown("## Try your model") - prompt = gr.Textbox(label="Type your prompt") - result_image = gr.Image() - inference_steps = gr.Slider(minimum=1, maximum=150, value=50, step=1) - generate_button = gr.Button("Generate Image") - - with gr.Box(visible=False) as push_to_hub: - gr.Markdown("## Push to Hugging Face Hub") - model_name = gr.Textbox(label="Name of your model", placeholder="Tarsila do Amaral Style") - where_to_upload = gr.Dropdown(["My personal profile", "Public Library"], label="Upload to") - gr.Markdown("[A Hugging Face write access token](https://huggingface.co/settings/tokens), go to \"New token\" -> Role : Write. A regular read token won't work here.") - hf_token = gr.Textbox(label="Hugging Face Write Token", type="password") - - push_button = gr.Button("Push to the Hub") - - result = gr.File(label="Download the uploaded models in the diffusers format", visible=True) - success_message_upload = gr.Markdown(visible=False) - convert_button = gr.Button("Convert to CKPT", visible=False) - - #Swap the examples and the % of text encoder trained depending if it is an object, person or style - type_of_thing.change(fn=swap_text, inputs=[type_of_thing, base_model_to_use], outputs=[thing_description, thing_image_example, things_naming, perc_txt_encoder, thing_experimental], queue=False, show_progress=False) - - #Swap the base model - base_model_to_use.change(fn=swap_text, inputs=[type_of_thing, base_model_to_use], outputs=[thing_description, thing_image_example, things_naming, perc_txt_encoder, thing_experimental], queue=False, show_progress=False) - base_model_to_use.change(fn=swap_base_model, inputs=base_model_to_use, outputs=[]) - - #Update the summary box below the UI according to how many images are uploaded and whether users are using custom settings or not - for file in file_collection: - #file.change(fn=update_steps,inputs=file_collection, outputs=steps) - file.change(fn=count_files, inputs=file_collection+[thing_experimental]+[base_model_to_use]+[type_of_thing]+[steps]+[perc_txt_encoder]+[swap_auto_calculated], outputs=[training_summary, training_summary_text], queue=False) - - thing_experimental.change(fn=count_files, inputs=file_collection+[thing_experimental]+[base_model_to_use]+[type_of_thing]+[steps]+[perc_txt_encoder]+[swap_auto_calculated], outputs=[training_summary, training_summary_text], queue=False) - base_model_to_use.change(fn=count_files, inputs=file_collection+[thing_experimental]+[base_model_to_use]+[type_of_thing]+[steps]+[perc_txt_encoder]+[swap_auto_calculated], outputs=[training_summary, training_summary_text], queue=False) - steps.change(fn=count_files, inputs=file_collection+[thing_experimental]+[base_model_to_use]+[type_of_thing]+[steps]+[perc_txt_encoder]+[swap_auto_calculated], outputs=[training_summary, training_summary_text], queue=False) - perc_txt_encoder.change(fn=count_files, inputs=file_collection+[thing_experimental]+[base_model_to_use]+[type_of_thing]+[steps]+[perc_txt_encoder]+[swap_auto_calculated], outputs=[training_summary, training_summary_text], queue=False) - - #Give more options if the user wants to finish everything after training - if(is_spaces): - training_summary_checkbox.change(fn=checkbox_swap, inputs=training_summary_checkbox, outputs=[training_summary_token_message, training_summary_token, training_summary_model_name, training_summary_where_to_upload],queue=False, show_progress=False) - #Add a message for while it is in training - train_btn.click(lambda:gr.update(visible=True), inputs=None, outputs=training_ongoing) - - #The main train function - train_btn.click(fn=train, inputs=is_visible+concept_collection+file_collection+[base_model_to_use]+[thing_experimental]+[training_summary_where_to_upload]+[training_summary_model_name]+[training_summary_checkbox]+[training_summary_token]+[type_of_thing]+[steps]+[perc_txt_encoder]+[swap_auto_calculated], outputs=[result, try_your_model, push_to_hub, convert_button, training_ongoing, completed_training], queue=False) - - #Button to generate an image from your trained model after training - generate_button.click(fn=generate, inputs=[prompt, inference_steps], outputs=result_image, queue=False) - #Button to push the model to the Hugging Face Hub - push_button.click(fn=push, inputs=[model_name, where_to_upload, hf_token, base_model_to_use], outputs=[success_message_upload, result], queue=False) - #Button to convert the model to ckpt format - convert_button.click(fn=convert_to_ckpt, inputs=[], outputs=result, queue=False) - - #Checks if the training is running - demo.load(fn=check_status, inputs=top_description, outputs=[top_description, try_your_model, push_to_hub, result, convert_button], queue=False, show_progress=False) - -demo.queue(default_enabled=False).launch(debug=True) \ No newline at end of file diff --git a/spaces/merve/fill-in-the-blank/public/measuring-fairness/slides.js b/spaces/merve/fill-in-the-blank/public/measuring-fairness/slides.js deleted file mode 100644 index a66a04c7c483fee37424c6e9182e565a673a7aca..0000000000000000000000000000000000000000 --- a/spaces/merve/fill-in-the-blank/public/measuring-fairness/slides.js +++ /dev/null @@ -1,102 +0,0 @@ -/* Copyright 2020 Google LLC. All Rights Reserved. - -Licensed under the Apache License, Version 2.0 (the "License"); -you may not use this file except in compliance with the License. -You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -See the License for the specific language governing permissions and -limitations under the License. -==============================================================================*/ - - - - -window.makeSlides = function(){ - var slides = [ - { - textFill: '#aaa', - textStroke: 0, - rectFill: d => d.isSick ? lcolors.sick : lcolors.well, - rectOpacity: d => 0, - threshold: .8, - fpAxisOpacity: 0, - sexAxisOpacity: 0, - brAxisOpacity: 0, - truthAxisOpacity: 0, - mlAxisOpacity: 0, - pos: 'all', - botAxisY: c.width + 80, - }, - - { - textFill: d => d.isSick ? colors.sick : colors.well, - truthAxisOpacity: 1, - }, - - { - rectOpacity: d => 1, - mlAxisOpacity: 1, - - }, - - { - rectFill: d => d.grade > gs.curSlide.threshold ? lcolors.sick : lcolors.well, - textStroke: d => d.grade > gs.curSlide.threshold == d.isSick ? 0 : .6, - fpAxisOpacity: 1, - }, - - { - threshold: .61, - animateThreshold: true, - }, - - { - threshold: .89, - animateThreshold: true, - }, - - { - pos: 'sex', - fpAxisOpacity: 0, - sexAxisOpacity: 1, - threshold: .7508, - animateThreshold: false, - botAxisY: c.width + 150, - - }, - - { - brAxisOpacity: 1, - sexAxisOpacity: 0, - - }, - - { - - } - - ] - - var keys = [] - slides.forEach(d => keys = keys.concat(d3.keys(d))) - _.uniq(keys).forEach(str => { - var prev = null - slides.forEach(d => { - if (typeof(d[str]) === 'undefined'){ - d[str] = prev - } - prev = d[str] - }) - }) - - return slides -} - - - -if (window.init) window.init() diff --git a/spaces/merve/measuring-fairness/public/fill-in-the-blank/init-diff.js b/spaces/merve/measuring-fairness/public/fill-in-the-blank/init-diff.js deleted file mode 100644 index e0bb76f70a4d3ff6689b493236b5da93150746da..0000000000000000000000000000000000000000 --- a/spaces/merve/measuring-fairness/public/fill-in-the-blank/init-diff.js +++ /dev/null @@ -1,525 +0,0 @@ -/* Copyright 2021 Google LLC. All Rights Reserved. - -Licensed under the Apache License, Version 2.0 (the "License"); -you may not use this file except in compliance with the License. -You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -See the License for the specific language governing permissions and -limitations under the License. -==============================================================================*/ - - -window.initDiff = function(pair){ - var sel = d3.select('.' + pair.class).html('') - .at({role: 'graphics-document', 'aria-label': pair.ariaLabel}) - .on('keydown', function(){ - sel.classed('changed', 1) - if (d3.event.keyCode != 13) return - d3.event.preventDefault() - - pair.str0 = '' - - updateChart() - }) - - if (!sel.node()) return - - var isMobile = innerWidth <= 1100 - - var optionSel = sel.append('div.options') - .classed('wide', !isMobile) - .st({marginBottom: isMobile ? 20 : ''}) - - var input0Sel = optionSel.append('div.flex-row').append('textarea.input-0') - .st({marginBottom: 10}) - if (isMobile){ - input0Sel.on('change', updateChart) - } - - input0Sel.node().value = pair.s0.replace('[MASK]', '_') - - var countSel = optionSel.append('div.option-tokens') - .append('b').text('Number of Tokens') - .parent() - .append('div.flex-row') - .appendMany('div.button', [30, 200, 1000, 5000, 99999]) - .text(d => d > 5000 ? 'All' : d) - .st({width: 34, textAlign: 'center'}) - .on('click', d => { - pair.count = d - updateChart() - }) - - var typeSel = optionSel.append('div.option-type') - .append('b').text('Chart Type') - .parent() - .append('div.flex-row') - .appendMany('div.button', ['Likelihoods', 'Differences']) - .text(d => d) - .st({width: 116, textAlign: 'center'}) - .on('click', d => { - pair.type = d - updateChart() - }) - - var modelSel = optionSel.append('div.option-model') - .st({display: 'none'}) - .append('b').text('Model') - .parent() - .append('div.flex-row') - .appendMany('div.button', ['BERT', 'Zari']) - .text(d => d) - .st({width: 116, textAlign: 'center'}) - .on('click', d => { - pair.model = d - updateChart() - }) - - var updateSel = optionSel.append('div.button.update').on('click', updateChart) - .text('Update') - .st({display: isMobile ? 'none' : ''}) - - var resetSel = optionSel.append('div.reset') - .html('<span>↻</span> Reset') - .on('click', () => { - pair = JSON.parse(pair.pairStr) - pair.pairStr = JSON.stringify(pair) - input0Sel.node().value = pair.s0 - updateChart(true) - }) - .st({display: 'none'}) - - if (pair.alts){ - d3.select('.' + pair.class + '-alts').html('') - .classed('alt-block', 1).st({display: 'block'}) - .appendMany('span.p-button-link', pair.alts) - .html(d => d.str) - .on('click', d => { - input0Sel.node().value = d.rawStr - - updateChart() - }) - } - - var scatters = [] - var scatterSel = sel.append('div.pair-container-overflow').append('div.pair-container') - .st({width: 940}) - .appendMany('div', 'p0 p1 c0 p2 p3 c1'.split(' ')) - .each(function(id){ - var c = d3.conventions({ - sel: d3.select(this).append('div.graph.diff').st({marginTop: -5}), - height: 250, - width: 250, - margin: {bottom: 40, right: 60, top: 5, left: 0}, - layers: 'sdds', - }) - - var [type, i] = id.split('') - - if (type == 'p'){ - c.sel - .st({pointer: 'cursor'}) - .on('click', () => { - pair.colorByIndex = +i - updateChart() - }) - } - - var nTicks = 4 - var tickScale = d3.scaleLinear().range([0, c.width]) - c.svg.appendMany('path.bg-tick', d3.range(nTicks + 1)) - .at({d: d => `M ${.5 + Math.round(tickScale(d/nTicks))} 0 V ${c.height}`}) - c.svg.appendMany('path.bg-tick', d3.range(nTicks + 1)) - .at({d: d => `M 0 ${.5 + Math.round(tickScale(d/nTicks))} H ${c.width}`}) - - - c.type = type - c.scatters = scatters - c.scatter = window.initScatter(c) - c.scatters.push(c.scatter) - - - d3.select(this).datum({c, type, i}) - }) - - - updateChart(true) - - - async function updateChart(isFirst){ - // warningSel.st({opacity: isFirst ? 0 : 1}) - // resetSel.st({opacity: isFirst ? 0 : 1}) - sel.classed('changed', 0) - - countSel.classed('active', d => d == pair.count) - typeSel.classed('active', d => d == pair.type) - modelSel.classed('active', d => d == pair.model) - - function getStr(sel){ - return sel.node().value.replace('_', '[MASK]') - } - - - pair.s0 = input0Sel.node().value.replace('_', '[MASK]') - var str = pair.s0.replace('[MASK]', '{MASK}') - var sentences = str.split('|').length == 2 ? getZariSenteces() : getTwoPairSentences() - - function getTwoPairSentences(){ - var start = str.split('[')[0] - var mid = str.split(']')[1].split('[')[0] - var last = str.split(']')[2] - - var pairA = str.split('[')[1].split(']')[0].split('|') - var pairB = str.split('[')[2].split(']')[0].split('|') - - return [ - {i: 0, j: 0}, - {i: 0, j: 1}, - {i: 1, j: 0}, - {i: 1, j: 1}, - ].map(word => { - var strA = pairA[word.i] - var strB = pairB[word.j] - - var sentence = [start, strA, mid, strB, last] - .join('') - .replace('{MASK}', '[MASK]') - - var modelPath = pair.model == 'Zari' ? 'embed_zari_cda' : 'embed' - - return {word, strA, strB, sentence, modelPath} - }) - } - - function getZariSenteces(){ - var start = str.split('[')[0] - var last = str.split(']')[1] - var pairB = str.split('[')[1].split(']')[0].split('|') - - return [ - {i: 0, j: 0}, - {i: 0, j: 1}, - {i: 1, j: 0}, - {i: 1, j: 1}, - ].map(word => { - var strA = word.i ? 'Zari' : 'BERT' - var strB = pairB[word.j] - - var sentence = [start, strB, last] - .join('') - .replace('{MASK}', '[MASK]') - - var modelPath = strA == 'Zari' ? 'embed_zari_cda' : 'embed' - - return {word, strA, strB, sentence, modelPath} - }) - } - - - updateSel.classed('loading', 1) - // TODO parallel? - for (var d of sentences){ - d.maskVals = await post(d.modelPath, {sentence: d.sentence}) - } - updateSel.classed('loading', 0) - - - var allTokens = sentences[0].maskVals.map((v0, i) => { - var word = tokenizer.vocab[i] - var v = sentences.map(d => d.maskVals[i]) - - return {word, i, v, isVisible: false} - }) - - _.sortBy(allTokens, d => -d.v[0]).forEach((d, i) => d.v0i = i) - _.sortBy(allTokens, d => -d.v[1]).forEach((d, i) => d.v1i = i) - _.sortBy(allTokens, d => -d.v[2]).forEach((d, i) => d.v2i = i) - _.sortBy(allTokens, d => -d.v[3]).forEach((d, i) => d.v3i = i) - - allTokens - .filter(d => - d.v0i <= pair.count || - d.v1i <= pair.count || - d.v2i <= pair.count || - d.v3i <= pair.count - ) - .forEach(d => { - d.isTop = true - d.isVisible = true - }) - - var pairs = [ - [0, 1], - [2, 3], - - // [1, 2], - // [3, 0], - - [0, 2], - [1, 3], - - ].map((d, i) => { - var sentA = sentences[d[0]] - var sentB = sentences[d[1]] - - var allPairTokens = allTokens.map((t, i) => { - return {word: t.word, v0: t.v[d[0]], i, v1: t.v[d[1]], t} - }) - - allPairTokens.forEach(d => { - d.dif = d.v0 - d.v1 - d.meanV = (d.v0 + d.v1) / 2 - }) - var i0key = 'v' + d[0] + 'i' - var i1key = 'v' + d[1] + 'i' - - // TODO should this be done per chart or globally? - var topTokens = allPairTokens.filter(d => d.t.isTop) - // var topTokens = allPairTokens.filter(d => d.t[i0key] <= pair.count || d.t[i1key] <= pair.count) - var logitExtent = d3.extent(topTokens.map(d => d.v0).concat(topTokens.map(d => d.v1))) - - var tokens = allPairTokens - .filter(d => logitExtent[0] <= d.v0 && logitExtent[0] <= d.v1) - - var mag = logitExtent[1] - logitExtent[0] - logitExtent = [logitExtent[0] - mag*.002, logitExtent[1] + mag*.002] - - if (pair.type == 'Differences') tokens = _.sortBy(allPairTokens, d => -d.meanV).slice(0, pair.count) - - tokens.forEach(d => { - d.isVisible = true - }) - - var maxDif = d3.max(d3.extent(tokens, d => d.dif).map(Math.abs)) - var color = palette(-maxDif*.5, maxDif*.5) - - label0 = sentA.strA + ' / ' + sentA.strB - label1 = sentB.strA + ' / ' + sentB.strB - - - return {i, sentA, sentB, allPairTokens, logitExtent, tokens, maxDif, color, label0, label1} - }) - - var compares = [[0, 1], [2, 3]].map((d, i) => { - var pairA = pairs[d[0]] - var pairB = pairs[d[1]] - - var allTokensA = pairA.allPairTokens - var allTokensB = pairB.allPairTokens - - var allPairTokens = allTokens.map((t, i) => { - return {word: t.word, t, difA: allTokensA[i].dif, meanA: allTokensA[i].meanV, difB: allTokensB[i].dif, meanB: allTokensB[i].meanV} - }) - - _.sortBy(allPairTokens, d => -d.meanA) - .slice(0, pair.count) - .forEach(d => d.isVisible = true) - - _.sortBy(allPairTokens, d => -d.meanB) - .slice(0, pair.count) - .forEach(d => d.isVisible = true) - - var tokens = allPairTokens.filter(d => d.isVisible) - - return {pairA, pairB, tokens, allPairTokens} - }) - - if (!pair.colorByIndex) pair.colorByIndex = 1 - var color = pairs[pair.colorByIndex].color - pairs[pair.colorByIndex].allPairTokens.forEach(d => { - d.t.color = color(d.dif) - }) - - scatterSel.each(function({c, i, type}){ - updatePairChart(c, type == 'p' ? pairs[i] : compares[i]) - }) - } - - function updatePairChart(c, p){ - var {logitExtent, tokens, maxDif, color} = p - var allTokens = p.allPairTokens - - if (c.type == 'c'){ - drawDifDif() - } else { - if (pair.type == 'Likelihoods'){ - drawXY() - } else{ - drawRotated() - } - - sel.classed('is-xy', pair.type == 'Likelihoods') - sel.classed('is-rotate', pair.type != 'Likelihoods') - c.sel.classed('is-color-by', p.i == pair.colorByIndex) - c.sel.classed('not-is-color-by', p.i != pair.colorByIndex) - } - - function drawXY(){ - c.x.domain(logitExtent) - c.y.domain(logitExtent) - - d3.drawAxis(c) - - var s = {30: 4, 200: 3, 1000: 3}[pair.count] || 2 - var scatterData = allTokens.map(d => { - var x = c.x(d.v0) - var y = c.y(d.v1) - var fill = d.t.color - var dif = d.dif - var word = d.word - var show = '' - var isVisible = d.isVisible - - return {x, y, s, dif, fill, word, show, isVisible} - }) - - - var textCandidates = _.sortBy(scatterData.filter(d => d.isVisible), d => d.dif) - d3.nestBy(textCandidates.slice(0, 1000), d => Math.round(d.y/10)) - .forEach(d => d[0].show = 'uf') - d3.nestBy(textCandidates.reverse().slice(0, 1000), d => Math.round(d.y/10)) - .forEach(d => d[0].show = 'lr') - - logitExtent.pair = pair - c.scatter.draw(c, scatterData, true) - c.svg.selectAppend('text.x-axis-label.xy-only') - .translate([c.width/2, c.height + 24]) - .text(p.label0 + ' →') - .at({fill: util.colors[0], textAnchor: 'middle'}) - - c.svg.selectAppend('g.y-axis-label.xy-only') - .translate([c.width + 20, c.height/2]) - .selectAppend('text') - .text(p.label1 + ' →') - .at({fill: util.colors[1], textAnchor: 'middle', transform: 'rotate(-90)'}) - } - - function drawRotated(){ - c.x.domain(d3.extent(tokens, d => d.meanV)) - c.y.domain([maxDif, -maxDif]) - - d3.drawAxis(c) - - var scatterData = allTokens.map(d => { - var x = c.x(d.meanV) - var y = c.y(d.dif) - var fill = d.t.color - var word = d.word - var show = '' - var isVisible = d.isVisible - - return {x, y, s: 2, fill, word, show, isVisible} - }) - - scatterData.forEach(d => { - d.dx = d.x - c.width/2 - d.dy = d.y - c.height/2 - }) - - var textCandidates = _.sortBy(scatterData, d => -d.dx*d.dx - d.dy*d.dy) - .filter(d => d.isVisible) - .slice(0, 5000) - d3.nestBy(textCandidates, d => Math.round(12*Math.atan2(d.dx, d.dy))) - .map(d => d[0]) - .forEach(d => d.show = (d.dy < 0 ? 'u' : 'l') + (d.dx < 0 ? 'l' : 'r')) - - c.scatter.draw(c, scatterData, false) - c.svg.selectAppend('text.rotate-only.x-axis-label') - .translate([c.width/2, c.height + 24]) - .text(p.label0 + ' + ' + p.label1 + ' →') - .at({textAnchor: 'middle'}) - .st({fill: '#000', fontWeight: 300}) - - c.svg.select('g.rotate-only.sent-1').html('') - - c.svg.selectAppend('g.rotate-only.sent-1') - .translate([c.width + 20, c.height/2]) - .append('text') - .text(p.label1 + ' →') - .at({textAnchor: 'start', transform: 'rotate(-90)', x: 10}) - .st({fill: util.colors[1]}) - - c.svg.selectAppend('g.rotate-only.sent-1') - .translate([c.width + 20, c.height/2 + 0]) - .append('text') - .text('← ' + p.label0) - .at({textAnchor: 'end', transform: 'rotate(-90)', x: -10}) - .st({fill: util.colors[0]}) - } - - function drawDifDif(){ - var maxDifA = d3.max(d3.extent(tokens, d => d.difA).map(Math.abs)) - var maxDifB = d3.max(d3.extent(tokens, d => d.difB).map(Math.abs)) - var maxDif = d3.max([maxDifA, maxDifB]) - - c.x.domain([maxDif, -maxDif]) - c.y.domain([maxDif, -maxDif]) - - d3.drawAxis(c) - - var scatterData = allTokens.map(d => { - var x = c.x(d.difA) - var y = c.y(d.difB) - var fill = d.t.color - var word = d.word - var show = '' - var isVisible = d.isVisible - return {x, y, s: 2, fill, word, show, isVisible} - }) - - scatterData.forEach(d => { - d.dx = d.x - c.width/2 - d.dy = d.y - c.height/2 - }) - - var textCandidates = _.sortBy(scatterData.filter(d => d.isVisible), d => d.x - d.y) - d3.nestBy(textCandidates, d => Math.round(d.y/10)) - .forEach(d => d[0].show = 'uf') - d3.nestBy(textCandidates.reverse(), d => Math.round(d.y/10)) - .forEach(d => d[0].show = 'lr') - - c.scatter.draw(c, scatterData, true) - - var isColor = pair.colorByIndex == p.pairA.i - - var labelSel = c.svg.selectAppend('g.sent-0') - .html('') - .translate([c.width/2, c.height + 24]) - - labelSel.append('text') - .text(p.pairA.label1 + ' →') - .at({textAnchor: 'start', x: 10}) - .st({fill: isColor ? util.colors[1] : '#444', fontWeight: isColor ? 400 : ''}) - - labelSel.append('text') - .text('← ' + p.pairA.label0) - .at({textAnchor: 'end', x: -10}) - .st({fill: isColor ? util.colors[0] : '#444', fontWeight: isColor ? 400 : ''}) - - - var isColor = pair.colorByIndex == p.pairB.i - - var labelSel = c.svg.selectAppend('g.sent-1') - .html('') - .translate([c.width + 20, c.height/2]) - - labelSel.append('text') - .text(p.pairB.label1 + ' →') - .at({textAnchor: 'start', transform: 'rotate(-90)', x: 10}) - .st({fill: isColor ? util.colors[1] : '#444', fontWeight: isColor ? 400 : ''}) - - labelSel.append('text') - .text('← ' + p.pairB.label0) - .at({textAnchor: 'end', transform: 'rotate(-90)', x: -10}) - .st({fill: isColor ? util.colors[0] : '#444', fontWeight: isColor ? 400 : ''}) - } - - } -} - -if (window.init) init() diff --git a/spaces/mfrashad/ClothingGAN/models/stylegan2/stylegan2-pytorch/train.py b/spaces/mfrashad/ClothingGAN/models/stylegan2/stylegan2-pytorch/train.py deleted file mode 100644 index 7295f159b0427aef89a5944a0d1eb4c23ee85a7f..0000000000000000000000000000000000000000 --- a/spaces/mfrashad/ClothingGAN/models/stylegan2/stylegan2-pytorch/train.py +++ /dev/null @@ -1,413 +0,0 @@ -import argparse -import math -import random -import os - -import numpy as np -import torch -from torch import nn, autograd, optim -from torch.nn import functional as F -from torch.utils import data -import torch.distributed as dist -from torchvision import transforms, utils -from tqdm import tqdm - -try: - import wandb - -except ImportError: - wandb = None - -from model import Generator, Discriminator -from dataset import MultiResolutionDataset -from distributed import ( - get_rank, - synchronize, - reduce_loss_dict, - reduce_sum, - get_world_size, -) - - -def data_sampler(dataset, shuffle, distributed): - if distributed: - return data.distributed.DistributedSampler(dataset, shuffle=shuffle) - - if shuffle: - return data.RandomSampler(dataset) - - else: - return data.SequentialSampler(dataset) - - -def requires_grad(model, flag=True): - for p in model.parameters(): - p.requires_grad = flag - - -def accumulate(model1, model2, decay=0.999): - par1 = dict(model1.named_parameters()) - par2 = dict(model2.named_parameters()) - - for k in par1.keys(): - par1[k].data.mul_(decay).add_(1 - decay, par2[k].data) - - -def sample_data(loader): - while True: - for batch in loader: - yield batch - - -def d_logistic_loss(real_pred, fake_pred): - real_loss = F.softplus(-real_pred) - fake_loss = F.softplus(fake_pred) - - return real_loss.mean() + fake_loss.mean() - - -def d_r1_loss(real_pred, real_img): - grad_real, = autograd.grad( - outputs=real_pred.sum(), inputs=real_img, create_graph=True - ) - grad_penalty = grad_real.pow(2).view(grad_real.shape[0], -1).sum(1).mean() - - return grad_penalty - - -def g_nonsaturating_loss(fake_pred): - loss = F.softplus(-fake_pred).mean() - - return loss - - -def g_path_regularize(fake_img, latents, mean_path_length, decay=0.01): - noise = torch.randn_like(fake_img) / math.sqrt( - fake_img.shape[2] * fake_img.shape[3] - ) - grad, = autograd.grad( - outputs=(fake_img * noise).sum(), inputs=latents, create_graph=True - ) - path_lengths = torch.sqrt(grad.pow(2).sum(2).mean(1)) - - path_mean = mean_path_length + decay * (path_lengths.mean() - mean_path_length) - - path_penalty = (path_lengths - path_mean).pow(2).mean() - - return path_penalty, path_mean.detach(), path_lengths - - -def make_noise(batch, latent_dim, n_noise, device): - if n_noise == 1: - return torch.randn(batch, latent_dim, device=device) - - noises = torch.randn(n_noise, batch, latent_dim, device=device).unbind(0) - - return noises - - -def mixing_noise(batch, latent_dim, prob, device): - if prob > 0 and random.random() < prob: - return make_noise(batch, latent_dim, 2, device) - - else: - return [make_noise(batch, latent_dim, 1, device)] - - -def set_grad_none(model, targets): - for n, p in model.named_parameters(): - if n in targets: - p.grad = None - - -def train(args, loader, generator, discriminator, g_optim, d_optim, g_ema, device): - loader = sample_data(loader) - - pbar = range(args.iter) - - if get_rank() == 0: - pbar = tqdm(pbar, initial=args.start_iter, dynamic_ncols=True, smoothing=0.01) - - mean_path_length = 0 - - d_loss_val = 0 - r1_loss = torch.tensor(0.0, device=device) - g_loss_val = 0 - path_loss = torch.tensor(0.0, device=device) - path_lengths = torch.tensor(0.0, device=device) - mean_path_length_avg = 0 - loss_dict = {} - - if args.distributed: - g_module = generator.module - d_module = discriminator.module - - else: - g_module = generator - d_module = discriminator - - accum = 0.5 ** (32 / (10 * 1000)) - - sample_z = torch.randn(args.n_sample, args.latent, device=device) - - for idx in pbar: - i = idx + args.start_iter - - if i > args.iter: - print("Done!") - - break - - real_img = next(loader) - real_img = real_img.to(device) - - requires_grad(generator, False) - requires_grad(discriminator, True) - - noise = mixing_noise(args.batch, args.latent, args.mixing, device) - fake_img, _ = generator(noise) - fake_pred = discriminator(fake_img) - - real_pred = discriminator(real_img) - d_loss = d_logistic_loss(real_pred, fake_pred) - - loss_dict["d"] = d_loss - loss_dict["real_score"] = real_pred.mean() - loss_dict["fake_score"] = fake_pred.mean() - - discriminator.zero_grad() - d_loss.backward() - d_optim.step() - - d_regularize = i % args.d_reg_every == 0 - - if d_regularize: - real_img.requires_grad = True - real_pred = discriminator(real_img) - r1_loss = d_r1_loss(real_pred, real_img) - - discriminator.zero_grad() - (args.r1 / 2 * r1_loss * args.d_reg_every + 0 * real_pred[0]).backward() - - d_optim.step() - - loss_dict["r1"] = r1_loss - - requires_grad(generator, True) - requires_grad(discriminator, False) - - noise = mixing_noise(args.batch, args.latent, args.mixing, device) - fake_img, _ = generator(noise) - fake_pred = discriminator(fake_img) - g_loss = g_nonsaturating_loss(fake_pred) - - loss_dict["g"] = g_loss - - generator.zero_grad() - g_loss.backward() - g_optim.step() - - g_regularize = i % args.g_reg_every == 0 - - if g_regularize: - path_batch_size = max(1, args.batch // args.path_batch_shrink) - noise = mixing_noise(path_batch_size, args.latent, args.mixing, device) - fake_img, latents = generator(noise, return_latents=True) - - path_loss, mean_path_length, path_lengths = g_path_regularize( - fake_img, latents, mean_path_length - ) - - generator.zero_grad() - weighted_path_loss = args.path_regularize * args.g_reg_every * path_loss - - if args.path_batch_shrink: - weighted_path_loss += 0 * fake_img[0, 0, 0, 0] - - weighted_path_loss.backward() - - g_optim.step() - - mean_path_length_avg = ( - reduce_sum(mean_path_length).item() / get_world_size() - ) - - loss_dict["path"] = path_loss - loss_dict["path_length"] = path_lengths.mean() - - accumulate(g_ema, g_module, accum) - - loss_reduced = reduce_loss_dict(loss_dict) - - d_loss_val = loss_reduced["d"].mean().item() - g_loss_val = loss_reduced["g"].mean().item() - r1_val = loss_reduced["r1"].mean().item() - path_loss_val = loss_reduced["path"].mean().item() - real_score_val = loss_reduced["real_score"].mean().item() - fake_score_val = loss_reduced["fake_score"].mean().item() - path_length_val = loss_reduced["path_length"].mean().item() - - if get_rank() == 0: - pbar.set_description( - ( - f"d: {d_loss_val:.4f}; g: {g_loss_val:.4f}; r1: {r1_val:.4f}; " - f"path: {path_loss_val:.4f}; mean path: {mean_path_length_avg:.4f}" - ) - ) - - if wandb and args.wandb: - wandb.log( - { - "Generator": g_loss_val, - "Discriminator": d_loss_val, - "R1": r1_val, - "Path Length Regularization": path_loss_val, - "Mean Path Length": mean_path_length, - "Real Score": real_score_val, - "Fake Score": fake_score_val, - "Path Length": path_length_val, - } - ) - - if i % 100 == 0: - with torch.no_grad(): - g_ema.eval() - sample, _ = g_ema([sample_z]) - utils.save_image( - sample, - f"sample/{str(i).zfill(6)}.png", - nrow=int(args.n_sample ** 0.5), - normalize=True, - range=(-1, 1), - ) - - if i % 10000 == 0: - torch.save( - { - "g": g_module.state_dict(), - "d": d_module.state_dict(), - "g_ema": g_ema.state_dict(), - "g_optim": g_optim.state_dict(), - "d_optim": d_optim.state_dict(), - }, - f"checkpoint/{str(i).zfill(6)}.pt", - ) - - -if __name__ == "__main__": - device = "cuda" - - parser = argparse.ArgumentParser() - - parser.add_argument("path", type=str) - parser.add_argument("--iter", type=int, default=800000) - parser.add_argument("--batch", type=int, default=16) - parser.add_argument("--n_sample", type=int, default=64) - parser.add_argument("--size", type=int, default=256) - parser.add_argument("--r1", type=float, default=10) - parser.add_argument("--path_regularize", type=float, default=2) - parser.add_argument("--path_batch_shrink", type=int, default=2) - parser.add_argument("--d_reg_every", type=int, default=16) - parser.add_argument("--g_reg_every", type=int, default=4) - parser.add_argument("--mixing", type=float, default=0.9) - parser.add_argument("--ckpt", type=str, default=None) - parser.add_argument("--lr", type=float, default=0.002) - parser.add_argument("--channel_multiplier", type=int, default=2) - parser.add_argument("--wandb", action="store_true") - parser.add_argument("--local_rank", type=int, default=0) - - args = parser.parse_args() - - n_gpu = int(os.environ["WORLD_SIZE"]) if "WORLD_SIZE" in os.environ else 1 - args.distributed = n_gpu > 1 - - if args.distributed: - torch.cuda.set_device(args.local_rank) - torch.distributed.init_process_group(backend="nccl", init_method="env://") - synchronize() - - args.latent = 512 - args.n_mlp = 8 - - args.start_iter = 0 - - generator = Generator( - args.size, args.latent, args.n_mlp, channel_multiplier=args.channel_multiplier - ).to(device) - discriminator = Discriminator( - args.size, channel_multiplier=args.channel_multiplier - ).to(device) - g_ema = Generator( - args.size, args.latent, args.n_mlp, channel_multiplier=args.channel_multiplier - ).to(device) - g_ema.eval() - accumulate(g_ema, generator, 0) - - g_reg_ratio = args.g_reg_every / (args.g_reg_every + 1) - d_reg_ratio = args.d_reg_every / (args.d_reg_every + 1) - - g_optim = optim.Adam( - generator.parameters(), - lr=args.lr * g_reg_ratio, - betas=(0 ** g_reg_ratio, 0.99 ** g_reg_ratio), - ) - d_optim = optim.Adam( - discriminator.parameters(), - lr=args.lr * d_reg_ratio, - betas=(0 ** d_reg_ratio, 0.99 ** d_reg_ratio), - ) - - if args.ckpt is not None: - print("load model:", args.ckpt) - - ckpt = torch.load(args.ckpt, map_location=lambda storage, loc: storage) - - try: - ckpt_name = os.path.basename(args.ckpt) - args.start_iter = int(os.path.splitext(ckpt_name)[0]) - - except ValueError: - pass - - generator.load_state_dict(ckpt["g"]) - discriminator.load_state_dict(ckpt["d"]) - g_ema.load_state_dict(ckpt["g_ema"]) - - g_optim.load_state_dict(ckpt["g_optim"]) - d_optim.load_state_dict(ckpt["d_optim"]) - - if args.distributed: - generator = nn.parallel.DistributedDataParallel( - generator, - device_ids=[args.local_rank], - output_device=args.local_rank, - broadcast_buffers=False, - ) - - discriminator = nn.parallel.DistributedDataParallel( - discriminator, - device_ids=[args.local_rank], - output_device=args.local_rank, - broadcast_buffers=False, - ) - - transform = transforms.Compose( - [ - transforms.RandomHorizontalFlip(), - transforms.ToTensor(), - transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5), inplace=True), - ] - ) - - dataset = MultiResolutionDataset(args.path, transform, args.size) - loader = data.DataLoader( - dataset, - batch_size=args.batch, - sampler=data_sampler(dataset, shuffle=True, distributed=args.distributed), - drop_last=True, - ) - - if get_rank() == 0 and wandb is not None and args.wandb: - wandb.init(project="stylegan 2") - - train(args, loader, generator, discriminator, g_optim, d_optim, g_ema, device) diff --git a/spaces/mrmocciai/rvc-genshin-v2/lib/infer_pack/modules/F0Predictor/PMF0Predictor.py b/spaces/mrmocciai/rvc-genshin-v2/lib/infer_pack/modules/F0Predictor/PMF0Predictor.py deleted file mode 100644 index b2c592527a5966e6f8e79e8c52dc5b414246dcc6..0000000000000000000000000000000000000000 --- a/spaces/mrmocciai/rvc-genshin-v2/lib/infer_pack/modules/F0Predictor/PMF0Predictor.py +++ /dev/null @@ -1,97 +0,0 @@ -from lib.infer_pack.modules.F0Predictor.F0Predictor import F0Predictor -import parselmouth -import numpy as np - - -class PMF0Predictor(F0Predictor): - def __init__(self, hop_length=512, f0_min=50, f0_max=1100, sampling_rate=44100): - self.hop_length = hop_length - self.f0_min = f0_min - self.f0_max = f0_max - self.sampling_rate = sampling_rate - - def interpolate_f0(self, f0): - """ - 对F0进行插值处理 - """ - - data = np.reshape(f0, (f0.size, 1)) - - vuv_vector = np.zeros((data.size, 1), dtype=np.float32) - vuv_vector[data > 0.0] = 1.0 - vuv_vector[data <= 0.0] = 0.0 - - ip_data = data - - frame_number = data.size - last_value = 0.0 - for i in range(frame_number): - if data[i] <= 0.0: - j = i + 1 - for j in range(i + 1, frame_number): - if data[j] > 0.0: - break - if j < frame_number - 1: - if last_value > 0.0: - step = (data[j] - data[i - 1]) / float(j - i) - for k in range(i, j): - ip_data[k] = data[i - 1] + step * (k - i + 1) - else: - for k in range(i, j): - ip_data[k] = data[j] - else: - for k in range(i, frame_number): - ip_data[k] = last_value - else: - ip_data[i] = data[i] # 这里可能存在一个没有必要的拷贝 - last_value = data[i] - - return ip_data[:, 0], vuv_vector[:, 0] - - def compute_f0(self, wav, p_len=None): - x = wav - if p_len is None: - p_len = x.shape[0] // self.hop_length - else: - assert abs(p_len - x.shape[0] // self.hop_length) < 4, "pad length error" - time_step = self.hop_length / self.sampling_rate * 1000 - f0 = ( - parselmouth.Sound(x, self.sampling_rate) - .to_pitch_ac( - time_step=time_step / 1000, - voicing_threshold=0.6, - pitch_floor=self.f0_min, - pitch_ceiling=self.f0_max, - ) - .selected_array["frequency"] - ) - - pad_size = (p_len - len(f0) + 1) // 2 - if pad_size > 0 or p_len - len(f0) - pad_size > 0: - f0 = np.pad(f0, [[pad_size, p_len - len(f0) - pad_size]], mode="constant") - f0, uv = self.interpolate_f0(f0) - return f0 - - def compute_f0_uv(self, wav, p_len=None): - x = wav - if p_len is None: - p_len = x.shape[0] // self.hop_length - else: - assert abs(p_len - x.shape[0] // self.hop_length) < 4, "pad length error" - time_step = self.hop_length / self.sampling_rate * 1000 - f0 = ( - parselmouth.Sound(x, self.sampling_rate) - .to_pitch_ac( - time_step=time_step / 1000, - voicing_threshold=0.6, - pitch_floor=self.f0_min, - pitch_ceiling=self.f0_max, - ) - .selected_array["frequency"] - ) - - pad_size = (p_len - len(f0) + 1) // 2 - if pad_size > 0 or p_len - len(f0) - pad_size > 0: - f0 = np.pad(f0, [[pad_size, p_len - len(f0) - pad_size]], mode="constant") - f0, uv = self.interpolate_f0(f0) - return f0, uv diff --git a/spaces/mrneuralnet/P-DFD/data/__init__.py b/spaces/mrneuralnet/P-DFD/data/__init__.py deleted file mode 100644 index ea50ebaf88d64e75f4960bc99b14f138a343e575..0000000000000000000000000000000000000000 --- a/spaces/mrneuralnet/P-DFD/data/__init__.py +++ /dev/null @@ -1,3 +0,0 @@ -from .wider_face import WiderFaceDetection, detection_collate -from .data_augment import * -from .config import * diff --git a/spaces/mshukor/UnIVAL/fairseq/examples/latent_depth/latent_depth_src/modules/latent_layers.py b/spaces/mshukor/UnIVAL/fairseq/examples/latent_depth/latent_depth_src/modules/latent_layers.py deleted file mode 100644 index 2be05d5535cb05b16f61603a7356df2326bf2e23..0000000000000000000000000000000000000000 --- a/spaces/mshukor/UnIVAL/fairseq/examples/latent_depth/latent_depth_src/modules/latent_layers.py +++ /dev/null @@ -1,75 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import torch -import torch.nn as nn - - -class LayerSelect(nn.Module): - """Compute samples (from a Gumbel-Sigmoid distribution) which is used as - either (soft) weighting or (hard) selection of residual connection. - https://arxiv.org/abs/2009.13102 - """ - def __init__(self, num_layers, num_logits, soft_select=False, sampling_tau=5.): - super(LayerSelect, self).__init__() - self.layer_logits = torch.nn.Parameter( - torch.Tensor(num_logits, num_layers), - requires_grad=True, - ) - self.hard_select = not soft_select - self.tau = sampling_tau - self.detach_grad = False - self.layer_samples = [None] * num_logits - - def sample(self, logit_idx): - """To leverage the efficiency of distributed training, samples for all - layers are computed at once for each logit_idx. Logits are parameters - learnt independent of each other. - - Args: - logit_idx: The index of logit parameters used for sampling. - """ - assert logit_idx is not None - self.samples = self._gumbel_sigmoid( - self.layer_logits[logit_idx, :].detach() - if self.detach_grad - else self.layer_logits[logit_idx, :], - dim=-1, - tau=self.tau, - hard=self.hard_select, - ) - self.layer_samples[logit_idx] = self.samples - - def forward(self, i): - sample = self.samples[i] - return sample - - def _gumbel_sigmoid( - self, logits, tau=1, hard=False, eps=1e-10, dim=-1, threshold=0.5 - ): - # ~Gumbel(0,1) - gumbels1 = ( - -torch.empty_like(logits, memory_format=torch.legacy_contiguous_format) - .exponential_() - .log() - ) - gumbels2 = ( - -torch.empty_like(logits, memory_format=torch.legacy_contiguous_format) - .exponential_() - .log() - ) - # Difference of two gumbels because we apply a sigmoid - gumbels1 = (logits + gumbels1 - gumbels2) / tau - y_soft = gumbels1.sigmoid() - if hard: - # Straight through. - y_hard = torch.zeros_like( - logits, memory_format=torch.legacy_contiguous_format - ).masked_fill(y_soft > threshold, 1.0) - ret = y_hard - y_soft.detach() + y_soft - else: - # Reparametrization trick. - ret = y_soft - return ret diff --git a/spaces/mshukor/UnIVAL/fairseq/examples/speech_recognition/new/README.md b/spaces/mshukor/UnIVAL/fairseq/examples/speech_recognition/new/README.md deleted file mode 100644 index 5fa0e97245d3ba6db69d11222261b0644960183d..0000000000000000000000000000000000000000 --- a/spaces/mshukor/UnIVAL/fairseq/examples/speech_recognition/new/README.md +++ /dev/null @@ -1,43 +0,0 @@ -# Flashlight Decoder - -This script runs decoding for pre-trained speech recognition models. - -## Usage - -Assuming a few variables: - -```bash -checkpoint=<path-to-checkpoint> -data=<path-to-data-directory> -lm_model=<path-to-language-model> -lexicon=<path-to-lexicon> -``` - -Example usage for decoding a fine-tuned Wav2Vec model: - -```bash -python $FAIRSEQ_ROOT/examples/speech_recognition/new/infer.py --multirun \ - task=audio_pretraining \ - task.data=$data \ - task.labels=ltr \ - common_eval.path=$checkpoint \ - decoding.type=kenlm \ - decoding.lexicon=$lexicon \ - decoding.lmpath=$lm_model \ - dataset.gen_subset=dev_clean,dev_other,test_clean,test_other -``` - -Example usage for using Ax to sweep WER parameters (requires `pip install hydra-ax-sweeper`): - -```bash -python $FAIRSEQ_ROOT/examples/speech_recognition/new/infer.py --multirun \ - hydra/sweeper=ax \ - task=audio_pretraining \ - task.data=$data \ - task.labels=ltr \ - common_eval.path=$checkpoint \ - decoding.type=kenlm \ - decoding.lexicon=$lexicon \ - decoding.lmpath=$lm_model \ - dataset.gen_subset=dev_other -``` diff --git a/spaces/mshukor/UnIVAL/fairseq/examples/wav2vec/unsupervised/scripts/prepare_timit.sh b/spaces/mshukor/UnIVAL/fairseq/examples/wav2vec/unsupervised/scripts/prepare_timit.sh deleted file mode 100644 index d8f5d596b4b4ec55f11a82dbbf83bad4a22c0b6c..0000000000000000000000000000000000000000 --- a/spaces/mshukor/UnIVAL/fairseq/examples/wav2vec/unsupervised/scripts/prepare_timit.sh +++ /dev/null @@ -1,79 +0,0 @@ -#!/bin/bash -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -timit_root=$1 # assume it is the upper-cased version -tgt_dir=$2 -model=$3 - -set -eu - -setups="matched unmatched" -splits="test valid train train_text" - -tgt_dir=$(realpath $tgt_dir) -sph2wav=$KALDI_ROOT/tools/sph2pipe_v2.5/sph2pipe -wav_dir=$tgt_dir/wav - - -mkdir -p $tgt_dir $wav_dir -find $timit_root/{TRAIN,TEST} -iname "*.WAV" > $tgt_dir/all_sph.flist -cat $tgt_dir/all_sph.flist | sed -e 's#//*#/#g' -e 's#.*/\([^/]*\)/\([^/]*\).WAV#\1_\2#g' > $tgt_dir/all.uid -paste -d' ' $tgt_dir/{all_sph.flist,all.uid} | \ - awk -v sph2wav=$sph2wav -v wav_dir=$wav_dir '{print sph2wav " -f wav " $1 " > " wav_dir "/" $2 ".wav"}' \ - > $tgt_dir/sph2wav.sh -bash $tgt_dir/sph2wav.sh -cat $tgt_dir/all.uid | awk -v wav_dir=$(pwd)/$wav_dir '{print $1" "wav_dir"/"$1".wav"}' | sort > $tgt_dir/all_wav.scp -cut -d' ' -f2 $tgt_dir/all_wav.scp | xargs -I{} soxi -s {} > $tgt_dir/all.dur -paste -d' ' $tgt_dir/{all_wav.scp,all.dur} > $tgt_dir/all_wav_dur.scp -rm $tgt_dir/{all.uid,all_sph.flist,sph2wav.sh} - -find $timit_root/{TRAIN,TEST} -iname "*.PHN" > $tgt_dir/all_phn60.flist -while read line; do - if [ ! -f $line ]; then - >&2 echo "Cannot find transcription file '$line'" && exit 1; - fi - cut -f3 -d' ' "$line" | tr '\n' ' ' | perl -ape 's: *$:\n:;' -done < $tgt_dir/all_phn60.flist > $tgt_dir/all.phn60 -cat $tgt_dir/all_phn60.flist | sed -e 's#//*#/#g' -e 's#.*/\([^/]*\)/\([^/]*\).PHN#\1_\2#g' | \ - paste -d' ' - $tgt_dir/all.phn60 | \ - $KALDI_ROOT/egs/timit/s5/local/timit_norm_trans.pl -i - -m $KALDI_ROOT/egs/timit/s5/conf/phones.60-48-39.map -to 39 | \ - sort > $tgt_dir/all.phn -echo "done preparing wav and 39-phone transcripts" - - -for s in $setups; do - mkdir -p $tgt_dir/$s - for x in $splits; do - uid_path=config/timit_${s}/${x}.uid - grep -w -f $uid_path $tgt_dir/all.phn | cut -d' ' -f2- > $tgt_dir/$s/$x.phn - ln -sf $(realpath $tgt_dir/$s/$x.phn) $tgt_dir/$s/$x.wrd - - echo "/" > $tgt_dir/$s/$x.tsv && grep -w -f $uid_path $tgt_dir/all_wav_dur.scp | cut -d' ' -f2- | sed 's# #\t#' >> $tgt_dir/$s/$x.tsv - done - - for x in $splits; do - cat $tgt_dir/$s/$x.phn - done | tr ' ' '\n' | sort -u | awk '{print $1" "1}' > $tgt_dir/$s/dict.phn.txt - ln -sf $(realpath $tgt_dir/$s/dict.phn.txt) $tgt_dir/$s/dict.wrd.txt -done -echo "done preparing unmatched and matched setups for TIMIT" - - -for s in $setups; do - zsh scripts/prepare_audio.sh $tgt_dir/$s $tgt_dir/$s/feat $model - - lm_dir=$tgt_dir/$s/phones - fst_dir=$tgt_dir/$s/fst/phn_to_phn - - python $FAIRSEQ_ROOT/fairseq_cli/preprocess.py --dataset-impl mmap --trainpref $tgt_dir/$s/train_text.phn --workers 10 --only-source --destdir $lm_dir --srcdict $tgt_dir/$s/dict.phn.txt - $KENLM_ROOT/lmplz -o 3 < $tgt_dir/$s/train_text.phn --discount_fallback >$lm_dir/train_text_phn.03.arpa - $KENLM_ROOT/build_binary $lm_dir/train_text_phn.03.arpa $lm_dir/train_text_phn.03.bin - $KENLM_ROOT/lmplz -o 4 < $tgt_dir/$s/train_text.phn --discount_fallback >$lm_dir/train_text_phn.04.arpa - $KENLM_ROOT/build_binary $lm_dir/train_text_phn.04.arpa $lm_dir/train_text_phn.04.bin - - python $FAIRSEQ_ROOT/examples/speech_recognition/kaldi/kaldi_initializer.py kaldi_root=$KALDI_ROOT fst_dir=$fst_dir lm_arpa=$lm_dir/train_text_phn.03.arpa data_dir=$tgt_dir/$s in_labels=phn -done -echo "done preprocessing audio and text for wav2vec-U" diff --git a/spaces/mshukor/UnIVAL/fairseq/fairseq/models/bart/model.py b/spaces/mshukor/UnIVAL/fairseq/fairseq/models/bart/model.py deleted file mode 100644 index 71d0b27cd2c0655fe3b00479b672d6d042a4d5ed..0000000000000000000000000000000000000000 --- a/spaces/mshukor/UnIVAL/fairseq/fairseq/models/bart/model.py +++ /dev/null @@ -1,384 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. -""" -BART: Denoising Sequence-to-Sequence Pre-training for -Natural Language Generation, Translation, and Comprehension -""" -from typing import Optional - -import logging - -import torch -import torch.nn as nn -from fairseq import utils -from fairseq.models import register_model, register_model_architecture -from fairseq.models.transformer import TransformerModel -from fairseq.modules.transformer_sentence_encoder import init_bert_params - -from .hub_interface import BARTHubInterface - - -logger = logging.getLogger(__name__) - - -@register_model("bart") -class BARTModel(TransformerModel): - __jit_unused_properties__ = ["supported_targets"] - - @classmethod - def hub_models(cls): - return { - "bart.base": "http://dl.fbaipublicfiles.com/fairseq/models/bart.base.tar.gz", - "bart.large": "http://dl.fbaipublicfiles.com/fairseq/models/bart.large.tar.gz", - "bart.large.mnli": "http://dl.fbaipublicfiles.com/fairseq/models/bart.large.mnli.tar.gz", - "bart.large.cnn": "http://dl.fbaipublicfiles.com/fairseq/models/bart.large.cnn.tar.gz", - "bart.large.xsum": "http://dl.fbaipublicfiles.com/fairseq/models/bart.large.xsum.tar.gz", - } - - def __init__(self, args, encoder, decoder): - super().__init__(args, encoder, decoder) - - # We follow BERT's random weight initialization - self.apply(init_bert_params) - - self.classification_heads = nn.ModuleDict() - if hasattr(self.encoder, "dictionary"): - self.eos: int = self.encoder.dictionary.eos() - - @staticmethod - def add_args(parser): - super(BARTModel, BARTModel).add_args(parser) - parser.add_argument( - "--pooler-dropout", - type=float, - metavar="D", - help="dropout probability in the masked_lm pooler layers", - ) - parser.add_argument( - "--pooler-activation-fn", - choices=utils.get_available_activation_fns(), - help="activation function to use for pooler layer", - ) - parser.add_argument( - "--spectral-norm-classification-head", - action="store_true", - help="Apply spectral normalization on the classification head", - ) - - @property - def supported_targets(self): - return {"self"} - - def forward( - self, - src_tokens, - src_lengths, - prev_output_tokens, - features_only: bool = False, - classification_head_name: Optional[str] = None, - token_embeddings: Optional[torch.Tensor] = None, - return_all_hiddens: bool = True, - alignment_layer: Optional[int] = None, - alignment_heads: Optional[int] = None, - ): - if classification_head_name is not None: - features_only = True - - encoder_out = self.encoder( - src_tokens, - src_lengths=src_lengths, - token_embeddings=token_embeddings, - return_all_hiddens=return_all_hiddens - ) - x, extra = self.decoder( - prev_output_tokens, - encoder_out=encoder_out, - features_only=features_only, - alignment_layer=alignment_layer, - alignment_heads=alignment_heads, - src_lengths=src_lengths, - return_all_hiddens=return_all_hiddens, - ) - eos: int = self.eos - if classification_head_name is not None: - sentence_representation = x[ - src_tokens.eq(eos), : - ].view(x.size(0), -1, x.size(-1))[:, -1, :] - for k, head in self.classification_heads.items(): - # for torch script only supports iteration - if k == classification_head_name: - x = head(sentence_representation) - break - return x, extra - - @classmethod - def from_pretrained( - cls, - model_name_or_path, - checkpoint_file="model.pt", - data_name_or_path=".", - bpe="gpt2", - sample_break_mode="eos", - **kwargs, - ): - from fairseq import hub_utils - - x = hub_utils.from_pretrained( - model_name_or_path, - checkpoint_file, - data_name_or_path, - archive_map=cls.hub_models(), - bpe=bpe, - load_checkpoint_heads=True, - sample_break_mode=sample_break_mode, - **kwargs, - ) - return BARTHubInterface(x["args"], x["task"], x["models"][0]) - - def register_classification_head( - self, name, num_classes=None, inner_dim=None, **kwargs - ): - """Register a classification head.""" - logger.info("Registering classification head: {0}".format(name)) - if name in self.classification_heads: - prev_num_classes = self.classification_heads[name].out_proj.out_features - prev_inner_dim = self.classification_heads[name].dense.out_features - if num_classes != prev_num_classes or inner_dim != prev_inner_dim: - logger.warning( - 're-registering head "{}" with num_classes {} (prev: {}) ' - "and inner_dim {} (prev: {})".format( - name, num_classes, prev_num_classes, inner_dim, prev_inner_dim - ) - ) - self.classification_heads[name] = BARTClassificationHead( - input_dim=self.args.encoder_embed_dim, - inner_dim=inner_dim or self.args.encoder_embed_dim, - num_classes=num_classes, - activation_fn=self.args.pooler_activation_fn, - pooler_dropout=self.args.pooler_dropout, - do_spectral_norm=getattr( - self.args, "spectral_norm_classification_head", False - ), - ) - - def upgrade_state_dict_named(self, state_dict, name): - super().upgrade_state_dict_named(state_dict, name) - - prefix = name + "." if name != "" else "" - current_head_names = ( - [] - if not hasattr(self, "classification_heads") - else self.classification_heads.keys() - ) - - # Handle new classification heads present in the state dict. - keys_to_delete = [] - for k in state_dict.keys(): - if not k.startswith(prefix + "classification_heads."): - continue - - head_name = k[len(prefix + "classification_heads.") :].split(".")[0] - num_classes = state_dict[ - prefix + "classification_heads." + head_name + ".out_proj.weight" - ].size(0) - inner_dim = state_dict[ - prefix + "classification_heads." + head_name + ".dense.weight" - ].size(0) - - if getattr(self.args, "load_checkpoint_heads", False): - if head_name not in current_head_names: - self.register_classification_head(head_name, num_classes, inner_dim) - else: - if head_name not in current_head_names: - logger.warning( - "deleting classification head ({}) from checkpoint " - "not present in current model: {}".format(head_name, k) - ) - keys_to_delete.append(k) - elif ( - num_classes - != self.classification_heads[head_name].out_proj.out_features - or inner_dim - != self.classification_heads[head_name].dense.out_features - ): - logger.warning( - "deleting classification head ({}) from checkpoint " - "with different dimensions than current model: {}".format( - head_name, k - ) - ) - keys_to_delete.append(k) - for k in keys_to_delete: - del state_dict[k] - - def truncate_emb(key): - if key in state_dict: - state_dict[key] = state_dict[key][:-1, :] - - # When finetuning on translation task, remove last row of - # embedding matrix that corresponds to mask_idx token. - loaded_dict_size = state_dict["encoder.embed_tokens.weight"].size(0) - if ( - loaded_dict_size == len(self.encoder.dictionary) + 1 - and "<mask>" not in self.encoder.dictionary - ): - truncate_emb("encoder.embed_tokens.weight") - truncate_emb("decoder.embed_tokens.weight") - truncate_emb("encoder.output_projection.weight") - truncate_emb("decoder.output_projection.weight") - - # When continued pretraining on new set of languages for mbart, - # add extra lang embeddings at the end of embed_tokens. - # Note: newly added languages are assumed to have been added at the end. - if self.args.task == "multilingual_denoising" and loaded_dict_size < len( - self.encoder.dictionary - ): - logger.info( - "Adding extra language embeddings not found in pretrained model for " - "continued pretraining of MBART on new set of languages." - ) - loaded_mask_token_embedding = state_dict["encoder.embed_tokens.weight"][ - -1, : - ] - - num_langids_to_add = len(self.encoder.dictionary) - loaded_dict_size - embed_dim = state_dict["encoder.embed_tokens.weight"].size(1) - - new_lang_embed_to_add = torch.zeros(num_langids_to_add, embed_dim) - nn.init.normal_(new_lang_embed_to_add, mean=0, std=embed_dim ** -0.5) - new_lang_embed_to_add = new_lang_embed_to_add.to( - dtype=state_dict["encoder.embed_tokens.weight"].dtype, - ) - - state_dict["encoder.embed_tokens.weight"] = torch.cat( - [ - state_dict["encoder.embed_tokens.weight"][ - : loaded_dict_size - 1, : - ], - new_lang_embed_to_add, - loaded_mask_token_embedding.unsqueeze(0), - ] - ) - state_dict["decoder.embed_tokens.weight"] = torch.cat( - [ - state_dict["decoder.embed_tokens.weight"][ - : loaded_dict_size - 1, : - ], - new_lang_embed_to_add, - loaded_mask_token_embedding.unsqueeze(0), - ] - ) - - # Copy any newly-added classification heads into the state dict - # with their current weights. - if hasattr(self, "classification_heads"): - cur_state = self.classification_heads.state_dict() - for k, v in cur_state.items(): - if prefix + "classification_heads." + k not in state_dict: - logger.info("Overwriting " + prefix + "classification_heads." + k) - state_dict[prefix + "classification_heads." + k] = v - - -class BARTClassificationHead(nn.Module): - """Head for sentence-level classification tasks.""" - - def __init__( - self, - input_dim, - inner_dim, - num_classes, - activation_fn, - pooler_dropout, - do_spectral_norm=False, - ): - super().__init__() - self.dense = nn.Linear(input_dim, inner_dim) - self.activation_fn = utils.get_activation_fn(activation_fn) - self.dropout = nn.Dropout(p=pooler_dropout) - self.out_proj = nn.Linear(inner_dim, num_classes) - - if do_spectral_norm: - self.out_proj = torch.nn.utils.spectral_norm(self.out_proj) - - def forward(self, features, **kwargs): - x = features - x = self.dropout(x) - x = self.dense(x) - x = self.activation_fn(x) - x = self.dropout(x) - x = self.out_proj(x) - return x - - -@register_model_architecture("bart", "bart_large") -def bart_large_architecture(args): - args.encoder_embed_path = getattr(args, "encoder_embed_path", None) - args.encoder_embed_dim = getattr(args, "encoder_embed_dim", 1024) - args.encoder_ffn_embed_dim = getattr(args, "encoder_ffn_embed_dim", 4 * 1024) - args.encoder_layers = getattr(args, "encoder_layers", 12) - args.encoder_attention_heads = getattr(args, "encoder_attention_heads", 16) - args.encoder_normalize_before = getattr(args, "encoder_normalize_before", False) - args.encoder_learned_pos = getattr(args, "encoder_learned_pos", True) - args.decoder_embed_path = getattr(args, "decoder_embed_path", None) - args.decoder_embed_dim = getattr(args, "decoder_embed_dim", args.encoder_embed_dim) - args.decoder_ffn_embed_dim = getattr( - args, "decoder_ffn_embed_dim", args.encoder_ffn_embed_dim - ) - args.decoder_layers = getattr(args, "decoder_layers", 12) - args.decoder_attention_heads = getattr(args, "decoder_attention_heads", 16) - args.decoder_normalize_before = getattr(args, "decoder_normalize_before", False) - args.decoder_learned_pos = getattr(args, "decoder_learned_pos", True) - args.attention_dropout = getattr(args, "attention_dropout", 0.0) - args.relu_dropout = getattr(args, "relu_dropout", 0.0) - args.dropout = getattr(args, "dropout", 0.1) - args.max_target_positions = getattr(args, "max_target_positions", 1024) - args.max_source_positions = getattr(args, "max_source_positions", 1024) - args.adaptive_softmax_cutoff = getattr(args, "adaptive_softmax_cutoff", None) - args.adaptive_softmax_dropout = getattr(args, "adaptive_softmax_dropout", 0) - args.share_decoder_input_output_embed = getattr( - args, "share_decoder_input_output_embed", True - ) - args.share_all_embeddings = getattr(args, "share_all_embeddings", True) - - args.decoder_output_dim = getattr( - args, "decoder_output_dim", args.decoder_embed_dim - ) - args.decoder_input_dim = getattr(args, "decoder_input_dim", args.decoder_embed_dim) - - args.no_scale_embedding = getattr(args, "no_scale_embedding", True) - args.layernorm_embedding = getattr(args, "layernorm_embedding", True) - - args.activation_fn = getattr(args, "activation_fn", "gelu") - args.pooler_activation_fn = getattr(args, "pooler_activation_fn", "tanh") - args.pooler_dropout = getattr(args, "pooler_dropout", 0.0) - - -@register_model_architecture("bart", "bart_base") -def bart_base_architecture(args): - args.encoder_embed_dim = getattr(args, "encoder_embed_dim", 768) - args.encoder_ffn_embed_dim = getattr(args, "encoder_ffn_embed_dim", 4 * 768) - args.encoder_layers = getattr(args, "encoder_layers", 6) - args.encoder_attention_heads = getattr(args, "encoder_attention_heads", 12) - args.decoder_layers = getattr(args, "decoder_layers", 6) - args.decoder_attention_heads = getattr(args, "decoder_attention_heads", 12) - bart_large_architecture(args) - - -@register_model_architecture("bart", "mbart_large") -def mbart_large_architecture(args): - args.no_scale_embedding = getattr(args, "no_scale_embedding", False) - bart_large_architecture(args) - - -@register_model_architecture("bart", "mbart_base") -def mbart_base_architecture(args): - args.no_scale_embedding = getattr(args, "no_scale_embedding", False) - bart_base_architecture(args) - - -@register_model_architecture("bart", "mbart_base_wmt20") -def mbart_base_wmt20_architecture(args): - args.layernorm_embedding = getattr(args, "layernorm_embedding", False) - mbart_base_architecture(args) diff --git a/spaces/mshukor/UnIVAL/fairseq/fairseq/speech_generator.py b/spaces/mshukor/UnIVAL/fairseq/fairseq/speech_generator.py deleted file mode 100644 index 8086e34d2b56fa808d0905b1a00e87e6736fcf04..0000000000000000000000000000000000000000 --- a/spaces/mshukor/UnIVAL/fairseq/fairseq/speech_generator.py +++ /dev/null @@ -1,219 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import torch -import numpy as np - -from fairseq.data.audio.speech_to_text_dataset import S2TDataConfig - - -class SpeechGenerator(object): - def __init__(self, model, vocoder, data_cfg: S2TDataConfig): - self.model = model - self.vocoder = vocoder - stats_npz_path = data_cfg.global_cmvn_stats_npz - self.gcmvn_stats = None - if stats_npz_path is not None: - self.gcmvn_stats = np.load(stats_npz_path) - - def gcmvn_denormalize(self, x): - # x: B x T x C - if self.gcmvn_stats is None: - return x - mean = torch.from_numpy(self.gcmvn_stats["mean"]).to(x) - std = torch.from_numpy(self.gcmvn_stats["std"]).to(x) - assert len(x.shape) == 3 and mean.shape[0] == std.shape[0] == x.shape[2] - x = x * std.view(1, 1, -1).expand_as(x) - return x + mean.view(1, 1, -1).expand_as(x) - - def get_waveform(self, feat): - # T x C -> T - return None if self.vocoder is None else self.vocoder(feat).squeeze(0) - - -class AutoRegressiveSpeechGenerator(SpeechGenerator): - def __init__( - self, model, vocoder, data_cfg, max_iter: int = 6000, - eos_prob_threshold: float = 0.5, - ): - super().__init__(model, vocoder, data_cfg) - self.max_iter = max_iter - self.eos_prob_threshold = eos_prob_threshold - - @torch.no_grad() - def generate(self, model, sample, has_targ=False, **kwargs): - model.eval() - - src_tokens = sample["net_input"]["src_tokens"] - src_lengths = sample["net_input"]["src_lengths"] - bsz, src_len = src_tokens.size() - n_frames_per_step = model.decoder.n_frames_per_step - out_dim = model.decoder.out_dim - raw_dim = out_dim // n_frames_per_step - - # initialize - encoder_out = model.forward_encoder(src_tokens, src_lengths, - speaker=sample["speaker"]) - incremental_state = {} - feat, attn, eos_prob = [], [], [] - finished = src_tokens.new_zeros((bsz,)).bool() - out_lens = src_lengths.new_zeros((bsz,)).long().fill_(self.max_iter) - - prev_feat_out = encoder_out["encoder_out"][0].new_zeros(bsz, 1, out_dim) - for step in range(self.max_iter): - cur_out_lens = out_lens.clone() - cur_out_lens.masked_fill_(cur_out_lens.eq(self.max_iter), step + 1) - _, cur_eos_out, cur_extra = model.forward_decoder( - prev_feat_out, encoder_out=encoder_out, - incremental_state=incremental_state, - target_lengths=cur_out_lens, speaker=sample["speaker"], **kwargs - ) - cur_eos_prob = torch.sigmoid(cur_eos_out).squeeze(2) - feat.append(cur_extra['feature_out']) - attn.append(cur_extra['attn']) - eos_prob.append(cur_eos_prob) - - cur_finished = (cur_eos_prob.squeeze(1) > self.eos_prob_threshold) - out_lens.masked_fill_((~finished) & cur_finished, step + 1) - finished = finished | cur_finished - if finished.sum().item() == bsz: - break - prev_feat_out = cur_extra['feature_out'] - - feat = torch.cat(feat, dim=1) - feat = model.decoder.postnet(feat) + feat - eos_prob = torch.cat(eos_prob, dim=1) - attn = torch.cat(attn, dim=2) - alignment = attn.max(dim=1)[1] - - feat = feat.reshape(bsz, -1, raw_dim) - feat = self.gcmvn_denormalize(feat) - - eos_prob = eos_prob.repeat_interleave(n_frames_per_step, dim=1) - attn = attn.repeat_interleave(n_frames_per_step, dim=2) - alignment = alignment.repeat_interleave(n_frames_per_step, dim=1) - out_lens = out_lens * n_frames_per_step - - finalized = [ - { - 'feature': feat[b, :out_len], - 'eos_prob': eos_prob[b, :out_len], - 'attn': attn[b, :, :out_len], - 'alignment': alignment[b, :out_len], - 'waveform': self.get_waveform(feat[b, :out_len]), - } - for b, out_len in zip(range(bsz), out_lens) - ] - - if has_targ: - assert sample["target"].size(-1) == out_dim - tgt_feats = sample["target"].view(bsz, -1, raw_dim) - tgt_feats = self.gcmvn_denormalize(tgt_feats) - tgt_lens = sample["target_lengths"] * n_frames_per_step - for b, (f, l) in enumerate(zip(tgt_feats, tgt_lens)): - finalized[b]["targ_feature"] = f[:l] - finalized[b]["targ_waveform"] = self.get_waveform(f[:l]) - return finalized - - -class NonAutoregressiveSpeechGenerator(SpeechGenerator): - @torch.no_grad() - def generate(self, model, sample, has_targ=False, **kwargs): - model.eval() - - bsz, max_src_len = sample["net_input"]["src_tokens"].size() - n_frames_per_step = model.encoder.n_frames_per_step - out_dim = model.encoder.out_dim - raw_dim = out_dim // n_frames_per_step - - feat, out_lens, log_dur_out, _, _ = model( - src_tokens=sample["net_input"]["src_tokens"], - src_lengths=sample["net_input"]["src_lengths"], - prev_output_tokens=sample["net_input"]["prev_output_tokens"], - incremental_state=None, - target_lengths=sample["target_lengths"], - speaker=sample["speaker"] - ) - - feat = feat.view(bsz, -1, raw_dim) - feat = self.gcmvn_denormalize(feat) - - dur_out = torch.clamp( - torch.round(torch.exp(log_dur_out) - 1).long(), min=0 - ) - - def get_dur_plot_data(d): - r = [] - for i, dd in enumerate(d): - r += [i + 1] * dd.item() - return r - - out_lens = out_lens * n_frames_per_step - finalized = [ - { - 'feature': feat[b, :l] if l > 0 else feat.new_zeros([1, raw_dim]), - 'waveform': self.get_waveform( - feat[b, :l] if l > 0 else feat.new_zeros([1, raw_dim]) - ), - 'attn': feat.new_tensor(get_dur_plot_data(dur_out[b])), - } - for b, l in zip(range(bsz), out_lens) - ] - - if has_targ: - tgt_feats = sample["target"].view(bsz, -1, raw_dim) - tgt_feats = self.gcmvn_denormalize(tgt_feats) - tgt_lens = sample["target_lengths"] * n_frames_per_step - for b, (f, l) in enumerate(zip(tgt_feats, tgt_lens)): - finalized[b]["targ_feature"] = f[:l] - finalized[b]["targ_waveform"] = self.get_waveform(f[:l]) - return finalized - - -class TeacherForcingAutoRegressiveSpeechGenerator(AutoRegressiveSpeechGenerator): - @torch.no_grad() - def generate(self, model, sample, has_targ=False, **kwargs): - model.eval() - - src_tokens = sample["net_input"]["src_tokens"] - src_lens = sample["net_input"]["src_lengths"] - prev_out_tokens = sample["net_input"]["prev_output_tokens"] - tgt_lens = sample["target_lengths"] - n_frames_per_step = model.decoder.n_frames_per_step - raw_dim = model.decoder.out_dim // n_frames_per_step - bsz = src_tokens.shape[0] - - feat, eos_prob, extra = model( - src_tokens, src_lens, prev_out_tokens, incremental_state=None, - target_lengths=tgt_lens, speaker=sample["speaker"] - ) - - attn = extra["attn"] # B x T_s x T_t - alignment = attn.max(dim=1)[1] - feat = feat.reshape(bsz, -1, raw_dim) - feat = self.gcmvn_denormalize(feat) - eos_prob = eos_prob.repeat_interleave(n_frames_per_step, dim=1) - attn = attn.repeat_interleave(n_frames_per_step, dim=2) - alignment = alignment.repeat_interleave(n_frames_per_step, dim=1) - tgt_lens = sample["target_lengths"] * n_frames_per_step - - finalized = [ - { - 'feature': feat[b, :tgt_len], - 'eos_prob': eos_prob[b, :tgt_len], - 'attn': attn[b, :, :tgt_len], - 'alignment': alignment[b, :tgt_len], - 'waveform': self.get_waveform(feat[b, :tgt_len]), - } - for b, tgt_len in zip(range(bsz), tgt_lens) - ] - - if has_targ: - tgt_feats = sample["target"].view(bsz, -1, raw_dim) - tgt_feats = self.gcmvn_denormalize(tgt_feats) - for b, (f, l) in enumerate(zip(tgt_feats, tgt_lens)): - finalized[b]["targ_feature"] = f[:l] - finalized[b]["targ_waveform"] = self.get_waveform(f[:l]) - return finalized diff --git a/spaces/mxs2019/nba-player-classifer/app.py b/spaces/mxs2019/nba-player-classifer/app.py deleted file mode 100644 index 25c3ba9c07dc51eda60fe11978cd19636a5f2a28..0000000000000000000000000000000000000000 --- a/spaces/mxs2019/nba-player-classifer/app.py +++ /dev/null @@ -1,24 +0,0 @@ -from fastai.vision.all import * -import gradio as gr - -learn = load_learner('nba-player-classifier.pkl') - -def classify_nba_player(img): - # Print - img = PILImage.create(img) - pred,pred_idx,probs = learn.predict(img) - return dict(zip(learn.dls.vocab, map(float, probs))) - -inputs = gr.Image(shape=(192,192)) - -outputs = gr.Label(num_top_classes=3) -examples = ['lebron.jpeg', 'michael.jpg', 'kobe.png'] - -iface = gr.Interface( - fn=classify_nba_player, - inputs=inputs, - outputs=outputs, - examples=examples, - ) - -iface.launch() \ No newline at end of file diff --git a/spaces/myrad01/Inpaint-Anything/__init__.py b/spaces/myrad01/Inpaint-Anything/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/myrad01/Inpaint-Anything/third_party/lama/saicinpainting/training/trainers/__init__.py b/spaces/myrad01/Inpaint-Anything/third_party/lama/saicinpainting/training/trainers/__init__.py deleted file mode 100644 index c59241f553efe4e2dd6b198e2e5656a2b1488857..0000000000000000000000000000000000000000 --- a/spaces/myrad01/Inpaint-Anything/third_party/lama/saicinpainting/training/trainers/__init__.py +++ /dev/null @@ -1,30 +0,0 @@ -import logging -import torch -from saicinpainting.training.trainers.default import DefaultInpaintingTrainingModule - - -def get_training_model_class(kind): - if kind == 'default': - return DefaultInpaintingTrainingModule - - raise ValueError(f'Unknown trainer module {kind}') - - -def make_training_model(config): - kind = config.training_model.kind - kwargs = dict(config.training_model) - kwargs.pop('kind') - kwargs['use_ddp'] = config.trainer.kwargs.get('accelerator', None) == 'ddp' - - logging.info(f'Make training model {kind}') - - cls = get_training_model_class(kind) - return cls(config, **kwargs) - - -def load_checkpoint(train_config, path, map_location='cuda', strict=True): - model: torch.nn.Module = make_training_model(train_config) - state = torch.load(path, map_location=map_location) - model.load_state_dict(state['state_dict'], strict=strict) - model.on_load_checkpoint(state) - return model diff --git a/spaces/nateraw/jupyterlab-test2/create_dataset.py b/spaces/nateraw/jupyterlab-test2/create_dataset.py deleted file mode 100644 index 4f13ef16928dcfcda7ee5766c6b80a5fbc3278e6..0000000000000000000000000000000000000000 --- a/spaces/nateraw/jupyterlab-test2/create_dataset.py +++ /dev/null @@ -1,124 +0,0 @@ -import subprocess -from pathlib import Path -import librosa -from scipy.io import wavfile -import numpy as np -from demucs.pretrained import get_model, DEFAULT_MODEL -from demucs.apply import apply_model -import torch -import csv -import whisper - - -def download_youtube_clip(video_identifier, start_time, end_time, output_filename, num_attempts=5, url_base="https://www.youtube.com/watch?v="): - status = False - - output_path = Path(output_filename) - if output_path.exists(): - return True, "Already Downloaded" - - command = f""" - yt-dlp --quiet --no-warnings -x --audio-format wav -f bestaudio -o "{output_filename}" --download-sections "*{start_time}-{end_time}" "{url_base}{video_identifier}" - """.strip() - - attempts = 0 - while True: - try: - output = subprocess.check_output(command, shell=True, stderr=subprocess.STDOUT) - except subprocess.CalledProcessError as err: - attempts += 1 - if attempts == num_attempts: - return status, err.output - else: - break - - status = output_path.exists() - return status, "Downloaded" - - -def split_long_audio(model, filepaths, character_name, save_dir="data_dir", out_sr=44100): - if isinstance(filepaths, str): - filepaths = [filepaths] - - for file_idx, filepath in enumerate(filepaths): - - save_path = Path(save_dir) / character_name - save_path.mkdir(exist_ok=True, parents=True) - - print(f"Transcribing file {file_idx}: '{filepath}' to segments...") - result = model.transcribe(filepath, word_timestamps=True, task="transcribe", beam_size=5, best_of=5) - segments = result['segments'] - - wav, sr = librosa.load(filepath, sr=None, offset=0, duration=None, mono=True) - wav, _ = librosa.effects.trim(wav, top_db=20) - peak = np.abs(wav).max() - if peak > 1.0: - wav = 0.98 * wav / peak - wav2 = librosa.resample(wav, orig_sr=sr, target_sr=out_sr) - wav2 /= max(wav2.max(), -wav2.min()) - - for i, seg in enumerate(segments): - start_time = seg['start'] - end_time = seg['end'] - wav_seg = wav2[int(start_time * out_sr):int(end_time * out_sr)] - wav_seg_name = f"{character_name}_{file_idx}_{i}.wav" - out_fpath = save_path / wav_seg_name - wavfile.write(out_fpath, rate=out_sr, data=(wav_seg * np.iinfo(np.int16).max).astype(np.int16)) - - -def extract_vocal_demucs(model, filename, out_filename, sr=44100, device=None, shifts=1, split=True, overlap=0.25, jobs=0): - wav, sr = librosa.load(filename, mono=False, sr=sr) - wav = torch.tensor(wav) - ref = wav.mean(0) - wav = (wav - ref.mean()) / ref.std() - sources = apply_model( - model, - wav[None], - device=device, - shifts=shifts, - split=split, - overlap=overlap, - progress=True, - num_workers=jobs - )[0] - sources = sources * ref.std() + ref.mean() - - wav = sources[-1] - wav = wav / max(1.01 * wav.abs().max(), 1) - wavfile.write(out_filename, rate=sr, data=wav.numpy().T) - return out_filename - - -def main( - clips_csv_filepath = "data.csv", - character = "somebody", - do_extract_vocals = False, - whisper_size = "medium", - # Where raw yt clips will be downloaded to - dl_dir = "downloads", - # Where actual data will be organized - data_dir = "dataset_raw", - **kwargs -): - dl_path = Path(dl_dir) / character - dl_path.mkdir(exist_ok=True, parents=True) - if do_extract_vocals: - demucs_model = get_model(DEFAULT_MODEL) - - with Path(clips_csv_filepath).open() as f: - reader = csv.DictReader(f) - for i, row in enumerate(reader): - outfile_path = dl_path / f"{character}_{i:04d}.wav" - download_youtube_clip(row['ytid'], row['start'], row['end'], outfile_path) - if do_extract_vocals: - extract_vocal_demucs(demucs_model, outfile_path, outfile_path) - - filenames = sorted([str(x) for x in dl_path.glob("*.wav")]) - whisper_model = whisper.load_model(whisper_size) - split_long_audio(whisper_model, filenames, character, data_dir) - - -if __name__ == '__main__': - import json - cfg = json.loads(Path('dataset_config.json').read_text()) - main(**cfg) diff --git a/spaces/naver/PUMP/core/losses/unsupervised_deepmatching_loss.py b/spaces/naver/PUMP/core/losses/unsupervised_deepmatching_loss.py deleted file mode 100644 index 216aab4e84c96d8cf4c6e2fcdd0187b973aaa2f3..0000000000000000000000000000000000000000 --- a/spaces/naver/PUMP/core/losses/unsupervised_deepmatching_loss.py +++ /dev/null @@ -1,146 +0,0 @@ -# Copyright 2022-present NAVER Corp. -# CC BY-NC-SA 4.0 -# Available only for non-commercial use - -from pdb import set_trace as bb - -import torch -import torch.nn as nn -import torch.nn.functional as F - -from core import functional as myF - - -class DeepMatchingLoss (nn.Module): - """ This loss is based on DeepMatching (IJCV'16). - atleast: (int) minimum image size at which the pyramid construction stops. - sub: (int) prior subsampling - way: (str) which way to compute the asymmetric matching ('1', '2' or '12') - border: (int) ignore pixels too close to the border - rectify_p: (float) non-linear power-rectification in DeepMatching - eps: (float) epsilon for the L1 normalization. Kinda handles unmatched pixels. - """ - def __init__(self, eps=0.03, atleast=5, sub=2, way='12', border=16, rectify_p=1.5): - super().__init__() - assert way in ('1','2','12') - self.subsample = sub - self.border = border - self.way = way - self.atleast = atleast - self.rectify_p = rectify_p - self.eps = eps - - self._cache = {} - - def rectify(self, corr): - corr = corr.clip_(min=0) - corr = corr ** self.rectify_p - return corr - - def forward(self, desc1, desc2, **kw): - # 1 --> 2 - loss1 = self.forward_oneway(desc1, desc2, **kw) \ - if '1' in self.way else 0 - - # 2 --> 1 - loss2 = self.forward_oneway(desc2, desc1, **kw) \ - if '2' in self.way else 0 - - return dict(deepm_loss=(loss1+loss2)/len(self.way)) - - def forward_oneway(self, desc1, desc2, dbg=(), **kw): - assert desc1.shape[:2] == desc2.shape[:2] - - # prior subsampling - s = slice(self.border, -self.border or None, self.subsample) - desc1, desc2 = desc1[...,s,s], desc2[...,s,s] - desc1 = desc1[:,:,2::4,2::4] # subsample patches in 1st image - B, D, H1, W1, H2, W2 = desc1.shape + desc2.shape[-2:] - if B == 0: return 0 # empty batch - - # intial 4D correlation volume - corr = torch.bmm(desc1.reshape(B,D,-1).transpose(1,2), desc2.reshape(B,D,-1)).view(B,H1,W1,H2,W2) - - # build pyramid - pyramid = self.deep_matching(corr) - corr = pyramid[-1] # high-level correlation - corr = self.rectify(corr) - - # L1 norm - B, H1, W1, H2, W2 = corr.shape - corr = corr / (corr.reshape(B,H1*W1,-1).sum(dim=-1).view(B,H1,W1,1,1) + self.eps) - - # squared L2 norm - loss = - torch.square(corr).sum() / (B*H1*W1) - return loss - - def deep_matching(self, corr): - # print(f'level=0 {corr.shape=}') - weights = None - pyramid = [corr] - for level in range(1,999): - corr, weights = self.forward_level(level, corr, weights) - pyramid.append(corr) - # print(f'{level=} {corr.shape=}') - if weights.sum() == 0: break # img1 has become too small - if min(corr.shape[-2:]) < 2*self.atleast: break # img2 has become too small - return pyramid - - def forward_level(self, level, corr, weights): - B, H1, W1, H2, W2 = corr.shape - - # max-pooling - pooled = F.max_pool2d(corr.view(B,H1*W1,H2,W2), 3, padding=1, stride=2) - pooled = pooled.view(B, H1, W1, *pooled.shape[-2:]) - - # print(f'rectifying corr at {level=}') - pooled = self.rectify(pooled) - - # sparse conv - key = level, H1, W1, H2, W2 - if key not in self._cache: - B, H1, W1, H2, W2 = myF.true_corr_shape(pooled.shape, level-1) - self._cache[key] = myF.children(level, H1, W1, H2, W2).to(corr.device) - - return sparse_conv(level, pooled, self._cache[key], weights) - - -def sparse_conv(level, corr, parents, weights=None, border_norm=0.9): - B, H1, W1, H2, W2 = myF.true_corr_shape(corr.shape, level-1) - n_cache = len(parents) - - # perform the sparse convolution 'manually' - # since sparse convolutions are not implemented in pytorch currently - corr = corr.view(B, -1, H2, W2) - - res = corr.new_zeros((B, n_cache+1, H2, W2)) # last one = garbage channel - nrm = corr.new_full((n_cache+1, 3, 3), torch.finfo(corr.dtype).eps) - ones = nrm.new_ones((corr.shape[1], 1, 1)) - ex = 1 - if weights is not None: - weights = weights.view(corr.shape[1],1,1) - corr = corr * weights[None] # apply weights to correlation maps beforehand - ones *= weights - - sl = lambda v: slice(0,-1 or None) if v < 0 else slice(1,None) - c = 0 - for y in (-1, 1): - for x in (-1, 1): - src_layers = parents[:,c]; c+= 1 - # we want to do: res += corr[src_layers] (for all children != -1) - # but we only have 'res.index_add_()' <==> res[tgt_layers] += corr - tgt_layers = myF.inverse_mapping(src_layers, max_elem=corr.shape[1], default=n_cache)[:-1] - - # All of corr's channels MUST be utilized. for level>1, this doesn't hold, - # so we'll send them to a garbage channel ==> res[n_cache] - sel = myF.good_slice( tgt_layers < n_cache ) - - res[:,:,sl(-y),sl(-x)].index_add_(1, tgt_layers[sel], corr[:,sel,sl(y),sl(x)]) - nrm[ :,sl(-y),sl(-x)].index_add_(0, tgt_layers[sel], ones[sel].expand(-1,2,2)) - - # normalize borders - weights = myF.norm_borders(res, nrm, norm=border_norm)[:-1] - - res = res[:,:-1] # remove garbage channel - return res.view(B, H1+ex, W1+ex, *res.shape[-2:]), weights - diff --git a/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Designing-Type-Karen-Cheng-Pdf-19.md b/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Designing-Type-Karen-Cheng-Pdf-19.md deleted file mode 100644 index 00a9335972d4c26dd237d5499d69cf3b478a9227..0000000000000000000000000000000000000000 --- a/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Designing-Type-Karen-Cheng-Pdf-19.md +++ /dev/null @@ -1,51 +0,0 @@ -## Designing Type Karen Cheng Pdf 19 - - - -**Designing Type Karen Cheng Pdf 19 🗸 [https://kneedacexbrew.blogspot.com/?d=2tw0FI](https://kneedacexbrew.blogspot.com/?d=2tw0FI)** - - - -# Designing Type by Karen Cheng: A Review - - - -Designing Type is a book that explains the processes behind creating and designing typefaces. It was written by Karen Cheng, a professor of visual communication design at the University of Washington. The book was first published in 2006 by Yale University Press, and a second edition was released in 2020. - - - -The book covers topics such as design process, variables in type design, spacing, proportions, serif and sans serif letters, numbers, punctuation, and accents. It also includes introductory essays and diagrams that provide historical and theoretical background on typography. The book is illustrated with numerous examples of classic and modern typefaces, as well as sketches and diagrams that show the letter construction and visual principles. - - - -The book is intended for both students and professional graphic designers who want to learn more about the art and craft of type design. It is also a useful reference for anyone who is interested in typography and its applications. The book is praised for its clarity, depth, and rigor, as well as its balance between theory and practice. - - - -Designing Type is available in PDF format from various online sources. However, it is recommended to purchase the printed version from Yale University Press or other reputable bookstores to support the author and publisher. - - - -The book has received positive reviews from critics and readers alike. It has been praised for its clarity, depth, and rigor, as well as its balance between theory and practice. Some reviewers have called it an indispensable guide for developing and designing typefaces[^1^] [^2^], a useful single tool for designing letters[^2^], and a superb reference for both students and professional graphic designers[^1^]. - - - -The book has also been appreciated for its updated content and design in the second edition. The author has added new examples of contemporary typefaces, revised some of the diagrams and text, and redesigned the layout to improve readability and aesthetics. The book also includes a new foreword by renowned type designer Erik Spiekermann, who praises the book as a "masterpiece of clarity and usefulness". - - - -Designing Type is a book that not only teaches how to design type, but also why to design type. It shows the importance of typography as a visual language that communicates meaning, emotion, and identity. It also inspires the reader to explore the endless possibilities of creating and designing typefaces for different purposes and contexts. - - - -If you are interested in learning more about the book and the author, you can visit the official website of Designing Type at [https://designingtype.com/](https://designingtype.com/). There you can find more information about the book's content, sample pages, reviews, and resources. You can also watch a video of Karen Cheng talking about her process and experience of writing the book. - - - -You can also follow Karen Cheng on Twitter at [@karencheng](https://twitter.com/karencheng), where she shares her insights and opinions on typography, design, and education. She also posts updates on her latest projects and publications. - - - -Designing Type is a book that will enrich your knowledge and skills in typography and type design. It will also inspire you to create your own typefaces and explore the expressive potential of letters. Whether you are a beginner or an expert, a student or a professional, a designer or a reader, you will find something valuable and enjoyable in this book. - - 1b8d091108 \ No newline at end of file diff --git a/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Meinhausplaner Nutzer Id Crack Software.md b/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Meinhausplaner Nutzer Id Crack Software.md deleted file mode 100644 index b709d65ba40219aa191cfc8d470f9097bef01a9a..0000000000000000000000000000000000000000 --- a/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Meinhausplaner Nutzer Id Crack Software.md +++ /dev/null @@ -1,20 +0,0 @@ - -<h1>How to Use meinHausplaner Software with Your Nutzer ID</h1> -<p>meinHausplaner is a popular home design software that allows you to plan and visualize your dream house. Whether you want to build a new house, renovate an existing one, or add some extensions, meinHausplaner can help you create realistic 3D models of your project. You can also access a variety of features and libraries to customize your design according to your preferences and needs.</p> -<h2>meinhausplaner nutzer id crack software</h2><br /><p><b><b>Download Zip</b> &#127383; <a href="https://urlcod.com/2uIaua">https://urlcod.com/2uIaua</a></b></p><br /><br /> -<p>But before you can start using meinHausplaner, you need to register and get a Nutzer ID. A Nutzer ID is a unique code that identifies you as a user of meinHausplaner and allows you to access the software and its updates. In this article, we will show you how to get and use your Nutzer ID with meinHausplaner software.</p> -<h2>How to Get Your Nutzer ID</h2> -<p>To get your Nutzer ID, you need to download the meinHausplaner software from the official website[^1^]. You can choose from three versions of the software: Basic, Standard, and Professional. The Basic version is free and supported by ads, while the Standard and Professional versions are paid and offer more features and functions.</p> -<p>After downloading the software, you need to install it on your computer. The software is compatible with Windows 7, 8, and 10 operating systems[^2^]. During the installation process, you will be asked to enter your personal information, such as your name, email address, phone number, and country. This information is required for the authorization of your Nutzer ID.</p> -<p>Once you have entered your information, you will receive an email with a confirmation link. You need to click on this link to verify your email address. Then, you will receive a phone call from meinHausplaner team with your personal authorization code. You need to enter this code in the software to activate your Nutzer ID[^2^].</p> -<h2>How to Use Your Nutzer ID</h2> -<p>After activating your Nutzer ID, you can start using meinHausplaner software. You can access the software by clicking on the desktop icon or the start menu shortcut. When you open the software, you will see a welcome screen with some options. You can choose to start a new project, open an existing project, or browse some sample projects.</p> -<p>To start a new project, you need to click on the "New" button. You will be asked to enter a name for your project and select a template. A template is a predefined house design that you can modify according to your needs. You can choose from various templates based on different styles, sizes, and layouts.</p> -<p></p> -<p>After selecting a template, you will enter the main interface of meinHausplaner. Here, you can see your house design in 2D or 3D mode. You can switch between these modes by clicking on the buttons at the top right corner of the screen. You can also zoom in or out, rotate, pan, or tilt your view by using the mouse or keyboard commands.</p> -<p>To edit your house design, you can use the tools and menus on the left side of the screen. You can add or delete walls, doors, windows, stairs, roofs, floors, ceilings, furniture, appliances, lighting fixtures, and other elements. You can also change the dimensions, colors, textures, materials, and styles of these elements by using the properties panel on the right side of the screen.</p> -<p>To save your project, you need to click on the "File" menu and select "Save". You can also export your project as an image file or a CAD file by selecting "Export" from the same menu. You can also print your project or share it online by selecting "Print" or "Share" from the same menu.</p> -<h2>Conclusion</h2> -<p>meinHausplaner is a powerful and easy-to-use home design software that can help you plan and visualize your dream house. To use this software, you need to register and get a Nutzer ID. A Nutzer ID is a unique code that identifies you as a user of meinHausplaner and allows you to access the software and its updates. In this</p> 81aa517590<br /> -<br /> -<br /> \ No newline at end of file diff --git a/spaces/neuroliptica/2ch_captcha/app.py b/spaces/neuroliptica/2ch_captcha/app.py deleted file mode 100644 index 4e478482d00b453aa69a7e0b978c2b11d4c2d2b7..0000000000000000000000000000000000000000 --- a/spaces/neuroliptica/2ch_captcha/app.py +++ /dev/null @@ -1,199 +0,0 @@ -import numpy as np -from PIL import Image -import cv2 -import gradio as gr - -import logging -#import os -#import sys -from collections import defaultdict - -#os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3' # FATAL -#logging.getLogger('tensorflow').setLevel(logging.FATAL) - -import tensorflow as tf - -empty = (0, 0, 0) -last_saved = 0 -classes = '+абвгдежзийклмнопрстуфхцчшщъыьэюя' - -class PixelGraph: - def __init__(self): - self.graph = defaultdict(list) - - def append_undirected(self, a, b): - self.graph[a].append(b) - self.graph[b].append(a) - - def count_connected_components(self): - visited = defaultdict(int) - result = 0 - nodes = self.graph.keys() - for node in nodes: - if visited[node] == 0: - queue = [node] - while len(queue): - c = queue.pop() - visited[c] = 1 - for n in self.graph[c]: - if visited[n] == 0: - queue.append(n) - result += 1 - return result - -class Captcha: - def __init__(self, name="empty", omega=300, delta=10, img=""): - self.name = name - - self.omega = omega - self.delta = delta - self.components_limit = 4 - - self.image = img - if name != "empty": - self.image = Image.open(name) - - self.pixel_map = self.image.load() - self.w, self.h = self.image.size - - self.colors = defaultdict(int) - self.parts = list() - - def _init_colors(self): - for pixel in self.image.getdata(): - self.colors[pixel] += 1 - self.colors = list(filter(lambda x: x[1] > self.delta, self.colors.items())) - n = defaultdict(int) - for i in range(self.w): - for j in range(self.h): - if n[self.pixel_map[i, j]] == 0: - n[self.pixel_map[i, j]] = i+1 - else: - n[self.pixel_map[i, j]] = min(n[self.pixel_map[i, j]], i+1) - nc = list() - for color in self.colors: - c, f = color[0], color[1] - nc.append((c, (f, n[c]))) - self.colors = sorted(nc, key=lambda x: x[1][1]) - cols = self.colors.copy() - self.colors = list(map(lambda x: x[0], cols)) - - def _connected_componets(self, part): - graph = PixelGraph() - for i in range(self.w): - for j in range(self.h): - if i-1 != 0 and part[i-1,j] != empty: - graph.append_undirected((i-1, j), (i, j)) - - if i+1 != self.w and part[i+1,j] != empty: - graph.append_undirected((i+1, j), (i, j)) - - if j-1 != 0 and part[i,j-1] != empty: - graph.append_undirected((i, j-1), (i, j)) - - if j+1 != self.h and part[i,j+1] != empty: - graph.append_undirected((i, j+1), (i, j)) - return graph.count_connected_components() - - def _save_part(self, part): - none = (-1,-1) - top = none - for i in range(self.w): - for j in range(self.h): - if part[i, j] != empty: - top = (i, j) - break - if top != none: - break - bot = none - for i in range(self.w): - for j in range(self.h): - if part[self.w-i-1, j] != empty: - bot = (self.w-i-1, j) - break - if bot != none: - break - left = none - for j in range(self.h): - for i in range(self.w): - if part[i, j] != empty: - left = (i, j) - break - if left != none: - break - right = none - for j in range(self.h): - for i in range(self.w): - if part[i, self.h-j-1] != empty: - right = (i, self.h-1-j) - break - if right != none: - break - new = Image.new(mode="RGB", size=(bot[0]-top[0]+1, right[1]-left[1]+1)) - npart = new.load() - for i in range(bot[0]-top[0]+1): - for j in range(right[1]-left[1]+1): - npart[i, j] = part[i+top[0], left[1]+j] - if npart[i, j] != empty: - npart[i, j] = (255, 255, 255) - return new - - def _segment_parts(self): - for color in self.colors: - dif_tuple = (255-color[0], 255-color[1], 255-color[2]) - sum = dif_tuple[0]+dif_tuple[1]+dif_tuple[2] - if sum < self.omega: - continue - new = Image.new(mode="RGB", size=(self.w, self.h)) - part = new.load() - for i in range(self.w): - for j in range(self.h): - if self.pixel_map[i, j] == color: - part[i, j] = color - else: - part[i, j] = empty - comps = self._connected_componets(part) - print(f"{color}: sum = {sum}; subsets = {comps}", end=" ") - if comps > self.components_limit: - print("=> ignoring") - continue - print("=> ok") - new_part = self._save_part(part) - self.parts.append(new_part) - - def get_parts(self): - self._init_colors() - self._segment_parts() - return self.parts - -class Predictor: - def __init__(self, names, model): - self.names = names - - self.interpreter = tf.lite.Interpreter(model_path=model) - self.classify = self.interpreter.get_signature_runner("serving_default") - - def get_value(self, part): - img = cv2.cvtColor(np.array(part), cv2.COLOR_RGB2BGR) - img = cv2.resize(img, (32, 32), 3) - img = np.array(img, dtype="float32") - img = tf.expand_dims(img, 0) - - prediction = self.classify(rescaling_1_input=img)["dense_1"] - score = tf.nn.softmax(prediction) - - return self.names[np.argmax(score)] - -def captcha_solve(img): - parts = Captcha(img=img).get_parts() - pred = Predictor(list(classes), "model.tflite") - value = str() - for part in parts: - value += pred.get_value(part) - return value - -EXAMPLES = ["1.png", "2.png", "3.png", "4.png", "5.png"] - -if __name__ == "__main__": - demo = gr.Interface(fn=captcha_solve, examples=EXAMPLES, inputs=gr.inputs.Image(type="pil"), outputs=gr.outputs.Textbox()) - demo.launch() diff --git a/spaces/nightfury/SD_Studio_AI_Text2Image_Image2Image_Generation/share_btn.py b/spaces/nightfury/SD_Studio_AI_Text2Image_Image2Image_Generation/share_btn.py deleted file mode 100644 index 4bf271fe915e78e6df33a9df53b47ad68e620e2e..0000000000000000000000000000000000000000 --- a/spaces/nightfury/SD_Studio_AI_Text2Image_Image2Image_Generation/share_btn.py +++ /dev/null @@ -1,60 +0,0 @@ -community_icon_html = """<svg id="share-btn-share-icon" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"> - <path d="M20.6081 3C21.7684 3 22.8053 3.49196 23.5284 4.38415C23.9756 4.93678 24.4428 5.82749 24.4808 7.16133C24.9674 7.01707 25.4353 6.93643 25.8725 6.93643C26.9833 6.93643 27.9865 7.37587 28.696 8.17411C29.6075 9.19872 30.0124 10.4579 29.8361 11.7177C29.7523 12.3177 29.5581 12.8555 29.2678 13.3534C29.8798 13.8646 30.3306 14.5763 30.5485 15.4322C30.719 16.1032 30.8939 17.5006 29.9808 18.9403C30.0389 19.0342 30.0934 19.1319 30.1442 19.2318C30.6932 20.3074 30.7283 21.5229 30.2439 22.6548C29.5093 24.3704 27.6841 25.7219 24.1397 27.1727C21.9347 28.0753 19.9174 28.6523 19.8994 28.6575C16.9842 29.4379 14.3477 29.8345 12.0653 29.8345C7.87017 29.8345 4.8668 28.508 3.13831 25.8921C0.356375 21.6797 0.754104 17.8269 4.35369 14.1131C6.34591 12.058 7.67023 9.02782 7.94613 8.36275C8.50224 6.39343 9.97271 4.20438 12.4172 4.20438H12.4179C12.6236 4.20438 12.8314 4.2214 13.0364 4.25468C14.107 4.42854 15.0428 5.06476 15.7115 6.02205C16.4331 5.09583 17.134 4.359 17.7682 3.94323C18.7242 3.31737 19.6794 3 20.6081 3ZM20.6081 5.95917C20.2427 5.95917 19.7963 6.1197 19.3039 6.44225C17.7754 7.44319 14.8258 12.6772 13.7458 14.7131C13.3839 15.3952 12.7655 15.6837 12.2086 15.6837C11.1036 15.6837 10.2408 14.5497 12.1076 13.1085C14.9146 10.9402 13.9299 7.39584 12.5898 7.1776C12.5311 7.16799 12.4731 7.16355 12.4172 7.16355C11.1989 7.16355 10.6615 9.33114 10.6615 9.33114C10.6615 9.33114 9.0863 13.4148 6.38031 16.206C3.67434 18.998 3.5346 21.2388 5.50675 24.2246C6.85185 26.2606 9.42666 26.8753 12.0653 26.8753C14.8021 26.8753 17.6077 26.2139 19.1799 25.793C19.2574 25.7723 28.8193 22.984 27.6081 20.6107C27.4046 20.212 27.0693 20.0522 26.6471 20.0522C24.9416 20.0522 21.8393 22.6726 20.5057 22.6726C20.2076 22.6726 19.9976 22.5416 19.9116 22.222C19.3433 20.1173 28.552 19.2325 27.7758 16.1839C27.639 15.6445 27.2677 15.4256 26.746 15.4263C24.4923 15.4263 19.4358 19.5181 18.3759 19.5181C18.2949 19.5181 18.2368 19.4937 18.2053 19.4419C17.6743 18.557 17.9653 17.9394 21.7082 15.6009C25.4511 13.2617 28.0783 11.8545 26.5841 10.1752C26.4121 9.98141 26.1684 9.8956 25.8725 9.8956C23.6001 9.89634 18.2311 14.9403 18.2311 14.9403C18.2311 14.9403 16.7821 16.496 15.9057 16.496C15.7043 16.496 15.533 16.4139 15.4169 16.2112C14.7956 15.1296 21.1879 10.1286 21.5484 8.06535C21.7928 6.66715 21.3771 5.95917 20.6081 5.95917Z" fill="#FF9D00"></path> - <path d="M5.50686 24.2246C3.53472 21.2387 3.67446 18.9979 6.38043 16.206C9.08641 13.4147 10.6615 9.33111 10.6615 9.33111C10.6615 9.33111 11.2499 6.95933 12.59 7.17757C13.93 7.39581 14.9139 10.9401 12.1069 13.1084C9.29997 15.276 12.6659 16.7489 13.7459 14.713C14.8258 12.6772 17.7747 7.44316 19.304 6.44221C20.8326 5.44128 21.9089 6.00204 21.5484 8.06532C21.188 10.1286 14.795 15.1295 15.4171 16.2118C16.0391 17.2934 18.2312 14.9402 18.2312 14.9402C18.2312 14.9402 25.0907 8.49588 26.5842 10.1752C28.0776 11.8545 25.4512 13.2616 21.7082 15.6008C17.9646 17.9393 17.6744 18.557 18.2054 19.4418C18.7372 20.3266 26.9998 13.1351 27.7759 16.1838C28.5513 19.2324 19.3434 20.1173 19.9117 22.2219C20.48 24.3274 26.3979 18.2382 27.6082 20.6107C28.8193 22.9839 19.2574 25.7722 19.18 25.7929C16.0914 26.62 8.24723 28.3726 5.50686 24.2246Z" fill="#FFD21E"></path> -</svg>""" - -loading_icon_html = """<svg id="share-btn-loading-icon" style="display:none;" class="animate-spin" - style="color: #ffffff; -" - xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" fill="none" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 24 24"><circle style="opacity: 0.25;" cx="12" cy="12" r="10" stroke="white" stroke-width="4"></circle><path style="opacity: 0.75;" fill="white" d="M4 12a8 8 0 018-8V0C5.373 0 0 5.373 0 12h4zm2 5.291A7.962 7.962 0 014 12H0c0 3.042 1.135 5.824 3 7.938l3-2.647z"></path></svg>""" - -share_js = """async () => { - async function uploadFile(file){ - const UPLOAD_URL = 'https://huggingface.co/uploads'; - const response = await fetch(UPLOAD_URL, { - method: 'POST', - headers: { - 'Content-Type': file.type, - 'X-Requested-With': 'XMLHttpRequest', - }, - body: file, /// <- File inherits from Blob - }); - const url = await response.text(); - return url; - } - const gradioEl = document.querySelector('body > gradio-app'); - const imgEls = gradioEl.querySelectorAll('#gallery img'); - const promptTxt = gradioEl.querySelector('#prompt-text-input input').value; - const shareBtnEl = gradioEl.querySelector('#share-btn'); - const shareIconEl = gradioEl.querySelector('#share-btn-share-icon'); - const loadingIconEl = gradioEl.querySelector('#share-btn-loading-icon'); - if(!imgEls.length){ - return; - }; - shareBtnEl.style.pointerEvents = 'none'; - shareIconEl.style.display = 'none'; - loadingIconEl.style.removeProperty('display'); - const files = await Promise.all( - [...imgEls].map(async (imgEl) => { - const res = await fetch(imgEl.src); - const blob = await res.blob(); - const imgId = Date.now() % 200; - const fileName = `diffuse-the-rest-${{imgId}}.png`; - return new File([blob], fileName, { type: 'image/png' }); - }) - ); - const urls = await Promise.all(files.map((f) => uploadFile(f))); - const htmlImgs = urls.map(url => `<img src='${url}' width='400' height='400'>`); - const descriptionMd = `<div style='display: flex; flex-wrap: wrap; column-gap: 0.75rem;'> -${htmlImgs.join(`\n`)} -</div>`; - const params = new URLSearchParams({ - title: promptTxt, - description: descriptionMd, - }); - const paramsStr = params.toString(); - window.open(`https://huggingface.co/spaces/stabilityai/stable-diffusion/discussions/new?${paramsStr}`, '_blank'); - shareBtnEl.style.removeProperty('pointer-events'); - shareIconEl.style.removeProperty('display'); - loadingIconEl.style.display = 'none'; -}""" \ No newline at end of file diff --git a/spaces/nikitaPDL2023/assignment4/detectron2/projects/DeepLab/deeplab/lr_scheduler.py b/spaces/nikitaPDL2023/assignment4/detectron2/projects/DeepLab/deeplab/lr_scheduler.py deleted file mode 100644 index b754b59750ed7fea1e2d24d40f019d26bd562bf5..0000000000000000000000000000000000000000 --- a/spaces/nikitaPDL2023/assignment4/detectron2/projects/DeepLab/deeplab/lr_scheduler.py +++ /dev/null @@ -1,62 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -import math -from typing import List -import torch - -from detectron2.solver.lr_scheduler import LRScheduler, _get_warmup_factor_at_iter - -# NOTE: PyTorch's LR scheduler interface uses names that assume the LR changes -# only on epoch boundaries. We typically use iteration based schedules instead. -# As a result, "epoch" (e.g., as in self.last_epoch) should be understood to mean -# "iteration" instead. - -# FIXME: ideally this would be achieved with a CombinedLRScheduler, separating -# MultiStepLR with WarmupLR but the current LRScheduler design doesn't allow it. - - -class WarmupPolyLR(LRScheduler): - """ - Poly learning rate schedule used to train DeepLab. - Paper: DeepLab: Semantic Image Segmentation with Deep Convolutional Nets, - Atrous Convolution, and Fully Connected CRFs. - Reference: https://github.com/tensorflow/models/blob/21b73d22f3ed05b650e85ac50849408dd36de32e/research/deeplab/utils/train_utils.py#L337 # noqa - """ - - def __init__( - self, - optimizer: torch.optim.Optimizer, - max_iters: int, - warmup_factor: float = 0.001, - warmup_iters: int = 1000, - warmup_method: str = "linear", - last_epoch: int = -1, - power: float = 0.9, - constant_ending: float = 0.0, - ): - self.max_iters = max_iters - self.warmup_factor = warmup_factor - self.warmup_iters = warmup_iters - self.warmup_method = warmup_method - self.power = power - self.constant_ending = constant_ending - super().__init__(optimizer, last_epoch) - - def get_lr(self) -> List[float]: - warmup_factor = _get_warmup_factor_at_iter( - self.warmup_method, self.last_epoch, self.warmup_iters, self.warmup_factor - ) - if self.constant_ending > 0 and warmup_factor == 1.0: - # Constant ending lr. - if ( - math.pow((1.0 - self.last_epoch / self.max_iters), self.power) - < self.constant_ending - ): - return [base_lr * self.constant_ending for base_lr in self.base_lrs] - return [ - base_lr * warmup_factor * math.pow((1.0 - self.last_epoch / self.max_iters), self.power) - for base_lr in self.base_lrs - ] - - def _compute_values(self) -> List[float]: - # The new interface - return self.get_lr() diff --git a/spaces/nikitaPDL2023/assignment4/detectron2/projects/PointSup/tools/prepare_coco_point_annotations_without_masks.py b/spaces/nikitaPDL2023/assignment4/detectron2/projects/PointSup/tools/prepare_coco_point_annotations_without_masks.py deleted file mode 100644 index e4aee2aedf2e62e2357f278417ac58c6b4ff264e..0000000000000000000000000000000000000000 --- a/spaces/nikitaPDL2023/assignment4/detectron2/projects/PointSup/tools/prepare_coco_point_annotations_without_masks.py +++ /dev/null @@ -1,108 +0,0 @@ -#!/usr/bin/env python3 -# -*- coding: utf-8 -*- -# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved - -import copy -import json -import numpy as np -import os -import sys -import pycocotools.mask as mask_utils - -from detectron2.utils.env import seed_all_rng -from detectron2.utils.file_io import PathManager - - -def get_point_annotations(input_filename, output_filename, num_points_per_instance): - with PathManager.open(input_filename, "r") as f: - coco_json = json.load(f) - - coco_annos = coco_json.pop("annotations") - coco_points_json = copy.deepcopy(coco_json) - - imgs = {} - for img in coco_json["images"]: - imgs[img["id"]] = img - - new_annos = [] - for ann in coco_annos: - # convert mask - t = imgs[ann["image_id"]] - h, w = t["height"], t["width"] - segm = ann.pop("segmentation") - if type(segm) == list: - # polygon -- a single object might consist of multiple parts - # we merge all parts into one mask rle code - rles = mask_utils.frPyObjects(segm, h, w) - rle = mask_utils.merge(rles) - elif type(segm["counts"]) == list: - # uncompressed RLE - rle = mask_utils.frPyObjects(segm, h, w) - else: - # rle - rle = segm - mask = mask_utils.decode(rle) - new_ann = copy.deepcopy(ann) - # sample points in image coordinates - box = ann["bbox"] - point_coords_wrt_image = np.random.rand(num_points_per_instance, 2) - point_coords_wrt_image[:, 0] = point_coords_wrt_image[:, 0] * box[2] - point_coords_wrt_image[:, 1] = point_coords_wrt_image[:, 1] * box[3] - point_coords_wrt_image[:, 0] += box[0] - point_coords_wrt_image[:, 1] += box[1] - # round to integer coordinates - point_coords_wrt_image = np.floor(point_coords_wrt_image).astype(int) - # get labels - assert (point_coords_wrt_image >= 0).all(), (point_coords_wrt_image, mask.shape) - assert (point_coords_wrt_image[:, 0] < w).all(), (point_coords_wrt_image, mask.shape) - assert (point_coords_wrt_image[:, 1] < h).all(), (point_coords_wrt_image, mask.shape) - point_labels = mask[point_coords_wrt_image[:, 1], point_coords_wrt_image[:, 0]] - # store new annotations - new_ann["point_coords"] = point_coords_wrt_image.tolist() - new_ann["point_labels"] = point_labels.tolist() - new_annos.append(new_ann) - coco_points_json["annotations"] = new_annos - - with PathManager.open(output_filename, "w") as f: - json.dump(coco_points_json, f) - - print("{} is modified and stored in {}.".format(input_filename, output_filename)) - - -if __name__ == "__main__": - """ - Generate point-based supervision for COCO dataset. - - Usage: - python tools/prepare_coco_point_annotations_without_masks.py \ - NUM_POINTS_PER_INSTANCE NUM_VERSIONS_WITH_DIFFERENT_SEED - - Example to generate point-based COCO dataset with 10 points per instance: - python tools/prepare_coco_point_annotations_without_masks.py 10 - """ - - # Fix random seed - seed_all_rng(12345) - - assert len(sys.argv) >= 2, "Please provide number of points to sample per instance" - dataset_dir = os.path.join(os.getenv("DETECTRON2_DATASETS", "datasets"), "coco/annotations") - num_points_per_instance = int(sys.argv[1]) - if len(sys.argv) == 3: - repeat = int(sys.argv[2]) - else: - repeat = 1 - s = "instances_train2017" - for version in range(repeat): - print( - "Start sampling {} points per instance for annotations {}.".format( - num_points_per_instance, s - ) - ) - get_point_annotations( - os.path.join(dataset_dir, "{}.json".format(s)), - os.path.join( - dataset_dir, - "{}_n{}_v{}_without_masks.json".format(s, num_points_per_instance, version + 1), - ), - num_points_per_instance, - ) diff --git a/spaces/nikitaPDL2023/assignment4/detectron2/projects/ViTDet/configs/LVIS/cascade_mask_rcnn_vitdet_h_100ep.py b/spaces/nikitaPDL2023/assignment4/detectron2/projects/ViTDet/configs/LVIS/cascade_mask_rcnn_vitdet_h_100ep.py deleted file mode 100644 index 68bec5734456c9bbc813becd5da83bc2a0f90932..0000000000000000000000000000000000000000 --- a/spaces/nikitaPDL2023/assignment4/detectron2/projects/ViTDet/configs/LVIS/cascade_mask_rcnn_vitdet_h_100ep.py +++ /dev/null @@ -1,51 +0,0 @@ -from detectron2.config import LazyCall as L -from detectron2.data.detection_utils import get_fed_loss_cls_weights -from detectron2.layers import ShapeSpec -from detectron2.modeling.box_regression import Box2BoxTransform -from detectron2.modeling.matcher import Matcher -from detectron2.modeling.roi_heads import FastRCNNOutputLayers, FastRCNNConvFCHead, CascadeROIHeads - -from .mask_rcnn_vitdet_h_100ep import ( - dataloader, - lr_multiplier, - model, - optimizer, - train, -) - -# arguments that don't exist for Cascade R-CNN -[model.roi_heads.pop(k) for k in ["box_head", "box_predictor", "proposal_matcher"]] - -model.roi_heads.update( - _target_=CascadeROIHeads, - num_classes=1203, - box_heads=[ - L(FastRCNNConvFCHead)( - input_shape=ShapeSpec(channels=256, height=7, width=7), - conv_dims=[256, 256, 256, 256], - fc_dims=[1024], - conv_norm="LN", - ) - for _ in range(3) - ], - box_predictors=[ - L(FastRCNNOutputLayers)( - input_shape=ShapeSpec(channels=1024), - box2box_transform=L(Box2BoxTransform)(weights=(w1, w1, w2, w2)), - num_classes="${...num_classes}", - test_score_thresh=0.02, - test_topk_per_image=300, - cls_agnostic_bbox_reg=True, - use_sigmoid_ce=True, - use_fed_loss=True, - get_fed_loss_cls_weights=lambda: get_fed_loss_cls_weights( - dataloader.train.dataset.names, 0.5 - ), - ) - for (w1, w2) in [(10, 5), (20, 10), (30, 15)] - ], - proposal_matchers=[ - L(Matcher)(thresholds=[th], labels=[0, 1], allow_low_quality_matches=False) - for th in [0.5, 0.6, 0.7] - ], -) diff --git a/spaces/nikitaPDL2023/assignment4/detectron2/tests/layers/test_deformable.py b/spaces/nikitaPDL2023/assignment4/detectron2/tests/layers/test_deformable.py deleted file mode 100644 index 4aa319fc7e614f6a7a8ece7a45c177211c03012d..0000000000000000000000000000000000000000 --- a/spaces/nikitaPDL2023/assignment4/detectron2/tests/layers/test_deformable.py +++ /dev/null @@ -1,175 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -import numpy as np -import unittest -import torch - -from detectron2.layers import DeformConv, ModulatedDeformConv -from detectron2.utils.env import TORCH_VERSION - - -@unittest.skipIf( - TORCH_VERSION == (1, 8) and torch.cuda.is_available(), - "This test fails under cuda11 + torch1.8.", -) -class DeformableTest(unittest.TestCase): - @unittest.skipIf(not torch.cuda.is_available(), "Deformable not supported for cpu") - def test_forward_output(self): - device = torch.device("cuda") - N, C, H, W = shape = 1, 1, 5, 5 - kernel_size = 3 - padding = 1 - - inputs = torch.arange(np.prod(shape), dtype=torch.float32).reshape(*shape).to(device) - """ - 0 1 2 3 4 - 5 6 7 8 9 - 10 11 12 13 14 - 15 16 17 18 19 - 20 21 22 23 24 - """ - offset_channels = kernel_size * kernel_size * 2 - offset = torch.full((N, offset_channels, H, W), 0.5, dtype=torch.float32).to(device) - - # Test DCN v1 - deform = DeformConv(C, C, kernel_size=kernel_size, padding=padding).to(device) - deform.weight = torch.nn.Parameter(torch.ones_like(deform.weight)) - output = deform(inputs, offset) - output = output.detach().cpu().numpy() - deform_results = np.array( - [ - [30, 41.25, 48.75, 45, 28.75], - [62.25, 81, 90, 80.25, 50.25], - [99.75, 126, 135, 117.75, 72.75], - [105, 131.25, 138.75, 120, 73.75], - [71.75, 89.25, 93.75, 80.75, 49.5], - ] - ) - self.assertTrue(np.allclose(output.flatten(), deform_results.flatten())) - - # Test DCN v2 - mask_channels = kernel_size * kernel_size - mask = torch.full((N, mask_channels, H, W), 0.5, dtype=torch.float32).to(device) - modulate_deform = ModulatedDeformConv(C, C, kernel_size, padding=padding, bias=False).to( - device - ) - modulate_deform.weight = deform.weight - output = modulate_deform(inputs, offset, mask) - output = output.detach().cpu().numpy() - self.assertTrue(np.allclose(output.flatten(), deform_results.flatten() * 0.5)) - - def test_forward_output_on_cpu(self): - device = torch.device("cpu") - N, C, H, W = shape = 1, 1, 5, 5 - kernel_size = 3 - padding = 1 - - inputs = torch.arange(np.prod(shape), dtype=torch.float32).reshape(*shape).to(device) - - offset_channels = kernel_size * kernel_size * 2 - offset = torch.full((N, offset_channels, H, W), 0.5, dtype=torch.float32).to(device) - - # Test DCN v1 on cpu - deform = DeformConv(C, C, kernel_size=kernel_size, padding=padding).to(device) - deform.weight = torch.nn.Parameter(torch.ones_like(deform.weight)) - output = deform(inputs, offset) - output = output.detach().cpu().numpy() - deform_results = np.array( - [ - [30, 41.25, 48.75, 45, 28.75], - [62.25, 81, 90, 80.25, 50.25], - [99.75, 126, 135, 117.75, 72.75], - [105, 131.25, 138.75, 120, 73.75], - [71.75, 89.25, 93.75, 80.75, 49.5], - ] - ) - self.assertTrue(np.allclose(output.flatten(), deform_results.flatten())) - - @unittest.skipIf(not torch.cuda.is_available(), "This test requires gpu access") - def test_forward_output_on_cpu_equals_output_on_gpu(self): - N, C, H, W = shape = 2, 4, 10, 10 - kernel_size = 3 - padding = 1 - - for groups in [1, 2]: - inputs = torch.arange(np.prod(shape), dtype=torch.float32).reshape(*shape) - offset_channels = kernel_size * kernel_size * 2 - offset = torch.full((N, offset_channels, H, W), 0.5, dtype=torch.float32) - - deform_gpu = DeformConv( - C, C, kernel_size=kernel_size, padding=padding, groups=groups - ).to("cuda") - deform_gpu.weight = torch.nn.Parameter(torch.ones_like(deform_gpu.weight)) - output_gpu = deform_gpu(inputs.to("cuda"), offset.to("cuda")).detach().cpu().numpy() - - deform_cpu = DeformConv( - C, C, kernel_size=kernel_size, padding=padding, groups=groups - ).to("cpu") - deform_cpu.weight = torch.nn.Parameter(torch.ones_like(deform_cpu.weight)) - output_cpu = deform_cpu(inputs.to("cpu"), offset.to("cpu")).detach().numpy() - - self.assertTrue(np.allclose(output_gpu.flatten(), output_cpu.flatten())) - - @unittest.skipIf(not torch.cuda.is_available(), "Deformable not supported for cpu") - def test_small_input(self): - device = torch.device("cuda") - for kernel_size in [3, 5]: - padding = kernel_size // 2 - N, C, H, W = shape = (1, 1, kernel_size - 1, kernel_size - 1) - - inputs = torch.rand(shape).to(device) # input size is smaller than kernel size - - offset_channels = kernel_size * kernel_size * 2 - offset = torch.randn((N, offset_channels, H, W), dtype=torch.float32).to(device) - deform = DeformConv(C, C, kernel_size=kernel_size, padding=padding).to(device) - output = deform(inputs, offset) - self.assertTrue(output.shape == inputs.shape) - - mask_channels = kernel_size * kernel_size - mask = torch.ones((N, mask_channels, H, W), dtype=torch.float32).to(device) - modulate_deform = ModulatedDeformConv( - C, C, kernel_size, padding=padding, bias=False - ).to(device) - output = modulate_deform(inputs, offset, mask) - self.assertTrue(output.shape == inputs.shape) - - @unittest.skipIf(not torch.cuda.is_available(), "Deformable not supported for cpu") - def test_raise_exception(self): - device = torch.device("cuda") - N, C, H, W = shape = 1, 1, 3, 3 - kernel_size = 3 - padding = 1 - - inputs = torch.rand(shape, dtype=torch.float32).to(device) - offset_channels = kernel_size * kernel_size # This is wrong channels for offset - offset = torch.randn((N, offset_channels, H, W), dtype=torch.float32).to(device) - deform = DeformConv(C, C, kernel_size=kernel_size, padding=padding).to(device) - self.assertRaises(RuntimeError, deform, inputs, offset) - - offset_channels = kernel_size * kernel_size * 2 - offset = torch.randn((N, offset_channels, H, W), dtype=torch.float32).to(device) - mask_channels = kernel_size * kernel_size * 2 # This is wrong channels for mask - mask = torch.ones((N, mask_channels, H, W), dtype=torch.float32).to(device) - modulate_deform = ModulatedDeformConv(C, C, kernel_size, padding=padding, bias=False).to( - device - ) - self.assertRaises(RuntimeError, modulate_deform, inputs, offset, mask) - - def test_repr(self): - module = DeformConv(3, 10, kernel_size=3, padding=1, deformable_groups=2) - correct_string = ( - "DeformConv(in_channels=3, out_channels=10, kernel_size=(3, 3), " - "stride=(1, 1), padding=(1, 1), dilation=(1, 1), " - "groups=1, deformable_groups=2, bias=False)" - ) - self.assertEqual(repr(module), correct_string) - - module = ModulatedDeformConv(3, 10, kernel_size=3, padding=1, deformable_groups=2) - correct_string = ( - "ModulatedDeformConv(in_channels=3, out_channels=10, kernel_size=(3, 3), " - "stride=1, padding=1, dilation=1, groups=1, deformable_groups=2, bias=True)" - ) - self.assertEqual(repr(module), correct_string) - - -if __name__ == "__main__": - unittest.main() diff --git a/spaces/nikitaPDL2023/assignment4/detectron2/tests/test_model_analysis.py b/spaces/nikitaPDL2023/assignment4/detectron2/tests/test_model_analysis.py deleted file mode 100644 index c01b7af09703c8dad889dee0118d74fcc12ac4b0..0000000000000000000000000000000000000000 --- a/spaces/nikitaPDL2023/assignment4/detectron2/tests/test_model_analysis.py +++ /dev/null @@ -1,80 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. - - -import unittest -import torch -from torch import nn - -from detectron2.utils.analysis import find_unused_parameters, flop_count_operators, parameter_count -from detectron2.utils.testing import get_model_no_weights - - -class RetinaNetTest(unittest.TestCase): - def setUp(self): - self.model = get_model_no_weights("COCO-Detection/retinanet_R_50_FPN_1x.yaml") - - def test_flop(self): - # RetinaNet supports flop-counting with random inputs - inputs = [{"image": torch.rand(3, 800, 800), "test_unused": "abcd"}] - res = flop_count_operators(self.model, inputs) - self.assertEqual(int(res["conv"]), 146) # 146B flops - - def test_param_count(self): - res = parameter_count(self.model) - self.assertEqual(res[""], 37915572) - self.assertEqual(res["backbone"], 31452352) - - -class FasterRCNNTest(unittest.TestCase): - def setUp(self): - self.model = get_model_no_weights("COCO-Detection/faster_rcnn_R_50_FPN_1x.yaml") - - def test_flop(self): - # Faster R-CNN supports flop-counting with random inputs - inputs = [{"image": torch.rand(3, 800, 800)}] - res = flop_count_operators(self.model, inputs) - - # This only checks flops for backbone & proposal generator - # Flops for box head is not conv, and depends on #proposals, which is - # almost 0 for random inputs. - self.assertEqual(int(res["conv"]), 117) - - def test_flop_with_output_shape(self): - inputs = [{"image": torch.rand(3, 800, 800), "height": 700, "width": 700}] - res = flop_count_operators(self.model, inputs) - self.assertEqual(int(res["conv"]), 117) - - def test_param_count(self): - res = parameter_count(self.model) - self.assertEqual(res[""], 41699936) - self.assertEqual(res["backbone"], 26799296) - - -class MaskRCNNTest(unittest.TestCase): - def setUp(self): - self.model = get_model_no_weights("COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_1x.yaml") - - def test_flop(self): - inputs1 = [{"image": torch.rand(3, 800, 800)}] - inputs2 = [{"image": torch.rand(3, 800, 800), "height": 700, "width": 700}] - - for inputs in [inputs1, inputs2]: - res = flop_count_operators(self.model, inputs) - # The mask head could have extra conv flops, so total >= 117 - self.assertGreaterEqual(int(res["conv"]), 117) - - -class UnusedParamTest(unittest.TestCase): - def test_unused(self): - class TestMod(nn.Module): - def __init__(self): - super().__init__() - self.fc1 = nn.Linear(10, 10) - self.t = nn.Linear(10, 10) - - def forward(self, x): - return self.fc1(x).mean() - - m = TestMod() - ret = find_unused_parameters(m, torch.randn(10, 10)) - self.assertEqual(set(ret), {"t.weight", "t.bias"}) diff --git a/spaces/nomic-ai/BelleGroup_train_0.5M_CN/index.html b/spaces/nomic-ai/BelleGroup_train_0.5M_CN/index.html deleted file mode 100644 index 582d4db4b1b41909eb2877d27e3d4b3630bf53dc..0000000000000000000000000000000000000000 --- a/spaces/nomic-ai/BelleGroup_train_0.5M_CN/index.html +++ /dev/null @@ -1,42 +0,0 @@ -<html> - -<head> - <title>BelleGroup/train_0.5M_CN</title> - <style> - body { - font-family: Arial, sans-serif; - background-color: #f0f0f0; - display: flex; - justify-content: center; - align-items: center; - height: 100vh; - margin: 0; - padding: 0; - color: #333; - } - - .iframe-container { - border: 1px solid #ccc; - border-radius: 10px; - overflow: hidden; - width: 80%; - height: 80%; - box-shadow: 0 0 10px rgba(0, 0, 0, 0.1); - } - - iframe { - width: 100%; - height: 100%; - border: none; - } - </style> -</head> - -<body> - <div class="iframe-container"> - <iframe src="https://atlas.nomic.ai/map/58c35ca3-51a0-49f0-a37f-8026977fb895/802e8a7d-0d78-44e5-a285-0e5a3e0e09b9" allow="clipboard-read; clipboard-write" - title="Nomic Atlas"></iframe> - </div> -</body> - -</html> \ No newline at end of file diff --git a/spaces/omi0k/LoRA-DreamBooth-Training-UI/app_inference.py b/spaces/omi0k/LoRA-DreamBooth-Training-UI/app_inference.py deleted file mode 100644 index a9969e649ca321a5246130d7d560ac3c431a12f2..0000000000000000000000000000000000000000 --- a/spaces/omi0k/LoRA-DreamBooth-Training-UI/app_inference.py +++ /dev/null @@ -1,176 +0,0 @@ -#!/usr/bin/env python - -from __future__ import annotations - -import enum - -import gradio as gr -from huggingface_hub import HfApi - -from inference import InferencePipeline -from utils import find_exp_dirs - -SAMPLE_MODEL_IDS = [ - 'patrickvonplaten/lora_dreambooth_dog_example', - 'sayakpaul/sd-model-finetuned-lora-t4', -] - - -class ModelSource(enum.Enum): - SAMPLE = 'Sample' - HUB_LIB = 'Hub (lora-library)' - LOCAL = 'Local' - - -class InferenceUtil: - def __init__(self, hf_token: str | None): - self.hf_token = hf_token - - @staticmethod - def load_sample_lora_model_list(): - return gr.update(choices=SAMPLE_MODEL_IDS, value=SAMPLE_MODEL_IDS[0]) - - def load_hub_lora_model_list(self) -> dict: - api = HfApi(token=self.hf_token) - choices = [ - info.modelId for info in api.list_models(author='lora-library') - ] - return gr.update(choices=choices, - value=choices[0] if choices else None) - - @staticmethod - def load_local_lora_model_list() -> dict: - choices = find_exp_dirs() - return gr.update(choices=choices, - value=choices[0] if choices else None) - - def reload_lora_model_list(self, model_source: str) -> dict: - if model_source == ModelSource.SAMPLE.value: - return self.load_sample_lora_model_list() - elif model_source == ModelSource.HUB_LIB.value: - return self.load_hub_lora_model_list() - elif model_source == ModelSource.LOCAL.value: - return self.load_local_lora_model_list() - else: - raise ValueError - - def load_model_info(self, lora_model_id: str) -> tuple[str, str]: - try: - card = InferencePipeline.get_model_card(lora_model_id, - self.hf_token) - except Exception: - return '', '' - base_model = getattr(card.data, 'base_model', '') - instance_prompt = getattr(card.data, 'instance_prompt', '') - return base_model, instance_prompt - - def reload_lora_model_list_and_update_model_info( - self, model_source: str) -> tuple[dict, str, str]: - model_list_update = self.reload_lora_model_list(model_source) - model_list = model_list_update['choices'] - model_info = self.load_model_info(model_list[0] if model_list else '') - return model_list_update, *model_info - - -def create_inference_demo(pipe: InferencePipeline, - hf_token: str | None = None) -> gr.Blocks: - app = InferenceUtil(hf_token) - - with gr.Blocks() as demo: - with gr.Row(): - with gr.Column(): - with gr.Box(): - model_source = gr.Radio( - label='Model Source', - choices=[_.value for _ in ModelSource], - value=ModelSource.SAMPLE.value) - reload_button = gr.Button('Reload Model List') - lora_model_id = gr.Dropdown(label='LoRA Model ID', - choices=SAMPLE_MODEL_IDS, - value=SAMPLE_MODEL_IDS[0]) - with gr.Accordion( - label= - 'Model info (Base model and instance prompt used for training)', - open=False): - with gr.Row(): - base_model_used_for_training = gr.Text( - label='Base model', interactive=False) - instance_prompt_used_for_training = gr.Text( - label='Instance prompt', interactive=False) - prompt = gr.Textbox( - label='Prompt', - max_lines=1, - placeholder='Example: "A picture of a sks dog in a bucket"' - ) - alpha = gr.Slider(label='LoRA alpha', - minimum=0, - maximum=2, - step=0.05, - value=1) - seed = gr.Slider(label='Seed', - minimum=0, - maximum=100000, - step=1, - value=0) - with gr.Accordion('Other Parameters', open=False): - num_steps = gr.Slider(label='Number of Steps', - minimum=0, - maximum=100, - step=1, - value=25) - guidance_scale = gr.Slider(label='CFG Scale', - minimum=0, - maximum=50, - step=0.1, - value=7.5) - - run_button = gr.Button('Generate') - - gr.Markdown(''' - - After training, you can press "Reload Model List" button to load your trained model names. - ''') - with gr.Column(): - result = gr.Image(label='Result') - - model_source.change( - fn=app.reload_lora_model_list_and_update_model_info, - inputs=model_source, - outputs=[ - lora_model_id, - base_model_used_for_training, - instance_prompt_used_for_training, - ]) - reload_button.click( - fn=app.reload_lora_model_list_and_update_model_info, - inputs=model_source, - outputs=[ - lora_model_id, - base_model_used_for_training, - instance_prompt_used_for_training, - ]) - lora_model_id.change(fn=app.load_model_info, - inputs=lora_model_id, - outputs=[ - base_model_used_for_training, - instance_prompt_used_for_training, - ]) - inputs = [ - lora_model_id, - prompt, - alpha, - seed, - num_steps, - guidance_scale, - ] - prompt.submit(fn=pipe.run, inputs=inputs, outputs=result) - run_button.click(fn=pipe.run, inputs=inputs, outputs=result) - return demo - - -if __name__ == '__main__': - import os - - hf_token = os.getenv('HF_TOKEN') - pipe = InferencePipeline(hf_token) - demo = create_inference_demo(pipe, hf_token) - demo.queue(max_size=10).launch(share=False) diff --git a/spaces/onereal/rvc-models-convertvoice/vc_infer_pipeline.py b/spaces/onereal/rvc-models-convertvoice/vc_infer_pipeline.py deleted file mode 100644 index 7ff98b2c812f4e74afe92048fb26009fb008479d..0000000000000000000000000000000000000000 --- a/spaces/onereal/rvc-models-convertvoice/vc_infer_pipeline.py +++ /dev/null @@ -1,320 +0,0 @@ -import numpy as np, parselmouth, torch, pdb -from time import time as ttime -import torch.nn.functional as F -import scipy.signal as signal -import pyworld, os, traceback, faiss -from scipy import signal - -bh, ah = signal.butter(N=5, Wn=48, btype="high", fs=16000) - - -class VC(object): - def __init__(self, tgt_sr, config): - self.x_pad, self.x_query, self.x_center, self.x_max, self.is_half = ( - config.x_pad, - config.x_query, - config.x_center, - config.x_max, - config.is_half, - ) - self.sr = 16000 # hubert输入采样率 - self.window = 160 # 每帧点数 - self.t_pad = self.sr * self.x_pad # 每条前后pad时间 - self.t_pad_tgt = tgt_sr * self.x_pad - self.t_pad2 = self.t_pad * 2 - self.t_query = self.sr * self.x_query # 查询切点前后查询时间 - self.t_center = self.sr * self.x_center # 查询切点位置 - self.t_max = self.sr * self.x_max # 免查询时长阈值 - self.device = config.device - - def get_f0(self, x, p_len, f0_up_key, f0_method, inp_f0=None): - time_step = self.window / self.sr * 1000 - f0_min = 50 - f0_max = 1100 - f0_mel_min = 1127 * np.log(1 + f0_min / 700) - f0_mel_max = 1127 * np.log(1 + f0_max / 700) - if f0_method == "pm": - f0 = ( - parselmouth.Sound(x, self.sr) - .to_pitch_ac( - time_step=time_step / 1000, - voicing_threshold=0.6, - pitch_floor=f0_min, - pitch_ceiling=f0_max, - ) - .selected_array["frequency"] - ) - pad_size = (p_len - len(f0) + 1) // 2 - if pad_size > 0 or p_len - len(f0) - pad_size > 0: - f0 = np.pad( - f0, [[pad_size, p_len - len(f0) - pad_size]], mode="constant" - ) - elif f0_method == "harvest": - f0, t = pyworld.harvest( - x.astype(np.double), - fs=self.sr, - f0_ceil=f0_max, - f0_floor=f0_min, - frame_period=10, - ) - f0 = pyworld.stonemask(x.astype(np.double), f0, t, self.sr) - f0 = signal.medfilt(f0, 3) - f0 *= pow(2, f0_up_key / 12) - # with open("test.txt","w")as f:f.write("\n".join([str(i)for i in f0.tolist()])) - tf0 = self.sr // self.window # 每秒f0点数 - if inp_f0 is not None: - delta_t = np.round( - (inp_f0[:, 0].max() - inp_f0[:, 0].min()) * tf0 + 1 - ).astype("int16") - replace_f0 = np.interp( - list(range(delta_t)), inp_f0[:, 0] * 100, inp_f0[:, 1] - ) - shape = f0[self.x_pad * tf0 : self.x_pad * tf0 + len(replace_f0)].shape[0] - f0[self.x_pad * tf0 : self.x_pad * tf0 + len(replace_f0)] = replace_f0[ - :shape - ] - # with open("test_opt.txt","w")as f:f.write("\n".join([str(i)for i in f0.tolist()])) - f0bak = f0.copy() - f0_mel = 1127 * np.log(1 + f0 / 700) - f0_mel[f0_mel > 0] = (f0_mel[f0_mel > 0] - f0_mel_min) * 254 / ( - f0_mel_max - f0_mel_min - ) + 1 - f0_mel[f0_mel <= 1] = 1 - f0_mel[f0_mel > 255] = 255 - f0_coarse = np.rint(f0_mel).astype(np.int) - return f0_coarse, f0bak # 1-0 - - def vc( - self, - model, - net_g, - sid, - audio0, - pitch, - pitchf, - times, - index, - big_npy, - index_rate, - ): # ,file_index,file_big_npy - feats = torch.from_numpy(audio0) - if self.is_half: - feats = feats.half() - else: - feats = feats.float() - if feats.dim() == 2: # double channels - feats = feats.mean(-1) - assert feats.dim() == 1, feats.dim() - feats = feats.view(1, -1) - padding_mask = torch.BoolTensor(feats.shape).to(self.device).fill_(False) - - inputs = { - "source": feats.to(self.device), - "padding_mask": padding_mask, - "output_layer": 9, # layer 9 - } - t0 = ttime() - with torch.no_grad(): - logits = model.extract_features(**inputs) - feats = model.final_proj(logits[0]) - - if ( - isinstance(index, type(None)) == False - and isinstance(big_npy, type(None)) == False - and index_rate != 0 - ): - npy = feats[0].cpu().numpy() - if self.is_half: - npy = npy.astype("float32") - - # _, I = index.search(npy, 1) - # npy = big_npy[I.squeeze()] - - score, ix = index.search(npy, k=8) - weight = np.square(1 / score) - weight /= weight.sum(axis=1, keepdims=True) - npy = np.sum(big_npy[ix] * np.expand_dims(weight, axis=2), axis=1) - - if self.is_half: - npy = npy.astype("float16") - feats = ( - torch.from_numpy(npy).unsqueeze(0).to(self.device) * index_rate - + (1 - index_rate) * feats - ) - - feats = F.interpolate(feats.permute(0, 2, 1), scale_factor=2).permute(0, 2, 1) - t1 = ttime() - p_len = audio0.shape[0] // self.window - if feats.shape[1] < p_len: - p_len = feats.shape[1] - if pitch != None and pitchf != None: - pitch = pitch[:, :p_len] - pitchf = pitchf[:, :p_len] - p_len = torch.tensor([p_len], device=self.device).long() - with torch.no_grad(): - if pitch != None and pitchf != None: - audio1 = ( - (net_g.infer(feats, p_len, pitch, pitchf, sid)[0][0, 0] * 32768) - .data.cpu() - .float() - .numpy() - .astype(np.int16) - ) - else: - audio1 = ( - (net_g.infer(feats, p_len, sid)[0][0, 0] * 32768) - .data.cpu() - .float() - .numpy() - .astype(np.int16) - ) - del feats, p_len, padding_mask - if torch.cuda.is_available(): - torch.cuda.empty_cache() - t2 = ttime() - times[0] += t1 - t0 - times[2] += t2 - t1 - return audio1 - - def pipeline( - self, - model, - net_g, - sid, - audio, - times, - f0_up_key, - f0_method, - file_index, - # file_big_npy, - index_rate, - if_f0, - f0_file=None, - ): - if ( - file_index != "" - # and file_big_npy != "" - # and os.path.exists(file_big_npy) == True - and os.path.exists(file_index) == True - and index_rate != 0 - ): - try: - index = faiss.read_index(file_index) - # big_npy = np.load(file_big_npy) - big_npy = index.reconstruct_n(0, index.ntotal) - except: - traceback.print_exc() - index = big_npy = None - else: - index = big_npy = None - audio = signal.filtfilt(bh, ah, audio) - audio_pad = np.pad(audio, (self.window // 2, self.window // 2), mode="reflect") - opt_ts = [] - if audio_pad.shape[0] > self.t_max: - audio_sum = np.zeros_like(audio) - for i in range(self.window): - audio_sum += audio_pad[i : i - self.window] - for t in range(self.t_center, audio.shape[0], self.t_center): - opt_ts.append( - t - - self.t_query - + np.where( - np.abs(audio_sum[t - self.t_query : t + self.t_query]) - == np.abs(audio_sum[t - self.t_query : t + self.t_query]).min() - )[0][0] - ) - s = 0 - audio_opt = [] - t = None - t1 = ttime() - audio_pad = np.pad(audio, (self.t_pad, self.t_pad), mode="reflect") - p_len = audio_pad.shape[0] // self.window - inp_f0 = None - if hasattr(f0_file, "name") == True: - try: - with open(f0_file.name, "r") as f: - lines = f.read().strip("\n").split("\n") - inp_f0 = [] - for line in lines: - inp_f0.append([float(i) for i in line.split(",")]) - inp_f0 = np.array(inp_f0, dtype="float32") - except: - traceback.print_exc() - sid = torch.tensor(sid, device=self.device).unsqueeze(0).long() - pitch, pitchf = None, None - if if_f0 == 1: - pitch, pitchf = self.get_f0(audio_pad, p_len, f0_up_key, f0_method, inp_f0) - pitch = pitch[:p_len] - pitchf = pitchf[:p_len] - pitch = torch.tensor(pitch, device=self.device).unsqueeze(0).long() - pitchf = torch.tensor(pitchf, device=self.device).unsqueeze(0).float() - t2 = ttime() - times[1] += t2 - t1 - for t in opt_ts: - t = t // self.window * self.window - if if_f0 == 1: - audio_opt.append( - self.vc( - model, - net_g, - sid, - audio_pad[s : t + self.t_pad2 + self.window], - pitch[:, s // self.window : (t + self.t_pad2) // self.window], - pitchf[:, s // self.window : (t + self.t_pad2) // self.window], - times, - index, - big_npy, - index_rate, - )[self.t_pad_tgt : -self.t_pad_tgt] - ) - else: - audio_opt.append( - self.vc( - model, - net_g, - sid, - audio_pad[s : t + self.t_pad2 + self.window], - None, - None, - times, - index, - big_npy, - index_rate, - )[self.t_pad_tgt : -self.t_pad_tgt] - ) - s = t - if if_f0 == 1: - audio_opt.append( - self.vc( - model, - net_g, - sid, - audio_pad[t:], - pitch[:, t // self.window :] if t is not None else pitch, - pitchf[:, t // self.window :] if t is not None else pitchf, - times, - index, - big_npy, - index_rate, - )[self.t_pad_tgt : -self.t_pad_tgt] - ) - else: - audio_opt.append( - self.vc( - model, - net_g, - sid, - audio_pad[t:], - None, - None, - times, - index, - big_npy, - index_rate, - )[self.t_pad_tgt : -self.t_pad_tgt] - ) - audio_opt = np.concatenate(audio_opt) - del pitch, pitchf, sid - if torch.cuda.is_available(): - torch.cuda.empty_cache() - return audio_opt diff --git a/spaces/osiria/classifier-zero-shot-italian/app.py b/spaces/osiria/classifier-zero-shot-italian/app.py deleted file mode 100644 index 6d2a03a165151f4d4a59b214b93dadf81d0a33a1..0000000000000000000000000000000000000000 --- a/spaces/osiria/classifier-zero-shot-italian/app.py +++ /dev/null @@ -1,168 +0,0 @@ -import os -import gradio as gr -from gradio.components import Label -import subprocess -import sys - -def install(package): - subprocess.check_call([sys.executable, "-m", "pip", "install", package]) - -install("numpy") -install("transformers") -install("torch") - -import torch -from transformers import AutoTokenizer -from transformers import AutoModel -import numpy as np -import re - - -auth_token = os.environ.get("AUTH-TOKEN") - - -header = '''-------------------------------------------------------------------------------------------------- -<style> -.vertical-text { - writing-mode: vertical-lr; - text-orientation: upright; - background-color:red; -} -</style> -<center> -<body> -<span class="vertical-text" style="background-color:lightgreen;border-radius: 3px;padding: 3px;"> </span> -<span class="vertical-text" style="background-color:orange;border-radius: 3px;padding: 3px;"> D</span> -<span class="vertical-text" style="background-color:lightblue;border-radius: 3px;padding: 3px;">    E</span> -<span class="vertical-text" style="background-color:tomato;border-radius: 3px;padding: 3px;">    M</span> -<span class="vertical-text" style="background-color:lightgrey;border-radius: 3px;padding: 3px;"> O</span> -<span class="vertical-text" style="background-color:#CF9FFF;border-radius: 3px;padding: 3px;"> </span> -</body> -</center> -<br> - ---------------------------------------------------------------------------------------------------''' - -tokenizer_cl = tokenizer = AutoTokenizer.from_pretrained("osiria/distiluse-base-italian") -model_cl = AutoModel.from_pretrained("osiria/distiluse-base-italian") - - -def classify(text, classes, prompt = "L'argomento di cui parliamo è quindi: "): - - text = text[:10000] - - classes = {el.split(":")[0].strip(): el.split(":")[1].strip() for el in classes.split("\n")} - - t_vec = model_cl(tokenizer_cl.encode(text, return_tensors = "pt", truncation = True, max_length = 512)).last_hidden_state[0,0,:].cpu().detach().numpy() - t_vec = t_vec/np.linalg.norm(t_vec) - t_vec = t_vec.reshape(-1, 1) - - classes_mod = [prompt + re.sub("\s+", " ", classes[cl].lower().replace(",", " ")).strip() for cl in classes] - cl_vecs = np.array([model_cl(tokenizer_cl.encode(cl, return_tensors = "pt", truncation = True, max_length = 512)).last_hidden_state[0,0,:].cpu().detach().numpy() for cl in classes_mod]) - cl_vecs = cl_vecs/np.sqrt(np.sum(cl_vecs**2, axis = 1).reshape(-1,1)) - - scores = np.dot(cl_vecs, t_vec).reshape(1,-1)[0] - scores = scores*(scores > 0) - scores = (scores/np.sum(scores)) - - scores = scores*(scores > 0.05) - scores = (scores/np.sum(scores)) - scores = scores.tolist() - - classes = list(classes.keys()) - output = list(zip(classes, scores)) - output = sorted(output, key = lambda tpl: tpl[1], reverse = True) - output = {tpl[0].capitalize(): tpl[1] for tpl in output} - - return output - - - -init_text = '''L'Agenzia Spaziale Italiana (ASI) è un ente governativo italiano, istituito nel 1988, che ha il compito di predisporre e attuare la politica aerospaziale italiana. Dipende e utilizza i fondi ricevuti dal Governo italiano per finanziare il progetto, lo sviluppo e la gestione operativa di missioni spaziali, con obiettivi scientifici e applicativi. - -Gestisce missioni spaziali in proprio e in collaborazione con i maggiori organismi spaziali internazionali, prima tra tutte l'Agenzia spaziale europea (dove l'Italia è il terzo maggior contribuente dopo Francia e Germania, e a cui l'ASI corrisponde una parte del proprio budget), quindi la NASA e le altre agenzie spaziali nazionali. Per la realizzazione di satelliti e strumenti scientifici, l'ASI stipula contratti con le imprese, italiane e non, operanti nel settore aerospaziale. - -Ha la sede principale a Roma e centri operativi a Matera (sede del Centro di geodesia spaziale Giuseppe Colombo) e Malindi, Kenya (sede del Centro spaziale Luigi Broglio). Il centro di Trapani-Milo, usato per i lanci di palloni stratosferici dal 1975, non è più operativo dal 2010. - -Ha un organico di circa 393 dipendenti (al 2023), e un budget annuale al 2019 di circa 1,075 miliardi di euro. Le attività di ricerca vengono svolte in cooperazione con le Università, il CNR, gli osservatori astronomici, ecc. I campi di studio sono in genere le "scienze dell'universo, le scienze della Terra, le scienze della vita" e la tecnologia aerospaziale. - -Con DM 08/06/2023 il Professore Teodoro Valente è stato nominato Presidente dell'Agenzia Spaziale Italiana.''' - -init_classes = '''alimentazione: alimentazione, cibo, agricoltura, allevamento, nutrizione -arte: arte, pittura, scultura, moda -animali: animali, zoologia, botanica, piante -ambiente: ambiente, clima, sostenibilità, ecologia, inquinamento -economia: aziende, banche, economia, finanza, borsa -filosofia: etica, filosofia, religione, teologia -geografia: città, regioni, nazioni, geografia, geologia -giustizia: giustizia, magistratura, reati, criminalità -musica: musica, cantanti, gruppi musicali, generi musicali -cinema: cinema, film, televisione, spettacolo -intrattenimento: intrattenimento, tempo libero, svago, videogiochi -letteratura: letteratura, romanzi, narrativa, poesia -medicina: medicina, salute, farmaci, malattie, patologie -governo: governo, legge, politica, partiti, settore pubblico -scienza: scienza, ingegneria, tecnologia -sport: competizioni, sport -guerra: guerra, conflitti, battaglie, tematiche militari -storia: eventi, storia -società: tematiche sociali, tematiche internazionali -trasporti: automobili, treni, aerei, trasporti, veicoli -informatica: computer, smartphone, applicazioni, internet, social networks''' - -init_output = classify(init_text, init_classes) - -with gr.Blocks(css="footer {visibility: hidden}", theme=gr.themes.Default(text_size="lg", spacing_size="lg")) as interface: - - with gr.Row(): - gr.Markdown(header) - with gr.Row(): - text = gr.Text(label="Write or paste a text", lines = 5, value = init_text) - with gr.Row(): - gr.Examples([["Alessandro Manzoni, nome completo Alessandro Francesco Tommaso Antonio Manzoni (Milano, 7 marzo 1785 – Milano, 22 maggio 1873), è stato uno scrittore, poeta e drammaturgo italiano. Considerato uno dei maggiori romanzieri italiani di tutti i tempi per il suo celebre romanzo I promessi sposi, caposaldo della letteratura italiana, Manzoni ebbe il merito principale di aver gettato le basi per il romanzo moderno e di aver così patrocinato l'unità linguistica italiana, sulla scia di quella letteratura moralmente e civilmente impegnata propria dell'Illuminismo italiano."], - ["Oggi sto male perchè ho la febbre"], - ["Mi sono registrato su Facebook"], - ["Stasera guardo qualcosa su Netflix"], - ["La battaglia delle Termòpili, o delle Termòpile, fu una battaglia combattuta da un'alleanza di poleis greche, guidata dal re di Sparta Leonida I, contro l'Impero persiano governato da Serse I. Si svolse in tre giorni, durante la seconda invasione persiana della Grecia, nell'agosto o nel settembre del 480 a.C. presso lo stretto passaggio delle Termopili (o, più correttamente, Termopile, 'Le porte calde'), contemporaneamente alla battaglia navale di Capo Artemisio."], - ["Ieri ho comprato l'Xbox One"], - ["Domani per pranzo preparo la pasta alle vongole"], - ["Ho appena ascoltato l'ultimo album dei Green Day"], - ["Sono chiamati gas serra quei gas presenti nell'atmosfera che riescono a trattenere, in maniera consistente, una parte considerevole della componente nell'infrarosso della radiazione solare che colpisce la Terra ed è emessa dalla superficie terrestre, dall'atmosfera e dalle nuvole. Tale proprietà causa il fenomeno noto come 'effetto serra' ed è verificabile da un'analisi spettroscopica in laboratorio."]], - inputs=[text]) - with gr.Row(): - classes = gr.Text(label="Classes (write a few classes in the form 'class_name: word1, word2, word3...' using 1 to 5 descriptive words for each class)", lines = 1, value = '''alimentazione: alimentazione, cibo, agricoltura, allevamento, nutrizione -arte: arte, pittura, scultura, moda -animali: animali, zoologia, botanica, piante -ambiente: ambiente, clima, sostenibilità, ecologia, inquinamento -economia: aziende, banche, economia, finanza, borsa -filosofia: etica, filosofia, religione, teologia -geografia: città, regioni, nazioni, geografia, geologia -giustizia: giustizia, magistratura, reati, criminalità -musica: musica, cantanti, gruppi musicali, generi musicali -cinema: cinema, film, televisione, spettacolo -intrattenimento: intrattenimento, tempo libero, svago, videogiochi -letteratura: letteratura, romanzi, narrativa, poesia -medicina: medicina, salute, farmaci, malattie, patologie -governo: governo, legge, politica, partiti, settore pubblico -scienza: matematica, scienza, ingegneria, tecnologia, spazio -sport: competizioni, sport -guerra: guerra, conflitti, battaglie, tematiche militari -storia: eventi, storia -società: tematiche sociali, tematiche internazionali -trasporti: automobili, treni, aerei, trasporti, veicoli -informatica: computer, smartphone, applicazioni, internet, social networks''') - with gr.Row(): - button = gr.Button("Classify").style(full_width=False) - - with gr.Row(): - with gr.Column(): - output = Label(label="Result") - - with gr.Row(): - with gr.Column(): - footer = gr.Markdown("<center>A few examples in this demo are extracted from Wikipedia</center>") - - button.click(classify, inputs=[text, classes], outputs = [output]) - - -interface.launch() \ No newline at end of file diff --git "a/spaces/oskarvanderwal/MT-bias-demo/results/simple_n\305\221v\303\251r_de.html" "b/spaces/oskarvanderwal/MT-bias-demo/results/simple_n\305\221v\303\251r_de.html" deleted file mode 100644 index 4064ab267d8c317370589be026eeac700a31a572..0000000000000000000000000000000000000000 --- "a/spaces/oskarvanderwal/MT-bias-demo/results/simple_n\305\221v\303\251r_de.html" +++ /dev/null @@ -1,46 +0,0 @@ -<br/><b>0th instance:</b><br/> -<html> -<div id="yfajpifujnimdeywykoq_viz_container"> - <div id="yfajpifujnimdeywykoq_content" style="padding:15px;border-style:solid;margin:5px;"> - <div id = "yfajpifujnimdeywykoq_saliency_plot_container" class="yfajpifujnimdeywykoq_viz_container" style="display:block"> - -<div id="drslkexjaxgenhyvsxnl_saliency_plot" class="drslkexjaxgenhyvsxnl_viz_content"> - <div style="margin:5px;font-family:sans-serif;font-weight:bold;"> - <span style="font-size: 20px;">Source Saliency Heatmap</span> - <br> - x: Generated tokens, y: Attributed tokens - </div> - -<table border="1" cellpadding="5" cellspacing="5" - style="overflow-x:scroll;display:block;"> - <tr><th></th> -<th>▁Sie</th><th>▁ist</th><th>▁</th><th>Krankenschwester</th><th>.</th><th>&lt;/s&gt;</th></tr><tr><th>▁Ő</th><th style="background:rgba(255.0, 13.0, 87.0, 0.7635175282234107)">0.7</th><th style="background:rgba(255.0, 13.0, 87.0, 0.755634779164191)">0.689</th><th style="background:rgba(255.0, 13.0, 87.0, 0.030421865715983164)">0.033</th><th style="background:rgba(255.0, 13.0, 87.0, 0.16442859972271742)">0.156</th><th style="background:rgba(255.0, 13.0, 87.0, 0.20384234501881549)">0.187</th><th style="background:rgba(255.0, 13.0, 87.0, 0.1959595959595959)">0.179</th></tr><tr><th>▁nővér</th><th style="background:rgba(255.0, 13.0, 87.0, 0.4560903149138443)">0.417</th><th style="background:rgba(255.0, 13.0, 87.0, 0.6610417904535549)">0.603</th><th style="background:rgba(255.0, 13.0, 87.0, 0.6058625470390177)">0.553</th><th style="background:rgba(255.0, 13.0, 87.0, 0.26690433749257286)">0.248</th><th style="background:rgba(255.0, 13.0, 87.0, 0.5191523073876015)">0.476</th><th style="background:rgba(30.0, 136.0, 229.0, 0.22749059219647463)">-0.21</th></tr><tr><th>.</th><th style="background:rgba(255.0, 13.0, 87.0, 0.6295107942166767)">0.579</th><th style="background:rgba(255.0, 13.0, 87.0, 0.22749059219647458)">0.214</th><th style="background:rgba(255.0, 13.0, 87.0, 0.14078035254505847)">0.131</th><th style="background:rgba(255.0, 13.0, 87.0, 0.2117250940780353)">0.194</th><th style="background:rgba(255.0, 13.0, 87.0, 0.889641513170925)">0.809</th><th style="background:rgba(255.0, 13.0, 87.0, 0.14078035254505847)">0.135</th></tr><tr><th>&lt;/s&gt;</th><th style="background:rgba(230.2941176470614, 26.505882352939775, 102.59215686274348, 0.0)">0.0</th><th style="background:rgba(230.2941176470614, 26.505882352939775, 102.59215686274348, 0.0)">0.0</th><th style="background:rgba(230.2941176470614, 26.505882352939775, 102.59215686274348, 0.0)">0.0</th><th style="background:rgba(230.2941176470614, 26.505882352939775, 102.59215686274348, 0.0)">0.0</th><th style="background:rgba(230.2941176470614, 26.505882352939775, 102.59215686274348, 0.0)">0.0</th><th style="background:rgba(230.2941176470614, 26.505882352939775, 102.59215686274348, 0.0)">0.0</th></tr></table> -</div> - - </div> - </div> -</div> -</html> -<br/><b>0th instance:</b><br/> -<html> -<div id="xhzowgcqdiawdhhzuwjd_viz_container"> - <div id="xhzowgcqdiawdhhzuwjd_content" style="padding:15px;border-style:solid;margin:5px;"> - <div id = "xhzowgcqdiawdhhzuwjd_saliency_plot_container" class="xhzowgcqdiawdhhzuwjd_viz_container" style="display:block"> - -<div id="vzjuzmgdfgqxovhqpdwp_saliency_plot" class="vzjuzmgdfgqxovhqpdwp_viz_content"> - <div style="margin:5px;font-family:sans-serif;font-weight:bold;"> - <span style="font-size: 20px;">Target Saliency Heatmap</span> - <br> - x: Generated tokens, y: Attributed tokens - </div> - -<table border="1" cellpadding="5" cellspacing="5" - style="overflow-x:scroll;display:block;"> - <tr><th></th> -<th>▁Sie</th><th>▁ist</th><th>▁</th><th>Krankenschwester</th><th>.</th><th>&lt;/s&gt;</th></tr><tr><th>▁Sie</th><th style="background:rgba(230.2941176470614, 26.505882352939775, 102.59215686274348, 0.0)"></th><th style="background:rgba(255.0, 13.0, 87.0, 0.3693800752624282)">0.34</th><th style="background:rgba(255.0, 13.0, 87.0, 0.4087938205585263)">0.375</th><th style="background:rgba(255.0, 13.0, 87.0, 0.1171321053673995)">0.11</th><th style="background:rgba(255.0, 13.0, 87.0, 0.08560110913052081)">0.083</th><th style="background:rgba(255.0, 13.0, 87.0, 0.2117250940780353)">0.195</th></tr><tr><th>▁ist</th><th style="background:rgba(230.2941176470614, 26.505882352939775, 102.59215686274348, 0.0)"></th><th style="background:rgba(230.2941176470614, 26.505882352939775, 102.59215686274348, 0.0)"></th><th style="background:rgba(255.0, 13.0, 87.0, 0.8029312735195088)">0.732</th><th style="background:rgba(255.0, 13.0, 87.0, 0.17231134878193693)">0.163</th><th style="background:rgba(255.0, 13.0, 87.0, 0.12501485442661908)">0.119</th><th style="background:rgba(30.0, 136.0, 229.0, 0.10924935630817977)">-0.102</th></tr><tr><th>▁</th><th style="background:rgba(230.2941176470614, 26.505882352939775, 102.59215686274348, 0.0)"></th><th style="background:rgba(230.2941176470614, 26.505882352939775, 102.59215686274348, 0.0)"></th><th style="background:rgba(230.2941176470614, 26.505882352939775, 102.59215686274348, 0.0)"></th><th style="background:rgba(255.0, 13.0, 87.0, 1.0)">0.915</th><th style="background:rgba(255.0, 13.0, 87.0, 0.16442859972271742)">0.15</th><th style="background:rgba(30.0, 136.0, 229.0, 0.10136660724896014)">-0.095</th></tr><tr><th>Krankenschwester</th><th style="background:rgba(230.2941176470614, 26.505882352939775, 102.59215686274348, 0.0)"></th><th style="background:rgba(230.2941176470614, 26.505882352939775, 102.59215686274348, 0.0)"></th><th style="background:rgba(230.2941176470614, 26.505882352939775, 102.59215686274348, 0.0)"></th><th style="background:rgba(230.2941176470614, 26.505882352939775, 102.59215686274348, 0.0)"></th><th style="background:rgba(255.0, 13.0, 87.0, 0.219607843137255)">0.202</th><th style="background:rgba(30.0, 136.0, 229.0, 0.29055258467023176)">-0.27</th></tr><tr><th>.</th><th style="background:rgba(230.2941176470614, 26.505882352939775, 102.59215686274348, 0.0)"></th><th style="background:rgba(230.2941176470614, 26.505882352939775, 102.59215686274348, 0.0)"></th><th style="background:rgba(230.2941176470614, 26.505882352939775, 102.59215686274348, 0.0)"></th><th style="background:rgba(230.2941176470614, 26.505882352939775, 102.59215686274348, 0.0)"></th><th style="background:rgba(230.2941176470614, 26.505882352939775, 102.59215686274348, 0.0)"></th><th style="background:rgba(255.0, 13.0, 87.0, 0.9684690037631213)">0.88</th></tr><tr><th>&lt;/s&gt;</th><th style="background:rgba(230.2941176470614, 26.505882352939775, 102.59215686274348, 0.0)"></th><th style="background:rgba(230.2941176470614, 26.505882352939775, 102.59215686274348, 0.0)"></th><th style="background:rgba(230.2941176470614, 26.505882352939775, 102.59215686274348, 0.0)"></th><th style="background:rgba(230.2941176470614, 26.505882352939775, 102.59215686274348, 0.0)"></th><th style="background:rgba(230.2941176470614, 26.505882352939775, 102.59215686274348, 0.0)"></th><th style="background:rgba(230.2941176470614, 26.505882352939775, 102.59215686274348, 0.0)"></th></tr></table> -</div> - - </div> - </div> -</div> -</html> diff --git a/spaces/pablodawson/ldm3d-inpainting/diffuserslocal/examples/inference/inpainting.py b/spaces/pablodawson/ldm3d-inpainting/diffuserslocal/examples/inference/inpainting.py deleted file mode 100644 index 8aad208ff34eb4d4ba1c6acfdfe0f97ac9afc4bc..0000000000000000000000000000000000000000 --- a/spaces/pablodawson/ldm3d-inpainting/diffuserslocal/examples/inference/inpainting.py +++ /dev/null @@ -1,9 +0,0 @@ -import warnings - -from diffusers import StableDiffusionInpaintPipeline as StableDiffusionInpaintPipeline # noqa F401 - - -warnings.warn( - "The `inpainting.py` script is outdated. Please use directly `from diffusers import" - " StableDiffusionInpaintPipeline` instead." -) diff --git a/spaces/pablodawson/ldm3d-inpainting/diffuserslocal/examples/research_projects/onnxruntime/unconditional_image_generation/train_unconditional.py b/spaces/pablodawson/ldm3d-inpainting/diffuserslocal/examples/research_projects/onnxruntime/unconditional_image_generation/train_unconditional.py deleted file mode 100644 index ba5ccd238fdc140186ea9b293e2c975007d44c95..0000000000000000000000000000000000000000 --- a/spaces/pablodawson/ldm3d-inpainting/diffuserslocal/examples/research_projects/onnxruntime/unconditional_image_generation/train_unconditional.py +++ /dev/null @@ -1,701 +0,0 @@ -import argparse -import inspect -import logging -import math -import os -from pathlib import Path -from typing import Optional - -import accelerate -import datasets -import torch -import torch.nn.functional as F -from accelerate import Accelerator -from accelerate.logging import get_logger -from accelerate.utils import ProjectConfiguration -from datasets import load_dataset -from huggingface_hub import HfFolder, Repository, create_repo, whoami -from onnxruntime.training.optim.fp16_optimizer import FP16_Optimizer as ORT_FP16_Optimizer -from onnxruntime.training.ortmodule import ORTModule -from packaging import version -from torchvision import transforms -from tqdm.auto import tqdm - -import diffusers -from diffusers import DDPMPipeline, DDPMScheduler, UNet2DModel -from diffusers.optimization import get_scheduler -from diffusers.training_utils import EMAModel -from diffusers.utils import check_min_version, is_accelerate_version, is_tensorboard_available, is_wandb_available -from diffusers.utils.import_utils import is_xformers_available - - -# Will error if the minimal version of diffusers is not installed. Remove at your own risks. -check_min_version("0.17.0.dev0") - -logger = get_logger(__name__, log_level="INFO") - - -def _extract_into_tensor(arr, timesteps, broadcast_shape): - """ - Extract values from a 1-D numpy array for a batch of indices. - - :param arr: the 1-D numpy array. - :param timesteps: a tensor of indices into the array to extract. - :param broadcast_shape: a larger shape of K dimensions with the batch - dimension equal to the length of timesteps. - :return: a tensor of shape [batch_size, 1, ...] where the shape has K dims. - """ - if not isinstance(arr, torch.Tensor): - arr = torch.from_numpy(arr) - res = arr[timesteps].float().to(timesteps.device) - while len(res.shape) < len(broadcast_shape): - res = res[..., None] - return res.expand(broadcast_shape) - - -def parse_args(): - parser = argparse.ArgumentParser(description="Simple example of a training script.") - parser.add_argument( - "--dataset_name", - type=str, - default=None, - help=( - "The name of the Dataset (from the HuggingFace hub) to train on (could be your own, possibly private," - " dataset). It can also be a path pointing to a local copy of a dataset in your filesystem," - " or to a folder containing files that HF Datasets can understand." - ), - ) - parser.add_argument( - "--dataset_config_name", - type=str, - default=None, - help="The config of the Dataset, leave as None if there's only one config.", - ) - parser.add_argument( - "--model_config_name_or_path", - type=str, - default=None, - help="The config of the UNet model to train, leave as None to use standard DDPM configuration.", - ) - parser.add_argument( - "--train_data_dir", - type=str, - default=None, - help=( - "A folder containing the training data. Folder contents must follow the structure described in" - " https://huggingface.co/docs/datasets/image_dataset#imagefolder. In particular, a `metadata.jsonl` file" - " must exist to provide the captions for the images. Ignored if `dataset_name` is specified." - ), - ) - parser.add_argument( - "--output_dir", - type=str, - default="ddpm-model-64", - help="The output directory where the model predictions and checkpoints will be written.", - ) - parser.add_argument("--overwrite_output_dir", action="store_true") - parser.add_argument( - "--cache_dir", - type=str, - default=None, - help="The directory where the downloaded models and datasets will be stored.", - ) - parser.add_argument( - "--resolution", - type=int, - default=64, - help=( - "The resolution for input images, all the images in the train/validation dataset will be resized to this" - " resolution" - ), - ) - parser.add_argument( - "--center_crop", - default=False, - action="store_true", - help=( - "Whether to center crop the input images to the resolution. If not set, the images will be randomly" - " cropped. The images will be resized to the resolution first before cropping." - ), - ) - parser.add_argument( - "--random_flip", - default=False, - action="store_true", - help="whether to randomly flip images horizontally", - ) - parser.add_argument( - "--train_batch_size", type=int, default=16, help="Batch size (per device) for the training dataloader." - ) - parser.add_argument( - "--eval_batch_size", type=int, default=16, help="The number of images to generate for evaluation." - ) - parser.add_argument( - "--dataloader_num_workers", - type=int, - default=0, - help=( - "The number of subprocesses to use for data loading. 0 means that the data will be loaded in the main" - " process." - ), - ) - parser.add_argument("--num_epochs", type=int, default=100) - parser.add_argument("--save_images_epochs", type=int, default=10, help="How often to save images during training.") - parser.add_argument( - "--save_model_epochs", type=int, default=10, help="How often to save the model during training." - ) - parser.add_argument( - "--gradient_accumulation_steps", - type=int, - default=1, - help="Number of updates steps to accumulate before performing a backward/update pass.", - ) - parser.add_argument( - "--learning_rate", - type=float, - default=1e-4, - help="Initial learning rate (after the potential warmup period) to use.", - ) - parser.add_argument( - "--lr_scheduler", - type=str, - default="cosine", - help=( - 'The scheduler type to use. Choose between ["linear", "cosine", "cosine_with_restarts", "polynomial",' - ' "constant", "constant_with_warmup"]' - ), - ) - parser.add_argument( - "--lr_warmup_steps", type=int, default=500, help="Number of steps for the warmup in the lr scheduler." - ) - parser.add_argument("--adam_beta1", type=float, default=0.95, help="The beta1 parameter for the Adam optimizer.") - parser.add_argument("--adam_beta2", type=float, default=0.999, help="The beta2 parameter for the Adam optimizer.") - parser.add_argument( - "--adam_weight_decay", type=float, default=1e-6, help="Weight decay magnitude for the Adam optimizer." - ) - parser.add_argument("--adam_epsilon", type=float, default=1e-08, help="Epsilon value for the Adam optimizer.") - parser.add_argument( - "--use_ema", - action="store_true", - help="Whether to use Exponential Moving Average for the final model weights.", - ) - parser.add_argument("--ema_inv_gamma", type=float, default=1.0, help="The inverse gamma value for the EMA decay.") - parser.add_argument("--ema_power", type=float, default=3 / 4, help="The power value for the EMA decay.") - parser.add_argument("--ema_max_decay", type=float, default=0.9999, help="The maximum decay magnitude for EMA.") - parser.add_argument("--push_to_hub", action="store_true", help="Whether or not to push the model to the Hub.") - parser.add_argument("--hub_token", type=str, default=None, help="The token to use to push to the Model Hub.") - parser.add_argument( - "--hub_model_id", - type=str, - default=None, - help="The name of the repository to keep in sync with the local `output_dir`.", - ) - parser.add_argument( - "--hub_private_repo", action="store_true", help="Whether or not to create a private repository." - ) - parser.add_argument( - "--logger", - type=str, - default="tensorboard", - choices=["tensorboard", "wandb"], - help=( - "Whether to use [tensorboard](https://www.tensorflow.org/tensorboard) or [wandb](https://www.wandb.ai)" - " for experiment tracking and logging of model metrics and model checkpoints" - ), - ) - parser.add_argument( - "--logging_dir", - type=str, - default="logs", - help=( - "[TensorBoard](https://www.tensorflow.org/tensorboard) log directory. Will default to" - " *output_dir/runs/**CURRENT_DATETIME_HOSTNAME***." - ), - ) - parser.add_argument("--local_rank", type=int, default=-1, help="For distributed training: local_rank") - parser.add_argument( - "--mixed_precision", - type=str, - default="no", - choices=["no", "fp16", "bf16"], - help=( - "Whether to use mixed precision. Choose" - "between fp16 and bf16 (bfloat16). Bf16 requires PyTorch >= 1.10." - "and an Nvidia Ampere GPU." - ), - ) - parser.add_argument( - "--prediction_type", - type=str, - default="epsilon", - choices=["epsilon", "sample"], - help="Whether the model should predict the 'epsilon'/noise error or directly the reconstructed image 'x0'.", - ) - parser.add_argument("--ddpm_num_steps", type=int, default=1000) - parser.add_argument("--ddpm_num_inference_steps", type=int, default=1000) - parser.add_argument("--ddpm_beta_schedule", type=str, default="linear") - parser.add_argument( - "--checkpointing_steps", - type=int, - default=500, - help=( - "Save a checkpoint of the training state every X updates. These checkpoints are only suitable for resuming" - " training using `--resume_from_checkpoint`." - ), - ) - parser.add_argument( - "--checkpoints_total_limit", - type=int, - default=None, - help=( - "Max number of checkpoints to store. Passed as `total_limit` to the `Accelerator` `ProjectConfiguration`." - " See Accelerator::save_state https://huggingface.co/docs/accelerate/package_reference/accelerator#accelerate.Accelerator.save_state" - " for more docs" - ), - ) - parser.add_argument( - "--resume_from_checkpoint", - type=str, - default=None, - help=( - "Whether training should be resumed from a previous checkpoint. Use a path saved by" - ' `--checkpointing_steps`, or `"latest"` to automatically select the last available checkpoint.' - ), - ) - parser.add_argument( - "--enable_xformers_memory_efficient_attention", action="store_true", help="Whether or not to use xformers." - ) - - args = parser.parse_args() - env_local_rank = int(os.environ.get("LOCAL_RANK", -1)) - if env_local_rank != -1 and env_local_rank != args.local_rank: - args.local_rank = env_local_rank - - if args.dataset_name is None and args.train_data_dir is None: - raise ValueError("You must specify either a dataset name from the hub or a train data directory.") - - return args - - -def get_full_repo_name(model_id: str, organization: Optional[str] = None, token: Optional[str] = None): - if token is None: - token = HfFolder.get_token() - if organization is None: - username = whoami(token)["name"] - return f"{username}/{model_id}" - else: - return f"{organization}/{model_id}" - - -def main(args): - logging_dir = os.path.join(args.output_dir, args.logging_dir) - accelerator_project_config = ProjectConfiguration( - total_limit=args.checkpoints_total_limit, project_dir=args.output_dir, logging_dir=logging_dir - ) - - accelerator = Accelerator( - gradient_accumulation_steps=args.gradient_accumulation_steps, - mixed_precision=args.mixed_precision, - log_with=args.report_to, - project_config=accelerator_project_config, - ) - - if args.logger == "tensorboard": - if not is_tensorboard_available(): - raise ImportError("Make sure to install tensorboard if you want to use it for logging during training.") - - elif args.logger == "wandb": - if not is_wandb_available(): - raise ImportError("Make sure to install wandb if you want to use it for logging during training.") - import wandb - - # `accelerate` 0.16.0 will have better support for customized saving - if version.parse(accelerate.__version__) >= version.parse("0.16.0"): - # create custom saving & loading hooks so that `accelerator.save_state(...)` serializes in a nice format - def save_model_hook(models, weights, output_dir): - if accelerator.is_main_process: - if args.use_ema: - ema_model.save_pretrained(os.path.join(output_dir, "unet_ema")) - - for i, model in enumerate(models): - model.save_pretrained(os.path.join(output_dir, "unet")) - - # make sure to pop weight so that corresponding model is not saved again - weights.pop() - - def load_model_hook(models, input_dir): - if args.use_ema: - load_model = EMAModel.from_pretrained(os.path.join(input_dir, "unet_ema"), UNet2DModel) - ema_model.load_state_dict(load_model.state_dict()) - ema_model.to(accelerator.device) - del load_model - - for i in range(len(models)): - # pop models so that they are not loaded again - model = models.pop() - - # load diffusers style into model - load_model = UNet2DModel.from_pretrained(input_dir, subfolder="unet") - model.register_to_config(**load_model.config) - - model.load_state_dict(load_model.state_dict()) - del load_model - - accelerator.register_save_state_pre_hook(save_model_hook) - accelerator.register_load_state_pre_hook(load_model_hook) - - # Make one log on every process with the configuration for debugging. - logging.basicConfig( - format="%(asctime)s - %(levelname)s - %(name)s - %(message)s", - datefmt="%m/%d/%Y %H:%M:%S", - level=logging.INFO, - ) - logger.info(accelerator.state, main_process_only=False) - if accelerator.is_local_main_process: - datasets.utils.logging.set_verbosity_warning() - diffusers.utils.logging.set_verbosity_info() - else: - datasets.utils.logging.set_verbosity_error() - diffusers.utils.logging.set_verbosity_error() - - # Handle the repository creation - if accelerator.is_main_process: - if args.push_to_hub: - if args.hub_model_id is None: - repo_name = get_full_repo_name(Path(args.output_dir).name, token=args.hub_token) - else: - repo_name = args.hub_model_id - create_repo(repo_name, exist_ok=True, token=args.hub_token) - repo = Repository(args.output_dir, clone_from=repo_name, token=args.hub_token) - - with open(os.path.join(args.output_dir, ".gitignore"), "w+") as gitignore: - if "step_*" not in gitignore: - gitignore.write("step_*\n") - if "epoch_*" not in gitignore: - gitignore.write("epoch_*\n") - elif args.output_dir is not None: - os.makedirs(args.output_dir, exist_ok=True) - - # Initialize the model - if args.model_config_name_or_path is None: - model = UNet2DModel( - sample_size=args.resolution, - in_channels=3, - out_channels=3, - layers_per_block=2, - block_out_channels=(128, 128, 256, 256, 512, 512), - down_block_types=( - "DownBlock2D", - "DownBlock2D", - "DownBlock2D", - "DownBlock2D", - "AttnDownBlock2D", - "DownBlock2D", - ), - up_block_types=( - "UpBlock2D", - "AttnUpBlock2D", - "UpBlock2D", - "UpBlock2D", - "UpBlock2D", - "UpBlock2D", - ), - ) - else: - config = UNet2DModel.load_config(args.model_config_name_or_path) - model = UNet2DModel.from_config(config) - - # Create EMA for the model. - if args.use_ema: - ema_model = EMAModel( - model.parameters(), - decay=args.ema_max_decay, - use_ema_warmup=True, - inv_gamma=args.ema_inv_gamma, - power=args.ema_power, - model_cls=UNet2DModel, - model_config=model.config, - ) - - if args.enable_xformers_memory_efficient_attention: - if is_xformers_available(): - import xformers - - xformers_version = version.parse(xformers.__version__) - if xformers_version == version.parse("0.0.16"): - logger.warn( - "xFormers 0.0.16 cannot be used for training in some GPUs. If you observe problems during training, please update xFormers to at least 0.0.17. See https://huggingface.co/docs/diffusers/main/en/optimization/xformers for more details." - ) - model.enable_xformers_memory_efficient_attention() - else: - raise ValueError("xformers is not available. Make sure it is installed correctly") - - # Initialize the scheduler - accepts_prediction_type = "prediction_type" in set(inspect.signature(DDPMScheduler.__init__).parameters.keys()) - if accepts_prediction_type: - noise_scheduler = DDPMScheduler( - num_train_timesteps=args.ddpm_num_steps, - beta_schedule=args.ddpm_beta_schedule, - prediction_type=args.prediction_type, - ) - else: - noise_scheduler = DDPMScheduler(num_train_timesteps=args.ddpm_num_steps, beta_schedule=args.ddpm_beta_schedule) - - # Initialize the optimizer - optimizer = torch.optim.AdamW( - model.parameters(), - lr=args.learning_rate, - betas=(args.adam_beta1, args.adam_beta2), - weight_decay=args.adam_weight_decay, - eps=args.adam_epsilon, - ) - - optimizer = ORT_FP16_Optimizer(optimizer) - - # Get the datasets: you can either provide your own training and evaluation files (see below) - # or specify a Dataset from the hub (the dataset will be downloaded automatically from the datasets Hub). - - # In distributed training, the load_dataset function guarantees that only one local process can concurrently - # download the dataset. - if args.dataset_name is not None: - dataset = load_dataset( - args.dataset_name, - args.dataset_config_name, - cache_dir=args.cache_dir, - split="train", - ) - else: - dataset = load_dataset("imagefolder", data_dir=args.train_data_dir, cache_dir=args.cache_dir, split="train") - # See more about loading custom images at - # https://huggingface.co/docs/datasets/v2.4.0/en/image_load#imagefolder - - # Preprocessing the datasets and DataLoaders creation. - augmentations = transforms.Compose( - [ - transforms.Resize(args.resolution, interpolation=transforms.InterpolationMode.BILINEAR), - transforms.CenterCrop(args.resolution) if args.center_crop else transforms.RandomCrop(args.resolution), - transforms.RandomHorizontalFlip() if args.random_flip else transforms.Lambda(lambda x: x), - transforms.ToTensor(), - transforms.Normalize([0.5], [0.5]), - ] - ) - - def transform_images(examples): - images = [augmentations(image.convert("RGB")) for image in examples["image"]] - return {"input": images} - - logger.info(f"Dataset size: {len(dataset)}") - - dataset.set_transform(transform_images) - train_dataloader = torch.utils.data.DataLoader( - dataset, batch_size=args.train_batch_size, shuffle=True, num_workers=args.dataloader_num_workers - ) - - # Initialize the learning rate scheduler - lr_scheduler = get_scheduler( - args.lr_scheduler, - optimizer=optimizer, - num_warmup_steps=args.lr_warmup_steps * args.gradient_accumulation_steps, - num_training_steps=(len(train_dataloader) * args.num_epochs), - ) - - # Prepare everything with our `accelerator`. - model, optimizer, train_dataloader, lr_scheduler = accelerator.prepare( - model, optimizer, train_dataloader, lr_scheduler - ) - - if args.use_ema: - ema_model.to(accelerator.device) - - # We need to initialize the trackers we use, and also store our configuration. - # The trackers initializes automatically on the main process. - if accelerator.is_main_process: - run = os.path.split(__file__)[-1].split(".")[0] - accelerator.init_trackers(run) - - model = ORTModule(model) - - total_batch_size = args.train_batch_size * accelerator.num_processes * args.gradient_accumulation_steps - num_update_steps_per_epoch = math.ceil(len(train_dataloader) / args.gradient_accumulation_steps) - max_train_steps = args.num_epochs * num_update_steps_per_epoch - - logger.info("***** Running training *****") - logger.info(f" Num examples = {len(dataset)}") - logger.info(f" Num Epochs = {args.num_epochs}") - logger.info(f" Instantaneous batch size per device = {args.train_batch_size}") - logger.info(f" Total train batch size (w. parallel, distributed & accumulation) = {total_batch_size}") - logger.info(f" Gradient Accumulation steps = {args.gradient_accumulation_steps}") - logger.info(f" Total optimization steps = {max_train_steps}") - - global_step = 0 - first_epoch = 0 - - # Potentially load in the weights and states from a previous save - if args.resume_from_checkpoint: - if args.resume_from_checkpoint != "latest": - path = os.path.basename(args.resume_from_checkpoint) - else: - # Get the most recent checkpoint - dirs = os.listdir(args.output_dir) - dirs = [d for d in dirs if d.startswith("checkpoint")] - dirs = sorted(dirs, key=lambda x: int(x.split("-")[1])) - path = dirs[-1] if len(dirs) > 0 else None - - if path is None: - accelerator.print( - f"Checkpoint '{args.resume_from_checkpoint}' does not exist. Starting a new training run." - ) - args.resume_from_checkpoint = None - else: - accelerator.print(f"Resuming from checkpoint {path}") - accelerator.load_state(os.path.join(args.output_dir, path)) - global_step = int(path.split("-")[1]) - - resume_global_step = global_step * args.gradient_accumulation_steps - first_epoch = global_step // num_update_steps_per_epoch - resume_step = resume_global_step % (num_update_steps_per_epoch * args.gradient_accumulation_steps) - - # Train! - for epoch in range(first_epoch, args.num_epochs): - model.train() - progress_bar = tqdm(total=num_update_steps_per_epoch, disable=not accelerator.is_local_main_process) - progress_bar.set_description(f"Epoch {epoch}") - for step, batch in enumerate(train_dataloader): - # Skip steps until we reach the resumed step - if args.resume_from_checkpoint and epoch == first_epoch and step < resume_step: - if step % args.gradient_accumulation_steps == 0: - progress_bar.update(1) - continue - - clean_images = batch["input"] - # Sample noise that we'll add to the images - noise = torch.randn( - clean_images.shape, dtype=(torch.float32 if args.mixed_precision == "no" else torch.float16) - ).to(clean_images.device) - bsz = clean_images.shape[0] - # Sample a random timestep for each image - timesteps = torch.randint( - 0, noise_scheduler.config.num_train_timesteps, (bsz,), device=clean_images.device - ).long() - - # Add noise to the clean images according to the noise magnitude at each timestep - # (this is the forward diffusion process) - noisy_images = noise_scheduler.add_noise(clean_images, noise, timesteps) - - with accelerator.accumulate(model): - # Predict the noise residual - model_output = model(noisy_images, timesteps, return_dict=False)[0] - - if args.prediction_type == "epsilon": - loss = F.mse_loss(model_output, noise) # this could have different weights! - elif args.prediction_type == "sample": - alpha_t = _extract_into_tensor( - noise_scheduler.alphas_cumprod, timesteps, (clean_images.shape[0], 1, 1, 1) - ) - snr_weights = alpha_t / (1 - alpha_t) - loss = snr_weights * F.mse_loss( - model_output, clean_images, reduction="none" - ) # use SNR weighting from distillation paper - loss = loss.mean() - else: - raise ValueError(f"Unsupported prediction type: {args.prediction_type}") - - accelerator.backward(loss) - - if accelerator.sync_gradients: - accelerator.clip_grad_norm_(model.parameters(), 1.0) - optimizer.step() - lr_scheduler.step() - optimizer.zero_grad() - - # Checks if the accelerator has performed an optimization step behind the scenes - if accelerator.sync_gradients: - if args.use_ema: - ema_model.step(model.parameters()) - progress_bar.update(1) - global_step += 1 - - if global_step % args.checkpointing_steps == 0: - if accelerator.is_main_process: - save_path = os.path.join(args.output_dir, f"checkpoint-{global_step}") - accelerator.save_state(save_path) - logger.info(f"Saved state to {save_path}") - - logs = {"loss": loss.detach().item(), "lr": lr_scheduler.get_last_lr()[0], "step": global_step} - if args.use_ema: - logs["ema_decay"] = ema_model.cur_decay_value - progress_bar.set_postfix(**logs) - accelerator.log(logs, step=global_step) - progress_bar.close() - - accelerator.wait_for_everyone() - - # Generate sample images for visual inspection - if accelerator.is_main_process: - if epoch % args.save_images_epochs == 0 or epoch == args.num_epochs - 1: - unet = accelerator.unwrap_model(model) - - if args.use_ema: - ema_model.store(unet.parameters()) - ema_model.copy_to(unet.parameters()) - - pipeline = DDPMPipeline( - unet=unet, - scheduler=noise_scheduler, - ) - - generator = torch.Generator(device=pipeline.device).manual_seed(0) - # run pipeline in inference (sample random noise and denoise) - images = pipeline( - generator=generator, - batch_size=args.eval_batch_size, - num_inference_steps=args.ddpm_num_inference_steps, - output_type="numpy", - ).images - - if args.use_ema: - ema_model.restore(unet.parameters()) - - # denormalize the images and save to tensorboard - images_processed = (images * 255).round().astype("uint8") - - if args.logger == "tensorboard": - if is_accelerate_version(">=", "0.17.0.dev0"): - tracker = accelerator.get_tracker("tensorboard", unwrap=True) - else: - tracker = accelerator.get_tracker("tensorboard") - tracker.add_images("test_samples", images_processed.transpose(0, 3, 1, 2), epoch) - elif args.logger == "wandb": - # Upcoming `log_images` helper coming in https://github.com/huggingface/accelerate/pull/962/files - accelerator.get_tracker("wandb").log( - {"test_samples": [wandb.Image(img) for img in images_processed], "epoch": epoch}, - step=global_step, - ) - - if epoch % args.save_model_epochs == 0 or epoch == args.num_epochs - 1: - # save the model - unet = accelerator.unwrap_model(model) - - if args.use_ema: - ema_model.store(unet.parameters()) - ema_model.copy_to(unet.parameters()) - - pipeline = DDPMPipeline( - unet=unet, - scheduler=noise_scheduler, - ) - - pipeline.save_pretrained(args.output_dir) - - if args.use_ema: - ema_model.restore(unet.parameters()) - - if args.push_to_hub: - repo.push_to_hub(commit_message=f"Epoch {epoch}", blocking=False) - - accelerator.end_training() - - -if __name__ == "__main__": - args = parse_args() - main(args) diff --git a/spaces/pasinic/White-box-Cartoon/README.md b/spaces/pasinic/White-box-Cartoon/README.md deleted file mode 100644 index 9860239cf42c94e385faaaa75a85311e010d64f7..0000000000000000000000000000000000000000 --- a/spaces/pasinic/White-box-Cartoon/README.md +++ /dev/null @@ -1,15 +0,0 @@ ---- -python_version: 3.7 -title: White Box Cartoonization -emoji: 📚 -colorFrom: purple -colorTo: green -sdk: gradio -sdk_version: 2.9.4 -app_file: app.py -pinned: false -license: apache-2.0 -duplicated_from: hylee/White-box-Cartoonization ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces#reference diff --git a/spaces/pinkq/Newbing/src/lib/bots/bing/tts.ts b/spaces/pinkq/Newbing/src/lib/bots/bing/tts.ts deleted file mode 100644 index cd10b7d1d7581bf9cf46ff6755fcca550c558c9b..0000000000000000000000000000000000000000 --- a/spaces/pinkq/Newbing/src/lib/bots/bing/tts.ts +++ /dev/null @@ -1,82 +0,0 @@ -import { sleep } from './utils' - -const synth = window.speechSynthesis - -export class TTS { - currentText = '' - speakText = '' - private controller = new AbortController() - speaking = false - get isSpeaking() { - return this.speaking - } - finished = false - constructor() {} - abort = () => { - this.controller.abort() - } - - reset = () => { - this.speaking = false - this.finished = true - this.currentText = '' - this.speakText = '' - this.abort() - } - - speak = (text: string) => { - if (!synth || text?.trim()?.length < 2) { - return - } - this.currentText = text.replace(/[^\u4e00-\u9fa5_a-zA-Z0-9,。?,:;\.,:]+/g, '') - this.finished = false - this.loop() - } - - private async doSpeek() { - return new Promise((resolve) => { - const endIndex = this.finished ? this.currentText.length : - Math.max( - this.currentText.lastIndexOf('。'), - this.currentText.lastIndexOf(';'), - this.currentText.lastIndexOf('、'), - this.currentText.lastIndexOf('?'), - this.currentText.lastIndexOf('\n') - ) - const startIndex = this.speakText.length ? Math.max(0, this.currentText.lastIndexOf(this.speakText) + this.speakText.length) : 0 - - if (startIndex >= endIndex) { - return resolve(true) - } - const text = this.currentText.slice(startIndex, endIndex) - this.speakText = text - const utterThis = new SpeechSynthesisUtterance(text) - this.controller.signal.onabort = () => { - synth.cancel() - this.finished = true - resolve(false) - } - - utterThis.onend = function (event) { - resolve(true) - } - - utterThis.onerror = function (event) { - resolve(false) - } - - const voice = synth.getVoices().find(v => v.name.includes('Microsoft Yunxi Online')) ?? null - utterThis.voice = voice - synth.speak(utterThis) - }) - } - - private async loop() { - if (this.speaking) return - this.speaking = true - while(!this.finished) { - await Promise.all([sleep(1000), this.doSpeek()]) - } - this.speaking = false - } -} diff --git a/spaces/pknez/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/idna/codec.py b/spaces/pknez/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/idna/codec.py deleted file mode 100644 index 1ca9ba62c208527b796b49306f4b8c95eb868a51..0000000000000000000000000000000000000000 --- a/spaces/pknez/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/idna/codec.py +++ /dev/null @@ -1,112 +0,0 @@ -from .core import encode, decode, alabel, ulabel, IDNAError -import codecs -import re -from typing import Tuple, Optional - -_unicode_dots_re = re.compile('[\u002e\u3002\uff0e\uff61]') - -class Codec(codecs.Codec): - - def encode(self, data: str, errors: str = 'strict') -> Tuple[bytes, int]: - if errors != 'strict': - raise IDNAError('Unsupported error handling \"{}\"'.format(errors)) - - if not data: - return b"", 0 - - return encode(data), len(data) - - def decode(self, data: bytes, errors: str = 'strict') -> Tuple[str, int]: - if errors != 'strict': - raise IDNAError('Unsupported error handling \"{}\"'.format(errors)) - - if not data: - return '', 0 - - return decode(data), len(data) - -class IncrementalEncoder(codecs.BufferedIncrementalEncoder): - def _buffer_encode(self, data: str, errors: str, final: bool) -> Tuple[str, int]: # type: ignore - if errors != 'strict': - raise IDNAError('Unsupported error handling \"{}\"'.format(errors)) - - if not data: - return "", 0 - - labels = _unicode_dots_re.split(data) - trailing_dot = '' - if labels: - if not labels[-1]: - trailing_dot = '.' - del labels[-1] - elif not final: - # Keep potentially unfinished label until the next call - del labels[-1] - if labels: - trailing_dot = '.' - - result = [] - size = 0 - for label in labels: - result.append(alabel(label)) - if size: - size += 1 - size += len(label) - - # Join with U+002E - result_str = '.'.join(result) + trailing_dot # type: ignore - size += len(trailing_dot) - return result_str, size - -class IncrementalDecoder(codecs.BufferedIncrementalDecoder): - def _buffer_decode(self, data: str, errors: str, final: bool) -> Tuple[str, int]: # type: ignore - if errors != 'strict': - raise IDNAError('Unsupported error handling \"{}\"'.format(errors)) - - if not data: - return ('', 0) - - labels = _unicode_dots_re.split(data) - trailing_dot = '' - if labels: - if not labels[-1]: - trailing_dot = '.' - del labels[-1] - elif not final: - # Keep potentially unfinished label until the next call - del labels[-1] - if labels: - trailing_dot = '.' - - result = [] - size = 0 - for label in labels: - result.append(ulabel(label)) - if size: - size += 1 - size += len(label) - - result_str = '.'.join(result) + trailing_dot - size += len(trailing_dot) - return (result_str, size) - - -class StreamWriter(Codec, codecs.StreamWriter): - pass - - -class StreamReader(Codec, codecs.StreamReader): - pass - - -def getregentry() -> codecs.CodecInfo: - # Compatibility as a search_function for codecs.register() - return codecs.CodecInfo( - name='idna', - encode=Codec().encode, # type: ignore - decode=Codec().decode, # type: ignore - incrementalencoder=IncrementalEncoder, - incrementaldecoder=IncrementalDecoder, - streamwriter=StreamWriter, - streamreader=StreamReader, - ) diff --git a/spaces/pknez/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/rich/filesize.py b/spaces/pknez/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/rich/filesize.py deleted file mode 100644 index 99f118e20103174993b865cfb43ac6b6e00296a4..0000000000000000000000000000000000000000 --- a/spaces/pknez/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/rich/filesize.py +++ /dev/null @@ -1,89 +0,0 @@ -# coding: utf-8 -"""Functions for reporting filesizes. Borrowed from https://github.com/PyFilesystem/pyfilesystem2 - -The functions declared in this module should cover the different -use cases needed to generate a string representation of a file size -using several different units. Since there are many standards regarding -file size units, three different functions have been implemented. - -See Also: - * `Wikipedia: Binary prefix <https://en.wikipedia.org/wiki/Binary_prefix>`_ - -""" - -__all__ = ["decimal"] - -from typing import Iterable, List, Optional, Tuple - - -def _to_str( - size: int, - suffixes: Iterable[str], - base: int, - *, - precision: Optional[int] = 1, - separator: Optional[str] = " ", -) -> str: - if size == 1: - return "1 byte" - elif size < base: - return "{:,} bytes".format(size) - - for i, suffix in enumerate(suffixes, 2): # noqa: B007 - unit = base**i - if size < unit: - break - return "{:,.{precision}f}{separator}{}".format( - (base * size / unit), - suffix, - precision=precision, - separator=separator, - ) - - -def pick_unit_and_suffix(size: int, suffixes: List[str], base: int) -> Tuple[int, str]: - """Pick a suffix and base for the given size.""" - for i, suffix in enumerate(suffixes): - unit = base**i - if size < unit * base: - break - return unit, suffix - - -def decimal( - size: int, - *, - precision: Optional[int] = 1, - separator: Optional[str] = " ", -) -> str: - """Convert a filesize in to a string (powers of 1000, SI prefixes). - - In this convention, ``1000 B = 1 kB``. - - This is typically the format used to advertise the storage - capacity of USB flash drives and the like (*256 MB* meaning - actually a storage capacity of more than *256 000 000 B*), - or used by **Mac OS X** since v10.6 to report file sizes. - - Arguments: - int (size): A file size. - int (precision): The number of decimal places to include (default = 1). - str (separator): The string to separate the value from the units (default = " "). - - Returns: - `str`: A string containing a abbreviated file size and units. - - Example: - >>> filesize.decimal(30000) - '30.0 kB' - >>> filesize.decimal(30000, precision=2, separator="") - '30.00kB' - - """ - return _to_str( - size, - ("kB", "MB", "GB", "TB", "PB", "EB", "ZB", "YB"), - 1000, - precision=precision, - separator=separator, - ) diff --git a/spaces/pknez/face-swap-docker/mynewshinyroop/Lib/site-packages/setuptools/_vendor/tomli/_re.py b/spaces/pknez/face-swap-docker/mynewshinyroop/Lib/site-packages/setuptools/_vendor/tomli/_re.py deleted file mode 100644 index 994bb7493fd92865e6ab87c277ba5741b44c31a9..0000000000000000000000000000000000000000 --- a/spaces/pknez/face-swap-docker/mynewshinyroop/Lib/site-packages/setuptools/_vendor/tomli/_re.py +++ /dev/null @@ -1,107 +0,0 @@ -# SPDX-License-Identifier: MIT -# SPDX-FileCopyrightText: 2021 Taneli Hukkinen -# Licensed to PSF under a Contributor Agreement. - -from __future__ import annotations - -from datetime import date, datetime, time, timedelta, timezone, tzinfo -from functools import lru_cache -import re -from typing import Any - -from ._types import ParseFloat - -# E.g. -# - 00:32:00.999999 -# - 00:32:00 -_TIME_RE_STR = r"([01][0-9]|2[0-3]):([0-5][0-9]):([0-5][0-9])(?:\.([0-9]{1,6})[0-9]*)?" - -RE_NUMBER = re.compile( - r""" -0 -(?: - x[0-9A-Fa-f](?:_?[0-9A-Fa-f])* # hex - | - b[01](?:_?[01])* # bin - | - o[0-7](?:_?[0-7])* # oct -) -| -[+-]?(?:0|[1-9](?:_?[0-9])*) # dec, integer part -(?P<floatpart> - (?:\.[0-9](?:_?[0-9])*)? # optional fractional part - (?:[eE][+-]?[0-9](?:_?[0-9])*)? # optional exponent part -) -""", - flags=re.VERBOSE, -) -RE_LOCALTIME = re.compile(_TIME_RE_STR) -RE_DATETIME = re.compile( - rf""" -([0-9]{{4}})-(0[1-9]|1[0-2])-(0[1-9]|[12][0-9]|3[01]) # date, e.g. 1988-10-27 -(?: - [Tt ] - {_TIME_RE_STR} - (?:([Zz])|([+-])([01][0-9]|2[0-3]):([0-5][0-9]))? # optional time offset -)? -""", - flags=re.VERBOSE, -) - - -def match_to_datetime(match: re.Match) -> datetime | date: - """Convert a `RE_DATETIME` match to `datetime.datetime` or `datetime.date`. - - Raises ValueError if the match does not correspond to a valid date - or datetime. - """ - ( - year_str, - month_str, - day_str, - hour_str, - minute_str, - sec_str, - micros_str, - zulu_time, - offset_sign_str, - offset_hour_str, - offset_minute_str, - ) = match.groups() - year, month, day = int(year_str), int(month_str), int(day_str) - if hour_str is None: - return date(year, month, day) - hour, minute, sec = int(hour_str), int(minute_str), int(sec_str) - micros = int(micros_str.ljust(6, "0")) if micros_str else 0 - if offset_sign_str: - tz: tzinfo | None = cached_tz( - offset_hour_str, offset_minute_str, offset_sign_str - ) - elif zulu_time: - tz = timezone.utc - else: # local date-time - tz = None - return datetime(year, month, day, hour, minute, sec, micros, tzinfo=tz) - - -@lru_cache(maxsize=None) -def cached_tz(hour_str: str, minute_str: str, sign_str: str) -> timezone: - sign = 1 if sign_str == "+" else -1 - return timezone( - timedelta( - hours=sign * int(hour_str), - minutes=sign * int(minute_str), - ) - ) - - -def match_to_localtime(match: re.Match) -> time: - hour_str, minute_str, sec_str, micros_str = match.groups() - micros = int(micros_str.ljust(6, "0")) if micros_str else 0 - return time(int(hour_str), int(minute_str), int(sec_str), micros) - - -def match_to_number(match: re.Match, parse_float: ParseFloat) -> Any: - if match.group("floatpart"): - return parse_float(match.group()) - return int(match.group(), 0) diff --git a/spaces/pleonova/multi-label-summary-text/models.py b/spaces/pleonova/multi-label-summary-text/models.py deleted file mode 100644 index 96ba37b74db73caf7dec72ec01803fcbddd3263f..0000000000000000000000000000000000000000 --- a/spaces/pleonova/multi-label-summary-text/models.py +++ /dev/null @@ -1,89 +0,0 @@ -import torch -from transformers import AutoTokenizer, AutoModelForSequenceClassification, pipeline -import streamlit as st -from keybert import KeyBERT -import re - - -# Reference: https://discuss.huggingface.co/t/summarization-on-long-documents/920/7 -def create_nest_sentences(document:str, token_max_length = 1024): - nested = [] - sent = [] - length = 0 - tokenizer = AutoTokenizer.from_pretrained('facebook/bart-large-mnli') - - for sentence in re.split(r'(?<=[^A-Z].[.?]) +(?=[A-Z])', document.replace("\n", ' ')): - tokens_in_sentence = tokenizer(str(sentence), truncation=False, padding=False)[0] # hugging face transformer tokenizer - length += len(tokens_in_sentence) - if length < token_max_length: - sent.append(sentence) - else: - nested.append(sent) - sent = [sentence] - length = 0 - if sent: - nested.append(sent) - return nested - -# Reference: https://github.com/MaartenGr/KeyBERT -@st.cache_resource -def load_keyword_model(): - kw_model = KeyBERT() - return kw_model - -def keyword_gen(kw_model, sequence:str): - keywords = kw_model.extract_keywords(sequence, - keyphrase_ngram_range=(1, 1), - stop_words='english', - use_mmr=True, - diversity=0.5, - top_n=10) - return keywords - -# Reference: https://huggingface.co/facebook/bart-large-mnli -@st.cache_resource -def load_summary_model(): - model_name = "facebook/bart-large-cnn" - summarizer = pipeline(task='summarization', model=model_name) - return summarizer - -# def load_summary_model(): -# model_name = "facebook/bart-large-mnli" -# tokenizer = BartTokenizer.from_pretrained(model_name) -# model = BartForConditionalGeneration.from_pretrained(model_name) -# summarizer = pipeline(task='summarization', model=model, tokenizer=tokenizer, framework='pt') -# return summarizer - -def summarizer_gen(summarizer, sequence:str, maximum_tokens:int, minimum_tokens:int): - output = summarizer(sequence, - num_beams=4, - length_penalty=2.0, - max_length=maximum_tokens, - min_length=minimum_tokens, - do_sample=False, - early_stopping = True, - no_repeat_ngram_size=3) - return output[0].get('summary_text') - -# # Reference: https://www.datatrigger.org/post/nlp_hugging_face/ -# # Custom summarization pipeline (to handle long articles) -# def summarize(text, minimum_length_of_summary = 100): -# # Tokenize and truncate -# inputs = tokenizer_bart([text], truncation=True, max_length=1024, return_tensors='pt').to('cuda') -# # Generate summary -# summary_ids = model_bart.generate(inputs['input_ids'], num_beams=4, min_length = minimum_length_of_summary, max_length=400, early_stopping=True) -# # Untokenize -# return([tokenizer_bart.decode(g, skip_special_tokens=True, clean_up_tokenization_spaces=False) for g in summary_ids][0]) - -# Reference: https://huggingface.co/spaces/team-zero-shot-nli/zero-shot-nli/blob/main/utils.py -@st.cache_resource -def load_model(): - model_name = "facebook/bart-large-mnli" - tokenizer = AutoTokenizer.from_pretrained(model_name) - model = AutoModelForSequenceClassification.from_pretrained(model_name) - classifier = pipeline(task='zero-shot-classification', model=model, tokenizer=tokenizer, framework='pt') - return classifier - -def classifier_zero(classifier, sequence:str, labels:list, multi_class:bool): - outputs = classifier(sequence, labels, multi_label=multi_class) - return outputs['labels'], outputs['scores'] diff --git a/spaces/pragneshbarik/ikigai-chat/utils.py b/spaces/pragneshbarik/ikigai-chat/utils.py deleted file mode 100644 index 694eef2f957acd02febe1a387f524f0f01e8cb54..0000000000000000000000000000000000000000 --- a/spaces/pragneshbarik/ikigai-chat/utils.py +++ /dev/null @@ -1,17 +0,0 @@ -import os -from dotenv import load_dotenv -from sentence_transformers import SentenceTransformer -load_dotenv() - -API_TOKEN = os.getenv('HF_TOKEN') - -API_URL = "https://api-inference.huggingface.co/models/all-distilroberta-v1" -headers = {"Authorization": f"Bearer {API_TOKEN}"} - - -def generate_text_embeddings(texts): - model = SentenceTransformer('all-distilroberta-v1') - - return model.encode(texts) - -# Example usage: diff --git a/spaces/prikmmo9/finetuned_diffusion/style.css b/spaces/prikmmo9/finetuned_diffusion/style.css deleted file mode 100644 index 9bfa78cc983f84693cf7cbab1e3bfd0e0d36c944..0000000000000000000000000000000000000000 --- a/spaces/prikmmo9/finetuned_diffusion/style.css +++ /dev/null @@ -1,24 +0,0 @@ -.finetuned-diffusion-div div{ - display:inline-flex; - align-items:center; - gap:.8rem; - font-size:1.75rem -} -.finetuned-diffusion-div div h1{ - font-weight:900; - margin-bottom:7px -} -.finetuned-diffusion-div p{ - margin-bottom:10px; - font-size:94% -} -a{ - text-decoration:underline -} -.tabs{ - margin-top:0; - margin-bottom:0 -} -#gallery{ - min-height:20rem -} diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/gradio/components/dataframe.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/gradio/components/dataframe.py deleted file mode 100644 index 5f7f6505a5d04f2a03ba5732a080a4cdae41ccaa..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/gradio/components/dataframe.py +++ /dev/null @@ -1,304 +0,0 @@ -"""gr.Dataframe() component""" - -from __future__ import annotations - -import warnings -from typing import Any, Callable, Dict, List, Literal, Optional - -import numpy as np -import pandas as pd -import semantic_version -from gradio_client.documentation import document, set_documentation_group -from pandas.io.formats.style import Styler - -from gradio.components import Component -from gradio.data_classes import GradioModel -from gradio.events import Events - - -class DataframeData(GradioModel): - headers: List[str] - data: List[List[Any]] - metadata: Optional[ - Dict[str, List[Any]] - ] = None # Optional[Dict[str, List[Any]]] = None - - -set_documentation_group("component") - - -@document() -class Dataframe(Component): - """ - Accepts or displays 2D input through a spreadsheet-like component for dataframes. - Preprocessing: passes the uploaded spreadsheet data as a {pandas.DataFrame}, {numpy.array}, or {List[List]} depending on `type` - Postprocessing: expects a {pandas.DataFrame}, {pandas.Styler}, {numpy.array}, {List[List]}, {List}, a {Dict} with keys `data` (and optionally `headers`), or {str} path to a csv, which is rendered in the spreadsheet. - Examples-format: a {str} filepath to a csv with data, a pandas dataframe, or a list of lists (excluding headers) where each sublist is a row of data. - Demos: filter_records, matrix_transpose, tax_calculator - """ - - EVENTS = [Events.change, Events.input, Events.select] - - data_model = DataframeData - - def __init__( - self, - value: pd.DataFrame - | Styler - | np.ndarray - | list - | list[list] - | dict - | str - | Callable - | None = None, - *, - headers: list[str] | None = None, - row_count: int | tuple[int, str] = (1, "dynamic"), - col_count: int | tuple[int, str] | None = None, - datatype: str | list[str] = "str", - type: Literal["pandas", "numpy", "array"] = "pandas", - latex_delimiters: list[dict[str, str | bool]] | None = None, - label: str | None = None, - show_label: bool | None = None, - every: float | None = None, - height: int = 500, - scale: int | None = None, - min_width: int = 160, - interactive: bool | None = None, - visible: bool = True, - elem_id: str | None = None, - elem_classes: list[str] | str | None = None, - render: bool = True, - wrap: bool = False, - line_breaks: bool = True, - column_widths: list[str | int] | None = None, - ): - """ - Parameters: - value: Default value to display in the DataFrame. If a Styler is provided, it will be used to set the displayed value in the DataFrame (e.g. to set precision of numbers) if the `interactive` is False. If a Callable function is provided, the function will be called whenever the app loads to set the initial value of the component. - headers: List of str header names. If None, no headers are shown. - row_count: Limit number of rows for input and decide whether user can create new rows. The first element of the tuple is an `int`, the row count; the second should be 'fixed' or 'dynamic', the new row behaviour. If an `int` is passed the rows default to 'dynamic' - col_count: Limit number of columns for input and decide whether user can create new columns. The first element of the tuple is an `int`, the number of columns; the second should be 'fixed' or 'dynamic', the new column behaviour. If an `int` is passed the columns default to 'dynamic' - datatype: Datatype of values in sheet. Can be provided per column as a list of strings, or for the entire sheet as a single string. Valid datatypes are "str", "number", "bool", "date", and "markdown". - type: Type of value to be returned by component. "pandas" for pandas dataframe, "numpy" for numpy array, or "array" for a Python list of lists. - label: The label for this component. Appears above the component and is also used as the header if there are a table of examples for this component. If None and used in a `gr.Interface`, the label will be the name of the parameter this component is assigned to. - latex_delimiters: A list of dicts of the form {"left": open delimiter (str), "right": close delimiter (str), "display": whether to display in newline (bool)} that will be used to render LaTeX expressions. If not provided, `latex_delimiters` is set to `[{ "left": "$$", "right": "$$", "display": True }]`, so only expressions enclosed in $$ delimiters will be rendered as LaTeX, and in a new line. Pass in an empty list to disable LaTeX rendering. For more information, see the [KaTeX documentation](https://katex.org/docs/autorender.html). Only applies to columns whose datatype is "markdown". - label: The label for this component. Appears above the component and is also used as the header if there are a table of examples for this component. If None and used in a `gr.Interface`, the label will be the name of the parameter this component is assigned to. - show_label: if True, will display label. - every: If `value` is a callable, run the function 'every' number of seconds while the client connection is open. Has no effect otherwise. Queue must be enabled. The event can be accessed (e.g. to cancel it) via this component's .load_event attribute. - height: The maximum height of the dataframe, in pixels. If more rows are created than can fit in the height, a scrollbar will appear. - scale: relative width compared to adjacent Components in a Row. For example, if Component A has scale=2, and Component B has scale=1, A will be twice as wide as B. Should be an integer. - min_width: minimum pixel width, will wrap if not sufficient screen space to satisfy this value. If a certain scale value results in this Component being narrower than min_width, the min_width parameter will be respected first. - interactive: if True, will allow users to edit the dataframe; if False, can only be used to display data. If not provided, this is inferred based on whether the component is used as an input or output. - visible: If False, component will be hidden. - elem_id: An optional string that is assigned as the id of this component in the HTML DOM. Can be used for targeting CSS styles. - elem_classes: An optional list of strings that are assigned as the classes of this component in the HTML DOM. Can be used for targeting CSS styles. - render: If False, component will not render be rendered in the Blocks context. Should be used if the intention is to assign event listeners now but render the component later. - wrap: If True, the text in table cells will wrap when appropriate. If False and the `column_width` parameter is not set, the column widths will expand based on the cell contents and the table may need to be horizontally scrolled. If `column_width` is set, then any overflow text will be hidden. - line_breaks: If True (default), will enable Github-flavored Markdown line breaks in chatbot messages. If False, single new lines will be ignored. Only applies for columns of type "markdown." - column_widths: An optional list representing the width of each column. The elements of the list should be in the format "100px" (ints are also accepted and converted to pixel values) or "10%". If not provided, the column widths will be automatically determined based on the content of the cells. Setting this parameter will cause the browser to try to fit the table within the page width. - """ - self.wrap = wrap - self.row_count = self.__process_counts(row_count) - self.col_count = self.__process_counts( - col_count, len(headers) if headers else 3 - ) - - self.__validate_headers(headers, self.col_count[0]) - - self.headers = ( - headers - if headers is not None - else [str(i) for i in (range(1, self.col_count[0] + 1))] - ) - self.datatype = ( - datatype if isinstance(datatype, list) else [datatype] * self.col_count[0] - ) - valid_types = ["pandas", "numpy", "array"] - if type not in valid_types: - raise ValueError( - f"Invalid value for parameter `type`: {type}. Please choose from one of: {valid_types}" - ) - self.type = type - values = { - "str": "", - "number": 0, - "bool": False, - "date": "01/01/1970", - "markdown": "", - "html": "", - } - column_dtypes = ( - [datatype] * self.col_count[0] if isinstance(datatype, str) else datatype - ) - self.empty_input = { - "headers": self.headers, - "data": [ - [values[c] for c in column_dtypes] for _ in range(self.row_count[0]) - ], - "metadata": None, - } - - if latex_delimiters is None: - latex_delimiters = [{"left": "$$", "right": "$$", "display": True}] - self.latex_delimiters = latex_delimiters - self.height = height - self.line_breaks = line_breaks - self.column_widths = [ - w if isinstance(w, str) else f"{w}px" for w in (column_widths or []) - ] - super().__init__( - label=label, - every=every, - show_label=show_label, - scale=scale, - min_width=min_width, - interactive=interactive, - visible=visible, - elem_id=elem_id, - elem_classes=elem_classes, - render=render, - value=value, - ) - - def preprocess(self, payload: DataframeData) -> pd.DataFrame | np.ndarray | list: - if self.type == "pandas": - if payload.headers is not None: - return pd.DataFrame(payload.data, columns=payload.headers) - else: - return pd.DataFrame(payload.data) - if self.type == "numpy": - return np.array(payload.data) - elif self.type == "array": - return payload.data - else: - raise ValueError( - "Unknown type: " - + str(self.type) - + ". Please choose from: 'pandas', 'numpy', 'array'." - ) - - def postprocess( - self, - value: pd.DataFrame - | Styler - | np.ndarray - | list - | list[list] - | dict - | str - | None, - ) -> DataframeData | dict: - if value is None: - return self.postprocess(self.empty_input) - if isinstance(value, dict): - return value - if isinstance(value, (str, pd.DataFrame)): - if isinstance(value, str): - value = pd.read_csv(value) # type: ignore - return DataframeData( - headers=list(value.columns), # type: ignore - data=value.to_dict(orient="split")["data"], # type: ignore - ) - elif isinstance(value, Styler): - if semantic_version.Version(pd.__version__) < semantic_version.Version( - "1.5.0" - ): - raise ValueError( - "Styler objects are only supported in pandas version 1.5.0 or higher. Please try: `pip install --upgrade pandas` to use this feature." - ) - if self.interactive: - warnings.warn( - "Cannot display Styler object in interactive mode. Will display as a regular pandas dataframe instead." - ) - df: pd.DataFrame = value.data # type: ignore - return DataframeData( - headers=list(df.columns), - data=df.to_dict(orient="split")["data"], # type: ignore - metadata=self.__extract_metadata(value), - ) - elif isinstance(value, (str, pd.DataFrame)): - df = pd.read_csv(value) if isinstance(value, str) else value # type: ignore - return DataframeData( - headers=list(df.columns), - data=df.to_dict(orient="split")["data"], # type: ignore - ) - elif isinstance(value, (np.ndarray, list)): - if len(value) == 0: - return self.postprocess([[]]) - if isinstance(value, np.ndarray): - value = value.tolist() - if not isinstance(value, list): - raise ValueError("output cannot be converted to list") - - _headers = self.headers - if len(self.headers) < len(value[0]): - _headers: list[str] = [ - *self.headers, - *[str(i) for i in range(len(self.headers) + 1, len(value[0]) + 1)], - ] - elif len(self.headers) > len(value[0]): - _headers = self.headers[: len(value[0])] - - return DataframeData(headers=_headers, data=value) - else: - raise ValueError("Cannot process value as a Dataframe") - - @staticmethod - def __get_cell_style(cell_id: str, cell_styles: list[dict]) -> str: - styles_for_cell = [] - for style in cell_styles: - if cell_id in style.get("selectors", []): - styles_for_cell.extend(style.get("props", [])) - styles_str = "; ".join([f"{prop}: {value}" for prop, value in styles_for_cell]) - return styles_str - - @staticmethod - def __extract_metadata(df: Styler) -> dict[str, list[list]]: - metadata = {"display_value": [], "styling": []} - style_data = df._compute()._translate(None, None) # type: ignore - cell_styles = style_data.get("cellstyle", []) - for i in range(len(style_data["body"])): - metadata["display_value"].append([]) - metadata["styling"].append([]) - for j in range(len(style_data["body"][i])): - cell_type = style_data["body"][i][j]["type"] - if cell_type != "td": - continue - display_value = style_data["body"][i][j]["display_value"] - cell_id = style_data["body"][i][j]["id"] - styles_str = Dataframe.__get_cell_style(cell_id, cell_styles) - metadata["display_value"][i].append(display_value) - metadata["styling"][i].append(styles_str) - return metadata - - @staticmethod - def __process_counts(count, default=3) -> tuple[int, str]: - if count is None: - return (default, "dynamic") - if type(count) == int or type(count) == float: - return (int(count), "dynamic") - else: - return count - - @staticmethod - def __validate_headers(headers: list[str] | None, col_count: int): - if headers is not None and len(headers) != col_count: - raise ValueError( - f"The length of the headers list must be equal to the col_count int.\n" - f"The column count is set to {col_count} but `headers` has {len(headers)} items. " - f"Check the values passed to `col_count` and `headers`." - ) - - def as_example(self, input_data: pd.DataFrame | np.ndarray | str | None): - if input_data is None: - return "" - elif isinstance(input_data, pd.DataFrame): - return input_data.head(n=5).to_dict(orient="split")["data"] # type: ignore - elif isinstance(input_data, np.ndarray): - return input_data.tolist() - return input_data - - def example_inputs(self) -> Any: - return {"headers": ["a", "b"], "data": [["foo", "bar"]]} diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/jsonschema_specifications/__init__.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/jsonschema_specifications/__init__.py deleted file mode 100644 index 508eb614d7d92ff1b8e1d271db696af7bf03e783..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/jsonschema_specifications/__init__.py +++ /dev/null @@ -1,13 +0,0 @@ -""" -The JSON Schema meta-schemas and vocabularies, exposed as a Registry. -""" -from referencing import Registry as _Registry -from referencing.jsonschema import SchemaRegistry as _SchemaRegistry - -from jsonschema_specifications._core import _schemas - -#: A `referencing.jsonschema.SchemaRegistry` containing all of the official -#: meta-schemas and vocabularies. -REGISTRY: _SchemaRegistry = (_schemas() @ _Registry()).crawl() - -__all__ = ["REGISTRY"] diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/markupsafe/_speedups.c b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/markupsafe/_speedups.c deleted file mode 100644 index 3c463fb82d53e9a9616acfbbece0eb3be6d0d5e7..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/markupsafe/_speedups.c +++ /dev/null @@ -1,320 +0,0 @@ -#include <Python.h> - -static PyObject* markup; - -static int -init_constants(void) -{ - PyObject *module; - - /* import markup type so that we can mark the return value */ - module = PyImport_ImportModule("markupsafe"); - if (!module) - return 0; - markup = PyObject_GetAttrString(module, "Markup"); - Py_DECREF(module); - - return 1; -} - -#define GET_DELTA(inp, inp_end, delta) \ - while (inp < inp_end) { \ - switch (*inp++) { \ - case '"': \ - case '\'': \ - case '&': \ - delta += 4; \ - break; \ - case '<': \ - case '>': \ - delta += 3; \ - break; \ - } \ - } - -#define DO_ESCAPE(inp, inp_end, outp) \ - { \ - Py_ssize_t ncopy = 0; \ - while (inp < inp_end) { \ - switch (*inp) { \ - case '"': \ - memcpy(outp, inp-ncopy, sizeof(*outp)*ncopy); \ - outp += ncopy; ncopy = 0; \ - *outp++ = '&'; \ - *outp++ = '#'; \ - *outp++ = '3'; \ - *outp++ = '4'; \ - *outp++ = ';'; \ - break; \ - case '\'': \ - memcpy(outp, inp-ncopy, sizeof(*outp)*ncopy); \ - outp += ncopy; ncopy = 0; \ - *outp++ = '&'; \ - *outp++ = '#'; \ - *outp++ = '3'; \ - *outp++ = '9'; \ - *outp++ = ';'; \ - break; \ - case '&': \ - memcpy(outp, inp-ncopy, sizeof(*outp)*ncopy); \ - outp += ncopy; ncopy = 0; \ - *outp++ = '&'; \ - *outp++ = 'a'; \ - *outp++ = 'm'; \ - *outp++ = 'p'; \ - *outp++ = ';'; \ - break; \ - case '<': \ - memcpy(outp, inp-ncopy, sizeof(*outp)*ncopy); \ - outp += ncopy; ncopy = 0; \ - *outp++ = '&'; \ - *outp++ = 'l'; \ - *outp++ = 't'; \ - *outp++ = ';'; \ - break; \ - case '>': \ - memcpy(outp, inp-ncopy, sizeof(*outp)*ncopy); \ - outp += ncopy; ncopy = 0; \ - *outp++ = '&'; \ - *outp++ = 'g'; \ - *outp++ = 't'; \ - *outp++ = ';'; \ - break; \ - default: \ - ncopy++; \ - } \ - inp++; \ - } \ - memcpy(outp, inp-ncopy, sizeof(*outp)*ncopy); \ - } - -static PyObject* -escape_unicode_kind1(PyUnicodeObject *in) -{ - Py_UCS1 *inp = PyUnicode_1BYTE_DATA(in); - Py_UCS1 *inp_end = inp + PyUnicode_GET_LENGTH(in); - Py_UCS1 *outp; - PyObject *out; - Py_ssize_t delta = 0; - - GET_DELTA(inp, inp_end, delta); - if (!delta) { - Py_INCREF(in); - return (PyObject*)in; - } - - out = PyUnicode_New(PyUnicode_GET_LENGTH(in) + delta, - PyUnicode_IS_ASCII(in) ? 127 : 255); - if (!out) - return NULL; - - inp = PyUnicode_1BYTE_DATA(in); - outp = PyUnicode_1BYTE_DATA(out); - DO_ESCAPE(inp, inp_end, outp); - return out; -} - -static PyObject* -escape_unicode_kind2(PyUnicodeObject *in) -{ - Py_UCS2 *inp = PyUnicode_2BYTE_DATA(in); - Py_UCS2 *inp_end = inp + PyUnicode_GET_LENGTH(in); - Py_UCS2 *outp; - PyObject *out; - Py_ssize_t delta = 0; - - GET_DELTA(inp, inp_end, delta); - if (!delta) { - Py_INCREF(in); - return (PyObject*)in; - } - - out = PyUnicode_New(PyUnicode_GET_LENGTH(in) + delta, 65535); - if (!out) - return NULL; - - inp = PyUnicode_2BYTE_DATA(in); - outp = PyUnicode_2BYTE_DATA(out); - DO_ESCAPE(inp, inp_end, outp); - return out; -} - - -static PyObject* -escape_unicode_kind4(PyUnicodeObject *in) -{ - Py_UCS4 *inp = PyUnicode_4BYTE_DATA(in); - Py_UCS4 *inp_end = inp + PyUnicode_GET_LENGTH(in); - Py_UCS4 *outp; - PyObject *out; - Py_ssize_t delta = 0; - - GET_DELTA(inp, inp_end, delta); - if (!delta) { - Py_INCREF(in); - return (PyObject*)in; - } - - out = PyUnicode_New(PyUnicode_GET_LENGTH(in) + delta, 1114111); - if (!out) - return NULL; - - inp = PyUnicode_4BYTE_DATA(in); - outp = PyUnicode_4BYTE_DATA(out); - DO_ESCAPE(inp, inp_end, outp); - return out; -} - -static PyObject* -escape_unicode(PyUnicodeObject *in) -{ - if (PyUnicode_READY(in)) - return NULL; - - switch (PyUnicode_KIND(in)) { - case PyUnicode_1BYTE_KIND: - return escape_unicode_kind1(in); - case PyUnicode_2BYTE_KIND: - return escape_unicode_kind2(in); - case PyUnicode_4BYTE_KIND: - return escape_unicode_kind4(in); - } - assert(0); /* shouldn't happen */ - return NULL; -} - -static PyObject* -escape(PyObject *self, PyObject *text) -{ - static PyObject *id_html; - PyObject *s = NULL, *rv = NULL, *html; - - if (id_html == NULL) { - id_html = PyUnicode_InternFromString("__html__"); - if (id_html == NULL) { - return NULL; - } - } - - /* we don't have to escape integers, bools or floats */ - if (PyLong_CheckExact(text) || - PyFloat_CheckExact(text) || PyBool_Check(text) || - text == Py_None) - return PyObject_CallFunctionObjArgs(markup, text, NULL); - - /* if the object has an __html__ method that performs the escaping */ - html = PyObject_GetAttr(text ,id_html); - if (html) { - s = PyObject_CallObject(html, NULL); - Py_DECREF(html); - if (s == NULL) { - return NULL; - } - /* Convert to Markup object */ - rv = PyObject_CallFunctionObjArgs(markup, (PyObject*)s, NULL); - Py_DECREF(s); - return rv; - } - - /* otherwise make the object unicode if it isn't, then escape */ - PyErr_Clear(); - if (!PyUnicode_Check(text)) { - PyObject *unicode = PyObject_Str(text); - if (!unicode) - return NULL; - s = escape_unicode((PyUnicodeObject*)unicode); - Py_DECREF(unicode); - } - else - s = escape_unicode((PyUnicodeObject*)text); - - /* convert the unicode string into a markup object. */ - rv = PyObject_CallFunctionObjArgs(markup, (PyObject*)s, NULL); - Py_DECREF(s); - return rv; -} - - -static PyObject* -escape_silent(PyObject *self, PyObject *text) -{ - if (text != Py_None) - return escape(self, text); - return PyObject_CallFunctionObjArgs(markup, NULL); -} - - -static PyObject* -soft_str(PyObject *self, PyObject *s) -{ - if (!PyUnicode_Check(s)) - return PyObject_Str(s); - Py_INCREF(s); - return s; -} - - -static PyMethodDef module_methods[] = { - { - "escape", - (PyCFunction)escape, - METH_O, - "Replace the characters ``&``, ``<``, ``>``, ``'``, and ``\"`` in" - " the string with HTML-safe sequences. Use this if you need to display" - " text that might contain such characters in HTML.\n\n" - "If the object has an ``__html__`` method, it is called and the" - " return value is assumed to already be safe for HTML.\n\n" - ":param s: An object to be converted to a string and escaped.\n" - ":return: A :class:`Markup` string with the escaped text.\n" - }, - { - "escape_silent", - (PyCFunction)escape_silent, - METH_O, - "Like :func:`escape` but treats ``None`` as the empty string." - " Useful with optional values, as otherwise you get the string" - " ``'None'`` when the value is ``None``.\n\n" - ">>> escape(None)\n" - "Markup('None')\n" - ">>> escape_silent(None)\n" - "Markup('')\n" - }, - { - "soft_str", - (PyCFunction)soft_str, - METH_O, - "Convert an object to a string if it isn't already. This preserves" - " a :class:`Markup` string rather than converting it back to a basic" - " string, so it will still be marked as safe and won't be escaped" - " again.\n\n" - ">>> value = escape(\"<User 1>\")\n" - ">>> value\n" - "Markup('&lt;User 1&gt;')\n" - ">>> escape(str(value))\n" - "Markup('&amp;lt;User 1&amp;gt;')\n" - ">>> escape(soft_str(value))\n" - "Markup('&lt;User 1&gt;')\n" - }, - {NULL, NULL, 0, NULL} /* Sentinel */ -}; - -static struct PyModuleDef module_definition = { - PyModuleDef_HEAD_INIT, - "markupsafe._speedups", - NULL, - -1, - module_methods, - NULL, - NULL, - NULL, - NULL -}; - -PyMODINIT_FUNC -PyInit__speedups(void) -{ - if (!init_constants()) - return NULL; - - return PyModule_Create(&module_definition); -} diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/matplotlib/backends/backend_qtagg.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/matplotlib/backends/backend_qtagg.py deleted file mode 100644 index 256e50a3d1c3305be1b5fd5e9da9cb9807f4ec1f..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/matplotlib/backends/backend_qtagg.py +++ /dev/null @@ -1,86 +0,0 @@ -""" -Render to qt from agg. -""" - -import ctypes - -from matplotlib.transforms import Bbox - -from .qt_compat import QT_API, QtCore, QtGui -from .backend_agg import FigureCanvasAgg -from .backend_qt import _BackendQT, FigureCanvasQT -from .backend_qt import ( # noqa: F401 # pylint: disable=W0611 - FigureManagerQT, NavigationToolbar2QT) - - -class FigureCanvasQTAgg(FigureCanvasAgg, FigureCanvasQT): - - def paintEvent(self, event): - """ - Copy the image from the Agg canvas to the qt.drawable. - - In Qt, all drawing should be done inside of here when a widget is - shown onscreen. - """ - self._draw_idle() # Only does something if a draw is pending. - - # If the canvas does not have a renderer, then give up and wait for - # FigureCanvasAgg.draw(self) to be called. - if not hasattr(self, 'renderer'): - return - - painter = QtGui.QPainter(self) - try: - # See documentation of QRect: bottom() and right() are off - # by 1, so use left() + width() and top() + height(). - rect = event.rect() - # scale rect dimensions using the screen dpi ratio to get - # correct values for the Figure coordinates (rather than - # QT5's coords) - width = rect.width() * self.device_pixel_ratio - height = rect.height() * self.device_pixel_ratio - left, top = self.mouseEventCoords(rect.topLeft()) - # shift the "top" by the height of the image to get the - # correct corner for our coordinate system - bottom = top - height - # same with the right side of the image - right = left + width - # create a buffer using the image bounding box - bbox = Bbox([[left, bottom], [right, top]]) - buf = memoryview(self.copy_from_bbox(bbox)) - - if QT_API == "PyQt6": - from PyQt6 import sip - ptr = int(sip.voidptr(buf)) - else: - ptr = buf - - painter.eraseRect(rect) # clear the widget canvas - qimage = QtGui.QImage(ptr, buf.shape[1], buf.shape[0], - QtGui.QImage.Format.Format_RGBA8888) - qimage.setDevicePixelRatio(self.device_pixel_ratio) - # set origin using original QT coordinates - origin = QtCore.QPoint(rect.left(), rect.top()) - painter.drawImage(origin, qimage) - # Adjust the buf reference count to work around a memory - # leak bug in QImage under PySide. - if QT_API == "PySide2" and QtCore.__version_info__ < (5, 12): - ctypes.c_long.from_address(id(buf)).value = 1 - - self._draw_rect_callback(painter) - finally: - painter.end() - - def print_figure(self, *args, **kwargs): - super().print_figure(*args, **kwargs) - # In some cases, Qt will itself trigger a paint event after closing the file - # save dialog. When that happens, we need to be sure that the internal canvas is - # re-drawn. However, if the user is using an automatically-chosen Qt backend but - # saving with a different backend (such as pgf), we do not want to trigger a - # full draw in Qt, so just set the flag for next time. - self._draw_pending = True - - -@_BackendQT.export -class _BackendQTAgg(_BackendQT): - FigureCanvas = FigureCanvasQTAgg diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/numpy/core/include/numpy/__ufunc_api.h b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/numpy/core/include/numpy/__ufunc_api.h deleted file mode 100644 index e2efe29e8635f6b75d888a107804ec49ad025f86..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/numpy/core/include/numpy/__ufunc_api.h +++ /dev/null @@ -1,314 +0,0 @@ - -#ifdef _UMATHMODULE - -extern NPY_NO_EXPORT PyTypeObject PyUFunc_Type; - -extern NPY_NO_EXPORT PyTypeObject PyUFunc_Type; - -NPY_NO_EXPORT PyObject * PyUFunc_FromFuncAndData \ - (PyUFuncGenericFunction *, void **, char *, int, int, int, int, const char *, const char *, int); -NPY_NO_EXPORT int PyUFunc_RegisterLoopForType \ - (PyUFuncObject *, int, PyUFuncGenericFunction, const int *, void *); -NPY_NO_EXPORT int PyUFunc_GenericFunction \ - (PyUFuncObject *NPY_UNUSED(ufunc), PyObject *NPY_UNUSED(args), PyObject *NPY_UNUSED(kwds), PyArrayObject **NPY_UNUSED(op)); -NPY_NO_EXPORT void PyUFunc_f_f_As_d_d \ - (char **, npy_intp const *, npy_intp const *, void *); -NPY_NO_EXPORT void PyUFunc_d_d \ - (char **, npy_intp const *, npy_intp const *, void *); -NPY_NO_EXPORT void PyUFunc_f_f \ - (char **, npy_intp const *, npy_intp const *, void *); -NPY_NO_EXPORT void PyUFunc_g_g \ - (char **, npy_intp const *, npy_intp const *, void *); -NPY_NO_EXPORT void PyUFunc_F_F_As_D_D \ - (char **, npy_intp const *, npy_intp const *, void *); -NPY_NO_EXPORT void PyUFunc_F_F \ - (char **, npy_intp const *, npy_intp const *, void *); -NPY_NO_EXPORT void PyUFunc_D_D \ - (char **, npy_intp const *, npy_intp const *, void *); -NPY_NO_EXPORT void PyUFunc_G_G \ - (char **, npy_intp const *, npy_intp const *, void *); -NPY_NO_EXPORT void PyUFunc_O_O \ - (char **, npy_intp const *, npy_intp const *, void *); -NPY_NO_EXPORT void PyUFunc_ff_f_As_dd_d \ - (char **, npy_intp const *, npy_intp const *, void *); -NPY_NO_EXPORT void PyUFunc_ff_f \ - (char **, npy_intp const *, npy_intp const *, void *); -NPY_NO_EXPORT void PyUFunc_dd_d \ - (char **, npy_intp const *, npy_intp const *, void *); -NPY_NO_EXPORT void PyUFunc_gg_g \ - (char **, npy_intp const *, npy_intp const *, void *); -NPY_NO_EXPORT void PyUFunc_FF_F_As_DD_D \ - (char **, npy_intp const *, npy_intp const *, void *); -NPY_NO_EXPORT void PyUFunc_DD_D \ - (char **, npy_intp const *, npy_intp const *, void *); -NPY_NO_EXPORT void PyUFunc_FF_F \ - (char **, npy_intp const *, npy_intp const *, void *); -NPY_NO_EXPORT void PyUFunc_GG_G \ - (char **, npy_intp const *, npy_intp const *, void *); -NPY_NO_EXPORT void PyUFunc_OO_O \ - (char **, npy_intp const *, npy_intp const *, void *); -NPY_NO_EXPORT void PyUFunc_O_O_method \ - (char **, npy_intp const *, npy_intp const *, void *); -NPY_NO_EXPORT void PyUFunc_OO_O_method \ - (char **, npy_intp const *, npy_intp const *, void *); -NPY_NO_EXPORT void PyUFunc_On_Om \ - (char **, npy_intp const *, npy_intp const *, void *); -NPY_NO_EXPORT int PyUFunc_GetPyValues \ - (char *, int *, int *, PyObject **); -NPY_NO_EXPORT int PyUFunc_checkfperr \ - (int, PyObject *, int *); -NPY_NO_EXPORT void PyUFunc_clearfperr \ - (void); -NPY_NO_EXPORT int PyUFunc_getfperr \ - (void); -NPY_NO_EXPORT int PyUFunc_handlefperr \ - (int, PyObject *, int, int *); -NPY_NO_EXPORT int PyUFunc_ReplaceLoopBySignature \ - (PyUFuncObject *, PyUFuncGenericFunction, const int *, PyUFuncGenericFunction *); -NPY_NO_EXPORT PyObject * PyUFunc_FromFuncAndDataAndSignature \ - (PyUFuncGenericFunction *, void **, char *, int, int, int, int, const char *, const char *, int, const char *); -NPY_NO_EXPORT int PyUFunc_SetUsesArraysAsData \ - (void **NPY_UNUSED(data), size_t NPY_UNUSED(i)); -NPY_NO_EXPORT void PyUFunc_e_e \ - (char **, npy_intp const *, npy_intp const *, void *); -NPY_NO_EXPORT void PyUFunc_e_e_As_f_f \ - (char **, npy_intp const *, npy_intp const *, void *); -NPY_NO_EXPORT void PyUFunc_e_e_As_d_d \ - (char **, npy_intp const *, npy_intp const *, void *); -NPY_NO_EXPORT void PyUFunc_ee_e \ - (char **, npy_intp const *, npy_intp const *, void *); -NPY_NO_EXPORT void PyUFunc_ee_e_As_ff_f \ - (char **, npy_intp const *, npy_intp const *, void *); -NPY_NO_EXPORT void PyUFunc_ee_e_As_dd_d \ - (char **, npy_intp const *, npy_intp const *, void *); -NPY_NO_EXPORT int PyUFunc_DefaultTypeResolver \ - (PyUFuncObject *, NPY_CASTING, PyArrayObject **, PyObject *, PyArray_Descr **); -NPY_NO_EXPORT int PyUFunc_ValidateCasting \ - (PyUFuncObject *, NPY_CASTING, PyArrayObject **, PyArray_Descr **); -NPY_NO_EXPORT int PyUFunc_RegisterLoopForDescr \ - (PyUFuncObject *, PyArray_Descr *, PyUFuncGenericFunction, PyArray_Descr **, void *); -NPY_NO_EXPORT PyObject * PyUFunc_FromFuncAndDataAndSignatureAndIdentity \ - (PyUFuncGenericFunction *, void **, char *, int, int, int, int, const char *, const char *, const int, const char *, PyObject *); - -#else - -#if defined(PY_UFUNC_UNIQUE_SYMBOL) -#define PyUFunc_API PY_UFUNC_UNIQUE_SYMBOL -#endif - -#if defined(NO_IMPORT) || defined(NO_IMPORT_UFUNC) -extern void **PyUFunc_API; -#else -#if defined(PY_UFUNC_UNIQUE_SYMBOL) -void **PyUFunc_API; -#else -static void **PyUFunc_API=NULL; -#endif -#endif - -#define PyUFunc_Type (*(PyTypeObject *)PyUFunc_API[0]) -#define PyUFunc_FromFuncAndData \ - (*(PyObject * (*)(PyUFuncGenericFunction *, void **, char *, int, int, int, int, const char *, const char *, int)) \ - PyUFunc_API[1]) -#define PyUFunc_RegisterLoopForType \ - (*(int (*)(PyUFuncObject *, int, PyUFuncGenericFunction, const int *, void *)) \ - PyUFunc_API[2]) -#define PyUFunc_GenericFunction \ - (*(int (*)(PyUFuncObject *NPY_UNUSED(ufunc), PyObject *NPY_UNUSED(args), PyObject *NPY_UNUSED(kwds), PyArrayObject **NPY_UNUSED(op))) \ - PyUFunc_API[3]) -#define PyUFunc_f_f_As_d_d \ - (*(void (*)(char **, npy_intp const *, npy_intp const *, void *)) \ - PyUFunc_API[4]) -#define PyUFunc_d_d \ - (*(void (*)(char **, npy_intp const *, npy_intp const *, void *)) \ - PyUFunc_API[5]) -#define PyUFunc_f_f \ - (*(void (*)(char **, npy_intp const *, npy_intp const *, void *)) \ - PyUFunc_API[6]) -#define PyUFunc_g_g \ - (*(void (*)(char **, npy_intp const *, npy_intp const *, void *)) \ - PyUFunc_API[7]) -#define PyUFunc_F_F_As_D_D \ - (*(void (*)(char **, npy_intp const *, npy_intp const *, void *)) \ - PyUFunc_API[8]) -#define PyUFunc_F_F \ - (*(void (*)(char **, npy_intp const *, npy_intp const *, void *)) \ - PyUFunc_API[9]) -#define PyUFunc_D_D \ - (*(void (*)(char **, npy_intp const *, npy_intp const *, void *)) \ - PyUFunc_API[10]) -#define PyUFunc_G_G \ - (*(void (*)(char **, npy_intp const *, npy_intp const *, void *)) \ - PyUFunc_API[11]) -#define PyUFunc_O_O \ - (*(void (*)(char **, npy_intp const *, npy_intp const *, void *)) \ - PyUFunc_API[12]) -#define PyUFunc_ff_f_As_dd_d \ - (*(void (*)(char **, npy_intp const *, npy_intp const *, void *)) \ - PyUFunc_API[13]) -#define PyUFunc_ff_f \ - (*(void (*)(char **, npy_intp const *, npy_intp const *, void *)) \ - PyUFunc_API[14]) -#define PyUFunc_dd_d \ - (*(void (*)(char **, npy_intp const *, npy_intp const *, void *)) \ - PyUFunc_API[15]) -#define PyUFunc_gg_g \ - (*(void (*)(char **, npy_intp const *, npy_intp const *, void *)) \ - PyUFunc_API[16]) -#define PyUFunc_FF_F_As_DD_D \ - (*(void (*)(char **, npy_intp const *, npy_intp const *, void *)) \ - PyUFunc_API[17]) -#define PyUFunc_DD_D \ - (*(void (*)(char **, npy_intp const *, npy_intp const *, void *)) \ - PyUFunc_API[18]) -#define PyUFunc_FF_F \ - (*(void (*)(char **, npy_intp const *, npy_intp const *, void *)) \ - PyUFunc_API[19]) -#define PyUFunc_GG_G \ - (*(void (*)(char **, npy_intp const *, npy_intp const *, void *)) \ - PyUFunc_API[20]) -#define PyUFunc_OO_O \ - (*(void (*)(char **, npy_intp const *, npy_intp const *, void *)) \ - PyUFunc_API[21]) -#define PyUFunc_O_O_method \ - (*(void (*)(char **, npy_intp const *, npy_intp const *, void *)) \ - PyUFunc_API[22]) -#define PyUFunc_OO_O_method \ - (*(void (*)(char **, npy_intp const *, npy_intp const *, void *)) \ - PyUFunc_API[23]) -#define PyUFunc_On_Om \ - (*(void (*)(char **, npy_intp const *, npy_intp const *, void *)) \ - PyUFunc_API[24]) -#define PyUFunc_GetPyValues \ - (*(int (*)(char *, int *, int *, PyObject **)) \ - PyUFunc_API[25]) -#define PyUFunc_checkfperr \ - (*(int (*)(int, PyObject *, int *)) \ - PyUFunc_API[26]) -#define PyUFunc_clearfperr \ - (*(void (*)(void)) \ - PyUFunc_API[27]) -#define PyUFunc_getfperr \ - (*(int (*)(void)) \ - PyUFunc_API[28]) -#define PyUFunc_handlefperr \ - (*(int (*)(int, PyObject *, int, int *)) \ - PyUFunc_API[29]) -#define PyUFunc_ReplaceLoopBySignature \ - (*(int (*)(PyUFuncObject *, PyUFuncGenericFunction, const int *, PyUFuncGenericFunction *)) \ - PyUFunc_API[30]) -#define PyUFunc_FromFuncAndDataAndSignature \ - (*(PyObject * (*)(PyUFuncGenericFunction *, void **, char *, int, int, int, int, const char *, const char *, int, const char *)) \ - PyUFunc_API[31]) -#define PyUFunc_SetUsesArraysAsData \ - (*(int (*)(void **NPY_UNUSED(data), size_t NPY_UNUSED(i))) \ - PyUFunc_API[32]) -#define PyUFunc_e_e \ - (*(void (*)(char **, npy_intp const *, npy_intp const *, void *)) \ - PyUFunc_API[33]) -#define PyUFunc_e_e_As_f_f \ - (*(void (*)(char **, npy_intp const *, npy_intp const *, void *)) \ - PyUFunc_API[34]) -#define PyUFunc_e_e_As_d_d \ - (*(void (*)(char **, npy_intp const *, npy_intp const *, void *)) \ - PyUFunc_API[35]) -#define PyUFunc_ee_e \ - (*(void (*)(char **, npy_intp const *, npy_intp const *, void *)) \ - PyUFunc_API[36]) -#define PyUFunc_ee_e_As_ff_f \ - (*(void (*)(char **, npy_intp const *, npy_intp const *, void *)) \ - PyUFunc_API[37]) -#define PyUFunc_ee_e_As_dd_d \ - (*(void (*)(char **, npy_intp const *, npy_intp const *, void *)) \ - PyUFunc_API[38]) -#define PyUFunc_DefaultTypeResolver \ - (*(int (*)(PyUFuncObject *, NPY_CASTING, PyArrayObject **, PyObject *, PyArray_Descr **)) \ - PyUFunc_API[39]) -#define PyUFunc_ValidateCasting \ - (*(int (*)(PyUFuncObject *, NPY_CASTING, PyArrayObject **, PyArray_Descr **)) \ - PyUFunc_API[40]) -#define PyUFunc_RegisterLoopForDescr \ - (*(int (*)(PyUFuncObject *, PyArray_Descr *, PyUFuncGenericFunction, PyArray_Descr **, void *)) \ - PyUFunc_API[41]) - -#if NPY_FEATURE_VERSION >= NPY_1_16_API_VERSION -#define PyUFunc_FromFuncAndDataAndSignatureAndIdentity \ - (*(PyObject * (*)(PyUFuncGenericFunction *, void **, char *, int, int, int, int, const char *, const char *, const int, const char *, PyObject *)) \ - PyUFunc_API[42]) -#endif - -static inline int -_import_umath(void) -{ - PyObject *numpy = PyImport_ImportModule("numpy.core._multiarray_umath"); - PyObject *c_api = NULL; - - if (numpy == NULL) { - PyErr_SetString(PyExc_ImportError, - "numpy.core._multiarray_umath failed to import"); - return -1; - } - c_api = PyObject_GetAttrString(numpy, "_UFUNC_API"); - Py_DECREF(numpy); - if (c_api == NULL) { - PyErr_SetString(PyExc_AttributeError, "_UFUNC_API not found"); - return -1; - } - - if (!PyCapsule_CheckExact(c_api)) { - PyErr_SetString(PyExc_RuntimeError, "_UFUNC_API is not PyCapsule object"); - Py_DECREF(c_api); - return -1; - } - PyUFunc_API = (void **)PyCapsule_GetPointer(c_api, NULL); - Py_DECREF(c_api); - if (PyUFunc_API == NULL) { - PyErr_SetString(PyExc_RuntimeError, "_UFUNC_API is NULL pointer"); - return -1; - } - return 0; -} - -#define import_umath() \ - do {\ - UFUNC_NOFPE\ - if (_import_umath() < 0) {\ - PyErr_Print();\ - PyErr_SetString(PyExc_ImportError,\ - "numpy.core.umath failed to import");\ - return NULL;\ - }\ - } while(0) - -#define import_umath1(ret) \ - do {\ - UFUNC_NOFPE\ - if (_import_umath() < 0) {\ - PyErr_Print();\ - PyErr_SetString(PyExc_ImportError,\ - "numpy.core.umath failed to import");\ - return ret;\ - }\ - } while(0) - -#define import_umath2(ret, msg) \ - do {\ - UFUNC_NOFPE\ - if (_import_umath() < 0) {\ - PyErr_Print();\ - PyErr_SetString(PyExc_ImportError, msg);\ - return ret;\ - }\ - } while(0) - -#define import_ufunc() \ - do {\ - UFUNC_NOFPE\ - if (_import_umath() < 0) {\ - PyErr_Print();\ - PyErr_SetString(PyExc_ImportError,\ - "numpy.core.umath failed to import");\ - }\ - } while(0) - -#endif diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/dtypes/cast/test_construct_ndarray.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/dtypes/cast/test_construct_ndarray.py deleted file mode 100644 index 10085ddde5c8fc719cb4fdd0e212ceb0066c1192..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/dtypes/cast/test_construct_ndarray.py +++ /dev/null @@ -1,30 +0,0 @@ -import numpy as np -import pytest - -import pandas._testing as tm -from pandas.core.construction import sanitize_array - - -@pytest.mark.parametrize( - "values, dtype, expected", - [ - ([1, 2, 3], None, np.array([1, 2, 3], dtype=np.int64)), - (np.array([1, 2, 3]), None, np.array([1, 2, 3])), - (["1", "2", None], None, np.array(["1", "2", None])), - (["1", "2", None], np.dtype("str"), np.array(["1", "2", None])), - ([1, 2, None], np.dtype("str"), np.array(["1", "2", None])), - ], -) -def test_construct_1d_ndarray_preserving_na(values, dtype, expected): - result = sanitize_array(values, index=None, dtype=dtype) - tm.assert_numpy_array_equal(result, expected) - - -@pytest.mark.parametrize("dtype", ["m8[ns]", "M8[ns]"]) -def test_construct_1d_ndarray_preserving_na_datetimelike(dtype): - arr = np.arange(5, dtype=np.int64).view(dtype) - expected = np.array(list(arr), dtype=object) - assert all(isinstance(x, type(arr[0])) for x in expected) - - result = sanitize_array(arr, index=None, dtype=np.dtype(object)) - tm.assert_numpy_array_equal(result, expected) diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/websockets/legacy/__init__.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/websockets/legacy/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git "a/spaces/qingxu98/academic-chatgpt-beta/crazy_functions/\346\211\271\351\207\217\346\200\273\347\273\223PDF\346\226\207\346\241\243pdfminer.py" "b/spaces/qingxu98/academic-chatgpt-beta/crazy_functions/\346\211\271\351\207\217\346\200\273\347\273\223PDF\346\226\207\346\241\243pdfminer.py" deleted file mode 100644 index ffbb05599ef09c9de25334ebeca2eef8022b9aaf..0000000000000000000000000000000000000000 --- "a/spaces/qingxu98/academic-chatgpt-beta/crazy_functions/\346\211\271\351\207\217\346\200\273\347\273\223PDF\346\226\207\346\241\243pdfminer.py" +++ /dev/null @@ -1,160 +0,0 @@ -from toolbox import update_ui -from toolbox import CatchException, report_execption, write_results_to_file -from .crazy_utils import request_gpt_model_in_new_thread_with_ui_alive - -fast_debug = False - -def readPdf(pdfPath): - """ - 读取pdf文件,返回文本内容 - """ - import pdfminer - from pdfminer.pdfparser import PDFParser - from pdfminer.pdfdocument import PDFDocument - from pdfminer.pdfpage import PDFPage, PDFTextExtractionNotAllowed - from pdfminer.pdfinterp import PDFResourceManager, PDFPageInterpreter - from pdfminer.pdfdevice import PDFDevice - from pdfminer.layout import LAParams - from pdfminer.converter import PDFPageAggregator - - fp = open(pdfPath, 'rb') - - # Create a PDF parser object associated with the file object - parser = PDFParser(fp) - - # Create a PDF document object that stores the document structure. - # Password for initialization as 2nd parameter - document = PDFDocument(parser) - # Check if the document allows text extraction. If not, abort. - if not document.is_extractable: - raise PDFTextExtractionNotAllowed - - # Create a PDF resource manager object that stores shared resources. - rsrcmgr = PDFResourceManager() - - # Create a PDF device object. - # device = PDFDevice(rsrcmgr) - - # BEGIN LAYOUT ANALYSIS. - # Set parameters for analysis. - laparams = LAParams( - char_margin=10.0, - line_margin=0.2, - boxes_flow=0.2, - all_texts=False, - ) - # Create a PDF page aggregator object. - device = PDFPageAggregator(rsrcmgr, laparams=laparams) - # Create a PDF interpreter object. - interpreter = PDFPageInterpreter(rsrcmgr, device) - - # loop over all pages in the document - outTextList = [] - for page in PDFPage.create_pages(document): - # read the page into a layout object - interpreter.process_page(page) - layout = device.get_result() - for obj in layout._objs: - if isinstance(obj, pdfminer.layout.LTTextBoxHorizontal): - # print(obj.get_text()) - outTextList.append(obj.get_text()) - - return outTextList - - -def 解析Paper(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt): - import time, glob, os - from bs4 import BeautifulSoup - print('begin analysis on:', file_manifest) - for index, fp in enumerate(file_manifest): - if ".tex" in fp: - with open(fp, 'r', encoding='utf-8', errors='replace') as f: - file_content = f.read() - if ".pdf" in fp.lower(): - file_content = readPdf(fp) - file_content = BeautifulSoup(''.join(file_content), features="lxml").body.text.encode('gbk', 'ignore').decode('gbk') - - prefix = "接下来请你逐文件分析下面的论文文件,概括其内容" if index==0 else "" - i_say = prefix + f'请对下面的文章片段用中文做一个概述,文件名是{os.path.relpath(fp, project_folder)},文章内容是 ```{file_content}```' - i_say_show_user = prefix + f'[{index}/{len(file_manifest)}] 请对下面的文章片段做一个概述: {os.path.abspath(fp)}' - chatbot.append((i_say_show_user, "[Local Message] waiting gpt response.")) - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 - - if not fast_debug: - msg = '正常' - # ** gpt request ** - gpt_say = yield from request_gpt_model_in_new_thread_with_ui_alive( - inputs=i_say, - inputs_show_user=i_say_show_user, - llm_kwargs=llm_kwargs, - chatbot=chatbot, - history=[], - sys_prompt="总结文章。" - ) # 带超时倒计时 - chatbot[-1] = (i_say_show_user, gpt_say) - history.append(i_say_show_user); history.append(gpt_say) - yield from update_ui(chatbot=chatbot, history=history, msg=msg) # 刷新界面 - if not fast_debug: time.sleep(2) - - all_file = ', '.join([os.path.relpath(fp, project_folder) for index, fp in enumerate(file_manifest)]) - i_say = f'根据以上你自己的分析,对全文进行概括,用学术性语言写一段中文摘要,然后再写一段英文摘要(包括{all_file})。' - chatbot.append((i_say, "[Local Message] waiting gpt response.")) - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 - - if not fast_debug: - msg = '正常' - # ** gpt request ** - gpt_say = yield from request_gpt_model_in_new_thread_with_ui_alive( - inputs=i_say, - inputs_show_user=i_say, - llm_kwargs=llm_kwargs, - chatbot=chatbot, - history=history, - sys_prompt="总结文章。" - ) # 带超时倒计时 - chatbot[-1] = (i_say, gpt_say) - history.append(i_say); history.append(gpt_say) - yield from update_ui(chatbot=chatbot, history=history, msg=msg) # 刷新界面 - res = write_results_to_file(history) - chatbot.append(("完成了吗?", res)) - yield from update_ui(chatbot=chatbot, history=history, msg=msg) # 刷新界面 - - - -@CatchException -def 批量总结PDF文档pdfminer(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port): - history = [] # 清空历史,以免输入溢出 - import glob, os - - # 基本信息:功能、贡献者 - chatbot.append([ - "函数插件功能?", - "批量总结PDF文档,此版本使用pdfminer插件,带token约简功能。函数插件贡献者: Euclid-Jie。"]) - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 - - # 尝试导入依赖,如果缺少依赖,则给出安装建议 - try: - import pdfminer, bs4 - except: - report_execption(chatbot, history, - a = f"解析项目: {txt}", - b = f"导入软件依赖失败。使用该模块需要额外依赖,安装方法```pip install --upgrade pdfminer beautifulsoup4```。") - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 - return - if os.path.exists(txt): - project_folder = txt - else: - if txt == "": txt = '空空如也的输入栏' - report_execption(chatbot, history, a = f"解析项目: {txt}", b = f"找不到本地项目或无权访问: {txt}") - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 - return - file_manifest = [f for f in glob.glob(f'{project_folder}/**/*.tex', recursive=True)] + \ - [f for f in glob.glob(f'{project_folder}/**/*.pdf', recursive=True)] # + \ - # [f for f in glob.glob(f'{project_folder}/**/*.cpp', recursive=True)] + \ - # [f for f in glob.glob(f'{project_folder}/**/*.c', recursive=True)] - if len(file_manifest) == 0: - report_execption(chatbot, history, a = f"解析项目: {txt}", b = f"找不到任何.tex或pdf文件: {txt}") - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 - return - yield from 解析Paper(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt) - diff --git "a/spaces/qingxu98/gpt-academic/crazy_functions/\351\253\230\347\272\247\345\212\237\350\203\275\345\207\275\346\225\260\346\250\241\346\235\277.py" "b/spaces/qingxu98/gpt-academic/crazy_functions/\351\253\230\347\272\247\345\212\237\350\203\275\345\207\275\346\225\260\346\250\241\346\235\277.py" deleted file mode 100644 index abcbbc6631fa4abdd0577c5a06bd12c5c2504b09..0000000000000000000000000000000000000000 --- "a/spaces/qingxu98/gpt-academic/crazy_functions/\351\253\230\347\272\247\345\212\237\350\203\275\345\207\275\346\225\260\346\250\241\346\235\277.py" +++ /dev/null @@ -1,29 +0,0 @@ -from toolbox import CatchException, update_ui -from .crazy_utils import request_gpt_model_in_new_thread_with_ui_alive -import datetime -@CatchException -def 高阶功能模板函数(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port): - """ - txt 输入栏用户输入的文本,例如需要翻译的一段话,再例如一个包含了待处理文件的路径 - llm_kwargs gpt模型参数,如温度和top_p等,一般原样传递下去就行 - plugin_kwargs 插件模型的参数,用于灵活调整复杂功能的各种参数 - chatbot 聊天显示框的句柄,用于显示给用户 - history 聊天历史,前情提要 - system_prompt 给gpt的静默提醒 - web_port 当前软件运行的端口号 - """ - history = [] # 清空历史,以免输入溢出 - chatbot.append(("这是什么功能?", "[Local Message] 请注意,您正在调用一个[函数插件]的模板,该函数面向希望实现更多有趣功能的开发者,它可以作为创建新功能函数的模板(该函数只有20多行代码)。此外我们也提供可同步处理大量文件的多线程Demo供您参考。您若希望分享新的功能模组,请不吝PR!")) - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 # 由于请求gpt需要一段时间,我们先及时地做一次界面更新 - for i in range(5): - currentMonth = (datetime.date.today() + datetime.timedelta(days=i)).month - currentDay = (datetime.date.today() + datetime.timedelta(days=i)).day - i_say = f'历史中哪些事件发生在{currentMonth}月{currentDay}日?列举两条并发送相关图片。发送图片时,请使用Markdown,将Unsplash API中的PUT_YOUR_QUERY_HERE替换成描述该事件的一个最重要的单词。' - gpt_say = yield from request_gpt_model_in_new_thread_with_ui_alive( - inputs=i_say, inputs_show_user=i_say, - llm_kwargs=llm_kwargs, chatbot=chatbot, history=[], - sys_prompt="当你想发送一张照片时,请使用Markdown, 并且不要有反斜线, 不要用代码块。使用 Unsplash API (https://source.unsplash.com/1280x720/? < PUT_YOUR_QUERY_HERE >)。" - ) - chatbot[-1] = (i_say, gpt_say) - history.append(i_say);history.append(gpt_say) - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 # 界面更新 \ No newline at end of file diff --git a/spaces/quidiaMuxgu/Expedit-SAM/Adobe Photoshop Elements 2020.1 Crack ((HOT)).md b/spaces/quidiaMuxgu/Expedit-SAM/Adobe Photoshop Elements 2020.1 Crack ((HOT)).md deleted file mode 100644 index 9fcd845f81ed32d1fe46e008aa205072f6a40758..0000000000000000000000000000000000000000 --- a/spaces/quidiaMuxgu/Expedit-SAM/Adobe Photoshop Elements 2020.1 Crack ((HOT)).md +++ /dev/null @@ -1,6 +0,0 @@ -<h2>Adobe Photoshop Elements 2020.1 Crack</h2><br /><p><b><b>DOWNLOAD</b> &#127775; <a href="https://geags.com/2uCsaR">https://geags.com/2uCsaR</a></b></p><br /><br /> - -Oct 29, 2019 Adobe Photoshop Elements 2020 Full Version Free (For Windows & MAC) Adobe Photoshop Elements 2020 v18.0 With Crack + ... 1fdad05405<br /> -<br /> -<br /> -<p></p> diff --git a/spaces/raedeXanto/academic-chatgpt-beta/Autodesk Inventor HSM Pro 2017 Crack 19.md b/spaces/raedeXanto/academic-chatgpt-beta/Autodesk Inventor HSM Pro 2017 Crack 19.md deleted file mode 100644 index b4246e20f74d89d25a87714ac5b793ada4a72168..0000000000000000000000000000000000000000 --- a/spaces/raedeXanto/academic-chatgpt-beta/Autodesk Inventor HSM Pro 2017 Crack 19.md +++ /dev/null @@ -1,93 +0,0 @@ - -<br> - Why you need a crack for Autodesk Inventor HSM Pro 2017? <br> - What are the risks and benefits of using a crack for Autodesk Inventor HSM Pro 2017? | | H2: How to download and install Autodesk Inventor HSM Pro 2017 Crack~ 19? | - Where to find reliable and safe software torrent sites? <br> - How to use a VPN to access blocked torrent sites and protect your privacy? <br> - How to download and install Autodesk Inventor HSM Pro 2017 Crack~ 19 from torrent sites? | | H2: How to use Autodesk Inventor HSM Pro 2017 Crack~ 19? | - How to activate Autodesk Inventor HSM Pro 2017 with the crack? <br> - How to use the toolpath strategies, setup options, post-processing features, and other functions of Autodesk Inventor HSM Pro 2017? <br> - How to troubleshoot common issues and errors with Autodesk Inventor HSM Pro 2017 Crack~ 19? | | H2: Conclusion | - Summary of the main points of the article <br> - Recommendations and tips for using Autodesk Inventor HSM Pro 2017 Crack~ 19 <br> - Call to action and invitation for feedback | | H2: FAQs | - Q1: What is the difference between Inventor HSM Express, Inventor HSM, and Inventor HSM Pro? <br> - Q2: Is Autodesk Inventor HSM Pro 2017 compatible with other versions of Inventor and Inventor LT? <br> - Q3: How can I update my Autodesk Inventor HSM Pro 2017 Crack~ 19 to the latest version? <br> - Q4: What are the system requirements for running Autodesk Inventor HSM Pro 2017 Crack~ 19? <br> - Q5: Where can I find more resources and support for using Autodesk Inventor HSM Pro 2017 Crack~ 19? | Table 2: Article with HTML formatting <h1>Autodesk Inventor HSM Pro 2017 Crack~ 19: What is it and why you need it?</h1> - <p>If you are looking for a powerful and versatile software for computer-aided manufacturing (CAM), you might have heard of <strong>Autodesk Inventor HSM Pro 2017</strong>. This software is designed to help you create high-quality machined parts, reduce machining time, optimize toolpaths, and integrate seamlessly with Autodesk Inventor. However, this software is not cheap, and you might be wondering if there is a way to get it for free or at a lower cost. That's where <strong>Autodesk Inventor HSM Pro 2017 Crack~ 19</strong> comes in.</p> -<h2>Autodesk Inventor HSM Pro 2017 Crack~ 19</h2><br /><p><b><b>DOWNLOAD</b> &#9733;&#9733;&#9733;&#9733;&#9733; <a href="https://tinourl.com/2uL0Zb">https://tinourl.com/2uL0Zb</a></b></p><br /><br /> - <p>In this article, we will explain what Autodesk Inventor HSM Pro 2017 Crack~ 19 is, how to download and install it, how to use it, and what are the risks and benefits of using it. We will also answer some frequently asked questions about this software and provide some tips and recommendations for using it. By the end of this article, you will have a clear idea of whether Autodesk Inventor HSM Pro 2017 Crack~ 19 is worth trying or not.</p> - <h2>Why you need a crack for Autodesk Inventor HSM Pro 2017?</h2> - <p>A crack is a software tool that modifies or bypasses the original code or protection mechanism of another software, usually to unlock its full features or remove its limitations. A crack can also be a modified version of the original software that has been altered to work without a license key or activation code.</p> - <p>The main reason why you might need a crack for Autodesk Inventor HSM Pro 2017 is that this software is very expensive. According to the official website of Autodesk, the price of Autodesk Inventor HSM Pro 2017 is $10,000 per year. This means that you have to pay a hefty amount every year to use this software, which might not be affordable or feasible for many users.</p> - <p>By using a crack for Autodesk Inventor HSM Pro 2017, you can save money <p>and enjoy its full features without any limitations. However, using a crack for Autodesk Inventor HSM Pro 2017 also comes with some risks and drawbacks that you should be aware of.</p> - <h2>What are the risks and benefits of using a crack for Autodesk Inventor HSM Pro 2017?</h2> - <p>Using a crack for Autodesk Inventor HSM Pro 2017 can have both positive and negative consequences, depending on how you use it and where you get it from. Here are some of the pros and cons of using a crack for this software:</p> -<p></p> - <h3>Pros:</h3> - <ul> -<li>You can save money by not paying the annual subscription fee for Autodesk Inventor HSM Pro 2017.</li> -<li>You can access all the features and functions of Autodesk Inventor HSM Pro 2017, including the advanced toolpath strategies, setup options, post-processing features, and integration with Autodesk Inventor.</li> -<li>You can use Autodesk Inventor HSM Pro 2017 offline without requiring an internet connection or a license server.</li> -<li>You can use Autodesk Inventor HSM Pro 2017 on multiple devices without any restrictions.</li> -</ul> - <h3>Cons:</h3> - <ul> -<li>You can expose your computer to malware, viruses, or spyware that might be hidden in the crack file or the torrent site you download it from. This can compromise your security, privacy, and data.</li> -<li>You can violate the terms of service and the intellectual property rights of Autodesk by using a crack for Autodesk Inventor HSM Pro 2017. This can result in legal actions, fines, or penalties from Autodesk or other authorities.</li> -<li>You can miss out on the latest updates, patches, bug fixes, and improvements that Autodesk releases for Autodesk Inventor HSM Pro 2017. This can affect the performance, stability, and compatibility of the software.</li> -<li>You can encounter technical issues, errors, or crashes with Autodesk Inventor HSM Pro 2017 that might not be resolved by Autodesk support or community forums. This can affect your productivity, quality, and efficiency.</li> -</ul> - <p>As you can see, using a crack for Autodesk Inventor HSM Pro 2017 has its advantages and disadvantages. You should weigh them carefully before deciding to use a crack for this software. If you do decide to use a crack for Autodesk Inventor HSM Pro 2017, you should also take some precautions to protect yourself and your computer from potential threats.</p> - <h2>How to download and install Autodesk Inventor HSM Pro 2017 Crack~ 19?</h2> - <p>If you have decided to use a crack for Autodesk Inventor HSM Pro 2017, you will need to follow some steps to download and install it on your computer. Here are the general steps you will need to take:</p> - <ol> -<li>Find a reliable and safe software torrent site that offers Autodesk Inventor HSM Pro 2017 Crack~ 19. You can use the results from `search_web("software torrent sites") -> str` to find some of the best torrent sites for software in 2023 . However, you should always check the comments, ratings, and reviews of the torrent files before downloading them to avoid fake or malicious ones.</li> -<li>Use a VPN to access blocked torrent sites and protect your privacy. Some torrent sites might be blocked in your country or region due to legal or regulatory reasons. To bypass these restrictions, you will need to use a VPN (virtual private network) that masks your IP address and encrypts your traffic. You can use the results from `search_web("VPN for torrenting") -> str` to find some of the best VPNs for torrenting in 2023 . However, you should always choose a reputable and trustworthy VPN provider that has a strict no-logs policy and supports P2P file sharing.</li> -<li>Download and install Autodesk Inventor HSM Pro 2017 Crack~ 19 from torrent sites. Once you have found a good torrent site and a VPN service, you can download the torrent file or magnet link for Autodesk Inventor HSM Pro 2017 Crack~ 19. You will need a torrent client such as uTorrent or BitTorrent to open the torrent file or magnet link and start downloading the software. You should also scan the downloaded files with an antivirus program before installing them to ensure they are clean and safe.</li> -<li>Activate Autodesk Inventor HSM Pro 2017 with the crack. After downloading and installing Autodesk Inventor HSM Pro 2017 Crack~ 19, you will need to activate it with the crack. The crack might be included in the downloaded files or in a separate file. You will need to follow the instructions provided by the crack creator to apply it correctly to the software. Usually, this involves copying and pasting the crack file to the installation folder of Autodesk Inventor HSM Pro 2017, replacing the original file, or running the crack file as an administrator. After applying the crack, you should be able to launch and use Autodesk Inventor HSM Pro 2017 without any license or activation issues.</li> -</ol> - <h2>How to use Autodesk Inventor HSM Pro 2017 Crack~ 19?</h2> - <p>Once you have activated Autodesk Inventor HSM Pro 2017 with the crack, you can start using it for your CAM projects. Here are some of the basic steps you will need to take:</p> - <ol> -<li>Create or open a 3D model in Autodesk Inventor. You can use the built-in modeling tools or import a model from another CAD software. You can also use the parametric design, assembly, and simulation features of Autodesk Inventor to optimize your model for machining.</li> -<li>Switch to the CAM environment in Autodesk Inventor. You can access the CAM environment by clicking on the CAM tab on the ribbon menu. You will see a new set of tools and options for creating and managing your toolpaths.</li> -<li>Define your setup and work coordinate system. You will need to specify the orientation, origin, and stock size of your model in relation to the machine tool. You can use the Setup panel on the CAM tab to create a new setup or edit an existing one.</li> -<li>Select a toolpath strategy and assign tools. You will need to choose a suitable toolpath strategy for each operation you want to perform on your model, such as facing, contouring, drilling, pocketing, etc. You can use the 2D, 3D, or Turning panels on the CAM tab to select a toolpath strategy and assign tools from the tool library. You can also customize the toolpath parameters, such as feed rate, spindle speed, stepover, depth of cut, etc.</li> -<li>Simulate and verify your toolpaths. You can use the Simulate panel on the CAM tab to preview and check your toolpaths for errors, collisions, or interferences. You can also use the Compare panel to compare your model with the stock after machining.</li> -<li>Generate and post-process your NC code. You will need to generate and post-process your NC code for your machine tool controller. You can use the Actions panel on the CAM tab to generate and post-process your NC code. You can also select a post-processor from the list of available ones or create your own custom post-processor.</li> -</ol> - <h2>How to troubleshoot common issues and errors with Autodesk Inventor HSM Pro 2017 Crack~ 19?</h2> - <p>While using Autodesk Inventor HSM Pro 2017 Crack~ 19, you might encounter some common issues and errors that might affect your workflow or results. Here are some of the possible solutions for these issues and errors:</p> - <h3>Issue: The software crashes or freezes frequently.</h3> - <p>Solution: This issue might be caused by insufficient system resources, incompatible hardware drivers, corrupted files, or malware infection. To fix this issue, you can try the following steps:</p> - <ul> -<li>Close any unnecessary programs or processes running in the background that might consume system resources.</li> -<li>Update your hardware drivers to the latest versions that are compatible with Autodesk Inventor HSM Pro 2017.</li> -<li>Repair or reinstall Autodesk Inventor HSM Pro 2017 Crack~ 19 using the installation wizard or setup file.</li> -<li>Scan your computer with an antivirus program and remove any malware or viruses that might be affecting the software.</li> -</ul> - <h3>Error: The software fails to activate or shows an invalid license message.</h3> - <p>Solution: This error might be caused by an incorrect application of the crack, a missing or corrupted crack file, or an interference from an antivirus program or firewall. To fix this error, you can try the following steps:</p> - <ul> -<li>Reapply the crack to Autodesk Inventor HSM Pro 2017 using the correct instructions provided by the crack creator.</li> -<li>Make sure that the crack file is not missing or corrupted by checking its size and checksum.</li> -<li>Disable or whitelist your antivirus program or firewall from blocking or deleting the crack file or interfering with its operation.</li> -</ul> - <h3>Error: The software shows an incompatible version message or fails to open certain files.</h3> - <p>Solution: This error might be caused by a mismatch between the version of Autodesk Inventor HSM Pro 2017 Crack~ 19 and the version of Autodesk Inventor or other CAD software you are using. To fix this error, you can try the following steps:</p> - <ul> -<li>Make sure that you are using Autodesk Inventor HSM Pro 2017 Crack~ 19 that matches the version of Autodesk Inventor or other CAD software you are using. For example, if you are using Autodesk Inventor 2017, you should use Autodesk Inventor HSM Pro 2017 Crack~ 19, not Autodesk Inventor HSM Pro 2018 Crack~ 19 or Autodesk Inventor HSM Pro 2016 Crack~ 19.</li> -<li>Update your Autodesk Inventor HSM Pro 2017 Crack~ 19 to the latest version that is compatible with the latest version of Autodesk Inventor or other CAD software you are using. You can use the results from `search_web("Autodesk Inventor HSM Pro 2017 Crack~ 19 update") -> str` to find the latest updates for this software.</li> -<li>Convert or export your files to a compatible format that can be opened by Autodesk Inventor HSM Pro 2017 Crack~ 19. You can use the File menu or the Export panel on the ribbon menu to convert or export your files to different formats, such as STEP, IGES, STL, DXF, etc.</li> -</ul> - <h2>Conclusion</h2> - <p>Autodesk Inventor HSM Pro 2017 Crack~ 19 is a software tool that allows you to use Autodesk Inventor HSM Pro 2017 for free or at a lower cost. It can help you create high-quality machined parts, reduce machining time, optimize toolpaths, and integrate seamlessly with Autodesk Inventor. However, it also comes with some risks and drawbacks, such as malware infection, legal issues, technical problems, and missing updates. Therefore, you should weigh the pros and cons of using a crack for this software before deciding to use it.</p> - <p>If you do decide to use a crack for Autodesk Inventor HSM Pro 2017, you should follow the steps we have outlined in this article to download and install it, activate it with the crack, use it for your CAM projects, and troubleshoot common issues and errors. You should also take some precautions to protect yourself and your computer from potential threats. We hope that this article has been helpful and informative for you.</p> - <p>If you have any questions, comments, or feedback about this article or Autodesk Inventor HSM Pro 2017 Crack~ 19, please feel free to leave them below. We would love to hear from you and help you out. Thank you for reading and happy machining!</p> - <h2>FAQs</h2> - <h3>Q1: What is the difference between Inventor HSM Express, Inventor HSM, and Inventor HSM Pro?</h3> - <p>A1: Inventor HSM Express is a free version of Inventor HSM that offers basic 2.5-axis milling capabilities. Inventor HSM is a paid version of Inventor HSM that offers advanced 2.5-axis milling and turning capabilities. Inventor HSM Pro is the most comprehensive version of Inventor HSM that offers full 3-axis milling and turning capabilities, as well as multi-axis and mill-turn capabilities.</p> - <h3>Q2: Is Autodesk Inventor HSM Pro 2017 compatible with other versions of Inventor and Inventor LT?</h3> - <p>A2: Autodesk Inventor HSM Pro 2017 is compatible with Autodesk Inventor 2017 and Autodesk Inventor LT 2017. However, it is not compatible with other versions of Inventor and Inventor LT, such as 2016 or 2018. You will need to use the corresponding version of Autodesk Inventor HSM Pro that matches your version of Inventor or Inventor LT.</p> - <h3>Q3: How can I update my Autodesk Inventor HSM Pro 2017 Crack~ 19 to the latest version?</h3> - <p>A3: You can update your Autodesk Inventor HSM Pro 2017 Crack~ 19 to the latest version by downloading and installing the latest update from a reliable torrent site. However, you will also need to reapply the crack to the updated software to activate it again. Alternatively, you can use the results from `search_web("Autodesk Inventor HSM Pro 2017 Crack~ 19 update") -> str` to find the latest updates for this software.</p> - <h3>Q4: What are the system requirements for running Autodesk Inventor HSM Pro 2017 Crack~ 19?</h3> - <p>A4: The system requirements for running Autodesk Inventor HSM Pro 2017 Crack~ 19 are the same as the system requirements for running Autodesk Inventor HSM Pro 2017. According to the official website of Autodesk, the minimum system requirements for running Autodesk Inventor HSM Pro 2017 are as follows: - Operating System: Microsoft Windows 10, 8.1, or 7 (64-bit only) - Processor: Intel or AMD 64-bit processor, 2 GHz or faster - Memory: 8 GB RAM or more - Disk Space: 40 GB free disk space or more - Graphics: Microsoft DirectX 10 or higher compatible graphics card - Display: 1280 x 1024 or higher resolution monitor - Internet: Internet connection for web install, license activation, and online services However, these are only the minimum requirements, and you might need higher specifications to run the software smoothly and efficiently, especially for complex or large models. You can use the results from `search_web("Autodesk Inventor HSM Pro 2017 system requirements") -> str` to find more information and recommendations about the system requirements for this software.</p> - <h3>Q5: Where can I find more resources and support for using Autodesk Inventor HSM Pro 2017 Crack~ 19?</h3> - <p>A5: You can find more resources and support for using Autodesk Inventor HSM Pro 2017 Crack~ 19 from various sources, such as:</p> - <ul> -<li>The official website of Autodesk, where you can find product information, tutorials, videos, forums, blogs, and help documentation for Autodesk Inventor HSM Pro 2017.</li> -<li>The crack creator's website or channel, where you can find instructions, tips, updates, feedback, and support for Autodesk Inventor HSM Pro 2017 Crack~ 19.</li> -<li>The online communities of Autodesk Inventor HSM Pro users, where you can find discussions, questions, answers, tips, tricks, and best practices for using Autodesk Inventor HSM Pro 2017 Crack~ 19.</li> -<li>The online courses or books on Autodesk Inventor HSM Pro 2017, where you can learn the basics and advanced skills of using Autodesk Inventor HSM Pro 2017 Crack~ 19.</li> -</ul> - <p>However, you should always be careful and cautious when using these resources and support sources, as they might not be official, authorized, or verified by Autodesk. You should also avoid sharing any personal or sensitive information with these sources, as they might not be secure or trustworthy.</p> b2dd77e56b<br /> -<br /> -<br /> \ No newline at end of file diff --git a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Dow Font Vnsimli.shx [TOP].md b/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Dow Font Vnsimli.shx [TOP].md deleted file mode 100644 index af61128d65db774c266b21f2b246224d562d0052..0000000000000000000000000000000000000000 --- a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Dow Font Vnsimli.shx [TOP].md +++ /dev/null @@ -1,37 +0,0 @@ -<br /> -<h1>Dow font vnsimli.shx cho autocad - Tất cả những gì bạn cần biết</h1> -<p>Autocad là một phần mềm thiết kế đồ họa rất phổ biến và được sử dụng rộng rãi trong nhiều lĩnh vực khác nhau. Tuy nhiên, khi sử dụng autocad, bạn có thể gặp phải một số vấn đề liên quan đến font chữ, đặc biệt là font vnsimli.shx. Font vnsimli.shx là một font chữ tiếng Việt được sử dụng nhiều trong autocad, nhưng nó không phải là font chữ mặc định của phần mềm. Do đó, bạn cần dow font vnsimli.shx và cài đặt nó vào máy tính để có thể hiển thị và in ấn các bản vẽ autocad một cách chính xác và đẹp mắt.</p> -<h2>Làm sao để dow font vnsimli.shx cho autocad?</h2> -<p>Bạn có thể dow font vnsimli.shx cho autocad thông qua các trang web chia sẻ font chữ miễn phí trên internet. Một số trang web uy tín và an toàn mà bạn có thể tham khảo là:</p> -<h2>dow font vnsimli.shx</h2><br /><p><b><b>DOWNLOAD</b> &ndash;&ndash;&ndash;&ndash;&ndash;>>> <a href="https://urlgoal.com/2uCN3Z">https://urlgoal.com/2uCN3Z</a></b></p><br /><br /> -<ul> -<li><a href="https://cadvn.com/download-font-shx-cho-autocad/">https://cadvn.com/download-font-shx-cho-autocad/</a></li> -<li><a href="https://phanthinh.vn/tai-font-shx-day-du-cho-cad-4250/">https://phanthinh.vn/tai-font-shx-day-du-cho-cad-4250/</a></li> -<li><a href="https://new.c.mi.com/th/post/270961/Dow_Font_Vnsimlishx_Extra_Quality">https://new.c.mi.com/th/post/270961/Dow_Font_Vnsimlishx_Extra_Quality</a></li> -</ul> -<p>Sau khi dow font vnsimli.shx cho autocad, bạn cần giải nén file rar hoặc zip và copy tất cả các file font shx vào thư mục font của autocad. Thường thì thư mục font của autocad có đường dẫn như sau: C:\Program Files\Autodesk\AutoCAD\Fonts. Bạn có thể sử dụng phím tắt Ctrl + A để chọn tất cả các file font shx và phím tắt Ctrl + C để copy chúng. Sau đó, bạn mở thư mục font của autocad và dùng phím tắt Ctrl + V để paste chúng vào. Nếu có thông báo xung đột hoặc trùng lặp file, bạn có thể nhấn Don’t copy để bỏ qua.</p> -<h2>Tại sao bạn nên dow font vnsimli.shx cho autocad?</h2> -<p>Có nhiều lý do để bạn nên dow font vnsimli.shx cho autocad, nhưng chủ yếu là để khắc phục lỗi font chữ trong autocad và để tạo ra các bản vẽ autocad chuyên nghiệp và đẹp mắt. Một số lợi ích cụ thể của việc dow font vnsimli.shx cho autocad là:</p> -<ul> -<li>Bạn sẽ không gặp phải lỗi font chữ khi mở các bản vẽ autocad có sử dụng font vnsimli.shx hoặc khi chuyển đổi các bản vẽ từ phiên bản autocad khác sang phiên bản autocad của bạn.</li> -<li>Bạn sẽ không bị mất thời gian và công sức để tìm kiếm và cài đặt lại font vnsimli.shx mỗi khi bạn cài lại win hoặc ghost lại máy tính.</li> -<li>Bạn sẽ có được các bản vẽ autocad tiếng Việt chính xác và rõ ràng, không bị lỗi ký tự hoặc hiển thị sai dấu.</li> -<li>Bạn sẽ tạo được ấn tượng tốt với khách hàng hoặc đối tác khi gửi hoặc in ấn các bản vẽ autocad tiếng Việt có sử dụng font vnsimli.shx.</li> -</ul> -<h2>Kết luận</h2> -<p>Font vnsimli.shx là một font chữ tiếng Việt quan trọng và phổ biến trong autocad. Bạn nên dow font vnsimli.shx cho autocad để có thể sử dụng phần mềm này một cách hiệu quả và chuyên nghiệp. Bạn có thể dow font vnsimli.shx cho autocad thông qua các trang web miễn phí trên internet và cài đặt nó vào thư mục font của autocad. Việc này sẽ giúp bạn khắc phục lỗi font chữ trong autocad và tạo ra các bản vẽ autocad tiếng Việt đẹp mắt và ấn tượng.</p> -<h2>Các font shx khác ngoài font vnsimli.shx cho autocad</h2> -<p>Ngoài font vnsimli.shx, bạn cũng có thể dow font shx khác cho autocad để sử dụng trong các trường hợp khác nhau. Một số font shx phổ biến và hữu ích mà bạn có thể tham khảo là:</p> -<ul> -<li>Font txt.shx: Đây là font chữ mặc định của autocad, có thể sử dụng trong mọi phiên bản autocad. Font txt.shx có độ rõ nét cao và dễ nhìn, phù hợp cho việc ghi chú và đánh số trên bản vẽ.</li> -<li>Font vntime.shx: Đây là font chữ tiếng Việt cổ điển, có thể sử dụng trong các bản vẽ có yêu cầu về tính chính xác và trang trọng. Font vntime.shx có độ nét mỏng và thanh lịch, phù hợp cho việc ghi tên và thông tin trên bản vẽ.</li> -<li>Font romans.shx: Đây là font chữ tiếng Anh đơn giản và phổ biến, có thể sử dụng trong các bản vẽ quốc tế hoặc có liên quan đến tiếng Anh. Font romans.shx có độ nét đậm và rõ ràng, phù hợp cho việc ghi ký hiệu và kích thước trên bản vẽ.</li> -<li>Font unicode.shx: Đây là font chữ hỗ trợ nhiều ngôn ngữ khác nhau, có thể sử dụng trong các bản vẽ đa dạng và phức tạp. Font unicode.shx có độ nét tương đối và linh hoạt, phù hợp cho việc ghi các thông tin khác nhau trên bản vẽ.</li> -</ul> -<h2>Tổng kết</h2> -<p>Dow font vnsimli.shx cho autocad là một việc làm cần thiết và hữu ích cho người sử dụng phần mềm thiết kế đồ họa này. Bạn có thể dow font vnsimli.shx cho autocad qua các trang web miễn phí trên internet và cài đặt nó vào thư mục font của autocad để khắc phục lỗi font chữ và tạo ra các bản vẽ autocad tiếng Việt chất lượng cao. Bạn cũng có thể dow font shx khác cho autocad để sử dụng trong các trường hợp khác nhau, tùy theo yêu cầu của bản vẽ. Hy vọng bài viết này đã cung cấp cho bạn những thông tin hữu ích và chi tiết về cách dow font vnsimli.shx cho autocad.</p> -<p></p> -<h2>Tổng kết</h2> -<p>Dow font vnsimli.shx cho autocad là một việc làm cần thiết và hữu ích cho người sử dụng phần mềm thiết kế đồ họa này. Bạn có thể dow font vnsimli.shx cho autocad qua các trang web miễn phí trên internet và cài đặt nó vào thư mục font của autocad để khắc phục lỗi font chữ và tạo ra các bản vẽ autocad tiếng Việt chất lượng cao. Bạn cũng có thể dow font shx khác cho autocad để sử dụng trong các trường hợp khác nhau, tùy theo yêu cầu của bản vẽ. Hy vọng bài viết này đã cung cấp cho bạn những thông tin hữu ích và chi tiết về cách dow font vnsimli.shx cho autocad.</p> 3cee63e6c2<br /> -<br /> -<br /> \ No newline at end of file diff --git a/spaces/rockeycoss/Prompt-Segment-Anything-Demo/mmdet/core/data_structures/general_data.py b/spaces/rockeycoss/Prompt-Segment-Anything-Demo/mmdet/core/data_structures/general_data.py deleted file mode 100644 index 978fdfd7460dda68bc1bfc81cdd9aef493d445b3..0000000000000000000000000000000000000000 --- a/spaces/rockeycoss/Prompt-Segment-Anything-Demo/mmdet/core/data_structures/general_data.py +++ /dev/null @@ -1,336 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import copy - -import numpy as np -import torch - -from mmdet.utils.util_mixins import NiceRepr - - -class GeneralData(NiceRepr): - """A general data structure of OpenMMlab. - - A data structure that stores the meta information, - the annotations of the images or the model predictions, - which can be used in communication between components. - - The attributes in `GeneralData` are divided into two parts, - the `meta_info_fields` and the `data_fields` respectively. - - - `meta_info_fields`: Usually contains the - information about the image such as filename, - image_shape, pad_shape, etc. All attributes in - it are immutable once set, - but the user can add new meta information with - `set_meta_info` function, all information can be accessed - with methods `meta_info_keys`, `meta_info_values`, - `meta_info_items`. - - - `data_fields`: Annotations or model predictions are - stored. The attributes can be accessed or modified by - dict-like or object-like operations, such as - `.` , `[]`, `in`, `del`, `pop(str)` `get(str)`, `keys()`, - `values()`, `items()`. Users can also apply tensor-like methods - to all obj:`torch.Tensor` in the `data_fileds`, - such as `.cuda()`, `.cpu()`, `.numpy()`, `device`, `.to()` - `.detach()`, `.numpy()` - - Args: - meta_info (dict, optional): A dict contains the meta information - of single image. such as `img_shape`, `scale_factor`, etc. - Default: None. - data (dict, optional): A dict contains annotations of single image or - model predictions. Default: None. - - Examples: - >>> from mmdet.core import GeneralData - >>> img_meta = dict(img_shape=(800, 1196, 3), pad_shape=(800, 1216, 3)) - >>> instance_data = GeneralData(meta_info=img_meta) - >>> img_shape in instance_data - True - >>> instance_data.det_labels = torch.LongTensor([0, 1, 2, 3]) - >>> instance_data["det_scores"] = torch.Tensor([0.01, 0.1, 0.2, 0.3]) - >>> print(results) - <GeneralData( - - META INFORMATION - img_shape: (800, 1196, 3) - pad_shape: (800, 1216, 3) - - DATA FIELDS - shape of det_labels: torch.Size([4]) - shape of det_scores: torch.Size([4]) - - ) at 0x7f84acd10f90> - >>> instance_data.det_scores - tensor([0.0100, 0.1000, 0.2000, 0.3000]) - >>> instance_data.det_labels - tensor([0, 1, 2, 3]) - >>> instance_data['det_labels'] - tensor([0, 1, 2, 3]) - >>> 'det_labels' in instance_data - True - >>> instance_data.img_shape - (800, 1196, 3) - >>> 'det_scores' in instance_data - True - >>> del instance_data.det_scores - >>> 'det_scores' in instance_data - False - >>> det_labels = instance_data.pop('det_labels', None) - >>> det_labels - tensor([0, 1, 2, 3]) - >>> 'det_labels' in instance_data - >>> False - """ - - def __init__(self, meta_info=None, data=None): - - self._meta_info_fields = set() - self._data_fields = set() - - if meta_info is not None: - self.set_meta_info(meta_info=meta_info) - if data is not None: - self.set_data(data) - - def set_meta_info(self, meta_info): - """Add meta information. - - Args: - meta_info (dict): A dict contains the meta information - of image. such as `img_shape`, `scale_factor`, etc. - Default: None. - """ - assert isinstance(meta_info, - dict), f'meta should be a `dict` but get {meta_info}' - meta = copy.deepcopy(meta_info) - for k, v in meta.items(): - # should be consistent with original meta_info - if k in self._meta_info_fields: - ori_value = getattr(self, k) - if isinstance(ori_value, (torch.Tensor, np.ndarray)): - if (ori_value == v).all(): - continue - else: - raise KeyError( - f'img_meta_info {k} has been set as ' - f'{getattr(self, k)} before, which is immutable ') - elif ori_value == v: - continue - else: - raise KeyError( - f'img_meta_info {k} has been set as ' - f'{getattr(self, k)} before, which is immutable ') - else: - self._meta_info_fields.add(k) - self.__dict__[k] = v - - def set_data(self, data): - """Update a dict to `data_fields`. - - Args: - data (dict): A dict contains annotations of image or - model predictions. Default: None. - """ - assert isinstance(data, - dict), f'meta should be a `dict` but get {data}' - for k, v in data.items(): - self.__setattr__(k, v) - - def new(self, meta_info=None, data=None): - """Return a new results with same image meta information. - - Args: - meta_info (dict, optional): A dict contains the meta information - of image. such as `img_shape`, `scale_factor`, etc. - Default: None. - data (dict, optional): A dict contains annotations of image or - model predictions. Default: None. - """ - new_data = self.__class__() - new_data.set_meta_info(dict(self.meta_info_items())) - if meta_info is not None: - new_data.set_meta_info(meta_info) - if data is not None: - new_data.set_data(data) - return new_data - - def keys(self): - """ - Returns: - list: Contains all keys in data_fields. - """ - return [key for key in self._data_fields] - - def meta_info_keys(self): - """ - Returns: - list: Contains all keys in meta_info_fields. - """ - return [key for key in self._meta_info_fields] - - def values(self): - """ - Returns: - list: Contains all values in data_fields. - """ - return [getattr(self, k) for k in self.keys()] - - def meta_info_values(self): - """ - Returns: - list: Contains all values in meta_info_fields. - """ - return [getattr(self, k) for k in self.meta_info_keys()] - - def items(self): - for k in self.keys(): - yield (k, getattr(self, k)) - - def meta_info_items(self): - for k in self.meta_info_keys(): - yield (k, getattr(self, k)) - - def __setattr__(self, name, val): - if name in ('_meta_info_fields', '_data_fields'): - if not hasattr(self, name): - super().__setattr__(name, val) - else: - raise AttributeError( - f'{name} has been used as a ' - f'private attribute, which is immutable. ') - else: - if name in self._meta_info_fields: - raise AttributeError(f'`{name}` is used in meta information,' - f'which is immutable') - - self._data_fields.add(name) - super().__setattr__(name, val) - - def __delattr__(self, item): - - if item in ('_meta_info_fields', '_data_fields'): - raise AttributeError(f'{item} has been used as a ' - f'private attribute, which is immutable. ') - - if item in self._meta_info_fields: - raise KeyError(f'{item} is used in meta information, ' - f'which is immutable.') - super().__delattr__(item) - if item in self._data_fields: - self._data_fields.remove(item) - - # dict-like methods - __setitem__ = __setattr__ - __delitem__ = __delattr__ - - def __getitem__(self, name): - return getattr(self, name) - - def get(self, *args): - assert len(args) < 3, '`get` get more than 2 arguments' - return self.__dict__.get(*args) - - def pop(self, *args): - assert len(args) < 3, '`pop` get more than 2 arguments' - name = args[0] - if name in self._meta_info_fields: - raise KeyError(f'{name} is a key in meta information, ' - f'which is immutable') - - if args[0] in self._data_fields: - self._data_fields.remove(args[0]) - return self.__dict__.pop(*args) - - # with default value - elif len(args) == 2: - return args[1] - else: - raise KeyError(f'{args[0]}') - - def __contains__(self, item): - return item in self._data_fields or \ - item in self._meta_info_fields - - # Tensor-like methods - def to(self, *args, **kwargs): - """Apply same name function to all tensors in data_fields.""" - new_data = self.new() - for k, v in self.items(): - if hasattr(v, 'to'): - v = v.to(*args, **kwargs) - new_data[k] = v - return new_data - - # Tensor-like methods - def cpu(self): - """Apply same name function to all tensors in data_fields.""" - new_data = self.new() - for k, v in self.items(): - if isinstance(v, torch.Tensor): - v = v.cpu() - new_data[k] = v - return new_data - - # Tensor-like methods - def npu(self): - """Apply same name function to all tensors in data_fields.""" - new_data = self.new() - for k, v in self.items(): - if isinstance(v, torch.Tensor): - v = v.npu() - new_data[k] = v - return new_data - - # Tensor-like methods - def mlu(self): - """Apply same name function to all tensors in data_fields.""" - new_data = self.new() - for k, v in self.items(): - if isinstance(v, torch.Tensor): - v = v.mlu() - new_data[k] = v - return new_data - - # Tensor-like methods - def cuda(self): - """Apply same name function to all tensors in data_fields.""" - new_data = self.new() - for k, v in self.items(): - if isinstance(v, torch.Tensor): - v = v.cuda() - new_data[k] = v - return new_data - - # Tensor-like methods - def detach(self): - """Apply same name function to all tensors in data_fields.""" - new_data = self.new() - for k, v in self.items(): - if isinstance(v, torch.Tensor): - v = v.detach() - new_data[k] = v - return new_data - - # Tensor-like methods - def numpy(self): - """Apply same name function to all tensors in data_fields.""" - new_data = self.new() - for k, v in self.items(): - if isinstance(v, torch.Tensor): - v = v.detach().cpu().numpy() - new_data[k] = v - return new_data - - def __nice__(self): - repr = '\n \n META INFORMATION \n' - for k, v in self.meta_info_items(): - repr += f'{k}: {v} \n' - repr += '\n DATA FIELDS \n' - for k, v in self.items(): - if isinstance(v, (torch.Tensor, np.ndarray)): - repr += f'shape of {k}: {v.shape} \n' - else: - repr += f'{k}: {v} \n' - return repr + '\n' diff --git a/spaces/rockeycoss/Prompt-Segment-Anything-Demo/mmdet/utils/memory.py b/spaces/rockeycoss/Prompt-Segment-Anything-Demo/mmdet/utils/memory.py deleted file mode 100644 index eb212bcaed139e5c9db595186ee8e16677921512..0000000000000000000000000000000000000000 --- a/spaces/rockeycoss/Prompt-Segment-Anything-Demo/mmdet/utils/memory.py +++ /dev/null @@ -1,213 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import warnings -from collections import abc -from contextlib import contextmanager -from functools import wraps - -import torch - -from mmdet.utils import get_root_logger - - -def cast_tensor_type(inputs, src_type=None, dst_type=None): - """Recursively convert Tensor in inputs from ``src_type`` to ``dst_type``. - - Args: - inputs: Inputs that to be casted. - src_type (torch.dtype | torch.device): Source type. - src_type (torch.dtype | torch.device): Destination type. - - Returns: - The same type with inputs, but all contained Tensors have been cast. - """ - assert dst_type is not None - if isinstance(inputs, torch.Tensor): - if isinstance(dst_type, torch.device): - # convert Tensor to dst_device - if hasattr(inputs, 'to') and \ - hasattr(inputs, 'device') and \ - (inputs.device == src_type or src_type is None): - return inputs.to(dst_type) - else: - return inputs - else: - # convert Tensor to dst_dtype - if hasattr(inputs, 'to') and \ - hasattr(inputs, 'dtype') and \ - (inputs.dtype == src_type or src_type is None): - return inputs.to(dst_type) - else: - return inputs - # we need to ensure that the type of inputs to be casted are the same - # as the argument `src_type`. - elif isinstance(inputs, abc.Mapping): - return type(inputs)({ - k: cast_tensor_type(v, src_type=src_type, dst_type=dst_type) - for k, v in inputs.items() - }) - elif isinstance(inputs, abc.Iterable): - return type(inputs)( - cast_tensor_type(item, src_type=src_type, dst_type=dst_type) - for item in inputs) - # TODO: Currently not supported - # elif isinstance(inputs, InstanceData): - # for key, value in inputs.items(): - # inputs[key] = cast_tensor_type( - # value, src_type=src_type, dst_type=dst_type) - # return inputs - else: - return inputs - - -@contextmanager -def _ignore_torch_cuda_oom(): - """A context which ignores CUDA OOM exception from pytorch. - - Code is modified from - <https://github.com/facebookresearch/detectron2/blob/main/detectron2/utils/memory.py> # noqa: E501 - """ - try: - yield - except RuntimeError as e: - # NOTE: the string may change? - if 'CUDA out of memory. ' in str(e): - pass - else: - raise - - -class AvoidOOM: - """Try to convert inputs to FP16 and CPU if got a PyTorch's CUDA Out of - Memory error. It will do the following steps: - - 1. First retry after calling `torch.cuda.empty_cache()`. - 2. If that still fails, it will then retry by converting inputs - to FP16. - 3. If that still fails trying to convert inputs to CPUs. - In this case, it expects the function to dispatch to - CPU implementation. - - Args: - to_cpu (bool): Whether to convert outputs to CPU if get an OOM - error. This will slow down the code significantly. - Defaults to True. - test (bool): Skip `_ignore_torch_cuda_oom` operate that can use - lightweight data in unit test, only used in - test unit. Defaults to False. - - Examples: - >>> from mmdet.utils.memory import AvoidOOM - >>> AvoidCUDAOOM = AvoidOOM() - >>> output = AvoidOOM.retry_if_cuda_oom( - >>> some_torch_function)(input1, input2) - >>> # To use as a decorator - >>> # from mmdet.utils import AvoidCUDAOOM - >>> @AvoidCUDAOOM.retry_if_cuda_oom - >>> def function(*args, **kwargs): - >>> return None - ``` - - Note: - 1. The output may be on CPU even if inputs are on GPU. Processing - on CPU will slow down the code significantly. - 2. When converting inputs to CPU, it will only look at each argument - and check if it has `.device` and `.to` for conversion. Nested - structures of tensors are not supported. - 3. Since the function might be called more than once, it has to be - stateless. - """ - - def __init__(self, to_cpu=True, test=False): - self.to_cpu = to_cpu - self.test = test - - def retry_if_cuda_oom(self, func): - """Makes a function retry itself after encountering pytorch's CUDA OOM - error. - - The implementation logic is referred to - https://github.com/facebookresearch/detectron2/blob/main/detectron2/utils/memory.py - - Args: - func: a stateless callable that takes tensor-like objects - as arguments. - Returns: - func: a callable which retries `func` if OOM is encountered. - """ # noqa: W605 - - @wraps(func) - def wrapped(*args, **kwargs): - - # raw function - if not self.test: - with _ignore_torch_cuda_oom(): - return func(*args, **kwargs) - - # Clear cache and retry - torch.cuda.empty_cache() - with _ignore_torch_cuda_oom(): - return func(*args, **kwargs) - - # get the type and device of first tensor - dtype, device = None, None - values = args + tuple(kwargs.values()) - for value in values: - if isinstance(value, torch.Tensor): - dtype = value.dtype - device = value.device - break - if dtype is None or device is None: - raise ValueError('There is no tensor in the inputs, ' - 'cannot get dtype and device.') - - # Convert to FP16 - fp16_args = cast_tensor_type(args, dst_type=torch.half) - fp16_kwargs = cast_tensor_type(kwargs, dst_type=torch.half) - logger = get_root_logger() - logger.warning(f'Attempting to copy inputs of {str(func)} ' - 'to FP16 due to CUDA OOM') - - # get input tensor type, the output type will same as - # the first parameter type. - with _ignore_torch_cuda_oom(): - output = func(*fp16_args, **fp16_kwargs) - output = cast_tensor_type( - output, src_type=torch.half, dst_type=dtype) - if not self.test: - return output - logger.warning('Using FP16 still meet CUDA OOM') - - # Try on CPU. This will slow down the code significantly, - # therefore print a notice. - if self.to_cpu: - logger.warning(f'Attempting to copy inputs of {str(func)} ' - 'to CPU due to CUDA OOM') - cpu_device = torch.empty(0).device - cpu_args = cast_tensor_type(args, dst_type=cpu_device) - cpu_kwargs = cast_tensor_type(kwargs, dst_type=cpu_device) - - # convert outputs to GPU - with _ignore_torch_cuda_oom(): - logger.warning(f'Convert outputs to GPU (device={device})') - output = func(*cpu_args, **cpu_kwargs) - output = cast_tensor_type( - output, src_type=cpu_device, dst_type=device) - return output - - warnings.warn('Cannot convert output to GPU due to CUDA OOM, ' - 'the output is now on CPU, which might cause ' - 'errors if the output need to interact with GPU ' - 'data in subsequent operations') - logger.warning('Cannot convert output to GPU due to ' - 'CUDA OOM, the output is on CPU now.') - - return func(*cpu_args, **cpu_kwargs) - else: - # may still get CUDA OOM error - return func(*args, **kwargs) - - return wrapped - - -# To use AvoidOOM as a decorator -AvoidCUDAOOM = AvoidOOM() diff --git a/spaces/rorallitri/biomedical-language-models/logs/Download or Stream Aurat Aur Inteqam 720p A Tale of Courage and Vengeance.md b/spaces/rorallitri/biomedical-language-models/logs/Download or Stream Aurat Aur Inteqam 720p A Tale of Courage and Vengeance.md deleted file mode 100644 index 0348eb0e5a6ced73192a5640e8b161a41d7f714c..0000000000000000000000000000000000000000 --- a/spaces/rorallitri/biomedical-language-models/logs/Download or Stream Aurat Aur Inteqam 720p A Tale of Courage and Vengeance.md +++ /dev/null @@ -1,17 +0,0 @@ -<br /> -<p>You can download or play Goriya Goli Mp3 Song Download . Play and Listen presenting ankhiyon se goli . Play and Listen song akhiyon se goli maare movie .. akhiyon se goli maare full movie Video Download 3GP, MP4, HD MP4, And Watch akhiyon se goli maare full movie Video<br /><br />Download Bollywood Dance Songs Audio Jukebox Ankhiyon Se Goli Maare Tseries full video in hd 720p 1080p mp3 torrent mp4 free utorrent 3GP, MKV, Avi , watch online, WEBM,. Free Download Akhiyon Se Goli Maare 2002 Filmywap Full Movie . "Akhiyon Se Goli Maare 2002 fill movie download mp4 hd 720p" and you also know here movie duration .<br /><br />Ankhiyon Ke Jharokhon Se (1978) Ankhiyon Se Goli Maare . 2015 science engineering 720p 1080p x264 . -Download ankhiyon ke jharokhon se 1978 Movie .<br /><br />Akhiyon Se Goli Maare (2002) - watch online/download - quality: DVDRip HD 720p - free full movie - Akendra alias Topichand Bhangare runs an antique store as a.. Akhiyon Se Goli Maare Full Movie, Download the latest released Bollywood HD Movies, Games and Software directly from Torrent. Wapking and DJmaza official mp4, 3gp, avi videos.. Akhiyon Se Goli Maare (2002) - Thumka Lagake Naacho released on 2002.Akhiyon Se Goli Maare (2002) - Thumka Lagake Naacho is available in HD quality , Download .<br /><br />Check all videos related to akhiyon se goli maare hd video . akhiyon se goli maare HD 720p full song . 4:15. Ankhiyon Se Goli Maare' HD Song Govinda, Raveena .<br /> 518920514c</p> -<p>tamil Dost Garibon Ka free download utorrent<br />Hatyare movie 2 download<br />the Pyaar Mein Kabhi Kabhi 2 full movie in hindi download hd<br />1 Khoon Ka Sindoor full movie download<br />Laado malayalam movie free download in hd<br />2012 Midhunam full movie in hindi hd 1080p<br />download film Salakhen<br />Hatyare hindi dubbed watch online free<br />the Benaam movie hd download<br />Aurat Aur Inteqam dubbed full movie<br /></p> -<h2>Aurat Aur Inteqam 720p Torrent Download</h2><br /><p><b><b>DOWNLOAD</b> &#10042; <a href="https://tinurll.com/2uznK9">https://tinurll.com/2uznK9</a></b></p><br /><br /> -<p>Kites HinDi Movie ( Part 14 ) - With English Subtitles - HD HQ - Ft - Hrithik Roshan & Barbara Mori. Capture crystal clear 4K-quality video while recording action movies, and save them all, ready to share later with this GoPro HERO5 Session 4K Ultra HD Action Camcorder & Memory Card Bundle.. Share Movie. HD Streams. Online . Auf der Flucht (2010) Nowax 05.06.2018 - Kites - Auf der Flucht HD Stream. 1080p Full HD Stream. 720p HD Stream. 480p Stream .<br><br>Nonton film Kites (2010) streaming dan download movie subtitle indonesia kualitas HD gratis terlengkap dan terbaru. Action, Drama, Romance, India, Anurag Basu, Hrithik .. Kites (Film) Video Download 3GP, MP4, HD MP4, And Watch Kites (Film) Video. Streaming and Download KITES FULL MOVIE HD 1080P BLU RAY Video, Song MP3 and Movies HQ (14:39) 1080p 720p 320Kbps 3GP MP4 for free - Ucvideo<br><br>Kites 2010 Movie Free Download HD 720p,Kites 2010 full movie download hd,Kites 2010 film download,Kites 2010 hindi movie download hd 720p BlueRay,<br><br>Top features: - 4K video recording for impressive action shots - Creative movie making with versatile shooting modes - Tough design is waterproof, shockproof, freezeproof and dustproof -<br><br>Online Shopping China the best bang for your buck, provides cool electronics gadgets, toys, cell phones, vr headset, 3d printer, tv box, home decor, apparel at great prices.. HD movies free download any type of movie download free.Latest,english,tamil, punjabi,hindi movies free download.For PC, tablet,mobile free movie download.<br>fd3bc05f4a</p> -<p>Brides Wanted 1 dvdrip download movie<br>Aham Premasmi movie download in tamil dubbed hindi<br>1080p Utthaan<br>Sauda full hd movie 1080p<br>Coffee Shop full movie hd in tamil download movies<br>Operation Green Hunt movie free download hindi<br>Let 's Dance 5 free movie download<br>the Overtime 2012 full movie 1080p download movies<br>Ho Jatta Hai Pyar full movie 1080p hd<br>Detour 2 movie download in hindi 720p download<br></p> -<p>Sitam download full movie in hd<br>blu Sahiya 1080p telugu movies<br>Nanhe Jaisalmer hindi movie songs mp3 free download<br>Saawan Ko Aane Do 4 english dubbed mp4 torrent<br>Nayak 2 movie free download<br>hindi film Ssimran download<br>download movie the Road to Sangam<br>Police 3 full movie in hindi download mp4<br>download Formula 69 in hindi kickass<br>Wajahh 3 movie in hindi free download 3gp<br></p> -<p>Pareeksha movie download hd 1080p kickass<br>Soundtrack 2 720p blu-ray movies<br>Paheli full movie in hindi hd 1080p 2012 movies<br>Crook: It 's Good To Be Bad hai full movie 720p<br>free Bhanvraa-Love and Aggressions pdf hindi<br>Daak Bangla 4 full movie for download<br>download Johnny Gaddaar 3 in hindi 720p<br>Ichadhari Shaitan full movie hd 1080p online<br>Om book marathi movie download<br>Jashnn full movie free download in telugu mp4 hd<br></p> -<p>Create impressive videos.. . www.spekmac.com Download hindi songs free download rm files at . Thodi Life Thoda Magic . (Full Song) Film - Aap Kaa Surroor - The Movie .. nigfayn June 05, 2018 Download Full Movie Thodi Life Thoda Magic Part 1 In Hindi nigfayn. Download Full Movie Thodi Life Thoda Magic Part 1 In Hindi.<br><br>. film in hindi dubbed download Thodi Life Thoda Magic malayalam full movie hd 1080p Zabardast 2 full movie 3gp download Thoda Pyaar Thoda Magic full movie hd 1080p .<br><br>Thodi Life Thoda Magic Movie Photos: Check out for latest Thodi Life Thoda Magic movie stills, working stills, Thodi Life Thoda Magic behind the scenes photos, Thodi Life Thoda Magic star<br><br>Edmunds Research & Reviews Search New Car Listings Nearby!. Thodi Life Thoda Magic (2008) - Hindi Movie Watch Online.. Thodi Life Thoda Magic Bengali Full Movie Download Dvdrip Movies. Sa Ghum Ho Sad hindi movie songs download, 3gp Thoda Sa Ghum Ho . Sa Ghum Ho Sad full song download, Thoda Sa Ghum Ho .<br><br>The Thodi Life Thoda Magic Full Movie With . Thodi Life Thoda Magic Free Mp3 Download Thodi Life Thoda Magic Song Free Download Thodi Life Thoda Magic Hindi Movie .. Thoda Pyaar Thoda Magic Full Movie In Hindi Free Download Utorrent Kickass . Thodi Life Thoda Magic full movie download in .. See full summary . Title: Thodi Life Thoda Magic (2008) 4.3 /10. Want to share IMDb's . Download Audio Books .<br>78f063afee</p> -<p></p> -<p>The Apartment<br>download Yeh Mohabbat Hai movie in hindi 3gp<br>full movie Cactus-A Story Of Hope download<br>tamil West is West free download<br>Dhan Dhana Dhan Goal tamil movie mp3 download<br>Parmatma movie mp3 songs free download<br>Kya Time Hai Yaar bengali full movie hd 720p download<br>London Dreams 1 movie download utorrent<br>Who 's There movie tamil dubbed in 720p<br>hindi movie Mr. Majnu hai full movie download<br></p> -<p>hindi dubbed 300mb movies free download, hindi dubbed full movies download, hindi dubbed free movie download, 300mb movies hindi dubbed, 300mbfilms hindi dubbed, hindi dubbed 3gp movie,. Free New Hollywood Hindi Dubbed movies download hindi punjabi bollywood hindi dubbed movies in 3gp mp4 full hd 720p 1080p many more.<br><br>Conctate con amigos, familiares y compaeros.. Librivox Free Audiobook. . Sarrainodu (Telugu) Hindi Dubbed Movies Preview . Rakul Preet Singh, Catherine.srt download. download 1 .<br><br>Find Where Free Movies Is Available To Stream Now. Yidio is the premier streaming guide for TV Shows & Movies on the web, phone, tablet or smart tv.<br><br>Free Download Watch Movies Online. Hindi; . (2018) Full Telugu Movie Watch Online 2018. . A Single Shot (2013) Hindi Dubbed Full Movie Free Download 2013. Movie .. TODAYPK TELUGU MOVIES ONLINE 2017 WATCH ONLINE FREE. TodayPk Movies Telugu . Dubbed. Hindi Dubbed; Hollywood Movies; . TELUGU MOVIES 2017 WATCH ONLINE DOWNLOAD .<br><br>Download Hd 3Gp Mp4 Movies Download xFilmywap is Best Free Sorce For Download Any Kind Movies For Mobile And Pc.. Watch Full Hollywood Movies Dubbed in Hindi online free. Latest Hollywood Movies Dubbed in Hindi watch online released . telugu, hindi, gujarati, english, punjabi .<br>78f063afee</p> -<p>tamil movie Loins Of Punjab Presents video songs free download<br>Yeh Hai Prem-Janjaal pdf free<br>F.A.L.T.U 720p movies download<br>Main Aurr Mrs Khanna movie full in hindi download<br>Toonpur Ka Superrhero in hindi download hd<br>Tere Mere Phere movie 720p download movie<br>Bal Hanuman 2 2 movie download hd 720p<br>download film C Kkompany man 2 full movie<br>Marega Salaa love telugu movie dubbed in hindi free download<br>Ummeed - The Hope full movie in hindi dubbed hd 1080p<br></p> -<p>hindi songs of Danger free download<br>Cocktail-The Deadly Combination 1 2 movie download<br>Miley Naa Miley Hum 3 full movie download<br>free download Ab To Banja Sajana Hamaar<br>download the Kaaran in hindi hd<br>Bhoot Unkle 3 full movie download hd 720p<br>download the Mission - The Last War movie mp4<br>download Mission 11 Jul english dubbed free<br>tamil Cactus-A Story Of Hope film movie free download<br>Aur Ek Prem Kahani IN HINDI pdf free download<br></p> -<p>malayalam movie Tere Liye full movie download . 9 Eleven 2015 full movie download download Saab, Chai Paani man full movie in hindi. Overview; Share this page.. Dana Paani. 4. See full list of movies from 1989. 1988. Bees Saal Baad. 2 12. Biwi Ho To Aisi. 2. Commando (1988) 2. . See full list of movies from 1965. 1964. Ayee .<br><br>Saab, Chai Paani 2 Full Movie Hd 1080p Tamil Dubbed English Movie. . Malayalam, dubbed, . Movies You . 3 full movie in hindi dubbed hd download . Paani Mein 2 full .. Find Where Full Movies Is Available To Stream Now. Yidio is the premier streaming guide for TV Shows & Movies on the web, phone, tablet or smart tv.<br><br>Here you can download Kalyug Full Movie in HD format for free from single . Chunnu N Munnu hindi book pdf download Saab, Chai Paani malayalam movie mp3 download.. download full movie The Undertrial in 720p Saab, Chai Paani book free download in hindi Umar 4 full movie in hindi free . the Zamaanat malayalam full movie download.. Bullet Raja has taken a poor start at the box office like last week's duds Gori Tere Pyaar Mein and Singh Saab . Full Bollywood movie Download . paani aa jata tha .<br><br>Find Where Full Movies Is Available To Stream Now. Yidio is the premier streaming guide for TV Shows & Movies on the web, phone, tablet or smart tv.. Malayalam Mp3. Download free for Khote Sikkey Hindi Movie Song Mp3 or search any . Saab, Chai Paani Movie In .. Filmfare is an English-language tabloid-sized bi-weekly magazine about Indian cinema. It was started in 1967 and published by The Times Group.<br><br>Saab, Chai Paani kannada movie free download utorrent .. Bullet Raja has taken a poor start at the box office like last week's duds Gori Tere Pyaar Mein and Singh Saab . Full Bollywood movie Download . paani aa jata tha .<br>78f063afee</p> aaccfb2cb3<br /> -<br /> -<br /> \ No newline at end of file diff --git a/spaces/rottenlittlecreature/Moon_Goblin/theme_dropdown.py b/spaces/rottenlittlecreature/Moon_Goblin/theme_dropdown.py deleted file mode 100644 index 6235388fd00549553df44028f3ccf03e946994ea..0000000000000000000000000000000000000000 --- a/spaces/rottenlittlecreature/Moon_Goblin/theme_dropdown.py +++ /dev/null @@ -1,57 +0,0 @@ -import os -import pathlib - -from gradio.themes.utils import ThemeAsset - - -def create_theme_dropdown(): - import gradio as gr - - asset_path = pathlib.Path(__file__).parent / "themes" - themes = [] - for theme_asset in os.listdir(str(asset_path)): - themes.append( - (ThemeAsset(theme_asset), gr.Theme.load(str(asset_path / theme_asset))) - ) - - def make_else_if(theme_asset): - return f""" - else if (theme == '{str(theme_asset[0].version)}') {{ - var theme_css = `{theme_asset[1]._get_theme_css()}` - }}""" - - head, tail = themes[0], themes[1:] - if_statement = f""" - if (theme == "{str(head[0].version)}") {{ - var theme_css = `{head[1]._get_theme_css()}` - }} {" ".join(make_else_if(t) for t in tail)} - """ - - latest_to_oldest = sorted([t[0] for t in themes], key=lambda asset: asset.version)[ - ::-1 - ] - latest_to_oldest = [str(t.version) for t in latest_to_oldest] - - component = gr.Dropdown( - choices=latest_to_oldest, - value=latest_to_oldest[0], - render=False, - label="Select Version", - ).style(container=False) - - return ( - component, - f""" - (theme) => {{ - if (!document.querySelector('.theme-css')) {{ - var theme_elem = document.createElement('style'); - theme_elem.classList.add('theme-css'); - document.head.appendChild(theme_elem); - }} else {{ - var theme_elem = document.querySelector('.theme-css'); - }} - {if_statement} - theme_elem.innerHTML = theme_css; - }} - """, - ) diff --git a/spaces/runa91/bite_gradio/src/stacked_hourglass/__init__.py b/spaces/runa91/bite_gradio/src/stacked_hourglass/__init__.py deleted file mode 100644 index bdb1a50e871a0da6d20f89aaf5d559d40bf5341c..0000000000000000000000000000000000000000 --- a/spaces/runa91/bite_gradio/src/stacked_hourglass/__init__.py +++ /dev/null @@ -1,5 +0,0 @@ -import os -import sys -sys.path.insert(0, os.path.join(os.path.dirname(__file__), '../../')) -from src.stacked_hourglass.model import hg1, hg2, hg4, hg8 -from src.stacked_hourglass.predictor import HumanPosePredictor diff --git a/spaces/rzzgate/Stable-Diffusion-ControlNet-WebUI/diffusion_webui/diffusion_models/controlnet/controlnet_inpaint/controlnet_inpaint_seg.py b/spaces/rzzgate/Stable-Diffusion-ControlNet-WebUI/diffusion_webui/diffusion_models/controlnet/controlnet_inpaint/controlnet_inpaint_seg.py deleted file mode 100644 index ee72ac9398309993dc23b5ac860e2b2d072efe32..0000000000000000000000000000000000000000 --- a/spaces/rzzgate/Stable-Diffusion-ControlNet-WebUI/diffusion_webui/diffusion_models/controlnet/controlnet_inpaint/controlnet_inpaint_seg.py +++ /dev/null @@ -1,403 +0,0 @@ -import gradio as gr -import numpy as np -import torch -from diffusers import ControlNetModel, StableDiffusionControlNetPipeline -from PIL import Image -from transformers import AutoImageProcessor, UperNetForSemanticSegmentation - -from diffusion_webui.utils.model_list import ( - controlnet_seg_model_list, - stable_inpiant_model_list, -) -from diffusion_webui.utils.scheduler_list import ( - SCHEDULER_LIST, - get_scheduler_list, -) - -# https://github.com/mikonvergence/ControlNetInpaint - - -def ade_palette(): - """ADE20K palette that maps each class to RGB values.""" - return [ - [120, 120, 120], - [180, 120, 120], - [6, 230, 230], - [80, 50, 50], - [4, 200, 3], - [120, 120, 80], - [140, 140, 140], - [204, 5, 255], - [230, 230, 230], - [4, 250, 7], - [224, 5, 255], - [235, 255, 7], - [150, 5, 61], - [120, 120, 70], - [8, 255, 51], - [255, 6, 82], - [143, 255, 140], - [204, 255, 4], - [255, 51, 7], - [204, 70, 3], - [0, 102, 200], - [61, 230, 250], - [255, 6, 51], - [11, 102, 255], - [255, 7, 71], - [255, 9, 224], - [9, 7, 230], - [220, 220, 220], - [255, 9, 92], - [112, 9, 255], - [8, 255, 214], - [7, 255, 224], - [255, 184, 6], - [10, 255, 71], - [255, 41, 10], - [7, 255, 255], - [224, 255, 8], - [102, 8, 255], - [255, 61, 6], - [255, 194, 7], - [255, 122, 8], - [0, 255, 20], - [255, 8, 41], - [255, 5, 153], - [6, 51, 255], - [235, 12, 255], - [160, 150, 20], - [0, 163, 255], - [140, 140, 140], - [250, 10, 15], - [20, 255, 0], - [31, 255, 0], - [255, 31, 0], - [255, 224, 0], - [153, 255, 0], - [0, 0, 255], - [255, 71, 0], - [0, 235, 255], - [0, 173, 255], - [31, 0, 255], - [11, 200, 200], - [255, 82, 0], - [0, 255, 245], - [0, 61, 255], - [0, 255, 112], - [0, 255, 133], - [255, 0, 0], - [255, 163, 0], - [255, 102, 0], - [194, 255, 0], - [0, 143, 255], - [51, 255, 0], - [0, 82, 255], - [0, 255, 41], - [0, 255, 173], - [10, 0, 255], - [173, 255, 0], - [0, 255, 153], - [255, 92, 0], - [255, 0, 255], - [255, 0, 245], - [255, 0, 102], - [255, 173, 0], - [255, 0, 20], - [255, 184, 184], - [0, 31, 255], - [0, 255, 61], - [0, 71, 255], - [255, 0, 204], - [0, 255, 194], - [0, 255, 82], - [0, 10, 255], - [0, 112, 255], - [51, 0, 255], - [0, 194, 255], - [0, 122, 255], - [0, 255, 163], - [255, 153, 0], - [0, 255, 10], - [255, 112, 0], - [143, 255, 0], - [82, 0, 255], - [163, 255, 0], - [255, 235, 0], - [8, 184, 170], - [133, 0, 255], - [0, 255, 92], - [184, 0, 255], - [255, 0, 31], - [0, 184, 255], - [0, 214, 255], - [255, 0, 112], - [92, 255, 0], - [0, 224, 255], - [112, 224, 255], - [70, 184, 160], - [163, 0, 255], - [153, 0, 255], - [71, 255, 0], - [255, 0, 163], - [255, 204, 0], - [255, 0, 143], - [0, 255, 235], - [133, 255, 0], - [255, 0, 235], - [245, 0, 255], - [255, 0, 122], - [255, 245, 0], - [10, 190, 212], - [214, 255, 0], - [0, 204, 255], - [20, 0, 255], - [255, 255, 0], - [0, 153, 255], - [0, 41, 255], - [0, 255, 204], - [41, 0, 255], - [41, 255, 0], - [173, 0, 255], - [0, 245, 255], - [71, 0, 255], - [122, 0, 255], - [0, 255, 184], - [0, 92, 255], - [184, 255, 0], - [0, 133, 255], - [255, 214, 0], - [25, 194, 194], - [102, 255, 0], - [92, 0, 255], - ] - - -class StableDiffusionControlNetInpaintSegGenerator: - def __init__(self): - self.pipe = None - - def load_model( - self, - stable_model_path, - controlnet_model_path, - scheduler, - ): - - if self.pipe is None: - controlnet = ControlNetModel.from_pretrained( - controlnet_model_path, torch_dtype=torch.float16 - ) - self.pipe = StableDiffusionControlNetPipeline.from_pretrained( - pretrained_model_name_or_path=stable_model_path, - controlnet=controlnet, - safety_checker=None, - torch_dtype=torch.float16, - ) - - self.pipe = get_scheduler_list(pipe=self.pipe, scheduler=scheduler) - self.pipe.to("cuda") - self.pipe.enable_xformers_memory_efficient_attention() - - return self.pipe - - def load_image(self, image_path): - image = np.array(image_path) - image = Image.fromarray(image) - return image - - def controlnet_seg_inpaint(self, image_path: str): - image_processor = AutoImageProcessor.from_pretrained( - "openmmlab/upernet-convnext-small" - ) - image_segmentor = UperNetForSemanticSegmentation.from_pretrained( - "openmmlab/upernet-convnext-small" - ) - - image = image_path["image"].convert("RGB").resize((512, 512)) - image = np.array(image) - pixel_values = image_processor(image, return_tensors="pt").pixel_values - - with torch.no_grad(): - outputs = image_segmentor(pixel_values) - - seg = image_processor.post_process_semantic_segmentation( - outputs, target_sizes=[image.size[::-1]] - )[0] - - color_seg = np.zeros((seg.shape[0], seg.shape[1], 3), dtype=np.uint8) - palette = np.array(ade_palette()) - - for label, color in enumerate(palette): - color_seg[seg == label, :] = color - - color_seg = color_seg.astype(np.uint8) - image = Image.fromarray(color_seg) - - return image - - def generate_image( - self, - image_path: str, - stable_model_path: str, - controlnet_model_path: str, - prompt: str, - negative_prompt: str, - num_images_per_prompt: int, - guidance_scale: int, - num_inference_step: int, - controlnet_conditioning_scale: int, - scheduler: str, - seed_generator: int, - ): - - normal_image = image_path["image"].convert("RGB").resize((512, 512)) - mask_image = image_path["mask"].convert("RGB").resize((512, 512)) - - normal_image = self.load_image(image_path=normal_image) - mask_image = self.load_image(image_path=mask_image) - - controlnet_image = self.controlnet_seg_inpaint(image_path=image_path) - - pipe = self.load_model( - stable_model_path=stable_model_path, - controlnet_model_path=controlnet_model_path, - scheduler=scheduler, - ) - - if seed_generator == 0: - random_seed = torch.randint(0, 1000000, (1,)) - generator = torch.manual_seed(random_seed) - else: - generator = torch.manual_seed(seed_generator) - - output = pipe( - prompt=prompt, - image=normal_image, - mask_image=mask_image, - control_image=controlnet_image, - negative_prompt=negative_prompt, - num_images_per_prompt=num_images_per_prompt, - num_inference_steps=num_inference_step, - guidance_scale=guidance_scale, - controlnet_conditioning_scale=controlnet_conditioning_scale, - generator=generator, - ).images - - return output - - def app(): - with gr.Blocks(): - with gr.Row(): - with gr.Column(): - controlnet_seg_inpaint_image_file = gr.Image( - source="upload", - tool="sketch", - elem_id="image_upload", - type="pil", - label="Upload", - ) - - controlnet_seg_inpaint_prompt = gr.Textbox( - lines=1, placeholder="Prompt", show_label=False - ) - - controlnet_seg_inpaint_negative_prompt = gr.Textbox( - lines=1, - show_label=False, - placeholder="Negative Prompt", - ) - with gr.Row(): - with gr.Column(): - controlnet_seg_inpaint_stable_model_id = ( - gr.Dropdown( - choices=stable_inpiant_model_list, - value=stable_inpiant_model_list[0], - label="Stable Model Id", - ) - ) - - controlnet_seg_inpaint_guidance_scale = gr.Slider( - minimum=0.1, - maximum=15, - step=0.1, - value=7.5, - label="Guidance Scale", - ) - - controlnet_seg_inpaint_num_inference_step = ( - gr.Slider( - minimum=1, - maximum=100, - step=1, - value=50, - label="Num Inference Step", - ) - ) - controlnet_seg_inpaint_num_images_per_prompt = ( - gr.Slider( - minimum=1, - maximum=10, - step=1, - value=1, - label="Number Of Images", - ) - ) - with gr.Row(): - with gr.Column(): - controlnet_seg_inpaint_model_id = gr.Dropdown( - choices=controlnet_seg_model_list, - value=controlnet_seg_model_list[0], - label="Controlnet Model Id", - ) - controlnet_seg_inpaint_scheduler = gr.Dropdown( - choices=SCHEDULER_LIST, - value=SCHEDULER_LIST[0], - label="Scheduler", - ) - controlnet_seg_inpaint_controlnet_conditioning_scale = gr.Slider( - minimum=0.1, - maximum=1.0, - step=0.1, - value=0.5, - label="Controlnet Conditioning Scale", - ) - - controlnet_seg_inpaint_seed_generator = ( - gr.Slider( - minimum=0, - maximum=1000000, - step=1, - value=0, - label="Seed Generator", - ) - ) - - controlnet_seg_inpaint_predict = gr.Button( - value="Generator" - ) - - with gr.Column(): - output_image = gr.Gallery( - label="Generated images", - show_label=False, - elem_id="gallery", - ).style(grid=(1, 2)) - - controlnet_seg_inpaint_predict.click( - fn=StableDiffusionControlNetInpaintSegGenerator().generate_image, - inputs=[ - controlnet_seg_inpaint_image_file, - controlnet_seg_inpaint_stable_model_id, - controlnet_seg_inpaint_model_id, - controlnet_seg_inpaint_prompt, - controlnet_seg_inpaint_negative_prompt, - controlnet_seg_inpaint_num_images_per_prompt, - controlnet_seg_inpaint_guidance_scale, - controlnet_seg_inpaint_num_inference_step, - controlnet_seg_inpaint_controlnet_conditioning_scale, - controlnet_seg_inpaint_scheduler, - controlnet_seg_inpaint_seed_generator, - ], - outputs=[output_image], - ) diff --git a/spaces/sahil2801/CodeAlpaca/README.md b/spaces/sahil2801/CodeAlpaca/README.md deleted file mode 100644 index 1393da861c1c902e9d2a72400c8f45107a7cafc0..0000000000000000000000000000000000000000 --- a/spaces/sahil2801/CodeAlpaca/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: CodeAlpaca -emoji: 💻 -colorFrom: purple -colorTo: purple -sdk: gradio -sdk_version: 3.23.0 -app_file: app.py -pinned: false -license: apache-2.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/sajjade/hassanblend-hassanblend1.4/app.py b/spaces/sajjade/hassanblend-hassanblend1.4/app.py deleted file mode 100644 index e4edcc74c90d799e6e24bd2de6f54ee9ccec0718..0000000000000000000000000000000000000000 --- a/spaces/sajjade/hassanblend-hassanblend1.4/app.py +++ /dev/null @@ -1,3 +0,0 @@ -import gradio as gr - -gr.Interface.load("models/hassanblend/hassanblend1.4").launch() \ No newline at end of file diff --git a/spaces/sanchanhart/Warehouse_Apparel_Detection/metadata/predictor_yolo_detector/detector_test.py b/spaces/sanchanhart/Warehouse_Apparel_Detection/metadata/predictor_yolo_detector/detector_test.py deleted file mode 100644 index f728dcfe392de07aaa3b9e7b28b734142b15423b..0000000000000000000000000000000000000000 --- a/spaces/sanchanhart/Warehouse_Apparel_Detection/metadata/predictor_yolo_detector/detector_test.py +++ /dev/null @@ -1,176 +0,0 @@ -import os -import shutil -import time -from pathlib import Path - -import cv2 -import torch -import torch.backends.cudnn as cudnn -from numpy import random -from PIL import Image - -from metadata.utils.utils import encodeImageIntoBase64 - -import sys -sys.path.insert(0, 'metadata/predictor_yolo_detector') - -from metadata.predictor_yolo_detector.models.experimental import attempt_load -from metadata.predictor_yolo_detector.utils.datasets import LoadStreams, LoadImages -from metadata.predictor_yolo_detector.utils.general import ( - check_img_size, non_max_suppression, apply_classifier, scale_coords, - xyxy2xywh, plot_one_box, strip_optimizer, set_logging) -from metadata.predictor_yolo_detector.utils.torch_utils import select_device, load_classifier, \ - time_synchronized - - -class Detector(): - def __init__(self, filename): - self.weights = "./metadata/predictor_yolo_detector/best.pt" - self.conf = float(0.5) - self.source = "./metadata/predictor_yolo_detector/inference/images/" - self.img_size = int(416) - self.save_dir = "./metadata/predictor_yolo_detector/inference/output" - self.view_img = False - self.save_txt = False - self.device = 'cpu' - self.augment = True - self.agnostic_nms = True - self.conf_thres = float(0.5) - self.iou_thres = float(0.45) - self.classes = 0 - self.save_conf = True - self.update = True - self.filename = filename - - def detect(self, save_img=False): - out, source, weights, view_img, save_txt, imgsz = \ - self.save_dir, self.source, self.weights, self.view_img, self.save_txt, self.img_size - webcam = source.isnumeric() or source.startswith(('rtsp://', 'rtmp://', 'http://')) or source.endswith('.txt') - - # Initialize - set_logging() - device = select_device(self.device) - if os.path.exists(out): # output dir - shutil.rmtree(out) # delete dir - os.makedirs(out) # make new dir - half = device.type != 'cpu' # half precision only supported on CUDA - - # Load model - model = attempt_load(weights, map_location=device) # load FP32 model - imgsz = check_img_size(imgsz, s=model.stride.max()) # check img_size - if half: - model.half() # to FP16 - - # Second-stage classifier - classify = False - if classify: - modelc = load_classifier(name='resnet101', n=2) # initialize - modelc.load_state_dict(torch.load('weights/resnet101.pt', map_location=device)['model']) # load weights - modelc.to(device).eval() - - # Set Dataloader - vid_path, vid_writer = None, None - if webcam: - view_img = True - cudnn.benchmark = True # set True to speed up constant image size inference - dataset = LoadStreams(source, img_size=imgsz) - else: - save_img = True - dataset = LoadImages(source, img_size=imgsz) - - # Get names and colors - names = model.module.names if hasattr(model, 'module') else model.names - colors = [[random.randint(0, 255) for _ in range(3)] for _ in range(len(names))] - - # Run inference - t0 = time.time() - img = torch.zeros((1, 3, imgsz, imgsz), device=device) # init img - _ = model(img.half() if half else img) if device.type != 'cpu' else None # run once - for path, img, im0s, vid_cap in dataset: - img = torch.from_numpy(img).to(device) - img = img.half() if half else img.float() # uint8 to fp16/32 - img /= 255.0 # 0 - 255 to 0.0 - 1.0 - if img.ndimension() == 3: - img = img.unsqueeze(0) - - # Inference - t1 = time_synchronized() - pred = model(img, augment=self.augment)[0] - - # Apply NMS - pred = non_max_suppression(pred, self.conf_thres, self.iou_thres, classes=self.classes, - agnostic=self.agnostic_nms) - t2 = time_synchronized() - - # Apply Classifier - if classify: - pred = apply_classifier(pred, modelc, img, im0s) - - # Process detections - for i, det in enumerate(pred): # detections per image - if webcam: # batch_size >= 1 - p, s, im0 = path[i], '%g: ' % i, im0s[i].copy() - else: - p, s, im0 = path, '', im0s - - save_path = str(Path(out) / Path(p).name) - txt_path = str(Path(out) / Path(p).stem) + ('_%g' % dataset.frame if dataset.mode == 'video' else '') - s += '%gx%g ' % img.shape[2:] # print string - gn = torch.tensor(im0.shape)[[1, 0, 1, 0]] # normalization gain whwh - if det is not None and len(det): - # Rescale boxes from img_size to im0 size - det[:, :4] = scale_coords(img.shape[2:], det[:, :4], im0.shape).round() - - # Print results - for c in det[:, -1].unique(): - n = (det[:, -1] == c).sum() # detections per class - s += '%g %ss, ' % (n, names[int(c)]) # add to string - - # Write results - for *xyxy, conf, cls in reversed(det): - if save_txt: # Write to file - xywh = (xyxy2xywh(torch.tensor(xyxy).view(1, 4)) / gn).view(-1).tolist() # normalized xywh - line = (cls, conf, *xywh) if self.save_conf else (cls, *xywh) # label format - with open(txt_path + '.txt', 'a') as f: - f.write(('%g ' * len(line) + '\n') % line) - - if save_img or view_img: # Add bbox to image - label = '%s %.2f' % (names[int(cls)], conf) - plot_one_box(xyxy, im0, label=label, color=colors[int(cls)], line_thickness=3) - - # Print time (inference + NMS) - # print('%sDone. (%.3fs)' % (s, t2 - t1)) - # detections = "Total No. of Cardboards:" + str(len(det)) - # cv2.putText(img = im0, text = detections, org = (round(im0.shape[0]*0.08), round(im0.shape[1]*0.08)),fontFace = cv2.FONT_HERSHEY_DUPLEX, fontScale = 1.0,color = (0, 0, 255),thickness = 3) - im0 = cv2.cvtColor(im0, cv2.COLOR_RGB2BGR) - return im0 - # if save_img: - # if dataset.mode == 'images': - - # #im = im0[:, :, ::-1] - # im = Image.fromarray(im0) - - # im.save("output.jpg") - # # cv2.imwrite(save_path, im0) - # else: - # print("Video Processing Needed") - - - # if save_txt or save_img: - # print('Results saved to %s' % Path(out)) - - # print('Done. (%.3fs)' % (time.time() - t0)) - - # return "Done" - - def detect_action(self): - with torch.no_grad(): - img = self.detect() - return img - # bgr_image = cv2.imread("output.jpg") - # im_rgb = cv2.cvtColor(bgr_image, cv2.COLOR_RGB2BGR) - # cv2.imwrite('color_img.jpg', im_rgb) - # opencodedbase64 = encodeImageIntoBase64("color_img.jpg") - # result = {"image": opencodedbase64.decode('utf-8')} - # return result - diff --git a/spaces/sanchanhart/Warehouse_Apparel_Detection/templates/index.html b/spaces/sanchanhart/Warehouse_Apparel_Detection/templates/index.html deleted file mode 100644 index 9d60b551ef40b9b33e45c9e0c10dc32f005d41e0..0000000000000000000000000000000000000000 --- a/spaces/sanchanhart/Warehouse_Apparel_Detection/templates/index.html +++ /dev/null @@ -1,351 +0,0 @@ -<!DOCTYPE html> -<html lang="en"> - -<head> - <meta charset="UTF-8"> - <meta name="viewport" content="width=device-width, initial-scale=1.0"> - <meta http-equiv="X-UA-Compatible" content="ie=edge"> - <title>iNeuron</title> - <link rel="stylesheet" href="https://maxcdn.bootstrapcdn.com/bootstrap/4.0.0/css/bootstrap.min.css" - integrity="sha384-Gn5384xqQ1aoWXA+058RXPxPg6fy4IWvTNh0E263XmFcJlSAwiGgFAW/dAiS6JXm" crossorigin="anonymous"> - <style> - - .iupload h3 { - color: #1b2d6b; - font-size: 30px; - font-weight: 700; - } - - .img-part-1 { - height: 300px; - width: 300px; - margin: 0px auto; - } - - .image-part { - height: 300px; - width: 300px; - border: 1px solid #1b2d6b; - } - - .image-part img { - /* position: absolute; */ - height: 300px; - width: 300px; - display: none; - padding: 5px; - } - - .image-part #video { - /* display: block; */ - height: 300px; - width: 300px; - padding: 5px; - } - - .res-part { - /* margin-left: 20px; */ - height: 400px; - width: 100%; - padding: 5px; - margin: 0px auto; - overflow: auto; - } - - .upload-image { - /* margin-left: 20px; */ - height: 400px; - width: auto;; - padding: 5px; - margin: 0px auto; - overflow: auto; - } - - .resp-img { - height: 400px; - width: auto; - margin: 0px auto; - } - - .jsonRes { - margin-left: 30px; - } - - #send { - cursor: pointer; - } - - .btn-part { - width: 325px; - } - - textarea, - select, - .form-control, - .custom-select, - button.btn, - .btn-primary, - input[type="text"], - input[type="url"], - .uneditable-input { - border: 1px solid #363e75; - outline: 0 !important; - border-radius: 0px; - box-shadow: none; - -webkit-box-shadow: none; - -moz-box-shadow: none; - -moz-transition: none; - -webkit-transition: none; - } - - textarea:focus, - select:focus, - .form-control:focus, - .btn:focus, - .btn-primary:focus, - .custom-select:focus, - input[type="text"]:focus, - .uneditable-input:focus { - border: 1px solid #007bff; - outline: 0 !important; - border-radius: 0px; - box-shadow: none; - -webkit-box-shadow: none; - -moz-box-shadow: none; - -moz-transition: none; - -webkit-transition: none; - } - - #loading { - position: fixed; - left: 0px; - top: 0px; - width: 100%; - height: 100%; - z-index: 9999999999; - overflow: hidden; - background: rgba(255, 255, 255, 0.7); - } - - .loader { - border: 8px solid #f3f3f3; - border-top: 8px solid #363e75; - border-radius: 50%; - width: 60px; - height: 60px; - left: 50%; - margin-left: -4em; - display: block; - animation: spin 2s linear infinite; - } - - .loader, - .loader:after { - display: block; - position: absolute; - top: 50%; - margin-top: -4.05em; - } - - @keyframes spin { - 0% { - transform: rotate(0deg); - } - - 100% { - transform: rotate(360deg); - } - } - - .logo { - position: absolute; - right: 0px; - bottom: 0px; - margin-right: 30px; - margin-bottom: 30px; - } - </style> -</head> - -<body> - <!-- <div class="main container"> - <section class="iupload"> - <h3 class="text-center py-4">Object Detection Using TFOD</h3> - <div class="row"> - <div class="img-part col-md-6"> - <div class="image-part"> - <video autoplay id="video" - poster="https://img.freepik.com/free-vector/group-young-people-posing-photo_52683-18824.jpg?size=338&ext=jpg"></video> - <img src="" id="photo"> - <canvas style="display:none;" id="canvas"></canvas> - </div> - </div> - <div class="col-md-6 col-xs-12 right-part"> - <h5 class="mb-2"> - Prediction Results - </h5> - <div class="row"> - <div class="res-part2 col-md-2 col-xs-12"></div> - </div> - </div> - </div> - </section> - - - </div> --> - - <!-- Header --> -<header class="bg-primary text-center py-5 mb-4"> - <div class="container"> - <h1 class="font-weight-light text-white">Warehouse Apparel Detection using YOLOv5</h1> - </div> - </header> - - <!-- Page Content --> - <div class="container"> - - - <form class="input-group upload-data row"> - <div class="col-xl-6 col-md-6 col-sm-6"> - <button type="button" class="btn btn-primary col-12" id="uload">Upload</button> - </div> - <div class="col-xl-6 col-md-6 col-sm-6"> - <button id="send" type="button" class="btn btn-success col-12">Predict</button> - </div> - - <!-- change url value --> - - <input type="hidden" class="form-control mr-2" id="url" placeholder="Enter REST Api url..." value="../predict" /> - <input name="upload" type="file" id="fileinput" style="position:absolute;top:-500px; display: none;" /><br /> - </form> - - <div class="row"> - <!-- Team Member 1 --> - <div class="col-xl-6 col-md-6 col-sm-6 mb-6"> - <div class="card border-0 shadow upload-image "> - <!-- <img src="https://source.unsplash.com/TMgQMXoglsM/500x350" class="card-img-top" alt="..."> --> - <video autoplay id="video" poster="https://img.freepik.com/free-vector/group-young-people-posing-photo_52683-18824.jpg?size=338&ext=jpg"></video> - <img src="" class="" id="photo"> - <canvas style="display:none;" id="canvas"></canvas> - <!-- <div class="card-body text-center"> - <h5 class="card-title mb-0">Team Member</h5> - </div> --> - </div> - </div> - <!-- Team Member 2 --> - <div class="col-xl-6 col-md-6 col-sm-6 mb-6"> - <div class="card border-0 shadow res-part2"> - <div class="card-body text-center"> - <h5 class="card-title mb-0">Prediction Results</h5> - </div> - </div> - </div> - </div> - <!-- /.row --> - - - </div> - <!-- /.container --> - - <img class="logo" - src="https://apparel.ineuronvision.com/static/logo.png" /> - - - <div id="loading"> - <div class="loader"></div> - </div> - <script src="https://ajax.googleapis.com/ajax/libs/jquery/3.4.1/jquery.min.js"></script> - <script src="https://cdnjs.cloudflare.com/ajax/libs/popper.js/1.12.9/umd/popper.min.js" - integrity="sha384-ApNbgh9B+Y1QKtv3Rn7W3mgPxhU9K/ScQsAP7hUibX39j7fakFPskvXusvfa0b4Q" crossorigin="anonymous"> - </script> - <script src="https://maxcdn.bootstrapcdn.com/bootstrap/4.0.0/js/bootstrap.min.js" - integrity="sha384-JZR6Spejh4U02d8jOt6vLEHfe/JQGiRRSQQxSfFWpi1MquVdAyjUar5+76PVCmYl" crossorigin="anonymous"> - </script> - - <script> - var mybtn = document.getElementById('startbtn'); - var myvideo = document.getElementById('video'); - var mycanvas = document.getElementById('canvas'); - var myphoto = document.getElementById('photo'); - var base_data = ""; - - function sendRequest(base64Data) { - var type = "json"; - if (base64Data != "" || base64Data != null) { - if (type == "imgtobase") { - $(".res-part").html(""); - $(".res-part").html(base64Data); - } else if (type == "basetoimg") { - var imageData = $("#imgstring").val(); - $(".res-part").html(""); - $(".res-part").append("<img src='data:image/jpeg;base64," + imageData + "' alt='' />"); - } else { - var url = $("#url").val(); - $("#loading").show(); - $.ajax({ - url: url, - type: "post", - cache: false, - async: true, - crossDomain: true, - headers: { - 'Content-Type': 'application/json', - 'Access-Control-Allow-Origin': '*' - }, - data: JSON.stringify({ - image: base64Data - }), - success: function (res) { - $(".res-part").html(""); - $(".res-part2").html(""); - var imageData = res.image; - $(".res-part2").append("<img class='resp-img' src='data:image/jpeg;base64," + - imageData + "' alt='' />"); - // $(".res-part").html("<pre>" + JSON.stringify(res[0], undefined, 2) + "</pre>"); - $("#loading").hide(); - } - }); - } - } - } - - $(document).ready(function () { - $("#loading").hide(); - - $('#send').click(function (evt) { - sendRequest(base_data); - }); - - $('#uload').click(function (evt) { - $('#fileinput').focus().trigger('click'); - }); - $("#fileinput").change(function () { - if (this.files && this.files[0]) { - var reader = new FileReader(); - reader.onload = function (e) { - var url = e.target.result; - var img = new Image(); - img.crossOrigin = 'Anonymous'; - img.onload = function () { - var canvas = document.createElement('CANVAS'); - var ctx = canvas.getContext('2d'); - canvas.height = this.height; - canvas.width = this.width; - ctx.drawImage(this, 0, 0); - base_data = canvas.toDataURL('image/jpeg', 1.0).replace( - /^data:image.+;base64,/, ''); - canvas = null; - }; - img.src = url; - $('#photo').attr('src', url); - $('#photo').show(); - $('#video').hide(); - } - reader.readAsDataURL(this.files[0]); - } - }); - }); - </script> -</body> - -</html> \ No newline at end of file diff --git a/spaces/sayakpaul/gopro-deblurring-maxim/maxim/blocks/unet.py b/spaces/sayakpaul/gopro-deblurring-maxim/maxim/blocks/unet.py deleted file mode 100644 index 6000e05ae4472df5191a7af890b4d9274271081f..0000000000000000000000000000000000000000 --- a/spaces/sayakpaul/gopro-deblurring-maxim/maxim/blocks/unet.py +++ /dev/null @@ -1,133 +0,0 @@ -import functools - -import tensorflow as tf -from tensorflow.keras import layers - -from .attentions import RCAB -from .misc_gating import CrossGatingBlock, ResidualSplitHeadMultiAxisGmlpLayer - -Conv1x1 = functools.partial(layers.Conv2D, kernel_size=(1, 1), padding="same") -Conv3x3 = functools.partial(layers.Conv2D, kernel_size=(3, 3), padding="same") -ConvT_up = functools.partial( - layers.Conv2DTranspose, kernel_size=(2, 2), strides=(2, 2), padding="same" -) -Conv_down = functools.partial( - layers.Conv2D, kernel_size=(4, 4), strides=(2, 2), padding="same" -) - - -def UNetEncoderBlock( - num_channels: int, - block_size, - grid_size, - num_groups: int = 1, - lrelu_slope: float = 0.2, - block_gmlp_factor: int = 2, - grid_gmlp_factor: int = 2, - input_proj_factor: int = 2, - channels_reduction: int = 4, - dropout_rate: float = 0.0, - downsample: bool = True, - use_global_mlp: bool = True, - use_bias: bool = True, - use_cross_gating: bool = False, - name: str = "unet_encoder", -): - """Encoder block in MAXIM.""" - - def apply(x, skip=None, enc=None, dec=None): - if skip is not None: - x = tf.concat([x, skip], axis=-1) - - # convolution-in - x = Conv1x1(filters=num_channels, use_bias=use_bias, name=f"{name}_Conv_0")(x) - shortcut_long = x - - for i in range(num_groups): - if use_global_mlp: - x = ResidualSplitHeadMultiAxisGmlpLayer( - grid_size=grid_size, - block_size=block_size, - grid_gmlp_factor=grid_gmlp_factor, - block_gmlp_factor=block_gmlp_factor, - input_proj_factor=input_proj_factor, - use_bias=use_bias, - dropout_rate=dropout_rate, - name=f"{name}_SplitHeadMultiAxisGmlpLayer_{i}", - )(x) - x = RCAB( - num_channels=num_channels, - reduction=channels_reduction, - lrelu_slope=lrelu_slope, - use_bias=use_bias, - name=f"{name}_channel_attention_block_1{i}", - )(x) - - x = x + shortcut_long - - if enc is not None and dec is not None: - assert use_cross_gating - x, _ = CrossGatingBlock( - features=num_channels, - block_size=block_size, - grid_size=grid_size, - dropout_rate=dropout_rate, - input_proj_factor=input_proj_factor, - upsample_y=False, - use_bias=use_bias, - name=f"{name}_cross_gating_block", - )(x, enc + dec) - - if downsample: - x_down = Conv_down( - filters=num_channels, use_bias=use_bias, name=f"{name}_Conv_1" - )(x) - return x_down, x - else: - return x - - return apply - - -def UNetDecoderBlock( - num_channels: int, - block_size, - grid_size, - num_groups: int = 1, - lrelu_slope: float = 0.2, - block_gmlp_factor: int = 2, - grid_gmlp_factor: int = 2, - input_proj_factor: int = 2, - channels_reduction: int = 4, - dropout_rate: float = 0.0, - downsample: bool = True, - use_global_mlp: bool = True, - use_bias: bool = True, - name: str = "unet_decoder", -): - - """Decoder block in MAXIM.""" - - def apply(x, bridge=None): - x = ConvT_up( - filters=num_channels, use_bias=use_bias, name=f"{name}_ConvTranspose_0" - )(x) - x = UNetEncoderBlock( - num_channels=num_channels, - num_groups=num_groups, - lrelu_slope=lrelu_slope, - block_size=block_size, - grid_size=grid_size, - block_gmlp_factor=block_gmlp_factor, - grid_gmlp_factor=grid_gmlp_factor, - channels_reduction=channels_reduction, - use_global_mlp=use_global_mlp, - dropout_rate=dropout_rate, - downsample=False, - use_bias=use_bias, - name=f"{name}_UNetEncoderBlock_0", - )(x, skip=bridge) - - return x - - return apply diff --git a/spaces/scedlatioru/img-to-music/example/Crack Vcarve Pro 65rar.md b/spaces/scedlatioru/img-to-music/example/Crack Vcarve Pro 65rar.md deleted file mode 100644 index a1383f4c0a9f8e7ab4e7643358af61cf4ae7ba5c..0000000000000000000000000000000000000000 --- a/spaces/scedlatioru/img-to-music/example/Crack Vcarve Pro 65rar.md +++ /dev/null @@ -1,10 +0,0 @@ -<br /> -<p>lolyvidio 6640f8074e adobe acrobat professional 8.1 crack serial key download<br/>jasmine and honey 2 torrent<br/>vidzone-x crack<br/>xsimlock 2.1.zip<br/>csype: mastering c++ programming fundamentals (8th edition) 2015<br/>protozapp 1.2 crack<br/>shamsul aziz hossain tahirat 2 movie in bangla<br/>titan ae3.2 crack<br/> </p> -<h2>Crack Vcarve Pro 65rar</h2><br /><p><b><b>DOWNLOAD</b> &#10026; <a href="https://gohhs.com/2uEAyS">https://gohhs.com/2uEAyS</a></b></p><br /><br /> -<p>itagomir 1189f9a962 bulletstorm download free<br/>waves complete v9r29 incl r2r win<br/>wwe 2 complete series 720p download<br/>dynasty warriors 7: empires v2.1.0.0 crack<br/>crucial 16gb crucial sportxtreme 9500cta512mb-16g-kr/pt<br/>shilpa mysore biography free download<br/>waves complete v9r29 incl r2r win<br/>kwik edijet 4.5.0 r26 1 packlet<br/>harmony v7.3.2.0 nougat full cracked<br/> </p> -<p>yhcjhb d25b4f6f0f1 adobe photoshop elements 12.0.1.0 crack 64 bit serial<br/>nvidia gtx 960 mega 32 free download<br/>nokia lumia 1520 unlimited internet download<br/>the avengers: the dark knight rises 720p full movie download<br/>hello my name is lisa 9.1 crack<br/>xsimlock 2.zip<br/>ms visual c++ 2008 runtime environment 64 bit free download<br/>cricut design studio software keygen download<br/>veeram 1.1 crack<br/>waves complete v9r29 incl r2r win<br/>luxonix pure 4.0 crack<br/>titan ae3.2 crack<br/> </p> -<p>najib idr 3.0.1 full version [all layers] | location italia | torino | download full version | serial number | serayanggah merdeka selatan | torrent mp3 | - download full version, crack and serial number</p> -<p>sammachiel 9c9bcee56 myki 12 torrent cracked 9.99 free download full vpn full version seven for all zip full version crack naptons kebijakan 2011 torrent by sooraj kv 2.1.2 full version without cracks full version download torrent cracked version 8.7.9 full version full version torrent full version zip full version registration [best] dreamland (1996) free download torrent full version without crack patch lalsan onlar hoca sesi 1.0.0 full version hd 1080p free torrent full version download nulled keygen dfx audio enhancer 10.137 winamp pro 5.3367 full keygen free download nulled scargar new free software utorrent free full version with keygen free full version registration [best] dreamland (1996) free download torrent free full version without cracked patch lalsan onlar hoca sesi 1.</p> -<p></p> 899543212b<br /> -<br /> -<br /> \ No newline at end of file diff --git a/spaces/scedlatioru/img-to-music/example/Of Windows 10 Themes Deviantart !FREE!.md b/spaces/scedlatioru/img-to-music/example/Of Windows 10 Themes Deviantart !FREE!.md deleted file mode 100644 index 08140231f2244a9e6c92a3f80ebc42c4498b6888..0000000000000000000000000000000000000000 --- a/spaces/scedlatioru/img-to-music/example/Of Windows 10 Themes Deviantart !FREE!.md +++ /dev/null @@ -1,6 +0,0 @@ -<h2>Of Windows 10 Themes Deviantart</h2><br /><p><b><b>Download Zip</b> &#128505; <a href="https://gohhs.com/2uEAG2">https://gohhs.com/2uEAG2</a></b></p><br /><br /> -<br /> -If you want to theme your Windows 10 desktop, we recommend you install ... out the Windows 7 visual styles page on DeviantArt to find themes. 1fdad05405<br /> -<br /> -<br /> -<p></p> diff --git a/spaces/scedlatioru/img-to-music/example/Oxford Paravia English-Italian-English Dictionary 2006 Download.md b/spaces/scedlatioru/img-to-music/example/Oxford Paravia English-Italian-English Dictionary 2006 Download.md deleted file mode 100644 index addfa287afce57cd4eca183bfc23c5b4b55f1357..0000000000000000000000000000000000000000 --- a/spaces/scedlatioru/img-to-music/example/Oxford Paravia English-Italian-English Dictionary 2006 Download.md +++ /dev/null @@ -1,7 +0,0 @@ -<br /> -<p>6 [pdf]english italian dictionary italiano inglese dizionario english-italian-dictionary-italiano-inglese-dizionario-.pdf are you also searching for english italian dictionary italiano inglese dizionario get it only at our library now. english italian dictionary, italiano-inglese dizionario. rilasciato: may 18, 2012 autore: oxford learn italian team. english italian dictionary italiano inglese dizionario ebooks is [pdf]italian english dizionario parlato downloads - free italiano-inglese-dizionario-parlato-downloads-free-.pdf are you also searching for italiano inglese dizionario parlato downloads - free get it only at our library now. search results for italiano inglese dizionario parlato; italian english dictionary - dizionario il ragazzino 2013 dizionario inglese-italiano italian-english.</p> -<p>it's a dictionary is rather large to be included in a dictionary package. but dictionary.com has provided us with a bundle of e-books in two languages. the package is a free download.the books included are the bible, believer's toolkit, word reference, letter by letter, and 1000+ more dictionaries. our group was very pleased with the choices, and you can read a review of each book. the reviewers are mike nett and francis leong. the downloaded file is 126mb in size.</p> -<h2>Oxford Paravia English-Italian-English Dictionary 2006 Download</h2><br /><p><b><b>Download</b> &#10145; <a href="https://gohhs.com/2uEA8c">https://gohhs.com/2uEA8c</a></b></p><br /><br /> -<p>english italian word reference dictionary for the united nations by the united nations - undf - english - italian - pocket translation dict. - italian for test and practice; english-italian dictionary with english translations - kindle edition italiano inglese - dizionario. english italian book dictionary (italian english) | kindle edition; piazzas.it english italian book dictionary (italian english) - il panino - dai 1.935.679 download. download gi; italian-english dictionary vocabulary. available for. the english italian dictionary includes a host of headwords as well as grammar, grammatical and pronunciation. the dictionary has information, which is available in english and italian. one is available as a download version of the dictionary, and.</p> 899543212b<br /> -<br /> -<br /> \ No newline at end of file diff --git a/spaces/scedlatioru/img-to-music/example/Vola SkiAlp Pro 5.0.17 Crack BEST.md b/spaces/scedlatioru/img-to-music/example/Vola SkiAlp Pro 5.0.17 Crack BEST.md deleted file mode 100644 index 6086d9734068d5e6070bfaf5a366433a5a680c27..0000000000000000000000000000000000000000 --- a/spaces/scedlatioru/img-to-music/example/Vola SkiAlp Pro 5.0.17 Crack BEST.md +++ /dev/null @@ -1,108 +0,0 @@ -<br /> -<h1>Vola SkiAlp Pro 5.0.17 Crack: Everything You Need to Know</h1> -<p>Vola SkiAlp Pro 5.0.17 is one of the most popular and advanced ski timing software in the market. It can help you organize, manage and run any kind of ski competition, from alpine skiing to speed skiing, from telemark skiing to ski school races. But what if you don't have the license key to activate it? Is there a way to get Vola SkiAlp Pro 5.0.17 Crack and use it for free? In this article, we will tell you everything you need to know about Vola SkiAlp Pro 5.0.17 Crack, including what it is, how it works, where to download it and how to use it safely and effectively.</p> -<h2>Vola SkiAlp Pro 5.0.17 Crack</h2><br /><p><b><b>Download</b> &#10042; <a href="https://gohhs.com/2uEzej">https://gohhs.com/2uEzej</a></b></p><br /><br /> - -<h2>What is Vola SkiAlp Pro 5.0.17?</h2> -<p>Vola SkiAlp Pro 5.0.17 is the latest version of Vola SkiAlp Pro, a software suite developed by Vola Timing, a French company that specializes in timing systems for sports events. Vola SkiAlp Pro is designed to meet the needs of ski organizers, timers, judges and media who want to run ski competitions smoothly and efficiently. Some of the features of Vola SkiAlp Pro 5.0.17 are:</p> -<ul> -<li>It supports various types of ski races, such as alpine skiing, speed skiing, telemark skiing, ski school races, etc.</li> -<li>It can handle multiple races simultaneously with different formats and rules.</li> -<li>It can import and export data from various sources, such as FIS, USSA, ACA, etc.</li> -<li>It can generate and print reports, rankings, start lists, result lists, etc.</li> -<li>It can display live results on screens, websites or mobile devices.</li> -<li>It can integrate with various hardware devices, such as timers, photocells, printers, etc.</li> -</ul> -<p>Vola SkiAlp Pro 5.0.17 is compatible with Windows operating systems and requires a license key to activate it. The license key can be purchased from the official website of Vola Timing or from authorized resellers. However, the price of the license key may vary depending on the type and number of races you want to manage with the software.</p> - -<h2>What is Vola SkiAlp Pro 5.0.17 Crack?</h2> -<p>Vola SkiAlp Pro 5.0.17 Crack is a software tool that can bypass the license key verification process of Vola SkiAlp Pro 5.0.17 and allow you to use it without any restrictions. Vola SkiAlp Pro 5.0.17 Crack is real and proven, at least for our team. In a couple of months it has been downloaded by hundreds of thousands of users. The software is not new but it's enhanced, it's optimized and updated. Vola SkiAlp Pro 5.0.17 Crack has been tested by many experts and users who have confirmed its functionality and safety. Some of the benefits of using Vola SkiAlp Pro 5.0.17 Crack are:</p> -<ul> -<li>It is free and easy to use.</li> -<li>It does not require any installation or registration.</li> -<li>It does not affect the performance or quality of Vola SkiAlp Pro 5.0.17.</li> -<li>It does not contain any viruses or malware.</li> -<li>It works with any version of Windows operating system.</li> -</ul> -<p>Vola SkiAlp Pro 5.0.17 Crack is available for download from various sources on the internet . However, you should be careful when choosing where to download it from, as some websites may offer fake or harmful versions of the software that can damage your computer or steal your personal information.</p> - -<h2>How to Use Vola SkiAlp Pro 5.0.17 Crack?</h2> -<p>To use Vola SkiAlp Pro 5.0</p> -<p>Once you have downloaded Vola SkiAlp Pro 5.0.17 Crack, you can use it to activate Vola SkiAlp Pro 5.0.17 and enjoy all its features for free. Here is how to do it:</p> -<p></p> -<ol> -<li>Open the folder where you have extracted Vola SkiAlp Pro 5.0.17 Crack.</li> -<li>Copy the crack.exe file and paste it into the folder where you have installed Vola SkiAlp Pro 5.0.17.</li> -<li>Run the crack.exe file as administrator and wait for a few seconds.</li> -<li>A message will appear saying that Vola SkiAlp Pro 5.0.17 has been successfully cracked.</li> -<li>Close the crack.exe file and open Vola SkiAlp Pro 5.0.17.</li> -<li>You will see that the software is activated and ready to use.</li> -</ol> -<p>Congratulations! You have successfully cracked Vola SkiAlp Pro 5.0.17 and can now use it for any ski competition you want.</p> - -<h2>Is Vola SkiAlp Pro 5.0.17 Crack Safe and Legal?</h2> -<p>Vola SkiAlp Pro 5.0.17 Crack is a software tool that can help you save money and time by allowing you to use Vola SkiAlp Pro 5.0.17 for free. However, you may wonder if it is safe and legal to use it.</p> -<p>The answer is: it depends. Vola SkiAlp Pro 5.0.17 Crack is safe to use if you download it from a reliable source and scan it with an antivirus program before using it. However, some websites may offer fake or harmful versions of the software that can damage your computer or steal your personal information. Therefore, you should be careful when choosing where to download Vola SkiAlp Pro 5.0.17 Crack from and always check the reviews and ratings of the source before downloading anything.</p> -<p>Vola SkiAlp Pro 5.0.17 Crack is legal to use if you use it for personal or educational purposes only. However, if you use it for commercial or professional purposes, you may violate the terms and conditions of Vola Timing and infringe their intellectual property rights. Therefore, you should respect the rights of the software developers and purchase a license key from them or their authorized resellers if you want to use Vola SkiAlp Pro 5.0.17 for your business or organization.</p> - -<h2>Conclusion</h2> -<p>Vola SkiAlp Pro 5.0 -<p>17 Crack is a software tool that can help you use Vola SkiAlp Pro 5.0.17 for free and enjoy its professional features for ski timing. However, you should be aware of the risks and responsibilities of using it, as it may violate the rights of the software developers and expose you to legal or security issues.</p> - -<h2>Alternatives to Vola SkiAlp Pro 5.0.17 Crack</h2> -<p>If you are not comfortable with using Vola SkiAlp Pro 5.0.17 Crack, or if you want to support the software developers and respect their work, you may want to consider some alternatives to Vola SkiAlp Pro 5.0.17 Crack. Some of them are:</p> -<ul> -<li>Purchase a license key from Vola Timing or their authorized resellers. This is the most legitimate and ethical way to use Vola SkiAlp Pro 5.0.17, as you will get access to all its features and updates, as well as technical support and customer service from the developers. The price of the license key may vary depending on the type and number of races you want to manage with the software, but it is usually affordable and worth the investment.</li> -<li>Use a free trial version of Vola SkiAlp Pro 5.0.17. This is a good option if you want to test the software before buying it, or if you only need it for a short period of time or a limited number of races. The free trial version of Vola SkiAlp Pro 5.0.17 allows you to use all its features for 30 days, after which you will need to purchase a license key to continue using it.</li> -<li>Use a different ski timing software that is free or cheaper than Vola SkiAlp Pro 5.0.17. This is a possible option if you are not satisfied with Vola SkiAlp Pro 5.0.17, or if you are looking for a simpler or more affordable solution for ski timing. However, you should be aware that other ski timing software may not have the same features, quality or compatibility as Vola SkiAlp Pro 5.0.17, and may not meet your expectations or needs.</li> -</ul> -<p>These are some of the alternatives to Vola SkiAlp Pro 5.0 -<p>17 Crack and how to use it for free and enjoy its professional features for ski timing. However, you should be aware of the risks and responsibilities of using it, as it may violate the rights of the software developers and expose you to legal or security issues.</p> - -<h2>Comparison of Vola SkiAlp Pro 5.0.17 Crack with Other Ski Timing Software</h2> -<p>Vola SkiAlp Pro 5.0.17 Crack is a software tool that can help you use Vola SkiAlp Pro 5.0.17 for free and enjoy its professional features for ski timing. But how does it compare with other ski timing software available in the market? In this section, we will compare Vola SkiAlp Pro 5.0.17 Crack with some of the most popular and widely used ski timing software, such as:</p> -<ul> -<li>SkiPro: SkiPro is a software suite that allows you to manage, time and process results for alpine skiing competitions. It supports various types of races, such as slalom, giant slalom, super-G, downhill, etc. It can import and export data from FIS and other sources. It can generate and print reports, rankings, start lists, result lists, etc. It can display live results on screens, websites or mobile devices. It can integrate with various hardware devices, such as timers, photocells, printers, etc.</li> -<li>SkiTime: SkiTime is a software suite that allows you to manage, time and process results for alpine skiing competitions. It supports various types of races, such as slalom, giant slalom, super-G, downhill, etc. It can import and export data from FIS and other sources. It can generate and print reports, rankings, start lists, result lists, etc. It can display live results on screens, websites or mobile devices. It can integrate with various hardware devices, such as timers, photocells, printers, etc.</li> -<li>SkiRace: SkiRace is a software suite that allows you to manage -<p>, time and process results for alpine skiing competitions. It supports various types of races, such as slalom, giant slalom, super-G, downhill, etc. It can import and export data from FIS and other sources. It can generate and print reports, rankings, start lists, result lists, etc. It can display live results on screens, websites or mobile devices. It can integrate with various hardware devices, such as timers, photocells, printers, etc.</li> -</ul> -<p>These are some of the most popular and widely used ski timing software in the market, but there are also others that you can explore and compare. However, none of them can offer you the same features and quality as Vola SkiAlp Pro 5.0.17, which is why many ski organizers, timers and judges prefer to use it for their ski competitions.</p> - -<h2>Pros and Cons of Using Vola SkiAlp Pro 5.0.17 Crack</h2> -<p>Vola SkiAlp Pro 5.0.17 Crack is a software tool that can help you use Vola SkiAlp Pro 5.0.17 for free and enjoy its professional features for ski timing. However, like any other software tool, it has its pros and cons that you should consider before using it. Here are some of them:</p> -<table> -<tr> -<th>Pros</th> -<th>Cons</th> -</tr> -<tr> -<td>It is free and easy to use.</td> -<td>It may violate the rights of the software developers and expose you to legal or security issues.</td> -</tr> -<tr> -<td>It does not require any installation or registration.</td> -<td>It may not work with future updates or versions of Vola SkiAlp Pro 5.0.17.</td> -</tr> -<tr> -<td>It does not affect the performance or quality of Vola SkiAlp Pro 5.0.17.</td> -<td>It may not be compatible with some hardware devices or data sources.</td> -</tr> -<tr> -<td>It does not contain any viruses or malware.</td> -<td>It may be hard to find a reliable source to download it from.</td> -</tr> -<tr> -<td>It works with any version of Windows operating system.</td> -<td>It may not be ethical or fair to use it for commercial or professional purposes.</td> -</tr> -</table> -<p>These are some of the pros and cons of using Vola SkiAlp Pro 5 -<h2>Conclusion</h2> -<p>Vola SkiAlp Pro 5.0.17 is one of the best ski timing software in the market, as it can help you organize, manage and run any kind of ski competition with ease and accuracy. However, it requires a license key to activate it, which may be expensive or hard to obtain for some users. That's why some people may look for Vola SkiAlp Pro 5.0.17 Crack, a software tool that can bypass the license key verification process and allow you to use Vola SkiAlp Pro 5.0.17 for free.</p> -<p>Vola SkiAlp Pro 5.0.17 Crack is real and proven, at least for our team. It has been downloaded by hundreds of thousands of users and tested by many experts and users who have confirmed its functionality and safety. It is free and easy to use, and it does not affect the performance or quality of Vola SkiAlp Pro 5.0.17. However, it also has some drawbacks and risks that you should be aware of before using it.</p> -<p>Vola SkiAlp Pro 5.0.17 Crack may violate the rights of the software developers and expose you to legal or security issues. It may not work with future updates or versions of Vola SkiAlp Pro 5.0.17, or with some hardware devices or data sources. It may be hard to find a reliable source to download it from, and it may not be ethical or fair to use it for commercial or professional purposes.</p> -<p>Therefore, you should weigh the pros and cons of using Vola SkiAlp Pro 5.0.17 Crack carefully and decide whether it is worth it or not. If you decide to use it, you should do it at your own risk and responsibility, and only for personal or educational purposes. If you want to support the software developers and respect their work, you should purchase a license key from them or their authorized resellers.</p> -<p>We hope this article has been helpful and informative for you. If you have any questions or comments, please feel free to share them with us below.</p> 3cee63e6c2<br /> -<br /> -<br /> \ No newline at end of file diff --git a/spaces/segments-tobias/conex/espnet2/bin/launch.py b/spaces/segments-tobias/conex/espnet2/bin/launch.py deleted file mode 100644 index c1c86f9b7dab514fadc76034f4026664b5af7f9e..0000000000000000000000000000000000000000 --- a/spaces/segments-tobias/conex/espnet2/bin/launch.py +++ /dev/null @@ -1,385 +0,0 @@ -#!/usr/bin/env python3 -import argparse -import logging -import os -from pathlib import Path -import shlex -import shutil -import subprocess -import sys -import uuid - -from espnet.utils.cli_utils import get_commandline_args -from espnet2.utils.types import str2bool -from espnet2.utils.types import str_or_none - - -def get_parser(): - parser = argparse.ArgumentParser( - description="Launch distributed process with appropriate options. ", - formatter_class=argparse.ArgumentDefaultsHelpFormatter, - ) - parser.add_argument( - "--cmd", - help="The path of cmd script of Kaldi: run.pl. queue.pl, or slurm.pl", - default="utils/run.pl", - ) - parser.add_argument( - "--log", - help="The path of log file used by cmd", - default="run.log", - ) - parser.add_argument( - "--max_num_log_files", - help="The maximum number of log-files to be kept", - default=1000, - ) - parser.add_argument( - "--ngpu", type=int, default=1, help="The number of GPUs per node" - ) - egroup = parser.add_mutually_exclusive_group() - egroup.add_argument("--num_nodes", type=int, default=1, help="The number of nodes") - egroup.add_argument( - "--host", - type=str, - default=None, - help="Directly specify the host names. The job are submitted via SSH. " - "Multiple host names can be specified by splitting by comma. e.g. host1,host2" - " You can also the device id after the host name with ':'. e.g. " - "host1:0:2:3,host2:0:2. If the device ids are specified in this way, " - "the value of --ngpu is ignored.", - ) - parser.add_argument( - "--envfile", - type=str_or_none, - default="path.sh", - help="Source the shell script before executing command. " - "This option is used when --host is specified.", - ) - - parser.add_argument( - "--multiprocessing_distributed", - type=str2bool, - default=True, - help="Distributed method is used when single-node mode.", - ) - parser.add_argument( - "--master_port", - type=int, - default=None, - help="Specify the port number of master" - "Master is a host machine has RANK0 process.", - ) - parser.add_argument( - "--master_addr", - type=str, - default=None, - help="Specify the address s of master. " - "Master is a host machine has RANK0 process.", - ) - parser.add_argument( - "--init_file_prefix", - type=str, - default=".dist_init_", - help="The file name prefix for init_file, which is used for " - "'Shared-file system initialization'. " - "This option is used when --port is not specified", - ) - parser.add_argument("args", type=str, nargs="+") - return parser - - -def main(cmd=None): - logfmt = "%(asctime)s (%(module)s:%(lineno)d) %(levelname)s: %(message)s" - logging.basicConfig(level=logging.INFO, format=logfmt) - logging.info(get_commandline_args()) - - parser = get_parser() - args = parser.parse_args(cmd) - args.cmd = shlex.split(args.cmd) - - if args.host is None and shutil.which(args.cmd[0]) is None: - raise RuntimeError( - f"The first args of --cmd should be a script path. e.g. utils/run.pl: " - f"{args.cmd[0]}" - ) - - # Specify init_method: - # See: https://pytorch.org/docs/stable/distributed.html#initialization - if args.host is None and args.num_nodes <= 1: - # Automatically set init_method if num_node=1 - init_method = None - else: - if args.master_port is None: - # Try "shared-file system initialization" if master_port is not specified - # Give random name to avoid reusing previous file - init_file = args.init_file_prefix + str(uuid.uuid4()) - init_file = Path(init_file).absolute() - Path(init_file).parent.mkdir(exist_ok=True, parents=True) - init_method = ["--dist_init_method", f"file://{init_file}"] - else: - init_method = ["--dist_master_port", str(args.master_port)] - - # This can be omitted if slurm mode - if args.master_addr is not None: - init_method += ["--dist_master_addr", args.master_addr] - elif args.host is not None: - init_method += [ - "--dist_master_addr", - args.host.split(",")[0].split(":")[0], - ] - - # Log-rotation - for i in range(args.max_num_log_files - 1, -1, -1): - if i == 0: - p = Path(args.log) - pn = p.parent / (p.stem + ".1" + p.suffix) - else: - _p = Path(args.log) - p = _p.parent / (_p.stem + f".{i}" + _p.suffix) - pn = _p.parent / (_p.stem + f".{i + 1}" + _p.suffix) - - if p.exists(): - if i == args.max_num_log_files - 1: - p.unlink() - else: - shutil.move(p, pn) - - processes = [] - # Submit command via SSH - if args.host is not None: - hosts = [] - ids_list = [] - # e.g. args.host = "host1:0:2,host2:0:1" - for host in args.host.split(","): - # e.g host = "host1:0:2" - sps = host.split(":") - host = sps[0] - if len(sps) > 1: - ids = [int(x) for x in sps[1:]] - else: - ids = list(range(args.ngpu)) - hosts.append(host) - ids_list.append(ids) - - world_size = sum(max(len(x), 1) for x in ids_list) - logging.info(f"{len(hosts)}nodes with world_size={world_size} via SSH") - - if args.envfile is not None: - env = f"source {args.envfile}" - else: - env = "" - - if args.log != "-": - Path(args.log).parent.mkdir(parents=True, exist_ok=True) - f = Path(args.log).open("w", encoding="utf-8") - else: - # Output to stdout/stderr - f = None - - rank = 0 - for host, ids in zip(hosts, ids_list): - ngpu = 1 if len(ids) > 0 else 0 - ids = ids if len(ids) > 0 else ["none"] - - for local_rank in ids: - cmd = ( - args.args - + [ - "--ngpu", - str(ngpu), - "--multiprocessing_distributed", - "false", - "--local_rank", - str(local_rank), - "--dist_rank", - str(rank), - "--dist_world_size", - str(world_size), - ] - + init_method - ) - if ngpu == 0: - # Gloo supports both GPU and CPU mode. - # See: https://pytorch.org/docs/stable/distributed.html - cmd += ["--dist_backend", "gloo"] - - heredoc = f"""<< EOF -set -euo pipefail -cd {os.getcwd()} -{env} -{" ".join([c if len(c) != 0 else "''" for c in cmd])} -EOF -""" - - # FIXME(kamo): The process will be alive - # even if this program is stopped because we don't set -t here, - # i.e. not assigning pty, - # and the program is not killed when SSH connection is closed. - process = subprocess.Popen( - ["ssh", host, "bash", heredoc], - stdout=f, - stderr=f, - ) - - processes.append(process) - - rank += 1 - - # If Single node - elif args.num_nodes <= 1: - if args.ngpu > 1: - if args.multiprocessing_distributed: - # NOTE: - # If multiprocessing_distributed=true, - # -> Distributed mode, which is multi-process and Multi-GPUs. - # and TCP initializetion is used if single-node case: - # e.g. init_method="tcp://localhost:20000" - logging.info(f"single-node with {args.ngpu}gpu on distributed mode") - else: - # NOTE: - # If multiprocessing_distributed=false - # -> "DataParallel" mode, which is single-process - # and Multi-GPUs with threading. - # See: - # https://discuss.pytorch.org/t/why-torch-nn-parallel-distributeddataparallel-runs-faster-than-torch-nn-dataparallel-on-single-machine-with-multi-gpu/32977/2 - logging.info(f"single-node with {args.ngpu}gpu using DataParallel") - - # Using cmd as it is simply - cmd = ( - args.cmd - # arguments for ${cmd} - + ["--gpu", str(args.ngpu), args.log] - # arguments for *_train.py - + args.args - + [ - "--ngpu", - str(args.ngpu), - "--multiprocessing_distributed", - str(args.multiprocessing_distributed), - ] - ) - process = subprocess.Popen(cmd) - processes.append(process) - - elif Path(args.cmd[0]).name == "run.pl": - raise RuntimeError("run.pl doesn't support submitting to the other nodes.") - - elif Path(args.cmd[0]).name == "ssh.pl": - raise RuntimeError("Use --host option instead of ssh.pl") - - # If Slurm - elif Path(args.cmd[0]).name == "slurm.pl": - logging.info(f"{args.num_nodes}nodes and {args.ngpu}gpu-per-node using srun") - cmd = ( - args.cmd - # arguments for ${cmd} - + [ - "--gpu", - str(args.ngpu), - "--num_threads", - str(max(args.ngpu, 1)), - "--num_nodes", - str(args.num_nodes), - args.log, - "srun", - # Inherit all enviroment variable from parent process - "--export=ALL", - ] - # arguments for *_train.py - + args.args - + [ - "--ngpu", - str(args.ngpu), - "--multiprocessing_distributed", - "true", - "--dist_launcher", - "slurm", - ] - + init_method - ) - if args.ngpu == 0: - # Gloo supports both GPU and CPU mode. - # See: https://pytorch.org/docs/stable/distributed.html - cmd += ["--dist_backend", "gloo"] - process = subprocess.Popen(cmd) - processes.append(process) - - else: - # This pattern can also works with Slurm. - - logging.info(f"{args.num_nodes}nodes and {args.ngpu}gpu-per-node using mpirun") - cmd = ( - args.cmd - # arguments for ${cmd} - + [ - "--gpu", - str(args.ngpu), - "--num_threads", - str(max(args.ngpu, 1)), - # Make sure scheduler setting, i.e. conf/queue.conf - # so that --num_nodes requires 1process-per-node - "--num_nodes", - str(args.num_nodes), - args.log, - "mpirun", - # -np option can be omitted with Torque/PBS - "-np", - str(args.num_nodes), - ] - # arguments for *_train.py - + args.args - + [ - "--ngpu", - str(args.ngpu), - "--multiprocessing_distributed", - "true", - "--dist_launcher", - "mpi", - ] - + init_method - ) - if args.ngpu == 0: - # Gloo supports both GPU and CPU mode. - # See: https://pytorch.org/docs/stable/distributed.html - cmd += ["--dist_backend", "gloo"] - process = subprocess.Popen(cmd) - processes.append(process) - - logging.info(f"log file: {args.log}") - - failed = False - while any(p.returncode is None for p in processes): - for process in processes: - # If any process is failed, try to kill the other processes too - if failed and process.returncode is not None: - process.kill() - else: - try: - process.wait(0.5) - except subprocess.TimeoutExpired: - pass - - if process.returncode is not None and process.returncode != 0: - failed = True - - for process in processes: - if process.returncode != 0: - print( - subprocess.CalledProcessError(returncode=process.returncode, cmd=cmd), - file=sys.stderr, - ) - p = Path(args.log) - if p.exists(): - with p.open() as f: - lines = list(f) - raise RuntimeError( - f"\n################### The last 1000 lines of {args.log} " - f"###################\n" + "".join(lines[-1000:]) - ) - else: - raise RuntimeError - - -if __name__ == "__main__": - main() diff --git a/spaces/shamaayan/Wisi/README.md b/spaces/shamaayan/Wisi/README.md deleted file mode 100644 index 83422c148d245a690740f97c944e8178ec2000b8..0000000000000000000000000000000000000000 --- a/spaces/shamaayan/Wisi/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Wisi -emoji: 💻 -colorFrom: blue -colorTo: pink -sdk: gradio -sdk_version: 3.29.0 -app_file: app.py -pinned: true -license: apache-2.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference \ No newline at end of file diff --git a/spaces/shi-labs/FcF-Inpainting/training/losses/ade20k/segm_lib/nn/modules/__init__.py b/spaces/shi-labs/FcF-Inpainting/training/losses/ade20k/segm_lib/nn/modules/__init__.py deleted file mode 100644 index bc8709d92c610b36e0bcbd7da20c1eb41dc8cfcf..0000000000000000000000000000000000000000 --- a/spaces/shi-labs/FcF-Inpainting/training/losses/ade20k/segm_lib/nn/modules/__init__.py +++ /dev/null @@ -1,12 +0,0 @@ -# -*- coding: utf-8 -*- -# File : __init__.py -# Author : Jiayuan Mao -# Email : maojiayuan@gmail.com -# Date : 27/01/2018 -# -# This file is part of Synchronized-BatchNorm-PyTorch. -# https://github.com/vacancy/Synchronized-BatchNorm-PyTorch -# Distributed under MIT License. - -from .batchnorm import SynchronizedBatchNorm1d, SynchronizedBatchNorm2d, SynchronizedBatchNorm3d -from .replicate import DataParallelWithCallback, patch_replication_callback diff --git a/spaces/silencewing/server/youyou/.history/math_20230613231851.html b/spaces/silencewing/server/youyou/.history/math_20230613231851.html deleted file mode 100644 index ded3800d7b012984e352d1574ad84a4ee0147eeb..0000000000000000000000000000000000000000 --- a/spaces/silencewing/server/youyou/.history/math_20230613231851.html +++ /dev/null @@ -1,234 +0,0 @@ -<!-- /* - * @Author: Chauncey Yuan - * @Date: 2019-08-01 18:25:30 - * @Last Modified by: Chauncey Yuan - * @Last Modified time: 2019-08-03 08:24:27 - */ --> - -<!DOCTYPE html> -<html lang="en"> - -<head> - <meta charset="UTF-8"> - <meta name="viewport" content="width=device-width, initial-scale=1.0"> - <meta http-equiv="X-UA-Compatible" content="ie=edge"> - <title>Document</title> - <style> - html { - height: 100%; - background: #b3b4a6; - } - .cal_game { - width: 100vw; - margin: 0 auto; - } - - table { - margin: 0 auto; - } - - td { - width: 150px; - height: 50px; - text-align: center; - line-height: 40px; - border: 1px solid #2b7e71; - } - - /* .btn { - width: 150px; - height: 50px; - color: #2b7e71;; - border: 1px solid #2b7e71; - background-color: transparent; - } - - .btn:hover { - background-color: #2b7e71; - color: #fff; - cursor: pointer; - } */ - - input { - height: 30px; - border: 1px solid #999999; - background-color: transparent; - - } - - input:hover { - border: 3px solid #314834; - - } - </style> -</head> - -<body> - <div class="cal_game"> - <table> - <tr> - <td id="output" colspan="3"></td> - </tr> - <tr> - <td>题目</td> - <td id="eq" colspan="2"></td> - </tr> - <tr> - <td>答案</td> - <td colspan="2"><input type="text" name="" id="input" placeholder="请输入计算结果:"></td> - <!-- <td><button class="btn" onclick="xun()">计算</button></td> --> - </tr> - <tr> - <td>正误</td> - <td id="result" colspan="2"></td> - </tr> - <tr> - <td>得分</td> - <td id="score" colspan="2"></td> - </tr> - <tr> - <td id="accuracy" colspan="3"></td> - </tr> - </table> - </div> - <script> - // 定义题号,使用时+1 - var i = 0; - // 定义计算正确的次数 - var right_times = 0; - // 定义分数 - var score = 0; - // 定义正确率 - var accuracy = 0; - // 定义加减法符号数组,用于后边产生0或1的随机数,来确定加减法 - var sign_operation_list = ["+", "-"]; - // 生成0或者1的随机数,确定加减法 - var sign_operation = Math.floor((Math.random() * (1 - 0 + 1)) + 0); - - var max_num = 20; - // 显示题号 - document.getElementById("output").innerHTML = "第" + (i + 1) + "题"; - document.onkeydown = function (e) { - if(e.which == "13" && document.getElementById("input").value){ - xun() - } - else{ - document.getElementById("input").focus() - } - } - - // 如果是减法 - if (sign_operation == 1) { - // 则被减数应该小于减数,结果才不会为负数 - // 减数范围为0-max_num - var num1 = Math.floor(Math.random() * (max_num - 0 + 1) + 0); - // 被减数范围为0-减数 - var num2 = Math.floor(Math.random() * (num1 - 0 + 1) + 0); - } - // 如果是加法 - if (sign_operation == 0) { - // 两数和应小于max_num - // 第一个数的范围是0-max_num - var num1 = Math.floor(Math.random() * (max_num - 0 + 1) + 0); - // 第二个数是0-(max_num-第一个数) - var num2 = Math.floor(Math.random() * (max_num - num1 - 0 + 1) + 0); - } - // 组成算式,显示给用户看 - var eq = String(num1) + sign_operation_list[sign_operation] + String(num2) + "="; - // 页面中显示算式 - document.getElementById("eq").innerHTML = eq; - // console.log(eq); - - // 定义函数,当按钮按下是执行一次 - function xun() { - // console.log(num1, num2); - // 题号加1 - i++; - // 获取用户输入的结果 - var input = Number(document.getElementById("input").value); - // console.log(input, num1, num2); - // 如果是加法 - if (sign_operation == 0) { - // 定义真实结果 - var calResult = num1 + num2; - // 如果用户输入的结果和真实结果相同 - if (input == calResult) { - // 分数加10分 - score += 10; - // console.log("正确!"); - // 显示正确信息 - document.getElementById("result").innerHTML = "正确"; - // 正确的次数加1 - right_times++; - } - // 如果用户输入的结果和真实结果不同 - if (input != calResult) { - // 分数减10分 - score -= 10; - // console.log("错误!"); - // 显示错误信息 - document.getElementById("result").innerHTML = "错误"; - } - } // 如果是减法 - if (sign_operation == 1) { - // 定义真实结果 - var calResult = num1 - num2; - // 如果用户输入的结果和真实结果相同 - if (input == calResult) { - // 分数加10分 - score += 10; - // console.log("正确!"); - // 显示正确信息 - document.getElementById("result").innerHTML = "正确"; - // 正确的次数加1 - right_times++; - } - // 如果用户输入的结果和真实结果不同 - if (input != calResult) { - // 分数减10分 - score -= 10; - // console.log("错误!"); - // 显示错误信息 - document.getElementById("result").innerHTML = "错误。答案: " + calResult; - } - } - // 判断结果后,用户输入框清空 - document.getElementById("input").value = ""; - // 显示分数 - document.getElementById("score").innerHTML = score; - // 显示正确率 - document.getElementById("accuracy").innerHTML = (((right_times / i) * 100).toFixed(2)) + "%"; - - if (score >= 100) { - window.location.href = "game.html"; - } - // 生成0或者1的随机数,确定加减法 - sign_operation = Math.floor((Math.random() * (1 - 0 + 1)) + 0); - // 显示题号 - document.getElementById("output").innerHTML = "第" + (i + 1) + "题"; - // 如果是减法 - if (sign_operation == 1) { - // 则被减数应该小于减数,结果才不会为负数 - // 减数范围为0-max_num - num1 = Math.floor(Math.random() * (max_num - 0 + 1) + 0); - // 被减数范围为0-减数 - num2 = Math.floor(Math.random() * (num1 - 0 + 1) + 0); - } - // 如果是加法 - if (sign_operation == 0) { - // 两数和应小于max_num - // 第一个数的范围是0-max_num - num1 = Math.floor(Math.random() * (max_num - 0 + 1) + 0); - // 第二个数是0-(max_num-第一个数) - num2 = Math.floor(Math.random() * (max_num - num1 - 0 + 1) + 0); - } - // 组成算式,显示给用户看 - eq = String(num1) + sign_operation_list[sign_operation] + String(num2) + "="; - // 页面中显示算式 - document.getElementById("eq").innerHTML = eq; - // console.log(eq); - } - </script> -</body> - -</html> diff --git a/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Casino World APK Join the Largest Online Casino Community.md b/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Casino World APK Join the Largest Online Casino Community.md deleted file mode 100644 index e717afa1590b50f85963a0f6e3d54e221df4a229..0000000000000000000000000000000000000000 --- a/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Casino World APK Join the Largest Online Casino Community.md +++ /dev/null @@ -1,147 +0,0 @@ - -<h1>Casino World APK: A Guide to the Best Social Casino Games on Mobile</h1> -<p>Do you love playing casino games but don't have the time or money to visit a real casino? Do you want to experience the thrill and excitement of gambling without risking your hard-earned cash? If you answered yes to any of these questions, then you should try Casino World APK, one of the most popular free online Vegas slots casino games on mobile. In this article, we will tell you everything you need to know about Casino World APK, including what it is, how to download and install it, how to play it, and what are the benefits of playing it.</p> -<h2>casino world apk</h2><br /><p><b><b>Download Zip</b> &#10002; <a href="https://ssurll.com/2uNWPM">https://ssurll.com/2uNWPM</a></b></p><br /><br /> - <h2>What is Casino World APK?</h2> -<p>Casino World APK is a mobile app that allows you to play a variety of casino games on your smartphone or tablet. It is developed by FlowPlay, LLC, the same company that created the hit game Vegas World. Casino World APK is not just a game, but a whole virtual world where you can interact with other players, customize your avatar, join parties, and build your own casino empire. Here are some of the features that make Casino World APK stand out from other casino apps:</p> - <h3>A massively multiplayer casino RPG</h3> -<p>Casino World APK is not just a collection of casino games, but a role-playing game where you can create your own character and explore a vast online world. You can choose from hundreds of avatars, outfits, accessories, hairstyles, and dance moves to express your personality and style. You can also level up your character by completing quests, challenges, and achievements. As you progress in the game, you can unlock new areas, games, features, and rewards.</p> - <h3>A variety of casino games to choose from</h3> -<p>Casino World APK offers over 27 different types of casino games that you can play for free. You can enjoy classic slots, video slots, poker, video poker, bingo, roulette, solitaire, horse racing, and more. Each game has its own rules, strategies, payouts, and bonuses. You can also play in season challenges and special events to win extra coins and prizes. Whether you prefer skill-based games or luck-based games, you will find something that suits your taste in Casino World APK.</p> - <h3>A way to build your own casino empire</h3> -<p>Casino World APK allows you to be more than just a player; you can also be a casino tycoon. You can buy buildings that generate passive income for you every day. You can also decorate your buildings with various themes and items. The more buildings you own, the more coins you earn. You can also invite other players to visit your casino and earn tips from them. You can even create your own slots with custom reels and symbols. In Casino World APK, you can beat the house and be the house.</p> - <h2>How to Download and Install Casino World APK?</h2> -<p>Downloading and installing Casino World APK is very easy and fast. There are two ways to do it:</p> - <h3>Download from Google Play Store</h3> -<p>The easiest way to download Casino World APK is from the Google Play Store. Just follow these steps:</p> -<ol> -<li>Open the Google Play Store app on your device.</li> -<li>Search for " Casino World" and tap on the app icon.</li> -<li>Tap on the "Install" button and wait for the app to download and install on your device.</li> -<li>Tap on the "Open" button to launch the app and start playing.</li> -</ol> - <h3>Download from APK websites</h3> -<p>If you can't access the Google Play Store or want to download the latest version of Casino World APK, you can also download it from third-party APK websites. However, you need to be careful and only download from trusted and reputable sources, as some APK files may contain malware or viruses. Here are the steps to download Casino World APK from APK websites:</p> -<ol> -<li>Open your web browser and search for "Casino World APK" on Google or any other search engine.</li> -<li>Choose a website that offers the APK file and click on it.</li> -<li>Look for the "Download" button and click on it. You may need to allow unknown sources on your device settings to download the file.</li> -<li>Once the file is downloaded, locate it on your device and tap on it to install it.</li> -<li>Tap on the "Open" button to launch the app and start playing.</li> -</ol> - <h2>How to Play Casino World APK?</h2> -<p>Playing Casino World APK is very simple and fun. You just need to follow these steps:</p> -<p>casino world app download<br /> -casino world mobile game<br /> -casino world free slots<br /> -casino world online multiplayer<br /> -casino world rpg<br /> -casino world android apk<br /> -casino world apk mod<br /> -casino world apk latest version<br /> -casino world apk for pc<br /> -casino world apk offline<br /> -casino world apk hack<br /> -casino world apk unlimited money<br /> -casino world apk no ads<br /> -casino world apk pure<br /> -casino world apk mirror<br /> -casino world apk update<br /> -casino world apk old version<br /> -casino world apk obb<br /> -casino world apk data<br /> -casino world apk revdl<br /> -casino world apk rexdl<br /> -casino world apk uptodown<br /> -casino world apk apkpure<br /> -casino world apk apkmirror<br /> -casino world apk happymod<br /> -casino world apk mod menu<br /> -casino world apk mod money<br /> -casino world apk mod download<br /> -casino world apk mod unlimited coins<br /> -casino world apk mod free shopping<br /> -casino world apk mod vip unlocked<br /> -casino world apk mod premium unlocked<br /> -casino world apk mod pro unlocked<br /> -casino world apk mod all features unlocked<br /> -casino world apk mod no root<br /> -casino world apk mod anti ban<br /> -casino world apk mod online<br /> -casino world apk mod offline mode<br /> -casino world apk mod free spins<br /> -casino world apk mod free coins<br /> -casino world apk mod free gems<br /> -casino world apk mod free charms<br /> -casino world apk mod free bonuses<br /> -casino world apk mod free prizes<br /> -casino world apk mod free rewards<br /> -casino world apk mod free chips<br /> -casino world apk mod free cash<br /> -casino world apk mod free money hack</p> - <h3>Create your avatar and customize your profile</h3> -<p>When you first launch the app, you will be asked to create your avatar and choose your name. You can also customize your avatar's appearance, clothing, accessories, and more. You can also edit your profile information, such as your gender, age, location, hobbies, and interests. You can also upload a photo of yourself or choose from a gallery of images. Your profile will help other players get to know you better and find compatible friends.</p> - <h3>Join parties and chat with other players</h3> -<p>Casino World APK is a social casino game where you can meet and chat with thousands of players from around the world. You can join parties hosted by other players or host your own party. Parties are themed events where you can play games, chat, dance, and have fun. You can also send messages, gifts, emojis, stickers, and more to other players. You can also join clubs or create your own club. Clubs are groups of players who share common interests and goals. You can chat with your club members, play games together, and compete in club tournaments.</p> - <h3>Play slots, poker, bingo, and more</h3> -<p>Casino World APK offers a wide range of casino games that you can play for free. You can choose from over 27 different types of games, such as slots, poker, video poker, bingo, roulette, solitaire, horse racing, and more. Each game has its own rules, strategies, payouts, and bonuses. You can also play in season challenges and special events to win extra coins and prizes. Here is a table that shows some of the most popular games in Casino World APK:</p> - <table> -<tr><th>Game</th><th>Description</th></tr> -<tr><td>Slots</td><td>Spin the reels and match symbols to win coins. You can choose from hundreds of slot machines with different themes, features, and jackpots.</td></tr> -<tr><td>Poker</td><td>Play Texas Hold'em or Omaha poker against other players or the dealer. You can bluff, fold, raise, or go all-in to win the pot.</td></tr> -<tr><td>Bingo</td><td>Mark off numbers on your bingo card as they are called out. You can win by completing a line, a column, a diagonal, or a full card.</td></tr> -<tr><td>Roulette</td><td>Place your bets on the roulette table and watch the wheel spin. You can bet on numbers, colors, odds or evens, or combinations of them.</td></tr> -<tr><td>Solitaire</td><td>Solve classic solitaire puzzles by moving cards from the tableau to the foundation piles. You can choose from different modes and difficulties.</td></tr> -<tr><td>Horse Racing</td><td>Bet on horses based on their odds, speed, stamina, and form. You can watch the race live or skip to the results.</td></tr> -</table> - <h3>Earn coins, charms, gems, and prizes</h3> -<p>Casino World APK is a free-to-play game where you can earn coins by playing games, completing quests, challenges , and achievements. You can also earn charms by playing games, hosting parties, or joining clubs. Charms are special items that boost your winnings and give you extra benefits. You can also earn gems by buying them with real money or watching ads. Gems are premium currency that can be used to buy exclusive items, unlock new games, or access VIP features. You can also win prizes by spinning the daily wheel, opening treasure chests, or playing in tournaments. Prizes include coins, charms, gems, outfits, accessories, and more.</p> - <h2>What are the Benefits of Playing Casino World APK?</h2> -<p>Playing Casino World APK is not only fun, but also beneficial for you. Here are some of the benefits of playing Casino World APK:</p> - <h3>Enjoy a realistic and immersive casino experience</h3> -<p>Casino World APK is designed to simulate a real casino environment with high-quality graphics, sound effects, and animations. You can feel the atmosphere of a Vegas casino with bright lights, lively music, and cheering crowds. You can also interact with other players and dealers with voice and text chat. You can even customize your own casino with your favorite themes and decorations.</p> - <h3>Learn new skills and strategies</h3> -<p>Casino World APK is a great way to learn new skills and strategies for playing casino games. You can practice your skills without risking your money and improve your chances of winning. You can also learn from other players by watching their moves, asking for tips, or joining clubs. You can also access tutorials, guides, and articles that teach you the rules, strategies, and tricks of each game.</p> - <h3>Have fun and socialize with friends</h3> -<p>Casino World APK is a social casino game where you can have fun and socialize with friends. You can invite your friends to play with you or make new friends from around the world. You can chat, party, gift, and compete with other players. You can also share your achievements, photos, and stories on your profile or on social media. You can also join events, contests, and promotions that offer exciting rewards and surprises.</p> - <h2>Conclusion</h2> -<p>Casino World APK is one of the best social casino games on mobile that offers a realistic and immersive casino experience. You can play a variety of casino games for free, create your own avatar and profile, join parties and clubs, build your own casino empire, earn coins, charms, gems, and prizes, learn new skills and strategies, have fun and socialize with friends, and more. Casino World APK is easy to download and install on your device and is compatible with most Android devices. If you are looking for a fun and engaging way to enjoy casino games on your mobile device, you should definitely try Casino World APK.</p> - <h2>FAQs</h2> -<p>Here are some of the frequently asked questions about Casino World APK:</p> - <h4>Q: Is Casino World APK safe to download and play?</h4> -<p>A: Yes, Casino World APK is safe to download and play as long as you download it from the Google Play Store or from trusted APK websites. The app does not contain any malware or viruses that can harm your device or data. However, you should always be careful when downloading any app from unknown sources and check the permissions and reviews before installing it.</p> - <h4>Q: Is Casino World APK free to play?</h4> -<p>A: Yes, Casino World APK is free to play and does not require any real money to play. However, the app does offer in-app purchases that allow you to buy gems or other items with real money. These purchases are optional and not necessary to enjoy the game. You can also earn gems by watching ads or completing offers.</p> - <h4>Q: How do I update Casino World APK?</h4> -<p>A: If you downloaded Casino World APK from the Google Play Store , you can update it by following these steps:</p> -<ol> -<li>Open the Google Play Store app on your device.</li> -<li>Tap on the menu icon and select "My apps & games".</li> -<li>Find Casino World APK on the list of installed apps and tap on it.</li> -<li>Tap on the "Update" button and wait for the app to download and install the latest version.</li> -</ol> -<p>If you downloaded Casino World APK from an APK website, you can update it by following these steps:</p> -<ol> -<li>Open your web browser and search for "Casino World APK" on Google or any other search engine.</li> -<li>Choose a website that offers the latest version of the APK file and click on it.</li> -<li>Download the APK file and overwrite the existing file on your device.</li> -<li>Install the APK file and launch the app.</li> -</ol> - <h4>Q: How do I contact Casino World APK support?</h4> -<p>A: If you have any questions, issues, or feedback about Casino World APK, you can contact the support team by following these steps:</p> -<ol> -<li>Open the Casino World APK app on your device.</li> -<li>Tap on the menu icon and select "Help".</li> -<li>Tap on the "Contact Us" button and fill out the form with your name, email, subject, and message.</li> -<li>Tap on the "Send" button and wait for a response from the support team.</li> -</ol> - <h4>Q: How do I delete Casino World APK?</h4> -<p>A: If you want to delete Casino World APK from your device, you can do so by following these steps:</p> -<ol> -<li>Open the settings app on your device.</li> -<li>Tap on the "Apps" or "Applications" option and find Casino World APK on the list of installed apps.</li> -<li>Tap on Casino World APK and select "Uninstall".</li> -<li>Confirm your action and wait for the app to be removed from your device.</li> -</ol></p> 197e85843d<br /> -<br /> -<br /> \ No newline at end of file diff --git a/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Download 9 Innings 2016 Pro Baseball APK Mod 6.0.7 with Unlimited Points.md b/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Download 9 Innings 2016 Pro Baseball APK Mod 6.0.7 with Unlimited Points.md deleted file mode 100644 index e8c44bf1f68fd12e3e2f5ce3a02459d7280a5719..0000000000000000000000000000000000000000 --- a/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Download 9 Innings 2016 Pro Baseball APK Mod 6.0.7 with Unlimited Points.md +++ /dev/null @@ -1,85 +0,0 @@ - -<h1>9 Innings 2016 APK Mod: A Baseball Game with Unlimited Features</h1> -<p>If you are a fan of baseball games, you might have heard of <strong>9 Innings 2016</strong>, a realistic baseball simulation game that uses the names, photos, data, and league schedules of real baseball players. The game is developed by Com2uS, a leading mobile game developer, and is available on Google Play Store for free. However, if you want to enjoy the game to the fullest, you might want to try <strong>9 Innings 2016 APK Mod</strong>, a modified version of the game that gives you access to unlimited features and resources. In this article, we will tell you what is 9 Innings 2016 APK Mod, how to download and install it, what are the benefits and risks of using it, and some frequently asked questions about it.</p> -<h2>9 innings 2016 apk mod</h2><br /><p><b><b>Download</b> &#9745; <a href="https://ssurll.com/2uNRoP">https://ssurll.com/2uNRoP</a></b></p><br /><br /> - <h2>What is 9 Innings 2016 APK Mod?</h2> -<h3>A brief introduction to the game and its features</h3> -<p>9 Innings 2016 is a baseball game that lets you play as your favorite team and players in various modes, such as League Mode, Exhibition Mode, Home Run Derby Mode, Challenge Mode, and Season Mode. You can also collect and upgrade over 1,400 player cards across 30 teams. The game features realistic graphics, animations, sound effects, and commentary. You can also customize your own team logo, uniform, stadium, and roster.</p> -<p>9 Innings 2016 APK Mod is a modified version of the game that has been altered by some developers to provide additional features and functions that are not available in the original version. These features can include:</p> -<ul> -<li>Unlimited stars, points, cards, coins, and other resources</li> -<li>Free access to premium features, such as VIP mode, card packs, special items, etc.</li> -<li>Removal of ads and other annoyances</li> -<li>Unlimited customization and tweaking options</li> -<li>Offline functionality and latest updates</li> -</ul> - <h3>How to download and install the modded version of the game</h3> -<p>To download and install 9 Innings 2016 APK Mod, you need to follow these steps:</p> -<ol> -<li>Allow your device to install unknown apps by going to <strong>Settings > Apps > Menu > Special access > Install unknown apps</strong>.</li> -<li>Download the modded APK file from a trusted source. You can find one here. Make sure you check the file size and version before downloading.</li> -<li>Navigate to the location of the downloaded file and tap on it to start the installation process.</li> -<li>Follow the instructions on the screen and wait for the installation to finish.</li> -<li>Launch the game from your app drawer and enjoy.</li> -</ol> - <h2>What are the benefits of using 9 Innings 2016 APK Mod?</h2> -<h3>Access to premium features without paying <p>A fourth risk of using 9 Innings 2016 APK Mod is that you might face compatibility and stability issues. Modded APK files are not optimized or tested for all devices and operating systems. They might not work properly or at all on your device, depending on its model, brand, version, etc. They might also cause crashes, glitches, errors, or other performance problems on your device. They might also interfere with other apps or functions on your device, such as battery life, storage space, security settings, etc. With 9 Innings 2016 APK Mod, you might compromise the quality and reliability of your game and device.</p> - <h2>Conclusion</h2> -<p>9 Innings 2016 APK Mod is a modified version of the popular baseball game that offers unlimited features and resources for free. However, it also comes with several risks and drawbacks that you should be aware of before downloading and installing it. You might face legal issues, account suspension, malware infection, or compatibility and stability issues by using the modded version of the game. Therefore, we recommend that you use the original version of the game from the official source and support the developers who created it. If you still want to try 9 Innings 2016 APK Mod, do it at your own risk and discretion.</p> - <h2>FAQs</h2> -<h4>Is 9 Innings 2016 APK Mod safe to use?</h4> -<p>There is no definitive answer to this question, as different sources and versions of the modded APK file might have different levels of safety and quality. However, in general, modded APK files are not safe to use, as they might contain malware or viruses that could harm your device or data. They might also violate the terms and conditions of the original game developer and publisher, which could result in legal actions or account suspension. Therefore, we advise you to use the original version of the game from the official source and avoid using any modded APK files.</p> -<p>9 innings 2016 pro baseball apk mod unlimited points<br /> -download 9 innings 2016 mod apk latest version<br /> -9 innings 2016 hack apk free full android<br /> -how to install 9 innings 2016 pro baseball mod apk<br /> -9 innings 2016 modded apk offline<br /> -9 innings 2016 cheats apk no root<br /> -9 innings 2016 pro baseball apk mod money<br /> -9 innings 2016 mod apk revdl<br /> -9 innings 2016 hacked apk online<br /> -9 innings 2016 pro baseball mod apk obb<br /> -9 innings 2016 unlimited points apk download<br /> -9 innings 2016 mod apk android 1<br /> -9 innings 2016 pro baseball hack apk ios<br /> -9 innings 2016 mod apk happymod[^1^]<br /> -9 innings 2016 pro baseball apk mod data<br /> -9 innings 2016 mod apk rexdl<br /> -9 innings 2016 hack tool apk<br /> -9 innings 2016 pro baseball mod apk update<br /> -9 innings 2016 cracked apk mirror<br /> -9 innings 2016 mod apk pure<br /> -best site to download 9 innings 2016 mod apk<br /> -tips and tricks for playing 9 innings 2016 mod apk<br /> -reviews of 9 innings 2016 pro baseball mod apk<br /> -features of 9 innings 2016 hack apk<br /> -problems with installing or running 9 innings 2016 mod apk<br /> -alternatives to 9 innings 2016 pro baseball mod apk<br /> -comparison of different versions of 9 innings 2016 mod apk<br /> -benefits of using a vpn for playing 9 innings 2016 hack apk<br /> -how to backup and restore your progress in 9 innings 2016 mod apk<br /> -how to get more players and cards in 9 innings 2016 pro baseball mod apk<br /> -how to unlock all modes and levels in 9 innings 2016 hack apk<br /> -how to fix errors and bugs in 9 innings 2016 mod apk<br /> -how to contact the developers of 9 innings 2016 pro baseball mod apk<br /> -how to join a league or clan in 9 innings 2016 hack apk<br /> -how to earn rewards and achievements in 9 innings 2016 mod apk<br /> -how to customize your team and stadium in 9 innings 2016 pro baseball mod apk<br /> -how to improve your skills and strategies in playing the game with the help of the game's tutorial mode.</p> - <h4>Is 9 Innings 2016 APK Mod compatible with all devices?</h4> -<p>No, 9 Innings 2016 APK Mod is not compatible with all devices and operating systems. Modded APK files are not optimized or tested for all devices and operating systems. They might not work properly or at all on your device, depending on its model, brand, version, etc. They might also cause crashes, glitches, errors, or other performance problems on your device. Therefore, we suggest you to use the original version of the game from the official source and ensure that your device meets the minimum requirements for the game.</p> - <h4>How can I update 9 Innings 2016 APK Mod?</h4> -<p>You cannot update 9 Innings 2016 APK Mod from the original game developer or publisher, as they do not support or endorse the modded version of the game. You can only update the modded version of the game from the source where you downloaded it from. However, this might not be reliable or timely, as different sources and versions of the modded APK file might have different update schedules and features. You might also miss out on some new features and content that are available in the original version of the game. Therefore, we recommend that you use the original version of the game from the official source and get the latest updates from there.</p> - <h4>How can I uninstall 9 Innings 2016 APK Mod?</h4> -<p>You can uninstall 9 Innings 2016 APK Mod by following these steps:</p> -<ol> -<li>Go to <strong>Settings > Apps > 9 Innings 2016</strong>.</li> -<li>Tap on <strong>Uninstall</strong> and confirm your action.</li> -<li>Delete any leftover files or folders related to the modded version of the game from your device.</li> -<li>If you want to install the original version of the game from the official source, go to Google Play Store and search for <strong>9 Innings 2016</strong>.</li> -<li>Tap on <strong>Install</strong> and follow the instructions on the screen.</li> -</ol> - <h4>Where can I find more modded APKs for other games?</h4> -<p>You can find more modded APKs for other games by searching online or visiting some websites that offer them. However, we do not recommend that you use any modded APKs for any games, as they might have several risks and drawbacks that we have mentioned above. They might also ruin your gaming experience and enjoyment by giving you unfair advantages or disadvantages over other players. Therefore, we suggest that you use the original versions of games from their official sources and respect their developers who created them.</p> 401be4b1e0<br /> -<br /> -<br /> \ No newline at end of file diff --git a/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Download Gacha Life APK and Create Your Own Anime World.md b/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Download Gacha Life APK and Create Your Own Anime World.md deleted file mode 100644 index 070186f122d3bae8f366f94242e3afaa00a4e2b5..0000000000000000000000000000000000000000 --- a/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Download Gacha Life APK and Create Your Own Anime World.md +++ /dev/null @@ -1,96 +0,0 @@ - -<h1>What is Gacha Apk and Why You Should Try It</h1> -<p>If you are a fan of anime, manga, or games, you might have heard of gacha apk. Gacha apk is a term that refers to a collection of mobile games developed by Lunime that feature the gacha mechanic. Gacha is a Japanese word that means "capsule toy vending machine". In gacha games, you can spend in-game currency or real money to get random items or characters. These items or characters can be used to customize your avatar, create scenes, or battle with other players.</p> -<h2>gacha apk</h2><br /><p><b><b>Download</b> &middot;&middot;&middot;&middot;&middot; <a href="https://ssurll.com/2uNTZ3">https://ssurll.com/2uNTZ3</a></b></p><br /><br /> -<p>Gacha apk games are very popular among anime lovers because they offer a lot of creative freedom and fun. You can create your own anime styled characters and dress them up in your favorite fashion outfits. You can also enter the studio mode and create any scene you can imagine with your characters. You can also play mini-games, collect gems, and gacha for rare gifts.</p> -<p>If you are interested in trying out gacha apk games, you will need to download and install them on your device. You can find them on Google Play Store or on Lunime's official website. The games are compatible with Android devices and some of them are also available for Windows PC. The games are free to play, but they may contain ads or in-app purchases.</p> -<p>In this article, we will introduce you to three of the most popular gacha apk games: Gacha Life, Gacha Club, and Gacha Studio. We will explain what each game is about, what you can do in it, and how you can enjoy it. Let's get started!</p> - <h2>Gacha Life</h2> -<p>Gacha Life is one of the most popular gacha apk games. It was released in October 2018 and has over 100 million downloads on Google Play Store. Gacha Life is a casual game that allows you to create your own anime styled characters and dress them up in your favorite fashion outfits. You can choose from hundreds of dresses, shirts, hairstyles, weapons, hats, and more. You can also customize your personal look by changing your hairstyle, eyes, mouth, and more.</p> -<p>After designing your characters, you can enter the studio mode and create any scene you can imagine with them. You can choose from over a hundred backgrounds to create the perfect story. You can also enter custom text for your characters and choose from many different poses. You can make your own stories in the skit maker by combining multiple scenes.</p> -<p>Gacha Life also has a new life mode where you can explore different areas with your own characters such as the town, school, and more. You can discover new NPCs and chat with them to learn more about their lives. They might even give you a surprise. You can also play offline without Wi-Fi.</p> -<p>Another feature of Gacha Life is the gacha games mode where you can choose from eight different mini-games such as Duck & Dodge or Phantom's Remix. You can collect and gacha over 100 gifts to add to your collection. You can also farm for gems easily by playing these mini-games.</p> <h2>Gacha Club</h2> -<p>Gacha Club is the latest gacha apk game from Lunime. It was released in June 2020 and has over 50 million downloads on Google Play Store. Gacha Club is a role-playing game that lets you customize your own club members and take them to battle. You can also create your own stories and scenes in the studio mode.</p> -<p>Gacha Club has more features and elements than Gacha Life. You can customize up to 100 characters and choose from over 600 poses. You can also change the colors of almost every item, including skin, hair, eyes, clothes, accessories, and more. You can also adjust the size and position of your items with the new advanced editing tools.</p> -<p>gacha life apk download<br /> -gacha club apk mod<br /> -gacha studio apk free<br /> -gacha world apk latest version<br /> -gacha resort apk unlimited gems<br /> -gacha life 2 apk release date<br /> -gacha club apk ios<br /> -gacha studio apk mod<br /> -gacha world apk mod<br /> -gacha resort apk mod<br /> -gacha life 2 apk download<br /> -gacha club apk pc<br /> -gacha studio apk download<br /> -gacha world apk download<br /> -gacha resort apk download<br /> -gacha life old version apk<br /> -gacha club apk online<br /> -gacha studio anime dress up apk<br /> -gacha world anime rpg apk<br /> -gacha resort anime beach games apk<br /> -gacha life 1.1.4 apk download<br /> -gacha club apk android 1<br /> -gacha studio anime dress up mod apk<br /> -gacha world anime rpg mod apk<br /> -gacha resort anime beach games mod apk<br /> -gacha life 1.0.9 apk download<br /> -gacha club apk uptodown<br /> -gacha studio anime dress up hack apk<br /> -gacha world anime rpg hack apk<br /> -gacha resort anime beach games hack apk<br /> -gacha life 1.0.8 apk download<br /> -gacha club apk pure<br /> -gacha studio anime dress up unlimited gems apk<br /> -gacha world anime rpg unlimited gems apk<br /> -gacha resort anime beach games unlimited gems apk<br /> -gacha life 1.0.7 apk download<br /> -gacha club full version apk download<br /> -gacha studio anime dress up full version apk download<br /> -gacha world anime rpg full version apk download<br /> -gacha resort anime beach games full version apk download<br /> -gacha life 1.0.6 apk download <br /> -how to install gacha club on android without google play store <br /> -how to get free gems in gacha studio without human verification <br /> -how to hack gacha world with lucky patcher <br /> -how to unlock all characters in gacha resort <br /> -how to update gacha life on android <br /> -how to play online with friends in gacha club <br /> -how to create your own character in gacha studio <br /> -how to get rare items in gacha world <br /> -how to level up fast in gacha resort</p> -<p>One of the most exciting features of Gacha Club is the gacha system. You can gacha for over 180 units to use in battle. Each unit has its own element, rarity, level, and skills. You can also enhance your units with awakening and limit breaking. You can also equip them with pets and objects that boost their stats.</p> -<p>Gacha Club also has a new battle mode where you can fight against enemies and bosses. You can choose from four different modes: story, training, tower, and shadows of corruption. You can also join a club and chat with other players online. You can also challenge other players in the arena and rank up in the leaderboards.</p> - <h2>Gacha Studio</h2> -<p>Gacha Studio is another gacha apk game that focuses on the dress-up and studio aspects. It was released in May 2017 and has over 10 million downloads on Google Play Store. Gacha Studio is a game where you can create your own anime characters and dress them up in the latest anime fashion.</p> -<p>Gacha Studio has a wide variety of clothes, weapons, hats, and accessories to choose from. You can also customize your character's look by changing their hairstyle, eyes, mouth, expression, and more. You can also mix and match different parts of your items to create unique combinations.</p> -<p>After dressing up your characters, you can enter the studio mode and make your own stories and skits. You can choose from hundreds of backgrounds and add text bubbles and props to your scenes. You can also use preset scenes or make your own with the custom mode.</p> -<p>Gacha Studio also has a gacha mode where you can gacha for rare anime characters to add to your collection. You can also play mini-games to earn gems and bytes that you can use to gacha more items. You can also chat with other players online or offline.</p> - <h2>Conclusion</h2> -<p>Gacha apk games are fun and creative games that let you express your anime fandom. You can create your own characters and scenes with endless customization options. You can also play mini-games, collect gems, gacha for rare items, and battle with other players.</p> -<p>If you are looking for a casual game that lets you design your own anime styled characters and dress them up in your favorite fashion outfits, you should try Gacha Life or Gacha Studio. If you are looking for a role-playing game that lets you customize your own club members and take them to battle, you should try Gacha Club.</p> -<p>Whichever game you choose, you will have a lot of fun and creativity. Gacha apk games are free to play, but they may contain ads or in-app purchases. You can download them on Google Play Store or on Lunime's official website.</p> -<p>Have you tried any of these gacha apk games? Which one is your favorite? Let us know in the comments below!</p> - <h2>FAQs</h2> -<h3>What are the best gacha games for 2022?</h3> -<p>There are many gacha games available for Android devices, but some of the best ones for 2022 are:</p> -<ul> -<li><b>Genshin Impact</b>: A stunning open-world action RPG with beautiful graphics, engaging combat, and a rich story.</li> -<li><b>Fate/Grand Order</b>: A popular turn-based RPG based on the Fate series with hundreds of characters to summon and command.</li> -<li><b>Azur Lane</b>: A side-scrolling shooter with cute anime girls that represent historical warships.</li> -<li><b>Arknights</b>: A strategic tower defense game with a dystopian sci-fi setting and a diverse cast of operators.</li> -<li><b>Epic Seven</b>: A gorgeous anime-style RPG with smooth animations, epic battles, and an immersive story.</li> -</ul> - <h3>Are gacha games free to play?</h3> -<p>Most gacha games are free to play, meaning that you can download and play them without paying anything upfront. However, they may contain ads or in-app purchases that allow you to buy more in-game currency or items. These purchases are optional, but they may give you an advantage or access to more features in the game. You should always be careful and responsible when spending real money on gacha games.</p> - <h3>How can I get more gems and bytes in gacha games?</h3> -<p>Gems and bytes are the main currencies in gacha apk games. You can use them to gacha for more items or characters. You can get more gems and bytes by playing mini-games, watching ads, completing tasks, or buying them with real money. You can also get free gems and bytes by logging in daily, joining events, or using codes.</p> - <h3>Can I play gacha games offline?</h3> -<p>Some gacha games require an internet connection to play, while others can be played offline without Wi-Fi. For example, Gacha Life and Gacha Studio can be played offline, but Gacha Club requires an internet connection to access some features such as the club chat or the arena. You should always check the game's description and requirements before downloading it.</p> - <h3>Can I import and export my characters from one gacha game to another?</h3> -<p>Unfortunately, you cannot import or export your characters from one gacha game to another. Each game has its own system and format for saving and loading your characters. However, you can use the screenshot feature to capture your characters and share them with others. You can also use the QR code feature to scan and load other people's characters in some games.</p> 197e85843d<br /> -<br /> -<br /> \ No newline at end of file diff --git a/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Download Undangan Tahlil 1 Lembar Isi 2 Doc Berkualitas dan Bisa Diubah.md b/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Download Undangan Tahlil 1 Lembar Isi 2 Doc Berkualitas dan Bisa Diubah.md deleted file mode 100644 index 00c7c4d3f270a9badfed101b75e636c814582b44..0000000000000000000000000000000000000000 --- a/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Download Undangan Tahlil 1 Lembar Isi 2 Doc Berkualitas dan Bisa Diubah.md +++ /dev/null @@ -1,127 +0,0 @@ -<br /> -<h1>Download Undangan Tahlil 1 Lembar Isi 2 Doc: A Guide for Creating and Printing Invitation Cards for Tahlil Ceremony</h1> - <p>If you are planning to hold a Tahlil ceremony for your deceased loved ones, you may need to create and print invitation cards to inform and invite your relatives, friends, neighbors, and colleagues. In this article, we will show you how to download Undangan Tahlil 1 Lembar Isi 2 Doc, a Word document format that allows you to customize and print two invitation cards on one sheet of paper. We will also explain what Undangan Tahlil is, why it is important, and how to print it easily and affordably.</p> - <h2>What is Undangan Tahlil?</h2> - <p>Undangan Tahlil is an invitation card that is used to invite people to attend a Tahlil ceremony. Tahlil is a Muslim ritual that involves reciting prayers and verses from the Quran for the souls of the deceased. It is believed that Tahlil can bring peace and blessings to the departed and their families.</p> -<h2>download undangan tahlil 1 lembar isi 2 doc</h2><br /><p><b><b>Download Zip</b> &#10084; <a href="https://ssurll.com/2uNT0i">https://ssurll.com/2uNT0i</a></b></p><br /><br /> - <h3>The meaning and purpose of Tahlil ceremony</h3> - <p>Tahlil comes from the Arabic word "tahleel", which means "to declare the oneness of God". It is a form of dhikr, or remembrance of God, that expresses gratitude, praise, and supplication to Him. Tahlil is usually performed after the funeral prayer (salat al-janazah) and on certain days after the death, such as the third, seventh, fortieth, hundredth, or thousandth day. It can also be done on special occasions, such as birthdays, anniversaries, or religious holidays.</p> - <p>The purpose of Tahlil is to honor the memory of the deceased, to seek forgiveness and mercy for them, to ease their transition to the afterlife, and to strengthen the bond between the living and the dead. It is also a way of showing solidarity and sympathy to the bereaved family and friends.</p> - <h3>The format and content of Undangan Tahlil</h3> - <p>Undangan Tahlil usually contains the following information:</p> - <ul> -<li>The date, time, and location of the Tahlil ceremony</li> -<li>The name of the person who organizes the ceremony</li> -<li>The name of the person who is commemorated</li> -<li>The number of days since the death</li> -<li>A polite request for attendance and prayers</li> -</ul> - <p>Here is an example of Undangan Tahlil:</p> - <table> -<tr> -<td><img src="(^1^)" alt="Undangan Tahlil example"></td> -<td><img src="(^1^)" alt="Undangan Tahlil example"></td> -</tr> -</table> - <h2>How to Download Undangan Tahlil 1 Lembar Isi 2 Doc?</h2> - <p>If you want to create your own Undangan Tahlil, you can use a Word document format that lets you edit and print two invitation cards on one sheet of paper. This can save you time, money, and paper.</p> - <h3>The benefits of using Word document format</h3> - <p>Using a Word document format for your Undangan Tahlil has several advantages:</p> - <ul> -<li>You can easily customize the text, font, color, size, layout, and design according to your preference.</li> -<li>You can insert images, logos, borders, backgrounds, or other elements to make your invitation card more attractive.</ <li>You can preview and print your invitation card as many times as you want until you are satisfied with the result.</li> -<li>You can save your invitation card as a PDF file and share it online or via email if you prefer.</li> -</ul> - <h3>The sources and examples of Undangan Tahlil templates</h3> - <p>There are many sources and examples of Undangan Tahlil templates that you can download and use for free or for a small fee. Some of them are:</p> -<p>download undangan tahlil 1 lembar isi 2 word<br /> -download undangan tahlil 1 lembar isi 2 pdf<br /> -download undangan tahlil 1 lembar isi 2 gratis<br /> -download undangan tahlil 1 lembar isi 2 bisa di edit<br /> -download undangan tahlil 1 lembar isi 2 format doc<br /> -download undangan tahlil 1 lembar isi 2 ms word<br /> -download undangan tahlil 1 lembar isi 2 simple<br /> -download undangan tahlil 1 lembar isi 2 elegan<br /> -download undangan tahlil 1 lembar isi 2 keren<br /> -download undangan tahlil 1 lembar isi 2 terbaru<br /> -download undangan tahlil 1 lembar isi 2 full color<br /> -download undangan tahlil 1 lembar isi 2 hitam putih<br /> -download undangan tahlil 1 lembar isi 2 tanpa gambar<br /> -download undangan tahlil 1 lembar isi 2 dengan gambar<br /> -download undangan tahlil 1 lembar isi 2 untuk anak<br /> -download undangan tahlil 1 lembar isi 2 untuk orang tua<br /> -download undangan tahlil 1 lembar isi 2 untuk saudara<br /> -download undangan tahlil 1 lembar isi 2 untuk teman<br /> -download undangan tahlil 1 lembar isi 2 untuk guru<br /> -download undangan tahlil 1 lembar isi 2 untuk tetangga<br /> -download undangan tahlil satu lembar berisi dua doc<br /> -download contoh undangan tahlil satu lembar dua doc<br /> -cara download undangan tahlil satu lembar dua doc<br /> -link download undangan tahlil satu lembar dua doc<br /> -situs download undangan tahlil satu lembar dua doc<br /> -aplikasi download undangan tahlil satu lembar dua doc<br /> -software download undangan tahlil satu lembar dua doc<br /> -tutorial download undangan tahlil satu lembar dua doc<br /> -tips download undangan tahlil satu lembar dua doc<br /> -trik download undangan tahlil satu lembar dua doc<br /> -desain undangan tahlil satu lembar dua doc untuk diunduh<br /> -template undangan tahlil satu lembar dua doc untuk diunduh<br /> -model undangan tahlil satu lembar dua doc untuk diunduh<br /> -bentuk undangan tahlil satu lembar dua doc untuk diunduh<br /> -jenis undangan tahlil satu lembar dua doc untuk diunduh<br /> -ukuran undangan tahlil satu lembar dua doc untuk diunduh<br /> -warna undangan tahlil satu lembar dua doc untuk diunduh<br /> -font undangan tahlil satu lembar dua doc untuk diunduh<br /> -kalimat undangan tahlil satu lembar dua doc untuk diunduh<br /> -ucapan undangan tahlil satu lembar dua doc untuk diunduh</p> - <ul> -<li>[Undangan Tahlil 1 Lembar Isi 2 Doc]: This is a simple and elegant template that you can download from Google Drive. It has a white background, black text, and a green border. It also has a space for inserting a photo of the deceased. You can edit the text and the photo using Microsoft Word or Google Docs.</li> -<li>[Undangan Tahlil 1 Lembar Isi 2 Doc]: This is another template that you can download from Google Drive. It has a beige background, brown text, and a floral border. It also has a space for inserting a photo of the deceased. You can edit the text and the photo using Microsoft Word or Google Docs.</li> -<li>[Undangan Tahlil 1 Lembar Isi 2 Doc]: This is a more colorful and modern template that you can download from Mediafire. It has a blue background, white text, and a geometric border. It also has a space for inserting a photo of the deceased. You can edit the text and the photo using Microsoft Word or Google Docs.</li> -<li>[Undangan Tahlil 1 Lembar Isi 2 Doc]: This is a more traditional and Islamic template that you can download from Ziddu. It has a green background, gold text, and an Arabic calligraphy border. It also has a space for inserting a photo of the deceased. You can edit the text and the photo using Microsoft Word or Google Docs.</li> -</ul> - <h3>The steps to download and edit Undangan Tahlil templates</h3> - <p>Here are the steps to download and edit Undangan Tahlil templates:</p> - <ol> -<li>Choose the template that you like from the sources above or from other websites.</li> -<li>Click on the download link or button and save the file to your computer or device.</li> -<li>Open the file using Microsoft Word or Google Docs.</li> -<li>Edit the text according to your information and preference. You can change the font, color, size, alignment, or spacing as you wish.</li> -<li>Insert a photo of the deceased by clicking on the placeholder image and choosing an image from your computer or device. You can resize, crop, or rotate the image as you wish.</li> -<li>Save your edited invitation card as a Word document or as a PDF file.</li> -</ol> - <h2>How to Print Undangan Tahlil 1 Lembar Isi 2 Doc?</h2> - <p>After you have created your invitation card, you can print it on your own printer or use a printing service. Here are some tips and tricks for printing Undangan Tahlil:</p> - <h3>The tips and tricks for printing Undangan Tahlil</h3> - <ul> -<li>Use high-quality paper that is thick, smooth, and durable. You can choose glossy, matte, or textured paper depending on your preference.</li> -<li>Use high-resolution images that are clear, sharp, and bright. You can adjust the brightness, contrast, or saturation of your images using an image editor if needed.</li> -<li>Use high-quality ink that is fade-resistant, water-resistant, and smudge-proof. You can choose black or colored ink depending on your preference.</li> -<li>Use the print preview feature to check how your invitation card will look before printing. You can adjust the margins, orientation, or scaling if needed.</li> -<li>Use the duplex printing feature to print two invitation cards on one sheet of paper. You can choose to print on both sides of the paper (double-sided) or on one side of the paper (single-sided).</li> -</ul> - <h3>The options and costs of printing services</h3> - <p>If you do not have your own printer or you want to save time and hassle, you can use a printing service to print your invitation card. There are many options and costs of printing services depending on your location, quantity, quality, and speed. Some of them are:</p> - <ul> -<li>[Printingo]: This is an online printing service that offers fast delivery, low prices, and high quality. You can upload your invitation card as a PDF file and choose from various paper types, sizes, colors, finishes, and quantities. The cost starts from Rp 500 per sheet for A4 size paper with two invitation cards per sheet. You can also choose to have your invitation card laminated, cut, or folded for an additional fee.</li> -<li>[Printzone]: This is an offline printing service that has branches in various cities in Indonesia. You can visit their nearest outlet and bring your invitation card as a Word document or a PDF file on a flash drive or a CD. You can choose from various paper types, sizes, colors, finishes, and quantities. The cost starts from Rp 750 per sheet for A4 size paper with two invitation cards per sheet. You can also choose to have your invitation card laminated, cut, or folded for an additional fee.</li> -<li>[Printshop]: This is another offline printing service that has branches in various cities in Indonesia. You can visit their nearest outlet and bring your invitation card as a Word document or a PDF file on a flash drive or a CD. You can choose from various paper types, sizes, colors, finishes, and quantities. The cost starts from Rp 1000 per sheet for A4 size paper with two invitation cards per sheet. You can also choose to have your invitation card laminated, cut, or folded for an additional fee.</li> -</ul> - <h2>Conclusion</h2> - <p>In this article, we have shown you how to download Undangan Tahlil 1 Lembar Isi 2 Doc, a Word document format that allows you to customize and print two invitation cards on one sheet of paper. We have also explained what Undangan Tahlil is, why it is important, and how to print it easily and affordably. We hope that this article has helped you to create and print your own invitation card for your Tahlil ceremony. We wish you and your family peace and blessings.</p> - <h2>FAQs</h2> - <p>Here are some frequently asked questions about Undangan Tahlil:</p> - <ol> -<li>Q: What is the etiquette for attending a Tahlil ceremony?</li> -<li>A: The etiquette for attending a Tahlil ceremony is to dress modestly, arrive on time, greet the host and the guests, join the prayer session, express condolences to the bereaved family and friends, and avoid talking about inappropriate or irrelevant topics.</li> -<li>Q: How long does a Tahlil ceremony last?</li> -<li>A: A Tahlil ceremony usually lasts for about an hour or less. It consists of reciting prayers and verses from the Quran, followed by a short sermon or speech by the host or a religious leader.</li> -<li>Q: What should I bring to a Tahlil ceremony?</li> -<li>A: You do not need to bring anything to a Tahlil ceremony unless you are invited to do so by the host. However, you can bring some flowers, food, or donations as a gesture of sympathy and support.</li> -<li>Q: Can I hold a Tahlil ceremony for someone who is not Muslim?</li> -<li>A: Yes, you can hold a Tahlil ceremony for someone who is not Muslim as long as you respect their beliefs and wishes. You can also invite non-Muslims to attend your Tahlil ceremony as long as they are comfortable and respectful.</li> -<li>Q: Can I use other formats or designs for my Undangan Tahlil?</li> -<li>A: Yes, you can use other formats or designs for your Undangan Tahlil as long as they are clear, appropriate, and respectful. You can also use other languages or scripts if needed.</li> -</ol></p> 197e85843d<br /> -<br /> -<br /> \ No newline at end of file diff --git a/spaces/skimai/DragGAN_Streamlit/stylegan2/dnnlib/__init__.py b/spaces/skimai/DragGAN_Streamlit/stylegan2/dnnlib/__init__.py deleted file mode 100644 index 2f08cf36f11f9b0fd94c1b7caeadf69b98375b04..0000000000000000000000000000000000000000 --- a/spaces/skimai/DragGAN_Streamlit/stylegan2/dnnlib/__init__.py +++ /dev/null @@ -1,9 +0,0 @@ -# Copyright (c) 2021, NVIDIA CORPORATION. All rights reserved. -# -# NVIDIA CORPORATION and its licensors retain all intellectual property -# and proprietary rights in and to this software, related documentation -# and any modifications thereto. Any use, reproduction, disclosure or -# distribution of this software and related documentation without an express -# license agreement from NVIDIA CORPORATION is strictly prohibited. - -from .util import EasyDict, make_cache_dir_path diff --git a/spaces/skura/sk-06-SL-AI-Image-Music-Video-UI-UX-URL/app.py b/spaces/skura/sk-06-SL-AI-Image-Music-Video-UI-UX-URL/app.py deleted file mode 100644 index 0f4298365bc4f58d285202fb9442e12805d2db95..0000000000000000000000000000000000000000 --- a/spaces/skura/sk-06-SL-AI-Image-Music-Video-UI-UX-URL/app.py +++ /dev/null @@ -1,45 +0,0 @@ -import streamlit as st -import gradio as gr -import IPython -import streamlit as st -import streamlit.components.v1 as components -from IPython.display import IFrame - -src='' # URL parameter to change the iframe url -def SetIframeURL(option_selected): - if (option_selected=='Collager'): - src='https://www.artbreeder.com/' - if (option_selected=='Midjourney'): - src='https://www.midjourney.com/' - if (option_selected=='DreamStudio'): - src='https://beta.dreamstudio.ai/' - if (option_selected=='NightCafe'): - src='https://creator.nightcafe.studio/' - if (option_selected=='RunwayML'): - src='https://app.runwayml.com/' - if (option_selected=='ArtFromTextandImages'): - src='https://huggingface.co/spaces/awacke1/Art-from-Text-and-Images' - if (option_selected=='Boomy'): - src='https://boomy.com/' - - width = st.sidebar.slider("Width", 200, 1500, 800, 100) - height = st.sidebar.slider("Height", 200, 1500, 900, 100) - st.components.v1.iframe(src, width, height, scrolling=True) - -try: - options = ['Midjourney', 'RunwayML', 'Boomy'] - query_params = st.experimental_get_query_params() - query_option = query_params['option'][0] #throws an exception when visiting http://host:port - option_selected = st.sidebar.selectbox('Pick option', options, index=options.index(query_option)) - if option_selected: - st.experimental_set_query_params(option=option_selected) - SetIframeURL(option_selected) -except: - options = ['Midjourney', 'RunwayML', 'Boomy'] - st.experimental_set_query_params(option=options[1]) # defaults to 1 - query_params = st.experimental_get_query_params() - query_option = query_params['option'][0] - option_selected = st.sidebar.selectbox('Pick option', options, index=options.index(query_option)) - if option_selected: - st.experimental_set_query_params(option=option_selected) - SetIframeURL(option_selected) \ No newline at end of file diff --git a/spaces/snoopyv126/gpt/README.md b/spaces/snoopyv126/gpt/README.md deleted file mode 100644 index 2e9d69a73bbf9b6dc79cd1db81b95cf5cc83e847..0000000000000000000000000000000000000000 --- a/spaces/snoopyv126/gpt/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Gpt -emoji: ⚡ -colorFrom: purple -colorTo: green -sdk: gradio -sdk_version: 3.27.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/spritlesoftware/Image-Object-Detection/app.py b/spaces/spritlesoftware/Image-Object-Detection/app.py deleted file mode 100644 index 4d98b29515354c6a24e73cc2652efeb6f8f3b828..0000000000000000000000000000000000000000 --- a/spaces/spritlesoftware/Image-Object-Detection/app.py +++ /dev/null @@ -1,65 +0,0 @@ -import gradio as gr -import torch -from transformers import DetrImageProcessor, DetrForObjectDetection -from color import Color -from color_wheel import ColorWheel -from PIL import ImageDraw, ImageFont - -processor = DetrImageProcessor.from_pretrained('facebook/detr-resnet-50') -model = DetrForObjectDetection.from_pretrained('facebook/detr-resnet-50') - -def process_image(image, margin): - if image is None: - yield [None, None, None] - return - - color = Color.fromRgb(0xff, 0x0, 0x0) - cwt = ColorWheel(color) - colors = [] - for t in cwt.tone15: - cw = ColorWheel(Color.fromRgb(t.r, t.g, t.b)) - for h in cw.hue: - colors.append((h.r, h.g, h.b)) - - inputs = processor(images=image, return_tensors='pt') - outputs = model(**inputs) - target_sizes = torch.tensor([image.size[::-1]]) - results = processor.post_process_object_detection(outputs, target_sizes=target_sizes, threshold=0.9)[0] - - index = 0 - gallery = [] - labels = [] - drawImage = image.copy() - draw = ImageDraw.Draw(drawImage) - for score, label, box in zip(results['scores'], results['labels'], results['boxes']): - if index >= len(colors): - break - box = [round(i) for i in box.tolist()] - box[0] = max(0, box[0] - margin) - box[1] = max(0, box[1] - margin) - box[2] = min(image.width, box[2] + margin) - box[3] = min(image.height, box[3] + margin) - draw.rectangle([(box[0], box[1]), (box[2], box[3])], outline=colors[index], width=4) - gallery.append(image.crop((box[0], box[1], box[2], box[3]))) - labels.append(model.config.id2label[label.item()]) - index += 1 - yield [drawImage, gallery, ','.join(labels)] - -app = gr.Interface( - title='Object Detection for Image', - fn=process_image, - inputs=[ - gr.Image(type='pil'), - gr.Slider(maximum=100, step=1, label='margin'), - ], - outputs=[ - gr.Image(label='boxes', type='pil'), - gr.Gallery(label='gallery', columns=8, height=140), - gr.Textbox(label='text'), - ], - allow_flagging='never', - examples=[['examples/Wild.jpg', 0], ['examples/Football-Match.jpg', 0]], - #cache_examples=False -) -app.queue(concurrency_count=20) -app.launch() \ No newline at end of file diff --git a/spaces/sriramelango/Social_Classification_Public/fairseq/examples/speech_synthesis/preprocessing/denoiser/utils.py b/spaces/sriramelango/Social_Classification_Public/fairseq/examples/speech_synthesis/preprocessing/denoiser/utils.py deleted file mode 100644 index 734d047f1bb8e3aa98c88e152eee7f91fea3d814..0000000000000000000000000000000000000000 --- a/spaces/sriramelango/Social_Classification_Public/fairseq/examples/speech_synthesis/preprocessing/denoiser/utils.py +++ /dev/null @@ -1,176 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. -# author: adefossez - -import functools -import logging -from contextlib import contextmanager -import inspect -import time - -logger = logging.getLogger(__name__) - -EPS = 1e-8 - - -def capture_init(init): - """capture_init. - - Decorate `__init__` with this, and you can then - recover the *args and **kwargs passed to it in `self._init_args_kwargs` - """ - @functools.wraps(init) - def __init__(self, *args, **kwargs): - self._init_args_kwargs = (args, kwargs) - init(self, *args, **kwargs) - - return __init__ - - -def deserialize_model(package, strict=False): - """deserialize_model. - - """ - klass = package['class'] - if strict: - model = klass(*package['args'], **package['kwargs']) - else: - sig = inspect.signature(klass) - kw = package['kwargs'] - for key in list(kw): - if key not in sig.parameters: - logger.warning("Dropping inexistant parameter %s", key) - del kw[key] - model = klass(*package['args'], **kw) - model.load_state_dict(package['state']) - return model - - -def copy_state(state): - return {k: v.cpu().clone() for k, v in state.items()} - - -def serialize_model(model): - args, kwargs = model._init_args_kwargs - state = copy_state(model.state_dict()) - return {"class": model.__class__, "args": args, "kwargs": kwargs, "state": state} - - -@contextmanager -def swap_state(model, state): - """ - Context manager that swaps the state of a model, e.g: - - # model is in old state - with swap_state(model, new_state): - # model in new state - # model back to old state - """ - old_state = copy_state(model.state_dict()) - model.load_state_dict(state) - try: - yield - finally: - model.load_state_dict(old_state) - - -def pull_metric(history, name): - out = [] - for metrics in history: - if name in metrics: - out.append(metrics[name]) - return out - - -class LogProgress: - """ - Sort of like tqdm but using log lines and not as real time. - Args: - - logger: logger obtained from `logging.getLogger`, - - iterable: iterable object to wrap - - updates (int): number of lines that will be printed, e.g. - if `updates=5`, log every 1/5th of the total length. - - total (int): length of the iterable, in case it does not support - `len`. - - name (str): prefix to use in the log. - - level: logging level (like `logging.INFO`). - """ - def __init__(self, - logger, - iterable, - updates=5, - total=None, - name="LogProgress", - level=logging.INFO): - self.iterable = iterable - self.total = total or len(iterable) - self.updates = updates - self.name = name - self.logger = logger - self.level = level - - def update(self, **infos): - self._infos = infos - - def __iter__(self): - self._iterator = iter(self.iterable) - self._index = -1 - self._infos = {} - self._begin = time.time() - return self - - def __next__(self): - self._index += 1 - try: - value = next(self._iterator) - except StopIteration: - raise - else: - return value - finally: - log_every = max(1, self.total // self.updates) - # logging is delayed by 1 it, in order to have the metrics from update - if self._index >= 1 and self._index % log_every == 0: - self._log() - - def _log(self): - self._speed = (1 + self._index) / (time.time() - self._begin) - infos = " | ".join(f"{k.capitalize()} {v}" for k, v in self._infos.items()) - if self._speed < 1e-4: - speed = "oo sec/it" - elif self._speed < 0.1: - speed = f"{1/self._speed:.1f} sec/it" - else: - speed = f"{self._speed:.1f} it/sec" - out = f"{self.name} | {self._index}/{self.total} | {speed}" - if infos: - out += " | " + infos - self.logger.log(self.level, out) - - -def colorize(text, color): - """ - Display text with some ANSI color in the terminal. - """ - code = f"\033[{color}m" - restore = "\033[0m" - return "".join([code, text, restore]) - - -def bold(text): - """ - Display text in bold in the terminal. - """ - return colorize(text, "1") - - -def cal_snr(lbl, est): - import torch - y = 10.0 * torch.log10( - torch.sum(lbl**2, dim=-1) / (torch.sum((est-lbl)**2, dim=-1) + EPS) + - EPS - ) - return y diff --git a/spaces/sriramelango/Social_Classification_Public/fairseq/examples/speech_to_text/prep_librispeech_data.py b/spaces/sriramelango/Social_Classification_Public/fairseq/examples/speech_to_text/prep_librispeech_data.py deleted file mode 100644 index f379fa7bf195f48ad6b2ed3dbd93a5fbeb7abf79..0000000000000000000000000000000000000000 --- a/spaces/sriramelango/Social_Classification_Public/fairseq/examples/speech_to_text/prep_librispeech_data.py +++ /dev/null @@ -1,119 +0,0 @@ -#!/usr/bin/env python3 -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import argparse -import logging -from pathlib import Path -import shutil -from tempfile import NamedTemporaryFile - -import pandas as pd -from examples.speech_to_text.data_utils import ( - create_zip, - extract_fbank_features, - gen_config_yaml, - gen_vocab, - get_zip_manifest, - save_df_to_tsv, -) -from torchaudio.datasets import LIBRISPEECH -from tqdm import tqdm - - -log = logging.getLogger(__name__) - -SPLITS = [ - "train-clean-100", - "train-clean-360", - "train-other-500", - "dev-clean", - "dev-other", - "test-clean", - "test-other", -] - -MANIFEST_COLUMNS = ["id", "audio", "n_frames", "tgt_text", "speaker"] - - -def process(args): - out_root = Path(args.output_root).absolute() - out_root.mkdir(exist_ok=True) - # Extract features - feature_root = out_root / "fbank80" - feature_root.mkdir(exist_ok=True) - for split in SPLITS: - print(f"Fetching split {split}...") - dataset = LIBRISPEECH(out_root.as_posix(), url=split, download=True) - print("Extracting log mel filter bank features...") - for wav, sample_rate, _, spk_id, chapter_no, utt_no in tqdm(dataset): - sample_id = f"{spk_id}-{chapter_no}-{utt_no}" - extract_fbank_features( - wav, sample_rate, feature_root / f"{sample_id}.npy" - ) - # Pack features into ZIP - zip_path = out_root / "fbank80.zip" - print("ZIPing features...") - create_zip(feature_root, zip_path) - print("Fetching ZIP manifest...") - audio_paths, audio_lengths = get_zip_manifest(zip_path) - # Generate TSV manifest - print("Generating manifest...") - train_text = [] - for split in SPLITS: - manifest = {c: [] for c in MANIFEST_COLUMNS} - dataset = LIBRISPEECH(out_root.as_posix(), url=split) - for _, _, utt, spk_id, chapter_no, utt_no in tqdm(dataset): - sample_id = f"{spk_id}-{chapter_no}-{utt_no}" - manifest["id"].append(sample_id) - manifest["audio"].append(audio_paths[sample_id]) - manifest["n_frames"].append(audio_lengths[sample_id]) - manifest["tgt_text"].append(utt.lower()) - manifest["speaker"].append(spk_id) - save_df_to_tsv( - pd.DataFrame.from_dict(manifest), out_root / f"{split}.tsv" - ) - if split.startswith("train"): - train_text.extend(manifest["tgt_text"]) - # Generate vocab - vocab_size = "" if args.vocab_type == "char" else str(args.vocab_size) - spm_filename_prefix = f"spm_{args.vocab_type}{vocab_size}" - with NamedTemporaryFile(mode="w") as f: - for t in train_text: - f.write(t + "\n") - gen_vocab( - Path(f.name), - out_root / spm_filename_prefix, - args.vocab_type, - args.vocab_size, - ) - # Generate config YAML - gen_config_yaml( - out_root, - spm_filename=spm_filename_prefix + ".model", - specaugment_policy="ld" - ) - # Clean up - shutil.rmtree(feature_root) - - -def main(): - parser = argparse.ArgumentParser() - parser.add_argument("--output-root", "-o", required=True, type=str) - parser.add_argument( - "--vocab-type", - default="unigram", - required=True, - type=str, - choices=["bpe", "unigram", "char"], - ), - parser.add_argument("--vocab-size", default=10000, type=int) - args = parser.parse_args() - - process(args) - - -if __name__ == "__main__": - main() diff --git a/spaces/stomexserde/gpt4-ui/Examples/Cars3Englishfullmoviemp4download !FULL!.md b/spaces/stomexserde/gpt4-ui/Examples/Cars3Englishfullmoviemp4download !FULL!.md deleted file mode 100644 index c5e0f2cd43e6dd493896cc5e2f6b6b036e777201..0000000000000000000000000000000000000000 --- a/spaces/stomexserde/gpt4-ui/Examples/Cars3Englishfullmoviemp4download !FULL!.md +++ /dev/null @@ -1,18 +0,0 @@ - -<h1>Cars 3: How to Watch the Full Movie in English Online or Download MP4</h1> -<p>Cars 3 is the third installment of the popular animated franchise that follows the adventures of Lightning McQueen, a legendary race car who faces a new generation of rivals. In this film, McQueen teams up with a young technician named Cruz Ramirez, who helps him train for a comeback on the track. Along the way, he also learns some valuable lessons about life and racing.</p> -<h2>Cars3Englishfullmoviemp4download</h2><br /><p><b><b>Download File</b> &#10003; <a href="https://urlgoal.com/2uI7E8">https://urlgoal.com/2uI7E8</a></b></p><br /><br /> -<p>If you are a fan of Cars 3 and want to watch the full movie in English online or download it in MP4 format, here are some options for you:</p> -<ul> -<li>Disney+: You can stream Cars 3 on Disney+, the official streaming service of Disney that offers a vast library of movies, shows, and originals. You can sign up for a monthly or annual subscription and enjoy unlimited access to Cars 3 and other Disney titles. You can also download Cars 3 on your device for offline viewing.</li> -<li>YouTube: You can rent or buy Cars 3 on YouTube, the popular video-sharing platform that also offers movies and shows on demand. You can choose from different quality options and pay with your Google account. You can also download Cars 3 on your device for offline viewing.</li> -<li>Amazon Prime Video: You can rent or buy Cars 3 on Amazon Prime Video, the online video store that also offers a streaming service for Prime members. You can choose from different quality options and pay with your Amazon account. You can also download Cars 3 on your device for offline viewing.</li> -</ul> -<p>Cars 3 is a fun and heartwarming movie that will appeal to both kids and adults. It has a rating of G and a runtime of 102 minutes. It was released in 2017 and features the voices of Owen Wilson, Cristela Alonzo, Armie Hammer, Larry The Cable Guy, Bonnie Hunt, Cheech Marin, and more[^1^]. If you are looking for a way to watch Cars 3 in English online or download it in MP4 format, you can try any of the options mentioned above.</p> - -<p>Cars 3 is the sequel to Cars (2006) and Cars 2 (2011), which are also available to watch online or download in MP4 format. The Cars franchise is inspired by the real-life world of racing and features anthropomorphic vehicles as the main characters. The films are produced by Pixar Animation Studios and distributed by Walt Disney Pictures.</p> -<p>The plot of Cars 3 revolves around Lightning McQueen's struggle to cope with his aging and declining performance as a racer. He faces a new threat from Jackson Storm, a young and arrogant car who represents the next generation of high-tech racers. McQueen refuses to retire and decides to train with Cruz Ramirez, a spunky and optimistic female car who dreams of becoming a racer herself. Together, they embark on a journey across the country, where they encounter old friends and new challenges.</p> -<p></p> -<p>Cars 3 is a movie that celebrates the themes of friendship, mentorship, legacy, and passion. It also explores the issues of aging, identity, and diversity in a humorous and touching way. The movie has received positive reviews from critics and audiences alike, who praised its animation, story, characters, and emotional depth. It has also been nominated for several awards, including the Golden Globe for Best Animated Feature Film.</p> cec2833e83<br /> -<br /> -<br /> \ No newline at end of file diff --git a/spaces/stomexserde/gpt4-ui/Examples/Essentials Of General Organic And Biochemistry Denise Guinn.md b/spaces/stomexserde/gpt4-ui/Examples/Essentials Of General Organic And Biochemistry Denise Guinn.md deleted file mode 100644 index ffa907a31779430b2dfcb4c63b7a92e317579d76..0000000000000000000000000000000000000000 --- a/spaces/stomexserde/gpt4-ui/Examples/Essentials Of General Organic And Biochemistry Denise Guinn.md +++ /dev/null @@ -1,14 +0,0 @@ - -<h1>Book Review: Essentials of General, Organic and Biochemistry by Denise Guinn</h1> -<p>Essentials of General, Organic and Biochemistry is a textbook that aims to provide students with a solid foundation in the fundamental concepts of chemistry and their applications in health and medicine. The author, Denise Guinn, is a professor of chemistry at the College of New Rochelle and has a Ph.D. in organic chemistry from the University of Texas at Austin. She has also worked as a research scientist at Abbott Laboratories and has published several papers in the fields of synthetic organic chemistry and medicinal chemistry.</p> -<h2>Essentials of General, Organic and Biochemistry Denise Guinn</h2><br /><p><b><b>Download</b> &rarr; <a href="https://urlgoal.com/2uIbtg">https://urlgoal.com/2uIbtg</a></b></p><br /><br /> -<p>The book is divided into four parts: Part I covers the basics of general chemistry, such as atomic structure, chemical bonding, reactions, solutions, acids and bases, and nuclear chemistry. Part II introduces organic chemistry, including the structure and properties of organic molecules, functional groups, stereochemistry, organic reactions, and biomolecules. Part III focuses on biochemistry, covering the major classes of biomolecules (carbohydrates, lipids, proteins, and nucleic acids), metabolism, enzymes, DNA replication and repair, gene expression, and biotechnology. Part IV explores some topics related to health and medicine, such as nutrition, vitamins and minerals, drugs and drug design, hormones and neurotransmitters, and immunology.</p> -<p>The book is designed to be student-centered and engaging. Each chapter begins with a real-life story that illustrates the relevance of the chapter's topic to health and medicine. The text is clear and concise, with many examples, diagrams, tables, and figures to aid understanding. The book also features learning objectives, key terms, summaries, review questions, exercises, problems, case studies, clinical applications, and online resources for each chapter. The book is suitable for students who are preparing for careers in healthcare or who are interested in learning more about the chemistry behind biological processes.</p> -<p>Essentials of General, Organic and Biochemistry is a comprehensive and accessible textbook that offers a balanced and integrated approach to learning chemistry in the context of health and medicine. It is an ideal choice for students who want to master the essentials of chemistry without sacrificing depth or rigor.</p> - -<p>The book has been revised and updated to reflect the latest developments and discoveries in the fields of chemistry, biochemistry, and health. The third edition features new photos, many taken by the author, that bring authentic images of actual clinical practice to the textbook. The book also covers new topics such as atomic structure, nuclear radiation, measuring matter and energy, DNA repair, gene regulation, epigenetics, CRISPR-Cas9, pharmacogenomics, personalized medicine, and COVID-19.</p> -<p></p> -<p>The book is supported by a rich array of online resources that enhance student learning and instructor teaching. The book is accompanied by SaplingPlus, an online platform that provides students with an interactive eBook, adaptive quizzing, videos, animations, case studies, and clinical applications. SaplingPlus also offers instructors a gradebook, analytics, assignments, and resources for lecture preparation and assessment. In addition, the book has a companion website that provides additional resources such as flashcards, glossary, appendices, and links to relevant websites.</p> -<p>Essentials of General, Organic and Biochemistry is a well-written and well-organized textbook that covers the most important topics in chemistry and biochemistry for students who are interested in health and medicine. The book is engaging and relevant, with numerous examples and applications that connect chemistry to real-life situations. The book is also rigorous and comprehensive, with clear explanations and detailed illustrations that help students master the concepts and skills they need. The book is an excellent resource for students who want to learn chemistry in a meaningful and enjoyable way.</p> e93f5a0c3f<br /> -<br /> -<br /> \ No newline at end of file diff --git a/spaces/stomexserde/gpt4-ui/Examples/EximiousSoft Logo Designer Pro 3.90 Crack Serial Key Free Download [UPDATED].md b/spaces/stomexserde/gpt4-ui/Examples/EximiousSoft Logo Designer Pro 3.90 Crack Serial Key Free Download [UPDATED].md deleted file mode 100644 index 7eb348776a74817a636327b2c392ef300189f729..0000000000000000000000000000000000000000 --- a/spaces/stomexserde/gpt4-ui/Examples/EximiousSoft Logo Designer Pro 3.90 Crack Serial Key Free Download [UPDATED].md +++ /dev/null @@ -1,6 +0,0 @@ -<br /> -<h1>EximiousSoft Logo Designer Pro 3.90 Crack Serial key Free Download: Is It Worth It?</h1> | | Introduction: Explain what EximiousSoft Logo Designer Pro is, what it can do, and why people might want to use it. Mention that some people might be tempted to download a cracked version of the software instead of paying for it. | <p>If you are looking for a powerful and easy-to-use tool to create professional-looking logos for your website, business, or project, you might have heard of EximiousSoft Logo Designer Pro. This software is a feature-rich app that has a wide range of tools to design logos from scratch or customize over 4000+ templates and 5000+ vector graphics symbols. You can also import and export various file formats, apply effects and filters, and edit shapes and texts with full vector-based drawing tools.</p> -<h2>EximiousSoft Logo Designer Pro 3.90 Crack Serial key Free Download</h2><br /><p><b><b>Download File</b> &#8250; <a href="https://urlgoal.com/2uI72d">https://urlgoal.com/2uI72d</a></b></p><br /><br /><p>However, EximiousSoft Logo Designer Pro is not a free software. You need to pay $69.95 to get a lifetime license that includes free updates and technical support. Some people might not be willing or able to afford this price, and they might look for alternative ways to get the software for free. One of these ways is to download a cracked version of the software that bypasses the license key and activation process.</p><p>But is this really a good idea? What are the risks and consequences of using cracked software? And is there a better way to get EximiousSoft Logo Designer Pro without breaking the law or compromising your security? In this article, we will answer these questions and help you make an informed decision.</p> | | H2: What Is Software Cracking and How Does It Work? | <h2>What Is Software Cracking and How Does It Work?</h2> | | Explain the concept of software cracking, how it involves modifying or adding code to bypass licensing restrictions or encryption keys, and what are the common methods used by crackers (such as keygen cracking, patch cracking, loader cracking). | <p>Software cracking is the act of modifying or adding code to a software program to circumvent measures put in place by its developers or publishers to prevent unauthorized copying or distribution. These measures can include licensing restrictions, encryption keys, digital rights management (DRM), or online activation.</p><p>Crackers are people who create or use cracked software for various reasons, such as personal use, sharing with others, or making money from ads or malware. Crackers use different methods to crack software depending on the type and level of protection used by the original software. Some of these methods are:</p><ul><li>Keygen cracking: This involves using a key generation program (keygen) to produce valid license keys for software. The keygen analyzes the algorithm used by the original software to generate legitimate keys and replicates it.</li><li>Patch cracking: This involves using a patch program (patcher) to modify the executable file or other files of the software to remove or bypass the protection code. The patcher can also add new features or fix bugs in the software.</li><li>Loader cracking: This involves using a loader program (loader) to run the software without modifying its files. The loader intercepts the calls made by the software to check its license or activation status and returns fake responses.</li></ul> | | H2: What Are the Risks of Using Cracked Software? | <h2>What Are the Risks of Using Cracked Software?</h2> | | Explain the various risks and dangers of using cracked software, such as malware infections, security issues, legal problems, ethical issues, performance issues, lack of | Outline of the article | HTML formatting of the article | | --- | --- | | H3: Malware Infections | <h3>Malware Infections</h3> | | Explain how cracked software can contain malicious code that can infect your computer with viruses, worms, trojans, ransomware, spyware, adware, or other malware. Give some examples of malware that have been found in cracked software. Explain how malware can damage your files, steal your data, compromise your privacy, or hijack your system. | <p>One of the most common and serious risks of using cracked software is malware infection. Malware is any software that is designed to harm or exploit your computer or network. Malware can include viruses, worms, trojans, ransomware, spyware, adware, or other malicious code. Malware can infect your computer through various ways, such as downloading cracked software from untrusted sources, opening infected attachments or links, or running cracked software without proper antivirus protection.</p><p>Many crackers embed malware into cracked software to make money from ads or to steal your data or resources. For example, some cracked software can display unwanted pop-ups or banners on your screen, redirect your browser to malicious websites, or install unwanted toolbars or extensions on your browser. Some cracked software can also spy on your online activities, keystrokes, passwords, credit card numbers, or personal information. Some cracked software can even encrypt your files and demand a ransom to unlock them, or use your computer as part of a botnet to launch cyberattacks on other targets.</p><p>Malware can cause serious damage to your computer and data. It can corrupt or delete your files, slow down your system, crash your programs, or disable your security features. It can also expose your personal or financial information to hackers or identity thieves, who can use it for fraud or blackmail. It can also compromise your privacy and security by allowing remote access to your camera, microphone, or screen.</p> | | H3: Security Issues | <h3>Security Issues</h3> | | Explain how cracked software can pose security risks to your computer and network by creating vulnerabilities that can be exploited by hackers or cybercriminals. Explain how cracked software can bypass or disable security features such as firewalls, antivirus programs, or updates. Explain how cracked software can make you more susceptible to cyberattacks such as phishing, denial-of-service (DoS), or man-in-the-middle (MITM). | <p>Another risk of using cracked software is security issues. Cracked software can create vulnerabilities in your computer and network that can be exploited by hackers or cybercriminals. Cracked software can bypass or disable security features such as firewalls, antivirus programs, or updates that are designed to protect your system from threats. Cracked software can also make you more susceptible to cyberattacks such as phishing, denial-of-service (DoS), or man-in-the-middle (MITM).</p><p>Phishing is a type of cyberattack that involves sending fraudulent emails or messages that appear to come from legitimate sources but contain malicious links or attachments that can infect your computer with malware or steal your credentials. Cracked software can make you more vulnerable to phishing by altering your email settings, spoofing your sender address, or redirecting you to fake websites.</p> -<p></p><p>Denial-of-service (DoS) is a type of cyberattack that involves overwhelming your computer or network with requests or traffic that consume its resources and prevent it from functioning properly. Cracked software can make you more vulnerable to DoS by opening ports on your firewall, consuming your bandwidth, or joining a botnet that launches DoS attacks on other targets.</p><p>Man-in-the-middle (MITM) is a type of cyberattack that involves intercepting or altering the communication between two parties without their knowledge. Cracked software can make you more vulnerable to MITM by modifying your network settings, installing rogue certificates, or redirecting your traffic to malicious servers.</p> | | H3: Legal Problems | <h3>Legal Problems</h3> | | Explain how cracked software can violate the intellectual property rights of the original software developers or publishers and expose you to legal consequences such as lawsuits, | Outline of the article | HTML formatting of the article | | --- | --- | | fines, or criminal charges. Explain how software piracy is illegal in most countries and can result in civil or criminal penalties. Explain how software developers or publishers can track or sue users of cracked software by using digital watermarks, online verification, or legal actions. | <p>Another risk of using cracked software is legal problems. Cracked software can violate the intellectual property rights of the original software developers or publishers and expose you to legal consequences such as lawsuits, fines, or criminal charges. Software piracy is illegal in most countries and can result in civil or criminal penalties depending on the severity and scale of the infringement.</p><p>Software developers or publishers can track or sue users of cracked software by using various methods to protect their rights and interests. For example, some software can contain digital watermarks that can identify the source and distribution of cracked software. Some software can also require online verification or activation that can detect and block cracked software. Some software developers or publishers can also take legal actions against users of cracked software by sending cease and desist letters, filing lawsuits, or reporting them to law enforcement agencies.</p> | | H3: Ethical Issues | <h3>Ethical Issues</h3> | | Explain how cracked software can harm the original software developers or publishers by depriving them of revenue, recognition, or feedback. Explain how cracked software can also harm other users or customers by reducing the quality, support, or innovation of the software. Explain how cracked software can undermine the values and principles of fair and honest use of software. | <p>Another risk of using cracked software is ethical issues. Cracked software can harm the original software developers or publishers by depriving them of revenue, recognition, or feedback that they deserve for their work and effort. Cracked software can also harm other users or customers by reducing the quality, support, or innovation of the software that they pay for or rely on. Cracked software can undermine the values and principles of fair and honest use of software that respect the rights and interests of both creators and consumers.</p><p>Software developers or publishers invest a lot of time, money, and resources to create and maintain their software products. They also provide updates, fixes, enhancements, and customer service to their users or customers. They rely on the sales or subscriptions of their software to recover their costs and generate profits. When users download or use cracked software, they are stealing from the original developers or publishers and hurting their livelihoods and reputations.</p><p>Software users or customers expect to get high-quality, secure, and reliable software products that meet their needs and expectations. They also expect to get timely and professional support from the original developers or publishers in case they encounter any issues or problems with the software. When users download or use cracked software, they are compromising the quality, security, and reliability of the software and risking their own data and systems. They are also reducing the incentives and resources for the original developers or publishers to improve and innovate their software products.</p><p>Software is a valuable and useful tool that can benefit many people in various ways. However, it also requires a mutual trust and respect between the creators and consumers of software. When users download or use cracked software, they are breaking this trust and respect and violating the ethical norms and standards of using software. They are also setting a bad example for others and encouraging more piracy and cracking.</p> | | H3: Performance Issues | <h3>Performance Issues</h3> | | Explain how cracked software can affect the performance of your computer or system by causing errors, bugs, crashes, compatibility issues, or missing features. Explain how cracked software can also affect your productivity or efficiency by wasting your time, energy, or resources on fixing or troubleshooting issues caused by cracked software. | <p>Another risk of using cracked software is performance issues. Cracked software can affect the performance of your computer or system by causing errors, bugs, crashes, compatibility issues, or missing features. Cracked software can also affect your productivity or efficiency by wasting your time, energy, or resources on fixing or troubleshooting issues caused by cracked software.</p><p>Cracked software is often poorly coded, modified, or corrupted by crackers who do not have access to the source code or documentation of the original software. Cracked software can also be outdated, incomplete, or incompatible with your operating system, hardware, or other programs. As a result, cracked software can cause various problems such as errors in functionality, display, calculation, saving, printing, exporting, importing, etc., bugs that cause glitches, freezes, hangs, | Outline of the article | HTML formatting of the article | | --- | --- | | loops, or crashes, compatibility issues that prevent the software from working with your system or other programs, or missing features that limit the functionality or usability of the software.</p><p>Cracked software can also affect your productivity or efficiency by wasting your time, energy, or resources on fixing or troubleshooting issues caused by cracked software. You might spend hours or days searching for solutions online, downloading patches or fixes, reinstalling or uninstalling the software, or contacting support forums or groups. You might also lose your work or data due to errors or crashes, or have to redo your work due to bugs or missing features. You might also miss deadlines, lose clients, or damage your reputation due to poor quality or performance of your work.</p> | | H2: Is There a Better Way to Get EximiousSoft Logo Designer Pro Without Breaking the Law or Compromising Your Security? | <h2>Is There a Better Way to Get EximiousSoft Logo Designer Pro Without Breaking the Law or Compromising Your Security?</h2> | | Explain that there is a better way to get EximiousSoft Logo Designer Pro without breaking the law or compromising your security, which is to use a free trial version of the software. Explain the benefits and features of the free trial version, such as 30 days of full functionality, no watermark, no registration, no activation, and no risk. Explain how to download and install the free trial version from the official website. Explain how to use the free trial version to create and save logos. Explain how to purchase a license key and activate the full version of the software after the trial period ends. | <p>If you are interested in EximiousSoft Logo Designer Pro but you don't want to break the law or compromise your security by using cracked software, there is a better way to get it. You can use a free trial version of the software that gives you 30 days of full functionality without any limitations or risks.</p><p>The free trial version of EximiousSoft Logo Designer Pro has all the features and tools of the full version, such as over 4000+ templates and 5000+ vector graphics symbols, full vector-based drawing tools, various file formats support, effects and filters, shapes and texts editing, etc. You can create and save as many logos as you want without any watermark, registration, activation, or malware. You can also access free updates and technical support during the trial period.</p><p>To download and install the free trial version of EximiousSoft Logo Designer Pro, you just need to visit the official website of EximiousSoft and click on the "Download" button. You will get a setup file that you can run on your computer and follow the instructions to complete the installation. The installation process is fast and easy, and it does not require any personal information or payment details.</p><p>To use the free trial version of EximiousSoft Logo Designer Pro, you just need to launch the program and start designing your logos. You can choose from the templates and symbols library, or create your own logo from scratch. You can also import and export your logos in various file formats, such as JPG, PNG, GIF, BMP, TIFF, SVG, PDF, etc. You can save your logos on your computer or print them out.</p><p>If you like EximiousSoft Logo Designer Pro and want to continue using it after the trial period ends, you can purchase a license key and activate the full version of the software. The license key costs $69.95 and it gives you a lifetime license that includes free updates and technical support. You can buy the license key online from the official website of EximiousSoft by clicking on the "Buy Now" button. You will receive an email with your license key and instructions on how to activate it. The activation process is simple and quick, and it does not require any internet connection.</p> | | H2: Conclusion: Why You Should Avoid Cracked Software and Use Free Trial Version Instead | <h2>Conclusion: Why You Should Avoid Cracked Software and Use Free Trial Version Instead</h2> | | Summarize the main points of the article and restate why you should avoid cracked software and use free trial version instead. Emphasize | Outline of the article | HTML formatting of the article | | --- | --- | | the benefits and features of EximiousSoft Logo Designer Pro and how you can get it legally and safely. Encourage the reader to try the free trial version and buy the license key if they like it. | <p>In conclusion, EximiousSoft Logo Designer Pro is a powerful and easy-to-use software that can help you create professional-looking logos for your website, business, or project. It has a wide range of features and tools that can help you design logos from scratch or customize over 4000+ templates and 5000+ vector graphics symbols. It also supports various file formats, effects and filters, and vector-based drawing tools.</p><p>However, you should avoid using cracked software to get EximiousSoft Logo Designer Pro for free, as it can expose you to various risks and dangers, such as malware infections, security issues, legal problems, ethical issues, and performance issues. Cracked software can harm your computer, data, privacy, security, and reputation. It can also harm the original software developers or publishers, and other users or customers.</p><p>The better way to get EximiousSoft Logo Designer Pro is to use a free trial version that gives you 30 days of full functionality without any limitations or risks. You can download and install the free trial version from the official website of EximiousSoft and use it to create and save as many logos as you want. You can also access free updates and technical support during the trial period.</p><p>If you like EximiousSoft Logo Designer Pro and want to continue using it after the trial period ends, you can purchase a license key and activate the full version of the software. The license key costs $69.95 and it gives you a lifetime license that includes free updates and technical support. You can buy the license key online from the official website of EximiousSoft and activate it easily and quickly.</p><p>By using the free trial version and buying the license key, you are supporting the original software developers or publishers and respecting their intellectual property rights. You are also ensuring your own security, quality, and performance of the software. You are also following the ethical norms and standards of using software.</p><p>So what are you waiting for? Try EximiousSoft Logo Designer Pro today and see for yourself how it can help you create amazing logos in minutes!</p> | | H2: FAQs | <h2>FAQs</h2> | | Provide 5 unique FAQs related to the topic of the article, such as: What are the system requirements for EximiousSoft Logo Designer Pro? How can I contact EximiousSoft for support or feedback? What are some alternatives to EximiousSoft Logo Designer Pro? How can I uninstall EximiousSoft Logo Designer Pro? How can I update EximiousSoft Logo Designer Pro? | <h3>What are the system requirements for EximiousSoft Logo Designer Pro?</h3><p>To run EximiousSoft Logo Designer Pro on your computer, you need to have:</p><ul><li>Windows XP/2003/Vista/7/8/10 or later</li><li>Pentium 4 processor or higher</li><li>512 MB RAM or more</li><li>100 MB hard disk space or more</li><li>A monitor with 1024x768 resolution or higher</li></ul><h3>How can I contact EximiousSoft for support or feedback?</h3><p>If you have any questions, issues, or suggestions regarding EximiousSoft Logo Designer Pro, you can contact EximiousSoft by:</p><ul><li>Email: support@eximioussoft.com</li><li>Phone: +86-28-85432479</li><li>Fax: +86-28-85432479</li><li>Online form: https://www.eximioussoft.com/contact.htm</li></ul><h3>What are some alternatives to EximiousSoft Logo Designer Pro?</h3><p>If you are looking for other software that can help you create logos, you might want to check out some of these alternatives:</p><ul><li>Adobe Illustrator: This is a vector graphics editor that can help you create logos, icons, illustrations, typography, and more. It has a wide range of tools and features that can help you design logos with precision and creativity. It also integrates with other Adobe products such as Photoshop, InDesign, or After Effects.</li><li>CorelDRAW: This is a graphic design software that can help you create logos, graphics, layouts, illustrations, photos, web images, and more. It has a user-friendly interface and a comprehensive suite of tools and features that can help you design logos with ease and flexibility. It also supports various file formats and devices.</ | Outline of the article | HTML formatting of the article | | --- | --- | | li>Logo Maker: This is an online logo design service that can help you create logos in minutes. It has a simple and intuitive interface and a large collection of templates and icons that you can customize and download. It also offers a free trial and a money-back guarantee.</li><li>Canva: This is an online graphic design platform that can help you create logos, flyers, posters, social media graphics, and more. It has a drag-and-drop interface and a huge library of fonts, images, stickers, and shapes that you can use and edit. It also has a free plan and a premium plan that offers more features and resources.</li></ul><h3>How can I uninstall EximiousSoft Logo Designer Pro?</h3><p>If you want to uninstall EximiousSoft Logo Designer Pro from your computer, you can follow these steps:</p><ol><li>Close EximiousSoft Logo Designer Pro if it is running.</li><li>Go to the Start menu and click on Control Panel.</li><li>Click on Programs and Features or Uninstall a Program.</li><li>Find EximiousSoft Logo Designer Pro in the list of programs and click on Uninstall or Change.</li><li>Follow the instructions on the screen to complete the uninstallation process.</li></ol><h3>How can I update EximiousSoft Logo Designer Pro?</h3><p>If you want to update EximiousSoft Logo Designer Pro to the latest version, you can follow these steps:</p><ol><li>Open EximiousSoft Logo Designer Pro on your computer.</li><li>Go to the Help menu and click on Check for Updates.</li><li>If there is a new version available, click on Download and Install.</li><li>Follow the instructions on the screen to complete the update process.</li></ol> | | Custom message:</p> b2dd77e56b<br /> -<br /> -<br /> \ No newline at end of file diff --git a/spaces/sub314xxl/MetaGPT/tests/metagpt/utils/test_text.py b/spaces/sub314xxl/MetaGPT/tests/metagpt/utils/test_text.py deleted file mode 100644 index 0caf8abaad540810dbcd44b32640872fe6296587..0000000000000000000000000000000000000000 --- a/spaces/sub314xxl/MetaGPT/tests/metagpt/utils/test_text.py +++ /dev/null @@ -1,77 +0,0 @@ -import pytest - -from metagpt.utils.text import ( - decode_unicode_escape, - generate_prompt_chunk, - reduce_message_length, - split_paragraph, -) - - -def _msgs(): - length = 20 - while length: - yield "Hello," * 1000 * length - length -= 1 - - -def _paragraphs(n): - return " ".join("Hello World." for _ in range(n)) - - -@pytest.mark.parametrize( - "msgs, model_name, system_text, reserved, expected", - [ - (_msgs(), "gpt-3.5-turbo", "System", 1500, 1), - (_msgs(), "gpt-3.5-turbo-16k", "System", 3000, 6), - (_msgs(), "gpt-3.5-turbo-16k", "Hello," * 1000, 3000, 5), - (_msgs(), "gpt-4", "System", 2000, 3), - (_msgs(), "gpt-4", "Hello," * 1000, 2000, 2), - (_msgs(), "gpt-4-32k", "System", 4000, 14), - (_msgs(), "gpt-4-32k", "Hello," * 2000, 4000, 12), - ] -) -def test_reduce_message_length(msgs, model_name, system_text, reserved, expected): - assert len(reduce_message_length(msgs, model_name, system_text, reserved)) / (len("Hello,")) / 1000 == expected - - -@pytest.mark.parametrize( - "text, prompt_template, model_name, system_text, reserved, expected", - [ - (" ".join("Hello World." for _ in range(1000)), "Prompt: {}", "gpt-3.5-turbo", "System", 1500, 2), - (" ".join("Hello World." for _ in range(1000)), "Prompt: {}", "gpt-3.5-turbo-16k", "System", 3000, 1), - (" ".join("Hello World." for _ in range(4000)), "Prompt: {}", "gpt-4", "System", 2000, 2), - (" ".join("Hello World." for _ in range(8000)), "Prompt: {}", "gpt-4-32k", "System", 4000, 1), - ] -) -def test_generate_prompt_chunk(text, prompt_template, model_name, system_text, reserved, expected): - ret = list(generate_prompt_chunk(text, prompt_template, model_name, system_text, reserved)) - assert len(ret) == expected - - -@pytest.mark.parametrize( - "paragraph, sep, count, expected", - [ - (_paragraphs(10), ".", 2, [_paragraphs(5), f" {_paragraphs(5)}"]), - (_paragraphs(10), ".", 3, [_paragraphs(4), f" {_paragraphs(3)}", f" {_paragraphs(3)}"]), - (f"{_paragraphs(5)}\n{_paragraphs(3)}", "\n.", 2, [f"{_paragraphs(5)}\n", _paragraphs(3)]), - ("......", ".", 2, ["...", "..."]), - ("......", ".", 3, ["..", "..", ".."]), - (".......", ".", 2, ["....", "..."]), - ] -) -def test_split_paragraph(paragraph, sep, count, expected): - ret = split_paragraph(paragraph, sep, count) - assert ret == expected - - -@pytest.mark.parametrize( - "text, expected", - [ - ("Hello\\nWorld", "Hello\nWorld"), - ("Hello\\tWorld", "Hello\tWorld"), - ("Hello\\u0020World", "Hello World"), - ] -) -def test_decode_unicode_escape(text, expected): - assert decode_unicode_escape(text) == expected diff --git a/spaces/suchun/chatGPT_acdemic/crazy_functions/test_project/cpp/cppipc/ipc.cpp b/spaces/suchun/chatGPT_acdemic/crazy_functions/test_project/cpp/cppipc/ipc.cpp deleted file mode 100644 index c713b852ea5a51fbeb4729b64561da482caaf351..0000000000000000000000000000000000000000 --- a/spaces/suchun/chatGPT_acdemic/crazy_functions/test_project/cpp/cppipc/ipc.cpp +++ /dev/null @@ -1,701 +0,0 @@ - -#include <type_traits> -#include <cstring> -#include <algorithm> -#include <utility> // std::pair, std::move, std::forward -#include <atomic> -#include <type_traits> // aligned_storage_t -#include <string> -#include <vector> -#include <array> -#include <cassert> - -#include "libipc/ipc.h" -#include "libipc/def.h" -#include "libipc/shm.h" -#include "libipc/pool_alloc.h" -#include "libipc/queue.h" -#include "libipc/policy.h" -#include "libipc/rw_lock.h" -#include "libipc/waiter.h" - -#include "libipc/utility/log.h" -#include "libipc/utility/id_pool.h" -#include "libipc/utility/scope_guard.h" -#include "libipc/utility/utility.h" - -#include "libipc/memory/resource.h" -#include "libipc/platform/detail.h" -#include "libipc/circ/elem_array.h" - -namespace { - -using msg_id_t = std::uint32_t; -using acc_t = std::atomic<msg_id_t>; - -template <std::size_t DataSize, std::size_t AlignSize> -struct msg_t; - -template <std::size_t AlignSize> -struct msg_t<0, AlignSize> { - msg_id_t cc_id_; - msg_id_t id_; - std::int32_t remain_; - bool storage_; -}; - -template <std::size_t DataSize, std::size_t AlignSize> -struct msg_t : msg_t<0, AlignSize> { - std::aligned_storage_t<DataSize, AlignSize> data_ {}; - - msg_t() = default; - msg_t(msg_id_t cc_id, msg_id_t id, std::int32_t remain, void const * data, std::size_t size) - : msg_t<0, AlignSize> {cc_id, id, remain, (data == nullptr) || (size == 0)} { - if (this->storage_) { - if (data != nullptr) { - // copy storage-id - *reinterpret_cast<ipc::storage_id_t*>(&data_) = - *static_cast<ipc::storage_id_t const *>(data); - } - } - else std::memcpy(&data_, data, size); - } -}; - -template <typename T> -ipc::buff_t make_cache(T& data, std::size_t size) { - auto ptr = ipc::mem::alloc(size); - std::memcpy(ptr, &data, (ipc::detail::min)(sizeof(data), size)); - return { ptr, size, ipc::mem::free }; -} - -struct cache_t { - std::size_t fill_; - ipc::buff_t buff_; - - cache_t(std::size_t f, ipc::buff_t && b) - : fill_(f), buff_(std::move(b)) - {} - - void append(void const * data, std::size_t size) { - if (fill_ >= buff_.size() || data == nullptr || size == 0) return; - auto new_fill = (ipc::detail::min)(fill_ + size, buff_.size()); - std::memcpy(static_cast<ipc::byte_t*>(buff_.data()) + fill_, data, new_fill - fill_); - fill_ = new_fill; - } -}; - -auto cc_acc() { - static ipc::shm::handle acc_h("__CA_CONN__", sizeof(acc_t)); - return static_cast<acc_t*>(acc_h.get()); -} - -IPC_CONSTEXPR_ std::size_t align_chunk_size(std::size_t size) noexcept { - return (((size - 1) / ipc::large_msg_align) + 1) * ipc::large_msg_align; -} - -IPC_CONSTEXPR_ std::size_t calc_chunk_size(std::size_t size) noexcept { - return ipc::make_align(alignof(std::max_align_t), align_chunk_size( - ipc::make_align(alignof(std::max_align_t), sizeof(std::atomic<ipc::circ::cc_t>)) + size)); -} - -struct chunk_t { - std::atomic<ipc::circ::cc_t> &conns() noexcept { - return *reinterpret_cast<std::atomic<ipc::circ::cc_t> *>(this); - } - - void *data() noexcept { - return reinterpret_cast<ipc::byte_t *>(this) - + ipc::make_align(alignof(std::max_align_t), sizeof(std::atomic<ipc::circ::cc_t>)); - } -}; - -struct chunk_info_t { - ipc::id_pool<> pool_; - ipc::spin_lock lock_; - - IPC_CONSTEXPR_ static std::size_t chunks_mem_size(std::size_t chunk_size) noexcept { - return ipc::id_pool<>::max_count * chunk_size; - } - - ipc::byte_t *chunks_mem() noexcept { - return reinterpret_cast<ipc::byte_t *>(this + 1); - } - - chunk_t *at(std::size_t chunk_size, ipc::storage_id_t id) noexcept { - if (id < 0) return nullptr; - return reinterpret_cast<chunk_t *>(chunks_mem() + (chunk_size * id)); - } -}; - -auto& chunk_storages() { - class chunk_handle_t { - ipc::shm::handle handle_; - - public: - chunk_info_t *get_info(std::size_t chunk_size) { - if (!handle_.valid() && - !handle_.acquire( ("__CHUNK_INFO__" + ipc::to_string(chunk_size)).c_str(), - sizeof(chunk_info_t) + chunk_info_t::chunks_mem_size(chunk_size) )) { - ipc::error("[chunk_storages] chunk_shm.id_info_.acquire failed: chunk_size = %zd\n", chunk_size); - return nullptr; - } - auto info = static_cast<chunk_info_t*>(handle_.get()); - if (info == nullptr) { - ipc::error("[chunk_storages] chunk_shm.id_info_.get failed: chunk_size = %zd\n", chunk_size); - return nullptr; - } - return info; - } - }; - static ipc::map<std::size_t, chunk_handle_t> chunk_hs; - return chunk_hs; -} - -chunk_info_t *chunk_storage_info(std::size_t chunk_size) { - auto &storages = chunk_storages(); - std::decay_t<decltype(storages)>::iterator it; - { - static ipc::rw_lock lock; - IPC_UNUSED_ std::shared_lock<ipc::rw_lock> guard {lock}; - if ((it = storages.find(chunk_size)) == storages.end()) { - using chunk_handle_t = std::decay_t<decltype(storages)>::value_type::second_type; - guard.unlock(); - IPC_UNUSED_ std::lock_guard<ipc::rw_lock> guard {lock}; - it = storages.emplace(chunk_size, chunk_handle_t{}).first; - } - } - return it->second.get_info(chunk_size); -} - -std::pair<ipc::storage_id_t, void*> acquire_storage(std::size_t size, ipc::circ::cc_t conns) { - std::size_t chunk_size = calc_chunk_size(size); - auto info = chunk_storage_info(chunk_size); - if (info == nullptr) return {}; - - info->lock_.lock(); - info->pool_.prepare(); - // got an unique id - auto id = info->pool_.acquire(); - info->lock_.unlock(); - - auto chunk = info->at(chunk_size, id); - if (chunk == nullptr) return {}; - chunk->conns().store(conns, std::memory_order_relaxed); - return { id, chunk->data() }; -} - -void *find_storage(ipc::storage_id_t id, std::size_t size) { - if (id < 0) { - ipc::error("[find_storage] id is invalid: id = %ld, size = %zd\n", (long)id, size); - return nullptr; - } - std::size_t chunk_size = calc_chunk_size(size); - auto info = chunk_storage_info(chunk_size); - if (info == nullptr) return nullptr; - return info->at(chunk_size, id)->data(); -} - -void release_storage(ipc::storage_id_t id, std::size_t size) { - if (id < 0) { - ipc::error("[release_storage] id is invalid: id = %ld, size = %zd\n", (long)id, size); - return; - } - std::size_t chunk_size = calc_chunk_size(size); - auto info = chunk_storage_info(chunk_size); - if (info == nullptr) return; - info->lock_.lock(); - info->pool_.release(id); - info->lock_.unlock(); -} - -template <ipc::relat Rp, ipc::relat Rc> -bool sub_rc(ipc::wr<Rp, Rc, ipc::trans::unicast>, - std::atomic<ipc::circ::cc_t> &/*conns*/, ipc::circ::cc_t /*curr_conns*/, ipc::circ::cc_t /*conn_id*/) noexcept { - return true; -} - -template <ipc::relat Rp, ipc::relat Rc> -bool sub_rc(ipc::wr<Rp, Rc, ipc::trans::broadcast>, - std::atomic<ipc::circ::cc_t> &conns, ipc::circ::cc_t curr_conns, ipc::circ::cc_t conn_id) noexcept { - auto last_conns = curr_conns & ~conn_id; - for (unsigned k = 0;;) { - auto chunk_conns = conns.load(std::memory_order_acquire); - if (conns.compare_exchange_weak(chunk_conns, chunk_conns & last_conns, std::memory_order_release)) { - return (chunk_conns & last_conns) == 0; - } - ipc::yield(k); - } -} - -template <typename Flag> -void recycle_storage(ipc::storage_id_t id, std::size_t size, ipc::circ::cc_t curr_conns, ipc::circ::cc_t conn_id) { - if (id < 0) { - ipc::error("[recycle_storage] id is invalid: id = %ld, size = %zd\n", (long)id, size); - return; - } - std::size_t chunk_size = calc_chunk_size(size); - auto info = chunk_storage_info(chunk_size); - if (info == nullptr) return; - - auto chunk = info->at(chunk_size, id); - if (chunk == nullptr) return; - - if (!sub_rc(Flag{}, chunk->conns(), curr_conns, conn_id)) { - return; - } - info->lock_.lock(); - info->pool_.release(id); - info->lock_.unlock(); -} - -template <typename MsgT> -bool clear_message(void* p) { - auto msg = static_cast<MsgT*>(p); - if (msg->storage_) { - std::int32_t r_size = static_cast<std::int32_t>(ipc::data_length) + msg->remain_; - if (r_size <= 0) { - ipc::error("[clear_message] invalid msg size: %d\n", (int)r_size); - return true; - } - release_storage( - *reinterpret_cast<ipc::storage_id_t*>(&msg->data_), - static_cast<std::size_t>(r_size)); - } - return true; -} - -struct conn_info_head { - - ipc::string name_; - msg_id_t cc_id_; // connection-info id - ipc::detail::waiter cc_waiter_, wt_waiter_, rd_waiter_; - ipc::shm::handle acc_h_; - - conn_info_head(char const * name) - : name_ {name} - , cc_id_ {(cc_acc() == nullptr) ? 0 : cc_acc()->fetch_add(1, std::memory_order_relaxed)} - , cc_waiter_{("__CC_CONN__" + name_).c_str()} - , wt_waiter_{("__WT_CONN__" + name_).c_str()} - , rd_waiter_{("__RD_CONN__" + name_).c_str()} - , acc_h_ {("__AC_CONN__" + name_).c_str(), sizeof(acc_t)} { - } - - void quit_waiting() { - cc_waiter_.quit_waiting(); - wt_waiter_.quit_waiting(); - rd_waiter_.quit_waiting(); - } - - auto acc() { - return static_cast<acc_t*>(acc_h_.get()); - } - - auto& recv_cache() { - thread_local ipc::unordered_map<msg_id_t, cache_t> tls; - return tls; - } -}; - -template <typename W, typename F> -bool wait_for(W& waiter, F&& pred, std::uint64_t tm) { - if (tm == 0) return !pred(); - for (unsigned k = 0; pred();) { - bool ret = true; - ipc::sleep(k, [&k, &ret, &waiter, &pred, tm] { - ret = waiter.wait_if(std::forward<F>(pred), tm); - k = 0; - }); - if (!ret) return false; // timeout or fail - if (k == 0) break; // k has been reset - } - return true; -} - -template <typename Policy, - std::size_t DataSize = ipc::data_length, - std::size_t AlignSize = (ipc::detail::min)(DataSize, alignof(std::max_align_t))> -struct queue_generator { - - using queue_t = ipc::queue<msg_t<DataSize, AlignSize>, Policy>; - - struct conn_info_t : conn_info_head { - queue_t que_; - - conn_info_t(char const * name) - : conn_info_head{name} - , que_{("__QU_CONN__" + - ipc::to_string(DataSize) + "__" + - ipc::to_string(AlignSize) + "__" + name).c_str()} { - } - - void disconnect_receiver() { - bool dis = que_.disconnect(); - this->quit_waiting(); - if (dis) { - this->recv_cache().clear(); - } - } - }; -}; - -template <typename Policy> -struct detail_impl { - -using policy_t = Policy; -using flag_t = typename policy_t::flag_t; -using queue_t = typename queue_generator<policy_t>::queue_t; -using conn_info_t = typename queue_generator<policy_t>::conn_info_t; - -constexpr static conn_info_t* info_of(ipc::handle_t h) noexcept { - return static_cast<conn_info_t*>(h); -} - -constexpr static queue_t* queue_of(ipc::handle_t h) noexcept { - return (info_of(h) == nullptr) ? nullptr : &(info_of(h)->que_); -} - -/* API implementations */ - -static void disconnect(ipc::handle_t h) { - auto que = queue_of(h); - if (que == nullptr) { - return; - } - que->shut_sending(); - assert(info_of(h) != nullptr); - info_of(h)->disconnect_receiver(); -} - -static bool reconnect(ipc::handle_t * ph, bool start_to_recv) { - assert(ph != nullptr); - assert(*ph != nullptr); - auto que = queue_of(*ph); - if (que == nullptr) { - return false; - } - if (start_to_recv) { - que->shut_sending(); - if (que->connect()) { // wouldn't connect twice - info_of(*ph)->cc_waiter_.broadcast(); - return true; - } - return false; - } - // start_to_recv == false - if (que->connected()) { - info_of(*ph)->disconnect_receiver(); - } - return que->ready_sending(); -} - -static bool connect(ipc::handle_t * ph, char const * name, bool start_to_recv) { - assert(ph != nullptr); - if (*ph == nullptr) { - *ph = ipc::mem::alloc<conn_info_t>(name); - } - return reconnect(ph, start_to_recv); -} - -static void destroy(ipc::handle_t h) { - disconnect(h); - ipc::mem::free(info_of(h)); -} - -static std::size_t recv_count(ipc::handle_t h) noexcept { - auto que = queue_of(h); - if (que == nullptr) { - return ipc::invalid_value; - } - return que->conn_count(); -} - -static bool wait_for_recv(ipc::handle_t h, std::size_t r_count, std::uint64_t tm) { - auto que = queue_of(h); - if (que == nullptr) { - return false; - } - return wait_for(info_of(h)->cc_waiter_, [que, r_count] { - return que->conn_count() < r_count; - }, tm); -} - -template <typename F> -static bool send(F&& gen_push, ipc::handle_t h, void const * data, std::size_t size) { - if (data == nullptr || size == 0) { - ipc::error("fail: send(%p, %zd)\n", data, size); - return false; - } - auto que = queue_of(h); - if (que == nullptr) { - ipc::error("fail: send, queue_of(h) == nullptr\n"); - return false; - } - if (que->elems() == nullptr) { - ipc::error("fail: send, queue_of(h)->elems() == nullptr\n"); - return false; - } - if (!que->ready_sending()) { - ipc::error("fail: send, que->ready_sending() == false\n"); - return false; - } - ipc::circ::cc_t conns = que->elems()->connections(std::memory_order_relaxed); - if (conns == 0) { - ipc::error("fail: send, there is no receiver on this connection.\n"); - return false; - } - // calc a new message id - auto acc = info_of(h)->acc(); - if (acc == nullptr) { - ipc::error("fail: send, info_of(h)->acc() == nullptr\n"); - return false; - } - auto msg_id = acc->fetch_add(1, std::memory_order_relaxed); - auto try_push = std::forward<F>(gen_push)(info_of(h), que, msg_id); - if (size > ipc::large_msg_limit) { - auto dat = acquire_storage(size, conns); - void * buf = dat.second; - if (buf != nullptr) { - std::memcpy(buf, data, size); - return try_push(static_cast<std::int32_t>(size) - - static_cast<std::int32_t>(ipc::data_length), &(dat.first), 0); - } - // try using message fragment - //ipc::log("fail: shm::handle for big message. msg_id: %zd, size: %zd\n", msg_id, size); - } - // push message fragment - std::int32_t offset = 0; - for (std::int32_t i = 0; i < static_cast<std::int32_t>(size / ipc::data_length); ++i, offset += ipc::data_length) { - if (!try_push(static_cast<std::int32_t>(size) - offset - static_cast<std::int32_t>(ipc::data_length), - static_cast<ipc::byte_t const *>(data) + offset, ipc::data_length)) { - return false; - } - } - // if remain > 0, this is the last message fragment - std::int32_t remain = static_cast<std::int32_t>(size) - offset; - if (remain > 0) { - if (!try_push(remain - static_cast<std::int32_t>(ipc::data_length), - static_cast<ipc::byte_t const *>(data) + offset, - static_cast<std::size_t>(remain))) { - return false; - } - } - return true; -} - -static bool send(ipc::handle_t h, void const * data, std::size_t size, std::uint64_t tm) { - return send([tm](auto info, auto que, auto msg_id) { - return [tm, info, que, msg_id](std::int32_t remain, void const * data, std::size_t size) { - if (!wait_for(info->wt_waiter_, [&] { - return !que->push( - [](void*) { return true; }, - info->cc_id_, msg_id, remain, data, size); - }, tm)) { - ipc::log("force_push: msg_id = %zd, remain = %d, size = %zd\n", msg_id, remain, size); - if (!que->force_push( - clear_message<typename queue_t::value_t>, - info->cc_id_, msg_id, remain, data, size)) { - return false; - } - } - info->rd_waiter_.broadcast(); - return true; - }; - }, h, data, size); -} - -static bool try_send(ipc::handle_t h, void const * data, std::size_t size, std::uint64_t tm) { - return send([tm](auto info, auto que, auto msg_id) { - return [tm, info, que, msg_id](std::int32_t remain, void const * data, std::size_t size) { - if (!wait_for(info->wt_waiter_, [&] { - return !que->push( - [](void*) { return true; }, - info->cc_id_, msg_id, remain, data, size); - }, tm)) { - return false; - } - info->rd_waiter_.broadcast(); - return true; - }; - }, h, data, size); -} - -static ipc::buff_t recv(ipc::handle_t h, std::uint64_t tm) { - auto que = queue_of(h); - if (que == nullptr) { - ipc::error("fail: recv, queue_of(h) == nullptr\n"); - return {}; - } - if (!que->connected()) { - // hasn't connected yet, just return. - return {}; - } - auto& rc = info_of(h)->recv_cache(); - for (;;) { - // pop a new message - typename queue_t::value_t msg; - if (!wait_for(info_of(h)->rd_waiter_, [que, &msg] { - return !que->pop(msg); - }, tm)) { - // pop failed, just return. - return {}; - } - info_of(h)->wt_waiter_.broadcast(); - if ((info_of(h)->acc() != nullptr) && (msg.cc_id_ == info_of(h)->cc_id_)) { - continue; // ignore message to self - } - // msg.remain_ may minus & abs(msg.remain_) < data_length - std::int32_t r_size = static_cast<std::int32_t>(ipc::data_length) + msg.remain_; - if (r_size <= 0) { - ipc::error("fail: recv, r_size = %d\n", (int)r_size); - return {}; - } - std::size_t msg_size = static_cast<std::size_t>(r_size); - // large message - if (msg.storage_) { - ipc::storage_id_t buf_id = *reinterpret_cast<ipc::storage_id_t*>(&msg.data_); - void* buf = find_storage(buf_id, msg_size); - if (buf != nullptr) { - struct recycle_t { - ipc::storage_id_t storage_id; - ipc::circ::cc_t curr_conns; - ipc::circ::cc_t conn_id; - } *r_info = ipc::mem::alloc<recycle_t>(recycle_t{ - buf_id, que->elems()->connections(std::memory_order_relaxed), que->connected_id() - }); - if (r_info == nullptr) { - ipc::log("fail: ipc::mem::alloc<recycle_t>.\n"); - return ipc::buff_t{buf, msg_size}; // no recycle - } else { - return ipc::buff_t{buf, msg_size, [](void* p_info, std::size_t size) { - auto r_info = static_cast<recycle_t *>(p_info); - IPC_UNUSED_ auto finally = ipc::guard([r_info] { - ipc::mem::free(r_info); - }); - recycle_storage<flag_t>(r_info->storage_id, size, r_info->curr_conns, r_info->conn_id); - }, r_info}; - } - } else { - ipc::log("fail: shm::handle for large message. msg_id: %zd, buf_id: %zd, size: %zd\n", msg.id_, buf_id, msg_size); - continue; - } - } - // find cache with msg.id_ - auto cac_it = rc.find(msg.id_); - if (cac_it == rc.end()) { - if (msg_size <= ipc::data_length) { - return make_cache(msg.data_, msg_size); - } - // gc - if (rc.size() > 1024) { - std::vector<msg_id_t> need_del; - for (auto const & pair : rc) { - auto cmp = std::minmax(msg.id_, pair.first); - if (cmp.second - cmp.first > 8192) { - need_del.push_back(pair.first); - } - } - for (auto id : need_del) rc.erase(id); - } - // cache the first message fragment - rc.emplace(msg.id_, cache_t { ipc::data_length, make_cache(msg.data_, msg_size) }); - } - // has cached before this message - else { - auto& cac = cac_it->second; - // this is the last message fragment - if (msg.remain_ <= 0) { - cac.append(&(msg.data_), msg_size); - // finish this message, erase it from cache - auto buff = std::move(cac.buff_); - rc.erase(cac_it); - return buff; - } - // there are remain datas after this message - cac.append(&(msg.data_), ipc::data_length); - } - } -} - -static ipc::buff_t try_recv(ipc::handle_t h) { - return recv(h, 0); -} - -}; // detail_impl<Policy> - -template <typename Flag> -using policy_t = ipc::policy::choose<ipc::circ::elem_array, Flag>; - -} // internal-linkage - -namespace ipc { - -template <typename Flag> -ipc::handle_t chan_impl<Flag>::inited() { - ipc::detail::waiter::init(); - return nullptr; -} - -template <typename Flag> -bool chan_impl<Flag>::connect(ipc::handle_t * ph, char const * name, unsigned mode) { - return detail_impl<policy_t<Flag>>::connect(ph, name, mode & receiver); -} - -template <typename Flag> -bool chan_impl<Flag>::reconnect(ipc::handle_t * ph, unsigned mode) { - return detail_impl<policy_t<Flag>>::reconnect(ph, mode & receiver); -} - -template <typename Flag> -void chan_impl<Flag>::disconnect(ipc::handle_t h) { - detail_impl<policy_t<Flag>>::disconnect(h); -} - -template <typename Flag> -void chan_impl<Flag>::destroy(ipc::handle_t h) { - detail_impl<policy_t<Flag>>::destroy(h); -} - -template <typename Flag> -char const * chan_impl<Flag>::name(ipc::handle_t h) { - auto info = detail_impl<policy_t<Flag>>::info_of(h); - return (info == nullptr) ? nullptr : info->name_.c_str(); -} - -template <typename Flag> -std::size_t chan_impl<Flag>::recv_count(ipc::handle_t h) { - return detail_impl<policy_t<Flag>>::recv_count(h); -} - -template <typename Flag> -bool chan_impl<Flag>::wait_for_recv(ipc::handle_t h, std::size_t r_count, std::uint64_t tm) { - return detail_impl<policy_t<Flag>>::wait_for_recv(h, r_count, tm); -} - -template <typename Flag> -bool chan_impl<Flag>::send(ipc::handle_t h, void const * data, std::size_t size, std::uint64_t tm) { - return detail_impl<policy_t<Flag>>::send(h, data, size, tm); -} - -template <typename Flag> -buff_t chan_impl<Flag>::recv(ipc::handle_t h, std::uint64_t tm) { - return detail_impl<policy_t<Flag>>::recv(h, tm); -} - -template <typename Flag> -bool chan_impl<Flag>::try_send(ipc::handle_t h, void const * data, std::size_t size, std::uint64_t tm) { - return detail_impl<policy_t<Flag>>::try_send(h, data, size, tm); -} - -template <typename Flag> -buff_t chan_impl<Flag>::try_recv(ipc::handle_t h) { - return detail_impl<policy_t<Flag>>::try_recv(h); -} - -template struct chan_impl<ipc::wr<relat::single, relat::single, trans::unicast >>; -// template struct chan_impl<ipc::wr<relat::single, relat::multi , trans::unicast >>; // TBD -// template struct chan_impl<ipc::wr<relat::multi , relat::multi , trans::unicast >>; // TBD -template struct chan_impl<ipc::wr<relat::single, relat::multi , trans::broadcast>>; -template struct chan_impl<ipc::wr<relat::multi , relat::multi , trans::broadcast>>; - -} // namespace ipc diff --git a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Arma1gamefreedownloadfullversion [BEST].md b/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Arma1gamefreedownloadfullversion [BEST].md deleted file mode 100644 index 32c5b213c3ee1e2b4f3710ad0179808bc0aa5ef6..0000000000000000000000000000000000000000 --- a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Arma1gamefreedownloadfullversion [BEST].md +++ /dev/null @@ -1,6 +0,0 @@ -<h2>arma1gamefreedownloadfullversion</h2><br /><p><b><b>Download File</b> &#10001; <a href="https://cinurl.com/2uEZ2C">https://cinurl.com/2uEZ2C</a></b></p><br /><br /> -<br /> -Microsoft Office 2007 Language Pack ROMANIAN (Proofing Tools) Utorrent · arma1gamefreedownloadfullversion · credit wizard v1.1 · Previous. 1fdad05405<br /> -<br /> -<br /> -<p></p> diff --git a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Deadpool English Language Patch.md b/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Deadpool English Language Patch.md deleted file mode 100644 index 1e95d34f6b5c356f75781155a70b61f08b2e9fb7..0000000000000000000000000000000000000000 --- a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Deadpool English Language Patch.md +++ /dev/null @@ -1,13 +0,0 @@ -<br /> -<h1>How to Play Deadpool in English on Your PC</h1> -<p>Deadpool is a hilarious and action-packed video game based on the Marvel comic book character of the same name. The game features Deadpool's signature fourth-wall breaking humor, as well as his arsenal of weapons and gadgets. However, some players may have trouble playing the game in English, especially if they have a non-English version of the game or a different region setting on their PC. In this article, we will show you how to install an English language patch for Deadpool and enjoy the game in its original language.</p> -<h2>Deadpool english language patch</h2><br /><p><b><b>DOWNLOAD</b> &#10038; <a href="https://cinurl.com/2uEXMZ">https://cinurl.com/2uEXMZ</a></b></p><br /><br /> -<p>The first step is to download the English language patch from a reliable source. One such source is Zeno.FM, which offers a radio station called Deadpool English Language Patch[^1^]. You can listen to this station for free and download the patch file from the link provided in the description. Alternatively, you can search for other sources online, but make sure they are safe and trustworthy.</p> -<p>The next step is to locate your Deadpool game folder on your PC. This may vary depending on where you installed the game, but usually it is in C:\Program Files (x86)\Steam\steamapps\common\Deadpool. Once you find the folder, open it and look for a file called steam_api.ini. This file contains the language setting for the game. Open it with a text editor and change the line that says Language= to Language=english. Save and close the file.</p> -<p>The final step is to copy and paste the patch file that you downloaded into your Deadpool game folder. Overwrite any existing files if prompted. This will replace the original language files with the English ones. Now you can launch the game and enjoy Deadpool's witty remarks and jokes in English.</p> -<p>We hope this article was helpful and informative. If you have any questions or problems, feel free to leave a comment below or contact us through our website. Happy gaming!</p> -<p></p><p>If you want to learn more about Deadpool and his adventures, you can check out the official website of the game, which features trailers, screenshots, wallpapers, and more. You can also read the comic books that inspired the game, which are available in print and digital formats from Marvel Comics. Deadpool is one of the most popular and funny characters in the Marvel universe, and his comic books are full of action, humor, and references to pop culture.</p> -<p>Another way to enjoy Deadpool is to watch the movies that star him, which are also based on the comic books. The first movie, simply titled Deadpool, was released in 2016 and became a huge hit with fans and critics alike. It tells the origin story of Deadpool, how he became a mercenary with superpowers and a twisted sense of humor. The second movie, Deadpool 2, was released in 2018 and introduced new characters such as Cable, Domino, and X-Force. Both movies are rated R for violence, language, and sexual content, so they are not suitable for younger audiences.</p> -<p>Deadpool is a unique and entertaining character that appeals to many people who love action, comedy, and breaking the fourth wall. Whether you play the game, read the comics, or watch the movies, you will have a lot of fun with Deadpool and his antics. Just remember to always play the game in English, because that's how Deadpool would want it.</p> d5da3c52bf<br /> -<br /> -<br /> \ No newline at end of file diff --git a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Kung Fu Panda 2008 Br Rip 1080p Movie Torrents.md b/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Kung Fu Panda 2008 Br Rip 1080p Movie Torrents.md deleted file mode 100644 index 686d440ed953c3061d1104c2e66fee6cdda854a9..0000000000000000000000000000000000000000 --- a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Kung Fu Panda 2008 Br Rip 1080p Movie Torrents.md +++ /dev/null @@ -1,6 +0,0 @@ -<h2>Kung Fu Panda 2008 Br Rip 1080p Movie Torrents</h2><br /><p><b><b>Download</b> ===> <a href="https://cinurl.com/2uEXNy">https://cinurl.com/2uEXNy</a></b></p><br /><br /> - -Kung Fu Panda. 2008. Action / Adventure / Animation / Comedy / Family ... 720p.BLU 1080p.BLU. 601.87 MB. 1280*534. English. PG. Subtitles. 4d29de3e1b<br /> -<br /> -<br /> -<p></p> diff --git a/spaces/surmensipa/VITS-Umamusume-voice-synthesizer/logs/AMS Bianka Model (Sets 01 11) Rar.md b/spaces/surmensipa/VITS-Umamusume-voice-synthesizer/logs/AMS Bianka Model (Sets 01 11) Rar.md deleted file mode 100644 index 0cf3460fdffb16dbc6fd9b2f14fdd7f41cebbf83..0000000000000000000000000000000000000000 --- a/spaces/surmensipa/VITS-Umamusume-voice-synthesizer/logs/AMS Bianka Model (Sets 01 11) Rar.md +++ /dev/null @@ -1,6 +0,0 @@ -<h2>AMS Bianka Model (Sets 01 11) Rar</h2><br /><p><b><b>Download Zip</b> &raquo;&raquo;&raquo; <a href="https://urluss.com/2uCGk8">https://urluss.com/2uCGk8</a></b></p><br /><br /> - -AMS Bianka Model (Sets 01 11) rar rar · Lazesoft Recovery Suite 4.2.1 Professional Edition FULL Crack Serial Key · Ryrie Study Bible.pdf 4d29de3e1b<br /> -<br /> -<br /> -<p></p> diff --git a/spaces/surmensipa/VITS-Umamusume-voice-synthesizer/logs/Autocad 2012 Keygen 64 Bit.md b/spaces/surmensipa/VITS-Umamusume-voice-synthesizer/logs/Autocad 2012 Keygen 64 Bit.md deleted file mode 100644 index 5dfa58fea5d06b940dd48693fc519e2e51bcc90d..0000000000000000000000000000000000000000 --- a/spaces/surmensipa/VITS-Umamusume-voice-synthesizer/logs/Autocad 2012 Keygen 64 Bit.md +++ /dev/null @@ -1,131 +0,0 @@ -<br /> -<h1>How to Download and Activate Autocad 2012 x64 (64bit) + (Product key and Xforce keygen)</h1> -<p>Autocad 2012 is a powerful and versatile software that allows you to create and edit 2D and 3D designs for various purposes. Whether you are an architect, engineer, designer, or hobbyist, you can use Autocad 2012 to turn your ideas into reality. However, to use Autocad 2012, you need to have a valid product key and activate it with a keygen. In this article, we will show you how to download and activate Autocad 2012 x64 (64bit) + (Product key and Xforce keygen) for free.</p> -<h2>autocad 2012 keygen 64 bit</h2><br /><p><b><b>Download</b> &#10002; &#10002; &#10002; <a href="https://urluss.com/2uCFHq">https://urluss.com/2uCFHq</a></b></p><br /><br /> -<h2>What is Autocad 2012 x64 (64bit) + (Product key and Xforce keygen)?</h2> -<p>Autocad 2012 x64 (64bit) + (Product key and Xforce keygen) is a package that contains the installation files, the product key, and the keygen for Autocad 2012. The product key is a 16-digit code that identifies your product and allows you to install it. The keygen is a program that generates an activation code that you need to enter in the activation screen to activate your product. The activation code is a 25-digit code that verifies your product and unlocks its full features.</p> -<h3>How to Download Autocad 2012 x64 (64bit) + (Product key and Xforce keygen)?</h3> -<p>To download Autocad 2012 x64 (64bit) + (Product key and Xforce keygen), you need to follow these steps:</p> -<ol> -<li>Go to one of these links: https://spdyfl.com/bL6B or http://fileml.com/l/04do</li> -<li>Click on the download button and wait for the file to be downloaded.</li> -<li>Extract the file using WinRAR or any other extraction tool.</li> -<li>You will see a folder named "Autocad 2012 x64 (64bit) + (Product key and Xforce keygen)"</li> -<li>Open the folder and run the setup.exe file.</li> -<li>Follow the installation instructions and enter the product key when prompted.</li> -<li>The product key for Autocad 2012 is 001D1.</li> -<li>Finish the installation and restart your computer.</li> -</ol> -<h4>How to Activate Autocad 2012 x64 (64bit) + (Product key and Xforce keygen)?</h4> -<p>To activate Autocad 2012 x64 (64bit) + (Product key and Xforce keygen), you need to follow these steps:</p> -<ol> -<li>Disable your internet connection and antivirus software.</li> -<li>Open Autocad 2012 and click on Activate.</li> -<li>If it tells you that your serial is wrong, click on Close and click on Activate again.</li> -<li>Select "I have an activation code from Autodesk".</li> -<li>Open the folder "Autocad 2012 x64 (64bit) + (Product key and Xforce keygen)" and run the XFORCE Keygen 32bits or 64bits version depending on your system.</li> -<li>Click on Mem Patch (you should see "Successfully patched").</li> -<li>Copy the request code from the activation screen and paste it into the keygen.</li> -<li>Click on Generate and copy the activation code from the keygen.</li> -<li>Paste the activation code into the activation screen and click Next.</li> -<li>You have successfully activated Autocad 2012 x64 (64bit) + (Product key and Xforce keygen).</li> -</ol> -<h5>Conclusion</h5> -<p>Autocad 2012 x64 (64bit) + (Product key and Xforce keygen) is a great software that can help you create and edit amazing designs. However, to use it, you need to have a valid product key and activate it with a keygen. In this article, we showed you how to download and activate Autocad 2012 x64 (64bit) + (Product key and Xforce keygen) for free. We hope this article was helpful for you. If you have any questions or comments, feel free to leave them below.</p> -<p></p> -<h6>What are the Features of Autocad 2012 x64 (64bit) + (Product key and Xforce keygen)?</h6> -<p>Autocad 2012 x64 (64bit) + (Product key and Xforce keygen) has many features that make it a powerful and versatile software for your design needs. Some of the features are:</p> -<ul> -<li>It supports both 2D and 3D design and modeling, with a wide range of tools and commands.</li> -<li>It has a user-friendly interface that allows you to work efficiently and intuitively.</li> -<li>It has a powerful and flexible drawing engine that supports various file formats, such as DWG, DXF, DWF, PDF, etc.</li> -<li>It has a comprehensive documentation system that allows you to create and edit annotations, dimensions, tables, etc.</li> -<li>It has a dynamic block feature that allows you to create and modify complex objects with ease.</li> -<li>It has a parametric drawing feature that allows you to define and maintain geometric and dimensional relationships between objects.</li> -<li>It has an associative array feature that allows you to create and edit rectangular, polar, or path arrays of objects.</li> -<li>It has a multi-functional grip feature that allows you to modify objects quickly and easily.</li> -<li>It has an auto-complete command feature that helps you enter commands faster and more accurately.</li> -<li>It has an in-canvas viewport control feature that allows you to switch between different views of your model.</li> -</ul> -<h7>What are the Tips and Tricks for Using Autocad 2012 x64 (64bit) + (Product key and Xforce keygen)?</h7> -<p>Autocad 2012 x64 (64bit) + (Product key and Xforce keygen) is a software that requires some skills and knowledge to use effectively. Here are some tips and tricks that can help you improve your productivity and creativity with Autocad 2012:</p> -<ul> -<li>Use the keyboard shortcuts to access commands faster and easier.</li> -<li>Use the command line to enter commands directly or to access additional options.</li> -<li>Use the object snap feature to snap to precise points on objects.</li> -<li>Use the object snap tracking feature to align objects along horizontal or vertical lines.</li> -<li>Use the polar tracking feature to draw or move objects along specific angles.</li> -<li>Use the object selection feature to select objects by window, crossing, fence, etc.</li> -<li>Use the quick properties feature to view and edit properties of selected objects.</li> -<li>Use the quick access toolbar to access frequently used commands or tools.</li> -<li>Use the ribbon to access various tabs and panels of commands or tools.</li> -<li>Use the status bar to toggle various modes or settings, such as grid, snap, ortho, etc.</li> -</ul> -<h8>Conclusion</h8> -<p>Autocad 2012 x64 (64bit) + (Product key and Xforce keygen) is a great software that can help you create and edit amazing designs. However, to use it, you need to have a valid product key and activate it with a keygen. In this article, we showed you how to download and activate Autocad 2012 x64 (64bit) + (Product key and Xforce keygen) for free. We also showed you some of the benefits, features, tips, and tricks of using Autocad 2012. We hope this article was helpful for you. If you have any questions or comments, feel free to leave them below.</p> -<h6>What are the System Requirements for Autocad 2012 x64 (64bit) + (Product key and Xforce keygen)?</h6> -<p>Autocad 2012 x64 (64bit) + (Product key and Xforce keygen) is a software that requires some minimum system requirements to run smoothly and efficiently. Here are the system requirements for Autocad 2012:</p> -<ul> -<li>Operating System: Windows XP SP3, Windows Vista SP2, Windows 7 SP1, Windows 8, Windows 10</li> -<li>Processor: Intel Pentium 4 or AMD Athlon dual-core processor, 3 GHz or higher with SSE2 technology</li> -<li>Memory: 2 GB RAM (4 GB recommended)</li> -<li>Hard Disk: 6 GB free disk space for installation</li> -<li>Graphics: 1024 x 768 display resolution with true color (1600 x 1050 recommended) with DirectX 9 or DirectX 10 capable graphics card</li> -<li>Internet: Internet connection for activation and online services</li> -</ul> -<h7>How to Troubleshoot Autocad 2012 x64 (64bit) + (Product key and Xforce keygen)?</h7> -<p>Autocad 2012 x64 (64bit) + (Product key and Xforce keygen) is a software that may encounter some problems or errors during installation or activation. Here are some common issues and solutions that can help you troubleshoot Autocad 2012:</p> -<ul> -<li>If you get an error message that says "The serial number you entered is not valid", make sure you enter the correct product key (001D1) and serial number (666-69696969, 667-98989898, or 400-45454545).</li> -<li>If you get an error message that says "The activation code is invalid for this product", make sure you use the correct keygen (32-bit or 64-bit) and copy the request code and activation code correctly.</li> -<li>If you get an error message that says "The license manager is not functioning or is improperly installed", make sure you disable your internet connection and antivirus software before activating Autocad 2012.</li> -<li>If you get an error message that says "The application was unable to start correctly (0xc000007b)", make sure you install the latest updates and service packs for Autocad 2012 and your operating system.</li> -<li>If you get an error message that says "The program can't start because MSVCR100.dll is missing from your computer", make sure you install the Microsoft Visual C++ Redistributable Package for your system.</li> -</ul> -<h8>Conclusion</h8> -<p>Autocad 2012 x64 (64bit) + (Product key and Xforce keygen) is a great software that can help you create and edit amazing designs. However, to use it, you need to have a valid product key and activate it with a keygen. In this article, we showed you how to download and activate Autocad 2012 x64 (64bit) + (Product key and Xforce keygen) for free. We also showed you some of the benefits, features, tips, tricks, system requirements, and troubleshooting of using Autocad 2012. We hope this article was helpful for you. If you have any questions or comments, feel free to leave them below.</p> -<h1>How to Download and Activate Autocad 2012 x64 (64bit) + (Product key and Xforce keygen)</h1> -<p>Autocad 2012 is a powerful and versatile software that allows you to create and edit 2D and 3D designs for various purposes. Whether you are an architect, engineer, designer, or hobbyist, you can use Autocad 2012 to turn your ideas into reality. However, to use Autocad 2012, you need to have a valid product key and activate it with a keygen. In this article, we will show you how to download and activate Autocad 2012 x64 (64bit) + (Product key and Xforce keygen) for free.</p> -<h2>What is Autocad 2012 x64 (64bit) + (Product key and Xforce keygen)?</h2> -<p>Autocad 2012 x64 (64bit) + (Product key and Xforce keygen) is a package that contains the installation files, the product key, and the keygen for Autocad 2012. The product key is a 16-digit code that identifies your product and allows you to install it. The keygen is a program that generates an activation code that you need to enter in the activation screen to activate your product. The activation code is a 25-digit code that verifies your product and unlocks its full features.</p> -<h3>How to Download Autocad 2012 x64 (64bit) + (Product key and Xforce keygen)?</h3> -<p>To download Autocad 2012 x64 (64bit) + (Product key and Xforce keygen), you need to follow these steps:</p> -<ol> -<li>Go to one of these links: https://spdyfl.com/bL6B or http://fileml.com/l/04do</li> -<li>Click on the download button and wait for the file to be downloaded.</li> -<li>Extract the file using WinRAR or any other extraction tool.</li> -<li>You will see a folder named "Autocad 2012 x64 (64bit) + (Product key and Xforce keygen)"</li> -<li>Open the folder and run the setup.exe file.</li> -<li>Follow the installation instructions and enter the product key when prompted.</li> -<li>The product key for Autocad 2012 is 001D1.</li> -<li>Finish the installation and restart your computer.</li> -</ol> -<h4>How to Activate Autocad 2012 x64 (64bit) + (Product key and Xforce keygen)?</h4> -<p>To activate Autocad 2012 x64 (64bit) + (Product key and Xforce keygen), you need to follow these steps:</p> -<ol> -<li>Disable your internet connection and antivirus software.</li> -<li>Open Autocad 2012 and click on Activate.</li> -<li>If it tells you that your serial is wrong, click on Close and click on Activate again.</li> -<li>Select "I have an activation code from Autodesk".</li> -<li>Open the folder "Autocad 2012 x64 (64bit) + (Product key and Xforce keygen)" and run the XFORCE Keygen 32bits or 64bits version depending on your system.</li> -<li>Click on Mem Patch (you should see "Successfully patched").</li> -<li>Copy the request code from the activation screen and paste it into the keygen.</li> -<li>Click on Generate and copy the activation code from the keygen.</li> -<li>Paste the activation code into the activation screen and click Next.</li> -<li>You have successfully activated Autocad 2012 x64 (64bit) + (Product key and Xforce keygen).</li> -</ol> -<h5>What are the Benefits of Using Autocad 2012 x64 (64bit) + (Product key and Xforce keygen)?</h5> -<p>Autocad 2012 x64 (64bit) + (Product key and Xforce keygen) has many benefits that make it a great software for your design needs. Some of the benefits are:</p> -<ul> -<li>It supports both 2D and 3D design and modeling, with a wide range of tools and commands.</li> -<li>It has a user-friendly interface that allows you to work efficiently and intuitively.</li> -<li>It has a powerful and flexible drawing engine that supports various file formats, such as DWG, DXF, DWF, PDF, etc.</li> -<li>It has a comprehensive documentation system that allows you to create and edit annotations, dimensions, tables, etc.</li> -<li>It has a dynamic block feature that allows you to create and modify complex objects with ease.</li> -<li>It has a parametric drawing feature that allows you to define and maintain geometric and dimensional relationships between objects.</li> -<li>It has an associative array feature that allows you to create and edit rectangular, polar, or path arrays of objects.</li> -<li>It has a multi-functional grip feature that allows you to modify objects quickly -<h8>Conclusion</h8> -<p>Autocad 2012 x64 (64bit) + (Product key and Xforce keygen) is a great software that can help you create and edit amazing designs. However, to use it, you need to have a valid product key and activate it with a keygen. In this article, we showed you how to download and activate Autocad 2012 x64 (64bit) + (Product key and Xforce keygen) for free. We also showed you some of the benefits, features, tips, tricks, system requirements, and troubleshooting of using Autocad 2012. We hope this article was helpful for you. If you have any questions or comments, feel free to leave them below.</p> 3cee63e6c2<br /> -<br /> -<br /> \ No newline at end of file diff --git a/spaces/svjack/ControlNet-Pose-Chinese/annotator/uniformer/mmseg/core/evaluation/metrics.py b/spaces/svjack/ControlNet-Pose-Chinese/annotator/uniformer/mmseg/core/evaluation/metrics.py deleted file mode 100644 index ec4819e60e51a498fe7295498f09873a0705f308..0000000000000000000000000000000000000000 --- a/spaces/svjack/ControlNet-Pose-Chinese/annotator/uniformer/mmseg/core/evaluation/metrics.py +++ /dev/null @@ -1,326 +0,0 @@ -from collections import OrderedDict - -import annotator.uniformer.mmcv as mmcv -import numpy as np -import torch - - -def f_score(precision, recall, beta=1): - """calcuate the f-score value. - - Args: - precision (float | torch.Tensor): The precision value. - recall (float | torch.Tensor): The recall value. - beta (int): Determines the weight of recall in the combined score. - Default: False. - - Returns: - [torch.tensor]: The f-score value. - """ - score = (1 + beta**2) * (precision * recall) / ( - (beta**2 * precision) + recall) - return score - - -def intersect_and_union(pred_label, - label, - num_classes, - ignore_index, - label_map=dict(), - reduce_zero_label=False): - """Calculate intersection and Union. - - Args: - pred_label (ndarray | str): Prediction segmentation map - or predict result filename. - label (ndarray | str): Ground truth segmentation map - or label filename. - num_classes (int): Number of categories. - ignore_index (int): Index that will be ignored in evaluation. - label_map (dict): Mapping old labels to new labels. The parameter will - work only when label is str. Default: dict(). - reduce_zero_label (bool): Whether ignore zero label. The parameter will - work only when label is str. Default: False. - - Returns: - torch.Tensor: The intersection of prediction and ground truth - histogram on all classes. - torch.Tensor: The union of prediction and ground truth histogram on - all classes. - torch.Tensor: The prediction histogram on all classes. - torch.Tensor: The ground truth histogram on all classes. - """ - - if isinstance(pred_label, str): - pred_label = torch.from_numpy(np.load(pred_label)) - else: - pred_label = torch.from_numpy((pred_label)) - - if isinstance(label, str): - label = torch.from_numpy( - mmcv.imread(label, flag='unchanged', backend='pillow')) - else: - label = torch.from_numpy(label) - - if label_map is not None: - for old_id, new_id in label_map.items(): - label[label == old_id] = new_id - if reduce_zero_label: - label[label == 0] = 255 - label = label - 1 - label[label == 254] = 255 - - mask = (label != ignore_index) - pred_label = pred_label[mask] - label = label[mask] - - intersect = pred_label[pred_label == label] - area_intersect = torch.histc( - intersect.float(), bins=(num_classes), min=0, max=num_classes - 1) - area_pred_label = torch.histc( - pred_label.float(), bins=(num_classes), min=0, max=num_classes - 1) - area_label = torch.histc( - label.float(), bins=(num_classes), min=0, max=num_classes - 1) - area_union = area_pred_label + area_label - area_intersect - return area_intersect, area_union, area_pred_label, area_label - - -def total_intersect_and_union(results, - gt_seg_maps, - num_classes, - ignore_index, - label_map=dict(), - reduce_zero_label=False): - """Calculate Total Intersection and Union. - - Args: - results (list[ndarray] | list[str]): List of prediction segmentation - maps or list of prediction result filenames. - gt_seg_maps (list[ndarray] | list[str]): list of ground truth - segmentation maps or list of label filenames. - num_classes (int): Number of categories. - ignore_index (int): Index that will be ignored in evaluation. - label_map (dict): Mapping old labels to new labels. Default: dict(). - reduce_zero_label (bool): Whether ignore zero label. Default: False. - - Returns: - ndarray: The intersection of prediction and ground truth histogram - on all classes. - ndarray: The union of prediction and ground truth histogram on all - classes. - ndarray: The prediction histogram on all classes. - ndarray: The ground truth histogram on all classes. - """ - num_imgs = len(results) - assert len(gt_seg_maps) == num_imgs - total_area_intersect = torch.zeros((num_classes, ), dtype=torch.float64) - total_area_union = torch.zeros((num_classes, ), dtype=torch.float64) - total_area_pred_label = torch.zeros((num_classes, ), dtype=torch.float64) - total_area_label = torch.zeros((num_classes, ), dtype=torch.float64) - for i in range(num_imgs): - area_intersect, area_union, area_pred_label, area_label = \ - intersect_and_union( - results[i], gt_seg_maps[i], num_classes, ignore_index, - label_map, reduce_zero_label) - total_area_intersect += area_intersect - total_area_union += area_union - total_area_pred_label += area_pred_label - total_area_label += area_label - return total_area_intersect, total_area_union, total_area_pred_label, \ - total_area_label - - -def mean_iou(results, - gt_seg_maps, - num_classes, - ignore_index, - nan_to_num=None, - label_map=dict(), - reduce_zero_label=False): - """Calculate Mean Intersection and Union (mIoU) - - Args: - results (list[ndarray] | list[str]): List of prediction segmentation - maps or list of prediction result filenames. - gt_seg_maps (list[ndarray] | list[str]): list of ground truth - segmentation maps or list of label filenames. - num_classes (int): Number of categories. - ignore_index (int): Index that will be ignored in evaluation. - nan_to_num (int, optional): If specified, NaN values will be replaced - by the numbers defined by the user. Default: None. - label_map (dict): Mapping old labels to new labels. Default: dict(). - reduce_zero_label (bool): Whether ignore zero label. Default: False. - - Returns: - dict[str, float | ndarray]: - <aAcc> float: Overall accuracy on all images. - <Acc> ndarray: Per category accuracy, shape (num_classes, ). - <IoU> ndarray: Per category IoU, shape (num_classes, ). - """ - iou_result = eval_metrics( - results=results, - gt_seg_maps=gt_seg_maps, - num_classes=num_classes, - ignore_index=ignore_index, - metrics=['mIoU'], - nan_to_num=nan_to_num, - label_map=label_map, - reduce_zero_label=reduce_zero_label) - return iou_result - - -def mean_dice(results, - gt_seg_maps, - num_classes, - ignore_index, - nan_to_num=None, - label_map=dict(), - reduce_zero_label=False): - """Calculate Mean Dice (mDice) - - Args: - results (list[ndarray] | list[str]): List of prediction segmentation - maps or list of prediction result filenames. - gt_seg_maps (list[ndarray] | list[str]): list of ground truth - segmentation maps or list of label filenames. - num_classes (int): Number of categories. - ignore_index (int): Index that will be ignored in evaluation. - nan_to_num (int, optional): If specified, NaN values will be replaced - by the numbers defined by the user. Default: None. - label_map (dict): Mapping old labels to new labels. Default: dict(). - reduce_zero_label (bool): Whether ignore zero label. Default: False. - - Returns: - dict[str, float | ndarray]: Default metrics. - <aAcc> float: Overall accuracy on all images. - <Acc> ndarray: Per category accuracy, shape (num_classes, ). - <Dice> ndarray: Per category dice, shape (num_classes, ). - """ - - dice_result = eval_metrics( - results=results, - gt_seg_maps=gt_seg_maps, - num_classes=num_classes, - ignore_index=ignore_index, - metrics=['mDice'], - nan_to_num=nan_to_num, - label_map=label_map, - reduce_zero_label=reduce_zero_label) - return dice_result - - -def mean_fscore(results, - gt_seg_maps, - num_classes, - ignore_index, - nan_to_num=None, - label_map=dict(), - reduce_zero_label=False, - beta=1): - """Calculate Mean Intersection and Union (mIoU) - - Args: - results (list[ndarray] | list[str]): List of prediction segmentation - maps or list of prediction result filenames. - gt_seg_maps (list[ndarray] | list[str]): list of ground truth - segmentation maps or list of label filenames. - num_classes (int): Number of categories. - ignore_index (int): Index that will be ignored in evaluation. - nan_to_num (int, optional): If specified, NaN values will be replaced - by the numbers defined by the user. Default: None. - label_map (dict): Mapping old labels to new labels. Default: dict(). - reduce_zero_label (bool): Whether ignore zero label. Default: False. - beta (int): Determines the weight of recall in the combined score. - Default: False. - - - Returns: - dict[str, float | ndarray]: Default metrics. - <aAcc> float: Overall accuracy on all images. - <Fscore> ndarray: Per category recall, shape (num_classes, ). - <Precision> ndarray: Per category precision, shape (num_classes, ). - <Recall> ndarray: Per category f-score, shape (num_classes, ). - """ - fscore_result = eval_metrics( - results=results, - gt_seg_maps=gt_seg_maps, - num_classes=num_classes, - ignore_index=ignore_index, - metrics=['mFscore'], - nan_to_num=nan_to_num, - label_map=label_map, - reduce_zero_label=reduce_zero_label, - beta=beta) - return fscore_result - - -def eval_metrics(results, - gt_seg_maps, - num_classes, - ignore_index, - metrics=['mIoU'], - nan_to_num=None, - label_map=dict(), - reduce_zero_label=False, - beta=1): - """Calculate evaluation metrics - Args: - results (list[ndarray] | list[str]): List of prediction segmentation - maps or list of prediction result filenames. - gt_seg_maps (list[ndarray] | list[str]): list of ground truth - segmentation maps or list of label filenames. - num_classes (int): Number of categories. - ignore_index (int): Index that will be ignored in evaluation. - metrics (list[str] | str): Metrics to be evaluated, 'mIoU' and 'mDice'. - nan_to_num (int, optional): If specified, NaN values will be replaced - by the numbers defined by the user. Default: None. - label_map (dict): Mapping old labels to new labels. Default: dict(). - reduce_zero_label (bool): Whether ignore zero label. Default: False. - Returns: - float: Overall accuracy on all images. - ndarray: Per category accuracy, shape (num_classes, ). - ndarray: Per category evaluation metrics, shape (num_classes, ). - """ - if isinstance(metrics, str): - metrics = [metrics] - allowed_metrics = ['mIoU', 'mDice', 'mFscore'] - if not set(metrics).issubset(set(allowed_metrics)): - raise KeyError('metrics {} is not supported'.format(metrics)) - - total_area_intersect, total_area_union, total_area_pred_label, \ - total_area_label = total_intersect_and_union( - results, gt_seg_maps, num_classes, ignore_index, label_map, - reduce_zero_label) - all_acc = total_area_intersect.sum() / total_area_label.sum() - ret_metrics = OrderedDict({'aAcc': all_acc}) - for metric in metrics: - if metric == 'mIoU': - iou = total_area_intersect / total_area_union - acc = total_area_intersect / total_area_label - ret_metrics['IoU'] = iou - ret_metrics['Acc'] = acc - elif metric == 'mDice': - dice = 2 * total_area_intersect / ( - total_area_pred_label + total_area_label) - acc = total_area_intersect / total_area_label - ret_metrics['Dice'] = dice - ret_metrics['Acc'] = acc - elif metric == 'mFscore': - precision = total_area_intersect / total_area_pred_label - recall = total_area_intersect / total_area_label - f_value = torch.tensor( - [f_score(x[0], x[1], beta) for x in zip(precision, recall)]) - ret_metrics['Fscore'] = f_value - ret_metrics['Precision'] = precision - ret_metrics['Recall'] = recall - - ret_metrics = { - metric: value.numpy() - for metric, value in ret_metrics.items() - } - if nan_to_num is not None: - ret_metrics = OrderedDict({ - metric: np.nan_to_num(metric_value, nan=nan_to_num) - for metric, metric_value in ret_metrics.items() - }) - return ret_metrics diff --git a/spaces/szukevin/VISOR-GPT/train/finetune/run_c3.py b/spaces/szukevin/VISOR-GPT/train/finetune/run_c3.py deleted file mode 100644 index 3bb3d410a77d2a34ffe0fdc277c5c952eb245bf5..0000000000000000000000000000000000000000 --- a/spaces/szukevin/VISOR-GPT/train/finetune/run_c3.py +++ /dev/null @@ -1,215 +0,0 @@ -""" -This script provides an example to wrap TencentPretrain for C3 (a multiple choice dataset). -""" -import sys -import os -import argparse -import json -import random -import torch -import torch.nn as nn - -tencentpretrain_dir = os.path.abspath(os.path.join(os.path.dirname(__file__), "..")) -sys.path.append(tencentpretrain_dir) - -from tencentpretrain.embeddings import * -from tencentpretrain.encoders import * -from tencentpretrain.utils.constants import * -from tencentpretrain.utils import * -from tencentpretrain.utils.optimizers import * -from tencentpretrain.utils.config import load_hyperparam -from tencentpretrain.utils.seed import set_seed -from tencentpretrain.utils.logging import init_logger -from tencentpretrain.model_saver import save_model -from tencentpretrain.opts import finetune_opts, tokenizer_opts, adv_opts -from finetune.run_classifier import build_optimizer, load_or_initialize_parameters, train_model, batch_loader, evaluate - - -class MultipleChoice(nn.Module): - def __init__(self, args): - super(MultipleChoice, self).__init__() - self.embedding = Embedding(args) - for embedding_name in args.embedding: - tmp_emb = str2embedding[embedding_name](args, len(args.tokenizer.vocab)) - self.embedding.update(tmp_emb, embedding_name) - self.encoder = str2encoder[args.encoder](args) - self.dropout = nn.Dropout(args.dropout) - self.output_layer = nn.Linear(args.hidden_size, 1) - - def forward(self, src, tgt, seg, soft_tgt=None): - """ - Args: - src: [batch_size x choices_num x seq_length] - tgt: [batch_size] - seg: [batch_size x choices_num x seq_length] - """ - - choices_num = src.shape[1] - - src = src.view(-1, src.size(-1)) - seg = seg.view(-1, seg.size(-1)) - - # Embedding. - emb = self.embedding(src, seg) - # Encoder. - output = self.encoder(emb, seg) - output = self.dropout(output) - logits = self.output_layer(output[:, 0, :]) - reshaped_logits = logits.view(-1, choices_num) - - if tgt is not None: - loss = nn.NLLLoss()(nn.LogSoftmax(dim=-1)(reshaped_logits), tgt.view(-1)) - return loss, reshaped_logits - else: - return None, reshaped_logits - - -def read_dataset(args, path): - - with open(path, mode="r", encoding="utf-8") as f: - data = json.load(f) - - examples = [] - for i in range(len(data)): - for j in range(len(data[i][1])): - example = ["\n".join(data[i][0]).lower(), data[i][1][j]["question"].lower()] - for k in range(len(data[i][1][j]["choice"])): - example += [data[i][1][j]["choice"][k].lower()] - for k in range(len(data[i][1][j]["choice"]), args.max_choices_num): - example += ["No Answer"] - - example += [data[i][1][j].get("answer", "").lower()] - - examples += [example] - - dataset = [] - for i, example in enumerate(examples): - tgt = 0 - for k in range(args.max_choices_num): - if example[2 + k] == example[6]: - tgt = k - dataset.append(([], tgt, [])) - - for k in range(args.max_choices_num): - - src_a = args.tokenizer.convert_tokens_to_ids([CLS_TOKEN] + args.tokenizer.tokenize(example[k + 2]) + [SEP_TOKEN]) - src_b = args.tokenizer.convert_tokens_to_ids(args.tokenizer.tokenize(example[1]) + [SEP_TOKEN]) - src_c = args.tokenizer.convert_tokens_to_ids(args.tokenizer.tokenize(example[0]) + [SEP_TOKEN]) - - src = src_a + src_b + src_c - seg = [1] * (len(src_a) + len(src_b)) + [2] * len(src_c) - - if len(src) > args.seq_length: - src = src[: args.seq_length] - seg = seg[: args.seq_length] - PAD_ID = args.tokenizer.convert_tokens_to_ids([PAD_TOKEN])[0] - while len(src) < args.seq_length: - src.append(PAD_ID) - seg.append(0) - - dataset[-1][0].append(src) - dataset[-1][2].append(seg) - - return dataset - - -def main(): - parser = argparse.ArgumentParser(formatter_class=argparse.ArgumentDefaultsHelpFormatter) - - finetune_opts(parser) - - parser.add_argument("--max_choices_num", default=4, type=int, - help="The maximum number of cadicate answer, shorter than this will be padded.") - - tokenizer_opts(parser) - - adv_opts(parser) - - args = parser.parse_args() - args.labels_num = args.max_choices_num - - # Load the hyperparameters from the config file. - args = load_hyperparam(args) - - set_seed(args.seed) - - # Build tokenizer. - args.tokenizer = str2tokenizer[args.tokenizer](args) - - # Build multiple choice model. - model = MultipleChoice(args) - - # Load or initialize parameters. - load_or_initialize_parameters(args, model) - - # Get logger. - args.logger = init_logger(args) - - args.device = torch.device("cuda" if torch.cuda.is_available() else "cpu") - model = model.to(args.device) - - # Training phase. - trainset = read_dataset(args, args.train_path) - instances_num = len(trainset) - batch_size = args.batch_size - - args.train_steps = int(instances_num * args.epochs_num / batch_size) + 1 - - args.logger.info("Batch size: {}".format(batch_size)) - args.logger.info("The number of training instances: {}".format(instances_num)) - - optimizer, scheduler = build_optimizer(args, model) - - if args.fp16: - try: - from apex import amp - except ImportError: - raise ImportError("Please install apex from https://www.github.com/nvidia/apex to use fp16 training.") - model, optimizer = amp.initialize(model, optimizer, opt_level=args.fp16_opt_level) - args.amp = amp - - if torch.cuda.device_count() > 1: - args.logger.info("{} GPUs are available. Let's use them.".format(torch.cuda.device_count())) - model = torch.nn.DataParallel(model) - args.model = model - - if args.use_adv: - args.adv_method = str2adv[args.adv_type](model) - - total_loss, result, best_result = 0.0, 0.0, 0.0 - - args.logger.info("Start training.") - - for epoch in range(1, args.epochs_num + 1): - random.shuffle(trainset) - src = torch.LongTensor([example[0] for example in trainset]) - tgt = torch.LongTensor([example[1] for example in trainset]) - seg = torch.LongTensor([example[2] for example in trainset]) - - model.train() - for i, (src_batch, tgt_batch, seg_batch, _) in enumerate(batch_loader(batch_size, src, tgt, seg)): - - loss = train_model(args, model, optimizer, scheduler, src_batch, tgt_batch, seg_batch) - total_loss += loss.item() - - if (i + 1) % args.report_steps == 0: - args.logger.info("Epoch id: {}, Training steps: {}, Avg loss: {:.3f}".format(epoch, i + 1, total_loss / args.report_steps)) - total_loss = 0.0 - - result = evaluate(args, read_dataset(args, args.dev_path)) - if result[0] > best_result: - best_result = result[0] - save_model(model, args.output_model_path) - - # Evaluation phase. - if args.test_path is not None: - args.logger.info("Test set evaluation.") - if torch.cuda.device_count() > 1: - args.model.module.load_state_dict(torch.load(args.output_model_path)) - else: - args.model.load_state_dict(torch.load(args.output_model_path)) - evaluate(args, read_dataset(args, args.test_path)) - - -if __name__ == "__main__": - main() diff --git a/spaces/tangshitao/MVDiffusion/app.py b/spaces/tangshitao/MVDiffusion/app.py deleted file mode 100644 index c995766eab0b7cfd11b1d320f28b7c36ccbaf82c..0000000000000000000000000000000000000000 --- a/spaces/tangshitao/MVDiffusion/app.py +++ /dev/null @@ -1,285 +0,0 @@ -import torch -import torch.nn as nn -import yaml -import cv2 -import numpy as np -from PIL import Image -import gradio as gr -from functools import partial -import lib.Equirec2Perspec as E2P -import lib.Perspec2Equirec as P2E -import lib.multi_Perspec2Equirec as m_P2E -from model import Model, generate_basic, generate_advanced - -def get_K_R(FOV, THETA, PHI, height, width): - f = 0.5 * width * 1 / np.tan(0.5 * FOV / 180.0 * np.pi) - cx = (width - 1) / 2.0 - cy = (height - 1) / 2.0 - K = np.array([ - [f, 0, cx], - [0, f, cy], - [0, 0, 1], - ], np.float32) - - y_axis = np.array([0.0, 1.0, 0.0], np.float32) - x_axis = np.array([1.0, 0.0, 0.0], np.float32) - R1, _ = cv2.Rodrigues(y_axis * np.radians(THETA)) - R2, _ = cv2.Rodrigues(np.dot(R1, x_axis) * np.radians(PHI)) - R = R2 @ R1 - return K, R - - -if __name__=='__main__': - - - example1=[ - "A room with a sofa and coffee table for relaxing.", - "A corner sofa is surrounded by plants.", - "A comfy sofa, bookshelf, and lamp for reading.", - "A bright room with a sofa, TV, and games.", - "A stylish sofa and desk setup for work.", - "A sofa, dining table, and chairs for gatherings.", - "A colorful sofa, art, and music fill the room.", - "A sofa, yoga mat, and meditation corner for calm." - ] - example2=[ - "A room with a sofa and coffee table for relaxing, cartoon style", - "A corner sofa is surrounded by plants, cartoon style", - "A comfy sofa, bookshelf, and lamp for reading, cartoon style", - "A bright room with a sofa, TV, and games, cartoon style", - "A stylish sofa and desk setup for work, cartoon style", - "A sofa, dining table, and chairs for gatherings, cartoon style", - "A colorful sofa, art, and music fill the room, cartoon style", - "A sofa, yoga mat, and meditation corner for calm, cartoon style" - ] - - example3=[ - "A room with a sofa and coffee table for relaxing, oil painting style", - "A corner sofa is surrounded by plants, oil painting style", - "A comfy sofa, bookshelf, and lamp for reading, oil painting style", - "A bright room with a sofa, TV, and games, oil painting style", - "A stylish sofa and desk setup for work, oil painting style", - "A sofa, dining table, and chairs for gatherings, oil painting style", - "A colorful sofa, art, and music fill the room, oil painting style", - "A sofa, yoga mat, and meditation corner for calm, oil painting style" - ] - - example4=[ - "A Japanese room with muted-colored tatami mats.", - "A Japanese room with a simple, folded futon sits to one side.", - "A Japanese room with a low table rests in the room's center.", - "A Japanese room with Shoji screens divide the room softly.", - "A Japanese room with An alcove holds an elegant scroll and flowers.", - "A Japanese room with a tea set rests on a bamboo tray.", - "A Japanese room with a carved wooden cupboard stands against a wall.", - "A Japanese room with a traditional lamp gently lights the room." - ] - example6=[ - 'This kitchen is a charming blend of rustic and modern, featuring a large reclaimed wood island with marble countertop', - 'This kitchen is a charming blend of rustic and modern, featuring a large reclaimed wood island with marble countertop', - 'This kitchen is a charming blend of rustic and modern, featuring a large reclaimed wood island with marble countertop', - 'To the left of the island, a stainless-steel refrigerator stands tall. ', - 'To the left of the island, a stainless-steel refrigerator stands tall. ', - 'a sink surrounded by cabinets', - 'a sink surrounded by cabinets', - 'To the right of the sink, built-in wooden cabinets painted in a muted.' - ] - - example7= [ - "Cobblestone streets curl between old buildings.", - "Shops and cafes display signs and emit pleasant smells.", - "A fruit market scents the air with fresh citrus.", - "A fountain adds calm to one side of the scene.", - "Bicycles rest against walls and posts.", - "Flowers in boxes color the windows.", - "Flowers in boxes color the windows.", - "Cobblestone streets curl between old buildings." - ] - - example8=[ - "The patio is open and airy.", - "A table and chairs sit in the middle.", - "Next the table is flowers.", - "Colorful flowers fill the planters.", - "A grill stands ready for barbecues.", - "A grill stands ready for barbecues.", - "The patio overlooks a lush garden.", - "The patio overlooks a lush garden." - ] - - example9=[ - "A Chinese palace with roofs curve.", - "A Chinese palace, Red and gold accents gleam in the sun.", - "A Chinese palace with a view of mountain in the front.", - "A view of mountain in the front.", - "A Chinese palace with a view of mountain in the front.", - "A Chinese palace with a tree beside.", - "A Chinese palace with a tree beside.", - "A Chinese palace, with a tree beside." - ] - - - - example_b1="This kitchen is a charming blend of rustic and modern, featuring a large reclaimed wood island with marble countertop, a sink surrounded by cabinets. To the left of the island, a stainless-steel refrigerator stands tall. To the right of the sink, built-in wooden cabinets painted in a muted." - example_b2="Bursting with vibrant hues and exaggerated proportions, the cartoon-styled room sparkled with whimsy and cheer, with floating shelves crammed with oddly shaped trinkets, a comically oversized polka-dot armchair perched near a gravity-defying, tilted lamp, and the candy-striped wallpaper creating a playful backdrop to the merry chaos, exuding a sense of fun and boundless imagination." - example_b3="Bathed in the pulsating glow of neon lights that painted stark contrasts of shadow and color, the cyberpunk room was a high-tech, low-life sanctuary, where sleek, metallic surfaces met jagged, improvised tech; a wall of glitchy monitors flickered with unending streams of data, and the buzz of electric current and the low hum of cooling fans formed a dystopian symphony, adding to the room's relentless, gritty energy." - example_b4="Majestically rising towards the heavens, the snow-capped mountain stood, its jagged peaks cloaked in a shroud of ethereal clouds, its rugged slopes a stark contrast against the serene azure sky, and its silent grandeur exuding an air of ancient wisdom and timeless solitude, commanding awe and reverence from all who beheld it." - example_b5='Bathed in the soft, dappled light of the setting sun, the silent street lay undisturbed, revealing the grandeur of its cobblestone texture, the rusted lampposts bearing witness to forgotten stories, and the ancient, ivy-clad houses standing stoically, their shuttered windows and weather-beaten doors speaking volumes about their passage through time.' - example_b6='Awash with the soothing hues of an array of blossoms, the tranquil garden was a symphony of life and color, where the soft murmur of the babbling brook intertwined with the whispering willows, and the iridescent petals danced in the gentle breeze, creating an enchanting sanctuary of beauty and serenity.' - example_b7="Canopied by a patchwork quilt of sunlight and shadows, the sprawling park was a panorama of lush green grass, meandering trails etched through vibrant wildflowers, towering oaks reaching towards the sky, and tranquil ponds mirroring the clear, blue expanse above, offering a serene retreat in the heart of nature's splendor." - - examples_basic=[example_b1, example_b2, example_b3, example_b4, example_b5, example_b6] - examples_advanced=[example1, example2, example3, example4, example6, example7, example8, example9] - - description="The demo generates 8 perspective images, with FOV of 90 and rotation angle of 45. Please type 8 sentences corresponding to each perspective image." - - outputs=[gr.Image(shape=(484, 2048))] - outputs.extend([gr.Image(shape=(1, 1)) for i in range(8)]) - - def load_example_img(path): - img=Image.open(path) - img.resize((1024, 242)) - return img - - def copy(text): - return [text]*8 - - def clear(): - return None, None, None, None, None, None, None, None, None - - def load_basic(example): - return example - - default_text='This kitchen is a charming blend of rustic and modern, featuring a large reclaimed wood island with marble countertop, a sink surrounded by cabinets. To the left of the island, a stainless-steel refrigerator stands tall. To the right of the sink, built-in wooden cabinets painted in a muted.' - css = """ - #warning {background-color: #000000} - .feedback textarea {font-size: 16px !important} - #foo {} - .text111 textarea { - color: rgba(0, 0, 0, 0.5); - } - """ - - inputs=[gr.Textbox(type="text", value=example1[i], label='Text{}'.format(i)) for i in range(8)] - - with gr.Blocks(css=css) as demo: - - with gr.Row(): - gr.Markdown( - """ - # <center>Text2Pano with MVDiffusion</center> - """) - with gr.Row(): - gr.Markdown( - """ - <center>Text2Pano demonstration: Write a scene you want in Text, then click "Generate panorama". Alternatively, you can copy the example text prompts below to the text box. The advanced mode allows to specify text prompts for each perspective image. It takes 3 minitues to generate one panorama.</center> - """) - with gr.Row(): - gr.HTML(""" - <div style='text-align: center; font-size: 25px;'> - <a href='https://mvdiffusion.github.io/'>Project Page</a> - </div> - """) - with gr.Row(): - gr.HTML(""" - <div style='text-align: center; font-size: 20px;'> - It's recommended to use chatGPT to augment prompts to get better results. - </div> - """) - with gr.Tab("Basic"): - with gr.Row(): - textbox1=gr.Textbox(type="text", label='Text', value=default_text, elem_id='warning', elem_classes="feedback") - - with gr.Row(): - submit_btn = gr.Button("Generate panorama") - # clear_btn = gr.Button("Clear all texts") - # clear_btn.click( - # clear, - # outputs=inputs+[textbox1] - # ) - - with gr.Accordion("Expand/hide examples") as acc: - for i in range(0, len(examples_basic)): - with gr.Row(): - gr.Image(load_example_img('assets/basic/img{}.png'.format(i+1)), label='example {}'.format(i+1)) - #gr.Image('demo/assets/basic/img{}.png'.format(i+2), label='example {}'.format(i+2)) - with gr.Row(): - gr.Textbox(type="text", label='Example text {}'.format(i+1), value=examples_basic[i]) - #gr.Textbox(type="text", label='Example text {}'.format(i+2), value=examples_basic[i+1]) - # with gr.Row(): - # load_btn=gr.Button("Load texts to the above box") - # load_btn.click( - # partial(load_basic, examples_basic[i]), - # outputs=[textbox1] - # ) - gr.Row() - gr.Row() - - submit_btn.click( - partial(generate_basic, acc), - inputs=textbox1, - outputs=[acc]+outputs - ) - - with gr.Tab("Advanced"): - with gr.Row(): - for text_bar in inputs[:4]: - text_bar.render() - with gr.Row(): - for text_bar in inputs[4:]: - text_bar.render() - - with gr.Row(): - - submit_btn = gr.Button("Generate panorama") - # clear_btn = gr.Button("Clear all texts") - # clear_btn.click( - # clear, - # outputs=inputs+[textbox1], - # queue=True, - # ) - with gr.Accordion("Expand/hide examples") as acc_advanced: - for i, example in enumerate(examples_advanced): - with gr.Row(): - gr.Image(load_example_img('assets/advanced/img{}.png'.format(i+1)) , label='example {}'.format(i+1)) - with gr.Row(): - gr.Textbox(type="text", label='Text 1', value=example[0]) - gr.Textbox(type="text", label='Text 2', value=example[1]) - gr.Textbox(type="text", label='Text 3', value=example[2]) - gr.Textbox(type="text", label='Text 4', value=example[3]) - with gr.Row(): - gr.Textbox(type="text", label='Text 4', value=example[4]) - gr.Textbox(type="text", label='Text 5', value=example[5]) - gr.Textbox(type="text", label='Text 6', value=example[6]) - gr.Textbox(type="text", label='Text 7', value=example[7]) - # with gr.Row(): - # load_btn=gr.Button("Load text to other text boxes") - # load_btn.click( - # partial(load_basic, example), - # outputs=inputs - # ) - gr.Row() - gr.Row() - submit_btn.click( - partial(generate_advanced, acc_advanced), - inputs=inputs, - outputs=[acc_advanced]+outputs - ) - - with gr.Row(): - outputs[0].render() - with gr.Row(): - outputs[1].render() - outputs[2].render() - with gr.Row(): - outputs[3].render() - outputs[4].render() - with gr.Row(): - outputs[5].render() - outputs[6].render() - with gr.Row(): - outputs[7].render() - outputs[8].render() - - demo.queue() - demo.launch() \ No newline at end of file diff --git a/spaces/temp-late/rhyme-ai/rhyme_with_ai/utils.py b/spaces/temp-late/rhyme-ai/rhyme_with_ai/utils.py deleted file mode 100644 index a3d85a13d612ef5d8f84e503352524e0439a2d0c..0000000000000000000000000000000000000000 --- a/spaces/temp-late/rhyme-ai/rhyme_with_ai/utils.py +++ /dev/null @@ -1,112 +0,0 @@ -import itertools -import string -import random - - -def color_new_words(new: str, old: str, color: str = "#eefa66") -> str: - """Color new words in strings with a span.""" - - def find_diff(new_, old_): - return [ii for ii, (n, o) in enumerate(zip(new_, old_)) if n != o] - - new_words = new.split() - old_words = old.split() - forward = find_diff(new_words, old_words) - backward = find_diff(new_words[::-1], old_words[::-1]) - - if not forward or not backward: - # No difference - return new - - start, end = forward[0], len(new_words) - backward[0] - return ( - " ".join(new_words[:start]) - + " " - + f'<span style="background-color: {color}">' - + " ".join(new_words[start:end]) - + "</span>" - + " " - + " ".join(new_words[end:]) - ) - - -def find_last_word(s): - """Find the last word in a string.""" - # Note: will break on \n, \r, etc. - alpha_only_sentence = "".join([c for c in s if (c.isalpha() or (c == " "))]).strip() - return alpha_only_sentence.split()[-1] - - -def pairwise(iterable): - """s -> (s0,s1), (s1,s2), (s2, s3), ...""" - # https://stackoverflow.com/questions/5434891/iterate-a-list-as-pair-current-next-in-python - a, b = itertools.tee(iterable) - next(b, None) - return zip(a, b) - - -def sanitize(s): - """Remove punctuation from a string.""" - return s.translate(str.maketrans("", "", string.punctuation)) - -def extract(filename): - """Extrait du fichier arguement les deux premiers champs - arg : nom du fichier au format tsv - return : list de tuples (ortho, phon) - """ - words = [] - with open(filename, 'r') as f: - f.readline() # première ligne - for line in f: - ortho, phon = line.split('\t')[0:2] - words.append((ortho, phon)) - return words - -def mk_dico(lexique, n): - """ - Construit un dictionnaire de rimes de longueur n - à partir d'un lexique phonétisé - args : lexique [(ortho, phon)], n int - return : dict {rime : [word1, word2, ..]} - """ - dico = {} - for item in lexique: - if len(item[1]) >= n: - rime = item[1][-n:] - dico.setdefault(rime, []).append(item[0]) - return dico - -def ortho2phon(word, words_list): - """ - Trouve un mot (word) dans une liste (words_list) - et retourne la forme phonétique correspondante - (en cas d'homographe non homophone, retourne le premier trouvé) - args : word (str), words_list [(ortho, phon), (.., ..)] - return : str, "" si word ne fait pas partie de la liste - """ - for item in words_list: - if word == item[0]: - return item[1] - return "" - -def find_rhyme_french(word, dico, lexique, n=3): - """ - Pour un mot donné, retourne un mot au hasard dont les n - derniers phonèmes riment - args : word (str), dico (dict) le dictionnaire de rimes, - lexique (list) lexique ortho, phon, n (int) le nombre de phonèmes terminaux - """ - # 1 trouver la transcription phonétique - phon = ortho2phon(word, lexique) - if not phon: - return None - # 2 extraire de la transcription les 3 derniers phonèmes (ou 2 le cas échéant) - # 3 trouver dans le dictionnaire la liste des mots du lexique qui ont la même suite de phonèmes finaux - if phon[-n:] not in dico: - return None - rhymes = dico[phon[-n:]] - if word in rhymes: - rhymes.remove(word) - # 4. piocher un mot au hasard dans la liste - rand = random.randint(0, len(rhymes) - 1) - return rhymes[rand] \ No newline at end of file diff --git a/spaces/terfces0erbo/CollegeProjectV2/Assassins Creed Odyssey Deluxe Edition MULTi15 Repack-FitGirl.md b/spaces/terfces0erbo/CollegeProjectV2/Assassins Creed Odyssey Deluxe Edition MULTi15 Repack-FitGirl.md deleted file mode 100644 index 9b0cfd40237f1c0bd7d2d646ab14c4f23165d616..0000000000000000000000000000000000000000 --- a/spaces/terfces0erbo/CollegeProjectV2/Assassins Creed Odyssey Deluxe Edition MULTi15 Repack-FitGirl.md +++ /dev/null @@ -1,6 +0,0 @@ -<h2>Assassins Creed Odyssey Deluxe Edition MULTi15 Repack-FitGirl</h2><br /><p><b><b>Download File</b> &rarr;&rarr;&rarr; <a href="https://bytlly.com/2uGlU1">https://bytlly.com/2uGlU1</a></b></p><br /><br /> -<br /> -Assassin's.Creed.Odyssey(Deluxe.Edition.v1.0.6.+.3.DLCs)[Fitgirl.Repack] ... Assassin.s.Creed.Origins(v1.5.1.All.DLCs.Crackfix.MULTI15)[FitGirl.Repack], 0, 0 ... 4d29de3e1b<br /> -<br /> -<br /> -<p></p> diff --git a/spaces/terfces0erbo/CollegeProjectV2/Dibac For Sketchup 2015 Crack Full Version !!INSTALL!!.md b/spaces/terfces0erbo/CollegeProjectV2/Dibac For Sketchup 2015 Crack Full Version !!INSTALL!!.md deleted file mode 100644 index 85d6b665b16308f873afe83dcf61234a695b0e16..0000000000000000000000000000000000000000 --- a/spaces/terfces0erbo/CollegeProjectV2/Dibac For Sketchup 2015 Crack Full Version !!INSTALL!!.md +++ /dev/null @@ -1,16 +0,0 @@ - -<h1>Dibac for SketchUp 2015: A Powerful Plugin for Architectural Drawing</h1> -<p>If you are looking for a plugin that can help you create realistic and detailed architectural plans using 2D tools and 3D models, you might want to check out Dibac for SketchUp 2015. Dibac is a plugin that works with SketchUp 2015 (both Pro and Make versions) and allows you to draw walls, doors, windows, wardrobes, stairs, roofs, and more with ease. You can also convert your 2D drawings into 3D models with just one click.</p> -<h2>dibac for sketchup 2015 crack full version</h2><br /><p><b><b>Download File</b> &#10001; <a href="https://bytlly.com/2uGkZU">https://bytlly.com/2uGkZU</a></b></p><br /><br /> -<p>Dibac for SketchUp 2015 is designed to make architectural drawing faster and easier. You can use it to create floor plans, elevations, sections, and perspectives of your projects. You can also customize the dimensions, colors, textures, and styles of your elements. Dibac supports layers, groups, components, and scenes to help you organize your work.</p> -<p>One of the best features of Dibac for SketchUp 2015 is that it comes with a trial version that you can use for 16 hours of use. This means that you can test the plugin before buying it and see if it suits your needs. The trial version has all the features of the full version except for the export options. If you decide to buy the plugin, you can do so from the official website of Dibac[^1^]. The price is $69 USD for a single user license.</p> -<p>Dibac for SketchUp 2015 is a plugin that can help you create professional and realistic architectural drawings in SketchUp. Whether you are an architect, a designer, a student, or a hobbyist, you can benefit from using this plugin to enhance your workflow and creativity. Dibac is compatible with Windows and Mac operating systems and works with SketchUp 2015 only.</p> -<p>If you want to learn more about Dibac for SketchUp 2015, you can visit the official website of Dibac[^1^] where you can find more information, tutorials, videos, and examples of projects made with this plugin. You can also download the trial version and try it out for yourself.</p> - -<p>Dibac for SketchUp 2015 is not only a plugin for drawing, but also a plugin for modeling. You can use it to create realistic and detailed 3D models of your architectural projects. You can also apply materials, textures, colors, and styles to your models and render them with SketchUp or any other rendering software.</p> -<p></p> -<p>One of the advantages of Dibac for SketchUp 2015 is that it is compatible with other SketchUp plugins and extensions. You can use it with plugins like Profile Builder, Skalp, Skatter, V-Ray, and more to enhance your workflow and productivity. You can also import and export your models to other formats like DWG, DXF, PDF, OBJ, and more.</p> -<p>Dibac for SketchUp 2015 is a plugin that can help you save time and money in your architectural projects. You can use it to create professional and accurate drawings and models without the need of expensive software or hardware. You can also use it to communicate your ideas and designs to your clients, colleagues, and contractors.</p> -<p>If you are interested in Dibac for SketchUp 2015, you can download the demo version from the official website of Dibac[^1^] and try it out for 16 hours of use. You can also watch some video tutorials[^2^] and read the user manual[^3^] to learn how to use the plugin effectively. You can also see some examples of projects made with Dibac for SketchUp 2015 on the website[^4^] and get inspired by them.</p> d5da3c52bf<br /> -<br /> -<br /> \ No newline at end of file diff --git a/spaces/terfces0erbo/CollegeProjectV2/Fbi Faces 40 Free Download.md b/spaces/terfces0erbo/CollegeProjectV2/Fbi Faces 40 Free Download.md deleted file mode 100644 index 0ffbe6ebe4b997f1c7c4a21ff3310898121fe38b..0000000000000000000000000000000000000000 --- a/spaces/terfces0erbo/CollegeProjectV2/Fbi Faces 40 Free Download.md +++ /dev/null @@ -1,70 +0,0 @@ - -<h1>What is FBI Faces 40 and How to Download It for Free?</h1> - -<p>If you are interested in creating realistic face images for various purposes, you might have heard of FBI Faces 40. This is a facial composite software that can help you generate photo-like composites from a database of facial features. You can use this software for law enforcement, education, art, or entertainment. In this article, we will explain what FBI Faces 40 is, what features it has, and how you can download it for free.</p> -<h2>Fbi Faces 40 Free Download</h2><br /><p><b><b>Download</b> &#10022;&#10022;&#10022; <a href="https://bytlly.com/2uGiAJ">https://bytlly.com/2uGiAJ</a></b></p><br /><br /> - -<h2>What is FBI Faces 40?</h2> - -<p>FBI Faces 40 is a software that was developed by IQ Biometrix, a company that specializes in biometric identification and analysis. The software is based on FACES, a facial composite system that was used by thousands of police agencies worldwide, including the CIA, FBI, and the US Military. FBI Faces 40 is an improved version of FACES that has more features and capabilities.</p> - -<p>FBI Faces 40 allows you to create face images from a database of over 10,000 facial features, such as eyes, noses, mouths, ears, hair styles, skin tones, facial markings, and accessories. You can select and adjust the features to create a composite that matches your description or imagination. You can also use the software to age progress or regress a face image, or to flip the hair style from side to side.</p> - -<p>FBI Faces 40 can generate a unique alphanumeric code for every composite you create. This code can be used to recreate the same composite on another computer that has the software installed. You can also export the composite as a JPEG file and share it with others. The software also has a slide show capability that allows you to display multiple composites on a screen.</p> - -<p>FBI Faces 40 is compatible with Windows XP and later versions. It can run on any standard desk or laptop computer. The software is easy to use and does not require any special training or skills.</p> - -<h2>What are the benefits of FBI Faces 40?</h2> - -<p>FBI Faces 40 can be used for various purposes and benefits. Here are some of them:</p> - -<ul> -<li>Law enforcement: FBI Faces 40 can help you identify and track criminal suspects by creating composites based on eyewitness descriptions or sketches. You can also use the software to compare composites with mugshots or other databases of faces. The software is endorsed by crime fighting agencies and supported by police as a proven and effective tool.</li> -<li>Education: FBI Faces 40 can help you teach and learn about facial anatomy, recognition, and diversity. You can use the software to create composites of different ethnicities, genders, ages, and expressions. You can also use the software to test your memory and attention skills by recreating composites from codes or images.</li> -<li>Art: FBI Faces 40 can help you create realistic and artistic face images for your projects. You can use the software to create portraits, caricatures, cartoons, or fantasy characters. You can also use the software to experiment with different facial features and combinations.</li> -<li>Entertainment: FBI Faces 40 can help you have fun and entertain yourself or others by creating composites of celebrities, friends, family members, or fictional characters. You can also use the software to prank or surprise someone by creating a composite of them or someone they know.</li> -</ul> - -<h2>How to download FBI Faces 40 for free?</h2> - -<p>If you want to try FBI Faces 40 for yourself, you might be wondering how to download it for free. There are several websites that claim to offer free downloads of FBI Faces 40, but some of them might be unreliable or unsafe. Therefore, we recommend that you download FBI Faces 40 from the official website of IQ Biometrix: https://facialcomposites.com/</p> - -<p>On this website, you can find more information about FBI Faces 40 and its features. You can also download a free demo version of the software that allows you to create up to five composites with limited features. If you want to access the full version of the software with all the features and capabilities, you will need to purchase a license from the website.</p> - -<p>The website offers different versions of FBI Faces 40 for different purposes and users. For example, there is an educational version for teachers and students, a professional version for artists and entertainers, and a law enforcement version for police and security professionals. The prices vary depending on the version and the number of licenses you need.</p> - -<p>If you have any questions or issues regarding FBI Faces 40 or its download process, you can contact IQ Biometrix through their website or email: info@iqbiometrix.com</p> -<p></p> - -<h2>Conclusion</h2> - -<p>FBI Faces 40 is a facial composite software that can help you create realistic face images from a database of facial features. You can use this software for law enforcement, education, art, or entertainment purposes. To download FBI Faces 40 for free, you can visit the official website of IQ Biometrix and download a free demo version of the software. If you want to access the full version of the software with all the features and capabilities, you will need to purchase a license from the website.</p> -<h2>What are the reviews of FBI Faces 40?</h2> - -<p>If you are curious about what other users think of FBI Faces 40, you might want to check some of the reviews of the software online. There are several websites that offer reviews of FBI Faces 40 from different perspectives and experiences. Here are some of them:</p> - -<ul> -<li>Software Informer: This website provides information and ratings of various software products, including FBI Faces 40. You can find the download link, the latest version, the developer's name, and the user's comments on this website. The users who reviewed FBI Faces 40 gave it an average rating of 3.8 out of 5 stars. Some of them praised the software for its accuracy and ease of use, while others complained about its size and compatibility issues.</li> -<li>Academia-ke.org: This website offers a PDF file that contains a brief overview of FBI Faces 40 and a link to download it for free. The file also includes some screenshots of the software and its features. The website claims that FBI Faces 40 is a better version of FACES that has more options and capabilities.</li> -<li>Facialcomposites.com: This is the official website of IQ Biometrix, the developer of FBI Faces 40. On this website, you can find more details about the software and its features, as well as testimonials from satisfied customers. You can also purchase a license for the software or download a free demo version from this website.</li> -</ul> - -<p>These are some of the websites that offer reviews of FBI Faces 40. However, you should be careful when downloading the software from unknown or untrusted sources, as they might contain viruses or malware that can harm your computer. Therefore, we recommend that you download FBI Faces 40 from the official website of IQ Biometrix or from reputable software platforms.</p> -<h2>What are the alternatives to FBI Faces 40?</h2> - -<p>If you are looking for other software that can help you create facial composites, you might want to check some of the alternatives to FBI Faces 40. There are several software products that offer similar or different features and capabilities for creating face images. Here are some of them:</p> - -<ul> -<li>Amped FIVE: This is a forensic image and video enhancement software that can help you process and restore, clarify, and analyze images and video in a simple, fast, and precise way. You can use this software to remove blur, noise, distortion, and other artifacts from images and video. You can also use this software to enhance details, contrast, color, and brightness of images and video.</li> -<li>Robust Motion Deblurring System: This is a software that can help you remove camera shake and motion blur from images. You can use this software to estimate large blur kernels and recover subtle structures and fine details from images. You can also use this software to adjust the sharpness, contrast, and saturation of images.</li> -<li>Focus Magic: This is a software that uses advanced deconvolution technology to undo blur and recover lost detail from images. You can use this software to fix out-of-focus blur, motion blur, gaussian blur, and lens blur from images. You can also use this software to enhance resolution, contrast, and color of images.</li> -</ul> - -<p>These are some of the alternatives to FBI Faces 40 that you can try if you want to create facial composites or enhance images. However, you should be aware that some of these software products might be expensive or require technical skills to use. Therefore, you should compare the features, prices, and reviews of these software products before choosing one.</p> -<h2>Conclusion</h2> - -<p>FBI Faces 40 is a facial composite software that can help you create realistic face images from a database of facial features. You can use this software for law enforcement, education, art, or entertainment purposes. To download FBI Faces 40 for free, you can visit the official website of IQ Biometrix and download a free demo version of the software. If you want to access the full version of the software with all the features and capabilities, you will need to purchase a license from the website. You can also check some of the reviews and alternatives of FBI Faces 40 online to see what other users think of the software and what other options you have for creating facial composites or enhancing images.</p> - -<p>We hope this article has helped you learn more about FBI Faces 40 and how to download it for free. If you have any questions or comments, please feel free to share them with us below.</p> 3cee63e6c2<br /> -<br /> -<br /> \ No newline at end of file diff --git a/spaces/thejagstudio/procom/main/tests.py b/spaces/thejagstudio/procom/main/tests.py deleted file mode 100644 index 7ce503c2dd97ba78597f6ff6e4393132753573f6..0000000000000000000000000000000000000000 --- a/spaces/thejagstudio/procom/main/tests.py +++ /dev/null @@ -1,3 +0,0 @@ -from django.test import TestCase - -# Create your tests here. diff --git a/spaces/thestasi/Webui-Cpu-ExtensionV2-Publictest-WithCivitaiHelper/README.md b/spaces/thestasi/Webui-Cpu-ExtensionV2-Publictest-WithCivitaiHelper/README.md deleted file mode 100644 index c0fc822771658781188165ca66c9fd721af92e2a..0000000000000000000000000000000000000000 --- a/spaces/thestasi/Webui-Cpu-ExtensionV2-Publictest-WithCivitaiHelper/README.md +++ /dev/null @@ -1,11 +0,0 @@ ---- -title: Webui-Cpu-ExtensionV2-Publictest-WithCivitaiHelper -emoji: 🌍 -colorFrom: green -colorTo: blue -sdk: gradio -sdk_version: 3.9 -app_file: app.py -pinned: true -duplicated_from: Osmond141319/Webui-Cpu-ExtensionV2-Publictest-WithCivitaiHelper ---- diff --git a/spaces/tialenAdioni/chat-gpt-api/logs/Dublagem - Middle Earth Shadow of Mordor PC version download Create your own story with the innovative Nemesis System.md b/spaces/tialenAdioni/chat-gpt-api/logs/Dublagem - Middle Earth Shadow of Mordor PC version download Create your own story with the innovative Nemesis System.md deleted file mode 100644 index 1d3b6a79cc40dec3de0fafbc039ae107d9e347f8..0000000000000000000000000000000000000000 --- a/spaces/tialenAdioni/chat-gpt-api/logs/Dublagem - Middle Earth Shadow of Mordor PC version download Create your own story with the innovative Nemesis System.md +++ /dev/null @@ -1,155 +0,0 @@ -<br /> -<h1>Dublagem - Middle Earth Shadow of Mordor PC version download</h1> - <p>If you are a fan of The Lord of the Rings and The Hobbit, you might have heard of Middle Earth Shadow of Mordor, a popular action-adventure video game set in Tolkien's fantasy world. But did you know that you can play this game in Portuguese with Dublagem? In this article, we will tell you what Dublagem is, why you should download the PC version of Middle Earth Shadow of Mordor with Dublagem, how to do it, and what benefits you can get from playing this amazing game with Dublagem. Let's get started!</p> - <h2>Introduction</h2> - <h3>What is Dublagem?</h3> - <p>Dublagem is the Portuguese word for dubbing, which is the process of replacing the original voice track of a film or a video game with a different language. Dublagem is very common in Brazil, where many foreign movies and games are dubbed into Portuguese for the local audience. Dublagem can also be done by fans who want to enjoy their favorite media in their native language or in a language they prefer.</p> -<h2>Dublagem - Middle Earth Shadow of Mordor PC version download</h2><br /><p><b><b>DOWNLOAD</b> &hArr; <a href="https://urlcod.com/2uKadW">https://urlcod.com/2uKadW</a></b></p><br /><br /> - <h3>What is Middle Earth Shadow of Mordor?</h3> - <p>Middle Earth Shadow of Mordor is a 2014 video game developed by Monolith Productions and published by Warner Bros. Interactive Entertainment. It is based on the works of J.R.R. Tolkien, especially The Lord of the Rings and The Hobbit. The game takes place between the events of The Hobbit and The Lord of the Rings, and follows the story of Talion, a ranger who is killed by Sauron's forces along with his family. However, he is revived by a mysterious spirit called Celebrimbor, who gives him the power to dominate and control his enemies. Together, they seek revenge against Sauron and his army of orcs, uruks, trolls, and other creatures.</p> - <h3>Why download the PC version?</h3> - <p>Middle Earth Shadow of Mordor is available for various platforms, including PlayStation 4, Xbox One, PlayStation 3, Xbox 360, and PC. However, many players prefer to play the game on PC for several reasons. First, the PC version has better graphics and performance than the console versions, especially if you have a powerful computer. Second, the PC version has more options and features to customize your gameplay experience, such as resolution, frame rate, graphics settings, keyboard and mouse controls, mods, etc. Third, the PC version has more content and updates than the console versions, such as DLCs (downloadable content), patches, fixes, etc.</p> - <h2>How to download Middle Earth Shadow of Mordor PC version with Dublagem</h2> - <h3>Requirements and specifications</h3> - <p>Before you download Middle Earth Shadow of Mordor PC version with Dublagem, you need to make sure that your computer meets the minimum or recommended requirements for running the game smoothly. Here are the specifications for Windows:</p> - <table> -<tr> -<th>Minimum</th> -<th>Recommended</th> -</tr> -<tr> -<td>OS: 64-bit Windows Vista SP2/7/8/10</td> -<td>OS: 64-bit Windows 7/8/10</td> -</tr> -<tr> -<td>CPU: Intel Core i5-750 2.67 GHz/AMD Phenom II X4 965 3.4 GHz or equivalent</td> -<td>CPU: Intel Core i7-3770 3.4 GHz/AMD FX-8350 4.0 GHz or equivalent</td> -</tr> -<tr> -<td>RAM: 4 GB</td> -<td>RAM: 8 GB</td> -</tr> -<tr> -<td>GPU: NVIDIA GeForce GTX 460/AMD Radeon HD 5850 or equivalent with 1 GB VRAM</td> -<td>GPU: NVIDIA GeForce GTX 660/AMD Radeon HD 7950 or equivalent with 2 GB VRAM</td> -</tr> -<tr> -<td>HDD: 44 GB available space</td> -<td>HDD: 57 GB available space (includes HD content)</td> -</tr> -<tr> -<td>DirectX: Version 11</td> -<td>DirectX: Version 11</td> -</tr> -<tr> -<td>Sound Card: DirectX compatible sound card</td> -<tr> -<td>Note: Requires Internet connection for online features.</td> -<td>Note: Requires Internet connection for online features.</td> -</tr> -</table> - <h3>Steps to download and install</h3> - <p>To download Middle Earth Shadow of Mordor PC version with Dublagem, you need to follow these steps:</p> - <ol> -<li>Purchase or obtain a copy of Middle Earth Shadow of Mordor PC version from an official source such as Steam or GOG.com.</li> -<li>Download and install Steam or GOG Galaxy on your computer if you don't have them already.</li> -<li>Launch Steam or GOG Galaxy and log in with your account.</li> -<li>Add Middle Earth Shadow of Mordor to your library if you haven't done so already.</li> -<li>Select Middle Earth Shadow of Mordor from your library and click on Install.</li> -<li>Select your preferred language from the list (English by default) and click on Next.</li> -<li>Select your installation folder and click on Next.</li> -<li>Select any additional components you want to install (such as HD content) and click on Next.</li> -<li>Wait for the installation to complete.</li> -<li>To enable Dublagem (Portuguese dubbing), go to Options > Audio > Voice Language > Portuguese (Brazil).</li> -<li>Enjoy playing Middle Earth Shadow of Mordor PC version with Dublagem!</li> -</ol> - <h3>Tips and tricks to optimize performance and enjoy the game</h3> - <p>Once you have downloaded and installed Middle Earth Shadow of Mordor PC version with Dublagem, you might want to know some tips and tricks to make the most out of your gaming experience. Here are some suggestions:</p> - <ul> -<li>Adjust the graphics settings according to your computer's capabilities. You can use the preset options (Low, Medium, High, Ultra) or customize them individually. You can also enable or disable advanced features such as tessellation, ambient occlusion, depth of field, etc.</li> -<li>Use the benchmark tool to test your system's performance and see how well it runs the game. You can access it from the main menu or from the options menu.</li> -<li>Update your drivers and software regularly to ensure compatibility and stability. You can use Steam or GOG Galaxy to check for updates automatically or manually.</li> -<li>Use a controller or a keyboard and mouse according to your preference. You can change the input device and the key bindings from the options menu. You can also use a combination of both for different actions.</li> -<li>Explore the open world of Mordor and discover its secrets. You can use the map and the fast travel points to navigate easily. You can also use your wraith abilities to see hidden objects, enemies, and collectibles.</li> -<li>Interact with the Nemesis System, which creates unique enemies and allies based on your actions. You can dominate orcs and uruks and make them fight for you or against each other. You can also encounter captains and war chiefs who have their own personalities, strengths, weaknesses, and histories.</li> -<li>Complete the main missions and the side quests to progress the story and unlock new skills, weapons, runes, outfits, etc. You can also participate in challenges, events, trials, etc. to earn rewards and achievements.</li> -</ul> - <h2>Benefits of playing Middle Earth Shadow of Mordor PC version with Dublagem</h2> - <h3>Enhanced immersion and realism</h3> - <p>One of the main benefits of playing Middle Earth Shadow of Mordor PC version with Dublagem is that it enhances your immersion and realism in the game world. By hearing the characters speak in Portuguese, you can feel more connected to them and their emotions. You can also appreciate the quality of the voice acting and the synchronization with the lip movements. Moreover, you can enjoy the rich sound effects and music that create a captivating atmosphere.</p> -<p>Dublagem Middle Earth Shadow of Mordor PC game free download<br /> -How to download Dublagem Middle Earth Shadow of Mordor for PC<br /> -Dublagem Middle Earth Shadow of Mordor PC torrent download<br /> -Dublagem Middle Earth Shadow of Mordor PC full version download<br /> -Dublagem Middle Earth Shadow of Mordor PC crack download<br /> -Download Dublagem Middle Earth Shadow of Mordor PC highly compressed<br /> -Dublagem Middle Earth Shadow of Mordor PC download with subtitles<br /> -Dublagem Middle Earth Shadow of Mordor PC download windows 10<br /> -Dublagem Middle Earth Shadow of Mordor PC download steam<br /> -Dublagem Middle Earth Shadow of Mordor PC download size<br /> -Dublagem Middle Earth Shadow of Mordor PC download link<br /> -Dublagem Middle Earth Shadow of Mordor PC download mega<br /> -Dublagem Middle Earth Shadow of Mordor PC download google drive<br /> -Dublagem Middle Earth Shadow of Mordor PC download skidrow<br /> -Dublagem Middle Earth Shadow of Mordor PC download repack<br /> -Dublagem Middle Earth Shadow of Mordor PC download ocean of games<br /> -Dublagem Middle Earth Shadow of Mordor PC download fitgirl<br /> -Dublagem Middle Earth Shadow of Mordor PC download utorrent<br /> -Dublagem Middle Earth Shadow of Mordor PC download iso<br /> -Dublagem Middle Earth Shadow of Mordor PC download rar<br /> -Dublagem Middle Earth Shadow of Mordor PC download zip<br /> -Dublagem Middle Earth Shadow of Mordor PC download online<br /> -Dublagem Middle Earth Shadow of Mordor PC download offline<br /> -Dublagem Middle Earth Shadow of Mordor PC download no survey<br /> -Dublagem Middle Earth Shadow of Mordor PC download no password<br /> -Dublagem Middle Earth Shadow of Mordor PC download no virus<br /> -Dublagem Middle Earth Shadow of Mordor PC download no ads<br /> -Dublagem Middle Earth Shadow of Mordor PC download no steam<br /> -Dublagem Middle Earth Shadow of Mordor PC download no cd key<br /> -Dublagem Middle Earth Shadow of Mordor PC download no registration<br /> -Dublagem Middle Earth Shadow of Mordor PC download gameplay<br /> -Dublagem Middle Earth Shadow of Mordor PC download trailer<br /> -Dublagem Middle Earth Shadow of Mordor PC download review<br /> -Dublagem Middle Earth Shadow of Mordor PC download system requirements<br /> -Dublagem Middle Earth Shadow of Mordor PC download cheats<br /> -Dublagem Middle Earth Shadow of Mordor PC download mods<br /> -Dublagem Middle Earth Shadow of Mordor PC download patch<br /> -Dublagem Middle Earth Shadow of Mordor PC download update<br /> -Dublagem Middle Earth Shadow of Mordor PC download dlc<br /> -Dublagem Middle Earth Shadow of Mordor PC download save file<br /> -Dublagem Middle Earth Shadow of Mordor PC download settings<br /> -Dublagem Middle Earth Shadow of Mordor PC download controller support<br /> -Dublagem Middle Earth Shadow of Mordor PC download keyboard and mouse<br /> -Dublagem Middle Earth Shadow of Mordor PC download graphics quality<br /> -Dublagem Middle Earth Shadow of Mordor PC download sound quality<br /> -Dublagem Middle Earth Shadow of Mordor PC download language options<br /> -Dublagem Middle Earth Shadow of Mordor PC download best price<br /> -Download and install dubbing - middle earth shadow morder pc version in Portuguese </p> - <h3>Improved accessibility and comprehension</h3> - <p>Another benefit of playing Middle Earth Shadow of Mordor PC version with Dublagem is that it improves your accessibility and comprehension of the game content. If you are a native speaker of Portuguese or if you understand it better than English, you can follow the dialogue and the narration more easily. You can also learn new words and expressions that are related to the game's theme and genre. Furthermore, you can avoid missing important information or instructions that might affect your gameplay.</p> - <h3>More fun and satisfaction</h3> - <p>A final benefit of playing Middle Earth Shadow of Mordor PC version with Dublagem is that it makes your gaming experience more fun and satisfying. By playing in your preferred language or in a language you like, you can enjoy the game more fully. You can also express yourself better when interacting with other players online or when sharing your opinions and feedback about the game. Additionally, you can support the work of the dubbing team and appreciate their effort and talent.</p> - <h2>Conclusion</h2> - <h3>Summary of the main points</h3> - and specifications, follow some steps to download and install the game, and apply some tips and tricks to optimize your performance and enjoyment. Playing Middle Earth Shadow of Mordor PC version with Dublagem has many benefits, such as enhanced immersion and realism, improved accessibility and comprehension, and more fun and satisfaction. We hope that this article has helped you to learn more about Dublagem and Middle Earth Shadow of Mordor PC version, and that you will give it a try soon.</p> - <h3>Call to action</h3> - <p>If you are ready to download Middle Earth Shadow of Mordor PC version with Dublagem, you can click on the link below to get it from Steam or GOG.com. You can also check out the official website and the social media pages of the game for more information and updates. Don't miss this opportunity to play one of the best games of 2014 in Portuguese with Dublagem. You won't regret it!</p> - <p><a href="https://store.steampowered.com/app/241930/Middleearth_Shadow_of_Mordor/">Get Middle Earth Shadow of Mordor PC version with Dublagem from Steam</a></p> - <p><a href="https://www.gog.com/game/middle_earth_shadow_of_mordor_game_of_the_year_edition">Get Middle Earth Shadow of Mordor PC version with Dublagem from GOG.com</a></p> - <p><a href="https://www.shadowofmordor.com/">Visit the official website of Middle Earth Shadow of Mordor</a></p> - <p><a href="https://www.facebook.com/ShadowofMordor/">Follow Middle Earth Shadow of Mordor on Facebook</a></p> - <p><a href="https://twitter.com/shadowofmordor">Follow Middle Earth Shadow of Mordor on Twitter</a></p> - <h2>FAQs</h2> - <h3>What is the difference between Dublagem and Legendagem?</h3> - <p>Dublagem is the process of dubbing a film or a video game into a different language, while Legendagem is the process of adding subtitles to a film or a video game in a different language. Both are ways of translating and adapting media for different audiences, but they have different advantages and disadvantages. For example, Dublagem can make the media more immersive and accessible, but it can also alter the original voice acting and sound effects. Legendagem can preserve the original voice acting and sound effects, but it can also distract the viewer or player from the visual elements.</p> - <h3>Is Dublagem available for other languages besides Portuguese?</h3> - <p>Yes, Dublagem is available for other languages besides Portuguese. Middle Earth Shadow of Mordor PC version supports several languages for dubbing, such as English, French, Italian, German, Spanish, Russian, Polish, etc. You can choose your preferred language from the options menu before or during the game.</p> - <h3>Is Dublagem compatible with other mods or DLCs?</h3> - <p>Yes, Dublagem is compatible with other mods or DLCs that you might want to use for Middle Earth Shadow of Mordor PC version. However, you need to make sure that the mods or DLCs are also compatible with each other and with the game version that you have. You also need to follow the instructions for installing and using them correctly.</p> - <h3>How can I give feedback or report issues about Dublagem?</h3> - <p>If you have any feedback or issues about Dublagem, you can contact the dubbing team or the game developers directly. You can also use the forums or the reviews sections on Steam or GOG.com to share your opinions and experiences with other players. Your feedback and issues are valuable for improving Dublagem and making it more enjoyable for everyone.</p> - <h3>Where can I find more games with Dublagem?</h3> - <p>If you liked playing Middle Earth Shadow of Mordor PC version with Dublagem, you might want to try other games with Dublagem as well. There are many games that have official or fan-made Dublagem in Portuguese or other languages. Some examples are Assassin's Creed Valhalla, Cyberpunk 2077, The Witcher 3: Wild Hunt, Horizon Zero Dawn, Red Dead Redemption 2, etc. You can search for them on Steam or GOG.com or on other websites that offer games with Dublagem.</p> - </p> 0a6ba089eb<br /> -<br /> -<br /> \ No newline at end of file diff --git a/spaces/tioseFevbu/cartoon-converter/scripts/Age.of.Empires.II.HD.The.Rise.of.the.Rajas-RELOADED Demo.md b/spaces/tioseFevbu/cartoon-converter/scripts/Age.of.Empires.II.HD.The.Rise.of.the.Rajas-RELOADED Demo.md deleted file mode 100644 index 7e29e43fecb70d2dfb2e70dd4777c6b673f5f9e9..0000000000000000000000000000000000000000 --- a/spaces/tioseFevbu/cartoon-converter/scripts/Age.of.Empires.II.HD.The.Rise.of.the.Rajas-RELOADED Demo.md +++ /dev/null @@ -1,29 +0,0 @@ -<br /> -<h1>How to Download and Play Age of Empires II HD The Rise of the Rajas-RELOADED Demo</h1> -<p>If you are a fan of real-time strategy games, you might be interested in trying out the latest expansion for Age of Empires II HD, the classic game that has been remastered for modern systems. The expansion, called The Rise of the Rajas, adds four new civilizations, new units, technologies, and campaigns set in Southeast Asia. You can experience the history and culture of the Burmese, Khmer, Malay, and Vietnamese civilizations, and command mighty elephants, ballista elephants, arambai, and rattan archers in epic battles on land and water.</p> -<h2>Age.of.Empires.II.HD.The.Rise.of.the.Rajas-RELOADED Demo</h2><br /><p><b><b>Download</b> ---> <a href="https://urlcod.com/2uHxFF">https://urlcod.com/2uHxFF</a></b></p><br /><br /> -<p>The good news is that you can download and play a demo version of the expansion for free, thanks to the RELOADED group that has cracked the game and made it available online. The demo lets you play one of the new campaigns, Gajah Mada, where you have to lead the Majapahit empire to dominate the archipelago. You can also play multiplayer matches with other players who have the demo or the full game.</p> -<p>To download and play the demo, you need to follow these steps:</p> -<ol> -<li>Make sure you have Age of Empires II HD installed on your computer. You can buy it from Steam or other online platforms.</li> -<li>Download the RELOADED crack from this link: <a href="https://reloaded-games.com/age-of-empires-ii-hd-the-rise-of-the-rajas-reloaded-demo/">https://reloaded-games.com/age-of-empires-ii-hd-the-rise-of-the-rajas-reloaded-demo/</a></li> -<li>Extract the files from the zip archive to a folder on your computer.</li> -<li>Copy the contents of the folder to your Age of Empires II HD installation directory, replacing any existing files.</li> -<li>Run the game from the launcher.exe file.</li> -<li>Enjoy playing the demo!</li> -</ol> -<p>Note: This is an unofficial crack and it may not work on all systems or with all versions of the game. It may also contain viruses or malware that could harm your computer. Use it at your own risk and discretion. We do not endorse or support piracy in any way. If you like the game, please support the developers by buying the full version.</p> - -<p>The Rise of the Rajas is the third official expansion for Age of Empires II HD, following The Forgotten and The African Kingdoms. It was released on December 19, 2016, and received positive reviews from critics and players alike. The expansion features improved graphics, sound, and performance, as well as new gameplay elements such as garrisonable buildings, fire ships, and mixed units.</p> -<p></p> -<p>The expansion also introduces four new civilizations that represent the diverse and rich cultures of Southeast Asia. Each civilization has its own unique bonuses, units, technologies, and architecture. The Burmese are a monk and elephant civilization that can research all monastery technologies for free. The Khmer are a siege and elephant civilization that can construct buildings without requiring villagers and have access to the scorpion-mounted ballista elephant. The Malay are a naval and infantry civilization that can advance through the ages faster and have the cheapest but weakest infantry unit in the game, the karambit warrior. The Vietnamese are an archer and guerrilla civilization that can see the enemy's starting position and have the heavily armored rattan archer.</p> -<p>In addition to the new civilizations, the expansion also adds four new campaigns that tell the stories of some of the most influential heroes and leaders in Southeast Asian history. The campaigns are fully voiced and feature historical battles, scenarios, and challenges. The campaigns are:</p> -<ul> -<li>Gajah Mada: Lead the Majapahit empire to unify the archipelago under one banner.</li> -<li>Suryavarman I: Restore the glory of the Khmer empire and build the magnificent Angkor Wat.</li> -<li>Bayinnaung: Conquer Southeast Asia with your army of elephants and become the king of kings.</li> -<li>Lê Lợi: Liberate Vietnam from the Ming dynasty and establish a new golden age.</li> -</ul> -<p>If you are looking for a new challenge and a fresh perspective on Age of Empires II HD, you should definitely give The Rise of the Rajas a try. You can download and play the demo for free or buy the full version from Steam or other online platforms. Experience the history and culture of Southeast Asia in this exciting expansion!</p> e93f5a0c3f<br /> -<br /> -<br /> \ No newline at end of file diff --git a/spaces/tioseFevbu/cartoon-converter/scripts/Auto FX PhotoGraphic Edges Ultimate Bundle Gen2 9.6.0.md b/spaces/tioseFevbu/cartoon-converter/scripts/Auto FX PhotoGraphic Edges Ultimate Bundle Gen2 9.6.0.md deleted file mode 100644 index 942c3d9269c5d743e47b9955f0ab44acca59e275..0000000000000000000000000000000000000000 --- a/spaces/tioseFevbu/cartoon-converter/scripts/Auto FX PhotoGraphic Edges Ultimate Bundle Gen2 9.6.0.md +++ /dev/null @@ -1,128 +0,0 @@ -<br /> -<br> - Film frames, darkroom effects, grunge edges, natural media borders, and more <br> - Over 300 pre-made layouts and instant effects <br> - Storyboards and brush-on edges <br> - Compatible with Photoshop, Lightroom, and other photo editing software | | H2: Benefits of Auto FX PhotoGraphic Edges Ultimate Bundle Gen2 9.6.0 | - Enhance your digital images with professional grade effects <br> - Create realistic and artistic results without expensive camera equipment or filters <br> - Customize and fine-tune your effects with easy-to-use controls <br> - Save time and money with one-time purchase and lifetime updates | | H2: How to use Auto FX PhotoGraphic Edges Ultimate Bundle Gen2 9.6.0 | - Download and install the software from the official website <br> - Launch the software as a standalone program or as a plugin for your photo editing software <br> - Choose an image to edit and select an effect category <br> - Browse through the effects and apply the one you like <br> - Adjust the settings and parameters of the effect to suit your preferences <br> - Save or export your edited image | | H2: Examples of Auto FX PhotoGraphic Edges Ultimate Bundle Gen2 9.6.0 | - Show some before-and-after images of different effects applied to different images <br> - Explain how the effects enhance the images and create different moods and styles | | H2: Pros and cons of Auto FX PhotoGraphic Edges Ultimate Bundle Gen2 9.6.0 | - Pros: <br> - Large variety of effects to choose from <br> - High-quality and realistic results <br> - Easy to use and customize <br> - Compatible with most photo editing software <br> - Affordable price and lifetime updates <br> - Cons: <br> - Large file size and system requirements <br> - Some effects may look outdated or overdone <br> - May not work well with some image formats or resolutions | | H2: Alternatives to Auto FX PhotoGraphic Edges Ultimate Bundle Gen2 9.6.0 | - Compare and contrast some other photo editing software that can add edges, frames, and borders to digital images, such as: <br> - ON1 Effects <br> - Topaz Studio <br> - Alien Skin Exposure <br> - AKVIS ArtSuite | | H1: Conclusion | Summarize the main points of the article and give a final verdict on Auto FX PhotoGraphic Edges Ultimate Bundle Gen2 9.6.0 | | H2: FAQs | Answer some frequently asked questions about Auto FX PhotoGraphic Edges Ultimate Bundle Gen2 9.6.0, such as: <br> - How much does it cost? <br> - How can I get support or updates? <br> - What are the system requirements? <br> - Can I use it on multiple devices? <br> - Is there a trial version or a money-back guarantee? | Table 2: Article with HTML formatting <h1 style="text-align:center;">Auto FX PhotoGraphic Edges Ultimate Bundle Gen2 9.6.0: A Review</h1> - <p>If you are looking for a way to add some flair and creativity to your digital images, you might want to check out Auto FX PhotoGraphic Edges Ultimate Bundle Gen2 9.6.0.</p> -<h2>Auto FX PhotoGraphic Edges Ultimate Bundle Gen2 9.6.0</h2><br /><p><b><b>Download File</b> &#10038; <a href="https://urlcod.com/2uHvqH">https://urlcod.com/2uHvqH</a></b></p><br /><br /> - <p>This is a photo editing software that can help you enhance your photos with over 10,000 edge, frame, and border effects within 32 categories.</p> - <p>In this article, we will review the features, benefits, pros and cons, alternatives, and FAQs of this software.</p> - <p>We will also show you some examples of how it can transform your images <h2 style="text-align:center;">Features of Auto FX PhotoGraphic Edges Ultimate Bundle Gen2 9.6.0</h2> - <p>Auto FX PhotoGraphic Edges Ultimate Bundle Gen2 9.6.0 is a software that can add edges, frames, and borders to your digital images in a matter of minutes.</p> - <p>It has over 10,000 effects within 32 categories, such as film frames, darkroom effects, grunge edges, natural media borders, and more.</p> - <p>You can browse through the effects and apply them to your images with a simple click. You can also customize and fine-tune the effects with easy-to-use controls, such as opacity, color, size, blur, and texture.</p> -<p></p> - <p>Some of the features of this software are:</p> - <ul> -<li><b>Over 300 pre-made layouts and instant effects</b>: You can choose from hundreds of ready-made layouts and effects that can give your images a professional look. You can also create your own layouts and save them for future use.</li> -<li><b>Storyboards and brush-on edges</b>: You can create stunning storyboards and collages with multiple images and effects. You can also use the brush tool to paint on edges and frames to specific areas of your images.</li> -<li><b>Compatible with Photoshop, Lightroom, and other photo editing software</b>: You can use this software as a standalone program or as a plugin for your favorite photo editing software, such as Photoshop, Lightroom, Elements, PaintShop Pro, and more. You can also import and export images in various formats, such as JPEG, TIFF, PNG, PSD, and RAW.</li> -</ul> - <h2 style="text-align:center;">Benefits of Auto FX PhotoGraphic Edges Ultimate Bundle Gen2 9.6.0</h2> - <p>Auto FX PhotoGraphic Edges Ultimate Bundle Gen2 9.6.0 is not just a software that can add edges, frames, and borders to your digital images. It is also a software that can enhance your images with professional grade effects that can create realistic and artistic results.</p> - <p>Some of the benefits of this software are:</p> - <ul> -<li><b>Enhance your digital images with professional grade effects</b>: You can use this software to add depth, dimension, and drama to your images. You can make your images look like they were taken with expensive camera equipment or filters. You can also create unique and original effects that can express your style and vision.</li> -<li><b>Create realistic and artistic results without expensive camera equipment or filters</b>: You don't need to spend a lot of money on buying or renting camera equipment or filters to achieve the effects you want. You can use this software to create realistic and artistic results with just a few clicks. You can also experiment with different effects and see the results in real time.</li> -<li><b>Customize and fine-tune your effects with easy-to-use controls</b>: You have full control over the effects you apply to your images. You can adjust the settings and parameters of the effects to suit your preferences. You can also combine multiple effects to create new and unique results.</li> -<li><b>Save time and money with one-time purchase and lifetime updates</b>: You only need to pay once for this software and you will get lifetime updates and support. You don't need to pay for monthly or yearly subscriptions or fees. You also don't need to waste time on downloading or installing updates. The software will automatically update itself whenever there is a new version available.</li> -</ul> <h2 style="text-align:center;">How to use Auto FX PhotoGraphic Edges Ultimate Bundle Gen2 9.6.0</h2> - <p>Using Auto FX PhotoGraphic Edges Ultimate Bundle Gen2 9.6.0 is very easy and simple. You can follow these steps to start editing your images with this software:</p> - <ol> -<li><b>Download and install the software from the official website </b>: You can download the software from the official website of Auto FX Software. You can choose between the Windows or Mac version, depending on your operating system. The file size is about 2.5 GB, so make sure you have enough space and a stable internet connection. After downloading, you can install the software by following the instructions on the screen.</li> -<li><b>Launch the software as a standalone program or as a plugin for your photo editing software</b>: You can launch the software as a standalone program by clicking on its icon on your desktop or in your start menu. You can also launch it as a plugin for your photo editing software, such as Photoshop, Lightroom, Elements, PaintShop Pro, and more. To do this, you need to open your photo editing software and find the Auto FX PhotoGraphic Edges Ultimate Bundle Gen2 9.6.0 plugin in the menu or toolbar.</li> -<li><b>Choose an image to edit and select an effect category</b>: You can choose an image to edit by clicking on the open button or by dragging and dropping an image file into the software window. You can also use the browse button to navigate through your folders and select an image. After choosing an image, you can select an effect category from the left panel of the software window. There are 32 categories to choose from, such as film frames, darkroom effects, grunge edges, natural media borders, and more.</li> -<li><b>Browse through the effects and apply the one you like</b>: You can browse through the effects within each category by using the arrows or the slider at the bottom of the software window. You can also use the search box to find a specific effect by name or keyword. You can preview each effect by hovering over it with your mouse cursor. You can apply an effect to your image by clicking on it once.</li> -<li><b>Adjust the settings and parameters of the effect to suit your preferences</b>: You can adjust the settings and parameters of the effect to suit your preferences by using the controls on the right panel of the software window. There are different controls for different effects, such as opacity, color, size, blur, and texture. You can also use the zoom and pan tools to view your image in different magnifications and positions.</li> -<li><b>Save or export your edited image</b>: You can save or export your edited image by clicking on the save or export button at the top of the software window. You can choose between different formats, such as JPEG, TIFF, PNG, PSD, and RAW. You can also choose between different quality and resolution options. You can also name and rename your image file and choose a destination folder for it.</li> -</ol> - <h2 style="text-align:center;">Examples of Auto FX PhotoGraphic Edges Ultimate Bundle Gen2 9.6.0</h2> - <p>To give you an idea of how Auto FX PhotoGraphic Edges Ultimate Bundle Gen2 9.6.0 can transform your images, here are some examples of before-and-after images of different effects applied to different images.</p> - <p>We will also explain how the effects enhance the images and create different moods and styles.</p> - <table style="border:1px solid black;"> -<tr> -<th style="text-align:center;">Before</th> -<th style="text-align:center;">After</th> -<th style="text-align:center;">Effect</th> -<th style="text-align:center;">Explanation</th> -</tr> -<tr> -<td style="text-align:center;"><img src="https://images.unsplash.com/photo-1502082553048-f009c37129b9?ixid=MnwxMjA3fDB8MHxzZWFyY2h8MXx8cG9ydHJhaXR8ZW58MHx8MHx8&ixlib=rb-1.2.1&auto=format&fit=crop&w=500&q=60" alt="Portrait of a woman" width="250" height="250"></td> -<td style="text-align:center;"><img src="https://i.imgur.com/7yqgZ4o.jpg" alt="Portrait of a woman with film frame effect" width="250" height="250"></td> -<td style="text-align:center;">Film Frame 01</td> -<td style="text-align:center;">This effect adds a film frame border to the image, giving it a vintage and nostalgic look. It also adds some grain and scratches to the image, enhancing the film-like effect. It also crops the image slightly to fit the frame.</td> -</tr> -<tr> -<td style="text-align:center;"><img src="https://images.unsplash.com/photo-1494548162494-384bba4ab999?ixid=MnwxMjA3fDB8MHxzZWFyY2h8MXx8bGFuZHNjYXBlfGVufDB8fDB8fA%3D%3D&ixlib=rb-1.2.1&auto=format&fit=crop&w=500&q=60" alt="Landscape of mountains and lake" width="250" height="250"></td> -<td style="text-align:center;"><img src="https://i.imgur.com/0FqQkQI.jpg" alt="Landscape of mountains and lake with darkroom effect" width="250" height="250"></td> -<td style="text-align:center;">Darkroom 01</td> -<td style="text-align:center;">This effect adds a darkroom border to the image, giving it a dramatic and contrasted look. It also adds some vignetting and burning to the image, creating a dark and moody atmosphere. It also enhances the colors and details of the image.</td> -</tr> -<tr> -<td style="text-align:center;"><img src="https://images.unsplash.com/photo-1506744038136-46273834b3fb?ixid=MnwxMjA3fDB8MHxzZWFyY2h8MXx8Zmxvd2Vyc3xlbnwwfHwwfHw%3D&ixlib=rb-1.2.1&auto=format&fit=crop&w=500&q=60" alt="Flowers in a vase" width="250" height="250"></td> -<td style="text-align:center;"><img src="https://i.imgur.com/5g0gJ7k.jpg" alt="Flowers in a vase with natural media border effect" width="250" height="250"></td> -<td style="text-align:center;">Natural Media Border 01</td> -<td style="text-align:center;">This effect adds a natural media border to the image, giving it a painterly and artistic look. It also adds some brush strokes and textures to the image, creating a hand-made impression. It also softens the edges and colors of the image.</td> -</tr> -</table> - <h2 style="text-align:center;">Pros and cons of Auto FX PhotoGraphic Edges Ultimate Bundle Gen2 9.6.0</h2> - <p>Auto FX PhotoGraphic Edges Ultimate Bundle Gen2 9.6.0 is a software that has many advantages and disadvantages. Here are some of them:</p> - <h3>Pros:</h3> - <ul> -<li><b>Large variety of effects to choose from</b>: You can find an effect for any type of image or style you want. You can also mix and match different effects to create your own unique combinations.</li> -<li><b>High-quality and realistic results</b>: The effects are designed to look realistic and professional, without compromising the quality or resolution of your images. You can also adjust the effects to make them more or less realistic, depending on your preference.</li> -<li><b>Easy to use and customize</b>: The software is user-friendly and intuitive, with a simple and clear interface. You can apply and adjust the effects with just a few clicks and sliders. You can also save your custom settings and layouts for future use.</li> -<li><b>Compatible with most photo editing software</b>: You can use this software as a standalone program or as a plugin for your favorite photo editing software, such as Photoshop, Lightroom, Elements, PaintShop Pro, and more. You can also import and export images in various formats, such as JPEG, TIFF, PNG, PSD, and RAW.</li> -<li><b>Affordable price and lifetime updates</b>: You only need to pay once for this software and you will get lifetime updates and support. You don't need to pay for monthly or yearly subscriptions or fees. You also don't need to waste time on downloading or installing updates. The software will automatically update itself whenever there is a new version available.</li> -</ul> - <h3>Cons:</h3> - <ul> -<li><b>Large file size and system requirements</b>: The software is quite large in size, about 2.5 GB, so you need to have enough space on your device and a stable internet connection to download and install it. You also need to have a decent computer system to run it smoothly, with at least 4 GB of RAM and 64-bit operating system.</li> -<li><b>Some effects may look outdated or overdone</b>: Some of the effects may not suit your taste or style, especially if you prefer a more modern or minimalist look. Some of the effects may also look too artificial or exaggerated, especially if you apply them to the whole image or with high intensity.</li> -<li><b>May not work well with some image formats or resolutions</b>: The software may not be able to handle some image formats or resolutions, especially if they are too large or too small. You may encounter some errors or glitches when importing or exporting such images. You may also lose some quality or details when resizing or cropping your images.</li> -</ul> - <h2 style="text-align:center;">Alternatives to Auto FX PhotoGraphic Edges Ultimate Bundle Gen2 9.6.0</h2> - <p>Auto FX PhotoGraphic Edges Ultimate Bundle Gen2 9.6.0 is not the only software that can add edges, frames, and borders to your digital images. There are some other photo editing software that can do the same thing, with different features and prices.</p> - <p>Here are some of the alternatives to Auto FX PhotoGraphic Edges Ultimate Bundle Gen2 9.6.0:</p> - <table style="border:1px solid black;"> -<tr> -<th style="text-align:center;">Software</th> -<th style="text-align:center;">Features</th> -<th style="text-align:center;">Price</th> -</tr> -<tr> -<td style="text-align:center;">ON1 Effects </td> -<td style="text-align:center;">- Over 500 effects, filters, and presets within 23 categories <br> - Stackable and blendable effects with masking and brushing tools <br> - AI-powered portrait and landscape enhancements <br> - Compatible with Photoshop, Lightroom, and other photo editing software</td> -<td style="text-align:center;">$69.99 (one-time purchase)</td> -</tr> -<tr> -<td style="text-align:center;">Topaz Studio </td> -<td style="text-align:center;">- Over 200 effects, filters, and presets within 14 categories <br> - Customizable and adjustable effects with sliders and curves <br> - AI-powered noise reduction and sharpening tools <br> - Compatible with Photoshop, Lightroom, and other photo editing software</td> -<td style="text-align:center;">$99.99 (one-time purchase)</td> -</tr> -<tr> -<td style="text-align:center;">Alien Skin Exposure </td> -<td style="text-align:center;">- Over 500 effects, filters, and presets within 12 categories <br> - Film emulation and creative effects with grain and texture tools <br> - Advanced color grading and toning tools <br> - Compatible with Photoshop, Lightroom, and other photo editing software</td> -<td style="text-align:center;">$129 (one-time purchase)</td> -</tr> -<tr> -<td style="text-align:center;">AKVIS ArtSuite </td> -<td style="text-align:center;">- Over 100 effects, filters, and presets within 9 categories <br> - Artistic and decorative effects with patterns and textures <br> - Hand-painted and digital frames with custom designs <br> - Compatible with Photoshop, Lightroom, and other photo editing software</td> -<td style="text-align:center;">$72 (one-time purchase)</td> -</tr> -</table> - <h1 style="text-align:center;">Conclusion</h1> - <p>In conclusion, Auto FX PhotoGraphic Edges Ultimate Bundle Gen2 9.6.0 is a photo editing software that can help you enhance your digital images with over 10,000 edge, frame, and border effects within 32 categories.</p> - <p>It has many features, benefits, pros and cons that you need to consider before buying it.</p> - <p>It is also not the only software that can do this job. There are some alternatives that you can compare and contrast to find the best one for you.</p> - <p>If you are looking for a way to add some flair and creativity to your digital images, you might want to give Auto FX PhotoGraphic Edges Ultimate Bundle Gen2 9.6.0 a try.</p> - <h2 style="text-align:center;">FAQs</h2> - <p>Here are some frequently asked questions about Auto FX PhotoGraphic Edges Ultimate Bundle Gen2 9.6.0:</p> - <h3>How much does it cost?</h3> - <p>The software costs $249 for a one-time purchase. You can also get a discount if you buy it as part of a bundle with other Auto FX Software products.</p> - <h3>How can I get support or updates?</h3> - <p>You can get support or updates by visiting the official website of Auto FX Software. You can also contact them by email or phone.</p> - <h3>What are the system requirements?</h3> - <p>The system requirements for the software are: - Windows 7, 8, or 10 (64-bit) or Mac OS X 10.10 or higher - 4 GB of RAM or higher - 2.5 GB of free disk space - Internet connection for activation and updates <h3>Can I use it on multiple devices?</h3> - <p>Yes, you can use it on up to two devices with the same license. You can also transfer your license to another device if you change or upgrade your device.</p> - <h3>Is there a trial version or a money-back guarantee?</h3> - <p>Yes, there is a trial version that you can download and use for free for 15 days. You can access all the features and effects of the software during the trial period. You can also request a refund within 30 days of purchase if you are not satisfied with the software.</p> b2dd77e56b<br /> -<br /> -<br /> \ No newline at end of file diff --git a/spaces/tioseFevbu/cartoon-converter/scripts/Conwep Software Free Download Rar.md b/spaces/tioseFevbu/cartoon-converter/scripts/Conwep Software Free Download Rar.md deleted file mode 100644 index 4247d3e3e1ec5ec2ae9f56eca7eef49f870504f8..0000000000000000000000000000000000000000 --- a/spaces/tioseFevbu/cartoon-converter/scripts/Conwep Software Free Download Rar.md +++ /dev/null @@ -1,195 +0,0 @@ -<br /> -<h1>Introduction</h1> -<p>Have you ever wondered how to calculate the blast effects of different types of explosives and weapons? Or how to design structures that can withstand conventional weapons attacks? Or how to simulate various scenarios involving explosions, fragments, projectiles, craters, and ground shocks? If you answered yes to any of these questions, then you might be interested in Conwep software.</p> -<h2>Conwep Software Free Download Rar</h2><br /><p><b><b>Download File</b> &#9881; <a href="https://urlcod.com/2uHxXW">https://urlcod.com/2uHxXW</a></b></p><br /><br /> -<p>Conwep software is a powerful tool that can help you perform a range of conventional weapons effects calculations based on the equations and curves of TM 5-855-1, "Design and Analysis of Hardened Structures to Conventional Weapons Effects". It can help you with tasks such as blast load estimation, fragment penetration analysis, concrete wall breaching prediction, projectile penetration into rock and soil evaluation, cratering assessment, and ground shock estimation.</p> -<p>In this article, we will show you how you can download Conwep software for free in a RAR file format. We will also explain what a RAR file format is, how to open and extract Conwep software from it, how to use Conwep software for various purposes, and answer some frequently asked questions about Conwep software. So, if you are ready to learn more about this amazing software, let's get started!</p> - <h1>What is Conwep software?</h1> -<p>Conwep software is a computer program that was developed by the US Army Corps of Engineers (USACE) in the late 1980s and early 1990s. It is based on the technical manual TM 5-855-1, which was published in 1986 and provides a comprehensive set of equations and curves for calculating the effects of conventional weapons on structures and materials.</p> -<p>Conwep software can perform a variety of calculations, such as:</p> -<ul> -<li>Blast load estimation: Conwep software can estimate the blast pressure, impulse, duration, and velocity at any point due to a spherical or hemispherical explosive charge in air or water. It can also estimate the blast load on a flat or curved surface due to a reflected or incident shock wave.</li> -<li>Fragment penetration analysis: Conwep software can calculate the penetration depth and residual velocity of a steel fragment into a concrete target. It can also calculate the number of fragments that can penetrate a given thickness of concrete.</li> -<li>Concrete wall breaching prediction: Conwep software can predict the minimum charge weight and standoff distance required to breach a reinforced concrete wall with a shaped charge or a contact charge. It can also predict the size and shape of the breach hole and the amount of debris generated.</li> -<li>Projectile penetration into rock and soil evaluation: Conwep software can evaluate the penetration depth and residual velocity of a rigid or deformable projectile into a rock or soil target. It can also evaluate the crater diameter and volume produced by the projectile impact.</li> -<li>Cratering assessment: Conwep software can assess the crater dimensions and volume produced by a buried or surface explosive charge in soil or rock. It can also assess the ejecta mass, velocity, and angle distribution.</li> -<li>Ground shock estimation: Conwep software can estimate the ground shock parameters such as peak particle velocity, displacement, acceleration, and stress at any point due to a buried or surface explosive charge in soil or rock. It can also estimate the damage level to underground structures due to ground shock.</li> -</ul> -<p>The benefits and advantages of using Conwep software are:</p> -<ul> -<li>It is easy to use and user-friendly: Conwep software has a simple and intuitive graphical user interface (GUI) that allows you to input the data, select the options, and view the results in a few clicks. You can also save, load, print, and export your data and results easily.</li> -<li>It is accurate and reliable: Conwep software is based on well-established and validated equations and curves that have been derived from extensive experimental data and theoretical analysis. You can trust that Conwep software will give you accurate and reliable results for your calculations.</li> -<li>It is versatile and flexible: Conwep software can handle a wide range of scenarios and applications involving conventional weapons effects. You can use Conwep software for different types of explosives, weapons, targets, materials, environments, and conditions. You can also customize your calculations by changing the parameters, units, options, and outputs according to your needs.</li> -</ul> - <h1>What is a RAR file format?</h1> -<p>A RAR file format is a type of compressed file format that is used to reduce the size of files and folders. RAR stands for Roshal Archive, which is named after its creator Eugene Roshal. A RAR file format uses a proprietary compression algorithm that can achieve high compression ratios and support various features such as encryption, password protection, error recovery, split archives, and more.</p> -<p>The benefits and advantages of using RAR files are:</p> -<ul> -<li>They save space and time: RAR files can compress files and folders to a smaller size than other file formats such as ZIP. This means that you can save space on your storage device and reduce the time required to upload or download files.</li> -<li>They preserve quality and integrity: RAR files can compress files without losing any quality or data. They also have error recovery features that can repair corrupted or damaged files. This means that you can preserve the quality and integrity of your files even after compression.</li> -<li>They provide security and privacy: RAR files can encrypt files with strong encryption algorithms such as AES-256. They can also protect files with passwords or digital signatures. This means that you can secure your files from unauthorized access or modification.</li> -</ul> -<p>The differences between RAR and other file formats such as ZIP are:</p> -<ul> -<li>RAR files have higher compression ratios than ZIP files: This means that RAR files can make files smaller than ZIP files. <li>RAR files have more features and options than ZIP files: This means that RAR files can support encryption, password protection, error recovery, split archives, and more.</li> -<li>RAR files have a different file extension than ZIP files: This means that RAR files have a .rar file extension, while ZIP files have a .zip file extension.</li> -</ul> - <h1>How to download Conwep software for free in a RAR file format?</h1> -<p>If you want to download Conwep software for free in a RAR file format, you have two options: you can either download it from the official website of the USACE or from other sources that offer it for free. Here are the steps to download Conwep software from both options:</p> -<p></p> - <h2>Option 1: Download Conwep software from the official website of the USACE</h2> -<p>The official website of the USACE is https://www.usace.army.mil/. To download Conwep software from this website, follow these steps:</p> -<ol> -<li>Go to the homepage of the website and click on the "Publications" tab at the top menu.</li> -<li>On the "Publications" page, click on the "Engineering Software" link under the "Engineering and Construction" section.</li> -<li>On the "Engineering Software" page, scroll down to the "Conventional Weapons Effects Program (CONWEP)" section and click on the "Download CONWEP" link.</li> -<li>On the "Download CONWEP" page, read and agree to the terms and conditions of use and click on the "I Agree" button.</li> -<li>On the next page, enter your name, email address, organization, and country in the required fields and click on the "Submit" button.</li> -<li>On the next page, click on the "Download CONWEP" link to start downloading Conwep software in a RAR file format.</li> -<li>Save the RAR file to your desired location on your computer or device.</li> -</ol> - <h2>Option 2: Download Conwep software from other sources</h2> -<p>If you cannot access or download Conwep software from the official website of the USACE, you can try to find other sources that offer it for free. However, you should be careful and cautious when downloading Conwep software from other sources, as they may not be safe or reliable. To download Conwep software from other sources, follow these steps:</p> -<ol> -<li>Search for Conwep software free download rar on your preferred search engine such as Google or Bing.</li> -<li>Look for websites that offer Conwep software for free in a RAR file format. Some examples of such websites are https://download.cnet.com/Conventional-Weapons-Effects-Program/3000-2054_4-78226472.html and https://www.softpedia.com/get/Science-CAD/Conventional-Weapons-Effects-Program.shtml.</li> -<li>Visit the websites that offer Conwep software for free and check their credibility and reputation. You can do this by reading their reviews, ratings, comments, feedback, and testimonials from other users. You can also use tools such as VirusTotal or Norton Safe Web to scan the websites for any malware or viruses.</li> -<li>If you find a website that seems trustworthy and legitimate, click on the download link or button to start downloading Conwep software in a RAR file format.</li> -<li>Save the RAR file to your desired location on your computer or device.</li> -</ol> - <h3>The requirements and precautions to download Conwep software safely and securely</h3> -<p>To download Conwep software safely and securely, you should follow these requirements and precautions:</p> -<ul> -<li>You should have a stable and fast internet connection to download Conwep software without any interruptions or errors.</li> -<li>You should have enough space on your storage device to save Conwep software without any issues or problems.</li> -<li>You should have a valid email address to register and receive confirmation from the official website of the USACE or other sources that offer Conwep software for free.</li> -<li>You should read and agree to the terms and conditions of use of Conwep software before downloading it. You should also respect the intellectual property rights and privacy rights of Conwep software and its developers.</li> -<li>You should scan Conwep software with an antivirus or anti-malware program before opening or extracting it. You should also backup your data and files before installing or running Conwep software on your computer or device.</li> -</ul> - <h3>The tips and tricks to optimize the download speed and performance</h3> -<p>To optimize the download speed and performance of Conwep software, you should follow these tips and tricks:</p <ul> -<li>You should close any unnecessary programs or applications that may slow down your internet connection or consume your bandwidth.</li> -<li>You should use a download manager or accelerator program that can speed up your download process and resume your download if it is interrupted or paused.</li> -<li>You should choose a reliable and fast source to download Conwep software from. You should avoid sources that have low ratings, negative reviews, or suspicious links.</li> -<li>You should check your firewall or antivirus settings and make sure they do not block or interfere with your download process.</li> -<li>You should clear your browser cache and cookies and update your browser to the latest version.</li> -</ul> - <h1>How to open and extract Conwep software from a RAR file format?</h1> -<p>After you have downloaded Conwep software in a RAR file format, you need to open and extract it to access its contents and install it on your computer or device. To open and extract Conwep software from a RAR file format, you need a file extractor program that can handle RAR files, such as WinRAR, 7-Zip, PeaZip, or B1 Free Archiver. Here are the steps to open and extract Conwep software from a RAR file format using WinRAR as an example:</p> -<ol> -<li>Download and install WinRAR from its official website https://www.win-rar.com/ or other sources that offer it for free.</li> -<li>Locate the RAR file that contains Conwep software on your computer or device and right-click on it.</li> -<li>Select "Open with WinRAR" from the context menu that appears.</li> -<li>A new window will open showing the contents of the RAR file. You will see a folder named "Conwep" that contains the files and folders of Conwep software.</li> -<li>Select the "Conwep" folder and click on the "Extract to" button at the top menu of the window.</li> -<li>A new window will open asking you to choose a destination path for the extracted folder. You can either keep the default path or browse to another location on your computer or device.</li> -<li>Click on the "OK" button to start extracting the folder.</li> -<li>Wait for the extraction process to finish. You will see a message saying "All OK" when it is done.</li> -<li>Close the WinRAR window and go to the destination path where you extracted the folder. You will see a folder named "Conwep" that contains the files and folders of Conwep software.</li> -</ol> - <h3>The requirements and precautions to open and extract Conwep software correctly and efficiently</h3> -<p>To open and extract Conwep software correctly and efficiently, you should follow these requirements and precautions:</p> -<ul> -<li>You should have a file extractor program that can handle RAR files, such as WinRAR, 7-Zip, PeaZip, or B1 Free Archiver. You should download and install it from a trusted source and update it to the latest version.</li> -<li>You should have enough space on your storage device to save the extracted folder of Conwep software without any issues or problems.</li> -<li>You should scan the RAR file and the extracted folder of Conwep software with an antivirus or anti-malware program before opening or extracting them. You should also backup your data and files before installing or running Conwep software on your computer or device.</li> -</ul> - <h3>The tips and tricks to troubleshoot any issues or errors that may occur</h3> -<p>To troubleshoot any issues or errors that may occur when opening or extracting Conwep software from a RAR file format, you should follow these tips and tricks:</p documentation file of Conwep software. This will tell you if Conwep software is compatible with your operating system and device, and what are the minimum and recommended system requirements for Conwep software.</li> -<li>You should check the permissions and settings of your file extractor program and your computer or device. You can do this by running your file extractor program as an administrator or in compatibility mode. You can also check your firewall or antivirus settings and make sure they do not block or interfere with your file extractor program or Conwep software.</li> -<li>You should contact the support team of Conwep software or your file extractor program if you encounter any issues or errors that you cannot resolve by yourself. You can do this by visiting their official websites or forums and submitting a ticket or a post describing your problem. You can also email them or call them if they provide such options.</li> -</ul> - <h1>How to use Conwep software for various purposes?</h1> -<p>After you have opened and extracted Conwep software from a RAR file format, you can use it for various purposes involving conventional weapons effects calculations. To use Conwep software for various purposes, you need to install and run it on your computer or device. Here are the steps to install and run Conwep software:</p> -<ol> -<li>Go to the folder where you extracted Conwep software and locate the setup.exe file.</li> -<li>Double-click on the setup.exe file to launch the installation wizard of Conwep software.</li> -<li>Follow the instructions on the screen to complete the installation process of Conwep software. You can choose the installation directory, the start menu folder, and the desktop shortcut options according to your preferences.</li> -<li>After the installation process is finished, click on the "Finish" button to exit the installation wizard.</li> -<li>Go to the start menu or the desktop and click on the "Conwep" icon to launch Conwep software.</li> -<li>A new window will open showing the main interface of Conwep software. You will see a menu bar, a toolbar, a status bar, and a workspace area.</li> -</ol> - <h2>The basic tutorial and guide on how to use Conwep software for different scenarios and applications</h2> -<p>To use Conwep software for different scenarios and applications, you need to select the appropriate calculation type, input the required data, and view the results. Here is a basic tutorial and guide on how to use Conwep software for different scenarios and applications:</p> - <h3>Scenario 1: Blast load estimation</h3> -<p>If you want to estimate the blast load at any point due to a spherical or hemispherical explosive charge in air or water, follow these steps:</p> -<ol> -<li>On the menu bar of Conwep software, click on "Calculation" and select "Blast Load Estimation".</li> -<li>A new window will open asking you to choose the explosive type, charge shape, charge weight, standoff distance, ambient pressure, ambient temperature, and output units. Enter the values in the corresponding fields or use the default values if you are not sure.</li> -<li>Click on the "OK" button to perform the calculation.</li> -<li>A new window will open showing the results of the calculation. You will see the blast pressure, impulse, duration, and velocity at any point due to the explosive charge. You can also see a graph of the blast load versus time or distance.</li> -<li>You can save, print, or export the results in various formats such as TXT, CSV, XLS, PDF, PNG, JPG, BMP, or GIF.</li> -</ol> - <h3>Scenario 2: Fragment penetration analysis</h3> -<p>If you want to calculate the penetration depth and residual velocity of a steel fragment into a concrete target, follow these steps:</p <ol> -<li>On the menu bar of Conwep software, click on "Calculation" and select "Fragment Penetration Analysis".</li> -<li>A new window will open asking you to choose the fragment type, fragment weight, fragment velocity, fragment angle, target type, target thickness, and output units. Enter the values in the corresponding fields or use the default values if you are not sure.</li> -<li>Click on the "OK" button to perform the calculation.</li> -<li>A new window will open showing the results of the calculation. You will see the penetration depth and residual velocity of the fragment into the target. You can also see a graph of the penetration depth versus velocity or weight.</li> -<li>You can save, print, or export the results in various formats such as TXT, CSV, XLS, PDF, PNG, JPG, BMP, or GIF.</li> -</ol> - <h3>Scenario 3: Concrete wall breaching prediction</h3> -<p>If you want to predict the minimum charge weight and standoff distance required to breach a reinforced concrete wall with a shaped charge or a contact charge, follow these steps:</p> -<ol> -<li>On the menu bar of Conwep software, click on "Calculation" and select "Concrete Wall Breaching Prediction".</li> -<li>A new window will open asking you to choose the charge type, charge shape, charge diameter, wall type, wall thickness, reinforcement ratio, and output units. Enter the values in the corresponding fields or use the default values if you are not sure.</li> -<li>Click on the "OK" button to perform the calculation.</li> -<li>A new window will open showing the results of the calculation. You will see the minimum charge weight and standoff distance required to breach the wall. You will also see the size and shape of the breach hole and the amount of debris generated.</li> -<li>You can save, print, or export the results in various formats such as TXT, CSV, XLS, PDF, PNG, JPG, BMP, or GIF.</li> -</ol> - <h3>Scenario 4: Projectile penetration into rock and soil evaluation</h3> -<p>If you want to evaluate the penetration depth and residual velocity of a rigid or deformable projectile into a rock or soil target, follow these steps:</p <ol> -<li>On the menu bar of Conwep software, click on "Calculation" and select "Projectile Penetration into Rock and Soil Evaluation".</li> -<li>A new window will open asking you to choose the projectile type, projectile weight, projectile diameter, projectile velocity, projectile angle, target type, target density, target strength, and output units. Enter the values in the corresponding fields or use the default values if you are not sure.</li> -<li>Click on the "OK" button to perform the calculation.</li> -<li>A new window will open showing the results of the calculation. You will see the penetration depth and residual velocity of the projectile into the target. You will also see the crater diameter and volume produced by the projectile impact.</li> -<li>You can save, print, or export the results in various formats such as TXT, CSV, XLS, PDF, PNG, JPG, BMP, or GIF.</li> -</ol> - <h3>Scenario 5: Cratering assessment</h3> -<p>If you want to assess the crater dimensions and volume produced by a buried or surface explosive charge in soil or rock, follow these steps:</p> -<ol> -<li>On the menu bar of Conwep software, click on "Calculation" and select "Cratering Assessment".</li> -<li>A new window will open asking you to choose the charge type, charge weight, charge depth, target type, target density, target strength, and output units. Enter the values in the corresponding fields or use the default values if you are not sure.</li> -<li>Click on the "OK" button to perform the calculation.</li> -<li>A new window will open showing the results of the calculation. You will see the crater diameter and volume produced by the explosive charge. You will also see the ejecta mass, velocity, and angle distribution.</li> -<li>You can save, print, or export the results in various formats such as TXT, CSV, XLS, PDF, PNG, JPG, BMP, or GIF.</li> -</ol> - <h3>Scenario 6: Ground shock estimation</h3> -<p>If you want to estimate the ground shock parameters at any point due to a buried or surface explosive charge in soil or rock, follow these steps:</p <ol> -<li>On the menu bar of Conwep software, click on "Calculation" and select "Ground Shock Estimation".</li> -<li>A new window will open asking you to choose the charge type, charge weight, charge depth, target type, target density, target strength, observation point distance, and output units. Enter the values in the corresponding fields or use the default values if you are not sure.</li> -<li>Click on the "OK" button to perform the calculation.</li> -<li>A new window will open showing the results of the calculation. You will see the peak particle velocity, displacement, acceleration, and stress at the observation point due to the explosive charge. You will also see a graph of the ground shock parameters versus time or distance.</li> -<li>You can save, print, or export the results in various formats such as TXT, CSV, XLS, PDF, PNG, JPG, BMP, or GIF.</li> -</ol> - <h2>The best practices and recommendations on how to use Conwep software effectively and productively</h2> -<p>To use Conwep software effectively and productively, you should follow these best practices and recommendations:</p> -<ul> -<li>You should familiarize yourself with the equations and curves of TM 5-855-1 before using Conwep software. You can find a copy of TM 5-855-1 on the official website of the USACE or other sources that offer it for free. You can also find a summary of TM 5-855-1 in the help file or the documentation file of Conwep software.</li> -<li>You should verify and validate your data and results before using them for any purpose. You can do this by comparing your data and results with other sources or methods that are relevant and reliable. You can also use the "Sensitivity Analysis" function of Conwep software to check how your results change with different input values.</li> -<li>You should use Conwep software with caution and responsibility. You should not use Conwep software for any illegal or unethical purposes. You should also not use Conwep software for any critical or life-threatening situations without proper consultation and verification from experts or authorities.</li> -</ul> - <h1>Conclusion</h1> -<p>In this article, we have shown you how you can download Conwep software for free in a RAR file format. We have also explained what a RAR file format is, how to open and extract Conwep software from it, how to use Conwep software for various purposes, and answered some frequently asked questions about Conwep software.</p> -<p>We hope that you have found this article useful and informative. If you are interested in Conwep software, we encourage you to download and try it for yourself. You will be amazed by what Conwep software can do for you.</p> -<p>Thank you for reading this article. If you have any feedback or questions, please feel free to leave a comment below or contact us through our website. We would love to hear from you.</p> - <h1>FAQs</h1> -<h2>What are the system requirements for Conwep software?</h2> -<p>The system requirements for Conwep software are:</p> -<ul> -<li>Operating system: Windows XP/Vista/7/8/10</li> -<li>Processor: Pentium III or higher</li> -<li>Memory: 256 MB RAM or higher</li> -<li>Disk space: 50 MB free disk space or higher</li> -<li>Display: 800 x 600 pixels or higher</li> -</ul> - <h2>Is Conwep software compatible with Windows 11 or Windows 10?</h2> -<p>Conwep software is compatible with Windows 11 or Windows 10. However, you may need to run it in compatibility mode or as an administrator if you encounter any issues or errors.</p> - <h2>How much does Conwep software cost?</h2> -<p>Conwep software is free to download and use for personal or educational purposes. However, if you want to use Conwep software for commercial or professional purposes, you may need to obtain a license from the USACE or its authorized distributors.</p> - <h2>Is Conwep software safe and reliable?</h2 <p>Conwep software is safe and reliable. It is based on well-established and validated equations and curves that have been derived from extensive experimental data and theoretical analysis. It is also developed and maintained by the USACE, which is a reputable and authoritative organization in the field of conventional weapons effects. However, you should always scan Conwep software with an antivirus or anti-malware program before opening or extracting it. You should also backup your data and files before installing or running Conwep software on your computer or device.</p> - <h2>Where can I find more information or support for Conwep software?</h2> -<p>You can find more information or support for Conwep software on the official website of the USACE or other sources that offer it for free. You can also find a help file or a documentation file in the folder where you extracted Conwep software. These files contain detailed instructions and examples on how to use Conwep software for various purposes. You can also contact the support team of Conwep software by emailing them at conwep@usace.army.mil or calling them at (202) 761-0011.</p> - <h1></h1></p> b2dd77e56b<br /> -<br /> -<br /> \ No newline at end of file diff --git a/spaces/tioseFevbu/cartoon-converter/scripts/Dr Ghulam Jilani Barq Books Free Download !!HOT!! Pdf.md b/spaces/tioseFevbu/cartoon-converter/scripts/Dr Ghulam Jilani Barq Books Free Download !!HOT!! Pdf.md deleted file mode 100644 index 14ae36e20dcd22277d32cb8d1a78388e723a89e7..0000000000000000000000000000000000000000 --- a/spaces/tioseFevbu/cartoon-converter/scripts/Dr Ghulam Jilani Barq Books Free Download !!HOT!! Pdf.md +++ /dev/null @@ -1,15 +0,0 @@ -<br /> -<h1>Dr Ghulam Jilani Barq Books Free Download Pdf</h1> -<p>Dr Ghulam Jilani Barq was a renowned scholar of Islam and Urdu literature. He wrote many books on various topics, such as Quran, Islam, history, philosophy, and poetry. Some of his famous books are:</p> -<h2>Dr Ghulam Jilani Barq Books Free Download Pdf</h2><br /><p><b><b>DOWNLOAD</b> &#10084;&#10084;&#10084; <a href="https://urlcod.com/2uHxeR">https://urlcod.com/2uHxeR</a></b></p><br /><br /> -<ul> -<li><strong>Dou Quran</strong>: This book is a comparative study of the Quran and the Bible. It analyzes the similarities and differences between the two scriptures, and highlights the superiority of the Quran over the Bible. The book also refutes some of the common misconceptions and allegations against the Quran by Christian missionaries. The book is available for free download in PDF format from <a href="https://archive.org/details/dou-quran-by-dr-ghulam-jelani-barq">Internet Archive</a>[^1^].</li> -<li><strong>Do Islam</strong>: This book is a critical examination of the two major sects of Islam: Sunni and Shia. It traces the historical origins and development of these sects, and exposes their deviations from the true teachings of Islam. The book also discusses the role of these sects in the political and social affairs of the Muslim world, and warns against their sectarianism and extremism. The book is available for free download in PDF format from <a href="https://archive.org/details/DoIslamByGhulamJilaniBarq">Internet Archive</a>[^2^].</li> -<li><strong>Do Quran</strong>: This book is a sequel to Dou Quran. It compares the Quran with other religious scriptures, such as Vedas, Avesta, Torah, Psalms, Gospels, and Book of Mormon. It shows how the Quran confirms, corrects, or rejects the teachings of these scriptures, and proves its authenticity and universality. The book also answers some of the common objections and questions raised by non-Muslims about the Quran. The book is available for free download in PDF format from <a href="https://www.scribd.com/doc/153428583/DO-QURAN-by-dr-ghulam-jilani-barq">Scribd</a>[^3^].</li> -</ul> -<p>These books are highly recommended for anyone who wants to learn more about Islam and its relation with other religions. They are also useful for refuting the false propaganda and misinformation spread by some anti-Islamic forces.</p><p>Dr Ghulam Jilani Barq was not only a prolific writer, but also a distinguished scholar of Islam and Urdu literature. He had a Ph.D degree in Islamic studies from Harvard and Oxford universities, and was one of the few scholars who had mastered both Arabic and English languages. He taught at various colleges and universities in India and Pakistan, and also served as an editor of several journals and magazines. He was a recipient of many awards and honors for his academic and literary achievements.</p> -<p>Dr Ghulam Jilani Barq was also a poet and a critic. He wrote poetry in Urdu and Persian languages, and also translated some of the works of Rumi, Saadi, Iqbal, and Ghalib into English. He also wrote essays and articles on various topics, such as history, philosophy, culture, and politics. He was a staunch supporter of the Pakistan movement, and advocated for the unity and solidarity of the Muslim Ummah. He was also a defender of the Quran and Islam against the attacks of orientalists and missionaries.</p> -<p></p> -<p>Dr Ghulam Jilani Barq was a man of great vision and wisdom. He had a deep insight into the problems and challenges faced by the Muslims in the modern world. He also had a keen interest in the scientific and technological developments of his time. He believed that Islam was compatible with reason and science, and that Muslims should adopt a progressive and dynamic approach to their religion. He also emphasized the importance of education, research, and dialogue for the advancement of the Muslim society.</p> 7196e7f11a<br /> -<br /> -<br /> \ No newline at end of file diff --git a/spaces/tioseFevbu/cartoon-converter/scripts/Free Download Sothink Swf Decompiler Serial Key ((INSTALL)).md b/spaces/tioseFevbu/cartoon-converter/scripts/Free Download Sothink Swf Decompiler Serial Key ((INSTALL)).md deleted file mode 100644 index f6cba722bd32e09096c2e2d6522dfb6208478ca9..0000000000000000000000000000000000000000 --- a/spaces/tioseFevbu/cartoon-converter/scripts/Free Download Sothink Swf Decompiler Serial Key ((INSTALL)).md +++ /dev/null @@ -1,113 +0,0 @@ -<br /> -<h1>Free Download Sothink SWF Decompiler Serial Key</h1> -<p>If you are looking for a way to decompile, edit, and convert SWF files, you may have heard of Sothink SWF Decompiler, a powerful and professional tool that can help you do all these tasks. But how can you get a free download of Sothink SWF Decompiler serial key? Is it safe and legal to use a cracked serial key for this software? In this article, we will answer these questions and provide you with some useful information about Sothink SWF Decompiler, SWF file format, and how to get a genuine serial key for this software.</p> - <h2>What is Sothink SWF Decompiler?</h2> -<p>Sothink SWF Decompiler is a software product developed by SourceTec Software, a company that specializes in Flash and web development tools. It is the first Flash decompiling tool in the market, and it can decompile SWF files to FLA, FLEX, HTML5, or other formats. It can also extract elements from SWF files, such as images, sounds, texts, fonts, buttons, sprites, ActionScript, etc. Moreover, it can edit and replace elements in SWF files, such as changing shapes, colors, texts, sounds, etc. It can also convert Flash to HTML5 files, which can be played on any browser that supports HTML5.</p> -<h2>Free Download Sothink Swf Decompiler Serial Key</h2><br /><p><b><b>Download File</b> &#10037;&#10037;&#10037; <a href="https://urlcod.com/2uHwL1">https://urlcod.com/2uHwL1</a></b></p><br /><br /> - <h3>Features and benefits of Sothink SWF Decompiler</h3> -<p>Sothink SWF Decompiler has many features and benefits that make it a popular choice among Flash developers and users. Some of them are:</p> -<ul> -<li>It supports Flash CS3/CS4/CS5/CS6 and ActionScript 2.0/3.0.</li> -<li>It has a user-friendly interface that supports multiple languages, such as English, German, French, Chinese, Italian, and Korean.</li> -<li>It has a built-in Flash player that can play any SWF/FLV/F4V files smoothly.</li> -<li>It has a free Flash downloader plugin that can capture online Flash from IE or Firefox in one click.</li> -<li>It has a batch mode that can decompile multiple SWF files at once.</li> -<li>It has a global search function that can search all ActionScript in the decompiled files.</li> -<li>It has an advanced settings option that can customize the output format and quality.</li> -</ul> - <h3>How to use Sothink SWF Decompiler to decompile SWF files</h3> -<p>Using Sothink SWF Decompiler to decompile SWF files is very easy and simple. Here are the steps:</p> -<ol> -<li>Download and install Sothink SWF Decompiler from its official website or other trusted sources.</li> -<li>Launch the program and open the SWF file you want to decompile from the left explorer panel.</li> -<li>The selected SWF file will be automatically decompiled and displayed in the resources tree panel. You can view all the elements in the preview window.</li> -<li>Select the element you want to extract or edit from the resources tree panel. You can right-click on it and choose the option you need.</li> -<li>If you want to convert the whole SWF file to another format, such as FLA or HTML5, you can click on the Export button on the top toolbar and choose the output format and settings.</li> -<li>Save the decompiled or converted file to your desired location.</li> -</ol> -<p>Congratulations, you have successfully decompiled a SWF file using Sothink SWF Decompiler!</p> - <h2>What is SWF file format?</h2> -<p>SWF stands for Small Web Format, and it is a file format that can store vector graphics, animations, audio, video, and interactive content. It was originally created by Macromedia in 1996 as a format for Flash movies, and later acquired by Adobe in 2005. SWF files can be played by Adobe Flash Player, which is a browser plugin that can run on various platforms and devices. SWF files can also be embedded in web pages using HTML tags or JavaScript code.</p> - <h3>The history and development of SWF file format</h3> -<p>The SWF file format has a long and rich history of development and evolution. Here are some of the major milestones in its history:</p> -<ul> -<li>In 1996, Macromedia released the first version of Flash, which was a software for creating vector-based animations and interactive content. The Flash movies were saved as SWF files, which stood for ShockWave Flash.</li> -<li>In 1998, Macromedia released Flash 3, which introduced ActionScript, a scripting language that could control the behavior and logic of the Flash movies. ActionScript was based on ECMAScript, the same standard as JavaScript.</li> -<li>In 2000, Macromedia released Flash 5, which improved the performance and functionality of ActionScript. It also added support for MP3 audio and XML data.</li> -<li>In 2002, Macromedia released Flash MX, which added support for video and bitmap graphics. It also introduced Flash Remoting, a technology that allowed Flash movies to communicate with server-side applications.</li> -<li>In 2005, Adobe acquired Macromedia and took over the development of Flash and SWF file format. Adobe released Flash 8, which added support for alpha transparency, filters, and blend modes.</li> -<li>In 2007, Adobe released Flash CS3, which added support for ActionScript 3.0, a more powerful and object-oriented version of ActionScript. It also added support for H.264 video and AAC audio.</li> -<li>In 2010, Adobe released Flash CS5, which added support for iPhone and iPad devices. It also introduced Adobe AIR, a cross-platform runtime that could run SWF files as standalone applications on desktops and mobile devices.</li> -<li>In 2011, Adobe announced that it would stop developing Flash Player for mobile browsers, due to the rise of HTML5 and other web standards. It also renamed the SWF file format to Small Web Format, to reflect its broader scope beyond Flash.</li> -<li>In 2012, Adobe released Flash CS6, which added support for Android devices. It also introduced Stage3D, a technology that enabled hardware-accelerated 3D graphics in SWF files.</li> -<li>In 2015, Adobe released Animate CC, which replaced Flash as the main software for creating SWF files. Animate CC could also create HTML5 and WebGL content.</li> -<li>In 2017, Adobe announced that it would end support for Flash Player by the end of 2020, due to the declining usage and security issues of the plugin. It also encouraged users to migrate their SWF content to HTML5 or other formats.</li> -</ul> - <h3>The advantages and disadvantages of SWF file format</h3> -<p>The SWF file format has some advantages and disadvantages that make it suitable or unsuitable for different purposes. Some of them are:</p> - <table> -<tr><th>Advantages</th><th>Disadvantages</th></tr> -<tr><td>It can store vector graphics that can scale without losing quality.</td><td>It requires a plugin or a runtime to play on browsers or devices.</td></tr> -<tr><td>It can store animations that can be interactive and dynamic.</td><td>It can consume a lot of CPU and memory resources when playing complex animations.</td></tr> -<tr><td>It can store audio and video that can be synchronized with the animations.</td><td>It can have compatibility and security issues with different browsers or platforms.</td></tr> -<tr><td>It can store ActionScript that can add logic and functionality to the animations.</td><td>It can be vulnerable to malicious code or exploits that can harm the users' systems.</td></tr> -<tr><td>It can be compressed to reduce the file size and bandwidth usage.</td><td>It can be difficult to edit or modify without specialized software or tools.</td></tr> -</table> - <h2>Why do you need a serial key for Sothink SWF Decompiler?</h2> -<p>Sothink SWF Decompiler is a paid software that requires a serial key to activate and use its full features. A serial key is a unique code that is generated by the software vendor and assigned to each user who purchases the software. The serial key is used to verify the authenticity and validity of the software and prevent unauthorized or illegal use of the software.</p> -<p></p> - <h3>The limitations of the trial version of Sothink SWF Decompiler</h3> -<p>If you do not have a serial key for Sothink SWF Decompiler, you can still download and install the trial version of the software from its official website or other trusted sources. However, the trial version has some limitations that restrict its functionality and usability. Some of them are:</p> -<ul> -<li>The trial version can only be used for 30 days from the date of installation.</li> -<li>The trial version can only decompile 10 SWF files in total.</li> -<li>The trial version can only export 50% of each resource in a SWF file.</li> -<li>The trial version can only convert 30 seconds of each video or audio in a SWF file.</li> -<li>The trial version can only replace one element in a SWF file at a time.</li> -<li>The trial version can only export HTML5 files with a watermark.</li> -</ul> -<p>These limitations can make the trial version unsuitable for your needs, especially if you want to decompile, edit, or convert many SWF files or large SWF files. Therefore, you may need a serial key to unlock the full features of Sothink SWF Decompiler.</p> - <h3>The risks and drawbacks of using a cracked serial key for Sothink SWF Decompiler</h3> -<p>Some people may try to find a cracked serial key for Sothink SWF Decompiler on the internet, hoping to get a free download of the software without paying for it. A cracked serial key is a code that has been illegally generated or modified by hackers or crackers to bypass the activation process of the software. However, using a cracked serial key for Sothink SWF Decompiler is not a good idea, as it can bring you many risks and drawbacks. Some of them are:</p> -<ul> -<li>It is illegal and unethical to use a cracked serial key for Sothink SWF Decompiler, as it violates the terms and conditions of the software license agreement. You may face legal consequences or penalties if you are caught using a cracked serial key.</li> -<li>It is unsafe and risky to use a cracked serial key for Sothink SWF Decompiler, as it may contain viruses, malware, spyware, or other harmful programs that can damage your system or steal your personal information. You may also expose your system to hackers or attackers who can access your files or data through the cracked serial key.</li> -<li>It is unreliable and unstable to use a cracked serial key for Sothink SWF Decompiler, as it may not work properly or cause errors or crashes in the software. You may also lose your data or work if the software fails to save or export your files correctly. You may also miss out on the updates or support from the software vendor, as they may detect and block your cracked serial key.</li> -</ul> -<p>These risks and drawbacks can outweigh the benefits of using a cracked serial key for Sothink SWF Decompiler, as it can cost you more time, money, and trouble than buying a genuine serial key. Therefore, you should avoid using a cracked serial key for Sothink SWF Decompiler.</p> - <h2>How to get a genuine serial key for Sothink SWF Decompiler?</h2> -<p>If you want to get a genuine serial key for Sothink SWF Decompiler, you have two main options: buying it from the official website or getting it from alternative ways. Let's look at each option in detail.</p> - <h3>The official website and pricing of Sothink SWF Decompiler</h3> -<p>The official website of Sothink SWF Decompiler is , where you can find more information about the software, such as features, screenshots, tutorials, reviews, etc. You can also download the trial version or buy the full version of the software from this website.</p> - <p>The pricing of Sothink SWF Decompiler is as follows:</p> - <table> -<tr><th>Lifetime License</th><th>1-Year License</th></tr> -<tr><td>$79.99</td><td>$39.99</td></tr> -<tr><td>You can use the software forever without any time limit.</td><td>You can use the software for one year from the date of purchase.</td></tr> -<tr><td>You can get free lifetime updates and support from the software vendor.</td><td>You can get free updates and support for one year from the date of purchase.</td></tr> -</table> - <p>You can choose the license type that suits your needs and budget. You can pay by credit card, PayPal, or other methods. After you complete the payment, you will receive an email with the serial key and the download link for the software. You can then use the serial key to activate the software and enjoy its full features.</p> - <h3>The alternative ways to get a discount or a free serial key for Sothink SWF Decompiler</h3> -<p>If you do not want to pay the full price for Sothink SWF Decompiler, you may try some alternative ways to get a discount or a free serial key for the software. Some of them are:</p> -<ul> -<li>You can look for coupon codes or promo codes that can give you a percentage off or a fixed amount off the original price of the software. You can find these codes on some websites that offer coupons or deals for various products and services, such as RetailMeNot, Slickdeals, CouponChief, etc. You can also search for these codes on Google or other search engines, using keywords like "Sothink SWF Decompiler coupon code" or "Sothink SWF Decompiler promo code". However, you should be careful and check the validity and reliability of these codes before using them, as some of them may be expired or fake.</li> -<li>You can look for giveaways or contests that can give you a chance to win a free serial key for Sothink SWF Decompiler. You can find these giveaways or contests on some websites that host them regularly, such as Giveaway of the Day, SharewareOnSale, BitsDuJour, etc. You can also search for these giveaways or contests on Google or other search engines, using keywords like "Sothink SWF Decompiler giveaway" or "Sothink SWF Decompiler contest". However, you should be aware and follow the rules and requirements of these giveaways or contests before entering them, as some of them may have restrictions or conditions.</li> -<li>You can look for reviews or testimonials that can give you a free serial key for Sothink SWF Decompiler in exchange for your honest feedback or opinion about the software. You can find these reviews or testimonials on some websites that offer them occasionally, such as CNET, Softpedia, Software Informer, etc. You can also search for these reviews or testimonials on Google or other search engines, using keywords like "Sothink SWF Decompiler review" or "Sothink SWF Decompiler testimonial". However, you should be genuine and ethical when writing your review or testimonial, as it may affect the reputation and quality of the software.</li> -</ul> -<p>These alternative ways can help you save some money or get a free serial key for Sothink SWF Decompiler. However, they are not guaranteed to work all the time, and they may have some risks or drawbacks. Therefore, you should use them at your own discretion and responsibility.</p> - <h2>Conclusion</h2> -<p>In conclusion, Sothink SWF Decompiler is a great tool that can help you decompile, edit, and convert SWF files easily and quickly. However, you need a serial key to activate and use its full features. You can buy a genuine serial key from its official website , or you can try some alternative ways to get a discount or a free serial key for the software. However, you should avoid using a cracked serial key for Sothink SWF Decompiler, as it is illegal, unsafe, unreliable, and unethical. We hope this article has given you some useful information and tips about Sothink SWF Decompiler and how to get a free download of its serial key.</p> - <h2>FAQs</h2> -<p>Here are some frequently asked questions about Sothink SWF Decompiler and its serial key:</p> -<h3>Q: Can I use Sothink SWF Decompiler on Mac?</h3> -<p>A: Yes, you can use Sothink SWF Decompiler on Mac. There is a Mac version of the software that is compatible with Mac OS X 10.6 or later. You can download it from its official website .</p> - <h3>Q: Can I use Sothink SWF Decompiler on multiple computers?</h3> -<p>A: Yes, you can use Sothink SWF Decompiler on multiple computers. However, you need to buy a separate license for each computer that you want to use the software on. Alternatively, you can buy a multi-user license that allows you to use the software on up to 10 computers with one serial key.</p> - <h3>Q: Can I decompile Flash games with Sothink SWF Decompiler?</h3> -<p>A: Yes, you can decompile Flash games with Sothink SWF Decompiler. However, you should respect the intellectual property rights of the game developers and publishers, and only decompile Flash games for personal use or learning purposes. You should not decompile Flash games for commercial use or distribution without the permission of the game owners.</p> - <h3>Q: Can I recover the source code of ActionScript from SWF files with Sothink SWF Decompiler?</h3> -<p>A: Yes, you can recover the source code of ActionScript from SWF files with Sothink SWF Decompiler. However, you should note that the recovered source code may not be exactly the same as the original source code, as some information or comments may be lost or changed during the compilation process. You should also note that the recovered source code may be protected by encryption or obfuscation techniques, which can make it hard to read or understand.</p> - <h3>Q: Can I get a refund if I am not satisfied with Sothink SWF Decompiler?</h3> -<p>A: Yes, you can get a refund if you are not satisfied with Sothink SWF Decompiler. The software vendor offers a 30-day money-back guarantee for all its products, which means that you can request a full refund within 30 days of your purchase if you are not happy with the software. You can contact the customer service team via email or phone to initiate the refund process.</p> b2dd77e56b<br /> -<br /> -<br /> \ No newline at end of file diff --git a/spaces/tjburns/ask_marcus_aurelius/.venv/lib/python3.10/site-packages/pip/_vendor/chardet/version.py b/spaces/tjburns/ask_marcus_aurelius/.venv/lib/python3.10/site-packages/pip/_vendor/chardet/version.py deleted file mode 100644 index a08a06b9a8778863e91d1bd4cbaac6a4b9730a62..0000000000000000000000000000000000000000 --- a/spaces/tjburns/ask_marcus_aurelius/.venv/lib/python3.10/site-packages/pip/_vendor/chardet/version.py +++ /dev/null @@ -1,9 +0,0 @@ -""" -This module exists only to simplify retrieving the version number of chardet -from within setup.py and from chardet subpackages. - -:author: Dan Blanchard (dan.blanchard@gmail.com) -""" - -__version__ = "5.0.0" -VERSION = __version__.split(".") diff --git a/spaces/tjburns/ask_marcus_aurelius/.venv/lib/python3.10/site-packages/pip/_vendor/pyparsing/diagram/__init__.py b/spaces/tjburns/ask_marcus_aurelius/.venv/lib/python3.10/site-packages/pip/_vendor/pyparsing/diagram/__init__.py deleted file mode 100644 index 1506d66bf4e93afb60ad46c23f234b31c46b3a7e..0000000000000000000000000000000000000000 --- a/spaces/tjburns/ask_marcus_aurelius/.venv/lib/python3.10/site-packages/pip/_vendor/pyparsing/diagram/__init__.py +++ /dev/null @@ -1,642 +0,0 @@ -import railroad -from pip._vendor import pyparsing -import typing -from typing import ( - List, - NamedTuple, - Generic, - TypeVar, - Dict, - Callable, - Set, - Iterable, -) -from jinja2 import Template -from io import StringIO -import inspect - - -jinja2_template_source = """\ -<!DOCTYPE html> -<html> -<head> - {% if not head %} - <style type="text/css"> - .railroad-heading { - font-family: monospace; - } - </style> - {% else %} - {{ head | safe }} - {% endif %} -</head> -<body> -{{ body | safe }} -{% for diagram in diagrams %} - <div class="railroad-group"> - <h1 class="railroad-heading">{{ diagram.title }}</h1> - <div class="railroad-description">{{ diagram.text }}</div> - <div class="railroad-svg"> - {{ diagram.svg }} - </div> - </div> -{% endfor %} -</body> -</html> -""" - -template = Template(jinja2_template_source) - -# Note: ideally this would be a dataclass, but we're supporting Python 3.5+ so we can't do this yet -NamedDiagram = NamedTuple( - "NamedDiagram", - [("name", str), ("diagram", typing.Optional[railroad.DiagramItem]), ("index", int)], -) -""" -A simple structure for associating a name with a railroad diagram -""" - -T = TypeVar("T") - - -class EachItem(railroad.Group): - """ - Custom railroad item to compose a: - - Group containing a - - OneOrMore containing a - - Choice of the elements in the Each - with the group label indicating that all must be matched - """ - - all_label = "[ALL]" - - def __init__(self, *items): - choice_item = railroad.Choice(len(items) - 1, *items) - one_or_more_item = railroad.OneOrMore(item=choice_item) - super().__init__(one_or_more_item, label=self.all_label) - - -class AnnotatedItem(railroad.Group): - """ - Simple subclass of Group that creates an annotation label - """ - - def __init__(self, label: str, item): - super().__init__(item=item, label="[{}]".format(label) if label else label) - - -class EditablePartial(Generic[T]): - """ - Acts like a functools.partial, but can be edited. In other words, it represents a type that hasn't yet been - constructed. - """ - - # We need this here because the railroad constructors actually transform the data, so can't be called until the - # entire tree is assembled - - def __init__(self, func: Callable[..., T], args: list, kwargs: dict): - self.func = func - self.args = args - self.kwargs = kwargs - - @classmethod - def from_call(cls, func: Callable[..., T], *args, **kwargs) -> "EditablePartial[T]": - """ - If you call this function in the same way that you would call the constructor, it will store the arguments - as you expect. For example EditablePartial.from_call(Fraction, 1, 3)() == Fraction(1, 3) - """ - return EditablePartial(func=func, args=list(args), kwargs=kwargs) - - @property - def name(self): - return self.kwargs["name"] - - def __call__(self) -> T: - """ - Evaluate the partial and return the result - """ - args = self.args.copy() - kwargs = self.kwargs.copy() - - # This is a helpful hack to allow you to specify varargs parameters (e.g. *args) as keyword args (e.g. - # args=['list', 'of', 'things']) - arg_spec = inspect.getfullargspec(self.func) - if arg_spec.varargs in self.kwargs: - args += kwargs.pop(arg_spec.varargs) - - return self.func(*args, **kwargs) - - -def railroad_to_html(diagrams: List[NamedDiagram], **kwargs) -> str: - """ - Given a list of NamedDiagram, produce a single HTML string that visualises those diagrams - :params kwargs: kwargs to be passed in to the template - """ - data = [] - for diagram in diagrams: - if diagram.diagram is None: - continue - io = StringIO() - diagram.diagram.writeSvg(io.write) - title = diagram.name - if diagram.index == 0: - title += " (root)" - data.append({"title": title, "text": "", "svg": io.getvalue()}) - - return template.render(diagrams=data, **kwargs) - - -def resolve_partial(partial: "EditablePartial[T]") -> T: - """ - Recursively resolves a collection of Partials into whatever type they are - """ - if isinstance(partial, EditablePartial): - partial.args = resolve_partial(partial.args) - partial.kwargs = resolve_partial(partial.kwargs) - return partial() - elif isinstance(partial, list): - return [resolve_partial(x) for x in partial] - elif isinstance(partial, dict): - return {key: resolve_partial(x) for key, x in partial.items()} - else: - return partial - - -def to_railroad( - element: pyparsing.ParserElement, - diagram_kwargs: typing.Optional[dict] = None, - vertical: int = 3, - show_results_names: bool = False, - show_groups: bool = False, -) -> List[NamedDiagram]: - """ - Convert a pyparsing element tree into a list of diagrams. This is the recommended entrypoint to diagram - creation if you want to access the Railroad tree before it is converted to HTML - :param element: base element of the parser being diagrammed - :param diagram_kwargs: kwargs to pass to the Diagram() constructor - :param vertical: (optional) - int - limit at which number of alternatives should be - shown vertically instead of horizontally - :param show_results_names - bool to indicate whether results name annotations should be - included in the diagram - :param show_groups - bool to indicate whether groups should be highlighted with an unlabeled - surrounding box - """ - # Convert the whole tree underneath the root - lookup = ConverterState(diagram_kwargs=diagram_kwargs or {}) - _to_diagram_element( - element, - lookup=lookup, - parent=None, - vertical=vertical, - show_results_names=show_results_names, - show_groups=show_groups, - ) - - root_id = id(element) - # Convert the root if it hasn't been already - if root_id in lookup: - if not element.customName: - lookup[root_id].name = "" - lookup[root_id].mark_for_extraction(root_id, lookup, force=True) - - # Now that we're finished, we can convert from intermediate structures into Railroad elements - diags = list(lookup.diagrams.values()) - if len(diags) > 1: - # collapse out duplicate diags with the same name - seen = set() - deduped_diags = [] - for d in diags: - # don't extract SkipTo elements, they are uninformative as subdiagrams - if d.name == "...": - continue - if d.name is not None and d.name not in seen: - seen.add(d.name) - deduped_diags.append(d) - resolved = [resolve_partial(partial) for partial in deduped_diags] - else: - # special case - if just one diagram, always display it, even if - # it has no name - resolved = [resolve_partial(partial) for partial in diags] - return sorted(resolved, key=lambda diag: diag.index) - - -def _should_vertical( - specification: int, exprs: Iterable[pyparsing.ParserElement] -) -> bool: - """ - Returns true if we should return a vertical list of elements - """ - if specification is None: - return False - else: - return len(_visible_exprs(exprs)) >= specification - - -class ElementState: - """ - State recorded for an individual pyparsing Element - """ - - # Note: this should be a dataclass, but we have to support Python 3.5 - def __init__( - self, - element: pyparsing.ParserElement, - converted: EditablePartial, - parent: EditablePartial, - number: int, - name: str = None, - parent_index: typing.Optional[int] = None, - ): - #: The pyparsing element that this represents - self.element: pyparsing.ParserElement = element - #: The name of the element - self.name: typing.Optional[str] = name - #: The output Railroad element in an unconverted state - self.converted: EditablePartial = converted - #: The parent Railroad element, which we store so that we can extract this if it's duplicated - self.parent: EditablePartial = parent - #: The order in which we found this element, used for sorting diagrams if this is extracted into a diagram - self.number: int = number - #: The index of this inside its parent - self.parent_index: typing.Optional[int] = parent_index - #: If true, we should extract this out into a subdiagram - self.extract: bool = False - #: If true, all of this element's children have been filled out - self.complete: bool = False - - def mark_for_extraction( - self, el_id: int, state: "ConverterState", name: str = None, force: bool = False - ): - """ - Called when this instance has been seen twice, and thus should eventually be extracted into a sub-diagram - :param el_id: id of the element - :param state: element/diagram state tracker - :param name: name to use for this element's text - :param force: If true, force extraction now, regardless of the state of this. Only useful for extracting the - root element when we know we're finished - """ - self.extract = True - - # Set the name - if not self.name: - if name: - # Allow forcing a custom name - self.name = name - elif self.element.customName: - self.name = self.element.customName - else: - self.name = "" - - # Just because this is marked for extraction doesn't mean we can do it yet. We may have to wait for children - # to be added - # Also, if this is just a string literal etc, don't bother extracting it - if force or (self.complete and _worth_extracting(self.element)): - state.extract_into_diagram(el_id) - - -class ConverterState: - """ - Stores some state that persists between recursions into the element tree - """ - - def __init__(self, diagram_kwargs: typing.Optional[dict] = None): - #: A dictionary mapping ParserElements to state relating to them - self._element_diagram_states: Dict[int, ElementState] = {} - #: A dictionary mapping ParserElement IDs to subdiagrams generated from them - self.diagrams: Dict[int, EditablePartial[NamedDiagram]] = {} - #: The index of the next unnamed element - self.unnamed_index: int = 1 - #: The index of the next element. This is used for sorting - self.index: int = 0 - #: Shared kwargs that are used to customize the construction of diagrams - self.diagram_kwargs: dict = diagram_kwargs or {} - self.extracted_diagram_names: Set[str] = set() - - def __setitem__(self, key: int, value: ElementState): - self._element_diagram_states[key] = value - - def __getitem__(self, key: int) -> ElementState: - return self._element_diagram_states[key] - - def __delitem__(self, key: int): - del self._element_diagram_states[key] - - def __contains__(self, key: int): - return key in self._element_diagram_states - - def generate_unnamed(self) -> int: - """ - Generate a number used in the name of an otherwise unnamed diagram - """ - self.unnamed_index += 1 - return self.unnamed_index - - def generate_index(self) -> int: - """ - Generate a number used to index a diagram - """ - self.index += 1 - return self.index - - def extract_into_diagram(self, el_id: int): - """ - Used when we encounter the same token twice in the same tree. When this - happens, we replace all instances of that token with a terminal, and - create a new subdiagram for the token - """ - position = self[el_id] - - # Replace the original definition of this element with a regular block - if position.parent: - ret = EditablePartial.from_call(railroad.NonTerminal, text=position.name) - if "item" in position.parent.kwargs: - position.parent.kwargs["item"] = ret - elif "items" in position.parent.kwargs: - position.parent.kwargs["items"][position.parent_index] = ret - - # If the element we're extracting is a group, skip to its content but keep the title - if position.converted.func == railroad.Group: - content = position.converted.kwargs["item"] - else: - content = position.converted - - self.diagrams[el_id] = EditablePartial.from_call( - NamedDiagram, - name=position.name, - diagram=EditablePartial.from_call( - railroad.Diagram, content, **self.diagram_kwargs - ), - index=position.number, - ) - - del self[el_id] - - -def _worth_extracting(element: pyparsing.ParserElement) -> bool: - """ - Returns true if this element is worth having its own sub-diagram. Simply, if any of its children - themselves have children, then its complex enough to extract - """ - children = element.recurse() - return any(child.recurse() for child in children) - - -def _apply_diagram_item_enhancements(fn): - """ - decorator to ensure enhancements to a diagram item (such as results name annotations) - get applied on return from _to_diagram_element (we do this since there are several - returns in _to_diagram_element) - """ - - def _inner( - element: pyparsing.ParserElement, - parent: typing.Optional[EditablePartial], - lookup: ConverterState = None, - vertical: int = None, - index: int = 0, - name_hint: str = None, - show_results_names: bool = False, - show_groups: bool = False, - ) -> typing.Optional[EditablePartial]: - - ret = fn( - element, - parent, - lookup, - vertical, - index, - name_hint, - show_results_names, - show_groups, - ) - - # apply annotation for results name, if present - if show_results_names and ret is not None: - element_results_name = element.resultsName - if element_results_name: - # add "*" to indicate if this is a "list all results" name - element_results_name += "" if element.modalResults else "*" - ret = EditablePartial.from_call( - railroad.Group, item=ret, label=element_results_name - ) - - return ret - - return _inner - - -def _visible_exprs(exprs: Iterable[pyparsing.ParserElement]): - non_diagramming_exprs = ( - pyparsing.ParseElementEnhance, - pyparsing.PositionToken, - pyparsing.And._ErrorStop, - ) - return [ - e - for e in exprs - if not (e.customName or e.resultsName or isinstance(e, non_diagramming_exprs)) - ] - - -@_apply_diagram_item_enhancements -def _to_diagram_element( - element: pyparsing.ParserElement, - parent: typing.Optional[EditablePartial], - lookup: ConverterState = None, - vertical: int = None, - index: int = 0, - name_hint: str = None, - show_results_names: bool = False, - show_groups: bool = False, -) -> typing.Optional[EditablePartial]: - """ - Recursively converts a PyParsing Element to a railroad Element - :param lookup: The shared converter state that keeps track of useful things - :param index: The index of this element within the parent - :param parent: The parent of this element in the output tree - :param vertical: Controls at what point we make a list of elements vertical. If this is an integer (the default), - it sets the threshold of the number of items before we go vertical. If True, always go vertical, if False, never - do so - :param name_hint: If provided, this will override the generated name - :param show_results_names: bool flag indicating whether to add annotations for results names - :returns: The converted version of the input element, but as a Partial that hasn't yet been constructed - :param show_groups: bool flag indicating whether to show groups using bounding box - """ - exprs = element.recurse() - name = name_hint or element.customName or element.__class__.__name__ - - # Python's id() is used to provide a unique identifier for elements - el_id = id(element) - - element_results_name = element.resultsName - - # Here we basically bypass processing certain wrapper elements if they contribute nothing to the diagram - if not element.customName: - if isinstance( - element, - ( - # pyparsing.TokenConverter, - # pyparsing.Forward, - pyparsing.Located, - ), - ): - # However, if this element has a useful custom name, and its child does not, we can pass it on to the child - if exprs: - if not exprs[0].customName: - propagated_name = name - else: - propagated_name = None - - return _to_diagram_element( - element.expr, - parent=parent, - lookup=lookup, - vertical=vertical, - index=index, - name_hint=propagated_name, - show_results_names=show_results_names, - show_groups=show_groups, - ) - - # If the element isn't worth extracting, we always treat it as the first time we say it - if _worth_extracting(element): - if el_id in lookup: - # If we've seen this element exactly once before, we are only just now finding out that it's a duplicate, - # so we have to extract it into a new diagram. - looked_up = lookup[el_id] - looked_up.mark_for_extraction(el_id, lookup, name=name_hint) - ret = EditablePartial.from_call(railroad.NonTerminal, text=looked_up.name) - return ret - - elif el_id in lookup.diagrams: - # If we have seen the element at least twice before, and have already extracted it into a subdiagram, we - # just put in a marker element that refers to the sub-diagram - ret = EditablePartial.from_call( - railroad.NonTerminal, text=lookup.diagrams[el_id].kwargs["name"] - ) - return ret - - # Recursively convert child elements - # Here we find the most relevant Railroad element for matching pyparsing Element - # We use ``items=[]`` here to hold the place for where the child elements will go once created - if isinstance(element, pyparsing.And): - # detect And's created with ``expr*N`` notation - for these use a OneOrMore with a repeat - # (all will have the same name, and resultsName) - if not exprs: - return None - if len(set((e.name, e.resultsName) for e in exprs)) == 1: - ret = EditablePartial.from_call( - railroad.OneOrMore, item="", repeat=str(len(exprs)) - ) - elif _should_vertical(vertical, exprs): - ret = EditablePartial.from_call(railroad.Stack, items=[]) - else: - ret = EditablePartial.from_call(railroad.Sequence, items=[]) - elif isinstance(element, (pyparsing.Or, pyparsing.MatchFirst)): - if not exprs: - return None - if _should_vertical(vertical, exprs): - ret = EditablePartial.from_call(railroad.Choice, 0, items=[]) - else: - ret = EditablePartial.from_call(railroad.HorizontalChoice, items=[]) - elif isinstance(element, pyparsing.Each): - if not exprs: - return None - ret = EditablePartial.from_call(EachItem, items=[]) - elif isinstance(element, pyparsing.NotAny): - ret = EditablePartial.from_call(AnnotatedItem, label="NOT", item="") - elif isinstance(element, pyparsing.FollowedBy): - ret = EditablePartial.from_call(AnnotatedItem, label="LOOKAHEAD", item="") - elif isinstance(element, pyparsing.PrecededBy): - ret = EditablePartial.from_call(AnnotatedItem, label="LOOKBEHIND", item="") - elif isinstance(element, pyparsing.Group): - if show_groups: - ret = EditablePartial.from_call(AnnotatedItem, label="", item="") - else: - ret = EditablePartial.from_call(railroad.Group, label="", item="") - elif isinstance(element, pyparsing.TokenConverter): - ret = EditablePartial.from_call( - AnnotatedItem, label=type(element).__name__.lower(), item="" - ) - elif isinstance(element, pyparsing.Opt): - ret = EditablePartial.from_call(railroad.Optional, item="") - elif isinstance(element, pyparsing.OneOrMore): - ret = EditablePartial.from_call(railroad.OneOrMore, item="") - elif isinstance(element, pyparsing.ZeroOrMore): - ret = EditablePartial.from_call(railroad.ZeroOrMore, item="") - elif isinstance(element, pyparsing.Group): - ret = EditablePartial.from_call( - railroad.Group, item=None, label=element_results_name - ) - elif isinstance(element, pyparsing.Empty) and not element.customName: - # Skip unnamed "Empty" elements - ret = None - elif len(exprs) > 1: - ret = EditablePartial.from_call(railroad.Sequence, items=[]) - elif len(exprs) > 0 and not element_results_name: - ret = EditablePartial.from_call(railroad.Group, item="", label=name) - else: - terminal = EditablePartial.from_call(railroad.Terminal, element.defaultName) - ret = terminal - - if ret is None: - return - - # Indicate this element's position in the tree so we can extract it if necessary - lookup[el_id] = ElementState( - element=element, - converted=ret, - parent=parent, - parent_index=index, - number=lookup.generate_index(), - ) - if element.customName: - lookup[el_id].mark_for_extraction(el_id, lookup, element.customName) - - i = 0 - for expr in exprs: - # Add a placeholder index in case we have to extract the child before we even add it to the parent - if "items" in ret.kwargs: - ret.kwargs["items"].insert(i, None) - - item = _to_diagram_element( - expr, - parent=ret, - lookup=lookup, - vertical=vertical, - index=i, - show_results_names=show_results_names, - show_groups=show_groups, - ) - - # Some elements don't need to be shown in the diagram - if item is not None: - if "item" in ret.kwargs: - ret.kwargs["item"] = item - elif "items" in ret.kwargs: - # If we've already extracted the child, don't touch this index, since it's occupied by a nonterminal - ret.kwargs["items"][i] = item - i += 1 - elif "items" in ret.kwargs: - # If we're supposed to skip this element, remove it from the parent - del ret.kwargs["items"][i] - - # If all this items children are none, skip this item - if ret and ( - ("items" in ret.kwargs and len(ret.kwargs["items"]) == 0) - or ("item" in ret.kwargs and ret.kwargs["item"] is None) - ): - ret = EditablePartial.from_call(railroad.Terminal, name) - - # Mark this element as "complete", ie it has all of its children - if el_id in lookup: - lookup[el_id].complete = True - - if el_id in lookup and lookup[el_id].extract and lookup[el_id].complete: - lookup.extract_into_diagram(el_id) - if ret is not None: - ret = EditablePartial.from_call( - railroad.NonTerminal, text=lookup.diagrams[el_id].kwargs["name"] - ) - - return ret diff --git a/spaces/tjburns/ask_marcus_aurelius/.venv/lib/python3.10/site-packages/pip/_vendor/requests/help.py b/spaces/tjburns/ask_marcus_aurelius/.venv/lib/python3.10/site-packages/pip/_vendor/requests/help.py deleted file mode 100644 index 2d292c2f062cd80cd108aac503eae7b635ceec8d..0000000000000000000000000000000000000000 --- a/spaces/tjburns/ask_marcus_aurelius/.venv/lib/python3.10/site-packages/pip/_vendor/requests/help.py +++ /dev/null @@ -1,131 +0,0 @@ -"""Module containing bug report helper(s).""" - -import json -import platform -import ssl -import sys - -from pip._vendor import idna -from pip._vendor import urllib3 - -from . import __version__ as requests_version - -charset_normalizer = None - -try: - from pip._vendor import chardet -except ImportError: - chardet = None - -try: - from pip._vendor.urllib3.contrib import pyopenssl -except ImportError: - pyopenssl = None - OpenSSL = None - cryptography = None -else: - import cryptography - import OpenSSL - - -def _implementation(): - """Return a dict with the Python implementation and version. - - Provide both the name and the version of the Python implementation - currently running. For example, on CPython 3.10.3 it will return - {'name': 'CPython', 'version': '3.10.3'}. - - This function works best on CPython and PyPy: in particular, it probably - doesn't work for Jython or IronPython. Future investigation should be done - to work out the correct shape of the code for those platforms. - """ - implementation = platform.python_implementation() - - if implementation == "CPython": - implementation_version = platform.python_version() - elif implementation == "PyPy": - implementation_version = "{}.{}.{}".format( - sys.pypy_version_info.major, - sys.pypy_version_info.minor, - sys.pypy_version_info.micro, - ) - if sys.pypy_version_info.releaselevel != "final": - implementation_version = "".join( - [implementation_version, sys.pypy_version_info.releaselevel] - ) - elif implementation == "Jython": - implementation_version = platform.python_version() # Complete Guess - elif implementation == "IronPython": - implementation_version = platform.python_version() # Complete Guess - else: - implementation_version = "Unknown" - - return {"name": implementation, "version": implementation_version} - - -def info(): - """Generate information for a bug report.""" - try: - platform_info = { - "system": platform.system(), - "release": platform.release(), - } - except OSError: - platform_info = { - "system": "Unknown", - "release": "Unknown", - } - - implementation_info = _implementation() - urllib3_info = {"version": urllib3.__version__} - charset_normalizer_info = {"version": None} - chardet_info = {"version": None} - if charset_normalizer: - charset_normalizer_info = {"version": charset_normalizer.__version__} - if chardet: - chardet_info = {"version": chardet.__version__} - - pyopenssl_info = { - "version": None, - "openssl_version": "", - } - if OpenSSL: - pyopenssl_info = { - "version": OpenSSL.__version__, - "openssl_version": f"{OpenSSL.SSL.OPENSSL_VERSION_NUMBER:x}", - } - cryptography_info = { - "version": getattr(cryptography, "__version__", ""), - } - idna_info = { - "version": getattr(idna, "__version__", ""), - } - - system_ssl = ssl.OPENSSL_VERSION_NUMBER - system_ssl_info = {"version": f"{system_ssl:x}" if system_ssl is not None else ""} - - return { - "platform": platform_info, - "implementation": implementation_info, - "system_ssl": system_ssl_info, - "using_pyopenssl": pyopenssl is not None, - "using_charset_normalizer": chardet is None, - "pyOpenSSL": pyopenssl_info, - "urllib3": urllib3_info, - "chardet": chardet_info, - "charset_normalizer": charset_normalizer_info, - "cryptography": cryptography_info, - "idna": idna_info, - "requests": { - "version": requests_version, - }, - } - - -def main(): - """Pretty-print the bug information as JSON.""" - print(json.dumps(info(), sort_keys=True, indent=2)) - - -if __name__ == "__main__": - main() diff --git a/spaces/tomofi/MMOCR/docs/zh_cn/deployment.md b/spaces/tomofi/MMOCR/docs/zh_cn/deployment.md deleted file mode 100644 index e4eb3fb6d7f79f03b49ee8fd0188b2e051444461..0000000000000000000000000000000000000000 --- a/spaces/tomofi/MMOCR/docs/zh_cn/deployment.md +++ /dev/null @@ -1,309 +0,0 @@ -# 部署 - -我们在 `tools/deployment` 目录下提供了一些部署工具。 - -## 转换至 ONNX (试验性的) - -我们提供了将模型转换至 [ONNX](https://github.com/onnx/onnx) 格式的脚本。转换后的模型可以使用诸如 [Netron](https://github.com/lutzroeder/netron) 的工具可视化。 此外,我们也支持比较 PyTorch 和 ONNX 模型的输出结果。 - -```bash -python tools/deployment/pytorch2onnx.py - ${MODEL_CONFIG_PATH} \ - ${MODEL_CKPT_PATH} \ - ${MODEL_TYPE} \ - ${IMAGE_PATH} \ - --output-file ${OUTPUT_FILE} \ - --device-id ${DEVICE_ID} \ - --opset-version ${OPSET_VERSION} \ - --verify \ - --verbose \ - --show \ - --dynamic-export -``` - -参数说明: - -| 参数 | 类型 | 描述 | -| ------------------ | -------------- | ------------------------------------------------------------ | -| `model_config` | str | 模型配置文件的路径。 | -| `model_ckpt` | str | 模型权重文件的路径。 | -| `model_type` | 'recog', 'det' | 配置文件对应的模型类型。 | -| `image_path` | str | 输入图片的路径。 | -| `--output-file` | str | 输出的 ONNX 模型路径。 默认为 `tmp.onnx`。 | -| `--device-id` | int | 使用哪块 GPU。默认为0。 | -| `--opset-version` | int | ONNX 操作集版本。默认为11。 | -| `--verify` | bool | 决定是否验证输出模型的正确性。默认为 `False`。 | -| `--verbose` | bool | 决定是否打印导出模型的结构,默认为 `False`。 | -| `--show` | bool | 决定是否可视化 ONNXRuntime 和 PyTorch 的输出。默认为 `False`。 | -| `--dynamic-export` | bool | 决定是否导出有动态输入和输出尺寸的 ONNX 模型。默认为 `False`。 | - -:::{note} - 这个工具仍然是试验性的。一些定制的操作没有被支持,并且我们目前仅支持一部分的文本检测和文本识别算法。 -::: - -### 支持导出到 ONNX 的模型列表 - -下表列出的模型可以保证导出到 ONNX 并且可以在 ONNX Runtime 下运行。 - -| 模型 | 配置 | 动态尺寸 | 批推理 | 注 | -|:------:|:------------------------------------------------------------------------------------------------------------------------------------------------:|:-------------:|:---------------:|:----:| -| DBNet | [dbnet_r18_fpnc_1200e_icdar2015.py](https://github.com/open-mmlab/mmocr/blob/main/configs/textdet/dbnet/dbnet_r18_fpnc_1200e_icdar2015.py) | Y | N | | -| PSENet | [psenet_r50_fpnf_600e_ctw1500.py](https://github.com/open-mmlab/mmocr/blob/main/configs/textdet/psenet/psenet_r50_fpnf_600e_ctw1500.py) | Y | Y | | -| PSENet | [psenet_r50_fpnf_600e_icdar2015.py](https://github.com/open-mmlab/mmocr/blob/main/configs/textdet/psenet/psenet_r50_fpnf_600e_icdar2015.py) | Y | Y | | -| PANet | [panet_r18_fpem_ffm_600e_ctw1500.py](https://github.com/open-mmlab/mmocr/blob/main/configs/textdet/panet/panet_r18_fpem_ffm_600e_ctw1500.py) | Y | Y | | -| PANet | [panet_r18_fpem_ffm_600e_icdar2015.py](https://github.com/open-mmlab/mmocr/blob/main/configs/textdet/panet/panet_r18_fpem_ffm_600e_icdar2015.py) | Y | Y | | -| CRNN | [crnn_academic_dataset.py](https://github.com/open-mmlab/mmocr/blob/main/configs/textrecog/crnn/crnn_academic_dataset.py) | Y | Y | CRNN 仅接受高度为32的输入 | - -:::{note} -- *以上所有模型测试基于 PyTorch==1.8.1,onnxruntime==1.7.0 进行* -- 如果你在上述模型中遇到问题,请创建一个issue,我们会尽快处理。 -- 因为这个特性是试验性的,可能变动很快,请尽量使用最新版的 `mmcv` 和 `mmocr` 尝试。 -::: - -## ONNX 转 TensorRT (试验性的) - -我们也提供了从 [ONNX](https://github.com/onnx/onnx) 模型转换至 [TensorRT](https://github.com/NVIDIA/TensorRT) 格式的脚本。另外,我们支持比较 ONNX 和 TensorRT 模型的输出结果。 - - -```bash -python tools/deployment/onnx2tensorrt.py - ${MODEL_CONFIG_PATH} \ - ${MODEL_TYPE} \ - ${IMAGE_PATH} \ - ${ONNX_FILE} \ - --trt-file ${OUT_TENSORRT} \ - --max-shape INT INT INT INT \ - --min-shape INT INT INT INT \ - --workspace-size INT \ - --fp16 \ - --verify \ - --show \ - --verbose -``` - -参数说明: - -| 参数 | 类型 | 描述 | -| ------------------ | -------------- | ------------------------------------------------------------ | -| `model_config` | str | 模型配置文件的路径。 | -| `model_type` | 'recog', 'det' | 配置文件对应的模型类型。 | -| `image_path` | str | 输入图片的路径。 | -| `onnx_file` | str | 输入的 ONNX 文件路径。 | -| `--trt-file` | str | 输出的 TensorRT 模型路径。默认为 `tmp.trt`。 | -| `--max-shape` | int * 4 | 模型输入的最大尺寸。 | -| `--min-shape` | int * 4 | 模型输入的最小尺寸。 | -| `--workspace-size` | int | 最大工作空间大小,单位为 GiB。默认为1。 | -| `--fp16` | bool | 决定是否输出 fp16 模式的 TensorRT 模型。默认为 `False`。 | -| `--verify` | bool | 决定是否验证输出模型的正确性。默认为 `False`。 | -| `--show` | bool | 决定是否可视化 ONNX 和 TensorRT 的输出。默认为 `False`。 | -| `--verbose` | bool | 决定是否在创建 TensorRT 引擎时打印日志信息。默认为 `False`。 | - -:::{note} - 这个工具仍然是试验性的。一些定制的操作模型没有被支持。我们目前仅支持一部的文本检测和文本识别算法。 -::: - -### 支持导出到 TensorRT 的模型列表 - -下表列出的模型可以保证导出到 TensorRT 引擎并且可以在 TensorRT 下运行。 - -| 模型 | 配置 | 动态尺寸 | 批推理 | 注 | -|:------:|:------------------------------------------------------------------------------------------------------------------------------------------------:|:-------------:|:---------------:|:----:| -| DBNet | [dbnet_r18_fpnc_1200e_icdar2015.py](https://github.com/open-mmlab/mmocr/blob/main/configs/textdet/dbnet/dbnet_r18_fpnc_1200e_icdar2015.py) | Y | N | | -| PSENet | [psenet_r50_fpnf_600e_ctw1500.py](https://github.com/open-mmlab/mmocr/blob/main/configs/textdet/psenet/psenet_r50_fpnf_600e_ctw1500.py) | Y | Y | | -| PSENet | [psenet_r50_fpnf_600e_icdar2015.py](https://github.com/open-mmlab/mmocr/blob/main/configs/textdet/psenet/psenet_r50_fpnf_600e_icdar2015.py) | Y | Y | | -| PANet | [panet_r18_fpem_ffm_600e_ctw1500.py](https://github.com/open-mmlab/mmocr/blob/main/configs/textdet/panet/panet_r18_fpem_ffm_600e_ctw1500.py) | Y | Y | | -| PANet | [panet_r18_fpem_ffm_600e_icdar2015.py](https://github.com/open-mmlab/mmocr/blob/main/configs/textdet/panet/panet_r18_fpem_ffm_600e_icdar2015.py) | Y | Y | | -| CRNN | [crnn_academic_dataset.py](https://github.com/open-mmlab/mmocr/blob/main/configs/textrecog/crnn/crnn_academic_dataset.py) | Y | Y | CRNN 仅接受高度为32的输入 | - -:::{note} -- *以上所有模型测试基于 PyTorch==1.8.1,onnxruntime==1.7.0,tensorrt==7.2.1.6 进行* -- 如果你在上述模型中遇到问题,请创建一个 issue,我们会尽快处理。 -- 因为这个特性是试验性的,可能变动很快,请尽量使用最新版的 `mmcv` 和 `mmocr` 尝试。 -::: - - -## 评估 ONNX 和 TensorRT 模型(试验性的) - -我们在 `tools/deployment/deploy_test.py ` 中提供了评估 TensorRT 和 ONNX 模型的方法。 - -### 前提条件 -在评估 ONNX 和 TensorRT 模型之前,首先需要安装 ONNX,ONNXRuntime 和 TensorRT。根据 [ONNXRuntime in mmcv](https://mmcv.readthedocs.io/en/latest/onnxruntime_op.html) 和 [TensorRT plugin in mmcv](https://github.com/open-mmlab/mmcv/blob/master/docs/tensorrt_plugin.md) 安装 ONNXRuntime 定制操作和 TensorRT 插件。 - -### 使用 - -```bash -python tools/deploy_test.py \ - ${CONFIG_FILE} \ - ${MODEL_PATH} \ - ${MODEL_TYPE} \ - ${BACKEND} \ - --eval ${METRICS} \ - --device ${DEVICE} -``` - -### 参数说明 - -| 参数 | 类型 | 描述 | -| -------------- | ------------------------- | ------------------------------------------------------ | -| `model_config` | str | 模型配置文件的路径。 | -| `model_file` | str | TensorRT 或 ONNX 模型路径。 | -| `model_type` | 'recog', 'det' | 部署检测还是识别模型。 | -| `backend` | 'TensorRT', 'ONNXRuntime' | 测试后端。 | -| `--eval` | 'acc', 'hmean-iou' | 评估指标。“acc”用于识别模型,“hmean-iou”用于检测模型。 | -| `--device` | str | 评估使用的设备。默认为 `cuda:0`。 | - -## 结果和模型 - -<table class="tg"> -<thead> - <tr> - <th class="tg-9wq8">模型</th> - <th class="tg-9wq8">配置</th> - <th class="tg-9wq8">数据集</th> - <th class="tg-9wq8">指标</th> - <th class="tg-9wq8">PyTorch</th> - <th class="tg-9wq8">ONNX Runtime</th> - <th class="tg-9wq8">TensorRT FP32</th> - <th class="tg-9wq8">TensorRT FP16</th> - </tr> -</thead> -<tbody> - <tr> - <td class="tg-9wq8" rowspan="3">DBNet</td> - <td class="tg-9wq8" rowspan="3">dbnet_r18_fpnc_1200e_icdar2015.py<br></td> - <td class="tg-9wq8" rowspan="3">icdar2015</td> - <td class="tg-9wq8"><span style="font-style:normal">Recall</span><br></td> - <td class="tg-9wq8">0.731</td> - <td class="tg-9wq8">0.731</td> - <td class="tg-9wq8">0.678</td> - <td class="tg-9wq8">0.679</td> - </tr> - <tr> - <td class="tg-9wq8">Precision</td> - <td class="tg-9wq8"><span style="font-weight:400;font-style:normal">0.871</span></td> - <td class="tg-9wq8">0.871</td> - <td class="tg-9wq8">0.844</td> - <td class="tg-9wq8">0.842</td> - </tr> - <tr> - <td class="tg-9wq8"><span style="font-style:normal">Hmean</span></td> - <td class="tg-9wq8"><span style="font-weight:400;font-style:normal">0.795</span></td> - <td class="tg-9wq8">0.795</td> - <td class="tg-9wq8">0.752</td> - <td class="tg-9wq8">0.752</td> - </tr> - <tr> - <td class="tg-9wq8" rowspan="3">DBNet*</td> - <td class="tg-9wq8" rowspan="3">dbnet_r18_fpnc_1200e_icdar2015.py<br></td> - <td class="tg-9wq8" rowspan="3">icdar2015</td> - <td class="tg-9wq8"><span style="font-style:normal">Recall</span><br></td> - <td class="tg-9wq8">0.720</td> - <td class="tg-9wq8">0.720</td> - <td class="tg-9wq8">0.720</td> - <td class="tg-9wq8">0.718</td> - </tr> - <tr> - <td class="tg-9wq8">Precision</td> - <td class="tg-9wq8"><span style="font-weight:400;font-style:normal">0.868</span></td> - <td class="tg-9wq8"><span style="font-weight:400;font-style:normal">0.868</span></td> - <td class="tg-9wq8"><span style="font-weight:400;font-style:normal">0.868</span></td> - <td class="tg-9wq8">0.868</td> - </tr> - <tr> - <td class="tg-9wq8"><span style="font-style:normal">Hmean</span></td> - <td class="tg-9wq8"><span style="font-weight:400;font-style:normal">0.787</span></td> - <td class="tg-9wq8"><span style="font-weight:400;font-style:normal">0.787</span></td> - <td class="tg-9wq8"><span style="font-weight:400;font-style:normal">0.787</span></td> - <td class="tg-9wq8">0.786</td> - </tr> - <tr> - <td class="tg-9wq8" rowspan="3">PSENet</td> - <td class="tg-9wq8" rowspan="3">psenet_r50_fpnf_600e_icdar2015.py<br></td> - <td class="tg-9wq8" rowspan="3">icdar2015</td> - <td class="tg-9wq8"><span style="font-style:normal">Recall</span><br></td> - <td class="tg-9wq8">0.753</td> - <td class="tg-9wq8">0.753</td> - <td class="tg-9wq8">0.753</td> - <td class="tg-9wq8">0.752</td> - </tr> - <tr> - <td class="tg-9wq8">Precision</td> - <td class="tg-9wq8">0.867</td> - <td class="tg-9wq8">0.867</td> - <td class="tg-9wq8">0.867</td> - <td class="tg-9wq8">0.867</td> - </tr> - <tr> - <td class="tg-9wq8"><span style="font-style:normal">Hmean</span></td> - <td class="tg-9wq8">0.806</td> - <td class="tg-9wq8">0.806</td> - <td class="tg-9wq8">0.806</td> - <td class="tg-9wq8">0.805</td> - </tr> - <tr> - <td class="tg-9wq8" rowspan="3">PANet</td> - <td class="tg-9wq8" rowspan="3">panet_r18_fpem_ffm_600e_icdar2015.py<br></td> - <td class="tg-9wq8" rowspan="3">icdar2015</td> - <td class="tg-9wq8">Recall<br></td> - <td class="tg-9wq8">0.740</td> - <td class="tg-9wq8">0.740</td> - <td class="tg-9wq8">0.687</td> - <td class="tg-9wq8">N/A</td> - </tr> - <tr> - <td class="tg-9wq8">Precision</td> - <td class="tg-9wq8">0.860</td> - <td class="tg-9wq8">0.860</td> - <td class="tg-9wq8">0.815</td> - <td class="tg-9wq8">N/A</td> - </tr> - <tr> - <td class="tg-9wq8">Hmean</td> - <td class="tg-9wq8">0.796</td> - <td class="tg-9wq8">0.796</td> - <td class="tg-9wq8">0.746</td> - <td class="tg-9wq8">N/A</td> - </tr> - <tr> - <td class="tg-nrix" rowspan="3">PANet*</td> - <td class="tg-nrix" rowspan="3">panet_r18_fpem_ffm_600e_icdar2015.py<br></td> - <td class="tg-nrix" rowspan="3">icdar2015</td> - <td class="tg-nrix">Recall<br></td> - <td class="tg-nrix">0.736</td> - <td class="tg-nrix">0.736</td> - <td class="tg-nrix">0.736</td> - <td class="tg-nrix">N/A</td> - </tr> - <tr> - <td class="tg-nrix">Precision</td> - <td class="tg-nrix">0.857</td> - <td class="tg-nrix">0.857</td> - <td class="tg-nrix">0.857</td> - <td class="tg-nrix">N/A</td> - </tr> - <tr> - <td class="tg-nrix">Hmean</td> - <td class="tg-nrix">0.792</td> - <td class="tg-nrix">0.792</td> - <td class="tg-nrix">0.792</td> - <td class="tg-nrix">N/A</td> - </tr> - <tr> - <td class="tg-9wq8">CRNN</td> - <td class="tg-9wq8">crnn_academic_dataset.py<br></td> - <td class="tg-9wq8">IIIT5K</td> - <td class="tg-9wq8">Acc</td> - <td class="tg-9wq8">0.806</td> - <td class="tg-9wq8">0.806</td> - <td class="tg-9wq8">0.806</td> - <td class="tg-9wq8">0.806</td> - </tr> -</tbody> -</table> - -:::{note} -- TensorRT 上采样(upsample)操作和 PyTorch 有一点不同。对于 DBNet 和 PANet,我们建议把上采样的最近邻 (nearest) 模式代替成双线性 (bilinear) 模式。 PANet 的替换处在[这里](https://github.com/open-mmlab/mmocr/blob/50a25e718a028c8b9d96f497e241767dbe9617d1/mmocr/models/textdet/necks/fpem_ffm.py#L33) ,DBNet 的替换处在[这里](https://github.com/open-mmlab/mmocr/blob/50a25e718a028c8b9d96f497e241767dbe9617d1/mmocr/models/textdet/necks/fpn_cat.py#L111)和[这里](https://github.com/open-mmlab/mmocr/blob/50a25e718a028c8b9d96f497e241767dbe9617d1/mmocr/models/textdet/necks/fpn_cat.py#L121)。如在上表中显示的,带有标记*的网络的上采样模式均被改变了。 -- 注意到,相比最近邻模式,使用更改后的上采样模式会降低性能。然而,默认网络的权重是通过最近邻模式训练的。为了保持在部署中的最佳性能,建议在训练和 TensorRT 部署中使用双线性模式。 -- 所有 ONNX 和 TensorRT 模型都使用数据集上的动态尺寸进行评估,图像根据原始配置文件进行预处理。 -- 这个工具仍然是试验性的。一些定制的操作模型没有被支持。并且我们目前仅支持一部分的文本检测和文本识别算法。 -::: diff --git a/spaces/tomofi/MMOCR/tools/deployment/mmocr2torchserve.py b/spaces/tomofi/MMOCR/tools/deployment/mmocr2torchserve.py deleted file mode 100644 index 9f9e2f470f2dbc476f1c6bce114723ed5b612715..0000000000000000000000000000000000000000 --- a/spaces/tomofi/MMOCR/tools/deployment/mmocr2torchserve.py +++ /dev/null @@ -1,110 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from argparse import ArgumentParser, Namespace -from pathlib import Path -from tempfile import TemporaryDirectory - -import mmcv - -try: - from model_archiver.model_packaging import package_model - from model_archiver.model_packaging_utils import ModelExportUtils -except ImportError: - package_model = None - - -def mmocr2torchserve( - config_file: str, - checkpoint_file: str, - output_folder: str, - model_name: str, - model_version: str = '1.0', - force: bool = False, -): - """Converts MMOCR model (config + checkpoint) to TorchServe `.mar`. - - Args: - config_file: - In MMOCR config format. - The contents vary for each task repository. - checkpoint_file: - In MMOCR checkpoint format. - The contents vary for each task repository. - output_folder: - Folder where `{model_name}.mar` will be created. - The file created will be in TorchServe archive format. - model_name: - If not None, used for naming the `{model_name}.mar` file - that will be created under `output_folder`. - If None, `{Path(checkpoint_file).stem}` will be used. - model_version: - Model's version. - force: - If True, if there is an existing `{model_name}.mar` - file under `output_folder` it will be overwritten. - """ - mmcv.mkdir_or_exist(output_folder) - - config = mmcv.Config.fromfile(config_file) - - with TemporaryDirectory() as tmpdir: - config.dump(f'{tmpdir}/config.py') - - args = Namespace( - **{ - 'model_file': f'{tmpdir}/config.py', - 'serialized_file': checkpoint_file, - 'handler': f'{Path(__file__).parent}/mmocr_handler.py', - 'model_name': model_name or Path(checkpoint_file).stem, - 'version': model_version, - 'export_path': output_folder, - 'force': force, - 'requirements_file': None, - 'extra_files': None, - 'runtime': 'python', - 'archive_format': 'default' - }) - manifest = ModelExportUtils.generate_manifest_json(args) - package_model(args, manifest) - - -def parse_args(): - parser = ArgumentParser( - description='Convert MMOCR models to TorchServe `.mar` format.') - parser.add_argument('config', type=str, help='config file path') - parser.add_argument('checkpoint', type=str, help='checkpoint file path') - parser.add_argument( - '--output-folder', - type=str, - required=True, - help='Folder where `{model_name}.mar` will be created.') - parser.add_argument( - '--model-name', - type=str, - default=None, - help='If not None, used for naming the `{model_name}.mar`' - 'file that will be created under `output_folder`.' - 'If None, `{Path(checkpoint_file).stem}` will be used.') - parser.add_argument( - '--model-version', - type=str, - default='1.0', - help='Number used for versioning.') - parser.add_argument( - '-f', - '--force', - action='store_true', - help='overwrite the existing `{model_name}.mar`') - args = parser.parse_args() - - return args - - -if __name__ == '__main__': - args = parse_args() - - if package_model is None: - raise ImportError('`torch-model-archiver` is required.' - 'Try: pip install torch-model-archiver') - - mmocr2torchserve(args.config, args.checkpoint, args.output_folder, - args.model_name, args.model_version, args.force) diff --git a/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/mmdet/version.py b/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/mmdet/version.py deleted file mode 100644 index a3b741aed16212ad1dee277d519b259ae3184b19..0000000000000000000000000000000000000000 --- a/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/mmdet/version.py +++ /dev/null @@ -1,19 +0,0 @@ -# Copyright (c) Open-MMLab. All rights reserved. - -__version__ = '2.11.0' -short_version = __version__ - - -def parse_version_info(version_str): - version_info = [] - for x in version_str.split('.'): - if x.isdigit(): - version_info.append(int(x)) - elif x.find('rc') != -1: - patch_version = x.split('rc') - version_info.append(int(patch_version[0])) - version_info.append(f'rc{patch_version[1]}') - return tuple(version_info) - - -version_info = parse_version_info(__version__) diff --git a/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/tests/test_onnx/utils.py b/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/tests/test_onnx/utils.py deleted file mode 100644 index 89b9c13aa6cbee616bf9f3ef87f3ed3166a29f15..0000000000000000000000000000000000000000 --- a/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/tests/test_onnx/utils.py +++ /dev/null @@ -1,136 +0,0 @@ -import os -import os.path as osp -import warnings - -import numpy as np -import onnx -import onnxruntime as ort -import torch -import torch.nn as nn - -ort_custom_op_path = '' -try: - from mmcv.ops import get_onnxruntime_op_path - ort_custom_op_path = get_onnxruntime_op_path() -except (ImportError, ModuleNotFoundError): - warnings.warn('If input model has custom op from mmcv, \ - you may have to build mmcv with ONNXRuntime from source.') - - -class WrapFunction(nn.Module): - """Wrap the function to be tested for torch.onnx.export tracking.""" - - def __init__(self, wrapped_function): - super(WrapFunction, self).__init__() - self.wrapped_function = wrapped_function - - def forward(self, *args, **kwargs): - return self.wrapped_function(*args, **kwargs) - - -def ort_validate(model, feats, onnx_io='tmp.onnx'): - """Validate the output of the onnxruntime backend is the same as the output - generated by torch. - - Args: - model (nn.Module | function): the function of model or model - to be verified. - feats (tuple(list(torch.Tensor)) | list(torch.Tensor) | torch.Tensor): - the input of model. - onnx_io (str): the name of onnx output file. - """ - # if model is not an instance of nn.Module, then it is a normal - # function and it should be wrapped. - if isinstance(model, nn.Module): - wrap_model = model - else: - wrap_model = WrapFunction(model) - wrap_model.cpu().eval() - with torch.no_grad(): - torch.onnx.export( - wrap_model, - feats, - onnx_io, - export_params=True, - keep_initializers_as_inputs=True, - do_constant_folding=True, - verbose=False, - opset_version=11) - - if isinstance(feats, tuple): - ort_feats = [] - for feat in feats: - ort_feats += feat - else: - ort_feats = feats - # default model name: tmp.onnx - onnx_outputs = get_ort_model_output(ort_feats) - - # remove temp file - if osp.exists(onnx_io): - os.remove(onnx_io) - - if isinstance(feats, tuple): - torch_outputs = convert_result_list(wrap_model.forward(*feats)) - else: - torch_outputs = convert_result_list(wrap_model.forward(feats)) - torch_outputs = [ - torch_output.detach().numpy() for torch_output in torch_outputs - ] - - # match torch_outputs and onnx_outputs - for i in range(len(onnx_outputs)): - np.testing.assert_allclose( - torch_outputs[i], onnx_outputs[i], rtol=1e-03, atol=1e-05) - - -def get_ort_model_output(feat, onnx_io='tmp.onnx'): - """Run the model in onnxruntime env. - - Args: - feat (list[Tensor]): A list of tensors from torch.rand, - each is a 4D-tensor. - - Returns: - list[np.array]: onnxruntime infer result, each is a np.array - """ - - onnx_model = onnx.load(onnx_io) - onnx.checker.check_model(onnx_model) - - session_options = ort.SessionOptions() - # register custom op for onnxruntime - if osp.exists(ort_custom_op_path): - session_options.register_custom_ops_library(ort_custom_op_path) - sess = ort.InferenceSession(onnx_io, session_options) - if isinstance(feat, torch.Tensor): - onnx_outputs = sess.run(None, - {sess.get_inputs()[0].name: feat.numpy()}) - else: - onnx_outputs = sess.run(None, { - sess.get_inputs()[i].name: feat[i].numpy() - for i in range(len(feat)) - }) - return onnx_outputs - - -def convert_result_list(outputs): - """Convert the torch forward outputs containing tuple or list to a list - only containing torch.Tensor. - - Args: - output (list(Tensor) | tuple(list(Tensor) | ...): the outputs - in torch env, maybe containing nested structures such as list - or tuple. - - Returns: - list(Tensor): a list only containing torch.Tensor - """ - # recursive end condition - if isinstance(outputs, torch.Tensor): - return [outputs] - - ret = [] - for sub in outputs: - ret += convert_result_list(sub) - return ret diff --git a/spaces/trttung1610/musicgen/Dockerfile b/spaces/trttung1610/musicgen/Dockerfile deleted file mode 100644 index efc2431ec0fe674c22fe2fdb9d7045cdf6cd2748..0000000000000000000000000000000000000000 --- a/spaces/trttung1610/musicgen/Dockerfile +++ /dev/null @@ -1,26 +0,0 @@ -FROM nvidia/cuda:11.8.0-base-ubuntu22.04 - -ENV DEBIAN_FRONTEND=noninteractive \ - PYTHONUNBUFFERED=1 \ - PYTHONIOENCODING=UTF-8 -RUN --mount=type=cache,target=/var/cache/apt --mount=type=cache,target=/var/lib/apt apt update &&\ - apt install -y \ - wget \ - git \ - pkg-config \ - python3 \ - python3-pip \ - python-is-python3 \ - ffmpeg \ - libnvrtc11.2 \ - libtcmalloc-minimal4 - -RUN useradd -m -u 1000 ac -RUN --mount=type=cache,target=/root/.cache python -m pip install --upgrade pip wheel -ENV TORCH_COMMAND="pip install torch==2.0.1+cu118 torchaudio --extra-index-url https://download.pytorch.org/whl/cu118" -RUN --mount=type=cache,target=/root/.cache python -m $TORCH_COMMAND -RUN ln -s /usr/lib/x86_64-linux-gnu/libnvrtc.so.11.2 /usr/lib/x86_64-linux-gnu/libnvrtc.so -USER 1000 -RUN mkdir ~/.cache -RUN --mount=type=cache,target=/home/ac/.cache --mount=source=.,target=/home/ac/audiocraft python -m pip install -r /home/ac/audiocraft/requirements.txt -WORKDIR /home/ac/audiocraft \ No newline at end of file diff --git a/spaces/trttung1610/musicgen/app_v2.py b/spaces/trttung1610/musicgen/app_v2.py deleted file mode 100644 index 70a3dfae4ac5c02bc1d9e78b8d5d0c2139ba503b..0000000000000000000000000000000000000000 --- a/spaces/trttung1610/musicgen/app_v2.py +++ /dev/null @@ -1,1839 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. - -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -# Updated to account for UI changes from https://github.com/rkfg/audiocraft/blob/long/app.py -# also released under the MIT license. - -import argparse -from concurrent.futures import ProcessPoolExecutor -import os -from pathlib import Path -import subprocess as sp -from tempfile import NamedTemporaryFile -import time -import warnings -import glob -import re -from PIL import Image -from pydub import AudioSegment -from datetime import datetime - -import json -import shutil -import taglib -import torch -import torchaudio -import gradio as gr -import numpy as np -import typing as tp - -from audiocraft.data.audio_utils import convert_audio -from audiocraft.data.audio import audio_write -from audiocraft.models import AudioGen, MusicGen, MultiBandDiffusion -from audiocraft.utils import ui -import random, string - -version = "2.0.0a" - -theme = gr.themes.Base( - primary_hue="lime", - secondary_hue="lime", - neutral_hue="neutral", -).set( - button_primary_background_fill_hover='*primary_500', - button_primary_background_fill_hover_dark='*primary_500', - button_secondary_background_fill_hover='*primary_500', - button_secondary_background_fill_hover_dark='*primary_500' -) - -MODEL = None # Last used model -MODELS = None -UNLOAD_MODEL = False -MOVE_TO_CPU = False -IS_BATCHED = "facebook/MusicGen" in os.environ.get('SPACE_ID', '') -print(IS_BATCHED) -MAX_BATCH_SIZE = 12 -BATCHED_DURATION = 15 -INTERRUPTING = False -MBD = None -# We have to wrap subprocess call to clean a bit the log when using gr.make_waveform -_old_call = sp.call - - -def generate_random_string(length): - characters = string.ascii_letters + string.digits - return ''.join(random.choice(characters) for _ in range(length)) - - -def resize_video(input_path, output_path, target_width, target_height): - ffmpeg_cmd = [ - 'ffmpeg', - '-y', - '-i', input_path, - '-vf', f'scale={target_width}:{target_height}', - '-c:a', 'copy', - output_path - ] - sp.run(ffmpeg_cmd) - - -def _call_nostderr(*args, **kwargs): - # Avoid ffmpeg vomiting on the logs. - kwargs['stderr'] = sp.DEVNULL - kwargs['stdout'] = sp.DEVNULL - _old_call(*args, **kwargs) - - -sp.call = _call_nostderr -# Preallocating the pool of processes. -pool = ProcessPoolExecutor(4) -pool.__enter__() - - -def interrupt(): - global INTERRUPTING - INTERRUPTING = True - - -class FileCleaner: - def __init__(self, file_lifetime: float = 3600): - self.file_lifetime = file_lifetime - self.files = [] - - def add(self, path: tp.Union[str, Path]): - self._cleanup() - self.files.append((time.time(), Path(path))) - - def _cleanup(self): - now = time.time() - for time_added, path in list(self.files): - if now - time_added > self.file_lifetime: - if path.exists(): - path.unlink() - self.files.pop(0) - else: - break - - -file_cleaner = FileCleaner() - - -def make_waveform(*args, **kwargs): - # Further remove some warnings. - be = time.time() - with warnings.catch_warnings(): - warnings.simplefilter('ignore') - height = kwargs.pop('height') - width = kwargs.pop('width') - if height < 256: - height = 256 - if width < 256: - width = 256 - waveform_video = gr.make_waveform(*args, **kwargs) - out = f"{generate_random_string(12)}.mp4" - image = kwargs.get('bg_image', None) - if image is None: - resize_video(waveform_video, out, 900, 300) - else: - resize_video(waveform_video, out, width, height) - print("Make a video took", time.time() - be) - return out - - -def load_model(version='GrandaddyShmax/musicgen-melody', custom_model=None, base_model='GrandaddyShmax/musicgen-medium', gen_type="music"): - global MODEL, MODELS - print("Loading model", version) - if MODELS is None: - if version == 'GrandaddyShmax/musicgen-custom': - MODEL = MusicGen.get_pretrained(base_model) - file_path = os.path.abspath("models/" + str(custom_model) + ".pt") - MODEL.lm.load_state_dict(torch.load(file_path)) - else: - if gen_type == "music": - MODEL = MusicGen.get_pretrained(version) - elif gen_type == "audio": - MODEL = AudioGen.get_pretrained(version) - - return - - else: - t1 = time.monotonic() - if MODEL is not None: - MODEL.to('cpu') # move to cache - print("Previous model moved to CPU in %.2fs" % (time.monotonic() - t1)) - t1 = time.monotonic() - if version != 'GrandaddyShmax/musicgen-custom' and MODELS.get(version) is None: - print("Loading model %s from disk" % version) - if gen_type == "music": - result = MusicGen.get_pretrained(version) - elif gen_type == "audio": - result = AudioGen.get_pretrained(version) - MODELS[version] = result - print("Model loaded in %.2fs" % (time.monotonic() - t1)) - MODEL = result - return - result = MODELS[version].to('cuda') - print("Cached model loaded in %.2fs" % (time.monotonic() - t1)) - MODEL = result - -def get_audio_info(audio_path): - if audio_path is not None: - if audio_path.name.endswith(".wav") or audio_path.name.endswith(".mp4") or audio_path.name.endswith(".json"): - if not audio_path.name.endswith(".json"): - with taglib.File(audio_path.name, save_on_exit=False) as song: - if 'COMMENT' not in song.tags: - return "No tags found. Either the file is not generated by MusicGen+ V1.2.7 and higher or the tags are corrupted. (Discord removes metadata from mp4 and wav files, so you can't use them)" - json_string = song.tags['COMMENT'][0] - data = json.loads(json_string) - global_prompt = str("\nGlobal Prompt: " + (data['global_prompt'] if data['global_prompt'] != "" else "none")) if 'global_prompt' in data else "" - bpm = str("\nBPM: " + data['bpm']) if 'bpm' in data else "" - key = str("\nKey: " + data['key']) if 'key' in data else "" - scale = str("\nScale: " + data['scale']) if 'scale' in data else "" - prompts = str("\nPrompts: " + (data['texts'] if data['texts'] != "['']" else "none")) if 'texts' in data else "" - duration = str("\nDuration: " + data['duration']) if 'duration' in data else "" - overlap = str("\nOverlap: " + data['overlap']) if 'overlap' in data else "" - seed = str("\nSeed: " + data['seed']) if 'seed' in data else "" - audio_mode = str("\nAudio Mode: " + data['audio_mode']) if 'audio_mode' in data else "" - input_length = str("\nInput Length: " + data['input_length']) if 'input_length' in data else "" - channel = str("\nChannel: " + data['channel']) if 'channel' in data else "" - sr_select = str("\nSample Rate: " + data['sr_select']) if 'sr_select' in data else "" - gen_type = str(data['generator'] + "gen-") if 'generator' in data else "" - model = str("\nModel: " + gen_type + data['model']) if 'model' in data else "" - custom_model = str("\nCustom Model: " + data['custom_model']) if 'custom_model' in data else "" - base_model = str("\nBase Model: " + data['base_model']) if 'base_model' in data else "" - decoder = str("\nDecoder: " + data['decoder']) if 'decoder' in data else "" - topk = str("\nTopk: " + data['topk']) if 'topk' in data else "" - topp = str("\nTopp: " + data['topp']) if 'topp' in data else "" - temperature = str("\nTemperature: " + data['temperature']) if 'temperature' in data else "" - cfg_coef = str("\nClassifier Free Guidance: " + data['cfg_coef']) if 'cfg_coef' in data else "" - version = str("Version: " + data['version']) if 'version' in data else "Version: Unknown" - info = str(version + global_prompt + bpm + key + scale + prompts + duration + overlap + seed + audio_mode + input_length + channel + sr_select + model + custom_model + base_model + decoder + topk + topp + temperature + cfg_coef) - if info == "": - return "No tags found. Either the file is not generated by MusicGen+ V1.2.7 and higher or the tags are corrupted. (Discord removes metadata from mp4 and wav files, so you can't use them)" - return info - else: - with open(audio_path.name) as json_file: - data = json.load(json_file) - #if 'global_prompt' not in data: - #return "No tags found. Either the file is not generated by MusicGen+ V1.2.8a and higher or the tags are corrupted." - global_prompt = str("\nGlobal Prompt: " + (data['global_prompt'] if data['global_prompt'] != "" else "none")) if 'global_prompt' in data else "" - bpm = str("\nBPM: " + data['bpm']) if 'bpm' in data else "" - key = str("\nKey: " + data['key']) if 'key' in data else "" - scale = str("\nScale: " + data['scale']) if 'scale' in data else "" - prompts = str("\nPrompts: " + (data['texts'] if data['texts'] != "['']" else "none")) if 'texts' in data else "" - duration = str("\nDuration: " + data['duration']) if 'duration' in data else "" - overlap = str("\nOverlap: " + data['overlap']) if 'overlap' in data else "" - seed = str("\nSeed: " + data['seed']) if 'seed' in data else "" - audio_mode = str("\nAudio Mode: " + data['audio_mode']) if 'audio_mode' in data else "" - input_length = str("\nInput Length: " + data['input_length']) if 'input_length' in data else "" - channel = str("\nChannel: " + data['channel']) if 'channel' in data else "" - sr_select = str("\nSample Rate: " + data['sr_select']) if 'sr_select' in data else "" - gen_type = str(data['generator'] + "gen-") if 'generator' in data else "" - model = str("\nModel: " + gen_type + data['model']) if 'model' in data else "" - custom_model = str("\nCustom Model: " + data['custom_model']) if 'custom_model' in data else "" - base_model = str("\nBase Model: " + data['base_model']) if 'base_model' in data else "" - decoder = str("\nDecoder: " + data['decoder']) if 'decoder' in data else "" - topk = str("\nTopk: " + data['topk']) if 'topk' in data else "" - topp = str("\nTopp: " + data['topp']) if 'topp' in data else "" - temperature = str("\nTemperature: " + data['temperature']) if 'temperature' in data else "" - cfg_coef = str("\nClassifier Free Guidance: " + data['cfg_coef']) if 'cfg_coef' in data else "" - version = str("Version: " + data['version']) if 'version' in data else "Version: Unknown" - info = str(version + global_prompt + bpm + key + scale + prompts + duration + overlap + seed + audio_mode + input_length + channel + sr_select + model + custom_model + base_model + decoder + topk + topp + temperature + cfg_coef) - if info == "": - return "No tags found. Either the file is not generated by MusicGen+ V1.2.7 and higher or the tags are corrupted." - return info - else: - return "Only .wav ,.mp4 and .json files are supported" - else: - return None - - -def info_to_params(audio_path): - if audio_path is not None: - if audio_path.name.endswith(".wav") or audio_path.name.endswith(".mp4") or audio_path.name.endswith(".json"): - if not audio_path.name.endswith(".json"): - with taglib.File(audio_path.name, save_on_exit=False) as song: - if 'COMMENT' not in song.tags: - return "Default", False, "", 120, "C", "Major", "large", None, "medium", 1, "", "", "", "", "", "", "", "", "", "", 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, "sample", 10, 250, 0, 1.0, 5.0, -1, 12, "stereo", "48000" - json_string = song.tags['COMMENT'][0] - data = json.loads(json_string) - struc_prompt = (False if data['bpm'] == "none" else True) if 'bpm' in data else False - global_prompt = data['global_prompt'] if 'global_prompt' in data else "" - bpm = (120 if data['bpm'] == "none" else int(data['bpm'])) if 'bpm' in data else 120 - key = ("C" if data['key'] == "none" else data['key']) if 'key' in data else "C" - scale = ("Major" if data['scale'] == "none" else data['scale']) if 'scale' in data else "Major" - model = data['model'] if 'model' in data else "large" - custom_model = (data['custom_model'] if data['custom_model'] in get_available_models() else None) if 'custom_model' in data else None - base_model = data['base_model'] if 'base_model' in data else "medium" - decoder = data['decoder'] if 'decoder' in data else "Default" - if 'texts' not in data: - unique_prompts = 1 - text = ["", "", "", "", "", "", "", "", "", ""] - repeat = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1] - else: - s = data['texts'] - s = re.findall(r"'(.*?)'", s) - text = [] - repeat = [] - i = 0 - for elem in s: - if elem.strip(): - if i == 0 or elem != s[i-1]: - text.append(elem) - repeat.append(1) - else: - repeat[-1] += 1 - i += 1 - text.extend([""] * (10 - len(text))) - repeat.extend([1] * (10 - len(repeat))) - unique_prompts = len([t for t in text if t]) - audio_mode = ("sample" if data['audio_mode'] == "none" else data['audio_mode']) if 'audio_mode' in data else "sample" - duration = int(data['duration']) if 'duration' in data else 10 - topk = float(data['topk']) if 'topk' in data else 250 - topp = float(data['topp']) if 'topp' in data else 0 - temperature = float(data['temperature']) if 'temperature' in data else 1.0 - cfg_coef = float(data['cfg_coef']) if 'cfg_coef' in data else 5.0 - seed = int(data['seed']) if 'seed' in data else -1 - overlap = int(data['overlap']) if 'overlap' in data else 12 - channel = data['channel'] if 'channel' in data else "stereo" - sr_select = data['sr_select'] if 'sr_select' in data else "48000" - return decoder, struc_prompt, global_prompt, bpm, key, scale, model, custom_model, base_model, unique_prompts, text[0], text[1], text[2], text[3], text[4], text[5], text[6], text[7], text[8], text[9], repeat[0], repeat[1], repeat[2], repeat[3], repeat[4], repeat[5], repeat[6], repeat[7], repeat[8], repeat[9], audio_mode, duration, topk, topp, temperature, cfg_coef, seed, overlap, channel, sr_select - else: - with open(audio_path.name) as json_file: - data = json.load(json_file) - struc_prompt = (False if data['bpm'] == "none" else True) if 'bpm' in data else False - global_prompt = data['global_prompt'] if 'global_prompt' in data else "" - bpm = (120 if data['bpm'] == "none" else int(data['bpm'])) if 'bpm' in data else 120 - key = ("C" if data['key'] == "none" else data['key']) if 'key' in data else "C" - scale = ("Major" if data['scale'] == "none" else data['scale']) if 'scale' in data else "Major" - model = data['model'] if 'model' in data else "large" - custom_model = (data['custom_model'] if data['custom_model'] in get_available_models() else None) if 'custom_model' in data else None - base_model = data['base_model'] if 'base_model' in data else "medium" - decoder = data['decoder'] if 'decoder' in data else "Default" - if 'texts' not in data: - unique_prompts = 1 - text = ["", "", "", "", "", "", "", "", "", ""] - repeat = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1] - else: - s = data['texts'] - s = re.findall(r"'(.*?)'", s) - text = [] - repeat = [] - i = 0 - for elem in s: - if elem.strip(): - if i == 0 or elem != s[i-1]: - text.append(elem) - repeat.append(1) - else: - repeat[-1] += 1 - i += 1 - text.extend([""] * (10 - len(text))) - repeat.extend([1] * (10 - len(repeat))) - unique_prompts = len([t for t in text if t]) - audio_mode = ("sample" if data['audio_mode'] == "none" else data['audio_mode']) if 'audio_mode' in data else "sample" - duration = int(data['duration']) if 'duration' in data else 10 - topk = float(data['topk']) if 'topk' in data else 250 - topp = float(data['topp']) if 'topp' in data else 0 - temperature = float(data['temperature']) if 'temperature' in data else 1.0 - cfg_coef = float(data['cfg_coef']) if 'cfg_coef' in data else 5.0 - seed = int(data['seed']) if 'seed' in data else -1 - overlap = int(data['overlap']) if 'overlap' in data else 12 - channel = data['channel'] if 'channel' in data else "stereo" - sr_select = data['sr_select'] if 'sr_select' in data else "48000" - return decoder, struc_prompt, global_prompt, bpm, key, scale, model, custom_model, base_model, unique_prompts, text[0], text[1], text[2], text[3], text[4], text[5], text[6], text[7], text[8], text[9], repeat[0], repeat[1], repeat[2], repeat[3], repeat[4], repeat[5], repeat[6], repeat[7], repeat[8], repeat[9], audio_mode, duration, topk, topp, temperature, cfg_coef, seed, overlap, channel, sr_select - else: - return "Default", False, "", 120, "C", "Major", "large", None, "medium", 1, "", "", "", "", "", "", "", "", "", "", 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, "sample", 10, 250, 0, 1.0, 5.0, -1, 12, "stereo", "48000" - else: - return "Default", False, "", 120, "C", "Major", "large", None, "medium", 1, "", "", "", "", "", "", "", "", "", "", 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, "sample", 10, 250, 0, 1.0, 5.0, -1, 12, "stereo", "48000" - - -def info_to_params_a(audio_path): - if audio_path is not None: - if audio_path.name.endswith(".wav") or audio_path.name.endswith(".mp4") or audio_path.name.endswith(".json"): - if not audio_path.name.endswith(".json"): - with taglib.File(audio_path.name, save_on_exit=False) as song: - if 'COMMENT' not in song.tags: - return "Default", False, "", 1, "", "", "", "", "", "", "", "", "", "", 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 10, 250, 0, 1.0, 5.0, -1, 12, "stereo", "48000" - json_string = song.tags['COMMENT'][0] - data = json.loads(json_string) - struc_prompt = (False if data['global_prompt'] == "" else True) if 'global_prompt' in data else False - global_prompt = data['global_prompt'] if 'global_prompt' in data else "" - decoder = data['decoder'] if 'decoder' in data else "Default" - if 'texts' not in data: - unique_prompts = 1 - text = ["", "", "", "", "", "", "", "", "", ""] - repeat = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1] - else: - s = data['texts'] - s = re.findall(r"'(.*?)'", s) - text = [] - repeat = [] - i = 0 - for elem in s: - if elem.strip(): - if i == 0 or elem != s[i-1]: - text.append(elem) - repeat.append(1) - else: - repeat[-1] += 1 - i += 1 - text.extend([""] * (10 - len(text))) - repeat.extend([1] * (10 - len(repeat))) - unique_prompts = len([t for t in text if t]) - duration = int(data['duration']) if 'duration' in data else 10 - topk = float(data['topk']) if 'topk' in data else 250 - topp = float(data['topp']) if 'topp' in data else 0 - temperature = float(data['temperature']) if 'temperature' in data else 1.0 - cfg_coef = float(data['cfg_coef']) if 'cfg_coef' in data else 5.0 - seed = int(data['seed']) if 'seed' in data else -1 - overlap = int(data['overlap']) if 'overlap' in data else 12 - channel = data['channel'] if 'channel' in data else "stereo" - sr_select = data['sr_select'] if 'sr_select' in data else "48000" - return decoder, struc_prompt, global_prompt, unique_prompts, text[0], text[1], text[2], text[3], text[4], text[5], text[6], text[7], text[8], text[9], repeat[0], repeat[1], repeat[2], repeat[3], repeat[4], repeat[5], repeat[6], repeat[7], repeat[8], repeat[9], duration, topk, topp, temperature, cfg_coef, seed, overlap, channel, sr_select - else: - with open(audio_path.name) as json_file: - data = json.load(json_file) - struc_prompt = (False if data['global_prompt'] == "" else True) if 'global_prompt' in data else False - global_prompt = data['global_prompt'] if 'global_prompt' in data else "" - decoder = data['decoder'] if 'decoder' in data else "Default" - if 'texts' not in data: - unique_prompts = 1 - text = ["", "", "", "", "", "", "", "", "", ""] - repeat = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1] - else: - s = data['texts'] - s = re.findall(r"'(.*?)'", s) - text = [] - repeat = [] - i = 0 - for elem in s: - if elem.strip(): - if i == 0 or elem != s[i-1]: - text.append(elem) - repeat.append(1) - else: - repeat[-1] += 1 - i += 1 - text.extend([""] * (10 - len(text))) - repeat.extend([1] * (10 - len(repeat))) - unique_prompts = len([t for t in text if t]) - duration = int(data['duration']) if 'duration' in data else 10 - topk = float(data['topk']) if 'topk' in data else 250 - topp = float(data['topp']) if 'topp' in data else 0 - temperature = float(data['temperature']) if 'temperature' in data else 1.0 - cfg_coef = float(data['cfg_coef']) if 'cfg_coef' in data else 5.0 - seed = int(data['seed']) if 'seed' in data else -1 - overlap = int(data['overlap']) if 'overlap' in data else 12 - channel = data['channel'] if 'channel' in data else "stereo" - sr_select = data['sr_select'] if 'sr_select' in data else "48000" - return decoder, struc_prompt, global_prompt, unique_prompts, text[0], text[1], text[2], text[3], text[4], text[5], text[6], text[7], text[8], text[9], repeat[0], repeat[1], repeat[2], repeat[3], repeat[4], repeat[5], repeat[6], repeat[7], repeat[8], repeat[9], duration, topk, topp, temperature, cfg_coef, seed, overlap, channel, sr_select - - else: - return "Default", False, "", 1, "", "", "", "", "", "", "", "", "", "", 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 10, 250, 0, 1.0, 5.0, -1, 12, "stereo", "48000" - else: - return "Default", False, "", 1, "", "", "", "", "", "", "", "", "", "", 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 10, 250, 0, 1.0, 5.0, -1, 12, "stereo", "48000" - - -def make_pseudo_stereo (filename, sr_select, pan, delay): - if pan: - temp = AudioSegment.from_wav(filename) - if sr_select != "32000": - temp = temp.set_frame_rate(int(sr_select)) - left = temp.pan(-0.5) - 5 - right = temp.pan(0.6) - 5 - temp = left.overlay(right, position=5) - temp.export(filename, format="wav") - if delay: - waveform, sample_rate = torchaudio.load(filename) # load mono WAV file - delay_seconds = 0.01 # set delay 10ms - delay_samples = int(delay_seconds * sample_rate) # Calculating delay value in number of samples - stereo_waveform = torch.stack([waveform[0], torch.cat((torch.zeros(delay_samples), waveform[0][:-delay_samples]))]) # Generate a stereo file with original mono audio and delayed version - torchaudio.save(filename, stereo_waveform, sample_rate) - return - - -def normalize_audio(audio_data): - audio_data = audio_data.astype(np.float32) - max_value = np.max(np.abs(audio_data)) - audio_data /= max_value - return audio_data - - -def load_diffusion(): - global MBD - if MBD is None: - print("loading MBD") - MBD = MultiBandDiffusion.get_mbd_musicgen() - - -def unload_diffusion(): - global MBD - if MBD is not None: - print("unloading MBD") - MBD = None - - -def _do_predictions(gen_type, texts, melodies, sample, trim_start, trim_end, duration, image, height, width, background, bar1, bar2, channel, sr_select, progress=False, **gen_kwargs): - if gen_type == "music": - maximum_size = 29.5 - elif gen_type == "audio": - maximum_size = 9.5 - cut_size = 0 - input_length = 0 - sampleP = None - if sample is not None: - globalSR, sampleM = sample[0], sample[1] - sampleM = normalize_audio(sampleM) - sampleM = torch.from_numpy(sampleM).t() - if sampleM.dim() == 1: - sampleM = sampleM.unsqueeze(0) - sample_length = sampleM.shape[sampleM.dim() - 1] / globalSR - if trim_start >= sample_length: - trim_start = sample_length - 0.5 - if trim_end >= sample_length: - trim_end = sample_length - 0.5 - if trim_start + trim_end >= sample_length: - tmp = sample_length - 0.5 - trim_start = tmp / 2 - trim_end = tmp / 2 - sampleM = sampleM[..., int(globalSR * trim_start):int(globalSR * (sample_length - trim_end))] - sample_length = sample_length - (trim_start + trim_end) - if sample_length > maximum_size: - cut_size = sample_length - maximum_size - sampleP = sampleM[..., :int(globalSR * cut_size)] - sampleM = sampleM[..., int(globalSR * cut_size):] - if sample_length >= duration: - duration = sample_length + 0.5 - input_length = sample_length - global MODEL - MODEL.set_generation_params(duration=(duration - cut_size), **gen_kwargs) - print("new batch", len(texts), texts, [None if m is None else (m[0], m[1].shape) for m in melodies], [None if sample is None else (sample[0], sample[1].shape)]) - be = time.time() - processed_melodies = [] - if gen_type == "music": - target_sr = 32000 - elif gen_type == "audio": - target_sr = 16000 - target_ac = 1 - - for melody in melodies: - if melody is None: - processed_melodies.append(None) - else: - sr, melody = melody[0], torch.from_numpy(melody[1]).to(MODEL.device).float().t() - if melody.dim() == 1: - melody = melody[None] - melody = melody[..., :int(sr * duration)] - melody = convert_audio(melody, sr, target_sr, target_ac) - processed_melodies.append(melody) - - if sample is not None: - if sampleP is None: - if gen_type == "music": - outputs = MODEL.generate_continuation( - prompt=sampleM, - prompt_sample_rate=globalSR, - descriptions=texts, - progress=progress, - return_tokens=USE_DIFFUSION - ) - elif gen_type == "audio": - outputs = MODEL.generate_continuation( - prompt=sampleM, - prompt_sample_rate=globalSR, - descriptions=texts, - progress=progress - ) - else: - if sampleP.dim() > 1: - sampleP = convert_audio(sampleP, globalSR, target_sr, target_ac) - sampleP = sampleP.to(MODEL.device).float().unsqueeze(0) - if gen_type == "music": - outputs = MODEL.generate_continuation( - prompt=sampleM, - prompt_sample_rate=globalSR, - descriptions=texts, - progress=progress, - return_tokens=USE_DIFFUSION - ) - elif gen_type == "audio": - outputs = MODEL.generate_continuation( - prompt=sampleM, - prompt_sample_rate=globalSR, - descriptions=texts, - progress=progress - ) - outputs = torch.cat([sampleP, outputs], 2) - - elif any(m is not None for m in processed_melodies): - if gen_type == "music": - outputs = MODEL.generate_with_chroma( - descriptions=texts, - melody_wavs=processed_melodies, - melody_sample_rate=target_sr, - progress=progress, - return_tokens=USE_DIFFUSION - ) - elif gen_type == "audio": - outputs = MODEL.generate_with_chroma( - descriptions=texts, - melody_wavs=processed_melodies, - melody_sample_rate=target_sr, - progress=progress - ) - else: - if gen_type == "music": - outputs = MODEL.generate(texts, progress=progress, return_tokens=USE_DIFFUSION) - elif gen_type == "audio": - outputs = MODEL.generate(texts, progress=progress) - - if USE_DIFFUSION: - print("outputs: " + str(outputs)) - outputs_diffusion = MBD.tokens_to_wav(outputs[1]) - outputs = torch.cat([outputs[0], outputs_diffusion], dim=0) - outputs = outputs.detach().cpu().float() - backups = outputs - if channel == "stereo": - outputs = convert_audio(outputs, target_sr, int(sr_select), 2) - elif channel == "mono" and sr_select != "32000": - outputs = convert_audio(outputs, target_sr, int(sr_select), 1) - out_files = [] - out_audios = [] - out_backup = [] - for output in outputs: - with NamedTemporaryFile("wb", suffix=".wav", delete=False) as file: - audio_write( - file.name, output, (MODEL.sample_rate if channel == "stereo effect" else int(sr_select)), strategy="loudness", - loudness_headroom_db=16, loudness_compressor=True, add_suffix=False) - - if channel == "stereo effect": - make_pseudo_stereo(file.name, sr_select, pan=True, delay=True); - - out_files.append(pool.submit(make_waveform, file.name, bg_image=image, bg_color=background, bars_color=(bar1, bar2), fg_alpha=1.0, bar_count=75, height=height, width=width)) - out_audios.append(file.name) - file_cleaner.add(file.name) - print(f'wav: {file.name}') - for backup in backups: - with NamedTemporaryFile("wb", suffix=".wav", delete=False) as file: - audio_write( - file.name, backup, MODEL.sample_rate, strategy="loudness", - loudness_headroom_db=16, loudness_compressor=True, add_suffix=False) - out_backup.append(file.name) - file_cleaner.add(file.name) - res = [out_file.result() for out_file in out_files] - res_audio = out_audios - res_backup = out_backup - for file in res: - file_cleaner.add(file) - print(f'video: {file}') - print("batch finished", len(texts), time.time() - be) - print("Tempfiles currently stored: ", len(file_cleaner.files)) - if MOVE_TO_CPU: - MODEL.to('cpu') - if UNLOAD_MODEL: - MODEL = None - torch.cuda.empty_cache() - torch.cuda.ipc_collect() - return res, res_audio, res_backup, input_length - - -def predict_batched(texts, melodies): - max_text_length = 512 - texts = [text[:max_text_length] for text in texts] - load_model('melody') - res = _do_predictions(texts, melodies, BATCHED_DURATION) - return res - - -def add_tags(filename, tags): - json_string = None - - data = { - "global_prompt": tags[0], - "bpm": tags[1], - "key": tags[2], - "scale": tags[3], - "texts": tags[4], - "duration": tags[5], - "overlap": tags[6], - "seed": tags[7], - "audio_mode": tags[8], - "input_length": tags[9], - "channel": tags[10], - "sr_select": tags[11], - "model": tags[12], - "custom_model": tags[13], - "base_model": tags[14], - "decoder": tags[15], - "topk": tags[16], - "topp": tags[17], - "temperature": tags[18], - "cfg_coef": tags[19], - "generator": tags[20], - "version": version - } - - json_string = json.dumps(data) - - if os.path.exists(filename): - with taglib.File(filename, save_on_exit=True) as song: - song.tags = {'COMMENT': json_string } - - json_file = open(tags[7] + '.json', 'w') - json_file.write(json_string) - json_file.close() - - return json_file.name; - - -def save_outputs(mp4, wav_tmp, tags, gen_type): - # mp4: .mp4 file name in root running folder of app.py - # wav_tmp: temporary wav file located in %TEMP% folder - # seed - used seed - # exanple BgnJtr4Pn1AJ.mp4, C:\Users\Alex\AppData\Local\Temp\tmp4ermrebs.wav, 195123182343465 - # procedure read generated .mp4 and wav files, rename it by using seed as name, - # and will store it to ./output/today_date/wav and ./output/today_date/mp4 folders. - # if file with same seed number already exist its make postfix in name like seed(n) - # where is n - consiqunce number 1-2-3-4 and so on - # then we store generated mp4 and wav into destination folders. - - current_date = datetime.now().strftime("%Y%m%d") - wav_directory = os.path.join(os.getcwd(), 'output', current_date, gen_type,'wav') - mp4_directory = os.path.join(os.getcwd(), 'output', current_date, gen_type,'mp4') - json_directory = os.path.join(os.getcwd(), 'output', current_date, gen_type,'json') - os.makedirs(wav_directory, exist_ok=True) - os.makedirs(mp4_directory, exist_ok=True) - os.makedirs(json_directory, exist_ok=True) - - filename = str(tags[7]) + '.wav' - target = os.path.join(wav_directory, filename) - counter = 1 - while os.path.exists(target): - filename = str(tags[7]) + f'({counter})' + '.wav' - target = os.path.join(wav_directory, filename) - counter += 1 - - shutil.copyfile(wav_tmp, target); # make copy of original file - json_file = add_tags(target, tags); - - wav_target=target; - target=target.replace('wav', 'mp4'); - mp4_target=target; - - mp4=r'./' +mp4; - shutil.copyfile(mp4, target); # make copy of original file - _ = add_tags(target, tags); - - target=target.replace('mp4', 'json'); # change the extension to json - json_target=target; # store the json target - - with open(target, 'w') as f: # open a writable file object - shutil.copyfile(json_file, target); # make copy of original file - - os.remove(json_file) - - return wav_target, mp4_target, json_target; - - -def clear_cash(): - # delete all temporary files genegated my system - current_date = datetime.now().date() - current_directory = os.getcwd() - files = glob.glob(os.path.join(current_directory, '*.mp4')) - for file in files: - creation_date = datetime.fromtimestamp(os.path.getctime(file)).date() - if creation_date == current_date: - os.remove(file) - - temp_directory = os.environ.get('TEMP') - files = glob.glob(os.path.join(temp_directory, 'tmp*.mp4')) - for file in files: - creation_date = datetime.fromtimestamp(os.path.getctime(file)).date() - if creation_date == current_date: - os.remove(file) - - files = glob.glob(os.path.join(temp_directory, 'tmp*.wav')) - for file in files: - creation_date = datetime.fromtimestamp(os.path.getctime(file)).date() - if creation_date == current_date: - os.remove(file) - - files = glob.glob(os.path.join(temp_directory, 'tmp*.png')) - for file in files: - creation_date = datetime.fromtimestamp(os.path.getctime(file)).date() - if creation_date == current_date: - os.remove(file) - return - - -def s2t(seconds, seconds2): - # convert seconds to time format - # seconds - time in seconds - # return time in format 00:00 - m, s = divmod(seconds, 60) - m2, s2 = divmod(seconds2, 60) - if seconds != 0 and seconds < seconds2: - s = s + 1 - return ("%02d:%02d - %02d:%02d" % (m, s, m2, s2)) - - -def calc_time(gen_type, s, duration, overlap, d0, d1, d2, d3, d4, d5, d6, d7, d8, d9): - # calculate the time of generation - # overlap - overlap in seconds - # d0-d9 - drag - # return time in seconds - d_amount = [int(d0), int(d1), int(d2), int(d3), int(d4), int(d5), int(d6), int(d7), int(d8), int(d9)] - calc = [] - tracks = [] - time = 0 - s = s - 1 - max_time = duration - max_limit = 0 - if gen_type == "music": - max_limit = 30 - elif gen_type == "audio": - max_limit = 10 - track_add = max_limit - overlap - tracks.append(max_limit + ((d_amount[0] - 1) * track_add)) - for i in range(1, 10): - tracks.append(d_amount[i] * track_add) - - if tracks[0] >= max_time or s == 0: - calc.append(s2t(time, max_time)) - time = max_time - else: - calc.append(s2t(time, tracks[0])) - time = tracks[0] - - for i in range(1, 10): - if time + tracks[i] >= max_time or i == s: - calc.append(s2t(time, max_time)) - time = max_time - else: - calc.append(s2t(time, time + tracks[i])) - time = time + tracks[i] - - return calc[0], calc[1], calc[2], calc[3], calc[4], calc[5], calc[6], calc[7], calc[8], calc[9] - - -def predict_full(gen_type, model, decoder, custom_model, base_model, prompt_amount, struc_prompt, bpm, key, scale, global_prompt, p0, p1, p2, p3, p4, p5, p6, p7, p8, p9, d0, d1, d2, d3, d4, d5, d6, d7, d8, d9, audio, mode, trim_start, trim_end, duration, topk, topp, temperature, cfg_coef, seed, overlap, image, height, width, background, bar1, bar2, channel, sr_select, progress=gr.Progress()): - global INTERRUPTING - global USE_DIFFUSION - INTERRUPTING = False - - if gen_type == "audio": - custom_model = None - base_model = "medium" - - if temperature < 0: - raise gr.Error("Temperature must be >= 0.") - if topk < 0: - raise gr.Error("Topk must be non-negative.") - if topp < 0: - raise gr.Error("Topp must be non-negative.") - - if trim_start < 0: - trim_start = 0 - if trim_end < 0: - trim_end = 0 - - topk = int(topk) - - if decoder == "MultiBand_Diffusion": - USE_DIFFUSION = True - load_diffusion() - else: - USE_DIFFUSION = False - unload_diffusion() - - if gen_type == "music": - model_shrt = model - model = "GrandaddyShmax/musicgen-" + model - elif gen_type == "audio": - model_shrt = model - model = "GrandaddyShmax/audiogen-" + model - base_model_shrt = base_model - base_model = "GrandaddyShmax/musicgen-" + base_model - - if MODEL is None or MODEL.name != (model): - load_model(model, custom_model, base_model, gen_type) - else: - if MOVE_TO_CPU: - MODEL.to('cuda') - - if seed < 0: - seed = random.randint(0, 0xffff_ffff_ffff) - torch.manual_seed(seed) - - def _progress(generated, to_generate): - progress((min(generated, to_generate), to_generate)) - if INTERRUPTING: - raise gr.Error("Interrupted.") - MODEL.set_custom_progress_callback(_progress) - - audio_mode = "none" - melody = None - sample = None - if audio: - audio_mode = mode - if mode == "sample": - sample = audio - elif mode == "melody": - melody = audio - - base_model = "none" if model != "custom" else base_model - custom_model = "none" if model != "custom" else custom_model - - text_cat = [p0, p1, p2, p3, p4, p5, p6, p7, p8, p9] - drag_cat = [d0, d1, d2, d3, d4, d5, d6, d7, d8, d9] - texts = [] - raw_texts = [] - ind = 0 - ind2 = 0 - while ind < prompt_amount: - for ind2 in range(int(drag_cat[ind])): - if not struc_prompt: - texts.append(text_cat[ind]) - global_prompt = "none" - bpm = "none" - key = "none" - scale = "none" - raw_texts.append(text_cat[ind]) - else: - if gen_type == "music": - bpm_str = str(bpm) + " bpm" - key_str = ", " + str(key) + " " + str(scale) - global_str = (", " + str(global_prompt)) if str(global_prompt) != "" else "" - elif gen_type == "audio": - bpm_str = "" - key_str = "" - global_str = (str(global_prompt)) if str(global_prompt) != "" else "" - texts_str = (", " + str(text_cat[ind])) if str(text_cat[ind]) != "" else "" - texts.append(bpm_str + key_str + global_str + texts_str) - raw_texts.append(text_cat[ind]) - ind2 = 0 - ind = ind + 1 - - outs, outs_audio, outs_backup, input_length = _do_predictions( - gen_type, [texts], [melody], sample, trim_start, trim_end, duration, image, height, width, background, bar1, bar2, channel, sr_select, progress=True, - top_k=topk, top_p=topp, temperature=temperature, cfg_coef=cfg_coef, extend_stride=MODEL.max_duration-overlap) - tags = [str(global_prompt), str(bpm), str(key), str(scale), str(raw_texts), str(duration), str(overlap), str(seed), str(audio_mode), str(input_length), str(channel), str(sr_select), str(model_shrt), str(custom_model), str(base_model_shrt), str(decoder), str(topk), str(topp), str(temperature), str(cfg_coef), str(gen_type)] - wav_target, mp4_target, json_target = save_outputs(outs[0], outs_audio[0], tags, gen_type); - # Removes the temporary files. - for out in outs: - os.remove(out) - for out in outs_audio: - os.remove(out) - - return mp4_target, wav_target, outs_backup[0], [mp4_target, wav_target, json_target], seed - - -max_textboxes = 10 - - -def get_available_models(): - return sorted([re.sub('.pt$', '', item.name) for item in list(Path('models/').glob('*')) if item.name.endswith('.pt')]) - - -def toggle_audio_src(choice): - if choice == "mic": - return gr.update(source="microphone", value=None, label="Microphone") - else: - return gr.update(source="upload", value=None, label="File") - - -def ui_full(launch_kwargs): - with gr.Blocks(title='AudioCraft Plus', theme=theme) as interface: - gr.Markdown( - """ - # AudioCraft Plus - v2.0.0a - - ### An All-in-One AudioCraft WebUI - - #### **Disclaimer:** This will not run on CPU only. Its best to clone this App and run on GPU instance! - **Alternatively**, you can run this for free on a google colab: - https://colab.research.google.com/github/camenduru/MusicGen-colab/blob/main/MusicGen_ClownOfMadness_plus_colab.ipynb - - **Or**, run this locally on your PC: - https://github.com/GrandaddyShmax/audiocraft_plus/tree/main - - Thanks to: facebookresearch, Camenduru, rkfg, oobabooga, AlexHK and GrandaddyShmax - """ - ) - with gr.Tab("MusicGen"): - gr.Markdown( - """ - ### MusicGen - """ - ) - with gr.Row(): - with gr.Column(): - with gr.Tab("Generation"): - with gr.Accordion("Structure Prompts", open=False): - with gr.Column(): - with gr.Row(): - struc_prompts = gr.Checkbox(label="Enable", value=False, interactive=True, container=False) - bpm = gr.Number(label="BPM", value=120, interactive=True, scale=1, precision=0) - key = gr.Dropdown(["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "Bb", "B"], label="Key", value="C", interactive=True) - scale = gr.Dropdown(["Major", "Minor"], label="Scale", value="Major", interactive=True) - with gr.Row(): - global_prompt = gr.Text(label="Global Prompt", interactive=True, scale=3) - with gr.Row(): - s = gr.Slider(1, max_textboxes, value=1, step=1, label="Prompts:", interactive=True, scale=2) - #s_mode = gr.Radio(["segmentation", "batch"], value="segmentation", interactive=True, scale=1, label="Generation Mode") - with gr.Column(): - textboxes = [] - prompts = [] - repeats = [] - calcs = [] - with gr.Row(): - text0 = gr.Text(label="Input Text", interactive=True, scale=4) - prompts.append(text0) - drag0 = gr.Number(label="Repeat", value=1, interactive=True, scale=1) - repeats.append(drag0) - calc0 = gr.Text(interactive=False, value="00:00 - 00:00", scale=1, label="Time") - calcs.append(calc0) - for i in range(max_textboxes): - with gr.Row(visible=False) as t: - text = gr.Text(label="Input Text", interactive=True, scale=3) - repeat = gr.Number(label="Repeat", minimum=1, value=1, interactive=True, scale=1) - calc = gr.Text(interactive=False, value="00:00 - 00:00", scale=1, label="Time") - textboxes.append(t) - prompts.append(text) - repeats.append(repeat) - calcs.append(calc) - to_calc = gr.Button("Calculate Timings", variant="secondary") - with gr.Row(): - duration = gr.Slider(minimum=1, maximum=300, value=10, step=1, label="Duration", interactive=True) - with gr.Row(): - overlap = gr.Slider(minimum=1, maximum=29, value=12, step=1, label="Overlap", interactive=True) - with gr.Row(): - seed = gr.Number(label="Seed", value=-1, scale=4, precision=0, interactive=True) - gr.Button('\U0001f3b2\ufe0f', scale=1).click(fn=lambda: -1, outputs=[seed], queue=False) - reuse_seed = gr.Button('\u267b\ufe0f', scale=1) - - with gr.Tab("Audio"): - with gr.Row(): - with gr.Column(): - input_type = gr.Radio(["file", "mic"], value="file", label="Input Type (optional)", interactive=True) - mode = gr.Radio(["melody", "sample"], label="Input Audio Mode (optional)", value="sample", interactive=True) - with gr.Row(): - trim_start = gr.Number(label="Trim Start", value=0, interactive=True) - trim_end = gr.Number(label="Trim End", value=0, interactive=True) - audio = gr.Audio(source="upload", type="numpy", label="Input Audio (optional)", interactive=True) - - with gr.Tab("Customization"): - with gr.Row(): - with gr.Column(): - background = gr.ColorPicker(value="#0f0f0f", label="background color", interactive=True, scale=0) - bar1 = gr.ColorPicker(value="#84cc16", label="bar color start", interactive=True, scale=0) - bar2 = gr.ColorPicker(value="#10b981", label="bar color end", interactive=True, scale=0) - with gr.Column(): - image = gr.Image(label="Background Image", type="filepath", interactive=True, scale=4) - with gr.Row(): - height = gr.Number(label="Height", value=512, interactive=True) - width = gr.Number(label="Width", value=768, interactive=True) - - with gr.Tab("Settings"): - with gr.Row(): - channel = gr.Radio(["mono", "stereo", "stereo effect"], label="Output Audio Channels", value="stereo", interactive=True, scale=1) - sr_select = gr.Dropdown(["11025", "16000", "22050", "24000", "32000", "44100", "48000"], label="Output Audio Sample Rate", value="48000", interactive=True) - with gr.Row(): - model = gr.Radio(["melody", "small", "medium", "large", "custom"], label="Model", value="large", interactive=True, scale=1) - with gr.Column(): - dropdown = gr.Dropdown(choices=get_available_models(), value=("No models found" if len(get_available_models()) < 1 else get_available_models()[0]), label='Custom Model (models folder)', elem_classes='slim-dropdown', interactive=True) - ui.create_refresh_button(dropdown, lambda: None, lambda: {'choices': get_available_models()}, 'refresh-button') - basemodel = gr.Radio(["small", "medium", "melody", "large"], label="Base Model", value="medium", interactive=True, scale=1) - with gr.Row(): - decoder = gr.Radio(["Default", "MultiBand_Diffusion"], label="Decoder", value="Default", interactive=True) - with gr.Row(): - topk = gr.Number(label="Top-k", value=250, interactive=True) - topp = gr.Number(label="Top-p", value=0, interactive=True) - temperature = gr.Number(label="Temperature", value=1.0, interactive=True) - cfg_coef = gr.Number(label="Classifier Free Guidance", value=3.0, interactive=True) - with gr.Row(): - submit = gr.Button("Generate", variant="primary") - # Adapted from https://github.com/rkfg/audiocraft/blob/long/app.py, MIT license. - _ = gr.Button("Interrupt").click(fn=interrupt, queue=False) - with gr.Column() as c: - with gr.Tab("Output"): - output = gr.Video(label="Generated Music", scale=0) - with gr.Row(): - audio_only = gr.Audio(type="numpy", label="Audio Only", interactive=False) - backup_only = gr.Audio(type="numpy", label="Backup Audio", interactive=False, visible=False) - send_audio = gr.Button("Send to Input Audio") - seed_used = gr.Number(label='Seed used', value=-1, interactive=False) - download = gr.File(label="Generated Files", interactive=False) - with gr.Tab("Wiki"): - gr.Markdown( - """ - - **[Generate (button)]:** - Generates the music with the given settings and prompts. - - - **[Interrupt (button)]:** - Stops the music generation as soon as it can, providing an incomplete output. - - --- - - ### Generation Tab: - - #### Structure Prompts: - - This feature helps reduce repetetive prompts by allowing you to set global prompts - that will be used for all prompt segments. - - - **[Structure Prompts (checkbox)]:** - Enable/Disable the structure prompts feature. - - - **[BPM (number)]:** - Beats per minute of the generated music. - - - **[Key (dropdown)]:** - The key of the generated music. - - - **[Scale (dropdown)]:** - The scale of the generated music. - - - **[Global Prompt (text)]:** - Here write the prompt that you wish to be used for all prompt segments. - - #### Multi-Prompt: - - This feature allows you to control the music, adding variation to different time segments. - You have up to 10 prompt segments. the first prompt will always be 30s long - the other prompts will be [30s - overlap]. - for example if the overlap is 10s, each prompt segment will be 20s. - - - **[Prompt Segments (number)]:** - Amount of unique prompt to generate throughout the music generation. - - - **[Prompt/Input Text (prompt)]:** - Here describe the music you wish the model to generate. - - - **[Repeat (number)]:** - Write how many times this prompt will repeat (instead of wasting another prompt segment on the same prompt). - - - **[Time (text)]:** - The time of the prompt segment. - - - **[Calculate Timings (button)]:** - Calculates the timings of the prompt segments. - - - **[Duration (number)]:** - How long you want the generated music to be (in seconds). - - - **[Overlap (number)]:** - How much each new segment will reference the previous segment (in seconds). - For example, if you choose 20s: Each new segment after the first one will reference the previous segment 20s - and will generate only 10s of new music. The model can only process 30s of music. - - - **[Seed (number)]:** - Your generated music id. If you wish to generate the exact same music, - place the exact seed with the exact prompts - (This way you can also extend specific song that was generated short). - - - **[Random Seed (button)]:** - Gives "-1" as a seed, which counts as a random seed. - - - **[Copy Previous Seed (button)]:** - Copies the seed from the output seed (if you don't feel like doing it manualy). - - --- - - ### Audio Tab: - - - **[Input Type (selection)]:** - `File` mode allows you to upload an audio file to use as input - `Mic` mode allows you to use your microphone as input - - - **[Input Audio Mode (selection)]:** - `Melody` mode only works with the melody model: it conditions the music generation to reference the melody - `Sample` mode works with any model: it gives a music sample to the model to generate its continuation. - - - **[Trim Start and Trim End (numbers)]:** - `Trim Start` set how much you'd like to trim the input audio from the start - `Trim End` same as the above but from the end - - - **[Input Audio (audio file)]:** - Input here the audio you wish to use with "melody" or "sample" mode. - - --- - - ### Customization Tab: - - - **[Background Color (color)]:** - Works only if you don't upload image. Color of the background of the waveform. - - - **[Bar Color Start (color)]:** - First color of the waveform bars. - - - **[Bar Color End (color)]:** - Second color of the waveform bars. - - - **[Background Image (image)]:** - Background image that you wish to be attached to the generated video along with the waveform. - - - **[Height and Width (numbers)]:** - Output video resolution, only works with image. - (minimum height and width is 256). - - --- - - ### Settings Tab: - - - **[Output Audio Channels (selection)]:** - With this you can select the amount of channels that you wish for your output audio. - `mono` is a straightforward single channel audio - `stereo` is a dual channel audio but it will sound more or less like mono - `stereo effect` this one is also dual channel but uses tricks to simulate a stereo audio. - - - **[Output Audio Sample Rate (dropdown)]:** - The output audio sample rate, the model default is 32000. - - - **[Model (selection)]:** - Here you can choose which model you wish to use: - `melody` model is based on the medium model with a unique feature that lets you use melody conditioning - `small` model is trained on 300M parameters - `medium` model is trained on 1.5B parameters - `large` model is trained on 3.3B parameters - `custom` model runs the custom model that you provided. - - - **[Custom Model (selection)]:** - This dropdown will show you models that are placed in the `models` folder - you must select `custom` in the model options in order to use it. - - - **[Refresh (button)]:** - Refreshes the dropdown list for custom model. - - - **[Base Model (selection)]:** - Choose here the model that your custom model is based on. - - - **[Decoder (selection)]:** - Choose here the decoder that you wish to use: - `Default` is the default decoder - `MultiBand_Diffusion` is a decoder that uses diffusion to generate the audio. - - - **[Top-k (number)]:** - is a parameter used in text generation models, including music generation models. It determines the number of most likely next tokens to consider at each step of the generation process. The model ranks all possible tokens based on their predicted probabilities, and then selects the top-k tokens from the ranked list. The model then samples from this reduced set of tokens to determine the next token in the generated sequence. A smaller value of k results in a more focused and deterministic output, while a larger value of k allows for more diversity in the generated music. - - - **[Top-p (number)]:** - also known as nucleus sampling or probabilistic sampling, is another method used for token selection during text generation. Instead of specifying a fixed number like top-k, top-p considers the cumulative probability distribution of the ranked tokens. It selects the smallest possible set of tokens whose cumulative probability exceeds a certain threshold (usually denoted as p). The model then samples from this set to choose the next token. This approach ensures that the generated output maintains a balance between diversity and coherence, as it allows for a varying number of tokens to be considered based on their probabilities. - - - **[Temperature (number)]:** - is a parameter that controls the randomness of the generated output. It is applied during the sampling process, where a higher temperature value results in more random and diverse outputs, while a lower temperature value leads to more deterministic and focused outputs. In the context of music generation, a higher temperature can introduce more variability and creativity into the generated music, but it may also lead to less coherent or structured compositions. On the other hand, a lower temperature can produce more repetitive and predictable music. - - - **[Classifier Free Guidance (number)]:** - refers to a technique used in some music generation models where a separate classifier network is trained to provide guidance or control over the generated music. This classifier is trained on labeled data to recognize specific musical characteristics or styles. During the generation process, the output of the generator model is evaluated by the classifier, and the generator is encouraged to produce music that aligns with the desired characteristics or style. This approach allows for more fine-grained control over the generated music, enabling users to specify certain attributes they want the model to capture. - """ - ) - with gr.Tab("AudioGen"): - gr.Markdown( - """ - ### AudioGen - """ - ) - with gr.Row(): - with gr.Column(): - with gr.Tab("Generation"): - with gr.Accordion("Structure Prompts", open=False): - with gr.Row(): - struc_prompts_a = gr.Checkbox(label="Enable", value=False, interactive=True, container=False) - global_prompt_a = gr.Text(label="Global Prompt", interactive=True, scale=3) - with gr.Row(): - s_a = gr.Slider(1, max_textboxes, value=1, step=1, label="Prompts:", interactive=True, scale=2) - with gr.Column(): - textboxes_a = [] - prompts_a = [] - repeats_a = [] - calcs_a = [] - with gr.Row(): - text0_a = gr.Text(label="Input Text", interactive=True, scale=4) - prompts_a.append(text0_a) - drag0_a = gr.Number(label="Repeat", value=1, interactive=True, scale=1) - repeats_a.append(drag0_a) - calc0_a = gr.Text(interactive=False, value="00:00 - 00:00", scale=1, label="Time") - calcs_a.append(calc0_a) - for i in range(max_textboxes): - with gr.Row(visible=False) as t_a: - text_a = gr.Text(label="Input Text", interactive=True, scale=3) - repeat_a = gr.Number(label="Repeat", minimum=1, value=1, interactive=True, scale=1) - calc_a = gr.Text(interactive=False, value="00:00 - 00:00", scale=1, label="Time") - textboxes_a.append(t_a) - prompts_a.append(text_a) - repeats_a.append(repeat_a) - calcs_a.append(calc_a) - to_calc_a = gr.Button("Calculate Timings", variant="secondary") - with gr.Row(): - duration_a = gr.Slider(minimum=1, maximum=300, value=10, step=1, label="Duration", interactive=True) - with gr.Row(): - overlap_a = gr.Slider(minimum=1, maximum=9, value=2, step=1, label="Overlap", interactive=True) - with gr.Row(): - seed_a = gr.Number(label="Seed", value=-1, scale=4, precision=0, interactive=True) - gr.Button('\U0001f3b2\ufe0f', scale=1).click(fn=lambda: -1, outputs=[seed_a], queue=False) - reuse_seed_a = gr.Button('\u267b\ufe0f', scale=1) - - with gr.Tab("Audio"): - with gr.Row(): - with gr.Column(): - input_type_a = gr.Radio(["file", "mic"], value="file", label="Input Type (optional)", interactive=True) - mode_a = gr.Radio(["sample"], label="Input Audio Mode (optional)", value="sample", interactive=False, visible=False) - with gr.Row(): - trim_start_a = gr.Number(label="Trim Start", value=0, interactive=True) - trim_end_a = gr.Number(label="Trim End", value=0, interactive=True) - audio_a = gr.Audio(source="upload", type="numpy", label="Input Audio (optional)", interactive=True) - - with gr.Tab("Customization"): - with gr.Row(): - with gr.Column(): - background_a = gr.ColorPicker(value="#0f0f0f", label="background color", interactive=True, scale=0) - bar1_a = gr.ColorPicker(value="#84cc16", label="bar color start", interactive=True, scale=0) - bar2_a = gr.ColorPicker(value="#10b981", label="bar color end", interactive=True, scale=0) - with gr.Column(): - image_a = gr.Image(label="Background Image", type="filepath", interactive=True, scale=4) - with gr.Row(): - height_a = gr.Number(label="Height", value=512, interactive=True) - width_a = gr.Number(label="Width", value=768, interactive=True) - - with gr.Tab("Settings"): - with gr.Row(): - channel_a = gr.Radio(["mono", "stereo", "stereo effect"], label="Output Audio Channels", value="stereo", interactive=True, scale=1) - sr_select_a = gr.Dropdown(["11025", "16000", "22050", "24000", "32000", "44100", "48000"], label="Output Audio Sample Rate", value="48000", interactive=True) - with gr.Row(): - model_a = gr.Radio(["medium"], label="Model", value="medium", interactive=False, visible=False) - decoder_a = gr.Radio(["Default"], label="Decoder", value="Default", interactive=False, visible=False) - with gr.Row(): - topk_a = gr.Number(label="Top-k", value=250, interactive=True) - topp_a = gr.Number(label="Top-p", value=0, interactive=True) - temperature_a = gr.Number(label="Temperature", value=1.0, interactive=True) - cfg_coef_a = gr.Number(label="Classifier Free Guidance", value=3.0, interactive=True) - with gr.Row(): - submit_a = gr.Button("Generate", variant="primary") - _ = gr.Button("Interrupt").click(fn=interrupt, queue=False) - with gr.Column(): - with gr.Tab("Output"): - output_a = gr.Video(label="Generated Audio", scale=0) - with gr.Row(): - audio_only_a = gr.Audio(type="numpy", label="Audio Only", interactive=False) - backup_only_a = gr.Audio(type="numpy", label="Backup Audio", interactive=False, visible=False) - send_audio_a = gr.Button("Send to Input Audio") - seed_used_a = gr.Number(label='Seed used', value=-1, interactive=False) - download_a = gr.File(label="Generated Files", interactive=False) - with gr.Tab("Wiki"): - gr.Markdown( - """ - - **[Generate (button)]:** - Generates the audio with the given settings and prompts. - - - **[Interrupt (button)]:** - Stops the audio generation as soon as it can, providing an incomplete output. - - --- - - ### Generation Tab: - - #### Structure Prompts: - - This feature helps reduce repetetive prompts by allowing you to set global prompts - that will be used for all prompt segments. - - - **[Structure Prompts (checkbox)]:** - Enable/Disable the structure prompts feature. - - - **[Global Prompt (text)]:** - Here write the prompt that you wish to be used for all prompt segments. - - #### Multi-Prompt: - - This feature allows you to control the audio, adding variation to different time segments. - You have up to 10 prompt segments. the first prompt will always be 10s long - the other prompts will be [10s - overlap]. - for example if the overlap is 2s, each prompt segment will be 8s. - - - **[Prompt Segments (number)]:** - Amount of unique prompt to generate throughout the audio generation. - - - **[Prompt/Input Text (prompt)]:** - Here describe the audio you wish the model to generate. - - - **[Repeat (number)]:** - Write how many times this prompt will repeat (instead of wasting another prompt segment on the same prompt). - - - **[Time (text)]:** - The time of the prompt segment. - - - **[Calculate Timings (button)]:** - Calculates the timings of the prompt segments. - - - **[Duration (number)]:** - How long you want the generated audio to be (in seconds). - - - **[Overlap (number)]:** - How much each new segment will reference the previous segment (in seconds). - For example, if you choose 2s: Each new segment after the first one will reference the previous segment 2s - and will generate only 8s of new audio. The model can only process 10s of music. - - - **[Seed (number)]:** - Your generated audio id. If you wish to generate the exact same audio, - place the exact seed with the exact prompts - (This way you can also extend specific song that was generated short). - - - **[Random Seed (button)]:** - Gives "-1" as a seed, which counts as a random seed. - - - **[Copy Previous Seed (button)]:** - Copies the seed from the output seed (if you don't feel like doing it manualy). - - --- - - ### Audio Tab: - - - **[Input Type (selection)]:** - `File` mode allows you to upload an audio file to use as input - `Mic` mode allows you to use your microphone as input - - - **[Trim Start and Trim End (numbers)]:** - `Trim Start` set how much you'd like to trim the input audio from the start - `Trim End` same as the above but from the end - - - **[Input Audio (audio file)]:** - Input here the audio you wish to use. - - --- - - ### Customization Tab: - - - **[Background Color (color)]:** - Works only if you don't upload image. Color of the background of the waveform. - - - **[Bar Color Start (color)]:** - First color of the waveform bars. - - - **[Bar Color End (color)]:** - Second color of the waveform bars. - - - **[Background Image (image)]:** - Background image that you wish to be attached to the generated video along with the waveform. - - - **[Height and Width (numbers)]:** - Output video resolution, only works with image. - (minimum height and width is 256). - - --- - - ### Settings Tab: - - - **[Output Audio Channels (selection)]:** - With this you can select the amount of channels that you wish for your output audio. - `mono` is a straightforward single channel audio - `stereo` is a dual channel audio but it will sound more or less like mono - `stereo effect` this one is also dual channel but uses tricks to simulate a stereo audio. - - - **[Output Audio Sample Rate (dropdown)]:** - The output audio sample rate, the model default is 32000. - - - **[Top-k (number)]:** - is a parameter used in text generation models, including music generation models. It determines the number of most likely next tokens to consider at each step of the generation process. The model ranks all possible tokens based on their predicted probabilities, and then selects the top-k tokens from the ranked list. The model then samples from this reduced set of tokens to determine the next token in the generated sequence. A smaller value of k results in a more focused and deterministic output, while a larger value of k allows for more diversity in the generated music. - - - **[Top-p (number)]:** - also known as nucleus sampling or probabilistic sampling, is another method used for token selection during text generation. Instead of specifying a fixed number like top-k, top-p considers the cumulative probability distribution of the ranked tokens. It selects the smallest possible set of tokens whose cumulative probability exceeds a certain threshold (usually denoted as p). The model then samples from this set to choose the next token. This approach ensures that the generated output maintains a balance between diversity and coherence, as it allows for a varying number of tokens to be considered based on their probabilities. - - - **[Temperature (number)]:** - is a parameter that controls the randomness of the generated output. It is applied during the sampling process, where a higher temperature value results in more random and diverse outputs, while a lower temperature value leads to more deterministic and focused outputs. In the context of music generation, a higher temperature can introduce more variability and creativity into the generated music, but it may also lead to less coherent or structured compositions. On the other hand, a lower temperature can produce more repetitive and predictable music. - - - **[Classifier Free Guidance (number)]:** - refers to a technique used in some music generation models where a separate classifier network is trained to provide guidance or control over the generated music. This classifier is trained on labeled data to recognize specific musical characteristics or styles. During the generation process, the output of the generator model is evaluated by the classifier, and the generator is encouraged to produce music that aligns with the desired characteristics or style. This approach allows for more fine-grained control over the generated music, enabling users to specify certain attributes they want the model to capture. - """ - ) - with gr.Tab("Audio Info"): - gr.Markdown( - """ - ### Audio Info - """ - ) - with gr.Row(): - with gr.Column(): - in_audio = gr.File(type="file", label="Input Any Audio", interactive=True) - with gr.Row(): - send_gen = gr.Button("Send to MusicGen", variant="primary") - send_gen_a = gr.Button("Send to AudioGen", variant="primary") - with gr.Column(): - info = gr.Textbox(label="Audio Info", lines=10, interactive=False) - with gr.Tab("Changelog"): - gr.Markdown( - """ - ## Changelog: - - ### v2.0.0a - - - Forgot to move all the update to app.py from temp2.py... oops - - - - ### v2.0.0 - - - Changed name from MusicGen+ to AudioCraft Plus - - - Complete overhaul of the repo "backend" with the latest changes from the main facebookresearch repo - - - Added a new decoder: MultiBand_Diffusion - - - Added AudioGen: a new tab for generating audio - - - - ### v1.2.8c - - - Implemented Reverse compatibility for audio info tab with previous versions - - - - ### v1.2.8b - - - Fixed the error when loading default models - - - - ### v1.2.8a - - - Adapted Audio info tab to work with the new structure prompts feature - - - Now custom models actually work, make sure you select the correct base model - - - - ### v1.2.8 - - - Now you will also recieve json file with metadata of generated audio - - - Added error messages in Audio Info tab - - - Added structure prompts: you can select bpm, key and global prompt for all prompts - - - Added time display next to each prompt, can be calculated with "Calculate Timings" button - - - - ### v1.2.7 - - - When sending generated audio to Input Audio, it will send a backup audio with default settings - (best for continuos generation) - - - Added Metadata to generated audio (Thanks to AlexHK ♥) - - - Added Audio Info tab that will display the metadata of the input audio - - - Added "send to Text2Audio" button in Audio Info tab - - - Generated audio is now stored in the "output" folder (Thanks to AlexHK ♥) - - - Added an output area with generated files and download buttons - - - Enhanced Stereo effect (Thanks to AlexHK ♥) - - - - ### v1.2.6 - - - Added option to generate in stereo (instead of only mono) - - - Added dropdown for selecting output sample rate (model default is 32000) - - - - ### v1.2.5a - - - Added file cleaner (This comes from the main facebookresearch repo) - - - Reorganized a little, moved audio to a seperate tab - - - - ### v1.2.5 - - - Gave a unique lime theme to the webui - - - Added additional output for audio only - - - Added button to send generated audio to Input Audio - - - Added option to trim Input Audio - - - - ### v1.2.4 - - - Added mic input (This comes from the main facebookresearch repo) - - - - ### v1.2.3 - - - Added option to change video size to fit the image you upload - - - - ### v1.2.2 - - - Added Wiki, Changelog and About tabs - - - - ### v1.2.1 - - - Added tabs and organized the entire interface - - - Added option to attach image to the output video - - - Added option to load fine-tuned models (Yet to be tested) - - - - ### v1.2.0 - - - Added Multi-Prompt - - - - ### v1.1.3 - - - Added customization options for generated waveform - - - - ### v1.1.2 - - - Removed sample length limit: now you can input audio of any length as music sample - - - - ### v1.1.1 - - - Improved music sample audio quality when using music continuation - - - - ### v1.1.0 - - - Rebuilt the repo on top of the latest structure of the main MusicGen repo - - - Improved Music continuation feature - - - - ### v1.0.0 - Stable Version - - - Added Music continuation - """ - ) - with gr.Tab("About"): - gen_type = gr.Text(value="music", interactive=False, visible=False) - gen_type_a = gr.Text(value="audio", interactive=False, visible=False) - gr.Markdown( - """ - This is your private demo for [MusicGen](https://github.com/facebookresearch/audiocraft), a simple and controllable model for music generation - presented at: ["Simple and Controllable Music Generation"](https://huggingface.co/papers/2306.05284) - - ## MusicGen+ is an extended version of the original MusicGen by facebookresearch. - - ### Repo: https://github.com/GrandaddyShmax/audiocraft_plus/tree/plus - - --- - - ### This project was possible thanks to: - - #### GrandaddyShmax - https://github.com/GrandaddyShmax - - #### Camenduru - https://github.com/camenduru - - #### rkfg - https://github.com/rkfg - - #### oobabooga - https://github.com/oobabooga - - #### AlexHK - https://github.com/alanhk147 - """ - ) - - send_gen.click(info_to_params, inputs=[in_audio], outputs=[decoder, struc_prompts, global_prompt, bpm, key, scale, model, dropdown, basemodel, s, prompts[0], prompts[1], prompts[2], prompts[3], prompts[4], prompts[5], prompts[6], prompts[7], prompts[8], prompts[9], repeats[0], repeats[1], repeats[2], repeats[3], repeats[4], repeats[5], repeats[6], repeats[7], repeats[8], repeats[9], mode, duration, topk, topp, temperature, cfg_coef, seed, overlap, channel, sr_select], queue=False) - reuse_seed.click(fn=lambda x: x, inputs=[seed_used], outputs=[seed], queue=False) - send_audio.click(fn=lambda x: x, inputs=[backup_only], outputs=[audio], queue=False) - submit.click(predict_full, inputs=[gen_type, model, decoder, dropdown, basemodel, s, struc_prompts, bpm, key, scale, global_prompt, prompts[0], prompts[1], prompts[2], prompts[3], prompts[4], prompts[5], prompts[6], prompts[7], prompts[8], prompts[9], repeats[0], repeats[1], repeats[2], repeats[3], repeats[4], repeats[5], repeats[6], repeats[7], repeats[8], repeats[9], audio, mode, trim_start, trim_end, duration, topk, topp, temperature, cfg_coef, seed, overlap, image, height, width, background, bar1, bar2, channel, sr_select], outputs=[output, audio_only, backup_only, download, seed_used]) - input_type.change(toggle_audio_src, input_type, [audio], queue=False, show_progress=False) - to_calc.click(calc_time, inputs=[gen_type, s, duration, overlap, repeats[0], repeats[1], repeats[2], repeats[3], repeats[4], repeats[5], repeats[6], repeats[7], repeats[8], repeats[9]], outputs=[calcs[0], calcs[1], calcs[2], calcs[3], calcs[4], calcs[5], calcs[6], calcs[7], calcs[8], calcs[9]], queue=False) - - send_gen_a.click(info_to_params_a, inputs=[in_audio], outputs=[decoder_a, struc_prompts_a, global_prompt_a, s_a, prompts_a[0], prompts_a[1], prompts_a[2], prompts_a[3], prompts_a[4], prompts_a[5], prompts_a[6], prompts_a[7], prompts_a[8], prompts_a[9], repeats_a[0], repeats_a[1], repeats_a[2], repeats_a[3], repeats_a[4], repeats_a[5], repeats_a[6], repeats_a[7], repeats_a[8], repeats_a[9], duration_a, topk_a, topp_a, temperature_a, cfg_coef_a, seed_a, overlap_a, channel_a, sr_select_a], queue=False) - reuse_seed_a.click(fn=lambda x: x, inputs=[seed_used_a], outputs=[seed_a], queue=False) - send_audio_a.click(fn=lambda x: x, inputs=[backup_only_a], outputs=[audio_a], queue=False) - submit_a.click(predict_full, inputs=[gen_type_a, model_a, decoder_a, dropdown, basemodel, s_a, struc_prompts_a, bpm, key, scale, global_prompt_a, prompts_a[0], prompts_a[1], prompts_a[2], prompts_a[3], prompts_a[4], prompts_a[5], prompts_a[6], prompts_a[7], prompts_a[8], prompts_a[9], repeats_a[0], repeats_a[1], repeats_a[2], repeats_a[3], repeats_a[4], repeats_a[5], repeats_a[6], repeats_a[7], repeats_a[8], repeats_a[9], audio_a, mode_a, trim_start_a, trim_end_a, duration_a, topk_a, topp_a, temperature_a, cfg_coef_a, seed_a, overlap_a, image_a, height_a, width_a, background_a, bar1_a, bar2_a, channel_a, sr_select_a], outputs=[output_a, audio_only_a, backup_only_a, download_a, seed_used_a]) - input_type_a.change(toggle_audio_src, input_type_a, [audio_a], queue=False, show_progress=False) - to_calc_a.click(calc_time, inputs=[gen_type_a, s_a, duration_a, overlap_a, repeats_a[0], repeats_a[1], repeats_a[2], repeats_a[3], repeats_a[4], repeats_a[5], repeats_a[6], repeats_a[7], repeats_a[8], repeats_a[9]], outputs=[calcs_a[0], calcs_a[1], calcs_a[2], calcs_a[3], calcs_a[4], calcs_a[5], calcs_a[6], calcs_a[7], calcs_a[8], calcs_a[9]], queue=False) - - in_audio.change(get_audio_info, in_audio, outputs=[info]) - - def variable_outputs(k): - k = int(k) - 1 - return [gr.Textbox.update(visible=True)]*k + [gr.Textbox.update(visible=False)]*(max_textboxes-k) - def get_size(image): - if image is not None: - img = Image.open(image) - img_height = img.height - img_width = img.width - if (img_height%2) != 0: - img_height = img_height + 1 - if (img_width%2) != 0: - img_width = img_width + 1 - return img_height, img_width - else: - return 512, 768 - - image.change(get_size, image, outputs=[height, width]) - image_a.change(get_size, image_a, outputs=[height_a, width_a]) - s.change(variable_outputs, s, textboxes) - s_a.change(variable_outputs, s_a, textboxes_a) - interface.queue().launch(**launch_kwargs) - - -def ui_batched(launch_kwargs): - with gr.Blocks() as demo: - gr.Markdown( - """ - # MusicGen - - This is the demo for [MusicGen](https://github.com/facebookresearch/audiocraft), - a simple and controllable model for music generation - presented at: ["Simple and Controllable Music Generation"](https://huggingface.co/papers/2306.05284). - <br/> - <a href="https://huggingface.co/spaces/facebook/MusicGen?duplicate=true" - style="display: inline-block;margin-top: .5em;margin-right: .25em;" target="_blank"> - <img style="margin-bottom: 0em;display: inline;margin-top: -.25em;" - src="https://bit.ly/3gLdBN6" alt="Duplicate Space"></a> - for longer sequences, more control and no queue.</p> - """ - ) - with gr.Row(): - with gr.Column(): - with gr.Row(): - text = gr.Text(label="Describe your music", lines=2, interactive=True) - with gr.Column(): - radio = gr.Radio(["file", "mic"], value="file", - label="Condition on a melody (optional) File or Mic") - melody = gr.Audio(source="upload", type="numpy", label="File", - interactive=True, elem_id="melody-input") - with gr.Row(): - submit = gr.Button("Generate") - with gr.Column(): - output = gr.Video(label="Generated Music") - audio_output = gr.Audio(label="Generated Music (wav)", type='filepath') - submit.click(predict_batched, inputs=[text, melody], - outputs=[output, audio_output], batch=True, max_batch_size=MAX_BATCH_SIZE) - radio.change(toggle_audio_src, radio, [melody], queue=False, show_progress=False) - gr.Examples( - fn=predict_batched, - examples=[ - [ - "An 80s driving pop song with heavy drums and synth pads in the background", - "./assets/bach.mp3", - ], - [ - "A cheerful country song with acoustic guitars", - "./assets/bolero_ravel.mp3", - ], - [ - "90s rock song with electric guitar and heavy drums", - None, - ], - [ - "a light and cheerly EDM track, with syncopated drums, aery pads, and strong emotions bpm: 130", - "./assets/bach.mp3", - ], - [ - "lofi slow bpm electro chill with organic samples", - None, - ], - ], - inputs=[text, melody], - outputs=[output] - ) - gr.Markdown(""" - ### More details - - The model will generate 12 seconds of audio based on the description you provided. - You can optionally provide a reference audio from which a broad melody will be extracted. - The model will then try to follow both the description and melody provided. - All samples are generated with the `melody` model. - - You can also use your own GPU or a Google Colab by following the instructions on our repo. - - See [github.com/facebookresearch/audiocraft](https://github.com/facebookresearch/audiocraft) - for more details. - """) - - demo.queue(max_size=8 * 4).launch(**launch_kwargs) - - -if __name__ == "__main__": - parser = argparse.ArgumentParser() - parser.add_argument( - '--listen', - type=str, - default='0.0.0.0' if 'SPACE_ID' in os.environ else '127.0.0.1', - help='IP to listen on for connections to Gradio', - ) - parser.add_argument( - '--username', type=str, default='', help='Username for authentication' - ) - parser.add_argument( - '--password', type=str, default='', help='Password for authentication' - ) - parser.add_argument( - '--server_port', - type=int, - default=0, - help='Port to run the server listener on', - ) - parser.add_argument( - '--inbrowser', action='store_true', help='Open in browser' - ) - parser.add_argument( - '--share', action='store_true', help='Share the gradio UI' - ) - parser.add_argument( - '--unload_model', action='store_true', help='Unload the model after every generation to save GPU memory' - ) - - parser.add_argument( - '--unload_to_cpu', action='store_true', help='Move the model to main RAM after every generation to save GPU memory but reload faster than after full unload (see above)' - ) - - parser.add_argument( - '--cache', action='store_true', help='Cache models in RAM to quickly switch between them' - ) - - args = parser.parse_args() - UNLOAD_MODEL = args.unload_model - MOVE_TO_CPU = args.unload_to_cpu - if args.cache: - MODELS = {} - - launch_kwargs = {} - launch_kwargs['server_name'] = args.listen - - if args.username and args.password: - launch_kwargs['auth'] = (args.username, args.password) - if args.server_port: - launch_kwargs['server_port'] = args.server_port - if args.inbrowser: - launch_kwargs['inbrowser'] = args.inbrowser - if args.share: - launch_kwargs['share'] = args.share - - # Show the interface - if IS_BATCHED: - global USE_DIFFUSION - USE_DIFFUSION = False - ui_batched(launch_kwargs) - else: - ui_full(launch_kwargs) \ No newline at end of file diff --git a/spaces/tryolabs/transformers-optimization/README.md b/spaces/tryolabs/transformers-optimization/README.md deleted file mode 100644 index 733d3096a74ff722f4bea9ad4818ee47eda9ff60..0000000000000000000000000000000000000000 --- a/spaces/tryolabs/transformers-optimization/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Transformers Optimization -emoji: 🚀 -colorFrom: pink -colorTo: green -sdk: gradio -sdk_version: 3.9.1 -app_file: app.py -pinned: false -license: mit ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/user238921933/stable-diffusion-webui/modules/realesrgan_model.py b/spaces/user238921933/stable-diffusion-webui/modules/realesrgan_model.py deleted file mode 100644 index 41341a1b1f4f00d0f2a68c0b564d9f8ee23d1f2e..0000000000000000000000000000000000000000 --- a/spaces/user238921933/stable-diffusion-webui/modules/realesrgan_model.py +++ /dev/null @@ -1,129 +0,0 @@ -import os -import sys -import traceback - -import numpy as np -from PIL import Image -from basicsr.utils.download_util import load_file_from_url -from realesrgan import RealESRGANer - -from modules.upscaler import Upscaler, UpscalerData -from modules.shared import cmd_opts, opts - - -class UpscalerRealESRGAN(Upscaler): - def __init__(self, path): - self.name = "RealESRGAN" - self.user_path = path - super().__init__() - try: - from basicsr.archs.rrdbnet_arch import RRDBNet - from realesrgan import RealESRGANer - from realesrgan.archs.srvgg_arch import SRVGGNetCompact - self.enable = True - self.scalers = [] - scalers = self.load_models(path) - for scaler in scalers: - if scaler.name in opts.realesrgan_enabled_models: - self.scalers.append(scaler) - - except Exception: - print("Error importing Real-ESRGAN:", file=sys.stderr) - print(traceback.format_exc(), file=sys.stderr) - self.enable = False - self.scalers = [] - - def do_upscale(self, img, path): - if not self.enable: - return img - - info = self.load_model(path) - if not os.path.exists(info.local_data_path): - print("Unable to load RealESRGAN model: %s" % info.name) - return img - - upsampler = RealESRGANer( - scale=info.scale, - model_path=info.local_data_path, - model=info.model(), - half=not cmd_opts.no_half and not cmd_opts.upcast_sampling, - tile=opts.ESRGAN_tile, - tile_pad=opts.ESRGAN_tile_overlap, - ) - - upsampled = upsampler.enhance(np.array(img), outscale=info.scale)[0] - - image = Image.fromarray(upsampled) - return image - - def load_model(self, path): - try: - info = next(iter([scaler for scaler in self.scalers if scaler.data_path == path]), None) - - if info is None: - print(f"Unable to find model info: {path}") - return None - - info.local_data_path = load_file_from_url(url=info.data_path, model_dir=self.model_path, progress=True) - return info - except Exception as e: - print(f"Error making Real-ESRGAN models list: {e}", file=sys.stderr) - print(traceback.format_exc(), file=sys.stderr) - return None - - def load_models(self, _): - return get_realesrgan_models(self) - - -def get_realesrgan_models(scaler): - try: - from basicsr.archs.rrdbnet_arch import RRDBNet - from realesrgan.archs.srvgg_arch import SRVGGNetCompact - models = [ - UpscalerData( - name="R-ESRGAN General 4xV3", - path="https://github.com/xinntao/Real-ESRGAN/releases/download/v0.2.5.0/realesr-general-x4v3.pth", - scale=4, - upscaler=scaler, - model=lambda: SRVGGNetCompact(num_in_ch=3, num_out_ch=3, num_feat=64, num_conv=32, upscale=4, act_type='prelu') - ), - UpscalerData( - name="R-ESRGAN General WDN 4xV3", - path="https://github.com/xinntao/Real-ESRGAN/releases/download/v0.2.5.0/realesr-general-wdn-x4v3.pth", - scale=4, - upscaler=scaler, - model=lambda: SRVGGNetCompact(num_in_ch=3, num_out_ch=3, num_feat=64, num_conv=32, upscale=4, act_type='prelu') - ), - UpscalerData( - name="R-ESRGAN AnimeVideo", - path="https://github.com/xinntao/Real-ESRGAN/releases/download/v0.2.5.0/realesr-animevideov3.pth", - scale=4, - upscaler=scaler, - model=lambda: SRVGGNetCompact(num_in_ch=3, num_out_ch=3, num_feat=64, num_conv=16, upscale=4, act_type='prelu') - ), - UpscalerData( - name="R-ESRGAN 4x+", - path="https://github.com/xinntao/Real-ESRGAN/releases/download/v0.1.0/RealESRGAN_x4plus.pth", - scale=4, - upscaler=scaler, - model=lambda: RRDBNet(num_in_ch=3, num_out_ch=3, num_feat=64, num_block=23, num_grow_ch=32, scale=4) - ), - UpscalerData( - name="R-ESRGAN 4x+ Anime6B", - path="https://github.com/xinntao/Real-ESRGAN/releases/download/v0.2.2.4/RealESRGAN_x4plus_anime_6B.pth", - scale=4, - upscaler=scaler, - model=lambda: RRDBNet(num_in_ch=3, num_out_ch=3, num_feat=64, num_block=6, num_grow_ch=32, scale=4) - ), - UpscalerData( - name="R-ESRGAN 2x+", - path="https://github.com/xinntao/Real-ESRGAN/releases/download/v0.2.1/RealESRGAN_x2plus.pth", - scale=2, - upscaler=scaler, - model=lambda: RRDBNet(num_in_ch=3, num_out_ch=3, num_feat=64, num_block=23, num_grow_ch=32, scale=2) - ), - ] - return models - except Exception as e: - print("Error making Real-ESRGAN models list:", file=sys.stderr) - print(traceback.format_exc(), file=sys.stderr) diff --git a/spaces/vaishanthr/Simultaneous-Segmented-Depth-Prediction/yolov8/examples/YOLOv8-OpenCV-ONNX-Python/main.py b/spaces/vaishanthr/Simultaneous-Segmented-Depth-Prediction/yolov8/examples/YOLOv8-OpenCV-ONNX-Python/main.py deleted file mode 100644 index d1f635c274fb292443e65e1c2f3bde488353cf97..0000000000000000000000000000000000000000 --- a/spaces/vaishanthr/Simultaneous-Segmented-Depth-Prediction/yolov8/examples/YOLOv8-OpenCV-ONNX-Python/main.py +++ /dev/null @@ -1,80 +0,0 @@ -import argparse - -import cv2.dnn -import numpy as np - -from ultralytics.yolo.utils import ROOT, yaml_load -from ultralytics.yolo.utils.checks import check_yaml - -CLASSES = yaml_load(check_yaml('coco128.yaml'))['names'] - -colors = np.random.uniform(0, 255, size=(len(CLASSES), 3)) - - -def draw_bounding_box(img, class_id, confidence, x, y, x_plus_w, y_plus_h): - label = f'{CLASSES[class_id]} ({confidence:.2f})' - color = colors[class_id] - cv2.rectangle(img, (x, y), (x_plus_w, y_plus_h), color, 2) - cv2.putText(img, label, (x - 10, y - 10), cv2.FONT_HERSHEY_SIMPLEX, 0.5, color, 2) - - -def main(onnx_model, input_image): - model: cv2.dnn.Net = cv2.dnn.readNetFromONNX(onnx_model) - original_image: np.ndarray = cv2.imread(input_image) - [height, width, _] = original_image.shape - length = max((height, width)) - image = np.zeros((length, length, 3), np.uint8) - image[0:height, 0:width] = original_image - scale = length / 640 - - blob = cv2.dnn.blobFromImage(image, scalefactor=1 / 255, size=(640, 640), swapRB=True) - model.setInput(blob) - outputs = model.forward() - - outputs = np.array([cv2.transpose(outputs[0])]) - rows = outputs.shape[1] - - boxes = [] - scores = [] - class_ids = [] - - for i in range(rows): - classes_scores = outputs[0][i][4:] - (minScore, maxScore, minClassLoc, (x, maxClassIndex)) = cv2.minMaxLoc(classes_scores) - if maxScore >= 0.25: - box = [ - outputs[0][i][0] - (0.5 * outputs[0][i][2]), outputs[0][i][1] - (0.5 * outputs[0][i][3]), - outputs[0][i][2], outputs[0][i][3]] - boxes.append(box) - scores.append(maxScore) - class_ids.append(maxClassIndex) - - result_boxes = cv2.dnn.NMSBoxes(boxes, scores, 0.25, 0.45, 0.5) - - detections = [] - for i in range(len(result_boxes)): - index = result_boxes[i] - box = boxes[index] - detection = { - 'class_id': class_ids[index], - 'class_name': CLASSES[class_ids[index]], - 'confidence': scores[index], - 'box': box, - 'scale': scale} - detections.append(detection) - draw_bounding_box(original_image, class_ids[index], scores[index], round(box[0] * scale), round(box[1] * scale), - round((box[0] + box[2]) * scale), round((box[1] + box[3]) * scale)) - - cv2.imshow('image', original_image) - cv2.waitKey(0) - cv2.destroyAllWindows() - - return detections - - -if __name__ == '__main__': - parser = argparse.ArgumentParser() - parser.add_argument('--model', default='yolov8n.onnx', help='Input your onnx model.') - parser.add_argument('--img', default=str(ROOT / 'assets/bus.jpg'), help='Path to input image.') - args = parser.parse_args() - main(args.model, args.img) diff --git a/spaces/venz/AW-02-H5-AR-VR-IOT/README.md b/spaces/venz/AW-02-H5-AR-VR-IOT/README.md deleted file mode 100644 index a0ed3d6739bdd89584ab896f3013070b65ffd203..0000000000000000000000000000000000000000 --- a/spaces/venz/AW-02-H5-AR-VR-IOT/README.md +++ /dev/null @@ -1,11 +0,0 @@ ---- -title: AW 02 H5 AR VR IOT -emoji: ⚡ -colorFrom: purple -colorTo: blue -sdk: static -pinned: false -license: apache-2.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/vonbarnekowa/stable-diffusion/ldm/modules/image_degradation/utils_image.py b/spaces/vonbarnekowa/stable-diffusion/ldm/modules/image_degradation/utils_image.py deleted file mode 100644 index 0175f155ad900ae33c3c46ed87f49b352e3faf98..0000000000000000000000000000000000000000 --- a/spaces/vonbarnekowa/stable-diffusion/ldm/modules/image_degradation/utils_image.py +++ /dev/null @@ -1,916 +0,0 @@ -import os -import math -import random -import numpy as np -import torch -import cv2 -from torchvision.utils import make_grid -from datetime import datetime -#import matplotlib.pyplot as plt # TODO: check with Dominik, also bsrgan.py vs bsrgan_light.py - - -os.environ["KMP_DUPLICATE_LIB_OK"]="TRUE" - - -''' -# -------------------------------------------- -# Kai Zhang (github: https://github.com/cszn) -# 03/Mar/2019 -# -------------------------------------------- -# https://github.com/twhui/SRGAN-pyTorch -# https://github.com/xinntao/BasicSR -# -------------------------------------------- -''' - - -IMG_EXTENSIONS = ['.jpg', '.JPG', '.jpeg', '.JPEG', '.png', '.PNG', '.ppm', '.PPM', '.bmp', '.BMP', '.tif'] - - -def is_image_file(filename): - return any(filename.endswith(extension) for extension in IMG_EXTENSIONS) - - -def get_timestamp(): - return datetime.now().strftime('%y%m%d-%H%M%S') - - -def imshow(x, title=None, cbar=False, figsize=None): - plt.figure(figsize=figsize) - plt.imshow(np.squeeze(x), interpolation='nearest', cmap='gray') - if title: - plt.title(title) - if cbar: - plt.colorbar() - plt.show() - - -def surf(Z, cmap='rainbow', figsize=None): - plt.figure(figsize=figsize) - ax3 = plt.axes(projection='3d') - - w, h = Z.shape[:2] - xx = np.arange(0,w,1) - yy = np.arange(0,h,1) - X, Y = np.meshgrid(xx, yy) - ax3.plot_surface(X,Y,Z,cmap=cmap) - #ax3.contour(X,Y,Z, zdim='z',offset=-2,cmap=cmap) - plt.show() - - -''' -# -------------------------------------------- -# get image pathes -# -------------------------------------------- -''' - - -def get_image_paths(dataroot): - paths = None # return None if dataroot is None - if dataroot is not None: - paths = sorted(_get_paths_from_images(dataroot)) - return paths - - -def _get_paths_from_images(path): - assert os.path.isdir(path), '{:s} is not a valid directory'.format(path) - images = [] - for dirpath, _, fnames in sorted(os.walk(path)): - for fname in sorted(fnames): - if is_image_file(fname): - img_path = os.path.join(dirpath, fname) - images.append(img_path) - assert images, '{:s} has no valid image file'.format(path) - return images - - -''' -# -------------------------------------------- -# split large images into small images -# -------------------------------------------- -''' - - -def patches_from_image(img, p_size=512, p_overlap=64, p_max=800): - w, h = img.shape[:2] - patches = [] - if w > p_max and h > p_max: - w1 = list(np.arange(0, w-p_size, p_size-p_overlap, dtype=np.int)) - h1 = list(np.arange(0, h-p_size, p_size-p_overlap, dtype=np.int)) - w1.append(w-p_size) - h1.append(h-p_size) -# print(w1) -# print(h1) - for i in w1: - for j in h1: - patches.append(img[i:i+p_size, j:j+p_size,:]) - else: - patches.append(img) - - return patches - - -def imssave(imgs, img_path): - """ - imgs: list, N images of size WxHxC - """ - img_name, ext = os.path.splitext(os.path.basename(img_path)) - - for i, img in enumerate(imgs): - if img.ndim == 3: - img = img[:, :, [2, 1, 0]] - new_path = os.path.join(os.path.dirname(img_path), img_name+str('_s{:04d}'.format(i))+'.png') - cv2.imwrite(new_path, img) - - -def split_imageset(original_dataroot, taget_dataroot, n_channels=3, p_size=800, p_overlap=96, p_max=1000): - """ - split the large images from original_dataroot into small overlapped images with size (p_size)x(p_size), - and save them into taget_dataroot; only the images with larger size than (p_max)x(p_max) - will be splitted. - Args: - original_dataroot: - taget_dataroot: - p_size: size of small images - p_overlap: patch size in training is a good choice - p_max: images with smaller size than (p_max)x(p_max) keep unchanged. - """ - paths = get_image_paths(original_dataroot) - for img_path in paths: - # img_name, ext = os.path.splitext(os.path.basename(img_path)) - img = imread_uint(img_path, n_channels=n_channels) - patches = patches_from_image(img, p_size, p_overlap, p_max) - imssave(patches, os.path.join(taget_dataroot,os.path.basename(img_path))) - #if original_dataroot == taget_dataroot: - #del img_path - -''' -# -------------------------------------------- -# makedir -# -------------------------------------------- -''' - - -def mkdir(path): - if not os.path.exists(path): - os.makedirs(path) - - -def mkdirs(paths): - if isinstance(paths, str): - mkdir(paths) - else: - for path in paths: - mkdir(path) - - -def mkdir_and_rename(path): - if os.path.exists(path): - new_name = path + '_archived_' + get_timestamp() - print('Path already exists. Rename it to [{:s}]'.format(new_name)) - os.rename(path, new_name) - os.makedirs(path) - - -''' -# -------------------------------------------- -# read image from path -# opencv is fast, but read BGR numpy image -# -------------------------------------------- -''' - - -# -------------------------------------------- -# get uint8 image of size HxWxn_channles (RGB) -# -------------------------------------------- -def imread_uint(path, n_channels=3): - # input: path - # output: HxWx3(RGB or GGG), or HxWx1 (G) - if n_channels == 1: - img = cv2.imread(path, 0) # cv2.IMREAD_GRAYSCALE - img = np.expand_dims(img, axis=2) # HxWx1 - elif n_channels == 3: - img = cv2.imread(path, cv2.IMREAD_UNCHANGED) # BGR or G - if img.ndim == 2: - img = cv2.cvtColor(img, cv2.COLOR_GRAY2RGB) # GGG - else: - img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB) # RGB - return img - - -# -------------------------------------------- -# matlab's imwrite -# -------------------------------------------- -def imsave(img, img_path): - img = np.squeeze(img) - if img.ndim == 3: - img = img[:, :, [2, 1, 0]] - cv2.imwrite(img_path, img) - -def imwrite(img, img_path): - img = np.squeeze(img) - if img.ndim == 3: - img = img[:, :, [2, 1, 0]] - cv2.imwrite(img_path, img) - - - -# -------------------------------------------- -# get single image of size HxWxn_channles (BGR) -# -------------------------------------------- -def read_img(path): - # read image by cv2 - # return: Numpy float32, HWC, BGR, [0,1] - img = cv2.imread(path, cv2.IMREAD_UNCHANGED) # cv2.IMREAD_GRAYSCALE - img = img.astype(np.float32) / 255. - if img.ndim == 2: - img = np.expand_dims(img, axis=2) - # some images have 4 channels - if img.shape[2] > 3: - img = img[:, :, :3] - return img - - -''' -# -------------------------------------------- -# image format conversion -# -------------------------------------------- -# numpy(single) <---> numpy(unit) -# numpy(single) <---> tensor -# numpy(unit) <---> tensor -# -------------------------------------------- -''' - - -# -------------------------------------------- -# numpy(single) [0, 1] <---> numpy(unit) -# -------------------------------------------- - - -def uint2single(img): - - return np.float32(img/255.) - - -def single2uint(img): - - return np.uint8((img.clip(0, 1)*255.).round()) - - -def uint162single(img): - - return np.float32(img/65535.) - - -def single2uint16(img): - - return np.uint16((img.clip(0, 1)*65535.).round()) - - -# -------------------------------------------- -# numpy(unit) (HxWxC or HxW) <---> tensor -# -------------------------------------------- - - -# convert uint to 4-dimensional torch tensor -def uint2tensor4(img): - if img.ndim == 2: - img = np.expand_dims(img, axis=2) - return torch.from_numpy(np.ascontiguousarray(img)).permute(2, 0, 1).float().div(255.).unsqueeze(0) - - -# convert uint to 3-dimensional torch tensor -def uint2tensor3(img): - if img.ndim == 2: - img = np.expand_dims(img, axis=2) - return torch.from_numpy(np.ascontiguousarray(img)).permute(2, 0, 1).float().div(255.) - - -# convert 2/3/4-dimensional torch tensor to uint -def tensor2uint(img): - img = img.data.squeeze().float().clamp_(0, 1).cpu().numpy() - if img.ndim == 3: - img = np.transpose(img, (1, 2, 0)) - return np.uint8((img*255.0).round()) - - -# -------------------------------------------- -# numpy(single) (HxWxC) <---> tensor -# -------------------------------------------- - - -# convert single (HxWxC) to 3-dimensional torch tensor -def single2tensor3(img): - return torch.from_numpy(np.ascontiguousarray(img)).permute(2, 0, 1).float() - - -# convert single (HxWxC) to 4-dimensional torch tensor -def single2tensor4(img): - return torch.from_numpy(np.ascontiguousarray(img)).permute(2, 0, 1).float().unsqueeze(0) - - -# convert torch tensor to single -def tensor2single(img): - img = img.data.squeeze().float().cpu().numpy() - if img.ndim == 3: - img = np.transpose(img, (1, 2, 0)) - - return img - -# convert torch tensor to single -def tensor2single3(img): - img = img.data.squeeze().float().cpu().numpy() - if img.ndim == 3: - img = np.transpose(img, (1, 2, 0)) - elif img.ndim == 2: - img = np.expand_dims(img, axis=2) - return img - - -def single2tensor5(img): - return torch.from_numpy(np.ascontiguousarray(img)).permute(2, 0, 1, 3).float().unsqueeze(0) - - -def single32tensor5(img): - return torch.from_numpy(np.ascontiguousarray(img)).float().unsqueeze(0).unsqueeze(0) - - -def single42tensor4(img): - return torch.from_numpy(np.ascontiguousarray(img)).permute(2, 0, 1, 3).float() - - -# from skimage.io import imread, imsave -def tensor2img(tensor, out_type=np.uint8, min_max=(0, 1)): - ''' - Converts a torch Tensor into an image Numpy array of BGR channel order - Input: 4D(B,(3/1),H,W), 3D(C,H,W), or 2D(H,W), any range, RGB channel order - Output: 3D(H,W,C) or 2D(H,W), [0,255], np.uint8 (default) - ''' - tensor = tensor.squeeze().float().cpu().clamp_(*min_max) # squeeze first, then clamp - tensor = (tensor - min_max[0]) / (min_max[1] - min_max[0]) # to range [0,1] - n_dim = tensor.dim() - if n_dim == 4: - n_img = len(tensor) - img_np = make_grid(tensor, nrow=int(math.sqrt(n_img)), normalize=False).numpy() - img_np = np.transpose(img_np[[2, 1, 0], :, :], (1, 2, 0)) # HWC, BGR - elif n_dim == 3: - img_np = tensor.numpy() - img_np = np.transpose(img_np[[2, 1, 0], :, :], (1, 2, 0)) # HWC, BGR - elif n_dim == 2: - img_np = tensor.numpy() - else: - raise TypeError( - 'Only support 4D, 3D and 2D tensor. But received with dimension: {:d}'.format(n_dim)) - if out_type == np.uint8: - img_np = (img_np * 255.0).round() - # Important. Unlike matlab, numpy.unit8() WILL NOT round by default. - return img_np.astype(out_type) - - -''' -# -------------------------------------------- -# Augmentation, flipe and/or rotate -# -------------------------------------------- -# The following two are enough. -# (1) augmet_img: numpy image of WxHxC or WxH -# (2) augment_img_tensor4: tensor image 1xCxWxH -# -------------------------------------------- -''' - - -def augment_img(img, mode=0): - '''Kai Zhang (github: https://github.com/cszn) - ''' - if mode == 0: - return img - elif mode == 1: - return np.flipud(np.rot90(img)) - elif mode == 2: - return np.flipud(img) - elif mode == 3: - return np.rot90(img, k=3) - elif mode == 4: - return np.flipud(np.rot90(img, k=2)) - elif mode == 5: - return np.rot90(img) - elif mode == 6: - return np.rot90(img, k=2) - elif mode == 7: - return np.flipud(np.rot90(img, k=3)) - - -def augment_img_tensor4(img, mode=0): - '''Kai Zhang (github: https://github.com/cszn) - ''' - if mode == 0: - return img - elif mode == 1: - return img.rot90(1, [2, 3]).flip([2]) - elif mode == 2: - return img.flip([2]) - elif mode == 3: - return img.rot90(3, [2, 3]) - elif mode == 4: - return img.rot90(2, [2, 3]).flip([2]) - elif mode == 5: - return img.rot90(1, [2, 3]) - elif mode == 6: - return img.rot90(2, [2, 3]) - elif mode == 7: - return img.rot90(3, [2, 3]).flip([2]) - - -def augment_img_tensor(img, mode=0): - '''Kai Zhang (github: https://github.com/cszn) - ''' - img_size = img.size() - img_np = img.data.cpu().numpy() - if len(img_size) == 3: - img_np = np.transpose(img_np, (1, 2, 0)) - elif len(img_size) == 4: - img_np = np.transpose(img_np, (2, 3, 1, 0)) - img_np = augment_img(img_np, mode=mode) - img_tensor = torch.from_numpy(np.ascontiguousarray(img_np)) - if len(img_size) == 3: - img_tensor = img_tensor.permute(2, 0, 1) - elif len(img_size) == 4: - img_tensor = img_tensor.permute(3, 2, 0, 1) - - return img_tensor.type_as(img) - - -def augment_img_np3(img, mode=0): - if mode == 0: - return img - elif mode == 1: - return img.transpose(1, 0, 2) - elif mode == 2: - return img[::-1, :, :] - elif mode == 3: - img = img[::-1, :, :] - img = img.transpose(1, 0, 2) - return img - elif mode == 4: - return img[:, ::-1, :] - elif mode == 5: - img = img[:, ::-1, :] - img = img.transpose(1, 0, 2) - return img - elif mode == 6: - img = img[:, ::-1, :] - img = img[::-1, :, :] - return img - elif mode == 7: - img = img[:, ::-1, :] - img = img[::-1, :, :] - img = img.transpose(1, 0, 2) - return img - - -def augment_imgs(img_list, hflip=True, rot=True): - # horizontal flip OR rotate - hflip = hflip and random.random() < 0.5 - vflip = rot and random.random() < 0.5 - rot90 = rot and random.random() < 0.5 - - def _augment(img): - if hflip: - img = img[:, ::-1, :] - if vflip: - img = img[::-1, :, :] - if rot90: - img = img.transpose(1, 0, 2) - return img - - return [_augment(img) for img in img_list] - - -''' -# -------------------------------------------- -# modcrop and shave -# -------------------------------------------- -''' - - -def modcrop(img_in, scale): - # img_in: Numpy, HWC or HW - img = np.copy(img_in) - if img.ndim == 2: - H, W = img.shape - H_r, W_r = H % scale, W % scale - img = img[:H - H_r, :W - W_r] - elif img.ndim == 3: - H, W, C = img.shape - H_r, W_r = H % scale, W % scale - img = img[:H - H_r, :W - W_r, :] - else: - raise ValueError('Wrong img ndim: [{:d}].'.format(img.ndim)) - return img - - -def shave(img_in, border=0): - # img_in: Numpy, HWC or HW - img = np.copy(img_in) - h, w = img.shape[:2] - img = img[border:h-border, border:w-border] - return img - - -''' -# -------------------------------------------- -# image processing process on numpy image -# channel_convert(in_c, tar_type, img_list): -# rgb2ycbcr(img, only_y=True): -# bgr2ycbcr(img, only_y=True): -# ycbcr2rgb(img): -# -------------------------------------------- -''' - - -def rgb2ycbcr(img, only_y=True): - '''same as matlab rgb2ycbcr - only_y: only return Y channel - Input: - uint8, [0, 255] - float, [0, 1] - ''' - in_img_type = img.dtype - img.astype(np.float32) - if in_img_type != np.uint8: - img *= 255. - # convert - if only_y: - rlt = np.dot(img, [65.481, 128.553, 24.966]) / 255.0 + 16.0 - else: - rlt = np.matmul(img, [[65.481, -37.797, 112.0], [128.553, -74.203, -93.786], - [24.966, 112.0, -18.214]]) / 255.0 + [16, 128, 128] - if in_img_type == np.uint8: - rlt = rlt.round() - else: - rlt /= 255. - return rlt.astype(in_img_type) - - -def ycbcr2rgb(img): - '''same as matlab ycbcr2rgb - Input: - uint8, [0, 255] - float, [0, 1] - ''' - in_img_type = img.dtype - img.astype(np.float32) - if in_img_type != np.uint8: - img *= 255. - # convert - rlt = np.matmul(img, [[0.00456621, 0.00456621, 0.00456621], [0, -0.00153632, 0.00791071], - [0.00625893, -0.00318811, 0]]) * 255.0 + [-222.921, 135.576, -276.836] - if in_img_type == np.uint8: - rlt = rlt.round() - else: - rlt /= 255. - return rlt.astype(in_img_type) - - -def bgr2ycbcr(img, only_y=True): - '''bgr version of rgb2ycbcr - only_y: only return Y channel - Input: - uint8, [0, 255] - float, [0, 1] - ''' - in_img_type = img.dtype - img.astype(np.float32) - if in_img_type != np.uint8: - img *= 255. - # convert - if only_y: - rlt = np.dot(img, [24.966, 128.553, 65.481]) / 255.0 + 16.0 - else: - rlt = np.matmul(img, [[24.966, 112.0, -18.214], [128.553, -74.203, -93.786], - [65.481, -37.797, 112.0]]) / 255.0 + [16, 128, 128] - if in_img_type == np.uint8: - rlt = rlt.round() - else: - rlt /= 255. - return rlt.astype(in_img_type) - - -def channel_convert(in_c, tar_type, img_list): - # conversion among BGR, gray and y - if in_c == 3 and tar_type == 'gray': # BGR to gray - gray_list = [cv2.cvtColor(img, cv2.COLOR_BGR2GRAY) for img in img_list] - return [np.expand_dims(img, axis=2) for img in gray_list] - elif in_c == 3 and tar_type == 'y': # BGR to y - y_list = [bgr2ycbcr(img, only_y=True) for img in img_list] - return [np.expand_dims(img, axis=2) for img in y_list] - elif in_c == 1 and tar_type == 'RGB': # gray/y to BGR - return [cv2.cvtColor(img, cv2.COLOR_GRAY2BGR) for img in img_list] - else: - return img_list - - -''' -# -------------------------------------------- -# metric, PSNR and SSIM -# -------------------------------------------- -''' - - -# -------------------------------------------- -# PSNR -# -------------------------------------------- -def calculate_psnr(img1, img2, border=0): - # img1 and img2 have range [0, 255] - #img1 = img1.squeeze() - #img2 = img2.squeeze() - if not img1.shape == img2.shape: - raise ValueError('Input images must have the same dimensions.') - h, w = img1.shape[:2] - img1 = img1[border:h-border, border:w-border] - img2 = img2[border:h-border, border:w-border] - - img1 = img1.astype(np.float64) - img2 = img2.astype(np.float64) - mse = np.mean((img1 - img2)**2) - if mse == 0: - return float('inf') - return 20 * math.log10(255.0 / math.sqrt(mse)) - - -# -------------------------------------------- -# SSIM -# -------------------------------------------- -def calculate_ssim(img1, img2, border=0): - '''calculate SSIM - the same outputs as MATLAB's - img1, img2: [0, 255] - ''' - #img1 = img1.squeeze() - #img2 = img2.squeeze() - if not img1.shape == img2.shape: - raise ValueError('Input images must have the same dimensions.') - h, w = img1.shape[:2] - img1 = img1[border:h-border, border:w-border] - img2 = img2[border:h-border, border:w-border] - - if img1.ndim == 2: - return ssim(img1, img2) - elif img1.ndim == 3: - if img1.shape[2] == 3: - ssims = [] - for i in range(3): - ssims.append(ssim(img1[:,:,i], img2[:,:,i])) - return np.array(ssims).mean() - elif img1.shape[2] == 1: - return ssim(np.squeeze(img1), np.squeeze(img2)) - else: - raise ValueError('Wrong input image dimensions.') - - -def ssim(img1, img2): - C1 = (0.01 * 255)**2 - C2 = (0.03 * 255)**2 - - img1 = img1.astype(np.float64) - img2 = img2.astype(np.float64) - kernel = cv2.getGaussianKernel(11, 1.5) - window = np.outer(kernel, kernel.transpose()) - - mu1 = cv2.filter2D(img1, -1, window)[5:-5, 5:-5] # valid - mu2 = cv2.filter2D(img2, -1, window)[5:-5, 5:-5] - mu1_sq = mu1**2 - mu2_sq = mu2**2 - mu1_mu2 = mu1 * mu2 - sigma1_sq = cv2.filter2D(img1**2, -1, window)[5:-5, 5:-5] - mu1_sq - sigma2_sq = cv2.filter2D(img2**2, -1, window)[5:-5, 5:-5] - mu2_sq - sigma12 = cv2.filter2D(img1 * img2, -1, window)[5:-5, 5:-5] - mu1_mu2 - - ssim_map = ((2 * mu1_mu2 + C1) * (2 * sigma12 + C2)) / ((mu1_sq + mu2_sq + C1) * - (sigma1_sq + sigma2_sq + C2)) - return ssim_map.mean() - - -''' -# -------------------------------------------- -# matlab's bicubic imresize (numpy and torch) [0, 1] -# -------------------------------------------- -''' - - -# matlab 'imresize' function, now only support 'bicubic' -def cubic(x): - absx = torch.abs(x) - absx2 = absx**2 - absx3 = absx**3 - return (1.5*absx3 - 2.5*absx2 + 1) * ((absx <= 1).type_as(absx)) + \ - (-0.5*absx3 + 2.5*absx2 - 4*absx + 2) * (((absx > 1)*(absx <= 2)).type_as(absx)) - - -def calculate_weights_indices(in_length, out_length, scale, kernel, kernel_width, antialiasing): - if (scale < 1) and (antialiasing): - # Use a modified kernel to simultaneously interpolate and antialias- larger kernel width - kernel_width = kernel_width / scale - - # Output-space coordinates - x = torch.linspace(1, out_length, out_length) - - # Input-space coordinates. Calculate the inverse mapping such that 0.5 - # in output space maps to 0.5 in input space, and 0.5+scale in output - # space maps to 1.5 in input space. - u = x / scale + 0.5 * (1 - 1 / scale) - - # What is the left-most pixel that can be involved in the computation? - left = torch.floor(u - kernel_width / 2) - - # What is the maximum number of pixels that can be involved in the - # computation? Note: it's OK to use an extra pixel here; if the - # corresponding weights are all zero, it will be eliminated at the end - # of this function. - P = math.ceil(kernel_width) + 2 - - # The indices of the input pixels involved in computing the k-th output - # pixel are in row k of the indices matrix. - indices = left.view(out_length, 1).expand(out_length, P) + torch.linspace(0, P - 1, P).view( - 1, P).expand(out_length, P) - - # The weights used to compute the k-th output pixel are in row k of the - # weights matrix. - distance_to_center = u.view(out_length, 1).expand(out_length, P) - indices - # apply cubic kernel - if (scale < 1) and (antialiasing): - weights = scale * cubic(distance_to_center * scale) - else: - weights = cubic(distance_to_center) - # Normalize the weights matrix so that each row sums to 1. - weights_sum = torch.sum(weights, 1).view(out_length, 1) - weights = weights / weights_sum.expand(out_length, P) - - # If a column in weights is all zero, get rid of it. only consider the first and last column. - weights_zero_tmp = torch.sum((weights == 0), 0) - if not math.isclose(weights_zero_tmp[0], 0, rel_tol=1e-6): - indices = indices.narrow(1, 1, P - 2) - weights = weights.narrow(1, 1, P - 2) - if not math.isclose(weights_zero_tmp[-1], 0, rel_tol=1e-6): - indices = indices.narrow(1, 0, P - 2) - weights = weights.narrow(1, 0, P - 2) - weights = weights.contiguous() - indices = indices.contiguous() - sym_len_s = -indices.min() + 1 - sym_len_e = indices.max() - in_length - indices = indices + sym_len_s - 1 - return weights, indices, int(sym_len_s), int(sym_len_e) - - -# -------------------------------------------- -# imresize for tensor image [0, 1] -# -------------------------------------------- -def imresize(img, scale, antialiasing=True): - # Now the scale should be the same for H and W - # input: img: pytorch tensor, CHW or HW [0,1] - # output: CHW or HW [0,1] w/o round - need_squeeze = True if img.dim() == 2 else False - if need_squeeze: - img.unsqueeze_(0) - in_C, in_H, in_W = img.size() - out_C, out_H, out_W = in_C, math.ceil(in_H * scale), math.ceil(in_W * scale) - kernel_width = 4 - kernel = 'cubic' - - # Return the desired dimension order for performing the resize. The - # strategy is to perform the resize first along the dimension with the - # smallest scale factor. - # Now we do not support this. - - # get weights and indices - weights_H, indices_H, sym_len_Hs, sym_len_He = calculate_weights_indices( - in_H, out_H, scale, kernel, kernel_width, antialiasing) - weights_W, indices_W, sym_len_Ws, sym_len_We = calculate_weights_indices( - in_W, out_W, scale, kernel, kernel_width, antialiasing) - # process H dimension - # symmetric copying - img_aug = torch.FloatTensor(in_C, in_H + sym_len_Hs + sym_len_He, in_W) - img_aug.narrow(1, sym_len_Hs, in_H).copy_(img) - - sym_patch = img[:, :sym_len_Hs, :] - inv_idx = torch.arange(sym_patch.size(1) - 1, -1, -1).long() - sym_patch_inv = sym_patch.index_select(1, inv_idx) - img_aug.narrow(1, 0, sym_len_Hs).copy_(sym_patch_inv) - - sym_patch = img[:, -sym_len_He:, :] - inv_idx = torch.arange(sym_patch.size(1) - 1, -1, -1).long() - sym_patch_inv = sym_patch.index_select(1, inv_idx) - img_aug.narrow(1, sym_len_Hs + in_H, sym_len_He).copy_(sym_patch_inv) - - out_1 = torch.FloatTensor(in_C, out_H, in_W) - kernel_width = weights_H.size(1) - for i in range(out_H): - idx = int(indices_H[i][0]) - for j in range(out_C): - out_1[j, i, :] = img_aug[j, idx:idx + kernel_width, :].transpose(0, 1).mv(weights_H[i]) - - # process W dimension - # symmetric copying - out_1_aug = torch.FloatTensor(in_C, out_H, in_W + sym_len_Ws + sym_len_We) - out_1_aug.narrow(2, sym_len_Ws, in_W).copy_(out_1) - - sym_patch = out_1[:, :, :sym_len_Ws] - inv_idx = torch.arange(sym_patch.size(2) - 1, -1, -1).long() - sym_patch_inv = sym_patch.index_select(2, inv_idx) - out_1_aug.narrow(2, 0, sym_len_Ws).copy_(sym_patch_inv) - - sym_patch = out_1[:, :, -sym_len_We:] - inv_idx = torch.arange(sym_patch.size(2) - 1, -1, -1).long() - sym_patch_inv = sym_patch.index_select(2, inv_idx) - out_1_aug.narrow(2, sym_len_Ws + in_W, sym_len_We).copy_(sym_patch_inv) - - out_2 = torch.FloatTensor(in_C, out_H, out_W) - kernel_width = weights_W.size(1) - for i in range(out_W): - idx = int(indices_W[i][0]) - for j in range(out_C): - out_2[j, :, i] = out_1_aug[j, :, idx:idx + kernel_width].mv(weights_W[i]) - if need_squeeze: - out_2.squeeze_() - return out_2 - - -# -------------------------------------------- -# imresize for numpy image [0, 1] -# -------------------------------------------- -def imresize_np(img, scale, antialiasing=True): - # Now the scale should be the same for H and W - # input: img: Numpy, HWC or HW [0,1] - # output: HWC or HW [0,1] w/o round - img = torch.from_numpy(img) - need_squeeze = True if img.dim() == 2 else False - if need_squeeze: - img.unsqueeze_(2) - - in_H, in_W, in_C = img.size() - out_C, out_H, out_W = in_C, math.ceil(in_H * scale), math.ceil(in_W * scale) - kernel_width = 4 - kernel = 'cubic' - - # Return the desired dimension order for performing the resize. The - # strategy is to perform the resize first along the dimension with the - # smallest scale factor. - # Now we do not support this. - - # get weights and indices - weights_H, indices_H, sym_len_Hs, sym_len_He = calculate_weights_indices( - in_H, out_H, scale, kernel, kernel_width, antialiasing) - weights_W, indices_W, sym_len_Ws, sym_len_We = calculate_weights_indices( - in_W, out_W, scale, kernel, kernel_width, antialiasing) - # process H dimension - # symmetric copying - img_aug = torch.FloatTensor(in_H + sym_len_Hs + sym_len_He, in_W, in_C) - img_aug.narrow(0, sym_len_Hs, in_H).copy_(img) - - sym_patch = img[:sym_len_Hs, :, :] - inv_idx = torch.arange(sym_patch.size(0) - 1, -1, -1).long() - sym_patch_inv = sym_patch.index_select(0, inv_idx) - img_aug.narrow(0, 0, sym_len_Hs).copy_(sym_patch_inv) - - sym_patch = img[-sym_len_He:, :, :] - inv_idx = torch.arange(sym_patch.size(0) - 1, -1, -1).long() - sym_patch_inv = sym_patch.index_select(0, inv_idx) - img_aug.narrow(0, sym_len_Hs + in_H, sym_len_He).copy_(sym_patch_inv) - - out_1 = torch.FloatTensor(out_H, in_W, in_C) - kernel_width = weights_H.size(1) - for i in range(out_H): - idx = int(indices_H[i][0]) - for j in range(out_C): - out_1[i, :, j] = img_aug[idx:idx + kernel_width, :, j].transpose(0, 1).mv(weights_H[i]) - - # process W dimension - # symmetric copying - out_1_aug = torch.FloatTensor(out_H, in_W + sym_len_Ws + sym_len_We, in_C) - out_1_aug.narrow(1, sym_len_Ws, in_W).copy_(out_1) - - sym_patch = out_1[:, :sym_len_Ws, :] - inv_idx = torch.arange(sym_patch.size(1) - 1, -1, -1).long() - sym_patch_inv = sym_patch.index_select(1, inv_idx) - out_1_aug.narrow(1, 0, sym_len_Ws).copy_(sym_patch_inv) - - sym_patch = out_1[:, -sym_len_We:, :] - inv_idx = torch.arange(sym_patch.size(1) - 1, -1, -1).long() - sym_patch_inv = sym_patch.index_select(1, inv_idx) - out_1_aug.narrow(1, sym_len_Ws + in_W, sym_len_We).copy_(sym_patch_inv) - - out_2 = torch.FloatTensor(out_H, out_W, in_C) - kernel_width = weights_W.size(1) - for i in range(out_W): - idx = int(indices_W[i][0]) - for j in range(out_C): - out_2[:, i, j] = out_1_aug[:, idx:idx + kernel_width, j].mv(weights_W[i]) - if need_squeeze: - out_2.squeeze_() - - return out_2.numpy() - - -if __name__ == '__main__': - print('---') -# img = imread_uint('test.bmp', 3) -# img = uint2single(img) -# img_bicubic = imresize_np(img, 1/4) \ No newline at end of file diff --git a/spaces/w1zrd/MusicGen/tests/data/test_audio_utils.py b/spaces/w1zrd/MusicGen/tests/data/test_audio_utils.py deleted file mode 100644 index 0480671bb17281d61ce02bce6373a5ccec89fece..0000000000000000000000000000000000000000 --- a/spaces/w1zrd/MusicGen/tests/data/test_audio_utils.py +++ /dev/null @@ -1,110 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -import julius -import torch -import pytest - -from audiocraft.data.audio_utils import ( - _clip_wav, - convert_audio_channels, - convert_audio, - normalize_audio -) -from ..common_utils import get_batch_white_noise - - -class TestConvertAudioChannels: - - def test_convert_audio_channels_downmix(self): - b, c, t = 2, 3, 100 - audio = get_batch_white_noise(b, c, t) - mixed = convert_audio_channels(audio, channels=2) - assert list(mixed.shape) == [b, 2, t] - - def test_convert_audio_channels_nochange(self): - b, c, t = 2, 3, 100 - audio = get_batch_white_noise(b, c, t) - mixed = convert_audio_channels(audio, channels=c) - assert list(mixed.shape) == list(audio.shape) - - def test_convert_audio_channels_upmix(self): - b, c, t = 2, 1, 100 - audio = get_batch_white_noise(b, c, t) - mixed = convert_audio_channels(audio, channels=3) - assert list(mixed.shape) == [b, 3, t] - - def test_convert_audio_channels_upmix_error(self): - b, c, t = 2, 2, 100 - audio = get_batch_white_noise(b, c, t) - with pytest.raises(ValueError): - convert_audio_channels(audio, channels=3) - - -class TestConvertAudio: - - def test_convert_audio_channels_downmix(self): - b, c, dur = 2, 3, 4. - sr = 128 - audio = get_batch_white_noise(b, c, int(sr * dur)) - out = convert_audio(audio, from_rate=sr, to_rate=sr, to_channels=2) - assert list(out.shape) == [audio.shape[0], 2, audio.shape[-1]] - - def test_convert_audio_channels_upmix(self): - b, c, dur = 2, 1, 4. - sr = 128 - audio = get_batch_white_noise(b, c, int(sr * dur)) - out = convert_audio(audio, from_rate=sr, to_rate=sr, to_channels=3) - assert list(out.shape) == [audio.shape[0], 3, audio.shape[-1]] - - def test_convert_audio_upsample(self): - b, c, dur = 2, 1, 4. - sr = 2 - new_sr = 3 - audio = get_batch_white_noise(b, c, int(sr * dur)) - out = convert_audio(audio, from_rate=sr, to_rate=new_sr, to_channels=c) - out_j = julius.resample.resample_frac(audio, old_sr=sr, new_sr=new_sr) - assert torch.allclose(out, out_j) - - def test_convert_audio_resample(self): - b, c, dur = 2, 1, 4. - sr = 3 - new_sr = 2 - audio = get_batch_white_noise(b, c, int(sr * dur)) - out = convert_audio(audio, from_rate=sr, to_rate=new_sr, to_channels=c) - out_j = julius.resample.resample_frac(audio, old_sr=sr, new_sr=new_sr) - assert torch.allclose(out, out_j) - - -class TestNormalizeAudio: - - def test_clip_wav(self): - b, c, dur = 2, 1, 4. - sr = 3 - audio = 10.0 * get_batch_white_noise(b, c, int(sr * dur)) - _clip_wav(audio) - assert audio.abs().max() <= 1 - - def test_normalize_audio_clip(self): - b, c, dur = 2, 1, 4. - sr = 3 - audio = 10.0 * get_batch_white_noise(b, c, int(sr * dur)) - norm_audio = normalize_audio(audio, strategy='clip') - assert norm_audio.abs().max() <= 1 - - def test_normalize_audio_rms(self): - b, c, dur = 2, 1, 4. - sr = 3 - audio = 10.0 * get_batch_white_noise(b, c, int(sr * dur)) - norm_audio = normalize_audio(audio, strategy='rms') - assert norm_audio.abs().max() <= 1 - - def test_normalize_audio_peak(self): - b, c, dur = 2, 1, 4. - sr = 3 - audio = 10.0 * get_batch_white_noise(b, c, int(sr * dur)) - norm_audio = normalize_audio(audio, strategy='peak') - assert norm_audio.abs().max() <= 1 diff --git a/spaces/wffcyrus/MetaGPT-v1/metagpt/prompts/metagpt_sample.py b/spaces/wffcyrus/MetaGPT-v1/metagpt/prompts/metagpt_sample.py deleted file mode 100644 index 24af8d8c352f39d22e39cdee48fc40b0c8c22ff6..0000000000000000000000000000000000000000 --- a/spaces/wffcyrus/MetaGPT-v1/metagpt/prompts/metagpt_sample.py +++ /dev/null @@ -1,40 +0,0 @@ -#!/usr/bin/env python -# -*- coding: utf-8 -*- -""" -@Time : 2023/6/7 20:29 -@Author : alexanderwu -@File : metagpt_sample.py -""" - -METAGPT_SAMPLE = """ -### 设定 - -你是一个用户的编程助手,可以使用公共库与python系统库进行编程,你的回复应该有且只有一个函数。 -1. 函数本身应尽可能完整,不应缺失需求细节 -2. 你可能需要写一些提示词,用来让LLM(你自己)理解带有上下文的搜索请求 -3. 面对复杂的、难以用简单函数解决的逻辑,尽量交给llm解决 - -### 公共库 - -你可以使用公共库metagpt提供的函数,不能使用其他第三方库的函数。公共库默认已经被import为x变量 -- `import metagpt as x` -- 你可以使用 `x.func(paras)` 方式来对公共库进行调用。 - -公共库中已有函数如下 -- def llm(question: str) -> str # 输入问题,基于大模型进行回答 -- def intent_detection(query: str) -> str # 输入query,分析意图,返回公共库函数名 -- def add_doc(doc_path: str) -> None # 输入文件路径或者文件夹路径,加入知识库 -- def search(query: str) -> list[str] # 输入query返回向量知识库搜索的多个结果 -- def google(query: str) -> list[str] # 使用google查询公网结果 -- def math(query: str) -> str # 输入query公式,返回对公式执行的结果 -- def tts(text: str, wav_path: str) # 输入text文本与对应想要输出音频的路径,将文本转为音频文件 - -### 用户需求 - -我有一个个人知识库文件,我希望基于它来实现一个带有搜索功能的个人助手,需求细则如下 -1. 个人助手会思考是否需要使用个人知识库搜索,如果没有必要,就不使用它 -2. 个人助手会判断用户意图,在不同意图下使用恰当的函数解决问题 -3. 用语音回答 - -""" -# - def summarize(doc: str) -> str # 输入doc返回摘要 diff --git a/spaces/wffcyrus/MetaGPT-v1/metagpt/roles/engineer.py b/spaces/wffcyrus/MetaGPT-v1/metagpt/roles/engineer.py deleted file mode 100644 index 97d0af08714e65aa39a14ee10a2b564b66137d72..0000000000000000000000000000000000000000 --- a/spaces/wffcyrus/MetaGPT-v1/metagpt/roles/engineer.py +++ /dev/null @@ -1,204 +0,0 @@ -#!/usr/bin/env python -# -*- coding: utf-8 -*- -""" -@Time : 2023/5/11 14:43 -@Author : alexanderwu -@File : engineer.py -""" -import asyncio -from collections import OrderedDict -from pathlib import Path - -import aiofiles - -from metagpt.actions import WriteCode, WriteCodeReview, WriteDesign, WriteTasks -from metagpt.config import CONFIG -from metagpt.logs import logger -from metagpt.roles import Role -from metagpt.schema import Message -from metagpt.utils.common import CodeParser -from metagpt.utils.special_tokens import FILENAME_CODE_SEP, MSG_SEP - - -async def gather_ordered_k(coros, k) -> list: - tasks = OrderedDict() - results = [None] * len(coros) - done_queue = asyncio.Queue() - - for i, coro in enumerate(coros): - if len(tasks) >= k: - done, _ = await asyncio.wait(tasks.keys(), return_when=asyncio.FIRST_COMPLETED) - for task in done: - index = tasks.pop(task) - await done_queue.put((index, task.result())) - task = asyncio.create_task(coro) - tasks[task] = i - - if tasks: - done, _ = await asyncio.wait(tasks.keys()) - for task in done: - index = tasks[task] - await done_queue.put((index, task.result())) - - while not done_queue.empty(): - index, result = await done_queue.get() - results[index] = result - - return results - - -class Engineer(Role): - def __init__( - self, - name="Alex", - profile="Engineer", - goal="Write elegant, readable, extensible, efficient code", - constraints="The code you write should conform to code standard like PEP8, be modular, easy to read and maintain", - n_borg=1, - use_code_review=False, - ): - super().__init__(name, profile, goal, constraints) - self._init_actions([WriteCode]) - self.use_code_review = use_code_review - if self.use_code_review: - self._init_actions([WriteCode, WriteCodeReview]) - self._watch([WriteTasks]) - self.todos = [] - self.n_borg = n_borg - - @classmethod - def parse_tasks(self, task_msg: Message) -> list[str]: - if task_msg.instruct_content: - return task_msg.instruct_content.dict().get("Task list") - return CodeParser.parse_file_list(block="Task list", text=task_msg.content) - - @classmethod - def parse_code(self, code_text: str) -> str: - return CodeParser.parse_code(block="", text=code_text) - - @classmethod - def parse_workspace(cls, system_design_msg: Message) -> str: - if system_design_msg.instruct_content: - return system_design_msg.instruct_content.dict().get("Python package name").strip().strip("'").strip('"') - return CodeParser.parse_str(block="Python package name", text=system_design_msg.content) - - def get_workspace(self) -> Path: - msg = self._rc.memory.get_by_action(WriteDesign)[-1] - if not msg: - return CONFIG.workspace / "src" - workspace = self.parse_workspace(msg) - # Codes are written in workspace/{package_name}/{package_name} - return CONFIG.workspace / workspace - - async def write_file(self, filename: str, code: str): - workspace = self.get_workspace() - filename = filename.replace('"', "").replace("\n", "") - file = workspace / filename - file.parent.mkdir(parents=True, exist_ok=True) - async with aiofiles.open(file, "w") as f: - await f.write(code) - return file - - def recv(self, message: Message) -> None: - self._rc.memory.add(message) - if message in self._rc.important_memory: - self.todos = self.parse_tasks(message) - - async def _act_mp(self) -> Message: - # self.recreate_workspace() - todo_coros = [] - for todo in self.todos: - todo_coro = WriteCode().run( - context=self._rc.memory.get_by_actions([WriteTasks, WriteDesign]), filename=todo - ) - todo_coros.append(todo_coro) - - rsps = await gather_ordered_k(todo_coros, self.n_borg) - for todo, code_rsp in zip(self.todos, rsps): - _ = self.parse_code(code_rsp) - logger.info(todo) - logger.info(code_rsp) - # self.write_file(todo, code) - msg = Message(content=code_rsp, role=self.profile, cause_by=type(self._rc.todo)) - self._rc.memory.add(msg) - del self.todos[0] - - logger.info(f"Done {self.get_workspace()} generating.") - msg = Message(content="all done.", role=self.profile, cause_by=type(self._rc.todo)) - return msg - - async def _act_sp(self) -> Message: - code_msg_all = [] # gather all code info, will pass to qa_engineer for tests later - instruct_content = {} - for todo in self.todos: - code = await WriteCode().run(context=self._rc.history, filename=todo) - # logger.info(todo) - # logger.info(code_rsp) - # code = self.parse_code(code_rsp) - file_path = await self.write_file(todo, code) - msg = Message(content=code, role=self.profile, cause_by=type(self._rc.todo)) - self._rc.memory.add(msg) - instruct_content[todo] = code - - # code_msg = todo + FILENAME_CODE_SEP + str(file_path) - code_msg = (todo, file_path) - code_msg_all.append(code_msg) - - logger.info(f"Done {self.get_workspace()} generating.") - msg = Message( - content=MSG_SEP.join(todo + FILENAME_CODE_SEP + str(file_path) for todo, file_path in code_msg_all), - instruct_content=instruct_content, - role=self.profile, - cause_by=type(self._rc.todo), - send_to="QaEngineer", - ) - return msg - - async def _act_sp_precision(self) -> Message: - code_msg_all = [] # gather all code info, will pass to qa_engineer for tests later - instruct_content = {} - for todo in self.todos: - """ - # 从历史信息中挑选必须的信息,以减少prompt长度(人工经验总结) - 1. Architect全部 - 2. ProjectManager全部 - 3. 是否需要其他代码(暂时需要)? - TODO:目标是不需要。在任务拆分清楚后,根据设计思路,不需要其他代码也能够写清楚单个文件,如果不能则表示还需要在定义的更清晰,这个是代码能够写长的关键 - """ - context = [] - msg = self._rc.memory.get_by_actions([WriteDesign, WriteTasks, WriteCode]) - for m in msg: - context.append(m.content) - context_str = "\n".join(context) - # 编写code - code = await WriteCode().run(context=context_str, filename=todo) - # code review - if self.use_code_review: - try: - rewrite_code = await WriteCodeReview().run(context=context_str, code=code, filename=todo) - code = rewrite_code - except Exception as e: - logger.error("code review failed!", e) - pass - file_path = await self.write_file(todo, code) - msg = Message(content=code, role=self.profile, cause_by=WriteCode) - self._rc.memory.add(msg) - instruct_content[todo] = code - - code_msg = (todo, file_path) - code_msg_all.append(code_msg) - - logger.info(f"Done {self.get_workspace()} generating.") - msg = Message( - content=MSG_SEP.join(todo + FILENAME_CODE_SEP + str(file_path) for todo, file_path in code_msg_all), - instruct_content=instruct_content, - role=self.profile, - cause_by=type(self._rc.todo), - send_to="QaEngineer", - ) - return msg - - async def _act(self) -> Message: - if self.use_code_review: - return await self._act_sp_precision() - return await self._act_sp() diff --git a/spaces/wffcyrus/MetaGPT-v1/tests/metagpt/utils/test_pycst.py b/spaces/wffcyrus/MetaGPT-v1/tests/metagpt/utils/test_pycst.py deleted file mode 100644 index 07352eac26238080861007e946d8832d7e3d85cb..0000000000000000000000000000000000000000 --- a/spaces/wffcyrus/MetaGPT-v1/tests/metagpt/utils/test_pycst.py +++ /dev/null @@ -1,136 +0,0 @@ -from metagpt.utils import pycst - -code = ''' -#!/usr/bin/env python -# -*- coding: utf-8 -*- -from typing import overload - -@overload -def add_numbers(a: int, b: int): - ... - -@overload -def add_numbers(a: float, b: float): - ... - -def add_numbers(a: int, b: int): - return a + b - - -class Person: - def __init__(self, name: str, age: int): - self.name = name - self.age = age - - def greet(self): - return f"Hello, my name is {self.name} and I am {self.age} years old." -''' - -documented_code = ''' -""" -This is an example module containing a function and a class definition. -""" - - -def add_numbers(a: int, b: int): - """This function is used to add two numbers and return the result. - - Parameters: - a: The first integer. - b: The second integer. - - Returns: - int: The sum of the two numbers. - """ - return a + b - -class Person: - """This class represents a person's information, including name and age. - - Attributes: - name: The person's name. - age: The person's age. - """ - - def __init__(self, name: str, age: int): - """Creates a new instance of the Person class. - - Parameters: - name: The person's name. - age: The person's age. - """ - ... - - def greet(self): - """ - Returns a greeting message including the name and age. - - Returns: - str: The greeting message. - """ - ... -''' - - -merged_code = ''' -#!/usr/bin/env python -# -*- coding: utf-8 -*- -""" -This is an example module containing a function and a class definition. -""" - -from typing import overload - -@overload -def add_numbers(a: int, b: int): - ... - -@overload -def add_numbers(a: float, b: float): - ... - -def add_numbers(a: int, b: int): - """This function is used to add two numbers and return the result. - - Parameters: - a: The first integer. - b: The second integer. - - Returns: - int: The sum of the two numbers. - """ - return a + b - - -class Person: - """This class represents a person's information, including name and age. - - Attributes: - name: The person's name. - age: The person's age. - """ - def __init__(self, name: str, age: int): - """Creates a new instance of the Person class. - - Parameters: - name: The person's name. - age: The person's age. - """ - self.name = name - self.age = age - - def greet(self): - """ - Returns a greeting message including the name and age. - - Returns: - str: The greeting message. - """ - return f"Hello, my name is {self.name} and I am {self.age} years old." -''' - - -def test_merge_docstring(): - data = pycst.merge_docstring(code, documented_code) - print(data) - assert data == merged_code diff --git a/spaces/williambr/AIChatBot-SL-Chatbot-Blenderbot/README.md b/spaces/williambr/AIChatBot-SL-Chatbot-Blenderbot/README.md deleted file mode 100644 index 4ea75c0e8faae737aa3e71bd136a0f08bb43563a..0000000000000000000000000000000000000000 --- a/spaces/williambr/AIChatBot-SL-Chatbot-Blenderbot/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: AIChatBot SL Chatbot Blenderbot -emoji: 🏃 -colorFrom: blue -colorTo: purple -sdk: streamlit -sdk_version: 1.10.0 -app_file: app.py -pinned: false -license: mit ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/wilson1/bingo/src/components/settings.tsx b/spaces/wilson1/bingo/src/components/settings.tsx deleted file mode 100644 index e18aa5b484852bb5d047442a06e7143b6893cb0d..0000000000000000000000000000000000000000 --- a/spaces/wilson1/bingo/src/components/settings.tsx +++ /dev/null @@ -1,141 +0,0 @@ -import { useEffect, useState } from 'react' -import { useAtom } from 'jotai' -import { Switch } from '@headlessui/react' -import { toast } from 'react-hot-toast' -import { hashAtom, voiceAtom } from '@/state' -import { - Dialog, - DialogContent, - DialogDescription, - DialogFooter, - DialogHeader, - DialogTitle -} from '@/components/ui/dialog' -import { Button } from './ui/button' -import { Input } from './ui/input' -import { ChunkKeys, parseCookies, extraCurlFromCookie, randomIP, encodeHeadersToCookie } from '@/lib/utils' -import { ExternalLink } from './external-link' -import { useCopyToClipboard } from '@/lib/hooks/use-copy-to-clipboard' - -export function Settings() { - const { isCopied, copyToClipboard } = useCopyToClipboard({ timeout: 2000 }) - const [loc, setLoc] = useAtom(hashAtom) - const [curlValue, setCurlValue] = useState(extraCurlFromCookie(parseCookies(document.cookie, ChunkKeys))) - const [enableTTS, setEnableTTS] = useAtom(voiceAtom) - - useEffect(() => { - if (isCopied) { - toast.success('复制成功') - } - }, [isCopied]) - - if (loc === 'settings') { - return ( - <Dialog open onOpenChange={() => setLoc('')} modal> - <DialogContent> - <DialogHeader> - <DialogTitle>设置你的用户信息</DialogTitle> - <DialogDescription> - 请使用 Edge 浏览器 - <ExternalLink - href="https://www.bing.com/turing/captcha/challenge" - > - 打开并登录 Bing - </ExternalLink> - ,然后再打开 - <ExternalLink href="https://www.bing.com/turing/captcha/challenge">Challenge 接口</ExternalLink> - 右键 》检查。打开开发者工具,在网络里面找到 Create 接口 》右键复制》复制为 cURL(bash),粘贴到此处,然后保存。 - <div className="h-2" /> - 图文示例: - <ExternalLink href="https://github.com/weaigc/bingo#如何获取%20BING_HEADER">如何获取 BING_HEADER</ExternalLink> - </DialogDescription> - </DialogHeader> - <div className="flex gap-4"> - - </div> - <Input - value={curlValue} - placeholder="在此填写用户信息,格式: curl 'https://www.bing.com/turing/captcha/challenge' ..." - onChange={e => setCurlValue(e.target.value)} - /> - <Button variant="ghost" className="bg-[#F5F5F5] hover:bg-[#F2F2F2]" onClick={() => copyToClipboard(btoa(curlValue))}> - 转成 BING_HEADER 并复制 - </Button> - - <DialogFooter className="items-center"> - <Button - variant="secondary" - className="bg-[#c7f3ff] hover:bg-[#fdc7ff]" - onClick={() => { - let headerValue = curlValue - if (headerValue) { - try { - headerValue = atob(headerValue) - } catch (e) {} - if (!/^\s*curl ['"]https:\/\/www\.bing\.com\/turing\/captcha\/challenge['"]/.test(headerValue)) { - toast.error('格式不正确') - return - } - const maxAge = 86400 * 30 - encodeHeadersToCookie(headerValue).forEach(cookie => document.cookie = `${cookie}; Max-Age=${maxAge}; Path=/; SameSite=None; Secure`) - } else { - [...ChunkKeys, 'BING_COOKIE', 'BING_UA', 'BING_IP'].forEach(key => document.cookie = `${key}=; Path=/; SameSite=None; Secure`) - } - - toast.success('保存成功') - setLoc('') - setTimeout(() => { - location.href = './' - }, 2000) - }} - > - 保存 - </Button> - </DialogFooter> - </DialogContent> - </Dialog> - ) - } else if (loc === 'voice') { - return ( - <Dialog open onOpenChange={() => setLoc('')} modal> - <DialogContent> - <DialogHeader> - <DialogTitle>语音设置</DialogTitle> - <DialogDescription> - 目前仅支持 PC 端 Edge 及 Chrome 浏览器 - </DialogDescription> - </DialogHeader> - - <div className="flex gap-2"> - 启用语音回答 - <Switch - checked={enableTTS} - className={`${enableTTS ? 'bg-blue-600' : 'bg-gray-200'} relative inline-flex h-6 w-11 items-center rounded-full`} - onChange={(checked: boolean) => setEnableTTS(checked)} - > - <span - className={`${enableTTS ? 'translate-x-6' : 'translate-x-1'} inline-block h-4 w-4 transform rounded-full bg-white transition`} - /> - </Switch> - </div> - - <DialogFooter className="items-center"> - <Button - variant="secondary" - onClick={() => { - toast.success('保存成功') - setLoc('') - setTimeout(() => { - location.href = './' - }, 2000) - }} - > - 保存 - </Button> - </DialogFooter> - </DialogContent> - </Dialog> - ) - } - return null -} diff --git a/spaces/wisnuarys15/rvc-wisnu5/infer_pack/attentions.py b/spaces/wisnuarys15/rvc-wisnu5/infer_pack/attentions.py deleted file mode 100644 index 77cb63ffccf3e33badf22d50862a64ba517b487f..0000000000000000000000000000000000000000 --- a/spaces/wisnuarys15/rvc-wisnu5/infer_pack/attentions.py +++ /dev/null @@ -1,417 +0,0 @@ -import copy -import math -import numpy as np -import torch -from torch import nn -from torch.nn import functional as F - -from infer_pack import commons -from infer_pack import modules -from infer_pack.modules import LayerNorm - - -class Encoder(nn.Module): - def __init__( - self, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size=1, - p_dropout=0.0, - window_size=10, - **kwargs - ): - super().__init__() - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.window_size = window_size - - self.drop = nn.Dropout(p_dropout) - self.attn_layers = nn.ModuleList() - self.norm_layers_1 = nn.ModuleList() - self.ffn_layers = nn.ModuleList() - self.norm_layers_2 = nn.ModuleList() - for i in range(self.n_layers): - self.attn_layers.append( - MultiHeadAttention( - hidden_channels, - hidden_channels, - n_heads, - p_dropout=p_dropout, - window_size=window_size, - ) - ) - self.norm_layers_1.append(LayerNorm(hidden_channels)) - self.ffn_layers.append( - FFN( - hidden_channels, - hidden_channels, - filter_channels, - kernel_size, - p_dropout=p_dropout, - ) - ) - self.norm_layers_2.append(LayerNorm(hidden_channels)) - - def forward(self, x, x_mask): - attn_mask = x_mask.unsqueeze(2) * x_mask.unsqueeze(-1) - x = x * x_mask - for i in range(self.n_layers): - y = self.attn_layers[i](x, x, attn_mask) - y = self.drop(y) - x = self.norm_layers_1[i](x + y) - - y = self.ffn_layers[i](x, x_mask) - y = self.drop(y) - x = self.norm_layers_2[i](x + y) - x = x * x_mask - return x - - -class Decoder(nn.Module): - def __init__( - self, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size=1, - p_dropout=0.0, - proximal_bias=False, - proximal_init=True, - **kwargs - ): - super().__init__() - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.proximal_bias = proximal_bias - self.proximal_init = proximal_init - - self.drop = nn.Dropout(p_dropout) - self.self_attn_layers = nn.ModuleList() - self.norm_layers_0 = nn.ModuleList() - self.encdec_attn_layers = nn.ModuleList() - self.norm_layers_1 = nn.ModuleList() - self.ffn_layers = nn.ModuleList() - self.norm_layers_2 = nn.ModuleList() - for i in range(self.n_layers): - self.self_attn_layers.append( - MultiHeadAttention( - hidden_channels, - hidden_channels, - n_heads, - p_dropout=p_dropout, - proximal_bias=proximal_bias, - proximal_init=proximal_init, - ) - ) - self.norm_layers_0.append(LayerNorm(hidden_channels)) - self.encdec_attn_layers.append( - MultiHeadAttention( - hidden_channels, hidden_channels, n_heads, p_dropout=p_dropout - ) - ) - self.norm_layers_1.append(LayerNorm(hidden_channels)) - self.ffn_layers.append( - FFN( - hidden_channels, - hidden_channels, - filter_channels, - kernel_size, - p_dropout=p_dropout, - causal=True, - ) - ) - self.norm_layers_2.append(LayerNorm(hidden_channels)) - - def forward(self, x, x_mask, h, h_mask): - """ - x: decoder input - h: encoder output - """ - self_attn_mask = commons.subsequent_mask(x_mask.size(2)).to( - device=x.device, dtype=x.dtype - ) - encdec_attn_mask = h_mask.unsqueeze(2) * x_mask.unsqueeze(-1) - x = x * x_mask - for i in range(self.n_layers): - y = self.self_attn_layers[i](x, x, self_attn_mask) - y = self.drop(y) - x = self.norm_layers_0[i](x + y) - - y = self.encdec_attn_layers[i](x, h, encdec_attn_mask) - y = self.drop(y) - x = self.norm_layers_1[i](x + y) - - y = self.ffn_layers[i](x, x_mask) - y = self.drop(y) - x = self.norm_layers_2[i](x + y) - x = x * x_mask - return x - - -class MultiHeadAttention(nn.Module): - def __init__( - self, - channels, - out_channels, - n_heads, - p_dropout=0.0, - window_size=None, - heads_share=True, - block_length=None, - proximal_bias=False, - proximal_init=False, - ): - super().__init__() - assert channels % n_heads == 0 - - self.channels = channels - self.out_channels = out_channels - self.n_heads = n_heads - self.p_dropout = p_dropout - self.window_size = window_size - self.heads_share = heads_share - self.block_length = block_length - self.proximal_bias = proximal_bias - self.proximal_init = proximal_init - self.attn = None - - self.k_channels = channels // n_heads - self.conv_q = nn.Conv1d(channels, channels, 1) - self.conv_k = nn.Conv1d(channels, channels, 1) - self.conv_v = nn.Conv1d(channels, channels, 1) - self.conv_o = nn.Conv1d(channels, out_channels, 1) - self.drop = nn.Dropout(p_dropout) - - if window_size is not None: - n_heads_rel = 1 if heads_share else n_heads - rel_stddev = self.k_channels**-0.5 - self.emb_rel_k = nn.Parameter( - torch.randn(n_heads_rel, window_size * 2 + 1, self.k_channels) - * rel_stddev - ) - self.emb_rel_v = nn.Parameter( - torch.randn(n_heads_rel, window_size * 2 + 1, self.k_channels) - * rel_stddev - ) - - nn.init.xavier_uniform_(self.conv_q.weight) - nn.init.xavier_uniform_(self.conv_k.weight) - nn.init.xavier_uniform_(self.conv_v.weight) - if proximal_init: - with torch.no_grad(): - self.conv_k.weight.copy_(self.conv_q.weight) - self.conv_k.bias.copy_(self.conv_q.bias) - - def forward(self, x, c, attn_mask=None): - q = self.conv_q(x) - k = self.conv_k(c) - v = self.conv_v(c) - - x, self.attn = self.attention(q, k, v, mask=attn_mask) - - x = self.conv_o(x) - return x - - def attention(self, query, key, value, mask=None): - # reshape [b, d, t] -> [b, n_h, t, d_k] - b, d, t_s, t_t = (*key.size(), query.size(2)) - query = query.view(b, self.n_heads, self.k_channels, t_t).transpose(2, 3) - key = key.view(b, self.n_heads, self.k_channels, t_s).transpose(2, 3) - value = value.view(b, self.n_heads, self.k_channels, t_s).transpose(2, 3) - - scores = torch.matmul(query / math.sqrt(self.k_channels), key.transpose(-2, -1)) - if self.window_size is not None: - assert ( - t_s == t_t - ), "Relative attention is only available for self-attention." - key_relative_embeddings = self._get_relative_embeddings(self.emb_rel_k, t_s) - rel_logits = self._matmul_with_relative_keys( - query / math.sqrt(self.k_channels), key_relative_embeddings - ) - scores_local = self._relative_position_to_absolute_position(rel_logits) - scores = scores + scores_local - if self.proximal_bias: - assert t_s == t_t, "Proximal bias is only available for self-attention." - scores = scores + self._attention_bias_proximal(t_s).to( - device=scores.device, dtype=scores.dtype - ) - if mask is not None: - scores = scores.masked_fill(mask == 0, -1e4) - if self.block_length is not None: - assert ( - t_s == t_t - ), "Local attention is only available for self-attention." - block_mask = ( - torch.ones_like(scores) - .triu(-self.block_length) - .tril(self.block_length) - ) - scores = scores.masked_fill(block_mask == 0, -1e4) - p_attn = F.softmax(scores, dim=-1) # [b, n_h, t_t, t_s] - p_attn = self.drop(p_attn) - output = torch.matmul(p_attn, value) - if self.window_size is not None: - relative_weights = self._absolute_position_to_relative_position(p_attn) - value_relative_embeddings = self._get_relative_embeddings( - self.emb_rel_v, t_s - ) - output = output + self._matmul_with_relative_values( - relative_weights, value_relative_embeddings - ) - output = ( - output.transpose(2, 3).contiguous().view(b, d, t_t) - ) # [b, n_h, t_t, d_k] -> [b, d, t_t] - return output, p_attn - - def _matmul_with_relative_values(self, x, y): - """ - x: [b, h, l, m] - y: [h or 1, m, d] - ret: [b, h, l, d] - """ - ret = torch.matmul(x, y.unsqueeze(0)) - return ret - - def _matmul_with_relative_keys(self, x, y): - """ - x: [b, h, l, d] - y: [h or 1, m, d] - ret: [b, h, l, m] - """ - ret = torch.matmul(x, y.unsqueeze(0).transpose(-2, -1)) - return ret - - def _get_relative_embeddings(self, relative_embeddings, length): - max_relative_position = 2 * self.window_size + 1 - # Pad first before slice to avoid using cond ops. - pad_length = max(length - (self.window_size + 1), 0) - slice_start_position = max((self.window_size + 1) - length, 0) - slice_end_position = slice_start_position + 2 * length - 1 - if pad_length > 0: - padded_relative_embeddings = F.pad( - relative_embeddings, - commons.convert_pad_shape([[0, 0], [pad_length, pad_length], [0, 0]]), - ) - else: - padded_relative_embeddings = relative_embeddings - used_relative_embeddings = padded_relative_embeddings[ - :, slice_start_position:slice_end_position - ] - return used_relative_embeddings - - def _relative_position_to_absolute_position(self, x): - """ - x: [b, h, l, 2*l-1] - ret: [b, h, l, l] - """ - batch, heads, length, _ = x.size() - # Concat columns of pad to shift from relative to absolute indexing. - x = F.pad(x, commons.convert_pad_shape([[0, 0], [0, 0], [0, 0], [0, 1]])) - - # Concat extra elements so to add up to shape (len+1, 2*len-1). - x_flat = x.view([batch, heads, length * 2 * length]) - x_flat = F.pad( - x_flat, commons.convert_pad_shape([[0, 0], [0, 0], [0, length - 1]]) - ) - - # Reshape and slice out the padded elements. - x_final = x_flat.view([batch, heads, length + 1, 2 * length - 1])[ - :, :, :length, length - 1 : - ] - return x_final - - def _absolute_position_to_relative_position(self, x): - """ - x: [b, h, l, l] - ret: [b, h, l, 2*l-1] - """ - batch, heads, length, _ = x.size() - # padd along column - x = F.pad( - x, commons.convert_pad_shape([[0, 0], [0, 0], [0, 0], [0, length - 1]]) - ) - x_flat = x.view([batch, heads, length**2 + length * (length - 1)]) - # add 0's in the beginning that will skew the elements after reshape - x_flat = F.pad(x_flat, commons.convert_pad_shape([[0, 0], [0, 0], [length, 0]])) - x_final = x_flat.view([batch, heads, length, 2 * length])[:, :, :, 1:] - return x_final - - def _attention_bias_proximal(self, length): - """Bias for self-attention to encourage attention to close positions. - Args: - length: an integer scalar. - Returns: - a Tensor with shape [1, 1, length, length] - """ - r = torch.arange(length, dtype=torch.float32) - diff = torch.unsqueeze(r, 0) - torch.unsqueeze(r, 1) - return torch.unsqueeze(torch.unsqueeze(-torch.log1p(torch.abs(diff)), 0), 0) - - -class FFN(nn.Module): - def __init__( - self, - in_channels, - out_channels, - filter_channels, - kernel_size, - p_dropout=0.0, - activation=None, - causal=False, - ): - super().__init__() - self.in_channels = in_channels - self.out_channels = out_channels - self.filter_channels = filter_channels - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.activation = activation - self.causal = causal - - if causal: - self.padding = self._causal_padding - else: - self.padding = self._same_padding - - self.conv_1 = nn.Conv1d(in_channels, filter_channels, kernel_size) - self.conv_2 = nn.Conv1d(filter_channels, out_channels, kernel_size) - self.drop = nn.Dropout(p_dropout) - - def forward(self, x, x_mask): - x = self.conv_1(self.padding(x * x_mask)) - if self.activation == "gelu": - x = x * torch.sigmoid(1.702 * x) - else: - x = torch.relu(x) - x = self.drop(x) - x = self.conv_2(self.padding(x * x_mask)) - return x * x_mask - - def _causal_padding(self, x): - if self.kernel_size == 1: - return x - pad_l = self.kernel_size - 1 - pad_r = 0 - padding = [[0, 0], [0, 0], [pad_l, pad_r]] - x = F.pad(x, commons.convert_pad_shape(padding)) - return x - - def _same_padding(self, x): - if self.kernel_size == 1: - return x - pad_l = (self.kernel_size - 1) // 2 - pad_r = self.kernel_size // 2 - padding = [[0, 0], [0, 0], [pad_l, pad_r]] - x = F.pad(x, commons.convert_pad_shape(padding)) - return x diff --git a/spaces/wlf/dall-e/index.html b/spaces/wlf/dall-e/index.html deleted file mode 100644 index 918e851d9dd1baf9e4fb4f067fd979d432472161..0000000000000000000000000000000000000000 --- a/spaces/wlf/dall-e/index.html +++ /dev/null @@ -1,24 +0,0 @@ -<!DOCTYPE html> -<html> - <head> - <meta charset="utf-8" /> - <meta name="viewport" content="width=device-width" /> - <title>My static Space</title> - <link rel="stylesheet" href="style.css" /> - </head> - <body> - <div class="card"> - <h1>Welcome to your static Space!</h1> - <p> - You can modify this app directly by editing <i>index.html</i> in the - Files and versions tab. - </p> - <p> - Also don't forget to check the - <a href="https://huggingface.co/docs/hub/spaces" target="_blank" - >Spaces documentation</a - >. - </p> - </div> - </body> -</html> diff --git a/spaces/wliu88/StructDiffusionDemo/src/StructDiffusion/language/test_parrot_paraphrase.py b/spaces/wliu88/StructDiffusionDemo/src/StructDiffusion/language/test_parrot_paraphrase.py deleted file mode 100644 index b1993b1b783a01cceb72b4d52f1eac2850754e0e..0000000000000000000000000000000000000000 --- a/spaces/wliu88/StructDiffusionDemo/src/StructDiffusion/language/test_parrot_paraphrase.py +++ /dev/null @@ -1,38 +0,0 @@ -from parrot import Parrot -import torch -import warnings -warnings.filterwarnings("ignore") - -# [top] - -# Put the [objects] in a [size][shape] on the [x][y] of the table facing [rotation]. -# Build a [size][shape] of the [objects] on the [x][y] of the table facing [rotation]. -# Put the [objects] on the [x][y] of the table and make a [shape] facing [rotation]. -# Rearrange the [objects] into a [shape], and put the structure on the [x][y] of the table facing [rotation]. -# Could you ... -# Please ... -# Pick up the objects, put them into a [size][shape], place the [shape] on the [x][y] of table, make sure the [shape] is facing [rotation]. - -if __name__ == "__main__": - ''' - uncomment to get reproducable paraphrase generations - def random_state(seed): - torch.manual_seed(seed) - if torch.cuda.is_available(): - torch.cuda.manual_seed_all(seed) - - random_state(1234) - ''' - - #Init models (make sure you init ONLY once if you integrate this to your code) - parrot = Parrot(model_tag="prithivida/parrot_paraphraser_on_T5") - - phrases = ["Rearrange the mugs in a circle on the top left of the table."] - - for phrase in phrases: - print("-"*100) - print("Input_phrase: ", phrase) - print("-"*100) - para_phrases = parrot.augment(input_phrase=phrase, use_gpu=False, max_return_phrases=100, do_diverse=True) - for para_phrase in para_phrases: - print(para_phrase) \ No newline at end of file diff --git a/spaces/wzq10314/VITS-Umamusume-voice-synthesizer1/data_utils.py b/spaces/wzq10314/VITS-Umamusume-voice-synthesizer1/data_utils.py deleted file mode 100644 index e9246c6c8f2ff3c37a7f8529ea1593c7f80f887e..0000000000000000000000000000000000000000 --- a/spaces/wzq10314/VITS-Umamusume-voice-synthesizer1/data_utils.py +++ /dev/null @@ -1,393 +0,0 @@ -import time -import os -import random -import numpy as np -import torch -import torch.utils.data - -import commons -from mel_processing import spectrogram_torch -from utils import load_wav_to_torch, load_filepaths_and_text -from text import text_to_sequence, cleaned_text_to_sequence - - -class TextAudioLoader(torch.utils.data.Dataset): - """ - 1) loads audio, text pairs - 2) normalizes text and converts them to sequences of integers - 3) computes spectrograms from audio files. - """ - def __init__(self, audiopaths_and_text, hparams): - self.audiopaths_and_text = load_filepaths_and_text(audiopaths_and_text) - self.text_cleaners = hparams.text_cleaners - self.max_wav_value = hparams.max_wav_value - self.sampling_rate = hparams.sampling_rate - self.filter_length = hparams.filter_length - self.hop_length = hparams.hop_length - self.win_length = hparams.win_length - self.sampling_rate = hparams.sampling_rate - - self.cleaned_text = getattr(hparams, "cleaned_text", False) - - self.add_blank = hparams.add_blank - self.min_text_len = getattr(hparams, "min_text_len", 1) - self.max_text_len = getattr(hparams, "max_text_len", 190) - - random.seed(1234) - random.shuffle(self.audiopaths_and_text) - self._filter() - - - def _filter(self): - """ - Filter text & store spec lengths - """ - # Store spectrogram lengths for Bucketing - # wav_length ~= file_size / (wav_channels * Bytes per dim) = file_size / (1 * 2) - # spec_length = wav_length // hop_length - - audiopaths_and_text_new = [] - lengths = [] - for audiopath, text in self.audiopaths_and_text: - if self.min_text_len <= len(text) and len(text) <= self.max_text_len: - audiopaths_and_text_new.append([audiopath, text]) - lengths.append(os.path.getsize(audiopath) // (2 * self.hop_length)) - self.audiopaths_and_text = audiopaths_and_text_new - self.lengths = lengths - - def get_audio_text_pair(self, audiopath_and_text): - # separate filename and text - audiopath, text = audiopath_and_text[0], audiopath_and_text[1] - text = self.get_text(text) - spec, wav = self.get_audio(audiopath) - return (text, spec, wav) - - def get_audio(self, filename): - audio, sampling_rate = load_wav_to_torch(filename) - if sampling_rate != self.sampling_rate: - raise ValueError("{} {} SR doesn't match target {} SR".format( - sampling_rate, self.sampling_rate)) - audio_norm = audio / self.max_wav_value - audio_norm = audio_norm.unsqueeze(0) - spec_filename = filename.replace(".wav", ".spec.pt") - if os.path.exists(spec_filename): - spec = torch.load(spec_filename) - else: - spec = spectrogram_torch(audio_norm, self.filter_length, - self.sampling_rate, self.hop_length, self.win_length, - center=False) - spec = torch.squeeze(spec, 0) - torch.save(spec, spec_filename) - return spec, audio_norm - - def get_text(self, text): - if self.cleaned_text: - text_norm = cleaned_text_to_sequence(text) - else: - text_norm = text_to_sequence(text, self.text_cleaners) - if self.add_blank: - text_norm = commons.intersperse(text_norm, 0) - text_norm = torch.LongTensor(text_norm) - return text_norm - - def __getitem__(self, index): - return self.get_audio_text_pair(self.audiopaths_and_text[index]) - - def __len__(self): - return len(self.audiopaths_and_text) - - -class TextAudioCollate(): - """ Zero-pads model inputs and targets - """ - def __init__(self, return_ids=False): - self.return_ids = return_ids - - def __call__(self, batch): - """Collate's training batch from normalized text and aduio - PARAMS - ------ - batch: [text_normalized, spec_normalized, wav_normalized] - """ - # Right zero-pad all one-hot text sequences to max input length - _, ids_sorted_decreasing = torch.sort( - torch.LongTensor([x[1].size(1) for x in batch]), - dim=0, descending=True) - - max_text_len = max([len(x[0]) for x in batch]) - max_spec_len = max([x[1].size(1) for x in batch]) - max_wav_len = max([x[2].size(1) for x in batch]) - - text_lengths = torch.LongTensor(len(batch)) - spec_lengths = torch.LongTensor(len(batch)) - wav_lengths = torch.LongTensor(len(batch)) - - text_padded = torch.LongTensor(len(batch), max_text_len) - spec_padded = torch.FloatTensor(len(batch), batch[0][1].size(0), max_spec_len) - wav_padded = torch.FloatTensor(len(batch), 1, max_wav_len) - text_padded.zero_() - spec_padded.zero_() - wav_padded.zero_() - for i in range(len(ids_sorted_decreasing)): - row = batch[ids_sorted_decreasing[i]] - - text = row[0] - text_padded[i, :text.size(0)] = text - text_lengths[i] = text.size(0) - - spec = row[1] - spec_padded[i, :, :spec.size(1)] = spec - spec_lengths[i] = spec.size(1) - - wav = row[2] - wav_padded[i, :, :wav.size(1)] = wav - wav_lengths[i] = wav.size(1) - - if self.return_ids: - return text_padded, text_lengths, spec_padded, spec_lengths, wav_padded, wav_lengths, ids_sorted_decreasing - return text_padded, text_lengths, spec_padded, spec_lengths, wav_padded, wav_lengths - - -"""Multi speaker version""" -class TextAudioSpeakerLoader(torch.utils.data.Dataset): - """ - 1) loads audio, speaker_id, text pairs - 2) normalizes text and converts them to sequences of integers - 3) computes spectrograms from audio files. - """ - def __init__(self, audiopaths_sid_text, hparams): - self.audiopaths_sid_text = load_filepaths_and_text(audiopaths_sid_text) - self.text_cleaners = hparams.text_cleaners - self.max_wav_value = hparams.max_wav_value - self.sampling_rate = hparams.sampling_rate - self.filter_length = hparams.filter_length - self.hop_length = hparams.hop_length - self.win_length = hparams.win_length - self.sampling_rate = hparams.sampling_rate - - self.cleaned_text = getattr(hparams, "cleaned_text", False) - - self.add_blank = hparams.add_blank - self.min_text_len = getattr(hparams, "min_text_len", 1) - self.max_text_len = getattr(hparams, "max_text_len", 190) - - random.seed(1234) - random.shuffle(self.audiopaths_sid_text) - self._filter() - - def _filter(self): - """ - Filter text & store spec lengths - """ - # Store spectrogram lengths for Bucketing - # wav_length ~= file_size / (wav_channels * Bytes per dim) = file_size / (1 * 2) - # spec_length = wav_length // hop_length - - audiopaths_sid_text_new = [] - lengths = [] - for audiopath, sid, text in self.audiopaths_sid_text: - audiopath = "E:/uma_voice/" + audiopath - if self.min_text_len <= len(text) and len(text) <= self.max_text_len: - audiopaths_sid_text_new.append([audiopath, sid, text]) - lengths.append(os.path.getsize(audiopath) // (2 * self.hop_length)) - self.audiopaths_sid_text = audiopaths_sid_text_new - self.lengths = lengths - - def get_audio_text_speaker_pair(self, audiopath_sid_text): - # separate filename, speaker_id and text - audiopath, sid, text = audiopath_sid_text[0], audiopath_sid_text[1], audiopath_sid_text[2] - text = self.get_text(text) - spec, wav = self.get_audio(audiopath) - sid = self.get_sid(sid) - return (text, spec, wav, sid) - - def get_audio(self, filename): - audio, sampling_rate = load_wav_to_torch(filename) - if sampling_rate != self.sampling_rate: - raise ValueError("{} {} SR doesn't match target {} SR".format( - sampling_rate, self.sampling_rate)) - audio_norm = audio / self.max_wav_value - audio_norm = audio_norm.unsqueeze(0) - spec_filename = filename.replace(".wav", ".spec.pt") - if os.path.exists(spec_filename): - spec = torch.load(spec_filename) - else: - spec = spectrogram_torch(audio_norm, self.filter_length, - self.sampling_rate, self.hop_length, self.win_length, - center=False) - spec = torch.squeeze(spec, 0) - torch.save(spec, spec_filename) - return spec, audio_norm - - def get_text(self, text): - if self.cleaned_text: - text_norm = cleaned_text_to_sequence(text) - else: - text_norm = text_to_sequence(text, self.text_cleaners) - if self.add_blank: - text_norm = commons.intersperse(text_norm, 0) - text_norm = torch.LongTensor(text_norm) - return text_norm - - def get_sid(self, sid): - sid = torch.LongTensor([int(sid)]) - return sid - - def __getitem__(self, index): - return self.get_audio_text_speaker_pair(self.audiopaths_sid_text[index]) - - def __len__(self): - return len(self.audiopaths_sid_text) - - -class TextAudioSpeakerCollate(): - """ Zero-pads model inputs and targets - """ - def __init__(self, return_ids=False): - self.return_ids = return_ids - - def __call__(self, batch): - """Collate's training batch from normalized text, audio and speaker identities - PARAMS - ------ - batch: [text_normalized, spec_normalized, wav_normalized, sid] - """ - # Right zero-pad all one-hot text sequences to max input length - _, ids_sorted_decreasing = torch.sort( - torch.LongTensor([x[1].size(1) for x in batch]), - dim=0, descending=True) - - max_text_len = max([len(x[0]) for x in batch]) - max_spec_len = max([x[1].size(1) for x in batch]) - max_wav_len = max([x[2].size(1) for x in batch]) - - text_lengths = torch.LongTensor(len(batch)) - spec_lengths = torch.LongTensor(len(batch)) - wav_lengths = torch.LongTensor(len(batch)) - sid = torch.LongTensor(len(batch)) - - text_padded = torch.LongTensor(len(batch), max_text_len) - spec_padded = torch.FloatTensor(len(batch), batch[0][1].size(0), max_spec_len) - wav_padded = torch.FloatTensor(len(batch), 1, max_wav_len) - text_padded.zero_() - spec_padded.zero_() - wav_padded.zero_() - for i in range(len(ids_sorted_decreasing)): - row = batch[ids_sorted_decreasing[i]] - - text = row[0] - text_padded[i, :text.size(0)] = text - text_lengths[i] = text.size(0) - - spec = row[1] - spec_padded[i, :, :spec.size(1)] = spec - spec_lengths[i] = spec.size(1) - - wav = row[2] - wav_padded[i, :, :wav.size(1)] = wav - wav_lengths[i] = wav.size(1) - - sid[i] = row[3] - - if self.return_ids: - return text_padded, text_lengths, spec_padded, spec_lengths, wav_padded, wav_lengths, sid, ids_sorted_decreasing - return text_padded, text_lengths, spec_padded, spec_lengths, wav_padded, wav_lengths, sid - - -class DistributedBucketSampler(torch.utils.data.distributed.DistributedSampler): - """ - Maintain similar input lengths in a batch. - Length groups are specified by boundaries. - Ex) boundaries = [b1, b2, b3] -> any batch is included either {x | b1 < length(x) <=b2} or {x | b2 < length(x) <= b3}. - - It removes samples which are not included in the boundaries. - Ex) boundaries = [b1, b2, b3] -> any x s.t. length(x) <= b1 or length(x) > b3 are discarded. - """ - def __init__(self, dataset, batch_size, boundaries, num_replicas=None, rank=None, shuffle=True): - super().__init__(dataset, num_replicas=num_replicas, rank=rank, shuffle=shuffle) - self.lengths = dataset.lengths - self.batch_size = batch_size - self.boundaries = boundaries - - self.buckets, self.num_samples_per_bucket = self._create_buckets() - self.total_size = sum(self.num_samples_per_bucket) - self.num_samples = self.total_size // self.num_replicas - - def _create_buckets(self): - buckets = [[] for _ in range(len(self.boundaries) - 1)] - for i in range(len(self.lengths)): - length = self.lengths[i] - idx_bucket = self._bisect(length) - if idx_bucket != -1: - buckets[idx_bucket].append(i) - - for i in range(len(buckets) - 1, 0, -1): - if len(buckets[i]) == 0: - buckets.pop(i) - self.boundaries.pop(i+1) - - num_samples_per_bucket = [] - for i in range(len(buckets)): - len_bucket = len(buckets[i]) - total_batch_size = self.num_replicas * self.batch_size - rem = (total_batch_size - (len_bucket % total_batch_size)) % total_batch_size - num_samples_per_bucket.append(len_bucket + rem) - return buckets, num_samples_per_bucket - - def __iter__(self): - # deterministically shuffle based on epoch - g = torch.Generator() - g.manual_seed(self.epoch) - - indices = [] - if self.shuffle: - for bucket in self.buckets: - indices.append(torch.randperm(len(bucket), generator=g).tolist()) - else: - for bucket in self.buckets: - indices.append(list(range(len(bucket)))) - - batches = [] - for i in range(len(self.buckets)): - bucket = self.buckets[i] - len_bucket = len(bucket) - ids_bucket = indices[i] - num_samples_bucket = self.num_samples_per_bucket[i] - - # add extra samples to make it evenly divisible - rem = num_samples_bucket - len_bucket - ids_bucket = ids_bucket + ids_bucket * (rem // len_bucket) + ids_bucket[:(rem % len_bucket)] - - # subsample - ids_bucket = ids_bucket[self.rank::self.num_replicas] - - # batching - for j in range(len(ids_bucket) // self.batch_size): - batch = [bucket[idx] for idx in ids_bucket[j*self.batch_size:(j+1)*self.batch_size]] - batches.append(batch) - - if self.shuffle: - batch_ids = torch.randperm(len(batches), generator=g).tolist() - batches = [batches[i] for i in batch_ids] - self.batches = batches - - assert len(self.batches) * self.batch_size == self.num_samples - return iter(self.batches) - - def _bisect(self, x, lo=0, hi=None): - if hi is None: - hi = len(self.boundaries) - 1 - - if hi > lo: - mid = (hi + lo) // 2 - if self.boundaries[mid] < x and x <= self.boundaries[mid+1]: - return mid - elif x <= self.boundaries[mid]: - return self._bisect(x, lo, mid) - else: - return self._bisect(x, mid + 1, hi) - else: - return -1 - - def __len__(self): - return self.num_samples // self.batch_size diff --git a/spaces/xfys/yolov5_tracking/val_utils/trackeval/metrics/hota.py b/spaces/xfys/yolov5_tracking/val_utils/trackeval/metrics/hota.py deleted file mode 100644 index f551b766d1b4c3d4f4854bf99b04877dc1fa7c32..0000000000000000000000000000000000000000 --- a/spaces/xfys/yolov5_tracking/val_utils/trackeval/metrics/hota.py +++ /dev/null @@ -1,203 +0,0 @@ - -import os -import numpy as np -from scipy.optimize import linear_sum_assignment -from ._base_metric import _BaseMetric -from .. import _timing - - -class HOTA(_BaseMetric): - """Class which implements the HOTA metrics. - See: https://link.springer.com/article/10.1007/s11263-020-01375-2 - """ - - def __init__(self, config=None): - super().__init__() - self.plottable = True - self.array_labels = np.arange(0.05, 0.99, 0.05) - self.integer_array_fields = ['HOTA_TP', 'HOTA_FN', 'HOTA_FP'] - self.float_array_fields = ['HOTA', 'DetA', 'AssA', 'DetRe', 'DetPr', 'AssRe', 'AssPr', 'LocA', 'OWTA'] - self.float_fields = ['HOTA(0)', 'LocA(0)', 'HOTALocA(0)'] - self.fields = self.float_array_fields + self.integer_array_fields + self.float_fields - self.summary_fields = self.float_array_fields + self.float_fields - - @_timing.time - def eval_sequence(self, data): - """Calculates the HOTA metrics for one sequence""" - - # Initialise results - res = {} - for field in self.float_array_fields + self.integer_array_fields: - res[field] = np.zeros((len(self.array_labels)), dtype=np.float) - for field in self.float_fields: - res[field] = 0 - - # Return result quickly if tracker or gt sequence is empty - if data['num_tracker_dets'] == 0: - res['HOTA_FN'] = data['num_gt_dets'] * np.ones((len(self.array_labels)), dtype=np.float) - res['LocA'] = np.ones((len(self.array_labels)), dtype=np.float) - res['LocA(0)'] = 1.0 - return res - if data['num_gt_dets'] == 0: - res['HOTA_FP'] = data['num_tracker_dets'] * np.ones((len(self.array_labels)), dtype=np.float) - res['LocA'] = np.ones((len(self.array_labels)), dtype=np.float) - res['LocA(0)'] = 1.0 - return res - - # Variables counting global association - potential_matches_count = np.zeros((data['num_gt_ids'], data['num_tracker_ids'])) - gt_id_count = np.zeros((data['num_gt_ids'], 1)) - tracker_id_count = np.zeros((1, data['num_tracker_ids'])) - - # First loop through each timestep and accumulate global track information. - for t, (gt_ids_t, tracker_ids_t) in enumerate(zip(data['gt_ids'], data['tracker_ids'])): - # Count the potential matches between ids in each timestep - # These are normalised, weighted by the match similarity. - similarity = data['similarity_scores'][t] - sim_iou_denom = similarity.sum(0)[np.newaxis, :] + similarity.sum(1)[:, np.newaxis] - similarity - sim_iou = np.zeros_like(similarity) - sim_iou_mask = sim_iou_denom > 0 + np.finfo('float').eps - sim_iou[sim_iou_mask] = similarity[sim_iou_mask] / sim_iou_denom[sim_iou_mask] - potential_matches_count[gt_ids_t[:, np.newaxis], tracker_ids_t[np.newaxis, :]] += sim_iou - - # Calculate the total number of dets for each gt_id and tracker_id. - gt_id_count[gt_ids_t] += 1 - tracker_id_count[0, tracker_ids_t] += 1 - - # Calculate overall jaccard alignment score (before unique matching) between IDs - global_alignment_score = potential_matches_count / (gt_id_count + tracker_id_count - potential_matches_count) - matches_counts = [np.zeros_like(potential_matches_count) for _ in self.array_labels] - - # Calculate scores for each timestep - for t, (gt_ids_t, tracker_ids_t) in enumerate(zip(data['gt_ids'], data['tracker_ids'])): - # Deal with the case that there are no gt_det/tracker_det in a timestep. - if len(gt_ids_t) == 0: - for a, alpha in enumerate(self.array_labels): - res['HOTA_FP'][a] += len(tracker_ids_t) - continue - if len(tracker_ids_t) == 0: - for a, alpha in enumerate(self.array_labels): - res['HOTA_FN'][a] += len(gt_ids_t) - continue - - # Get matching scores between pairs of dets for optimizing HOTA - similarity = data['similarity_scores'][t] - score_mat = global_alignment_score[gt_ids_t[:, np.newaxis], tracker_ids_t[np.newaxis, :]] * similarity - - # Hungarian algorithm to find best matches - match_rows, match_cols = linear_sum_assignment(-score_mat) - - # Calculate and accumulate basic statistics - for a, alpha in enumerate(self.array_labels): - actually_matched_mask = similarity[match_rows, match_cols] >= alpha - np.finfo('float').eps - alpha_match_rows = match_rows[actually_matched_mask] - alpha_match_cols = match_cols[actually_matched_mask] - num_matches = len(alpha_match_rows) - res['HOTA_TP'][a] += num_matches - res['HOTA_FN'][a] += len(gt_ids_t) - num_matches - res['HOTA_FP'][a] += len(tracker_ids_t) - num_matches - if num_matches > 0: - res['LocA'][a] += sum(similarity[alpha_match_rows, alpha_match_cols]) - matches_counts[a][gt_ids_t[alpha_match_rows], tracker_ids_t[alpha_match_cols]] += 1 - - # Calculate association scores (AssA, AssRe, AssPr) for the alpha value. - # First calculate scores per gt_id/tracker_id combo and then average over the number of detections. - for a, alpha in enumerate(self.array_labels): - matches_count = matches_counts[a] - ass_a = matches_count / np.maximum(1, gt_id_count + tracker_id_count - matches_count) - res['AssA'][a] = np.sum(matches_count * ass_a) / np.maximum(1, res['HOTA_TP'][a]) - ass_re = matches_count / np.maximum(1, gt_id_count) - res['AssRe'][a] = np.sum(matches_count * ass_re) / np.maximum(1, res['HOTA_TP'][a]) - ass_pr = matches_count / np.maximum(1, tracker_id_count) - res['AssPr'][a] = np.sum(matches_count * ass_pr) / np.maximum(1, res['HOTA_TP'][a]) - - # Calculate final scores - res['LocA'] = np.maximum(1e-10, res['LocA']) / np.maximum(1e-10, res['HOTA_TP']) - res = self._compute_final_fields(res) - return res - - def combine_sequences(self, all_res): - """Combines metrics across all sequences""" - res = {} - for field in self.integer_array_fields: - res[field] = self._combine_sum(all_res, field) - for field in ['AssRe', 'AssPr', 'AssA']: - res[field] = self._combine_weighted_av(all_res, field, res, weight_field='HOTA_TP') - loca_weighted_sum = sum([all_res[k]['LocA'] * all_res[k]['HOTA_TP'] for k in all_res.keys()]) - res['LocA'] = np.maximum(1e-10, loca_weighted_sum) / np.maximum(1e-10, res['HOTA_TP']) - res = self._compute_final_fields(res) - return res - - def combine_classes_class_averaged(self, all_res, ignore_empty_classes=False): - """Combines metrics across all classes by averaging over the class values. - If 'ignore_empty_classes' is True, then it only sums over classes with at least one gt or predicted detection. - """ - res = {} - for field in self.integer_array_fields: - if ignore_empty_classes: - res[field] = self._combine_sum( - {k: v for k, v in all_res.items() - if (v['HOTA_TP'] + v['HOTA_FN'] + v['HOTA_FP'] > 0 + np.finfo('float').eps).any()}, field) - else: - res[field] = self._combine_sum({k: v for k, v in all_res.items()}, field) - - for field in self.float_fields + self.float_array_fields: - if ignore_empty_classes: - res[field] = np.mean([v[field] for v in all_res.values() if - (v['HOTA_TP'] + v['HOTA_FN'] + v['HOTA_FP'] > 0 + np.finfo('float').eps).any()], - axis=0) - else: - res[field] = np.mean([v[field] for v in all_res.values()], axis=0) - return res - - def combine_classes_det_averaged(self, all_res): - """Combines metrics across all classes by averaging over the detection values""" - res = {} - for field in self.integer_array_fields: - res[field] = self._combine_sum(all_res, field) - for field in ['AssRe', 'AssPr', 'AssA']: - res[field] = self._combine_weighted_av(all_res, field, res, weight_field='HOTA_TP') - loca_weighted_sum = sum([all_res[k]['LocA'] * all_res[k]['HOTA_TP'] for k in all_res.keys()]) - res['LocA'] = np.maximum(1e-10, loca_weighted_sum) / np.maximum(1e-10, res['HOTA_TP']) - res = self._compute_final_fields(res) - return res - - @staticmethod - def _compute_final_fields(res): - """Calculate sub-metric ('field') values which only depend on other sub-metric values. - This function is used both for both per-sequence calculation, and in combining values across sequences. - """ - res['DetRe'] = res['HOTA_TP'] / np.maximum(1, res['HOTA_TP'] + res['HOTA_FN']) - res['DetPr'] = res['HOTA_TP'] / np.maximum(1, res['HOTA_TP'] + res['HOTA_FP']) - res['DetA'] = res['HOTA_TP'] / np.maximum(1, res['HOTA_TP'] + res['HOTA_FN'] + res['HOTA_FP']) - res['HOTA'] = np.sqrt(res['DetA'] * res['AssA']) - res['OWTA'] = np.sqrt(res['DetRe'] * res['AssA']) - - res['HOTA(0)'] = res['HOTA'][0] - res['LocA(0)'] = res['LocA'][0] - res['HOTALocA(0)'] = res['HOTA(0)']*res['LocA(0)'] - return res - - def plot_single_tracker_results(self, table_res, tracker, cls, output_folder): - """Create plot of results""" - - # Only loaded when run to reduce minimum requirements - from matplotlib import pyplot as plt - - res = table_res['COMBINED_SEQ'] - styles_to_plot = ['r', 'b', 'g', 'b--', 'b:', 'g--', 'g:', 'm'] - for name, style in zip(self.float_array_fields, styles_to_plot): - plt.plot(self.array_labels, res[name], style) - plt.xlabel('alpha') - plt.ylabel('score') - plt.title(tracker + ' - ' + cls) - plt.axis([0, 1, 0, 1]) - legend = [] - for name in self.float_array_fields: - legend += [name + ' (' + str(np.round(np.mean(res[name]), 2)) + ')'] - plt.legend(legend, loc='lower left') - out_file = os.path.join(output_folder, cls + '_plot.pdf') - os.makedirs(os.path.dirname(out_file), exist_ok=True) - plt.savefig(out_file) - plt.savefig(out_file.replace('.pdf', '.png')) - plt.clf() diff --git a/spaces/xfys/yolov5_tracking/val_utils/trackeval/metrics/vace.py b/spaces/xfys/yolov5_tracking/val_utils/trackeval/metrics/vace.py deleted file mode 100644 index 81858d429c6e4943ca8c168c6683517ec2907b98..0000000000000000000000000000000000000000 --- a/spaces/xfys/yolov5_tracking/val_utils/trackeval/metrics/vace.py +++ /dev/null @@ -1,131 +0,0 @@ -import numpy as np -from scipy.optimize import linear_sum_assignment -from ._base_metric import _BaseMetric -from .. import _timing - - -class VACE(_BaseMetric): - """Class which implements the VACE metrics. - - The metrics are described in: - Manohar et al. (2006) "Performance Evaluation of Object Detection and Tracking in Video" - https://link.springer.com/chapter/10.1007/11612704_16 - - This implementation uses the "relaxed" variant of the metrics, - where an overlap threshold is applied in each frame. - """ - - def __init__(self, config=None): - super().__init__() - self.integer_fields = ['VACE_IDs', 'VACE_GT_IDs', 'num_non_empty_timesteps'] - self.float_fields = ['STDA', 'ATA', 'FDA', 'SFDA'] - self.fields = self.integer_fields + self.float_fields - self.summary_fields = ['SFDA', 'ATA'] - - # Fields that are accumulated over multiple videos. - self._additive_fields = self.integer_fields + ['STDA', 'FDA'] - - self.threshold = 0.5 - - @_timing.time - def eval_sequence(self, data): - """Calculates VACE metrics for one sequence. - - Depends on the fields: - data['num_gt_ids'] - data['num_tracker_ids'] - data['gt_ids'] - data['tracker_ids'] - data['similarity_scores'] - """ - res = {} - - # Obtain Average Tracking Accuracy (ATA) using track correspondence. - # Obtain counts necessary to compute temporal IOU. - # Assume that integer counts can be represented exactly as floats. - potential_matches_count = np.zeros((data['num_gt_ids'], data['num_tracker_ids'])) - gt_id_count = np.zeros(data['num_gt_ids']) - tracker_id_count = np.zeros(data['num_tracker_ids']) - both_present_count = np.zeros((data['num_gt_ids'], data['num_tracker_ids'])) - for t, (gt_ids_t, tracker_ids_t) in enumerate(zip(data['gt_ids'], data['tracker_ids'])): - # Count the number of frames in which two tracks satisfy the overlap criterion. - matches_mask = np.greater_equal(data['similarity_scores'][t], self.threshold) - match_idx_gt, match_idx_tracker = np.nonzero(matches_mask) - potential_matches_count[gt_ids_t[match_idx_gt], tracker_ids_t[match_idx_tracker]] += 1 - # Count the number of frames in which the tracks are present. - gt_id_count[gt_ids_t] += 1 - tracker_id_count[tracker_ids_t] += 1 - both_present_count[gt_ids_t[:, np.newaxis], tracker_ids_t[np.newaxis, :]] += 1 - # Number of frames in which either track is present (union of the two sets of frames). - union_count = (gt_id_count[:, np.newaxis] - + tracker_id_count[np.newaxis, :] - - both_present_count) - # The denominator should always be non-zero if all tracks are non-empty. - with np.errstate(divide='raise', invalid='raise'): - temporal_iou = potential_matches_count / union_count - # Find assignment that maximizes temporal IOU. - match_rows, match_cols = linear_sum_assignment(-temporal_iou) - res['STDA'] = temporal_iou[match_rows, match_cols].sum() - res['VACE_IDs'] = data['num_tracker_ids'] - res['VACE_GT_IDs'] = data['num_gt_ids'] - - # Obtain Frame Detection Accuracy (FDA) using per-frame correspondence. - non_empty_count = 0 - fda = 0 - for t, (gt_ids_t, tracker_ids_t) in enumerate(zip(data['gt_ids'], data['tracker_ids'])): - n_g = len(gt_ids_t) - n_d = len(tracker_ids_t) - if not (n_g or n_d): - continue - # n_g > 0 or n_d > 0 - non_empty_count += 1 - if not (n_g and n_d): - continue - # n_g > 0 and n_d > 0 - spatial_overlap = data['similarity_scores'][t] - match_rows, match_cols = linear_sum_assignment(-spatial_overlap) - overlap_ratio = spatial_overlap[match_rows, match_cols].sum() - fda += overlap_ratio / (0.5 * (n_g + n_d)) - res['FDA'] = fda - res['num_non_empty_timesteps'] = non_empty_count - - res.update(self._compute_final_fields(res)) - return res - - def combine_classes_class_averaged(self, all_res, ignore_empty_classes=True): - """Combines metrics across all classes by averaging over the class values. - If 'ignore_empty_classes' is True, then it only sums over classes with at least one gt or predicted detection. - """ - res = {} - for field in self.fields: - if ignore_empty_classes: - res[field] = np.mean([v[field] for v in all_res.values() - if v['VACE_GT_IDs'] > 0 or v['VACE_IDs'] > 0], axis=0) - else: - res[field] = np.mean([v[field] for v in all_res.values()], axis=0) - return res - - def combine_classes_det_averaged(self, all_res): - """Combines metrics across all classes by averaging over the detection values""" - res = {} - for field in self._additive_fields: - res[field] = _BaseMetric._combine_sum(all_res, field) - res = self._compute_final_fields(res) - return res - - def combine_sequences(self, all_res): - """Combines metrics across all sequences""" - res = {} - for header in self._additive_fields: - res[header] = _BaseMetric._combine_sum(all_res, header) - res.update(self._compute_final_fields(res)) - return res - - @staticmethod - def _compute_final_fields(additive): - final = {} - with np.errstate(invalid='ignore'): # Permit nan results. - final['ATA'] = (additive['STDA'] / - (0.5 * (additive['VACE_IDs'] + additive['VACE_GT_IDs']))) - final['SFDA'] = additive['FDA'] / additive['num_non_empty_timesteps'] - return final diff --git a/spaces/xiaoxuezi/spleeter/spleeter/separator.py b/spaces/xiaoxuezi/spleeter/spleeter/separator.py deleted file mode 100644 index da3831662b1fee8045c14db65ac55bf32d4aa407..0000000000000000000000000000000000000000 --- a/spaces/xiaoxuezi/spleeter/spleeter/separator.py +++ /dev/null @@ -1,465 +0,0 @@ -#!/usr/bin/env python -# coding: utf8 - -""" - Module that provides a class wrapper for source separation. - - Examples: - - ```python - >>> from spleeter.separator import Separator - >>> separator = Separator('spleeter:2stems') - >>> separator.separate(waveform, lambda instrument, data: ...) - >>> separator.separate_to_file(...) - ``` -""" - -import atexit -import os -from multiprocessing import Pool -from os.path import basename, dirname, join, splitext -from typing import Dict, Generator, Optional - -# pyright: reportMissingImports=false -# pylint: disable=import-error -import numpy as np -import tensorflow as tf -from librosa.core import istft, stft -from scipy.signal.windows import hann - -from spleeter.model.provider import ModelProvider - -from . import SpleeterError -from .audio import Codec, STFTBackend -from .audio.adapter import AudioAdapter -from .audio.convertor import to_stereo -from .model import EstimatorSpecBuilder, InputProviderFactory, model_fn -from .model.provider import ModelProvider -from .types1 import AudioDescriptor -from .utils.configuration import load_configuration - -# pylint: enable=import-error - -__email__ = "spleeter@deezer.com" -__author__ = "Deezer Research" -__license__ = "MIT License" - - -class DataGenerator(object): - """ - Generator object that store a sample and generate it once while called. - Used to feed a tensorflow estimator without knowing the whole data at - build time. - """ - - def __init__(self) -> None: - """ Default constructor. """ - self._current_data = None - - def update_data(self, data) -> None: - """ Replace internal data. """ - self._current_data = data - - def __call__(self) -> Generator: - """ Generation process. """ - buffer = self._current_data - while buffer: - yield buffer - buffer = self._current_data - - -def create_estimator(params, MWF): - """ - Initialize tensorflow estimator that will perform separation - - Params: - - params: a dictionary of parameters for building the model - - Returns: - a tensorflow estimator - """ - # Load model. - provider: ModelProvider = ModelProvider.default() - params["model_dir"] = provider.get(params["model_dir"]) - params["MWF"] = MWF - # Setup config - session_config = tf.compat.v1.ConfigProto() - session_config.gpu_options.per_process_gpu_memory_fraction = 0.7 - config = tf.estimator.RunConfig(session_config=session_config) - # Setup estimator - estimator = tf.estimator.Estimator( - model_fn=model_fn, model_dir=params["model_dir"], params=params, config=config - ) - return estimator - - -class Separator(object): - """ A wrapper class for performing separation. """ - - def __init__( - self, - params_descriptor: str, - MWF: bool = False, - stft_backend: STFTBackend = STFTBackend.AUTO, - multiprocess: bool = True, - ) -> None: - """ - Default constructor. - - Parameters: - params_descriptor (str): - Descriptor for TF params to be used. - MWF (bool): - (Optional) `True` if MWF should be used, `False` otherwise. - """ - self._params = load_configuration(params_descriptor) - self._sample_rate = self._params["sample_rate"] - self._MWF = MWF - self._tf_graph = tf.Graph() - self._prediction_generator = None - self._input_provider = None - self._builder = None - self._features = None - self._session = None - if multiprocess: - self._pool = Pool() - atexit.register(self._pool.close) - else: - self._pool = None - self._tasks = [] - self._params["stft_backend"] = STFTBackend.resolve(stft_backend) - self._data_generator = DataGenerator() - - def __del__(self) -> None: - if self._session: - self._session.close() - - def _get_prediction_generator(self) -> Generator: - """ - Lazy loading access method for internal prediction generator - returned by the predict method of a tensorflow estimator. - - Returns: - Generator: - Generator of prediction. - """ - if self._prediction_generator is None: - estimator = create_estimator(self._params, self._MWF) - - def get_dataset(): - return tf.data.Dataset.from_generator( - self._data_generator, - output_types={"waveform": tf.float32, "audio_id": tf.string}, - output_shapes={"waveform": (None, 2), "audio_id": ()}, - ) - - self._prediction_generator = estimator.predict( - get_dataset, yield_single_examples=False - ) - return self._prediction_generator - - def join(self, timeout: int = 200) -> None: - """ - Wait for all pending tasks to be finished. - - Parameters: - timeout (int): - (Optional) task waiting timeout. - """ - while len(self._tasks) > 0: - task = self._tasks.pop() - task.get() - task.wait(timeout=timeout) - - def _stft( - self, data: np.ndarray, inverse: bool = False, length: Optional[int] = None - ) -> np.ndarray: - """ - Single entrypoint for both stft and istft. This computes stft and - istft with librosa on stereo data. The two channels are processed - separately and are concatenated together in the result. The - expected input formats are: (n_samples, 2) for stft and (T, F, 2) - for istft. - - Parameters: - data (numpy.array): - Array with either the waveform or the complex spectrogram - depending on the parameter inverse - inverse (bool): - (Optional) Should a stft or an istft be computed. - length (Optional[int]): - - Returns: - numpy.ndarray: - Stereo data as numpy array for the transform. The channels - are stored in the last dimension. - """ - assert not (inverse and length is None) - data = np.asfortranarray(data) - N = self._params["frame_length"] - H = self._params["frame_step"] - win = hann(N, sym=False) - fstft = istft if inverse else stft - win_len_arg = {"win_length": None, "length": None} if inverse else {"n_fft": N} - n_channels = data.shape[-1] - out = [] - for c in range(n_channels): - d = ( - np.concatenate((np.zeros((N,)), data[:, c], np.zeros((N,)))) - if not inverse - else data[:, :, c].T - ) - s = fstft(d, hop_length=H, window=win, center=False, **win_len_arg) - if inverse: - s = s[N : N + length] - s = np.expand_dims(s.T, 2 - inverse) - out.append(s) - if len(out) == 1: - return out[0] - return np.concatenate(out, axis=2 - inverse) - - def _get_input_provider(self): - if self._input_provider is None: - self._input_provider = InputProviderFactory.get(self._params) - return self._input_provider - - def _get_features(self): - if self._features is None: - provider = self._get_input_provider() - self._features = provider.get_input_dict_placeholders() - return self._features - - def _get_builder(self): - if self._builder is None: - self._builder = EstimatorSpecBuilder(self._get_features(), self._params) - return self._builder - - def _get_session(self): - if self._session is None: - saver = tf.compat.v1.train.Saver() - provider = ModelProvider.default() - model_directory: str = provider.get(self._params["model_dir"]) - latest_checkpoint = tf.train.latest_checkpoint(model_directory) - self._session = tf.compat.v1.Session() - saver.restore(self._session, latest_checkpoint) - return self._session - - def _separate_librosa( - self, waveform: np.ndarray, audio_descriptor: AudioDescriptor - ) -> Dict: - """ - Performs separation with librosa backend for STFT. - - Parameters: - waveform (numpy.ndarray): - Waveform to be separated (as a numpy array) - audio_descriptor (AudioDescriptor): - """ - with self._tf_graph.as_default(): - out = {} - features = self._get_features() - # TODO: fix the logic, build sometimes return, - # sometimes set attribute. - outputs = self._get_builder().outputs - stft = self._stft(waveform) - if stft.shape[-1] == 1: - stft = np.concatenate([stft, stft], axis=-1) - elif stft.shape[-1] > 2: - stft = stft[:, :2] - sess = self._get_session() - outputs = sess.run( - outputs, - feed_dict=self._get_input_provider().get_feed_dict( - features, stft, audio_descriptor - ), - ) - for inst in self._get_builder().instruments: - out[inst] = self._stft( - outputs[inst], inverse=True, length=waveform.shape[0] - ) - return out - - def _separate_tensorflow( - self, waveform: np.ndarray, audio_descriptor: AudioDescriptor - ) -> Dict: - """ - Performs source separation over the given waveform with tensorflow - backend. - - Parameters: - waveform (numpy.ndarray): - Waveform to be separated (as a numpy array) - audio_descriptor (AudioDescriptor): - - Returns: - Separated waveforms. - """ - if not waveform.shape[-1] == 2: - waveform = to_stereo(waveform) - prediction_generator = self._get_prediction_generator() - # NOTE: update data in generator before performing separation. - self._data_generator.update_data( - {"waveform": waveform, "audio_id": np.array(audio_descriptor)} - ) - # NOTE: perform separation. - prediction = next(prediction_generator) - prediction.pop("audio_id") - return prediction - - def separate( - self, waveform: np.ndarray, audio_descriptor: Optional[str] = "" - ) -> None: - """ - Performs separation on a waveform. - - Parameters: - waveform (numpy.ndarray): - Waveform to be separated (as a numpy array) - audio_descriptor (str): - (Optional) string describing the waveform (e.g. filename). - """ - backend: str = self._params["stft_backend"] - if backend == STFTBackend.TENSORFLOW: - return self._separate_tensorflow(waveform, audio_descriptor) - elif backend == STFTBackend.LIBROSA: - return self._separate_librosa(waveform, audio_descriptor) - raise ValueError(f"Unsupported STFT backend {backend}") - - def separate_to_file( - self, - audio_descriptor: AudioDescriptor, - destination: str, - audio_adapter: Optional[AudioAdapter] = None, - offset: int = 0, - duration: float = 600.0, - codec: Codec = Codec.WAV, - bitrate: str = "128k", - filename_format: str = "{filename}/{instrument}.{codec}", - synchronous: bool = True, - ) -> None: - """ - Performs source separation and export result to file using - given audio adapter. - - Filename format should be a Python formattable string that could - use following parameters : - - - {instrument} - - {filename} - - {foldername} - - {codec}. - - Parameters: - audio_descriptor (AudioDescriptor): - Describe song to separate, used by audio adapter to - retrieve and load audio data, in case of file based - audio adapter, such descriptor would be a file path. - destination (str): - Target directory to write output to. - audio_adapter (Optional[AudioAdapter]): - (Optional) Audio adapter to use for I/O. - offset (int): - (Optional) Offset of loaded song. - duration (float): - (Optional) Duration of loaded song (default: 600s). - codec (Codec): - (Optional) Export codec. - bitrate (str): - (Optional) Export bitrate. - filename_format (str): - (Optional) Filename format. - synchronous (bool): - (Optional) True is should by synchronous. - """ - if audio_adapter is None: - audio_adapter = AudioAdapter.default() - waveform, _ = audio_adapter.load( - audio_descriptor, - offset=offset, - duration=duration, - sample_rate=self._sample_rate, - ) - sources = self.separate(waveform, audio_descriptor) - self.save_to_file( - sources, - audio_descriptor, - destination, - filename_format, - codec, - audio_adapter, - bitrate, - synchronous, - ) - - def save_to_file( - self, - sources: Dict, - audio_descriptor: AudioDescriptor, - destination: str, - filename_format: str = "{filename}/{instrument}.{codec}", - codec: Codec = Codec.WAV, - audio_adapter: Optional[AudioAdapter] = None, - bitrate: str = "128k", - synchronous: bool = True, - ) -> None: - """ - Export dictionary of sources to files. - - Parameters: - sources (Dict): - Dictionary of sources to be exported. The keys are the name - of the instruments, and the values are `N x 2` numpy arrays - containing the corresponding intrument waveform, as - returned by the separate method - audio_descriptor (AudioDescriptor): - Describe song to separate, used by audio adapter to - retrieve and load audio data, in case of file based audio - adapter, such descriptor would be a file path. - destination (str): - Target directory to write output to. - filename_format (str): - (Optional) Filename format. - codec (Codec): - (Optional) Export codec. - audio_adapter (Optional[AudioAdapter]): - (Optional) Audio adapter to use for I/O. - bitrate (str): - (Optional) Export bitrate. - synchronous (bool): - (Optional) True is should by synchronous. - """ - if audio_adapter is None: - audio_adapter = AudioAdapter.default() - foldername = basename(dirname(audio_descriptor)) - filename = splitext(basename(audio_descriptor))[0] - generated = [] - for instrument, data in sources.items(): - path = join( - destination, - filename_format.format( - filename=filename, - instrument=instrument, - foldername=foldername, - codec=codec, - ), - ) - directory = os.path.dirname(path) - if not os.path.exists(directory): - os.makedirs(directory) - if path in generated: - raise SpleeterError( - ( - f"Separated source path conflict : {path}," - "please check your filename format" - ) - ) - generated.append(path) - if self._pool: - task = self._pool.apply_async( - audio_adapter.save, (path, data, self._sample_rate, codec, bitrate) - ) - self._tasks.append(task) - else: - audio_adapter.save(path, data, self._sample_rate, codec, bitrate) - if synchronous and self._pool: - self.join() diff --git a/spaces/xin/PatentSolver/README.md b/spaces/xin/PatentSolver/README.md deleted file mode 100644 index 05be59f6234db7a1c1e6442d437a6bb343da2612..0000000000000000000000000000000000000000 --- a/spaces/xin/PatentSolver/README.md +++ /dev/null @@ -1,37 +0,0 @@ ---- -title: PatentSolver -emoji: 🚀 -colorFrom: gray -colorTo: gray -sdk: streamlit -app_file: app.py -pinned: false ---- - -# Configuration - -`title`: _string_ -Display title for the Space - -`emoji`: _string_ -Space emoji (emoji-only character allowed) - -`colorFrom`: _string_ -Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray) - -`colorTo`: _string_ -Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray) - -`sdk`: _string_ -Can be either `gradio` or `streamlit` - -`sdk_version` : _string_ -Only applicable for `streamlit` SDK. -See [doc](https://hf.co/docs/hub/spaces) for more info on supported versions. - -`app_file`: _string_ -Path to your main application file (which contains either `gradio` or `streamlit` Python code). -Path is relative to the root of the repository. - -`pinned`: _boolean_ -Whether the Space stays on top of your list. diff --git a/spaces/xuetao/bingo3/src/components/turn-counter.tsx b/spaces/xuetao/bingo3/src/components/turn-counter.tsx deleted file mode 100644 index 08a9e488f044802a8600f4d195b106567c35aab4..0000000000000000000000000000000000000000 --- a/spaces/xuetao/bingo3/src/components/turn-counter.tsx +++ /dev/null @@ -1,23 +0,0 @@ -import React from 'react' -import { Throttling } from '@/lib/bots/bing/types' - -export interface TurnCounterProps { - throttling?: Throttling -} - -export function TurnCounter({ throttling }: TurnCounterProps) { - if (!throttling) { - return null - } - - return ( - <div className="turn-counter"> - <div className="text"> - <span>{throttling.numUserMessagesInConversation}</span> - <span> 共 </span> - <span>{throttling.maxNumUserMessagesInConversation}</span> - </div> - <div className="indicator"></div> - </div> - ) -} diff --git a/spaces/xuetao/bingo3/src/pages/api/sydney.ts b/spaces/xuetao/bingo3/src/pages/api/sydney.ts deleted file mode 100644 index 0e7bbf23d77c2e1a6635185a060eeee58b8c8e66..0000000000000000000000000000000000000000 --- a/spaces/xuetao/bingo3/src/pages/api/sydney.ts +++ /dev/null @@ -1,62 +0,0 @@ -import { NextApiRequest, NextApiResponse } from 'next' -import { WebSocket, debug } from '@/lib/isomorphic' -import { BingWebBot } from '@/lib/bots/bing' -import { websocketUtils } from '@/lib/bots/bing/utils' -import { WatchDog, createHeaders } from '@/lib/utils' - - -export default async function handler(req: NextApiRequest, res: NextApiResponse) { - const conversationContext = req.body - const headers = createHeaders(req.cookies) - debug(headers) - res.setHeader('Content-Type', 'text/stream; charset=UTF-8') - - const ws = new WebSocket('wss://sydney.bing.com/sydney/ChatHub', { - headers: { - ...headers, - 'accept-language': 'zh-CN,zh;q=0.9', - 'cache-control': 'no-cache', - 'x-ms-useragent': 'azsdk-js-api-client-factory/1.0.0-beta.1 core-rest-pipeline/1.10.0 OS/Win32', - pragma: 'no-cache', - } - }) - - const closeDog = new WatchDog() - const timeoutDog = new WatchDog() - ws.onmessage = (event) => { - timeoutDog.watch(() => { - ws.send(websocketUtils.packMessage({ type: 6 })) - }, 1500) - closeDog.watch(() => { - ws.close() - }, 10000) - res.write(event.data) - if (/\{"type":([367])\}/.test(String(event.data))) { - const type = parseInt(RegExp.$1, 10) - debug('connection type', type) - if (type === 3) { - ws.close() - } else { - ws.send(websocketUtils.packMessage({ type })) - } - } - } - - ws.onclose = () => { - timeoutDog.reset() - closeDog.reset() - debug('connection close') - res.end() - } - - await new Promise((resolve) => ws.onopen = resolve) - ws.send(websocketUtils.packMessage({ protocol: 'json', version: 1 })) - ws.send(websocketUtils.packMessage({ type: 6 })) - ws.send(websocketUtils.packMessage(BingWebBot.buildChatRequest(conversationContext!))) - req.socket.once('close', () => { - ws.close() - if (!res.closed) { - res.end() - } - }) -} diff --git a/spaces/xuxw98/TAPA/tests/test_rope.py b/spaces/xuxw98/TAPA/tests/test_rope.py deleted file mode 100644 index 37e993ab5471cebc1f02b9b96b4762762a2bd648..0000000000000000000000000000000000000000 --- a/spaces/xuxw98/TAPA/tests/test_rope.py +++ /dev/null @@ -1,17 +0,0 @@ -import torch - - -@torch.no_grad() -def test_rope(lit_llama, orig_llama) -> None: - torch.manual_seed(1) - - bs, seq_len, n_head, n_embed = 1, 6, 2, 8 - x = torch.randint(0, 10000, size=(bs, seq_len, n_head, n_embed // n_head)).float() - - freqs_cis = orig_llama.precompute_freqs_cis(n_embed // n_head, seq_len) - llama_rope_cache = lit_llama.build_rope_cache(seq_len, n_embed // n_head, dtype=x.dtype, device=x.device) - torch.testing.assert_close(freqs_cis, torch.view_as_complex(llama_rope_cache)) - - llama_x_rope = lit_llama.apply_rope(x, llama_rope_cache) - orig_llama_x_rope, _ = orig_llama.apply_rotary_emb(x, x, freqs_cis) - torch.testing.assert_close(llama_x_rope, orig_llama_x_rope) diff --git a/spaces/xuyingliKepler/VecDBCompare/README.md b/spaces/xuyingliKepler/VecDBCompare/README.md deleted file mode 100644 index 46e7d0ec683a6bf78f807f84358dab7ce990b4a9..0000000000000000000000000000000000000000 --- a/spaces/xuyingliKepler/VecDBCompare/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: VecDBCompare -emoji: 🦀 -colorFrom: gray -colorTo: pink -sdk: streamlit -sdk_version: 1.27.2 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/yderre-aubay/midi-player-demo/src/main/stores/SettingStore.ts b/spaces/yderre-aubay/midi-player-demo/src/main/stores/SettingStore.ts deleted file mode 100644 index efaa650065f43693aac5bb509fc3902c4ef7e2f8..0000000000000000000000000000000000000000 --- a/spaces/yderre-aubay/midi-player-demo/src/main/stores/SettingStore.ts +++ /dev/null @@ -1,19 +0,0 @@ -import { makeObservable, observable } from "mobx" -import { makePersistable } from "mobx-persist-store" -import { Language } from "../../common/localize/localizedString" - -export default class SettingStore { - language: Language | null = null - - constructor() { - makeObservable(this, { - language: observable, - }) - - makePersistable(this, { - name: "SettingStore", - properties: ["language"], - storage: window.localStorage, - }) - } -} diff --git a/spaces/yizhangliu/Grounded-Segment-Anything/GroundingDINO/groundingdino/models/GroundingDINO/csrc/MsDeformAttn/ms_deform_attn_cuda.h b/spaces/yizhangliu/Grounded-Segment-Anything/GroundingDINO/groundingdino/models/GroundingDINO/csrc/MsDeformAttn/ms_deform_attn_cuda.h deleted file mode 100644 index ad1311a78f61303616504eb991aaa9c4a93d9948..0000000000000000000000000000000000000000 --- a/spaces/yizhangliu/Grounded-Segment-Anything/GroundingDINO/groundingdino/models/GroundingDINO/csrc/MsDeformAttn/ms_deform_attn_cuda.h +++ /dev/null @@ -1,33 +0,0 @@ -/*! -************************************************************************************************** -* Deformable DETR -* Copyright (c) 2020 SenseTime. All Rights Reserved. -* Licensed under the Apache License, Version 2.0 [see LICENSE for details] -************************************************************************************************** -* Modified from https://github.com/chengdazhi/Deformable-Convolution-V2-PyTorch/tree/pytorch_1.0.0 -************************************************************************************************** -*/ - -#pragma once -#include <torch/extension.h> - -namespace groundingdino { - -at::Tensor ms_deform_attn_cuda_forward( - const at::Tensor &value, - const at::Tensor &spatial_shapes, - const at::Tensor &level_start_index, - const at::Tensor &sampling_loc, - const at::Tensor &attn_weight, - const int im2col_step); - -std::vector<at::Tensor> ms_deform_attn_cuda_backward( - const at::Tensor &value, - const at::Tensor &spatial_shapes, - const at::Tensor &level_start_index, - const at::Tensor &sampling_loc, - const at::Tensor &attn_weight, - const at::Tensor &grad_output, - const int im2col_step); - -} // namespace groundingdino \ No newline at end of file diff --git a/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/models/glpn/modeling_glpn.py b/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/models/glpn/modeling_glpn.py deleted file mode 100644 index d2ddef5c41e1e519ecb14ea9bea468ca07c7929d..0000000000000000000000000000000000000000 --- a/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/models/glpn/modeling_glpn.py +++ /dev/null @@ -1,780 +0,0 @@ -# coding=utf-8 -# Copyright 2022 KAIST and The HuggingFace Inc. team. All rights reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. -""" PyTorch GLPN model.""" - - -import math -from typing import List, Optional, Tuple, Union - -import torch -import torch.utils.checkpoint -from torch import nn - -from ...activations import ACT2FN -from ...modeling_outputs import BaseModelOutput, DepthEstimatorOutput -from ...modeling_utils import PreTrainedModel -from ...pytorch_utils import find_pruneable_heads_and_indices, prune_linear_layer -from ...utils import ( - add_code_sample_docstrings, - add_start_docstrings, - add_start_docstrings_to_model_forward, - logging, - replace_return_docstrings, -) -from .configuration_glpn import GLPNConfig - - -logger = logging.get_logger(__name__) - - -# General docstring -_CONFIG_FOR_DOC = "GLPNConfig" - -# Base docstring -_CHECKPOINT_FOR_DOC = "vinvino02/glpn-kitti" -_EXPECTED_OUTPUT_SHAPE = [1, 512, 15, 20] - -GLPN_PRETRAINED_MODEL_ARCHIVE_LIST = [ - "vinvino02/glpn-kitti", - # See all GLPN models at https://huggingface.co/models?filter=glpn -] - - -# Copied from transformers.models.beit.modeling_beit.drop_path -def drop_path(input: torch.Tensor, drop_prob: float = 0.0, training: bool = False) -> torch.Tensor: - """ - Drop paths (Stochastic Depth) per sample (when applied in main path of residual blocks). - - Comment by Ross Wightman: This is the same as the DropConnect impl I created for EfficientNet, etc networks, - however, the original name is misleading as 'Drop Connect' is a different form of dropout in a separate paper... - See discussion: https://github.com/tensorflow/tpu/issues/494#issuecomment-532968956 ... I've opted for changing the - layer and argument names to 'drop path' rather than mix DropConnect as a layer name and use 'survival rate' as the - argument. - """ - if drop_prob == 0.0 or not training: - return input - keep_prob = 1 - drop_prob - shape = (input.shape[0],) + (1,) * (input.ndim - 1) # work with diff dim tensors, not just 2D ConvNets - random_tensor = keep_prob + torch.rand(shape, dtype=input.dtype, device=input.device) - random_tensor.floor_() # binarize - output = input.div(keep_prob) * random_tensor - return output - - -# Copied from transformers.models.segformer.modeling_segformer.SegformerDropPath -class GLPNDropPath(nn.Module): - """Drop paths (Stochastic Depth) per sample (when applied in main path of residual blocks).""" - - def __init__(self, drop_prob: Optional[float] = None) -> None: - super().__init__() - self.drop_prob = drop_prob - - def forward(self, hidden_states: torch.Tensor) -> torch.Tensor: - return drop_path(hidden_states, self.drop_prob, self.training) - - def extra_repr(self) -> str: - return "p={}".format(self.drop_prob) - - -# Copied from transformers.models.segformer.modeling_segformer.SegformerOverlapPatchEmbeddings -class GLPNOverlapPatchEmbeddings(nn.Module): - """Construct the overlapping patch embeddings.""" - - def __init__(self, patch_size, stride, num_channels, hidden_size): - super().__init__() - self.proj = nn.Conv2d( - num_channels, - hidden_size, - kernel_size=patch_size, - stride=stride, - padding=patch_size // 2, - ) - - self.layer_norm = nn.LayerNorm(hidden_size) - - def forward(self, pixel_values): - embeddings = self.proj(pixel_values) - _, _, height, width = embeddings.shape - # (batch_size, num_channels, height, width) -> (batch_size, num_channels, height*width) -> (batch_size, height*width, num_channels) - # this can be fed to a Transformer layer - embeddings = embeddings.flatten(2).transpose(1, 2) - embeddings = self.layer_norm(embeddings) - return embeddings, height, width - - -# Copied from transformers.models.segformer.modeling_segformer.SegformerEfficientSelfAttention -class GLPNEfficientSelfAttention(nn.Module): - """SegFormer's efficient self-attention mechanism. Employs the sequence reduction process introduced in the [PvT - paper](https://arxiv.org/abs/2102.12122).""" - - def __init__(self, config, hidden_size, num_attention_heads, sequence_reduction_ratio): - super().__init__() - self.hidden_size = hidden_size - self.num_attention_heads = num_attention_heads - - if self.hidden_size % self.num_attention_heads != 0: - raise ValueError( - f"The hidden size ({self.hidden_size}) is not a multiple of the number of attention " - f"heads ({self.num_attention_heads})" - ) - - self.attention_head_size = int(self.hidden_size / self.num_attention_heads) - self.all_head_size = self.num_attention_heads * self.attention_head_size - - self.query = nn.Linear(self.hidden_size, self.all_head_size) - self.key = nn.Linear(self.hidden_size, self.all_head_size) - self.value = nn.Linear(self.hidden_size, self.all_head_size) - - self.dropout = nn.Dropout(config.attention_probs_dropout_prob) - - self.sr_ratio = sequence_reduction_ratio - if sequence_reduction_ratio > 1: - self.sr = nn.Conv2d( - hidden_size, hidden_size, kernel_size=sequence_reduction_ratio, stride=sequence_reduction_ratio - ) - self.layer_norm = nn.LayerNorm(hidden_size) - - def transpose_for_scores(self, hidden_states): - new_shape = hidden_states.size()[:-1] + (self.num_attention_heads, self.attention_head_size) - hidden_states = hidden_states.view(new_shape) - return hidden_states.permute(0, 2, 1, 3) - - def forward( - self, - hidden_states, - height, - width, - output_attentions=False, - ): - query_layer = self.transpose_for_scores(self.query(hidden_states)) - - if self.sr_ratio > 1: - batch_size, seq_len, num_channels = hidden_states.shape - # Reshape to (batch_size, num_channels, height, width) - hidden_states = hidden_states.permute(0, 2, 1).reshape(batch_size, num_channels, height, width) - # Apply sequence reduction - hidden_states = self.sr(hidden_states) - # Reshape back to (batch_size, seq_len, num_channels) - hidden_states = hidden_states.reshape(batch_size, num_channels, -1).permute(0, 2, 1) - hidden_states = self.layer_norm(hidden_states) - - key_layer = self.transpose_for_scores(self.key(hidden_states)) - value_layer = self.transpose_for_scores(self.value(hidden_states)) - - # Take the dot product between "query" and "key" to get the raw attention scores. - attention_scores = torch.matmul(query_layer, key_layer.transpose(-1, -2)) - - attention_scores = attention_scores / math.sqrt(self.attention_head_size) - - # Normalize the attention scores to probabilities. - attention_probs = nn.functional.softmax(attention_scores, dim=-1) - - # This is actually dropping out entire tokens to attend to, which might - # seem a bit unusual, but is taken from the original Transformer paper. - attention_probs = self.dropout(attention_probs) - - context_layer = torch.matmul(attention_probs, value_layer) - - context_layer = context_layer.permute(0, 2, 1, 3).contiguous() - new_context_layer_shape = context_layer.size()[:-2] + (self.all_head_size,) - context_layer = context_layer.view(new_context_layer_shape) - - outputs = (context_layer, attention_probs) if output_attentions else (context_layer,) - - return outputs - - -# Copied from transformers.models.segformer.modeling_segformer.SegformerSelfOutput -class GLPNSelfOutput(nn.Module): - def __init__(self, config, hidden_size): - super().__init__() - self.dense = nn.Linear(hidden_size, hidden_size) - self.dropout = nn.Dropout(config.hidden_dropout_prob) - - def forward(self, hidden_states, input_tensor): - hidden_states = self.dense(hidden_states) - hidden_states = self.dropout(hidden_states) - return hidden_states - - -# Copied from transformers.models.segformer.modeling_segformer.SegformerAttention with Segformer->GLPN -class GLPNAttention(nn.Module): - def __init__(self, config, hidden_size, num_attention_heads, sequence_reduction_ratio): - super().__init__() - self.self = GLPNEfficientSelfAttention( - config=config, - hidden_size=hidden_size, - num_attention_heads=num_attention_heads, - sequence_reduction_ratio=sequence_reduction_ratio, - ) - self.output = GLPNSelfOutput(config, hidden_size=hidden_size) - self.pruned_heads = set() - - def prune_heads(self, heads): - if len(heads) == 0: - return - heads, index = find_pruneable_heads_and_indices( - heads, self.self.num_attention_heads, self.self.attention_head_size, self.pruned_heads - ) - - # Prune linear layers - self.self.query = prune_linear_layer(self.self.query, index) - self.self.key = prune_linear_layer(self.self.key, index) - self.self.value = prune_linear_layer(self.self.value, index) - self.output.dense = prune_linear_layer(self.output.dense, index, dim=1) - - # Update hyper params and store pruned heads - self.self.num_attention_heads = self.self.num_attention_heads - len(heads) - self.self.all_head_size = self.self.attention_head_size * self.self.num_attention_heads - self.pruned_heads = self.pruned_heads.union(heads) - - def forward(self, hidden_states, height, width, output_attentions=False): - self_outputs = self.self(hidden_states, height, width, output_attentions) - - attention_output = self.output(self_outputs[0], hidden_states) - outputs = (attention_output,) + self_outputs[1:] # add attentions if we output them - return outputs - - -# Copied from transformers.models.segformer.modeling_segformer.SegformerDWConv -class GLPNDWConv(nn.Module): - def __init__(self, dim=768): - super().__init__() - self.dwconv = nn.Conv2d(dim, dim, 3, 1, 1, bias=True, groups=dim) - - def forward(self, hidden_states, height, width): - batch_size, seq_len, num_channels = hidden_states.shape - hidden_states = hidden_states.transpose(1, 2).view(batch_size, num_channels, height, width) - hidden_states = self.dwconv(hidden_states) - hidden_states = hidden_states.flatten(2).transpose(1, 2) - - return hidden_states - - -# Copied from transformers.models.segformer.modeling_segformer.SegformerMixFFN with Segformer->GLPN -class GLPNMixFFN(nn.Module): - def __init__(self, config, in_features, hidden_features=None, out_features=None): - super().__init__() - out_features = out_features or in_features - self.dense1 = nn.Linear(in_features, hidden_features) - self.dwconv = GLPNDWConv(hidden_features) - if isinstance(config.hidden_act, str): - self.intermediate_act_fn = ACT2FN[config.hidden_act] - else: - self.intermediate_act_fn = config.hidden_act - self.dense2 = nn.Linear(hidden_features, out_features) - self.dropout = nn.Dropout(config.hidden_dropout_prob) - - def forward(self, hidden_states, height, width): - hidden_states = self.dense1(hidden_states) - hidden_states = self.dwconv(hidden_states, height, width) - hidden_states = self.intermediate_act_fn(hidden_states) - hidden_states = self.dropout(hidden_states) - hidden_states = self.dense2(hidden_states) - hidden_states = self.dropout(hidden_states) - return hidden_states - - -# Copied from transformers.models.segformer.modeling_segformer.SegformerLayer with Segformer->GLPN -class GLPNLayer(nn.Module): - """This corresponds to the Block class in the original implementation.""" - - def __init__(self, config, hidden_size, num_attention_heads, drop_path, sequence_reduction_ratio, mlp_ratio): - super().__init__() - self.layer_norm_1 = nn.LayerNorm(hidden_size) - self.attention = GLPNAttention( - config, - hidden_size=hidden_size, - num_attention_heads=num_attention_heads, - sequence_reduction_ratio=sequence_reduction_ratio, - ) - self.drop_path = GLPNDropPath(drop_path) if drop_path > 0.0 else nn.Identity() - self.layer_norm_2 = nn.LayerNorm(hidden_size) - mlp_hidden_size = int(hidden_size * mlp_ratio) - self.mlp = GLPNMixFFN(config, in_features=hidden_size, hidden_features=mlp_hidden_size) - - def forward(self, hidden_states, height, width, output_attentions=False): - self_attention_outputs = self.attention( - self.layer_norm_1(hidden_states), # in GLPN, layernorm is applied before self-attention - height, - width, - output_attentions=output_attentions, - ) - - attention_output = self_attention_outputs[0] - outputs = self_attention_outputs[1:] # add self attentions if we output attention weights - - # first residual connection (with stochastic depth) - attention_output = self.drop_path(attention_output) - hidden_states = attention_output + hidden_states - - mlp_output = self.mlp(self.layer_norm_2(hidden_states), height, width) - - # second residual connection (with stochastic depth) - mlp_output = self.drop_path(mlp_output) - layer_output = mlp_output + hidden_states - - outputs = (layer_output,) + outputs - - return outputs - - -class GLPNEncoder(nn.Module): - def __init__(self, config): - super().__init__() - self.config = config - - # stochastic depth decay rule - dpr = [x.item() for x in torch.linspace(0, config.drop_path_rate, sum(config.depths))] - - # patch embeddings - embeddings = [] - for i in range(config.num_encoder_blocks): - embeddings.append( - GLPNOverlapPatchEmbeddings( - patch_size=config.patch_sizes[i], - stride=config.strides[i], - num_channels=config.num_channels if i == 0 else config.hidden_sizes[i - 1], - hidden_size=config.hidden_sizes[i], - ) - ) - self.patch_embeddings = nn.ModuleList(embeddings) - - # Transformer blocks - blocks = [] - cur = 0 - for i in range(config.num_encoder_blocks): - # each block consists of layers - layers = [] - if i != 0: - cur += config.depths[i - 1] - for j in range(config.depths[i]): - layers.append( - GLPNLayer( - config, - hidden_size=config.hidden_sizes[i], - num_attention_heads=config.num_attention_heads[i], - drop_path=dpr[cur + j], - sequence_reduction_ratio=config.sr_ratios[i], - mlp_ratio=config.mlp_ratios[i], - ) - ) - blocks.append(nn.ModuleList(layers)) - - self.block = nn.ModuleList(blocks) - - # Layer norms - self.layer_norm = nn.ModuleList( - [nn.LayerNorm(config.hidden_sizes[i]) for i in range(config.num_encoder_blocks)] - ) - - def forward( - self, - pixel_values, - output_attentions=False, - output_hidden_states=False, - return_dict=True, - ): - all_hidden_states = () if output_hidden_states else None - all_self_attentions = () if output_attentions else None - - batch_size = pixel_values.shape[0] - - hidden_states = pixel_values - for idx, x in enumerate(zip(self.patch_embeddings, self.block, self.layer_norm)): - embedding_layer, block_layer, norm_layer = x - # first, obtain patch embeddings - hidden_states, height, width = embedding_layer(hidden_states) - # second, send embeddings through blocks - for i, blk in enumerate(block_layer): - layer_outputs = blk(hidden_states, height, width, output_attentions) - hidden_states = layer_outputs[0] - if output_attentions: - all_self_attentions = all_self_attentions + (layer_outputs[1],) - # third, apply layer norm - hidden_states = norm_layer(hidden_states) - # fourth, optionally reshape back to (batch_size, num_channels, height, width) - hidden_states = hidden_states.reshape(batch_size, height, width, -1).permute(0, 3, 1, 2).contiguous() - if output_hidden_states: - all_hidden_states = all_hidden_states + (hidden_states,) - - if not return_dict: - return tuple(v for v in [hidden_states, all_hidden_states, all_self_attentions] if v is not None) - return BaseModelOutput( - last_hidden_state=hidden_states, - hidden_states=all_hidden_states, - attentions=all_self_attentions, - ) - - -class GLPNPreTrainedModel(PreTrainedModel): - """ - An abstract class to handle weights initialization and a simple interface for downloading and loading pretrained - models. - """ - - config_class = GLPNConfig - base_model_prefix = "glpn" - main_input_name = "pixel_values" - - # Copied from transformers.models.segformer.modeling_segformer.SegformerPreTrainedModel._init_weights - def _init_weights(self, module): - """Initialize the weights""" - if isinstance(module, (nn.Linear, nn.Conv2d)): - # Slightly different from the TF version which uses truncated_normal for initialization - # cf https://github.com/pytorch/pytorch/pull/5617 - module.weight.data.normal_(mean=0.0, std=self.config.initializer_range) - if module.bias is not None: - module.bias.data.zero_() - elif isinstance(module, nn.Embedding): - module.weight.data.normal_(mean=0.0, std=self.config.initializer_range) - if module.padding_idx is not None: - module.weight.data[module.padding_idx].zero_() - elif isinstance(module, nn.LayerNorm): - module.bias.data.zero_() - module.weight.data.fill_(1.0) - - -GLPN_START_DOCSTRING = r""" - This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) sub-class. Use - it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and - behavior. - - Parameters: - config ([`GLPNConfig`]): Model configuration class with all the parameters of the model. - Initializing with a config file does not load the weights associated with the model, only the - configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. -""" - -GLPN_INPUTS_DOCSTRING = r""" - - Args: - pixel_values (`torch.FloatTensor` of shape `(batch_size, num_channels, height, width)`): - Pixel values. Padding will be ignored by default should you provide it. Pixel values can be obtained using - [`AutoImageProcessor`]. See [`GLPNImageProcessor.__call__`] for details. - - output_attentions (`bool`, *optional*): - Whether or not to return the attentions tensors of all attention layers. See `attentions` under returned - tensors for more detail. - output_hidden_states (`bool`, *optional*): - Whether or not to return the hidden states of all layers. See `hidden_states` under returned tensors for - more detail. - return_dict (`bool`, *optional*): - Whether or not to return a [`~utils.ModelOutput`] instead of a plain tuple. -""" - - -@add_start_docstrings( - "The bare GLPN encoder (Mix-Transformer) outputting raw hidden-states without any specific head on top.", - GLPN_START_DOCSTRING, -) -class GLPNModel(GLPNPreTrainedModel): - # Copied from transformers.models.segformer.modeling_segformer.SegformerModel.__init__ with Segformer->GLPN - def __init__(self, config): - super().__init__(config) - self.config = config - - # hierarchical Transformer encoder - self.encoder = GLPNEncoder(config) - - # Initialize weights and apply final processing - self.post_init() - - def _prune_heads(self, heads_to_prune): - """ - Prunes heads of the model. heads_to_prune: dict of {layer_num: list of heads to prune in this layer} See base - class PreTrainedModel - """ - for layer, heads in heads_to_prune.items(): - self.encoder.layer[layer].attention.prune_heads(heads) - - @add_start_docstrings_to_model_forward(GLPN_INPUTS_DOCSTRING.format("(batch_size, sequence_length)")) - @add_code_sample_docstrings( - checkpoint=_CHECKPOINT_FOR_DOC, - output_type=BaseModelOutput, - config_class=_CONFIG_FOR_DOC, - modality="vision", - expected_output=_EXPECTED_OUTPUT_SHAPE, - ) - # Copied from transformers.models.segformer.modeling_segformer.SegformerModel.forward - def forward( - self, - pixel_values: torch.FloatTensor, - output_attentions: Optional[bool] = None, - output_hidden_states: Optional[bool] = None, - return_dict: Optional[bool] = None, - ) -> Union[Tuple, BaseModelOutput]: - output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions - output_hidden_states = ( - output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states - ) - return_dict = return_dict if return_dict is not None else self.config.use_return_dict - - encoder_outputs = self.encoder( - pixel_values, - output_attentions=output_attentions, - output_hidden_states=output_hidden_states, - return_dict=return_dict, - ) - sequence_output = encoder_outputs[0] - - if not return_dict: - return (sequence_output,) + encoder_outputs[1:] - - return BaseModelOutput( - last_hidden_state=sequence_output, - hidden_states=encoder_outputs.hidden_states, - attentions=encoder_outputs.attentions, - ) - - -class GLPNSelectiveFeatureFusion(nn.Module): - """ - Selective Feature Fusion module, as explained in the [paper](https://arxiv.org/abs/2201.07436) (section 3.4). This - module adaptively selects and integrates local and global features by attaining an attention map for each feature. - """ - - def __init__(self, in_channel=64): - super().__init__() - - self.convolutional_layer1 = nn.Sequential( - nn.Conv2d(in_channels=int(in_channel * 2), out_channels=in_channel, kernel_size=3, stride=1, padding=1), - nn.BatchNorm2d(in_channel), - nn.ReLU(), - ) - - self.convolutional_layer2 = nn.Sequential( - nn.Conv2d(in_channels=in_channel, out_channels=int(in_channel / 2), kernel_size=3, stride=1, padding=1), - nn.BatchNorm2d(int(in_channel / 2)), - nn.ReLU(), - ) - - self.convolutional_layer3 = nn.Conv2d( - in_channels=int(in_channel / 2), out_channels=2, kernel_size=3, stride=1, padding=1 - ) - - self.sigmoid = nn.Sigmoid() - - def forward(self, local_features, global_features): - # concatenate features along the channel dimension - features = torch.cat((local_features, global_features), dim=1) - # pass through convolutional layers - features = self.convolutional_layer1(features) - features = self.convolutional_layer2(features) - features = self.convolutional_layer3(features) - # apply sigmoid to get two-channel attention map - attn = self.sigmoid(features) - # construct hybrid features by adding element-wise - hybrid_features = local_features * attn[:, 0, :, :].unsqueeze(1) + global_features * attn[ - :, 1, :, : - ].unsqueeze(1) - - return hybrid_features - - -class GLPNDecoderStage(nn.Module): - def __init__(self, in_channels, out_channels): - super().__init__() - should_skip = in_channels == out_channels - self.convolution = nn.Conv2d(in_channels, out_channels, kernel_size=1) if not should_skip else nn.Identity() - self.fusion = GLPNSelectiveFeatureFusion(out_channels) - self.upsample = nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False) - - def forward(self, hidden_state, residual=None): - hidden_state = self.convolution(hidden_state) - if residual is not None: - hidden_state = self.fusion(hidden_state, residual) - hidden_state = self.upsample(hidden_state) - - return hidden_state - - hidden_state = self.upsample(hidden_state) - return hidden_state - - -class GLPNDecoder(nn.Module): - def __init__(self, config): - super().__init__() - # we use features from end -> start - reserved_hidden_sizes = config.hidden_sizes[::-1] - out_channels = config.decoder_hidden_size - - self.stages = nn.ModuleList( - [GLPNDecoderStage(hidden_size, out_channels) for hidden_size in reserved_hidden_sizes] - ) - # don't fuse in first stage - self.stages[0].fusion = None - - self.final_upsample = nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False) - - def forward(self, hidden_states: List[torch.Tensor]) -> List[torch.Tensor]: - stage_hidden_states = [] - stage_hidden_state = None - for hidden_state, stage in zip(hidden_states[::-1], self.stages): - stage_hidden_state = stage(hidden_state, stage_hidden_state) - stage_hidden_states.append(stage_hidden_state) - - stage_hidden_states[-1] = self.final_upsample(stage_hidden_state) - - return stage_hidden_states - - -class SiLogLoss(nn.Module): - r""" - Implements the Scale-invariant log scale loss [Eigen et al., 2014](https://arxiv.org/abs/1406.2283). - - $$L=\frac{1}{n} \sum_{i} d_{i}^{2}-\frac{1}{2 n^{2}}\left(\sum_{i} d_{i}^{2}\right)$$ where $d_{i}=\log y_{i}-\log - y_{i}^{*}$. - - """ - - def __init__(self, lambd=0.5): - super().__init__() - self.lambd = lambd - - def forward(self, pred, target): - valid_mask = (target > 0).detach() - diff_log = torch.log(target[valid_mask]) - torch.log(pred[valid_mask]) - loss = torch.sqrt(torch.pow(diff_log, 2).mean() - self.lambd * torch.pow(diff_log.mean(), 2)) - - return loss - - -class GLPNDepthEstimationHead(nn.Module): - def __init__(self, config): - super().__init__() - - self.config = config - - channels = config.decoder_hidden_size - self.head = nn.Sequential( - nn.Conv2d(channels, channels, kernel_size=3, stride=1, padding=1), - nn.ReLU(inplace=False), - nn.Conv2d(channels, 1, kernel_size=3, stride=1, padding=1), - ) - - def forward(self, hidden_states: List[torch.Tensor]) -> torch.Tensor: - # use last features of the decoder - hidden_states = hidden_states[self.config.head_in_index] - - hidden_states = self.head(hidden_states) - - predicted_depth = torch.sigmoid(hidden_states) * self.config.max_depth - predicted_depth = predicted_depth.squeeze(dim=1) - - return predicted_depth - - -@add_start_docstrings( - """GLPN Model transformer with a lightweight depth estimation head on top e.g. for KITTI, NYUv2.""", - GLPN_START_DOCSTRING, -) -class GLPNForDepthEstimation(GLPNPreTrainedModel): - def __init__(self, config): - super().__init__(config) - - self.glpn = GLPNModel(config) - self.decoder = GLPNDecoder(config) - self.head = GLPNDepthEstimationHead(config) - - # Initialize weights and apply final processing - self.post_init() - - @add_start_docstrings_to_model_forward(GLPN_INPUTS_DOCSTRING.format("batch_size, sequence_length")) - @replace_return_docstrings(output_type=DepthEstimatorOutput, config_class=_CONFIG_FOR_DOC) - def forward( - self, - pixel_values: torch.FloatTensor, - labels: Optional[torch.FloatTensor] = None, - output_attentions: Optional[bool] = None, - output_hidden_states: Optional[bool] = None, - return_dict: Optional[bool] = None, - ) -> Union[Tuple[torch.Tensor], DepthEstimatorOutput]: - r""" - labels (`torch.FloatTensor` of shape `(batch_size, height, width)`, *optional*): - Ground truth depth estimation maps for computing the loss. - - Returns: - - Examples: - - ```python - >>> from transformers import AutoImageProcessor, GLPNForDepthEstimation - >>> import torch - >>> import numpy as np - >>> from PIL import Image - >>> import requests - - >>> url = "http://images.cocodataset.org/val2017/000000039769.jpg" - >>> image = Image.open(requests.get(url, stream=True).raw) - - >>> image_processor = AutoImageProcessor.from_pretrained("vinvino02/glpn-kitti") - >>> model = GLPNForDepthEstimation.from_pretrained("vinvino02/glpn-kitti") - - >>> # prepare image for the model - >>> inputs = image_processor(images=image, return_tensors="pt") - - >>> with torch.no_grad(): - ... outputs = model(**inputs) - ... predicted_depth = outputs.predicted_depth - - >>> # interpolate to original size - >>> prediction = torch.nn.functional.interpolate( - ... predicted_depth.unsqueeze(1), - ... size=image.size[::-1], - ... mode="bicubic", - ... align_corners=False, - ... ) - - >>> # visualize the prediction - >>> output = prediction.squeeze().cpu().numpy() - >>> formatted = (output * 255 / np.max(output)).astype("uint8") - >>> depth = Image.fromarray(formatted) - ```""" - return_dict = return_dict if return_dict is not None else self.config.use_return_dict - output_hidden_states = ( - output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states - ) - - outputs = self.glpn( - pixel_values, - output_attentions=output_attentions, - output_hidden_states=True, # we need the intermediate hidden states - return_dict=return_dict, - ) - - hidden_states = outputs.hidden_states if return_dict else outputs[1] - - out = self.decoder(hidden_states) - predicted_depth = self.head(out) - - loss = None - if labels is not None: - loss_fct = SiLogLoss() - loss = loss_fct(predicted_depth, labels) - - if not return_dict: - if output_hidden_states: - output = (predicted_depth,) + outputs[1:] - else: - output = (predicted_depth,) + outputs[2:] - return ((loss,) + output) if loss is not None else output - - return DepthEstimatorOutput( - loss=loss, - predicted_depth=predicted_depth, - hidden_states=outputs.hidden_states if output_hidden_states else None, - attentions=outputs.attentions, - ) diff --git a/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/models/longt5/convert_longt5x_checkpoint_to_flax.py b/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/models/longt5/convert_longt5x_checkpoint_to_flax.py deleted file mode 100644 index 5a1394c719d2d836ebc59693755671b936291be5..0000000000000000000000000000000000000000 --- a/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/models/longt5/convert_longt5x_checkpoint_to_flax.py +++ /dev/null @@ -1,215 +0,0 @@ -# coding=utf-8 -# Copyright 2022 The HuggingFace Inc. team. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - -"""Convert T5/LongT5X checkpoints from the original repository to JAX/FLAX model. This script is an extension of -'src/transformers/models/t5/convert_t5x_checkpoint_to_flax. -""" - -import argparse - -from t5x import checkpoints - -from transformers import AutoConfig, FlaxAutoModelForSeq2SeqLM - - -def convert_t5x_checkpoint_to_flax(t5x_checkpoint_path, config_name, flax_dump_folder_path): - config = AutoConfig.from_pretrained(config_name) - flax_model = FlaxAutoModelForSeq2SeqLM.from_config(config=config) - t5x_model = checkpoints.load_t5x_checkpoint(t5x_checkpoint_path) - - split_mlp_wi = "wi_0" in t5x_model["target"]["encoder"]["layers_0"]["mlp"] - - if config.model_type == "t5": - encoder_attn_name = "SelfAttention" - if config.model_type == "longt5" and config.encoder_attention_type == "local": - encoder_attn_name = "LocalSelfAttention" - elif config.model_type == "longt5" and config.encoder_attention_type == "transient-global": - encoder_attn_name = "TransientGlobalSelfAttention" - else: - raise ValueError( - "Given config is expected to have `model_type='t5'`, or `model_type='longt5` with `encoder_attention_type`" - " attribute with a value from ['local', 'transient-global]." - ) - - # Encoder - for layer_index in range(config.num_layers): - layer_name = f"layers_{str(layer_index)}" - - # Self-Attention - t5x_attention_key = t5x_model["target"]["encoder"][layer_name]["attention"]["key"]["kernel"] - t5x_attention_out = t5x_model["target"]["encoder"][layer_name]["attention"]["out"]["kernel"] - t5x_attention_query = t5x_model["target"]["encoder"][layer_name]["attention"]["query"]["kernel"] - t5x_attention_value = t5x_model["target"]["encoder"][layer_name]["attention"]["value"]["kernel"] - - # Global input layer norm - if config.model_type == "longt5" and config.encoder_attention_type == "transient-global": - t5x_global_layer_norm = t5x_model["target"]["encoder"][layer_name]["attention"]["T5LayerNorm_0"]["scale"] - - # Layer Normalization - t5x_attention_layer_norm = t5x_model["target"]["encoder"][layer_name]["pre_attention_layer_norm"]["scale"] - - if split_mlp_wi: - t5x_mlp_wi_0 = t5x_model["target"]["encoder"][layer_name]["mlp"]["wi_0"]["kernel"] - t5x_mlp_wi_1 = t5x_model["target"]["encoder"][layer_name]["mlp"]["wi_1"]["kernel"] - else: - t5x_mlp_wi = t5x_model["target"]["encoder"][layer_name]["mlp"]["wi"]["kernel"] - - t5x_mlp_wo = t5x_model["target"]["encoder"][layer_name]["mlp"]["wo"]["kernel"] - - # Layer Normalization - t5x_mlp_layer_norm = t5x_model["target"]["encoder"][layer_name]["pre_mlp_layer_norm"]["scale"] - - # Assigning - flax_model_encoder_layer_block = flax_model.params["encoder"]["block"][str(layer_index)]["layer"] - flax_model_encoder_layer_block["0"][encoder_attn_name]["k"]["kernel"] = t5x_attention_key - flax_model_encoder_layer_block["0"][encoder_attn_name]["o"]["kernel"] = t5x_attention_out - flax_model_encoder_layer_block["0"][encoder_attn_name]["q"]["kernel"] = t5x_attention_query - flax_model_encoder_layer_block["0"][encoder_attn_name]["v"]["kernel"] = t5x_attention_value - - flax_model_encoder_layer_block["0"]["layer_norm"]["weight"] = t5x_attention_layer_norm - - # Global input layer norm - if config.model_type == "longt5" and config.encoder_attention_type == "transient-global": - flax_model_encoder_layer_block["0"][encoder_attn_name]["global_input_layer_norm"][ - "weight" - ] = t5x_global_layer_norm - - if split_mlp_wi: - flax_model_encoder_layer_block["1"]["DenseReluDense"]["wi_0"]["kernel"] = t5x_mlp_wi_0 - flax_model_encoder_layer_block["1"]["DenseReluDense"]["wi_1"]["kernel"] = t5x_mlp_wi_1 - else: - flax_model_encoder_layer_block["1"]["DenseReluDense"]["wi"]["kernel"] = t5x_mlp_wi - - flax_model_encoder_layer_block["1"]["DenseReluDense"]["wo"]["kernel"] = t5x_mlp_wo - flax_model_encoder_layer_block["1"]["layer_norm"]["weight"] = t5x_mlp_layer_norm - - flax_model.params["encoder"]["block"][str(layer_index)]["layer"] = flax_model_encoder_layer_block - - # Only for layer 0: - t5x_encoder_rel_embedding = t5x_model["target"]["encoder"]["relpos_bias"]["rel_embedding"].T - flax_model.params["encoder"]["block"]["0"]["layer"]["0"][encoder_attn_name]["relative_attention_bias"][ - "embedding" - ] = t5x_encoder_rel_embedding - - # Side/global relative position_bias + layer norm - if config.model_type == "longt5" and config.encoder_attention_type == "transient-global": - t5x_encoder_global_rel_embedding = t5x_model["target"]["encoder"]["side_relpos_bias"]["rel_embedding"].T - flax_model.params["encoder"]["block"]["0"]["layer"]["0"][encoder_attn_name]["global_relative_attention_bias"][ - "embedding" - ] = t5x_encoder_global_rel_embedding - - # Assigning - t5x_encoder_norm = t5x_model["target"]["encoder"]["encoder_norm"]["scale"] - flax_model.params["encoder"]["final_layer_norm"]["weight"] = t5x_encoder_norm - - # Decoder - for layer_index in range(config.num_layers): - layer_name = f"layers_{str(layer_index)}" - - # Self-Attention - t5x_attention_key = t5x_model["target"]["decoder"][layer_name]["self_attention"]["key"]["kernel"] - t5x_attention_out = t5x_model["target"]["decoder"][layer_name]["self_attention"]["out"]["kernel"] - t5x_attention_query = t5x_model["target"]["decoder"][layer_name]["self_attention"]["query"]["kernel"] - t5x_attention_value = t5x_model["target"]["decoder"][layer_name]["self_attention"]["value"]["kernel"] - - # Layer Normalization - t5x_pre_attention_layer_norm = t5x_model["target"]["decoder"][layer_name]["pre_self_attention_layer_norm"][ - "scale" - ] - - # Encoder-Decoder-Attention - t5x_enc_dec_attention_module = t5x_model["target"]["decoder"][layer_name]["encoder_decoder_attention"] - t5x_enc_dec_attention_key = t5x_enc_dec_attention_module["key"]["kernel"] - t5x_enc_dec_attention_out = t5x_enc_dec_attention_module["out"]["kernel"] - t5x_enc_dec_attention_query = t5x_enc_dec_attention_module["query"]["kernel"] - t5x_enc_dec_attention_value = t5x_enc_dec_attention_module["value"]["kernel"] - - # Layer Normalization - t5x_cross_layer_norm = t5x_model["target"]["decoder"][layer_name]["pre_cross_attention_layer_norm"]["scale"] - - # MLP - if split_mlp_wi: - t5x_mlp_wi_0 = t5x_model["target"]["decoder"][layer_name]["mlp"]["wi_0"]["kernel"] - t5x_mlp_wi_1 = t5x_model["target"]["decoder"][layer_name]["mlp"]["wi_1"]["kernel"] - else: - t5x_mlp_wi = t5x_model["target"]["decoder"][layer_name]["mlp"]["wi"]["kernel"] - - t5x_mlp_wo = t5x_model["target"]["decoder"][layer_name]["mlp"]["wo"]["kernel"] - - # Layer Normalization - tx5_mlp_layer_norm = t5x_model["target"]["decoder"][layer_name]["pre_mlp_layer_norm"]["scale"] - - # Assigning - flax_model_decoder_layer_block = flax_model.params["decoder"]["block"][str(layer_index)]["layer"] - flax_model_decoder_layer_block["0"]["SelfAttention"]["k"]["kernel"] = t5x_attention_key - flax_model_decoder_layer_block["0"]["SelfAttention"]["o"]["kernel"] = t5x_attention_out - flax_model_decoder_layer_block["0"]["SelfAttention"]["q"]["kernel"] = t5x_attention_query - flax_model_decoder_layer_block["0"]["SelfAttention"]["v"]["kernel"] = t5x_attention_value - - flax_model_decoder_layer_block["0"]["layer_norm"]["weight"] = t5x_pre_attention_layer_norm - - flax_model_decoder_layer_block["1"]["EncDecAttention"]["k"]["kernel"] = t5x_enc_dec_attention_key - flax_model_decoder_layer_block["1"]["EncDecAttention"]["o"]["kernel"] = t5x_enc_dec_attention_out - flax_model_decoder_layer_block["1"]["EncDecAttention"]["q"]["kernel"] = t5x_enc_dec_attention_query - flax_model_decoder_layer_block["1"]["EncDecAttention"]["v"]["kernel"] = t5x_enc_dec_attention_value - - flax_model_decoder_layer_block["1"]["layer_norm"]["weight"] = t5x_cross_layer_norm - - if split_mlp_wi: - flax_model_decoder_layer_block["2"]["DenseReluDense"]["wi_0"]["kernel"] = t5x_mlp_wi_0 - flax_model_decoder_layer_block["2"]["DenseReluDense"]["wi_1"]["kernel"] = t5x_mlp_wi_1 - else: - flax_model_decoder_layer_block["2"]["DenseReluDense"]["wi"]["kernel"] = t5x_mlp_wi - - flax_model_decoder_layer_block["2"]["DenseReluDense"]["wo"]["kernel"] = t5x_mlp_wo - - flax_model_decoder_layer_block["2"]["layer_norm"]["weight"] = tx5_mlp_layer_norm - - flax_model.params["decoder"]["block"][str(layer_index)]["layer"] = flax_model_decoder_layer_block - - # Decoder Normalization - tx5_decoder_norm = t5x_model["target"]["decoder"]["decoder_norm"]["scale"] - flax_model.params["decoder"]["final_layer_norm"]["weight"] = tx5_decoder_norm - - # Only for layer 0: - t5x_decoder_rel_embedding = t5x_model["target"]["decoder"]["relpos_bias"]["rel_embedding"].T - flax_model.params["decoder"]["block"]["0"]["layer"]["0"]["SelfAttention"]["relative_attention_bias"][ - "embedding" - ] = t5x_decoder_rel_embedding - - # Token Embeddings - tx5_token_embeddings = t5x_model["target"]["token_embedder"]["embedding"] - flax_model.params["shared"]["embedding"] = tx5_token_embeddings - - # LM Head (only in v1.1 and LongT5 checkpoints) - if "logits_dense" in t5x_model["target"]["decoder"]: - flax_model.params["lm_head"]["kernel"] = t5x_model["target"]["decoder"]["logits_dense"]["kernel"] - - flax_model.save_pretrained(flax_dump_folder_path) - print("T5X Model was sucessfully converted!") - - -if __name__ == "__main__": - parser = argparse.ArgumentParser() - # Required parameters - parser.add_argument( - "--t5x_checkpoint_path", default=None, type=str, required=True, help="Path the T5X checkpoint." - ) - parser.add_argument("--config_name", default=None, type=str, required=True, help="Config name of LongT5/T5 model.") - parser.add_argument( - "--flax_dump_folder_path", default=None, type=str, required=True, help="Path to the output FLAX model." - ) - args = parser.parse_args() - convert_t5x_checkpoint_to_flax(args.t5x_checkpoint_path, args.config_name, args.flax_dump_folder_path) diff --git a/spaces/yl12053/so-vits-4.1-Kitasan-Black/cluster/kmeans.py b/spaces/yl12053/so-vits-4.1-Kitasan-Black/cluster/kmeans.py deleted file mode 100644 index 6111ea45e66a15d41b5b904be6f75affd3c4369f..0000000000000000000000000000000000000000 --- a/spaces/yl12053/so-vits-4.1-Kitasan-Black/cluster/kmeans.py +++ /dev/null @@ -1,201 +0,0 @@ -import math,pdb -import torch,pynvml -from torch.nn.functional import normalize -from time import time -import numpy as np -# device=torch.device("cuda:0") -def _kpp(data: torch.Tensor, k: int, sample_size: int = -1): - """ Picks k points in the data based on the kmeans++ method. - - Parameters - ---------- - data : torch.Tensor - Expect a rank 1 or 2 array. Rank 1 is assumed to describe 1-D - data, rank 2 multidimensional data, in which case one - row is one observation. - k : int - Number of samples to generate. - sample_size : int - sample data to avoid memory overflow during calculation - - Returns - ------- - init : ndarray - A 'k' by 'N' containing the initial centroids. - - References - ---------- - .. [1] D. Arthur and S. Vassilvitskii, "k-means++: the advantages of - careful seeding", Proceedings of the Eighteenth Annual ACM-SIAM Symposium - on Discrete Algorithms, 2007. - .. [2] scipy/cluster/vq.py: _kpp - """ - batch_size=data.shape[0] - if batch_size>sample_size: - data = data[torch.randint(0, batch_size,[sample_size], device=data.device)] - dims = data.shape[1] if len(data.shape) > 1 else 1 - init = torch.zeros((k, dims)).to(data.device) - r = torch.distributions.uniform.Uniform(0, 1) - for i in range(k): - if i == 0: - init[i, :] = data[torch.randint(data.shape[0], [1])] - else: - D2 = torch.cdist(init[:i, :][None, :], data[None, :], p=2)[0].amin(dim=0) - probs = D2 / torch.sum(D2) - cumprobs = torch.cumsum(probs, dim=0) - init[i, :] = data[torch.searchsorted(cumprobs, r.sample([1]).to(data.device))] - return init -class KMeansGPU: - ''' - Kmeans clustering algorithm implemented with PyTorch - - Parameters: - n_clusters: int, - Number of clusters - - max_iter: int, default: 100 - Maximum number of iterations - - tol: float, default: 0.0001 - Tolerance - - verbose: int, default: 0 - Verbosity - - mode: {'euclidean', 'cosine'}, default: 'euclidean' - Type of distance measure - - init_method: {'random', 'point', '++'} - Type of initialization - - minibatch: {None, int}, default: None - Batch size of MinibatchKmeans algorithm - if None perform full KMeans algorithm - - Attributes: - centroids: torch.Tensor, shape: [n_clusters, n_features] - cluster centroids - ''' - def __init__(self, n_clusters, max_iter=200, tol=1e-4, verbose=0, mode="euclidean",device=torch.device("cuda:0")): - self.n_clusters = n_clusters - self.max_iter = max_iter - self.tol = tol - self.verbose = verbose - self.mode = mode - self.device=device - pynvml.nvmlInit() - gpu_handle = pynvml.nvmlDeviceGetHandleByIndex(device.index) - info = pynvml.nvmlDeviceGetMemoryInfo(gpu_handle) - self.minibatch=int(33e6/self.n_clusters*info.free/ 1024 / 1024 / 1024) - print("free_mem/GB:",info.free/ 1024 / 1024 / 1024,"minibatch:",self.minibatch) - - @staticmethod - def cos_sim(a, b): - """ - Compute cosine similarity of 2 sets of vectors - - Parameters: - a: torch.Tensor, shape: [m, n_features] - - b: torch.Tensor, shape: [n, n_features] - """ - return normalize(a, dim=-1) @ normalize(b, dim=-1).transpose(-2, -1) - - @staticmethod - def euc_sim(a, b): - """ - Compute euclidean similarity of 2 sets of vectors - Parameters: - a: torch.Tensor, shape: [m, n_features] - b: torch.Tensor, shape: [n, n_features] - """ - return 2 * a @ b.transpose(-2, -1) -(a**2).sum(dim=1)[..., :, None] - (b**2).sum(dim=1)[..., None, :] - - def max_sim(self, a, b): - """ - Compute maximum similarity (or minimum distance) of each vector - in a with all of the vectors in b - Parameters: - a: torch.Tensor, shape: [m, n_features] - b: torch.Tensor, shape: [n, n_features] - """ - if self.mode == 'cosine': - sim_func = self.cos_sim - elif self.mode == 'euclidean': - sim_func = self.euc_sim - sim = sim_func(a, b) - max_sim_v, max_sim_i = sim.max(dim=-1) - return max_sim_v, max_sim_i - - def fit_predict(self, X): - """ - Combination of fit() and predict() methods. - This is faster than calling fit() and predict() seperately. - Parameters: - X: torch.Tensor, shape: [n_samples, n_features] - centroids: {torch.Tensor, None}, default: None - if given, centroids will be initialized with given tensor - if None, centroids will be randomly chosen from X - Return: - labels: torch.Tensor, shape: [n_samples] - - mini_=33kk/k*remain - mini=min(mini_,fea_shape) - offset=log2(k/1000)*1.5 - kpp_all=min(mini_*10/offset,fea_shape) - kpp_sample=min(mini_/12/offset,fea_shape) - """ - assert isinstance(X, torch.Tensor), "input must be torch.Tensor" - assert X.dtype in [torch.half, torch.float, torch.double], "input must be floating point" - assert X.ndim == 2, "input must be a 2d tensor with shape: [n_samples, n_features] " - # print("verbose:%s"%self.verbose) - - offset = np.power(1.5,np.log(self.n_clusters / 1000))/np.log(2) - with torch.no_grad(): - batch_size= X.shape[0] - # print(self.minibatch, int(self.minibatch * 10 / offset), batch_size) - start_time = time() - if (self.minibatch*10//offset< batch_size): - x = X[torch.randint(0, batch_size,[int(self.minibatch*10/offset)])].to(self.device) - else: - x = X.to(self.device) - # print(x.device) - self.centroids = _kpp(x, self.n_clusters, min(int(self.minibatch/12/offset),batch_size)) - del x - torch.cuda.empty_cache() - # self.centroids = self.centroids.to(self.device) - num_points_in_clusters = torch.ones(self.n_clusters, device=self.device, dtype=X.dtype)#全1 - closest = None#[3098036]#int64 - if(self.minibatch>=batch_size//2 and self.minibatch<batch_size): - X = X[torch.randint(0, batch_size,[self.minibatch])].to(self.device) - elif(self.minibatch>=batch_size): - X=X.to(self.device) - for i in range(self.max_iter): - iter_time = time() - if self.minibatch<batch_size//2:#可用minibatch数太小,每次都得从内存倒腾到显存 - x = X[torch.randint(0, batch_size, [self.minibatch])].to(self.device) - else:#否则直接全部缓存 - x = X - - closest = self.max_sim(a=x, b=self.centroids)[1].to(torch.int16)#[3098036]#int64#0~999 - matched_clusters, counts = closest.unique(return_counts=True)#int64#1k - expanded_closest = closest[None].expand(self.n_clusters, -1)#[1000, 3098036]#int16#0~999 - mask = (expanded_closest==torch.arange(self.n_clusters, device=self.device)[:, None]).to(X.dtype)#==后者是int64*1000 - c_grad = mask @ x / mask.sum(-1)[..., :, None] - c_grad[c_grad!=c_grad] = 0 # remove NaNs - error = (c_grad - self.centroids).pow(2).sum() - if self.minibatch is not None: - lr = 1/num_points_in_clusters[:,None] * 0.9 + 0.1 - else: - lr = 1 - matched_clusters=matched_clusters.long() - num_points_in_clusters[matched_clusters] += counts#IndexError: tensors used as indices must be long, byte or bool tensors - self.centroids = self.centroids * (1-lr) + c_grad * lr - if self.verbose >= 2: - print('iter:', i, 'error:', error.item(), 'time spent:', round(time()-iter_time, 4)) - if error <= self.tol: - break - - if self.verbose >= 1: - print(f'used {i+1} iterations ({round(time()-start_time, 4)}s) to cluster {batch_size} items into {self.n_clusters} clusters') - return closest diff --git a/spaces/ynhe/AskAnything/models/grit_src/third_party/CenterNet2/projects/CenterNet2/centernet/modeling/dense_heads/centernet_head.py b/spaces/ynhe/AskAnything/models/grit_src/third_party/CenterNet2/projects/CenterNet2/centernet/modeling/dense_heads/centernet_head.py deleted file mode 100644 index 57e0960a57c904c097b6a717391474a4a635dd7d..0000000000000000000000000000000000000000 --- a/spaces/ynhe/AskAnything/models/grit_src/third_party/CenterNet2/projects/CenterNet2/centernet/modeling/dense_heads/centernet_head.py +++ /dev/null @@ -1,162 +0,0 @@ -import math -from typing import List -import torch -from torch import nn -from torch.nn import functional as F - -from detectron2.layers import ShapeSpec, get_norm -from detectron2.config import configurable -from ..layers.deform_conv import DFConv2d - -__all__ = ["CenterNetHead"] - -class Scale(nn.Module): - def __init__(self, init_value=1.0): - super(Scale, self).__init__() - self.scale = nn.Parameter(torch.FloatTensor([init_value])) - - def forward(self, input): - return input * self.scale - -class CenterNetHead(nn.Module): - @configurable - def __init__(self, - # input_shape: List[ShapeSpec], - in_channels, - num_levels, - *, - num_classes=80, - with_agn_hm=False, - only_proposal=False, - norm='GN', - num_cls_convs=4, - num_box_convs=4, - num_share_convs=0, - use_deformable=False, - prior_prob=0.01): - super().__init__() - self.num_classes = num_classes - self.with_agn_hm = with_agn_hm - self.only_proposal = only_proposal - self.out_kernel = 3 - - head_configs = { - "cls": (num_cls_convs if not self.only_proposal else 0, \ - use_deformable), - "bbox": (num_box_convs, use_deformable), - "share": (num_share_convs, use_deformable)} - - # in_channels = [s.channels for s in input_shape] - # assert len(set(in_channels)) == 1, \ - # "Each level must have the same channel!" - # in_channels = in_channels[0] - channels = { - 'cls': in_channels, - 'bbox': in_channels, - 'share': in_channels, - } - for head in head_configs: - tower = [] - num_convs, use_deformable = head_configs[head] - channel = channels[head] - for i in range(num_convs): - if use_deformable and i == num_convs - 1: - conv_func = DFConv2d - else: - conv_func = nn.Conv2d - tower.append(conv_func( - in_channels if i == 0 else channel, - channel, - kernel_size=3, stride=1, - padding=1, bias=True - )) - if norm == 'GN' and channel % 32 != 0: - tower.append(nn.GroupNorm(25, channel)) - elif norm != '': - tower.append(get_norm(norm, channel)) - tower.append(nn.ReLU()) - self.add_module('{}_tower'.format(head), - nn.Sequential(*tower)) - - self.bbox_pred = nn.Conv2d( - in_channels, 4, kernel_size=self.out_kernel, - stride=1, padding=self.out_kernel // 2 - ) - - self.scales = nn.ModuleList( - [Scale(init_value=1.0) for _ in range(num_levels)]) - - for modules in [ - self.cls_tower, self.bbox_tower, - self.share_tower, - self.bbox_pred, - ]: - for l in modules.modules(): - if isinstance(l, nn.Conv2d): - torch.nn.init.normal_(l.weight, std=0.01) - torch.nn.init.constant_(l.bias, 0) - - torch.nn.init.constant_(self.bbox_pred.bias, 8.) - prior_prob = prior_prob - bias_value = -math.log((1 - prior_prob) / prior_prob) - - if self.with_agn_hm: - self.agn_hm = nn.Conv2d( - in_channels, 1, kernel_size=self.out_kernel, - stride=1, padding=self.out_kernel // 2 - ) - torch.nn.init.constant_(self.agn_hm.bias, bias_value) - torch.nn.init.normal_(self.agn_hm.weight, std=0.01) - - if not self.only_proposal: - cls_kernel_size = self.out_kernel - self.cls_logits = nn.Conv2d( - in_channels, self.num_classes, - kernel_size=cls_kernel_size, - stride=1, - padding=cls_kernel_size // 2, - ) - - torch.nn.init.constant_(self.cls_logits.bias, bias_value) - torch.nn.init.normal_(self.cls_logits.weight, std=0.01) - - @classmethod - def from_config(cls, cfg, input_shape): - ret = { - # 'input_shape': input_shape, - 'in_channels': [s.channels for s in input_shape][0], - 'num_levels': len(input_shape), - 'num_classes': cfg.MODEL.CENTERNET.NUM_CLASSES, - 'with_agn_hm': cfg.MODEL.CENTERNET.WITH_AGN_HM, - 'only_proposal': cfg.MODEL.CENTERNET.ONLY_PROPOSAL, - 'norm': cfg.MODEL.CENTERNET.NORM, - 'num_cls_convs': cfg.MODEL.CENTERNET.NUM_CLS_CONVS, - 'num_box_convs': cfg.MODEL.CENTERNET.NUM_BOX_CONVS, - 'num_share_convs': cfg.MODEL.CENTERNET.NUM_SHARE_CONVS, - 'use_deformable': cfg.MODEL.CENTERNET.USE_DEFORMABLE, - 'prior_prob': cfg.MODEL.CENTERNET.PRIOR_PROB, - } - return ret - - def forward(self, x): - clss = [] - bbox_reg = [] - agn_hms = [] - for l, feature in enumerate(x): - feature = self.share_tower(feature) - cls_tower = self.cls_tower(feature) - bbox_tower = self.bbox_tower(feature) - if not self.only_proposal: - clss.append(self.cls_logits(cls_tower)) - else: - clss.append(None) - - if self.with_agn_hm: - agn_hms.append(self.agn_hm(bbox_tower)) - else: - agn_hms.append(None) - reg = self.bbox_pred(bbox_tower) - reg = self.scales[l](reg) - bbox_reg.append(F.relu(reg)) - - return clss, bbox_reg, agn_hms \ No newline at end of file diff --git a/spaces/younker/chatgpt-turbo/client/node_modules/autoprefixer/lib/hacks/grid-row-column.js b/spaces/younker/chatgpt-turbo/client/node_modules/autoprefixer/lib/hacks/grid-row-column.js deleted file mode 100644 index 2199f7834829ca8cc5212ee846e528bbbbbcd688..0000000000000000000000000000000000000000 --- a/spaces/younker/chatgpt-turbo/client/node_modules/autoprefixer/lib/hacks/grid-row-column.js +++ /dev/null @@ -1,33 +0,0 @@ -let Declaration = require('../declaration') -let utils = require('./grid-utils') - -class GridRowColumn extends Declaration { - /** - * Translate grid-row / grid-column to separate -ms- prefixed properties - */ - insert(decl, prefix, prefixes) { - if (prefix !== '-ms-') return super.insert(decl, prefix, prefixes) - - let values = utils.parse(decl) - let [start, span] = utils.translate(values, 0, 1) - - let hasStartValueSpan = values[0] && values[0].includes('span') - - if (hasStartValueSpan) { - span = values[0].join('').replace(/\D/g, '') - } - - ;[ - [decl.prop, start], - [`${decl.prop}-span`, span] - ].forEach(([prop, value]) => { - utils.insertDecl(decl, prop, value) - }) - - return undefined - } -} - -GridRowColumn.names = ['grid-row', 'grid-column'] - -module.exports = GridRowColumn diff --git a/spaces/younker/chatgpt-turbo/client/src/components/File.tsx b/spaces/younker/chatgpt-turbo/client/src/components/File.tsx deleted file mode 100644 index a402e04c1f4f790a92db2aeab0fb6f2472d07e94..0000000000000000000000000000000000000000 --- a/spaces/younker/chatgpt-turbo/client/src/components/File.tsx +++ /dev/null @@ -1,77 +0,0 @@ -import { useState, useCallback, memo } from "react"; -import { Transition } from "@headlessui/react"; -import { - MagnifyingGlassMinusIcon, - MagnifyingGlassPlusIcon, - ArrowTopRightOnSquareIcon, -} from "@heroicons/react/24/outline"; - -import { FileLite } from "../types/file"; - -type FileProps = { - file: FileLite; - showScore?: boolean; -}; - -function File(props: FileProps) { - const [expanded, setExpanded] = useState(false); - - const handleExpand = useCallback(() => { - setExpanded((prev) => !prev); - }, []); - - return ( - <div - className="border-gray-100 border rounded-md shadow p-2 cursor-pointer" - onClick={handleExpand} - > - <div className="flex flex-row justify-between"> - <div className="flex hover:text-gray-600">{props.file.name}</div> - - <div className="flex flex-row space-x-2"> - {props.showScore && props.file.score && ( - <div className="flex text-blue-600 mr-4"> - {props.file.score.toFixed(2)} - </div> - )} - - <div className="ml-auto w-max flex items-center justify-center"> - {expanded ? ( - <MagnifyingGlassMinusIcon className="text-gray-500 h-5" /> - ) : ( - <MagnifyingGlassPlusIcon className="text-gray-500 h-5" /> - )} - </div> - - <a - href={props.file.url} - target="_blank" - rel="noopener noreferrer" - onClick={(e) => e.stopPropagation()} // prevent the click event from bubbling up to the list item - > - <ArrowTopRightOnSquareIcon className="text-gray-500 h-5" /> - </a> - </div> - </div> - <Transition - show={expanded} - enter="transition duration-75 ease-out" - enterFrom="transform translate-y-4 opacity-0" - enterTo="transform translate-y-0 opacity-100" - leave="transition duration-100 ease-out" - leaveFrom="transform translate-y-0 opacity-100" - leaveTo="transform translate-y-4 opacity-0" - > - <div className="items-center mt-2 justify-center"> - <iframe - src={props.file.url} - className="h-full w-full" - title={props.file.name} - ></iframe> - </div> - </Transition> - </div> - ); -} - -export default memo(File); diff --git a/spaces/younker/chatgpt-turbo/client/src/components/FileUploadArea.tsx b/spaces/younker/chatgpt-turbo/client/src/components/FileUploadArea.tsx deleted file mode 100644 index 8ee502e9d6a41aabfd9135915650158841fd20ac..0000000000000000000000000000000000000000 --- a/spaces/younker/chatgpt-turbo/client/src/components/FileUploadArea.tsx +++ /dev/null @@ -1,195 +0,0 @@ -import React, { - Dispatch, - SetStateAction, - useCallback, - useState, - memo, - useRef, -} from "react"; -import axios from "axios"; -import { ArrowUpTrayIcon } from "@heroicons/react/24/outline"; -import { compact } from "lodash"; - -import LoadingText from "./LoadingText"; -import { FileLite } from "../types/file"; -import FileViewerList from "./FileViewerList"; -import { SERVER_ADDRESS } from "../types/constants"; - -type FileUploadAreaProps = { - handleSetFiles: Dispatch<SetStateAction<FileLite[]>>; - maxNumFiles: number; - maxFileSizeMB: number; -}; - -function FileUploadArea(props: FileUploadAreaProps) { - const handleSetFiles = props.handleSetFiles; - - const [files, setFiles] = useState<FileLite[]>([]); - const [loading, setLoading] = useState(false); - const [error, setError] = useState(""); - const [dragOver, setDragOver] = useState(false); - const dropzoneRef = useRef<HTMLLabelElement>(null); - - const handleFileChange = useCallback( - async (selectedFiles: FileList | null) => { - if (selectedFiles && selectedFiles.length > 0) { - setError(""); - - if (files.length + selectedFiles.length > props.maxNumFiles) { - setError(`You can only upload up to ${props.maxNumFiles} files.`); - if (dropzoneRef.current) { - (dropzoneRef.current as any).value = ""; - } - return; - } - - setLoading(true); - - const uploadedFiles = await Promise.all( - Array.from(selectedFiles).map(async (file) => { - // Check the file type - if ( - file.type.match( - /(text\/plain|application\/(pdf|msword|vnd\.openxmlformats-officedocument\.wordprocessingml\.document))/ - ) && // AND file isnt too big - file.size < props.maxFileSizeMB * 1024 * 1024 - ) { - // Check if the file name already exists in the files state - if (files.find((f) => f.name === file.name)) { - return null; // skip this file - } - - const formData = new FormData(); - formData.append("file", file); - - try { - const processFileResponse = await axios.post( - `${SERVER_ADDRESS}/process_file`, - formData, - { - headers: { - "Content-Type": "multipart/form-data", - }, - } - ); - - if ( - processFileResponse.status === 200 && - processFileResponse.data.success - ) { - const fileObject: FileLite = { - name: file.name, - url: URL.createObjectURL(file), - expanded: false, - }; - console.log(fileObject); - - return fileObject; - } else { - console.log("Error processing file"); - return null; - } - } catch (err: any) { - console.log(`error processing file: ${err}`); - return null; - } - } else { - alert( - `Invalid file type or size. Only TXT, PD or DOCX are allowed, up to ${props.maxFileSizeMB}MB.` - ); - return null; // Skip this file - } - }) - ); - - // Filter out any null values from the uploadedFiles array - const validFiles = compact(uploadedFiles); - - // Set the files state with the valid files and the existing files - setFiles((prevFiles) => [...prevFiles, ...validFiles]); - handleSetFiles((prevFiles) => [...prevFiles, ...validFiles]); - - setLoading(false); - } - }, - [files, handleSetFiles, props.maxFileSizeMB, props.maxNumFiles] - ); - - const handleDragEnter = useCallback((event: React.DragEvent) => { - event.preventDefault(); - setDragOver(true); - }, []); - - const handleDragOver = useCallback((event: React.DragEvent) => { - event.preventDefault(); - }, []); - - const handleDragLeave = useCallback((event: React.DragEvent) => { - event.preventDefault(); - setDragOver(false); - }, []); - - const handleDrop = useCallback( - (event: React.DragEvent) => { - event.preventDefault(); - setDragOver(false); - const droppedFiles = event.dataTransfer.files; - handleFileChange(droppedFiles); - }, - [handleFileChange] - ); - - return ( - <div className="flex items-center justify-center w-full flex-col"> - <label - htmlFor="dropzone-file" - className={`flex flex-col shadow items-center justify-center w-full h-36 border-2 border-gray-300 border-dashed rounded-lg cursor-pointer bg-gray-50 hover:bg-gray-100 relative ${ - dragOver ? "border-blue-500 bg-blue-50" : "" - }`} - ref={dropzoneRef} - onDragEnter={handleDragEnter} - onDragOver={handleDragOver} - onDragLeave={handleDragLeave} - onDrop={handleDrop} - > - <div className="flex flex-col items-center justify-center pt-5 pb-6"> - {loading ? ( - <LoadingText text="Uploading..." /> - ) : ( - <div className="text-gray-500 flex flex-col items-center text-center"> - <ArrowUpTrayIcon className="w-7 h-7 mb-4" /> - <p className="mb-2 text-sm"> - <span className="font-semibold">Click to upload</span> or drag - and drop - </p> - <p className="text-xs"> - PDF, DOCX or TXT (max {props.maxFileSizeMB}MB per file) - </p> - <p className="text-xs mt-1"> - You can upload up to {props.maxNumFiles - files.length} more{" "} - {props.maxNumFiles - files.length === 1 ? "file" : "files"} - </p> - <input - id="dropzone-file" - type="file" - className="hidden" - multiple - onChange={(event) => handleFileChange(event.target.files)} - /> - </div> - )} - </div> - </label> - - {error && ( - <div className="flex items-center justify-center w-full mt-4"> - <p className="text-sm text-red-500">{error}</p> - </div> - )} - - <FileViewerList files={files} title="Uploaded Files" /> - </div> - ); -} - -export default memo(FileUploadArea); diff --git a/spaces/ysharma/ControlNet_Image_Comparison/gradio_seg2image.py b/spaces/ysharma/ControlNet_Image_Comparison/gradio_seg2image.py deleted file mode 100644 index 7b38bac7ea6f013889622a185005cbd37aabe3ee..0000000000000000000000000000000000000000 --- a/spaces/ysharma/ControlNet_Image_Comparison/gradio_seg2image.py +++ /dev/null @@ -1,180 +0,0 @@ -# This file is adapted from https://github.com/lllyasviel/ControlNet/blob/f4748e3630d8141d7765e2bd9b1e348f47847707/gradio_seg2image.py -# The original license file is LICENSE.ControlNet in this repo. -import gradio as gr -from PIL import Image - -#first elem of gallery is ^^ - {'name': '/tmp/tmpw60bbw6k.png', 'data': 'file=/tmp/tmpw60bbw6k.png', 'is_file': True} -#first elem of gallery is ^^ - {'name': '/tmp/tmpba0d5dt5.png', 'data': 'file=/tmp/tmpba0d5dt5.png', 'is_file': True} - -import numpy as np -import base64 - -def encode(img_array): - print(f"type of input_image ^^ - {type(img_array)}") - # Convert NumPy array to image - img = Image.fromarray(img_array) - - # Save image to file - img_path = "temp_image.jpeg" - img.save(img_path) - - # Encode image file using Base64 - with open(img_path, "rb") as image_file: - encoded_string = base64.b64encode(image_file.read()).decode("utf-8") - - # Print and return the encoded string - #print(encoded_string) - return encoded_string - -def create_imgcomp(input_image, result_image): #(input_image, filename): - encoded_string_in = encode(input_image) - encoded_string_out = encode(result_image) - - htmltag = '<img src= "data:image/jpeg;base64,' + encoded_string_in + '" alt="Original Image" height="500"/></div> <img src= "data:image/jpeg;base64,' + encoded_string_out + '" alt="Control Net Image" height="500"/>' - #sample - htmltag = '<img src= "data:image/jpeg;base64,' + encoded_string + '" alt="Original Image"/></div> <img src= "https://ysharma-controlnet-image-comparison.hf.space/file=' + filename + '" alt="Control Net Image"/>' - print(f"htmltag is ^^ - {htmltag}") - desc = """ - <!DOCTYPE html> - <html lang="en"> - <head> - <style> - body { - background: rgb(17, 17, 17); - } - - .image-slider { - margin-left: 3rem; - position: relative; - display: inline-block; - line-height: 0; - } - - .image-slider img { - user-select: none; - max-width: 400px; - } - - .image-slider > div { - position: absolute; - width: 25px; - max-width: 100%; - overflow: hidden; - resize: horizontal; - } - - .image-slider > div:before { - content: ''; - display: block; - width: 13px; - height: 13px; - overflow: hidden; - position: absolute; - resize: horizontal; - right: 3px; - bottom: 3px; - background-clip: content-box; - background: linear-gradient(-45deg, black 50%, transparent 0); - -webkit-filter: drop-shadow(0 0 2px black); - filter: drop-shadow(0 0 2px black); - } - </style> - </head> - <body> - <div style="margin: 3rem; - font-family: Roboto, sans-serif"> - </div> <div> <div class="image-slider"> <div> """ + htmltag + "</div> </div> </body> </html> " - return desc - - - -def dummyfun(result_gallery): - print(f"type of gallery is ^^ - {type(result_gallery)}") - print(f"length of gallery is ^^ - {len(result_gallery)}") - print(f"first elem of gallery is ^^ - {result_gallery[0]}") - print(f"first elem of gallery is ^^ - {result_gallery[1]}") - # Load the image - #image = result_gallery[1] #Image.open("example.jpg") - - # Get the filename - #filename = image.filename - - # Print the filename - #print(f"filename is ^^ - {filename}") - return result_gallery[1]['name'] #+ ',' + result_gallery[1]['name'] #filename - -def create_demo(process, max_images=12): - with gr.Blocks(css = "#input_image {width: 512px;} #out_image {width: 512px;}") as demo: - with gr.Row(): - gr.Markdown('## Control Stable Diffusion with Segmentation Maps') - with gr.Row(): - with gr.Column(): - input_image = gr.Image(source='upload', type='numpy', elem_id='input_image') - prompt = gr.Textbox(label='Prompt') - run_button = gr.Button(label='Run') - with gr.Accordion('Advanced options', open=False, visible=False): - num_samples = gr.Slider(label='Images', - minimum=1, - maximum=max_images, - value=1, - step=1) - image_resolution = gr.Slider(label='Image Resolution', - minimum=256, - maximum=768, - value=512, - step=256) - detect_resolution = gr.Slider( - label='Segmentation Resolution', - minimum=128, - maximum=1024, - value=512, - step=1) - ddim_steps = gr.Slider(label='Steps', - minimum=1, - maximum=100, - value=20, - step=1) - scale = gr.Slider(label='Guidance Scale', - minimum=0.1, - maximum=30.0, - value=9.0, - step=0.1) - seed = gr.Slider(label='Seed', - minimum=-1, - maximum=2147483647, - step=1, - randomize=True, - queue=False) - eta = gr.Number(label='eta (DDIM)', value=0.0) - a_prompt = gr.Textbox( - label='Added Prompt', - value='best quality, extremely detailed') - n_prompt = gr.Textbox( - label='Negative Prompt', - value= - 'longbody, lowres, bad anatomy, bad hands, missing fingers, extra digit, fewer digits, cropped, worst quality, low quality' - ) - with gr.Column(): - #<h4 style="color: green"> Observe the Ingenuity of ControlNet by comparing Input and Output images</h4> - #result_gallery = gr.Gallery(label='Output', visible= False, - # show_label=False, - # elem_id='gallery').style( - # grid=2, height='auto') - result_image = gr.Image(visible=False).style(height='auto', type='numpy') - #b1 = gr.Button('Get filenames') - #filename = gr.Textbox(label="image file names", visible=False) - #b2 = gr.Button('Show Image-Comparison') - with gr.Box(): - msg = gr.HTML() - imagecomp = gr.HTML() - ips = [ - input_image, prompt, a_prompt, n_prompt, num_samples, - image_resolution, detect_resolution, ddim_steps, scale, seed, eta - ] - run_button.click(fn=process, - inputs=ips, - outputs=[result_image, msg], #[result_gallery, imagecomp], - api_name='seg') - result_image.change(create_imgcomp, [input_image, result_image], [imagecomp]) - #b2.click(create_imgcomp, [input_image, filename], [imagecomp]) - - return demo diff --git a/spaces/ysharma/LLaVA_v1/scripts/merge_lora_weights.py b/spaces/ysharma/LLaVA_v1/scripts/merge_lora_weights.py deleted file mode 100644 index 3b39cc7beb12301379af7daebbb5553fa92093ea..0000000000000000000000000000000000000000 --- a/spaces/ysharma/LLaVA_v1/scripts/merge_lora_weights.py +++ /dev/null @@ -1,22 +0,0 @@ -import argparse -from llava.model.builder import load_pretrained_model -from llava.mm_utils import get_model_name_from_path - - -def merge_lora(args): - model_name = get_model_name_from_path(args.model_path) - tokenizer, model, image_processor, context_len = load_pretrained_model(args.model_path, args.model_base, model_name, device_map='cpu') - - model.save_pretrained(args.save_model_path) - tokenizer.save_pretrained(args.save_model_path) - - -if __name__ == "__main__": - parser = argparse.ArgumentParser() - parser.add_argument("--model-path", type=str, required=True) - parser.add_argument("--model-base", type=str, required=True) - parser.add_argument("--save-model-path", type=str, required=True) - - args = parser.parse_args() - - merge_lora(args) diff --git a/spaces/yunfei0710/gpt-academic/crazy_functions/test_project/cpp/cppipc/shm.cpp b/spaces/yunfei0710/gpt-academic/crazy_functions/test_project/cpp/cppipc/shm.cpp deleted file mode 100644 index 593ce3129dc1574dbc8fc8b088cf595df215de93..0000000000000000000000000000000000000000 --- a/spaces/yunfei0710/gpt-academic/crazy_functions/test_project/cpp/cppipc/shm.cpp +++ /dev/null @@ -1,103 +0,0 @@ - -#include <string> -#include <utility> - -#include "libipc/shm.h" - -#include "libipc/utility/pimpl.h" -#include "libipc/memory/resource.h" - -namespace ipc { -namespace shm { - -class handle::handle_ : public pimpl<handle_> { -public: - shm::id_t id_ = nullptr; - void* m_ = nullptr; - - ipc::string n_; - std::size_t s_ = 0; -}; - -handle::handle() - : p_(p_->make()) { -} - -handle::handle(char const * name, std::size_t size, unsigned mode) - : handle() { - acquire(name, size, mode); -} - -handle::handle(handle&& rhs) - : handle() { - swap(rhs); -} - -handle::~handle() { - release(); - p_->clear(); -} - -void handle::swap(handle& rhs) { - std::swap(p_, rhs.p_); -} - -handle& handle::operator=(handle rhs) { - swap(rhs); - return *this; -} - -bool handle::valid() const noexcept { - return impl(p_)->m_ != nullptr; -} - -std::size_t handle::size() const noexcept { - return impl(p_)->s_; -} - -char const * handle::name() const noexcept { - return impl(p_)->n_.c_str(); -} - -std::int32_t handle::ref() const noexcept { - return shm::get_ref(impl(p_)->id_); -} - -void handle::sub_ref() noexcept { - shm::sub_ref(impl(p_)->id_); -} - -bool handle::acquire(char const * name, std::size_t size, unsigned mode) { - release(); - impl(p_)->id_ = shm::acquire((impl(p_)->n_ = name).c_str(), size, mode); - impl(p_)->m_ = shm::get_mem(impl(p_)->id_, &(impl(p_)->s_)); - return valid(); -} - -std::int32_t handle::release() { - if (impl(p_)->id_ == nullptr) return -1; - return shm::release(detach()); -} - -void* handle::get() const { - return impl(p_)->m_; -} - -void handle::attach(id_t id) { - if (id == nullptr) return; - release(); - impl(p_)->id_ = id; - impl(p_)->m_ = shm::get_mem(impl(p_)->id_, &(impl(p_)->s_)); -} - -id_t handle::detach() { - auto old = impl(p_)->id_; - impl(p_)->id_ = nullptr; - impl(p_)->m_ = nullptr; - impl(p_)->s_ = 0; - impl(p_)->n_.clear(); - return old; -} - -} // namespace shm -} // namespace ipc diff --git a/spaces/zhang-wei-jian/docker/node_modules/nodemon/lib/utils/merge.js b/spaces/zhang-wei-jian/docker/node_modules/nodemon/lib/utils/merge.js deleted file mode 100644 index 1f3440bd6aa2b3815ae547818a41bfd8e86b3d75..0000000000000000000000000000000000000000 --- a/spaces/zhang-wei-jian/docker/node_modules/nodemon/lib/utils/merge.js +++ /dev/null @@ -1,47 +0,0 @@ -var clone = require('./clone'); - -module.exports = merge; - -function typesMatch(a, b) { - return (typeof a === typeof b) && (Array.isArray(a) === Array.isArray(b)); -} - -/** - * A deep merge of the source based on the target. - * @param {Object} source [description] - * @param {Object} target [description] - * @return {Object} [description] - */ -function merge(source, target, result) { - if (result === undefined) { - result = clone(source); - } - - // merge missing values from the target to the source - Object.getOwnPropertyNames(target).forEach(function (key) { - if (source[key] === undefined) { - result[key] = target[key]; - } - }); - - Object.getOwnPropertyNames(source).forEach(function (key) { - var value = source[key]; - - if (target[key] && typesMatch(value, target[key])) { - // merge empty values - if (value === '') { - result[key] = target[key]; - } - - if (Array.isArray(value)) { - if (value.length === 0 && target[key].length) { - result[key] = target[key].slice(0); - } - } else if (typeof value === 'object') { - result[key] = merge(value, target[key]); - } - } - }); - - return result; -} \ No newline at end of file diff --git a/spaces/zhanghaohui/szu-gpt-academic/request_llm/bridge_azure_test.py b/spaces/zhanghaohui/szu-gpt-academic/request_llm/bridge_azure_test.py deleted file mode 100644 index edc68f747d650e20a9e42d65dbcac1923d5cb192..0000000000000000000000000000000000000000 --- a/spaces/zhanghaohui/szu-gpt-academic/request_llm/bridge_azure_test.py +++ /dev/null @@ -1,241 +0,0 @@ -""" - 该文件中主要包含三个函数 - - 不具备多线程能力的函数: - 1. predict: 正常对话时使用,具备完备的交互功能,不可多线程 - - 具备多线程调用能力的函数 - 2. predict_no_ui:高级实验性功能模块调用,不会实时显示在界面上,参数简单,可以多线程并行,方便实现复杂的功能逻辑 - 3. predict_no_ui_long_connection:在实验过程中发现调用predict_no_ui处理长文档时,和openai的连接容易断掉,这个函数用stream的方式解决这个问题,同样支持多线程 -""" - -import logging -import traceback -import importlib -import openai -import time - - -# 读取config.py文件中关于AZURE OPENAI API的信息 -from toolbox import get_conf, update_ui, clip_history, trimmed_format_exc -TIMEOUT_SECONDS, MAX_RETRY, AZURE_ENGINE, AZURE_ENDPOINT, AZURE_API_VERSION, AZURE_API_KEY = \ - get_conf('TIMEOUT_SECONDS', 'MAX_RETRY',"AZURE_ENGINE","AZURE_ENDPOINT", "AZURE_API_VERSION", "AZURE_API_KEY") - - -def get_full_error(chunk, stream_response): - """ - 获取完整的从Openai返回的报错 - """ - while True: - try: - chunk += next(stream_response) - except: - break - return chunk - -def predict(inputs, llm_kwargs, plugin_kwargs, chatbot, history=[], system_prompt='', stream = True, additional_fn=None): - """ - 发送至azure openai api,流式获取输出。 - 用于基础的对话功能。 - inputs 是本次问询的输入 - top_p, temperature是chatGPT的内部调优参数 - history 是之前的对话列表(注意无论是inputs还是history,内容太长了都会触发token数量溢出的错误) - chatbot 为WebUI中显示的对话列表,修改它,然后yeild出去,可以直接修改对话界面内容 - additional_fn代表点击的哪个按钮,按钮见functional.py - """ - print(llm_kwargs["llm_model"]) - - if additional_fn is not None: - import core_functional - importlib.reload(core_functional) # 热更新prompt - core_functional = core_functional.get_core_functions() - if "PreProcess" in core_functional[additional_fn]: inputs = core_functional[additional_fn]["PreProcess"](inputs) # 获取预处理函数(如果有的话) - inputs = core_functional[additional_fn]["Prefix"] + inputs + core_functional[additional_fn]["Suffix"] - - raw_input = inputs - logging.info(f'[raw_input] {raw_input}') - chatbot.append((inputs, "")) - yield from update_ui(chatbot=chatbot, history=history, msg="等待响应") # 刷新界面 - - - payload = generate_azure_payload(inputs, llm_kwargs, history, system_prompt, stream) - - history.append(inputs); history.append("") - - retry = 0 - while True: - try: - - openai.api_type = "azure" - openai.api_version = AZURE_API_VERSION - openai.api_base = AZURE_ENDPOINT - openai.api_key = AZURE_API_KEY - response = openai.ChatCompletion.create(timeout=TIMEOUT_SECONDS, **payload);break - - except: - retry += 1 - chatbot[-1] = ((chatbot[-1][0], "获取response失败,重试中。。。")) - retry_msg = f",正在重试 ({retry}/{MAX_RETRY}) ……" if MAX_RETRY > 0 else "" - yield from update_ui(chatbot=chatbot, history=history, msg="请求超时"+retry_msg) # 刷新界面 - if retry > MAX_RETRY: raise TimeoutError - - gpt_replying_buffer = "" - is_head_of_the_stream = True - if stream: - - stream_response = response - - while True: - try: - chunk = next(stream_response) - - except StopIteration: - from toolbox import regular_txt_to_markdown; tb_str = '```\n' + trimmed_format_exc() + '```' - chatbot[-1] = (chatbot[-1][0], f"[Local Message] 远程返回错误: \n\n{tb_str} \n\n{regular_txt_to_markdown(chunk)}") - yield from update_ui(chatbot=chatbot, history=history, msg="远程返回错误:" + chunk) # 刷新界面 - return - - if is_head_of_the_stream and (r'"object":"error"' not in chunk): - # 数据流的第一帧不携带content - is_head_of_the_stream = False; continue - - if chunk: - #print(chunk) - try: - if "delta" in chunk["choices"][0]: - if chunk["choices"][0]["finish_reason"] == "stop": - logging.info(f'[response] {gpt_replying_buffer}') - break - status_text = f"finish_reason: {chunk['choices'][0]['finish_reason']}" - gpt_replying_buffer = gpt_replying_buffer + chunk["choices"][0]["delta"]["content"] - - history[-1] = gpt_replying_buffer - chatbot[-1] = (history[-2], history[-1]) - yield from update_ui(chatbot=chatbot, history=history, msg=status_text) # 刷新界面 - - except Exception as e: - traceback.print_exc() - yield from update_ui(chatbot=chatbot, history=history, msg="Json解析不合常规") # 刷新界面 - chunk = get_full_error(chunk, stream_response) - - error_msg = chunk - yield from update_ui(chatbot=chatbot, history=history, msg="Json异常" + error_msg) # 刷新界面 - return - - -def predict_no_ui_long_connection(inputs, llm_kwargs, history=[], sys_prompt="", observe_window=None, console_slience=False): - """ - 发送至AZURE OPENAI API,等待回复,一次性完成,不显示中间过程。但内部用stream的方法避免中途网线被掐。 - inputs: - 是本次问询的输入 - sys_prompt: - 系统静默prompt - llm_kwargs: - chatGPT的内部调优参数 - history: - 是之前的对话列表 - observe_window = None: - 用于负责跨越线程传递已经输出的部分,大部分时候仅仅为了fancy的视觉效果,留空即可。observe_window[0]:观测窗。observe_window[1]:看门狗 - """ - watch_dog_patience = 5 # 看门狗的耐心, 设置5秒即可 - payload = generate_azure_payload(inputs, llm_kwargs, history, system_prompt=sys_prompt, stream=True) - retry = 0 - while True: - - try: - openai.api_type = "azure" - openai.api_version = AZURE_API_VERSION - openai.api_base = AZURE_ENDPOINT - openai.api_key = AZURE_API_KEY - response = openai.ChatCompletion.create(timeout=TIMEOUT_SECONDS, **payload);break - - except: - retry += 1 - traceback.print_exc() - if retry > MAX_RETRY: raise TimeoutError - if MAX_RETRY!=0: print(f'请求超时,正在重试 ({retry}/{MAX_RETRY}) ……') - - - stream_response = response - result = '' - while True: - try: chunk = next(stream_response) - except StopIteration: - break - except: - chunk = next(stream_response) # 失败了,重试一次?再失败就没办法了。 - - if len(chunk)==0: continue - if not chunk.startswith('data:'): - error_msg = get_full_error(chunk, stream_response) - if "reduce the length" in error_msg: - raise ConnectionAbortedError("AZURE OPENAI API拒绝了请求:" + error_msg) - else: - raise RuntimeError("AZURE OPENAI API拒绝了请求:" + error_msg) - if ('data: [DONE]' in chunk): break - - delta = chunk["delta"] - if len(delta) == 0: break - if "role" in delta: continue - if "content" in delta: - result += delta["content"] - if not console_slience: print(delta["content"], end='') - if observe_window is not None: - # 观测窗,把已经获取的数据显示出去 - if len(observe_window) >= 1: observe_window[0] += delta["content"] - # 看门狗,如果超过期限没有喂狗,则终止 - if len(observe_window) >= 2: - if (time.time()-observe_window[1]) > watch_dog_patience: - raise RuntimeError("用户取消了程序。") - else: raise RuntimeError("意外Json结构:"+delta) - if chunk['finish_reason'] == 'length': - raise ConnectionAbortedError("正常结束,但显示Token不足,导致输出不完整,请削减单次输入的文本量。") - return result - - -def generate_azure_payload(inputs, llm_kwargs, history, system_prompt, stream): - """ - 整合所有信息,选择LLM模型,生成 azure openai api请求,为发送请求做准备 - """ - - conversation_cnt = len(history) // 2 - - messages = [{"role": "system", "content": system_prompt}] - if conversation_cnt: - for index in range(0, 2*conversation_cnt, 2): - what_i_have_asked = {} - what_i_have_asked["role"] = "user" - what_i_have_asked["content"] = history[index] - what_gpt_answer = {} - what_gpt_answer["role"] = "assistant" - what_gpt_answer["content"] = history[index+1] - if what_i_have_asked["content"] != "": - if what_gpt_answer["content"] == "": continue - messages.append(what_i_have_asked) - messages.append(what_gpt_answer) - else: - messages[-1]['content'] = what_gpt_answer['content'] - - what_i_ask_now = {} - what_i_ask_now["role"] = "user" - what_i_ask_now["content"] = inputs - messages.append(what_i_ask_now) - - payload = { - "model": llm_kwargs['llm_model'], - "messages": messages, - "temperature": llm_kwargs['temperature'], # 1.0, - "top_p": llm_kwargs['top_p'], # 1.0, - "n": 1, - "stream": stream, - "presence_penalty": 0, - "frequency_penalty": 0, - "engine": AZURE_ENGINE - } - try: - print(f" {llm_kwargs['llm_model']} : {conversation_cnt} : {inputs[:100]} ..........") - except: - print('输入中可能存在乱码。') - return payload - - diff --git a/spaces/zhangyd/bingo/src/lib/isomorphic/node.ts b/spaces/zhangyd/bingo/src/lib/isomorphic/node.ts deleted file mode 100644 index da213ad6a86181979f098309c374da02835db5a0..0000000000000000000000000000000000000000 --- a/spaces/zhangyd/bingo/src/lib/isomorphic/node.ts +++ /dev/null @@ -1,26 +0,0 @@ -import Debug from 'debug' - -const { fetch, setGlobalDispatcher, ProxyAgent } = require('undici') -const { HttpsProxyAgent } = require('https-proxy-agent') -const ws = require('ws') - -const debug = Debug('bingo') - -const httpProxy = process.env.http_proxy || process.env.HTTP_PROXY || process.env.https_proxy || process.env.HTTPS_PROXY; -let WebSocket = ws.WebSocket - -if (httpProxy) { - setGlobalDispatcher(new ProxyAgent(httpProxy)) - const agent = new HttpsProxyAgent(httpProxy) - // @ts-ignore - WebSocket = class extends ws.WebSocket { - constructor(address: string | URL, options: typeof ws.WebSocket) { - super(address, { - ...options, - agent, - }) - } - } -} - -export default { fetch, WebSocket, debug } diff --git a/spaces/zht1/test2/utils/tools_gradio.py b/spaces/zht1/test2/utils/tools_gradio.py deleted file mode 100644 index 19b50fc7d4f1da25cbb1681ab9b993a1411a452e..0000000000000000000000000000000000000000 --- a/spaces/zht1/test2/utils/tools_gradio.py +++ /dev/null @@ -1,192 +0,0 @@ -import cv2 -import matplotlib.pyplot as plt -import numpy as np -import torch -from PIL import Image - - -def fast_process( - annotations, - image, - device, - scale, - better_quality=False, - mask_random_color=True, - bbox=None, - use_retina=True, - withContours=True, -): - if isinstance(annotations[0], dict): - annotations = [annotation["segmentation"] for annotation in annotations] - - original_h = image.height - original_w = image.width - if better_quality: - if isinstance(annotations[0], torch.Tensor): - annotations = np.array(annotations.cpu()) - for i, mask in enumerate(annotations): - mask = cv2.morphologyEx( - mask.astype(np.uint8), cv2.MORPH_CLOSE, np.ones((3, 3), np.uint8) - ) - annotations[i] = cv2.morphologyEx( - mask.astype(np.uint8), cv2.MORPH_OPEN, np.ones((8, 8), np.uint8) - ) - if device == "cpu": - annotations = np.array(annotations) - inner_mask = fast_show_mask( - annotations, - plt.gca(), - random_color=mask_random_color, - bbox=bbox, - retinamask=use_retina, - target_height=original_h, - target_width=original_w, - ) - else: - if isinstance(annotations[0], np.ndarray): - annotations = np.array(annotations) - annotations = torch.from_numpy(annotations) - inner_mask = fast_show_mask_gpu( - annotations, - plt.gca(), - random_color=mask_random_color, - bbox=bbox, - retinamask=use_retina, - target_height=original_h, - target_width=original_w, - ) - if isinstance(annotations, torch.Tensor): - annotations = annotations.cpu().numpy() - - if withContours: - contour_all = [] - temp = np.zeros((original_h, original_w, 1)) - for i, mask in enumerate(annotations): - if type(mask) == dict: - mask = mask["segmentation"] - annotation = mask.astype(np.uint8) - if use_retina == False: - annotation = cv2.resize( - annotation, - (original_w, original_h), - interpolation=cv2.INTER_NEAREST, - ) - contours, _ = cv2.findContours( - annotation, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE - ) - for contour in contours: - contour_all.append(contour) - cv2.drawContours(temp, contour_all, -1, (255, 255, 255), 2 // scale) - color = np.array([0 / 255, 0 / 255, 255 / 255, 0.9]) - contour_mask = temp / 255 * color.reshape(1, 1, -1) - - image = image.convert("RGBA") - overlay_inner = Image.fromarray((inner_mask * 255).astype(np.uint8), "RGBA") - image.paste(overlay_inner, (0, 0), overlay_inner) - - if withContours: - overlay_contour = Image.fromarray((contour_mask * 255).astype(np.uint8), "RGBA") - image.paste(overlay_contour, (0, 0), overlay_contour) - - return image - - -# CPU post process -def fast_show_mask( - annotation, - ax, - random_color=False, - bbox=None, - retinamask=True, - target_height=960, - target_width=960, -): - mask_sum = annotation.shape[0] - height = annotation.shape[1] - weight = annotation.shape[2] - # 将annotation 按照面积 排序 - areas = np.sum(annotation, axis=(1, 2)) - sorted_indices = np.argsort(areas)[::1] - annotation = annotation[sorted_indices] - - index = (annotation != 0).argmax(axis=0) - if random_color == True: - color = np.random.random((mask_sum, 1, 1, 3)) - else: - color = np.ones((mask_sum, 1, 1, 3)) * np.array( - [30 / 255, 144 / 255, 255 / 255] - ) - transparency = np.ones((mask_sum, 1, 1, 1)) * 0.6 - visual = np.concatenate([color, transparency], axis=-1) - mask_image = np.expand_dims(annotation, -1) * visual - - mask = np.zeros((height, weight, 4)) - - h_indices, w_indices = np.meshgrid( - np.arange(height), np.arange(weight), indexing="ij" - ) - indices = (index[h_indices, w_indices], h_indices, w_indices, slice(None)) - - mask[h_indices, w_indices, :] = mask_image[indices] - if bbox is not None: - x1, y1, x2, y2 = bbox - ax.add_patch( - plt.Rectangle( - (x1, y1), x2 - x1, y2 - y1, fill=False, edgecolor="b", linewidth=1 - ) - ) - - if retinamask == False: - mask = cv2.resize( - mask, (target_width, target_height), interpolation=cv2.INTER_NEAREST - ) - - return mask - - -def fast_show_mask_gpu( - annotation, - ax, - random_color=False, - bbox=None, - retinamask=True, - target_height=960, - target_width=960, -): - device = annotation.device - mask_sum = annotation.shape[0] - height = annotation.shape[1] - weight = annotation.shape[2] - areas = torch.sum(annotation, dim=(1, 2)) - sorted_indices = torch.argsort(areas, descending=False) - annotation = annotation[sorted_indices] - # 找每个位置第一个非零值下标 - index = (annotation != 0).to(torch.long).argmax(dim=0) - if random_color == True: - color = torch.rand((mask_sum, 1, 1, 3)).to(device) - else: - color = torch.ones((mask_sum, 1, 1, 3)).to(device) * torch.tensor( - [30 / 255, 144 / 255, 255 / 255] - ).to(device) - transparency = torch.ones((mask_sum, 1, 1, 1)).to(device) * 0.6 - visual = torch.cat([color, transparency], dim=-1) - mask_image = torch.unsqueeze(annotation, -1) * visual - # 按index取数,index指每个位置选哪个batch的数,把mask_image转成一个batch的形式 - mask = torch.zeros((height, weight, 4)).to(device) - h_indices, w_indices = torch.meshgrid(torch.arange(height), torch.arange(weight)) - indices = (index[h_indices, w_indices], h_indices, w_indices, slice(None)) - # 使用向量化索引更新show的值 - mask[h_indices, w_indices, :] = mask_image[indices] - mask_cpu = mask.cpu().numpy() - if bbox is not None: - x1, y1, x2, y2 = bbox - ax.add_patch( - plt.Rectangle( - (x1, y1), x2 - x1, y2 - y1, fill=False, edgecolor="b", linewidth=1 - ) - ) - if retinamask == False: - mask_cpu = cv2.resize( - mask_cpu, (target_width, target_height), interpolation=cv2.INTER_NEAREST - ) - return mask_cpu